\documentclass{article}
\usepackage[section]{placeins}
\usepackage{graphicx, wrapfig, amsmath, amssymb, physics, hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=cyan,
}
\author{Yaghoub Shahmari}
\title{Report - Problem Set No 6}
\date{\today}
\graphicspath{ {../Figs/} }
\begin{document}
\maketitle

\section*{Problem 1}

\textbf{Basic description:} In this problem, we collect samples of the sum of a fixed number of particles. According to the central limit theorem, the distribution of these sums must be Gaussian. As the results show, the histogram confirms the theorem.

\textbf{The results:}

\begin{figure}[!htb]
    \centering
    \includegraphics[scale = 0.25]{/Q1/Q1-Hist}
    \caption{Histogram of the distribution of the samples.}
    \label{fig:1.1}
\end{figure}

\pagebreak

\section*{Problem 2}

\textbf{Basic description:} In this problem, we convert a uniform distribution into a Gaussian one, following the textbook approach. We have samples $x_1$ and $x_2$ drawn from a uniform distribution. To find $y_1$ and $y_2$, which follow the Gaussian distribution, we proceed as follows:
\begin{gather*}
    f(x)\,dx = g(y)\,dy \Rightarrow \int f(x)\,dx = \int g(y)\,dy\\
    \Rightarrow x = \int g(y)\,dy = G(y)\\
    \Rightarrow y = G^{-1}(x)\\
    g(y) = \frac{e^{-\frac{y^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}
\end{gather*}
Since $G(y)$ cannot be inverted in closed form, we use the Box-Muller method to get what we want:
\begin{gather*}
    g(y_1,y_2)\,dy_1\,dy_2 = g(y_1)\,g(y_2)\,dy_1\,dy_2 = \frac{e^{-\frac{{y_1}^2 + {y_2}^2}{2\sigma^2}}}{2\pi\sigma^2}\,dy_1\,dy_2
\end{gather*}
We transform into polar coordinates:
\begin{gather*}
    y_1 = \rho \sin\theta, \quad y_2 = \rho \cos\theta\\
    \Rightarrow g(y_1,y_2)\,dy_1\,dy_2 = g(\rho,\theta)\,d\rho\,d\theta = \frac{e^{-\frac{\rho^2}{2\sigma^2}}}{2\pi\sigma^2}\,\rho\,d\rho\,d\theta\\
    \Rightarrow g_1(\rho) = \frac{\rho}{\sigma^2}\,e^{-\frac{\rho^2}{2\sigma^2}}, \quad g_2(\theta) = \frac{1}{2\pi}\\
    \Rightarrow G_1(\rho) = 1 - e^{-\frac{\rho^2}{2\sigma^2}}, \quad G_2(\theta) = \frac{\theta}{2\pi}\\
    \Rightarrow \rho = \sigma \sqrt{2\ln\left(\frac{1}{1-x_1}\right)}, \quad \theta = 2\pi x_2
\end{gather*}
We use these formulas to transform the uniformly distributed samples into Gaussian-distributed ones.

\pagebreak

I also wrote a function that generates a uniform distribution. So, initially I call my uniform-distribution generator and then transform its output into the Gaussian distribution.

\textbf{The results:}

\begin{figure}[!htb]
    \centering
    \includegraphics[scale = 0.5]{/Q2/Q2-Hist}
    \caption{Histogram of the distribution of the samples.}
    \label{fig:2.1}
\end{figure}

\centering
\textbf{All the data I gathered is available at \href{https://github.com/shahmari/ComputationalPhysics-Fall2021/tree/main/ProblemSet6/Data}{this link}}

Thanks for reading :)

\end{document}
{ "alphanum_fraction": 0.6333333333, "avg_line_length": 36.2790697674, "ext": "tex", "hexsha": "c048c08a8f1b435690b2c22036be0a25620c3951", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-10-21T11:07:08.000Z", "max_forks_repo_forks_event_min_datetime": "2021-10-21T11:07:08.000Z", "max_forks_repo_head_hexsha": "f1681e32258c55697d11009e1702eb86d5f119d4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "shahmari/ComputationalPhysics-Fall2021", "max_forks_repo_path": "ProblemSet6/TEXfiles/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f1681e32258c55697d11009e1702eb86d5f119d4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "shahmari/ComputationalPhysics-Fall2021", "max_issues_repo_path": "ProblemSet6/TEXfiles/report.tex", "max_line_length": 184, "max_stars_count": null, "max_stars_repo_head_hexsha": "f1681e32258c55697d11009e1702eb86d5f119d4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "shahmari/ComputationalPhysics-Fall2021", "max_stars_repo_path": "ProblemSet6/TEXfiles/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1024, "size": 3120 }
%!TEX root = ../thesis.tex
%*******************************************************************************
%****************************** Third Chapter **********************************
%*******************************************************************************
\chapter{Design}

\graphicspath{{Chapter5/Figs/Raster/}{Chapter5/Figs/Tex/}{Chapter5/Figs/}}

Aligned with the aims of the project, the design of the blockchain network and applications will aim to:
\begin{itemize}
    \setlength\itemsep{0em}
    \item increase transparency in assessments,
    \item facilitate the negotiation of personalised curricula, and
    \item increase privacy and security of learner records.
\end{itemize}

As concluded in Chapter 2, this project will be using the Hyperledger Fabric blockchain, so the designs have to follow the specifications of this blockchain distribution. It was also clear that a blockchain-specific, high-fidelity prototyping tool was needed to create system designs such as data models and transactions.

\section{Design Tool}

Hyperledger Composer is an open source development toolset and framework that aims to accelerate time to value for blockchain projects. It offers business-centric abstractions, allowing business owners and developers to rapidly develop use cases and model a blockchain network. The design tools offered take the form of:
\begin{itemize}
    \setlength\itemsep{0em}
    \item An object-oriented modelling language (.cto file) to define data models in the blockchain network
    \item JavaScript functions (.js file) to define logic for Smart Contracts triggered by transactions
    \item An access control language (.acl file) to define access rules for records on the blockchain\\ \citep{official2018composer}
\end{itemize}

See Figure \ref{fig:composer2fabric} for a visual explanation of how Hyperledger Composer helps designers and developers create these high-level definitions.

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.95\textwidth]{composer2fabric}
    \caption[Hyperledger Composer]
    {Components in the Hyperledger Composer framework and how it deploys to Hyperledger Fabric \citep{cuicapuza2017composer}}
    \label{fig:composer2fabric}
\end{figure}
% https://medium.com/@RichardCuica/hyperledgers-fabric-composer-simplifying-business-networks-on-blockchain-94313b979671

A significant advantage of using Hyperledger Composer is its ability to package these prototyped definitions and deploy them to Hyperledger Fabric, our target blockchain platform. This will speed up the implementation of the proposed demonstrator applications in the next stage.

Throughout the design process, the Hyperledger Composer notations were converted or drawn into UML sequence diagrams, class diagrams and flowcharts. PlantUML, an open source language-to-diagram drawing tool, was used. The discussion below will regularly refer back to the functional requirements (FR) and non-functional requirements (NR) defined in Chapter 4.

\section{Transaction Sequences}

A transaction is the only activity that a peer can perform to alter the state of a blockchain. Designing the sequences of transactions for the demonstration user journeys provides a good overview of the work ahead. It also shines a light on what data objects and properties must be defined. Two overarching use cases are considered: assessment and curriculum personalisation.
\subsection{Assessment Use Case}

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.675\textwidth]{assessmentloop}
    \caption[Assessment Use Case]
    {A UML sequence diagram denoting the assets, transactions and events between Learner and Teacher participants on the blockchain for the assessment use case}
    \label{fig:assessmentloop}
\end{figure}

These four transactions are required to complete the assessment use case (see Figure \ref{fig:assessmentloop}):
\begin{itemize}
    \setlength\itemsep{0em}
    \item CreateModule: a transaction ordered by a teacher to store metadata about a course module, its units and assessments onto the blockchain;
    \item AddSubmission: a transaction ordered by a learner to store a submission (assessment attempt) on the blockchain; this can also return the result of the assessment if the result is produced by an automatic (machine) marking service;
    \item SubmitResult: a transaction ordered by a teacher to store details of an assessor assessment for a submission on the blockchain;
    \item GenCertificate: a transaction ordered by a teacher to create a new certificate on the blockchain.
\end{itemize}

\subsection{Curriculum Personalisation Use Case}

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.7\textwidth]{personalisationloop}
    \caption[Curriculum Personalisation Use Case]
    {Sequence diagram denoting the assets, transactions and events between Learner and Teacher participants on the blockchain for the curriculum personalisation use case}
    \label{fig:personalisationloop}
\end{figure}

Similarly, we looked at the transactions required to build a minimum viable product that facilitates curriculum personalisation. A curriculum here is simply a list of course modules attached to a learner and a teacher (a personal tutor for the learner).
\begin{itemize}
    \setlength\itemsep{0em}
    \item ProposeCurriculum: a transaction ordered by a learner or a teacher to propose a new curriculum, or to propose edits to an existing curriculum on the blockchain.
    \item ApproveCurriculum: a transaction ordered by a teacher to enrol a learner in the course modules of the learner's curriculum. A teacher cannot change the list of modules with this transaction; that is reserved for ProposeCurriculum.
\end{itemize}

See Figure \ref{fig:personalisationloop} for where these two transactions occur in a sequence diagram.

% automatic formative assessments Annette's student
% schema --> reviewer

\section{Data Models}

Building a blockchain network requires a network-wide schema of what records are allowed to be created, updated and read. The Hyperledger Composer framework calls these resource definitions, and recommends defining objects that inherit from three basic types in its object-oriented modelling language: Participants, Assets and Concepts \citep{official2018composer}.

\subsection{Participants}

A participant is an actor in a blockchain network. A participant can create and exchange assets by submitting transactions \citep{official2018composer}. The network design for this project will allow the creation of three main types of participants:
\begin{itemize}
    \setlength\itemsep{0em}
    \item \textit{Teacher}, which can be lecturers, teaching assistants, tutors, etc.
    \item \textit{Learner}, which can be campus students, distance learners, etc.
    \item \textit{Reader}, members of the public who are interested in querying or verifying records, such as employers and further education providers.
\end{itemize}
All three types of participants inherit from an abstract (cannot be created) class \textit{User}.
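The following is a minimal sketch of how these participant types could be written in the Composer modelling language. The namespace is illustrative and only the fields discussed in this chapter are shown; Figure \ref{fig:participants} remains the authoritative definition.

\begin{verbatim}
namespace org.elearning.records   // illustrative namespace

abstract participant User identified by uId {
  o String uId
  o String nid    // one-way hash of a national identification number
}

participant Teacher extends User {
}

participant Learner extends User {
  o Integer[] acLevels               // tiered Reader access settings
  o String[]  priviledgedReaderIds   // Readers granted privileged access
  o Double    balance
  --> CourseModule[] mods optional
}

participant Reader extends User {
}
\end{verbatim}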
The \textit{nid} field that all \textit{Users} must have would contain a one-way hash of their national identification number, which can be a driver's license number, social security number, etc. A hash is a form of cryptographic representation of data that is non-invertible. This allows the system to ensure that all writers in the system are known and unique, while protecting their privacy from the general public. See Figure \ref{fig:participants} for the detailed entity properties of all participant types in a class diagram.

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.7\textwidth]{participants}
    \caption[Participants Class Diagram]
    {A UML class diagram describing the participants defined on the blockchain}
    \label{fig:participants}
\end{figure}

A notable design consideration was the mechanism for allowing tiered access control of learner information. The \textit{acLevels} and \textit{priviledgedReaderIds} fields in \textit{Learner} store access control settings for two tiers of \textit{Readers}, normal and privileged. This will be covered in more detail in the upcoming section 5.5.

\subsection{Assets}

Assets are ``tangible or intangible goods, services, or property, and are stored in registries'' \citep{official2018composer}.

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.6\textwidth]{assets}
    \caption[Assets Class Diagram]
    {A UML class diagram describing the assets defined on the blockchain}
    \label{fig:assets}
\end{figure}

See Figure \ref{fig:assets} for the detailed entity properties and relationships of these assets in a class diagram. The asset definitions were modelled carefully according to literature and user requirements. These special design considerations were included:
\begin{itemize}
    \setlength\itemsep{0em}
    \item \underline{Types of assessments}: Online assessments could be grouped into four categories: self assessment, computer assessment, tutor assessment, and peer assessment \citep[p.68]{paulsen2004online}. In this initial design, two of these types are considered: \textit{AutoAssessment} (computer assessment), and \textit{AssessorAssessment} (tutor assessment). They are subclasses of the abstract \textit{Assessment}.
    \item \underline{Transparency of assessment goals}: the design contains mandatory fields to improve transparency in assessments, as identified by \citet{suhre2013determinants}'s research reviewed in Chapter 2.1.1. This includes the \textit{knowledgeRequired} field in \textit{Assessment} and \textit{learningObjectives} in \textit{CourseModule}, which encourage teachers to provide clarity over assessment goals. \\\\
    \item \underline{Transparency of assessment procedures}: \textit{Assessment} assets will be on the blockchain and visible to all participants, improving transparency of procedures. They include the \textit{gradeDescriptors} and \textit{criteria} fields, which encourage teachers to provide clarity over assessment criteria and give Smart Contracts the capability to enforce these criteria.
    \item \underline{Administrative Work for Personalisation}: Primary data suggested that administrative and regulatory work required for curriculum personalisation could be daunting for institutions (Chapter 4.1.2.PS7). The \textit{programmeOutcomes} field in the \textit{Curriculum} asset was one of the suggested regulatory steps for the UK market (Chapter 4.1.2.PS9). Making these administrative data available on the blockchain could allow future Smart Contracts to automate approvals and other administrative steps.
This has the potential to reduce bureaucracy by eliminating the middleman.
    \item \underline{Content flexibility}: It was anticipated that many markets, institutions, teams and teachers will have their own requirements, formats and templates for what their e-Learning content should look like. To cater to this need for content and layout flexibility, the blockchain accepts markdown syntax as input in fields such as \textit{detailsMd} in \textit{Assessment}, \textit{materialMd} in \textit{ModuleUnit} and \textit{fineprintMd} in \textit{Certificate}. Markdown is a popular text-to-HTML conversion tool for web writers \citep{gruber2004markdown}, with many internet forums and services releasing their own standards.
\end{itemize}

\subsection{Concepts}

Concepts are abstract classes that are not assets or participants. They are used to define custom properties contained by an asset or participant \citep{official2018composer}. For this project, five of these Concepts were designed. They are related to modelling the assessment results, grade descriptors and criteria. See Figure \ref{fig:concepts} for their detailed entity properties.

\begin{figure}[!ht]
    \centering
    \includegraphics[width=1.0\textwidth]{concepts}
    \caption[Concepts Class Diagram]
    {A UML class diagram describing the concepts (other abstract classes that are contained as a field) defined on the blockchain}
    \label{fig:concepts}
\end{figure}

A matrix/grid of assessment criteria against grade definitions, a common format used in schools \citep[p.102]{bryan2006innovative}, was used to record grade breakdowns on the blockchain. The \textit{MarkingCriterion} and \textit{GradeDescriptors} fields were designed to fulfil this feature. \textit{AssessmentResult} stores the overall result of an assessment or course module; as an example of how flexible this can be, three example result formats were designed: \textit{PassFailResult}, \textit{GradeResult}, and \textit{ScoreResult}.

\section{Smart Contracts: Transaction Logic and Events}

We have previously discussed the nature of Smart Contracts in Chapter 2.3.2. They are autonomous, self-sufficient and decentralised programmes that run on a blockchain. In Hyperledger Composer, Smart Contracts are called chaincode. They run synchronously when a transaction is ordered, and a final output is required to either accept or reject the transaction. Smart Contract chaincode can emit network-wide Events, which can be consumed by participants or applications.

The discussion below describes the six transactions previously planned in Figures \ref{fig:assessmentloop} and \ref{fig:personalisationloop}. The Smart Contract chaincode triggered by each transaction is explained in pseudocode.

\subsection{The CreateModule Transaction}

The transaction for creating a course module was kept simple, as this project is more interested in the assessment experience than the content creation experience. A \textit{CourseModule} asset has to be uploaded with all of its \textit{ModuleUnits} and \textit{Assessments}.
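In Hyperledger Composer, such a transaction processor takes the form of an annotated JavaScript function. The sketch below only illustrates the general shape: the namespace is an assumption, the integrity check is reduced to a single guard, and throwing an error is what rejects the transaction. The pseudocode that follows summarises the intended logic.

\begin{verbatim}
/**
 * Store a course module, its units and its assessments on the blockchain.
 * @param {org.elearning.records.CreateModule} tx  (illustrative namespace)
 * @transaction
 */
async function createModule(tx) {
    // Placeholder integrity check; a failed check rejects the transaction.
    if (!tx.mod || !tx.units || tx.units.length === 0) {
        throw new Error('Transaction Rejected: a CourseModule must be ' +
                        'uploaded with its ModuleUnits');
    }
    const ns = 'org.elearning.records';
    const mods = await getAssetRegistry(ns + '.CourseModule');
    const units = await getAssetRegistry(ns + '.ModuleUnit');
    const assessments = await getAssetRegistry(ns + '.Assessment');
    // Adding the assets records them on the blockchain; the relationships
    // to the Teacher(s) are carried inside tx.mod itself.
    await mods.add(tx.mod);
    await units.addAll(tx.units);
    await assessments.addAll(tx.assessments);
}
\end{verbatim}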
% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{cmtx}
%     \caption[CreateModule Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the CreateModule Transaction (Tx)} \label{fig:cmtx}
% \end{figure}

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{CreateModule}{CourseModule mod, ModuleUnit[] units, Assessment[] assessments}
        \If{integrity checks of uploaded mod passed}
        \State Add mod, unit and assessment objects to blockchain
        \State Add relationships between these objects and the Teacher(s)
        \State \textbf{return} Transaction Accepted
        \Else
        \State \textbf{return} Transaction Rejected
        \EndIf
        \EndFunction
    \end{algorithmic}
\end{algorithm}

Transactions that allow content editing and course archiving (making courses unavailable to new students) will have to be designed for a fully-fledged real-world system, but they were deferred to future work for this project.

\subsection{The AddSubmission Transaction}

This transaction would be ordered by a student to add a submission for the assessment of a module unit. The submission files will be compressed, converted into base64 data strings, and attached to the \textit{content} parameter. Every submission is stored on the blockchain network, providing extra data redundancy that ensures the submission is secure and immutable. In the case of an automatic assessment, peers on the blockchain network will each send the submission file and the test file pre-defined by the teacher to an external marking API, independently confirming the automated marking result.

% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{astx}
%     \caption[AddSubmission Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the AddSubmission Transaction (Tx)} \label{fig:astx}
% \end{figure}

\clearpage

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{AddSubmission}{Learner learner, ModuleUnit unit, String content, String comments}
        \State Assign a Teacher to the submission
        \If{unit.assessment.type is AutoAssessment}
        \State Get unit.assessment.testFile
        \State Post testFile and content to Marking API \Comment{content is the Submission file in base64}
        \State Await return from Marking API
        \State Add returned result to Submission object
        \State Emit ResultAvailable Event (String submissionId, String unitId, String details)
        \ElsIf{unit.assessment.type is AssessorAssessment}
        \State Emit SubmissionUploaded Event (String submissionId, String unitId, String teacherId)
        \EndIf
        \State Add Submission object to blockchain
        \State Add relationship between Submission object and the Learner
        \State \textbf{return} Transaction Accepted
        \EndFunction
    \end{algorithmic}
\end{algorithm}

\subsection{The SubmitResult Transaction}

Teachers would submit their marks on a grid of marking criteria against grade descriptors. This grid would be uploaded as the transaction parameter \textit{marks}, a one-dimensional array where the index corresponds to the marking criterion number and the value corresponds to the grade number. The grades would be calculated by Smart Contracts according to pre-defined weightings and rules. Every peer will run the same chaincode and achieve a consensus over what the result is.
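As a concrete illustration of this encoding, the chaincode could reduce the \textit{marks} array to an overall score roughly as follows. This is only a sketch: the \textit{weight} and \textit{passMark} fields are assumptions made for the example, not properties defined in the data model above.

\begin{verbatim}
// marks[i] is the grade number awarded for marking criterion i.
function calculateResult(assessment, marks) {
    let total = 0;
    let maxTotal = 0;
    const topGrade = assessment.gradeDescriptors.length - 1;
    for (let i = 0; i < marks.length; i++) {
        const weight = assessment.criteria[i].weight;  // assumed field
        total += marks[i] * weight;
        maxTotal += topGrade * weight;
    }
    const score = 100 * total / maxTotal;
    return { score: score, passed: score >= assessment.passMark };
}
\end{verbatim}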
% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{srtx}
%     \caption[SubmitResult Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the SubmitResult Transaction (Tx)} \label{fig:srtx}
% \end{figure}

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{SubmitResult}{Submission submission, Teacher assessor, Integer[] marks, String feedbackMd}
        \If{unit.assessment.type is AssessorAssessment}
        \State \textbf{continue}
        \Else \textbf{ return} Transaction Rejected
        \EndIf
        \If{assessor is submission.teacherAssigned}
        \State \textbf{continue}
        \Else \textbf{ return} Transaction Rejected
        \EndIf
        \State Calculate result based on grade descriptors and marking criteria
        \State Update the Submission object on the blockchain with result
        \State Emit ResultAvailable Event (String submissionId, String unitId, String details)
        \If{assessment.terminal is true}
        \If{the overall result of the module is a pass}
        \State Emit CourseModuleCompleted Event (Teacher teacherAssigned, String submissionId, String modId)
        \EndIf
        \EndIf
        \State \textbf{return} Transaction Accepted
        \EndFunction
    \end{algorithmic}
\end{algorithm}

\subsection{The GenCertificate Transaction}

This transaction was designed to allow and encourage manual diligence over course completion evidence before issuing a certificate. It could also require multiple signatures for a certificate.

% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{gctx}
%     \caption[GenCertificate Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the GenCertificate Transaction (Tx)} \label{fig:gctx}
% \end{figure}

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{GenCertificate}{Teacher signatory, String[] subIds, String currId \textit{nullable}, String certId \textit{nullable}}
        \If{currId is not null}
        \If{Submissions in subIds are not all passed} \textbf{return} Transaction Rejected
        \EndIf
        \If{Submissions do not fulfil Curriculum with currId} \textbf{return} Transaction Rejected
        \EndIf
        \Else
        \If{Submissions in subIds are not all passed} \textbf{return} Transaction Rejected
        \EndIf
        \If{Submissions do not fulfil a CourseModule} \textbf{return} Transaction Rejected
        \EndIf
        \EndIf
        \If{certId is not null}
        \State Update Certificate object with the new signatory on the blockchain
        \Else
        \State Create Certificate object on the blockchain
        \EndIf
        \If{the required signatories have been added}
        \State Add relationship between Certificate object and the Learner
        \State Emit NewCertificate Event (String certId)
        \EndIf
        \State \textbf{return} Transaction Accepted
        \EndFunction
    \end{algorithmic}
\end{algorithm}

\subsection{The ProposeCurriculum Transaction}

% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{pctx}
%     \caption[ProposeCurriculum Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the ProposeCurriculum Transaction (Tx)} \label{fig:pctx}
% \end{figure}

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{ProposeCurriculum}{Learner learner, Teacher teacher, String[] modIds, String currId \textit{nullable}}
        \If{pre-/co-requisite conflicts or repeated registrations exist}
        \State \textbf{return} Transaction Rejected
        \EndIf
        \If{currId is not null}
        \State Overwrite Curriculum object with currId on the blockchain with new modIds
        \Else
        \State Create new Curriculum object on the blockchain with modIds
        \EndIf
        \State \textbf{return} Transaction Accepted
        \EndFunction
    \end{algorithmic}
\end{algorithm}

\clearpage

\subsection{The ApproveCurriculum Transaction}
% \begin{figure}[!ht]
%     \centering
%     \includegraphics[width=1.0\textwidth]{actx}
%     \caption[ApproveCurriculum Transaction flowchart]
%     {Flowchart representation of the Smart Contract for the ApproveCurriculum Transaction (Tx)} \label{fig:actx}
% \end{figure}

\begin{algorithm}
    \begin{algorithmic}[0]
        \Function{ApproveCurriculum}{Teacher approver, String currId}
        \If{Learner.balance of Curriculum with currId > cost of Curriculum}
        \State Deduct cost from Learner.balance
        \State Add all CourseModules in Curriculum to Learner.mods
        \State Emit CurriculumApproved Event (String currId, String learnerId)
        \State Emit BalanceChanges Event (String learnerId, Double oldBalance, Double newBalance, String details)
        \State \textbf{return} Transaction Accepted
        \Else
        \State \textbf{return} Transaction Rejected
        \EndIf
        \EndFunction
    \end{algorithmic}
\end{algorithm}

\section{Access Control}

The design of access control on the blockchain network was based on the Access Control Language of Hyperledger Composer. The language allows for the creation of access control rules that are conditional upon: the type of participant or asset, the use of a specific transaction, and the values of properties of the participant or asset. This means the mode of access control for the blockchain is both role-based and attribute-based (transaction and condition). Figure \ref{fig:ac_model} is a generic representation of what an access control rule deployed to the Hyperledger Fabric blockchain is composed of, drawn based on the style proposed by \citet{poniszewska2005representation} for extended role-based access controls.\\
\begin{figure}[!ht]
    \centering
    \includegraphics[width=1.0\textwidth]{ac_model}
    \caption[Access Control Model of Hyperledger Composer]
    {A flowchart representation of the Access Control model offered by Hyperledger Composer \citep{official2018composer}.}
    \label{fig:ac_model}
\end{figure}

Most of the rules are a form of Mandatory Access Control: they are enforced by the system across the network, cannot be modified by users or client applications, and restrict access to objects based on fixed security attributes \citep{yuan2005attributed}. Unless explicitly allowed by a defined access control rule, all operations are denied by default.

Some rules are more discretionary because the security-related attributes can be changed by users. For example, a \textit{Learner} will be able to set its own \textit{acLevels} and \textit{priviledgedReaderIds} fields to control which records related to the \textit{Learner} a \textit{Reader} can read. The integer values stored in \textit{acLevels} correspond to a permission permutation model inspired by UNIX file permissions.

\begin{table}[!ht]
    \caption{The Reader access permutation model for Learner assets}
    \centering
    \label{table:reader_permutations}
    \begin{tabular}{|c|c|c|c|c|c|c|c|c|}
        \hline
        & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
        Certificates (1) & & \checkmark & & \checkmark & & \checkmark & & \checkmark \\ \hline
        Submissions (2) & & & \checkmark & \checkmark & & & \checkmark & \checkmark \\ \hline
        Learner, Curriculums (4) & & & & & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline
    \end{tabular}
\end{table}

An \textit{acLevels} value of \texttt{[1, 3]}, for example, would mean that all \textit{Readers} can read your \textit{Certificates}, while \textit{Readers} included in \textit{priviledgedReaderIds} can read your \textit{Certificates} and \textit{Submissions}.
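Expressed in Composer's access control language, the first tier of this model could look roughly like the following rule (the namespace and rule name are illustrative):

\begin{verbatim}
rule ReadersCanReadPublicCertificates {
    description: "First tier: any Reader may read a Certificate"
    participant(r): "org.elearning.records.Reader"
    operation: READ
    resource(c): "org.elearning.records.Certificate"
    condition: (c.learner.acLevels[0] >= 1)
    action: ALLOW
}
\end{verbatim}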
Table \ref{table:ac_rules} lists all of the access control rules designed for the blockchain of this project. Note that a DELETE operation does not really delete records on the blockchain, it simply marks an asset in a registry as "deleted". Transactions records on the blockchain are immutable and cannot be deleted. \begin{landscape} \begin{table}[!ht] \caption{The access control rules designed for the blockchain} \centering \label{table:ac_rules} \begin{tabularx}{24cm}{l>{\hsize=.45\hsize}X>{\hsize=.55\hsize}X>{\hsize=0.9\hsize}Xc>{\hsize=.65\hsize}X} & Role & Transaction & Attribute-based Condition & Operation(s) & Object(s) \\ \toprule 1 & Learner, Teacher, Reader & -- & -- & READ & CourseModule, ModuleUnit, Assessment \\ \midrule 2 & Learner & -- & user.uId == object.uId & UPDATE, READ & Learner \\ \midrule 3 & Learner & -- & -- & READ & Teacher, Reader \\ \midrule 4 & Learner & AddSubmission & user.uId == object.learner.uId & CREATE & Submission \\ \midrule 5 & Learner & -- & user.uId == object.learner.uId & READ & Submission \\ \midrule 6 & Learner & ProposeCurriculum & user.uId == object.learner.uId & CREATE, UPDATE & Curriculum \\ \midrule 7 & Learner & -- & user.uId == object.learner.uId & READ & Curriculum \\ \midrule 8 & Learner & -- & user.uId == object.learner.uId & READ & Certificate\\ \midrule 9 & Teacher & -- & user.uId == object.uId & UPDATE, READ & Teacher \\ \midrule 10 & Teacher & -- & object.mods has (mod.teachers has (teacher.uId == user.uId)) & READ & Learner \\ \midrule 11 & Teacher & -- & -- & READ & Reader \\ \midrule 12 & Teacher & CreateModule & object.teachers has \newline(teacher.uId == user.uId) & CREATE & CourseModule, ModuleUnit, Assessment \\ \midrule 13 & Teacher & -- & object.unit.mod.teachers has \newline(teacher.uId == user.uId) & READ & Submission \\ \midrule 14 & Teacher & SubmitResult & user.uId == object.teacherAssigned.uId & UPDATE & Submission \\ \midrule 15 & Teacher & -- & user.uId == object.teacher.uId & READ & Curriculum \\ \bottomrule \end{tabularx} \end{table} (Continued on next page) \end{landscape} \begin{landscape} \begin{table} \begin{tabularx}{24cm}{l>{\hsize=.45\hsize}X>{\hsize=.55\hsize}X>{\hsize=0.9\hsize}Xc>{\hsize=.65\hsize}X} & Role & Transaction & Attribute-based Condition & Operation(s) & Object(s) \\ \toprule 16 & Teacher & ProposeCurriculum, \newline ApproveCurriculum & user.uId == object.teacher.uId & UPDATE & Curriculum \\ \midrule 17 & Teacher & GenCertificate & user.uId == transaction.signatory.uId & CREATE, UPDATE & Certificate \\ \midrule 18 & Teacher & -- & object.signatories has (signatory.uId == user.uId) & READ & Certificate \\ \midrule 19 & Reader & -- & user.uId == object.uId & UPDATE, READ & Reader \\ \midrule 20 & Reader & -- & -- & READ & Teacher \\ \midrule 21 & Reader & -- & n = object.learner.acLevels[0] & READ* & Certificate (n>=1), \newline Submission (n>=3), \newline Learner, Curriculum (n>=5) \\ \midrule 22 & Reader & -- & object.learner.priviledgedReaderIds has (id == user.uId) AND n = object.learner.acLevels[1] & READ* & Certificate (n>=1), \newline Submission (n>=3), \newline Learner, Curriculum (n>=5) \\ \midrule 23 & ADMIN & -- & -- & CREATE, UPDATE, DELETE & composer.system.Participant \\ \midrule 24 & ADMIN & -- & -- & READ & ALL \\ \bottomrule \end{tabularx} \end{table} \end{landscape} \section{Limitations} Some limitations have been discovered for the above blockchain network design. 
They have not been accounted for or patched because of either a desire to keep the design minimal and achievable, or a shortage of time to conduct background research. Here is a list of the notable limitations: \begin{itemize} \item The \textit{Teacher} participant was not designed to have different tiers of privileges. In higher education, different roles such as module leader, module reviewer, supporting tutors exist. A module leader may also need the permissions to create new \textit{Teacher} participants who are supporting tutors. \item There was no participant type for a higher education staff who is an institution administrator and not a teaching staff. They may require more permissions than a regular \textit{Teacher} or \textit{Reader}, and bespoke transactions. \item There was no further development around the \textit{Reader} participant, with no transactions or client applications planned. This was to save design and development time as we already know from existing projects (Chapter 2.5) that public verification and visibility is possible. \end{itemize} \section{User Interfaces for Client Applications} To demonstrate the improvements a blockchain back-end and Smart Contracts can bring to an e-Learning platform, a Learner client application and a Teacher client application will be built at the implementation stage. While improving usability in e-Learning was not one of the objectives of this project, poor usability and interface designs could adversely affect the evaluation of improvements that this project aimed for. Therefore, \citet{nielsen199510}'s famous five usability design goals and ten usability design heuristics were considered when developing the application prototypes. See Table \ref{table:ux_considerations}: \begin{table}[!ht] \caption[Usability considerations for client applications] {Usability considerations for client applications in the \citet{nielsen199510} framework} \centering \label{table:ux_considerations} \begin{tabularx}{\textwidth}{lXX} Usability Goal & Usability Heuristic & Requirements from Ch. 4 \\ \toprule Learnability & Match between system and the real world\newline Visibility of system status \newline Consistency and standards & Use common higher education glossary \newline Notifications Area (NR3) \newline Adopt a popular design language (NR6) \\ \midrule Efficiency & Flexibility and efficiency of use & Flexibility in results types (FR8) \\ \midrule Memorability & Recognition rather than recall & Always Visible Navigation Menu (NR7) \\ \midrule Error Reduction & Help users recognize, diagnose, and recover from errors & Error Messages on Transaction Failures (NR2) \\ \midrule Satisfaction & Aesthetic and minimalist design & The system should be highly usable and visually appealing (NR6) \\ \\ \bottomrule \end{tabularx} \end{table} To maximise learnability and user familiarity, the client applications will adopt a popular design language for their user interfaces. Material Design, a design system that combines theory, resources, and tools, is used by the most popular mobile operating system Android and many web applications \citep{google2018material}. Resources such as free-to-use icon packs and tools such as well-maintained user interface component libraries on various mobile and web development platforms will enable quick scaffolding of design compliant and highly usable applications. 
Figure \ref{fig:sitemaps} shows the sitemap trees that describe the contextual flow for users in the client applications:

\begin{figure}[!ht]
    \centering
    \includegraphics[width=.75\textwidth]{sitemaps}
    \caption[Client Application Sitemaps]
    {Sitemap designs for the learner and teacher client applications with notes on the location of transaction ordering dialogues.}
    \label{fig:sitemaps}
\end{figure}

No low-fidelity prototypes for the two demonstrator applications were produced, as there was no research need for them: the usability design was driven by heuristics rather than a more user-centred approach. The next chapter describes in further detail how high-fidelity, demonstration-ready web applications were built for this project.
{ "alphanum_fraction": 0.6885907535, "avg_line_length": 58.2285251216, "ext": "tex", "hexsha": "3068c8316becdd32681b6de8a45f931e138b0dc6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3e0fae40d106873480f6a07bbe11a4af74091860", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dtylam/fypp", "max_forks_repo_path": "Chapter5/chapter5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3e0fae40d106873480f6a07bbe11a4af74091860", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dtylam/fypp", "max_issues_repo_path": "Chapter5/chapter5.tex", "max_line_length": 286, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e0fae40d106873480f6a07bbe11a4af74091860", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dtylam/fypp", "max_stars_repo_path": "Chapter5/chapter5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8043, "size": 35927 }
\documentclass[10pt,a4paper,oneside]{article} % This first part of the file is called the PREAMBLE. It includes % customizations and command definitions. The preamble is everything % between \documentclass and \begin{document}. \usepackage[margin=1in]{geometry} % set the margins to 1in on all sides \usepackage{graphicx} % to include figures \usepackage{amsmath} % great math stuff \usepackage{amsfonts} % for blackboard bold, etc \usepackage{amsthm} % better theorem environments \usepackage[english]{babel} \usepackage[square, numbers]{natbib} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{lmodern} \usepackage{amssymb} \usepackage{authoraftertitle} \usepackage{hyperref} \usepackage{multicol} \usepackage{caption} \usepackage{subcaption} \usepackage{placeins} \usepackage{setspace} \usepackage{wrapfig} %\onehalfspace \singlespace % Author \author{Ondrej Škopek\\ Faculty of Mathematics and Physics\\ Charles University in Prague\\ \texttt{\href{mailto:[email protected]}{[email protected]}}} \title{Individual Software Project specification -- TransportEditor} %\date{3. mája 2015} \date{\today} % hyperref \hypersetup{ bookmarks=true, % show bookmarks bar? unicode=true, % non-Latin characters in Acrobat’s bookmarks pdftoolbar=true, % show Acrobat’s toolbar? pdfmenubar=true, % show Acrobat’s menu? pdffitwindow=true, % window fit to page when opened pdfstartview={FitV}, % fits the width of the page to the window pdftitle={\MyTitle}, % title pdfauthor={\MyAuthor}, % author pdfsubject={\MyTitle}, % subject of the document pdfcreator={\MyAuthor}, % creator of the document pdfproducer={\MyAuthor}, % producer of the document pdfkeywords={software} {project} {planning} {automated planning} {java} {javafx} {ipc} {icaps} {transport} {transport domain}, % list of keywords pdfnewwindow=true, % links in new PDF window colorlinks=false, % false: boxed links; true: colored links linkcolor=red, % color of internal links (change box color with linkbordercolor) citecolor=green, % color of links to bibliography filecolor=magenta, % color of file links urlcolor=cyan, % color of external links } % various theorems, numbered by section \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{obsv}[thm]{Observation} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \DeclareMathOperator{\id}{id} \newcommand{\TODO}[1]{{\textbf{TODO:} #1}} % for TODOs \newcommand{\comment}[1]{} % for comments \newcommand{\dist}{\text{dist}} % distance function \newcommand{\bd}[1]{\mathbf{#1}} % for bolding symbols \newcommand{\RR}{\mathbb{R}} % for Real numbers \newcommand{\ZZ}{\mathbb{Z}} % for Integers \newcommand{\col}[1]{\left[\begin{matrix} #1 \end{matrix} \right]} \newcommand{\comb}[2]{\binom{#1^2 + #2^2}{#1+#2}} \newcommand{\pname}{TransportEditor} % project name (or code name) \begin{document} \maketitle \section{Basic information} \pname{} aims to be a problem editor and plan visualizer for the Transport domain from the International Planning Competition 2008. The goal is to create an intuitive GUI desktop application for making quick changes and re-planning, but also designing a new problem dataset from scratch. \pname{} will help researchers working on this domain fine-tune their planners; they can visualize the various corner cases their planner fails to handle, step through the generated plan and find the points where their approach fails. A secondary motivation is to be able to test approaches for creating plans for the domain as part of our future bachelor thesis. 
\subsection{The Transport planning domain}
\label{domain-info}

Transport is a domain originally designed for the International Planning Competition (IPC, part of the International Conference on Automated Planning and Scheduling, ICAPS). Transport first appeared at \href{http://icaps-conference.org/ipc2008/deterministic/Domains.html}{IPC-6 2008}. Since then, it has been used in every IPC, specifically \href{http://www.plg.inf.uc3m.es/ipc2011-deterministic/}{IPC-7 2011} and \href{https://helios.hud.ac.uk/scommv/IPC-14/}{IPC-8 2014}.

There are two basic formulations of the Transport domain family (i.e. two ``similar Transport domains''):
\begin{itemize}
\item \verb+transport-strips+ -- the classical, sequential Transport domain. See section \ref{transport-strips} for details.
\item \verb+transport-numeric+ -- the numerical Transport domain. See section \ref{transport-numeric} for details.
\end{itemize}

Both of these formulations have been used interchangeably in various competition tracks. The following is an overview of the distinct datasets, their associated IPC competition, the track at the competition and the formulation used (descriptions of the tracks in hyperlinks):

\begin{center}
\begin{tabular}{c|c|c|c}
\textbf{Dataset name} & \textbf{Competition} & \textbf{Track} & \textbf{Formulation} \\ \hline \hline
netben-opt-6 & IPC-6 & \href{http://icaps-conference.org/ipc2008/deterministic/NetBenefitOptimization.html}{Net-benefit: optimal} & Numeric \\
seq-opt-6 & IPC-6 & \href{http://icaps-conference.org/ipc2008/deterministic/SequentialOptimization.html}{Sequential: optimal} & STRIPS \\
seq-sat-6 & IPC-6 & \href{http://icaps-conference.org/ipc2008/deterministic/SequentialSatisficing.html}{Sequential: satisficing} & STRIPS \\
tempo-sat-6 & IPC-6 & \href{http://icaps-conference.org/ipc2008/deterministic/TemporalSatisficing.html}{Temporal: satisficing} & Numeric \\ \hline
seq-agl-8 & IPC-8 & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqagi.html}{Sequential: agile} & STRIPS \\
seq-mco-8 & IPC-8 & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqmulti.html}{Sequential: multi-core} & STRIPS \\
seq-opt-8 & IPC-8 & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqopt.html}{Sequential: optimal} & STRIPS \\
seq-sat-8 & IPC-8 & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqsat.html}{Sequential: satisficing} & STRIPS \\
\end{tabular}
\end{center}

Short descriptions of the various tracks and subtracks can be found on the rules page of \href{http://icaps-conference.org/ipc2008/deterministic/CompetitionRules.html}{IPC-6} and the rules page of \href{https://helios.hud.ac.uk/scommv/IPC-14/rules.html}{IPC-8}. Unfortunately, we weren't able to acquire the datasets for IPC-7, as the \href{http://www.plg.inf.uc3m.es/ipc2011-deterministic/Domains.html}{Subversion repository} that promises to contain them is unavailable.

As a bonus, \pname{} supports custom domains based on the Transport domain family. Users can create new Transport-like domains that have any subset of constraints/features of the two basic formulations. Only subsets in which individual constraints are not in conflict with each other are allowed.

\subsection{Transport STRIPS formulation description}\label{transport-strips}

The STRIPS version of Transport is a logistics domain -- vehicles with limited capacities drive around on a (generally asymmetric) positively-weighted oriented graph, picking up and dropping packages along the way. Picking up or dropping a package costs 1; driving along a road has a cost given by the edge weight.
All packages have a size of 1. The general aim is to minimize the total cost, while delivering all packages to their destinations.

\subsection{Transport Numeric formulation description}\label{transport-numeric}

The numerical version of Transport is very similar to the STRIPS version (see section \ref{transport-strips}). The key differences are:
\begin{itemize}
\item Package sizes can now be any positive number.
\item The concept of fuel -- every vehicle has a maximum fuel level and a current fuel level, and all roads have a fuel demand (generally different than the length of the road). A vehicle can refuel if there is a petrol station at the given location. Refuelling always fills the vehicle's tank to the max.
\item The introduction of time:
\begin{itemize}
\item The duration of driving along a road is equal to its length.
\item The duration of picking a package up or dropping it off is equal to 1.
\item The duration of refuelling is equal to 10.
\item A vehicle cannot pick up or drop packages concurrently -- it always handles packages one at a time.
\item A vehicle cannot do other actions while driving to another location (it is essentially placed ``off the graph'' for the duration of driving).
\end{itemize}
\item The cost function is removed (we now minimize the total duration of a plan).
\end{itemize}

\section{Feature requirements}

In this section, we present the basic functionality requirements for \pname{}.

\subsection{Functionality overview}

The basic workflow of \pname{} consists of the following user steps:
\begin{itemize}
\item Select which formulation of the Transport domain they want to work with or create their own variant.
\item Load or create their own problem of the given domain. See section \ref{input-output} for details on the input format.
\item \pname{} draws the given graph as well as it can.
\item Iterate among the following options:
\begin{itemize}
\item Load a planner executable and let \pname{} run the planner on the loaded problem instance for a given time, then load the resulting plan.
\item Load a pre-generated plan.
\item Step through the individual plan actions and let \pname{} visualize them. The user can go forward and backward in the plan and inspect each step in great detail.
\item Edit the graph: add/remove/edit the location or properties of vehicles, packages, roads, locations and possibly petrol stations.
\item Save the currently generated plan.
\item Save the problem (along with the graph drawing hints).
\item Save the domain (export to a PDDL file).
\end{itemize}
\item Save and close the currently loaded problem. Exit the application or go back to the first step.
\end{itemize}

There are a lot of requirements that arise from the typical workflow above. We will describe them in the next few sections.

\subsection{User interface functionality}
\label{ui}

In order for \pname{} to be useful to its users, it has to enable a quick and efficient workflow. The biggest challenge will be to design an intuitive and responsive graphical user interface. Key parts of the user interface are:
\begin{itemize}
\item A large, uncrowded drawing of the road graph, showing only the most relevant information, possibly letting the user display details on-demand.
\item A clear and concise list of actions in the plan the user wants to visualize.
\item A panel of tools to edit the problem and graph with.
\end{itemize}
We will describe these three in detail in the following sections.
For an image that aims to capture the formulated requirements, see section \ref{ui-layout}.

\subsubsection{Graph visualization}
\label{graphviz}

\begin{wrapfigure}{r}{0.5\textwidth}
\includegraphics[width=0.6\textwidth]{../data/img/pdf/seq-sat-6-p30}
\caption{seq-sat-6-p30 prototype graph drawing. Drawn using the \href{http://emr.cs.iit.edu/~reingold/force-directed.pdf}{Fruchterman \& Reingold algorithm} with the help of \href{https://networkx.github.io/}{NetworkX}.}
\label{fig:graph}
\end{wrapfigure}

Above all, the graph should be reasonably well arranged and drawn. However, drawing an arbitrary graph is considered a hard problem in its own right. We will focus on implementing a reasonable graph drawing algorithm and let the user make manual drag-and-drop style changes on top of the graph drawing we produce.

A more specific property of the Transport domain graphs is that they have quite a lot of data on the edges (roads) and vertexes (locations) -- road lengths, fuel demands, location names, and the current locations of petrol stations, packages and vehicles. Clearly, this cannot all fit on the graph drawing at once -- we will have to draw the information selectively in order not to crowd the graph. Overlays that pop up on mouse-over, or a graph legend where the user can select which information to show, are just a few of the options. To highlight the need for selective drawing of information, observe the image on the right (figure \ref{fig:graph}) -- a graph drawing of the p30 problem from the seq-sat-6 dataset.

Another important feature is symbolic visualization of individual actions in the plan. There are only 3-4 action types in the Transport domain family (drive, pick-up, drop and possibly refuel in the numerical formulation). We need to be able to visualize the selected plan action in the graph to help the user see what is going on in the plan.

\subsubsection{Plan action list}
\label{plan-action-list}

An important part of the visualization workflow is the actual planner-generated plan. The user interface will show a context-aware list of its actions. Because plans can be quite long (generally hundreds or thousands of actions), we cannot show them all at once. The list will only show the currently selected action, a few actions preceding it and a few upcoming actions.

The graph described in section \ref{graphviz} and the state it represents will change dynamically with the selected action in the list. The user can fluently scroll through the plan actions, continually stepping through the plan either forwards or backwards. Jumping to a specific action will also be supported.

If users want to view the effect of a small change in the plan, they can choose to edit the current plan action or reorder it (move it up or down). In the case of temporal domains, they can edit the start time of actions. A user reordering or editing an action cannot invalidate the plan -- there will be dynamic validation in place, which will disable any changes that make the plan invalid.

The list will also support filtering: the user can choose to view only those actions which involve a certain subset of planning objects (vehicle, package, road, \ldots) in the given problem.

Above the plan action list, there will be buttons to allow planning or re-planning the current problem using a planner of the user's choice. During planning, \pname{} will attempt to show the best plan found so far, if it gets the needed information from the planner.
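To make the intended windowing and filtering behaviour of this list concrete, a simplified Java sketch of the underlying list model follows; all class and method names are illustrative and do not prescribe the final design.

\begin{verbatim}
import java.util.List;
import java.util.stream.Collectors;

/** Illustrative sketch of the windowed, filterable plan action list. */
final class PlanActionListModel {
    private final List<Action> actions;  // the full planner-generated plan
    private int selectedIndex = 0;

    PlanActionListModel(List<Action> actions) {
        this.actions = actions;
    }

    /** Select an action; the graph view is redrawn to match its state. */
    void select(int index) {
        selectedIndex = Math.max(0, Math.min(index, actions.size() - 1));
    }

    /** The selected action plus a few neighbours on each side. */
    List<Action> visibleWindow(int context) {
        int from = Math.max(0, selectedIndex - context);
        int to = Math.min(actions.size(), selectedIndex + context + 1);
        return actions.subList(from, to);
    }

    /** Only the actions that involve any of the given planning objects. */
    List<Action> filterBy(List<String> objectNames) {
        return actions.stream()
                .filter(a -> objectNames.stream().anyMatch(a::involves))
                .collect(Collectors.toList());
    }
}

/** Minimal action abstraction used by the sketch above. */
interface Action {
    boolean involves(String planningObjectName);
}
\end{verbatim}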
\subsubsection{Editing tools}

A big feature of \pname{} is editing the domain problems on the fly. All the graph vertexes will be movable using drag-and-drop (the edges will move along, too). Details of individual elements on the graph (vehicles, packages, locations, roads) will be editable on user request.

Additionally, a toolbox will be present. It will enable quick access to buttons for adding/removing locations, roads, vehicles and packages. To prevent accidental changes to the problem when we just want to test plan generation, the toolbox will contain a lock feature -- once enabled, no changes to the graph will be allowed.

\subsection{Performance}

The performance requirements of \pname{} cannot be quantified numerically -- the only criterion is that the application must be sufficiently responsive to user actions. The implementation-specific details on how to achieve this are discussed in section \ref{used-tech}.

\subsection{Data model re-usability}

The internal data representation has to be reusable and therefore reasonably decoupled from the rest of the application. It is very probable that we will want to implement a planner for the Transport domain later on -- and for that, we can reuse the entire back-end data model that is used to represent the domain problem in memory and build the planner on top of it. More details on the decoupling of the individual modules of \pname{} are in section \ref{modules}.

\subsection{Optional features}

A few more optional features, which may or may not be included in the final product:
\begin{itemize}
\item Automatic plan verification -- Upon loading a plan, verify (using VAL) that the plan is valid for the given problem and domain. See section \ref{plan-format} for more information.
\item When a user selects a certain subset of planning objects to focus on in the plan action list (see section \ref{plan-action-list}), visualize those on the graph (section \ref{graphviz}).
\item Various plan action list visualizations (can be selected based on user preferences):
\begin{itemize}
\item \href{https://en.wikipedia.org/wiki/Gantt_chart}{Gantt chart} -- can clearly visualize changes in planning object properties (e.g.~vehicle capacity).
\item Partially ordered action graph -- visualizes preconditions and effects of actions clearly, allows the user to see what is ``holding up'' the action.
\item Time-ordered multi-list -- for domains without temporal conditions, this is equivalent to the original plan action list. For temporal domains, it visualizes concurrently occurring actions side-by-side -- the user can clearly see the relation between duration intervals of actions.
\end{itemize}
\end{itemize}

\section{Program decomposition}

In this section, we present a basic decomposition of the application from the logical perspective.

\subsection{Modules}
\label{modules}

The application will be split up into individual modules. Modules should be decoupled as much as possible, communicating only through well-specified interfaces. We propose the following modules:
\begin{itemize}
\item View -- representation of the individual UI elements.
\item Controller -- an interfacing module, sitting between the view and model. Serves as a mediator -- updating the view based on model changes, and propagating user actions from the view back into the model.
\item Model -- the back-end data model, an internal problem data representation.
\item Persistence -- handles saving/loading the model and plans for the model to an external data store (e.g.~a hard disk).
\end{itemize}

%\subsection{UML diagram of the model}
%
%\TODO Add a UML diagram of the model

\section{Software description}

In this section, we describe the chosen technology and how we aim to fulfil the requirements using it.

\subsection{Used technologies}
\label{used-tech}

We have chosen to implement the application, along with its graphical user interface, as a desktop Java 8 program. The main reasons for this choice were portability and our experience working with Java and its associated technologies.

The reason for choosing Java 8 as the minimum version is JavaFX 8 -- a successor of JavaFX 2.0, now newly incorporated into the standard Java Runtime Environment (JRE) and available on all Java Standard Edition (Java SE) versions of the JRE. The model will use features not specific to JavaFX and will therefore be reusable in any program targeting standard Java SE 8 or newer.

To write cleaner, more expressive code, we will probably use multiple open-source libraries, all of which will be specified in the developer documentation (section \ref{docs}).

The application will be shipped as a ZIP archive, containing an executable JAR file and the user documentation.

\subsubsection{Environment dependencies}

The runtime dependencies that are implied by the previous section are:
\begin{itemize}
\item Java SE, version 8+
\end{itemize}
All other dependencies will be packaged and shipped along with the product.

The development dependencies will be:
\begin{itemize}
\item Java SE, version 8+
\item Maven, version 3+
\end{itemize}

Any hardware dependencies or arising software dependencies will be specified in the user or developer documentation (section \ref{docs}).

\subsection{Proposed user interface layout}
\label{ui-layout}

The following image (figure \ref{fig:gui}) aims to be an abstract draft of the application's user interface. More details on the specifics of the UI can be found in section \ref{ui}.

\begin{figure}[h]
\centering
\includegraphics[width=0.73\textwidth]{../data/img/pdf/gui}
\caption{Abstract GUI prototype}
\label{fig:gui}
\end{figure}

\subsection{User and developer documentation}
\label{docs}

We hope most of the user documentation will be left unread -- the user interface and workflow should be mostly self-explanatory and intuitive. Nevertheless, we will produce user documentation, involving:
\begin{itemize}
\item A help document accessible from the menu bar inside the application.
\item Helpful hint and validation dialog windows or other pop-up messages. In general, we prefer to prevent the user from doing something invalid in the first place, rather than notifying them of the error afterwards. Therefore, we disable buttons where appropriate, validate inputs, \ldots
\item A standalone text document, documenting the features, supported data formats, instructions on installing the Java JRE and a demo usage tutorial.
\end{itemize}

The developer documentation consists of in-code comments, Javadoc comments (and a generated HTML page hierarchy using the \verb+javadoc+ tool) and a standalone text file, containing information on:
\begin{itemize}
\item Links to the code repository, issue tracker, continuous integration service, \ldots
\item Getting the code from the code repository.
\item Installing the needed dependencies.
\item Compiling and building the project.
\item Finding their way around the project -- a high-level overview of the project, module content and usage descriptions, \ldots
\end{itemize}

\section{Input \& output format}
\label{input-output}

There are two fundamentally different kinds of files \pname{} works with: problems and plans. Problem files contain information on the domain problem (roads, vehicles, packages, locations, \ldots). Plan files contain a sequence of actions on a specific problem that (if the plan is valid) constitute a plan for the given problem.

\subsubsection{Domain problem file format}\label{problem-format}

Both rule pages of \href{http://icaps-conference.org/ipc2008/deterministic/CompetitionRules.html}{IPC-6} and \href{https://helios.hud.ac.uk/scommv/IPC-14/rules.html}{IPC-8} specify PDDL 3.1 as their official modelling language. Daniel L. Kovacs proposed an updated and corrected BNF (Backus-Naur Form) \href{https://helios.hud.ac.uk/scommv/IPC-14/repository/kovacs-pddl-3.1-2011.pdf}{definition of PDDL 3.1}.

\pname{} doesn't load the PDDL domain definitions directly -- those are already built-in. We only read the domain files to check which subset of conditions the user has chosen to model. See section \ref{domain-info} for more information. The loaded problem has to be compatible with the loaded domain variant. \pname{} will make an attempt to read problems which have more information than is needed for the currently modelled domain.

To allow \pname{} to save the positions of vertices in the graph, we make use of an already used ``unwritten extension'' of PDDL: above every road predicate, there is a comment in the form of \verb+; a,b -> c,d+, where \verb+a+, \verb+b+, \verb+c+ and \verb+d+ are integers (non-negative whole numbers). Those numbers correspond to the coordinates of the source \verb+(a,b)+ and destination \verb+(c,d)+ locations of the defined road on the line below. The upper left corner of the drawing corresponds to the \verb+(0,0)+ point. A point with the coordinates \verb+(x,y)+ is \verb+x+ units below and \verb+y+ units to the right of the \verb+(0,0)+ point.

\subsubsection{Plan file format}\label{plan-format}

The plan file format is generally specified as any plan for the given domain that is a valid plan according to the validator \href{http://www.inf.kcl.ac.uk/research/groups/PLANNING/index.php?option=com_content&view=article&id=70&Itemid=77}{VAL}. Source code of VAL \href{https://github.com/KCL-Planning/VAL}{can be found on GitHub}. \pname{} will try to accept any plan which is a valid plan of the given Transport domain formulation according to VAL.

\section{Ending notes and remarks}

All conversions from hand-drawn pictures to vector images were done using the wonderful open-source project \href{https://github.com/honzajavorek/cartoonist}{cartoonist}.

{ \footnotesize % 10pt in 12pt article size
%\input{bibliography.tex}
}

\end{document}
{ "alphanum_fraction": 0.7721649911, "avg_line_length": 53.2400881057, "ext": "tex", "hexsha": "b8c9593352d6c15ab0dbe7eca1aadcf2168c995a", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-09-06T00:54:12.000Z", "max_forks_repo_forks_event_min_datetime": "2018-03-19T15:56:29.000Z", "max_forks_repo_head_hexsha": "5f99e64ae6e4068fae69d3df6c1d9e58e73e9d11", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oskopek/TransportEditor", "max_forks_repo_path": "transport-docs/spec/tex/spec.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "5f99e64ae6e4068fae69d3df6c1d9e58e73e9d11", "max_issues_repo_issues_event_max_datetime": "2022-02-01T00:57:38.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-15T21:05:08.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oskopek/TransportEditor", "max_issues_repo_path": "transport-docs/spec/tex/spec.tex", "max_line_length": 451, "max_stars_count": 4, "max_stars_repo_head_hexsha": "5f99e64ae6e4068fae69d3df6c1d9e58e73e9d11", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "oskopek/TransportEditor", "max_stars_repo_path": "transport-docs/spec/tex/spec.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-05T18:43:51.000Z", "max_stars_repo_stars_event_min_datetime": "2016-11-19T15:36:55.000Z", "num_tokens": 5696, "size": 24171 }
\documentclass[10pt, conference, compsocconf]{IEEEtran}
% Add the compsocconf option for Computer Society conferences.
%
% If IEEEtran.cls has not been installed into the LaTeX system files,
% manually specify the path to it like:
% \documentclass[conference]{../sty/IEEEtran}

\usepackage{balance}
\usepackage{amssymb}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{url}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Handling Contract Violations in Java Card Using Explicit Exception Channels}

% author names and affiliations
% use a multiple column layout for up to two different
% affiliations

\author{\IEEEauthorblockN{Juliana Ara\'ujo, Rafael Souza, N\'elio Cacho, Anamaria Moreira}
\IEEEauthorblockA{Computing and Applied Mathematics Department (DIMAp)\\
Federal University of Rio Grande do Norte (UFRN)\\
Natal-RN, Brazil\\
\{juliana,rafael,neliocacho,anamaria\}@dimap.ufrn.br}
\and
\IEEEauthorblockN{Pl\'acido A. Souza Neto}
\IEEEauthorblockA{Information Technology Department (DIATINF)\\
Federal Institute of Rio Grande do Norte (IFRN)\\
Natal-RN, Brazil\\
[email protected]}
}

% make the title area
\maketitle

\begin{abstract}
Java Card is a version of Java developed to run on devices with severe storage and processing restrictions. The applets that run on these devices are frequently intended for use in critical, highly distributed, mobile conditions. This requires a runtime verification approach based on Design by Contract to improve the safety of Java Card applications.
However, handling contract violations in Java Card applications is challenging due to their communication structure and platform restrictions. Additionally, the Java Card exception handling mechanism requires that developers understand the source of an exception, the place where it is handled, and everything in between. As system development evolves, exceptional control flows become less well-understood, with negative consequences for program maintainability and robustness. In this paper, we claim that this problem can be addressed by implementing an innovative exception handling model which provides abstractions to explicitly describe global views of exceptional control flows.
\end{abstract}

% \begin{IEEEkeywords}
% component; formatting; style; styling
%
% \end{IEEEkeywords}

\IEEEpeerreviewmaketitle

\section{Introduction}

Recent advances in microelectronics have enabled the development of a wide variety of smart card applications, which are able to easily and securely perform complicated computations without being influenced by external factors \cite{Rankl}. It is possible to develop totally new approaches to support electronic purses, health care control, mobile communications, ID verification, access control, and so forth. As a result, a number of approaches have started to emerge to support the implementation of smart card applications \cite{Krakatoa} \cite{CostaMMN09} \cite{CostaMMN12}.

Java Card applets \cite{Chen:2000} are a kind of smart card application that is usually deployed in highly distributed and mobile situations and tends to be used in critical applications. To reduce financial and/or human risks, rigorous verification of such applets is often required to guarantee the intended behavior of the system to which these applets belong.

Considering the need for correctness in critical systems and bundled applications on cards with memory constraints, the Java Card Modeling Language (JCML) \cite{CostaMMN09} \cite{CostaMMN12} has been proposed to support the specification of contracts between modules in accordance with the Design by Contract \cite{Meyer92} principles. Contracts in JCML are defined by means of annotations which express assertions, such as method preconditions, postconditions and class invariants. JCML annotations are automatically translated into runtime assertion checking code that can run on smart cards. Whenever an assertion (contract) is violated, an exception is signaled.

Unfortunately, the smart card environment provided by the Java Card platform has some limitations to deal with exceptions. First, there is no guarantee that the exception signaled by one part of the application is going to reach the other part of the same application. Second, it is not possible to know which contract was violated, since exceptions do not provide stack trace information. Finally, the Java Card exception handling mechanism is not able to identify all exceptions signaled by the card application.

In this context, the contributions of this paper are twofold. First, we discuss the liabilities of conventional exception handling mechanisms to deal with exceptions in smart cards (Section \ref{sec:limitation}). Second, we present EJCFlow (Section \ref{sec:ChannelInJC}), an implementation of the EFlow \cite{Cacho:2008} \cite{Cacho:2008b} model that supports explicit representation of exception flows in Java Card applications. The proposed implementation exploits existing JCML annotations to make the association of handlers with normal code more flexible.
\section{Background}
\label{sec:background}

\subsection{Java Card Description}

Java Card provides a secure environment for applications that run on smart cards and other devices with very limited memory and processing. Due to the nature of these devices, the platform is quite limited. Java Card applications are called applets. In order to run one of these applets on a Java Card device, one must: (i) write the Java Card applet; (ii) compile the applet; (iii) convert the binary classes into a converted applet CAP file; (iv) install the CAP file on the card; (v) run the applet.

The main differences between the Java Virtual Machine (JVM) and Java Card Virtual Machine (JCVM) standards are the exclusion of some important JVM features such as many primitive types, dynamic classes and threads. Because of these restrictions, a typical Java Card application will be very limited, and only some basic functionalities will be provided on-card. The Java Card API specification \cite{apiJavaCard} \cite{ortiz} \cite{sun2008} defines a small subset of the traditional Java programming language API which is supported by the JCVM. On top of this, the Java Card framework defines its own set of core classes that specifically support Java Card applications. The main packages \cite{Chen:2000} are: \textbf{\textit{javacard.framework}}; \textbf{\textit{javacard.security}}; \textbf{\textit{java.io}}; \textbf{\textit{java.lang}}; and \textbf{\textit{java.rmi}}.

Most of the heavier processing is executed on the so-called host side, the program that runs on the terminal to which the card is temporarily connected. However, for safety reasons, some card data may not be seen by the host application. Such sensitive data must be manipulated on the card, safely and correctly. That is one of the advantages of smart cards with respect to magnetic stripe cards: their on-card code may be used to ensure the safety and correctness of data apart from the host application. Thus, it is important to send clear messages to card users and host applications, and to handle exceptions more effectively.

\subsection{Exceptions in Java Card}

The Java Card platform does not support all of the exception types found in the Java technology core packages \cite{apiJavaCard}. For instance, exceptions related to unsupported features (such as threads) are naturally not supported. As in Java, the exception types on the Java Card platform can be checked or unchecked. All checked exceptions extend \textit{CardException} and all unchecked exceptions are a specialization of \textit{CardRuntimeException}. Unchecked exceptions may represent execution errors, programming errors or errors in the communication process between the application and the smart card. All checked exceptions must be caught by the applet \cite{Chen:2000}. As a consequence, no applet method can specify a checked exception in its method signature. Whenever this happens, the Java compiler issues a compilation error.

Since the Java Card platform does not support the String class, it is not possible to use string messages as an exception reason. As a quick alternative, Java Card supports the definition of a reason code to provide a meaning for the signaled exceptions. Due to memory restrictions of Java Card devices, the JCVM strongly recommends reusing exception objects. Figure \ref{1} shows how the Java Card Runtime Environment (JCRE) implements exceptions in a manner that maximizes object recycling by using a singleton each time the \textit{ISOException.throwIt()} method is used.
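This recycling pattern can be illustrated with a short, simplified sketch. The class below is not the actual JCRE source; it only mimics the idea of keeping one reusable exception instance whose reason code is updated before each throw:

\begin{figure}[ht!]
\centering
\scriptsize
\begin{verbatim}
import javacard.framework.CardRuntimeException;

// Simplified illustration of exception-object recycling.
// Not the actual JCRE implementation.
public class RecycledException extends CardRuntimeException {

    // single reusable instance
    private static RecycledException instance;

    public RecycledException(short reason) {
        super(reason);
    }

    public static void throwIt(short reason) {
        if (instance == null) {
            instance = new RecycledException(reason);
        }
        // reuse the same object, only the reason code changes
        instance.setReason(reason);
        throw instance;
    }
}
\end{verbatim}
\end{figure}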
The \textit{ISOException} is thrown by calling the \textit{throwIt()} method with a reason (\textit{sw}) for the exception. The \textit{throwIt()} method is declared static so that there is no need to instantiate an ISOException object in order to throw the exception. Instead, one simply calls \textit{throwIt()} and customizes each call with a reason code. The \textit{throwIt()} method in turn invokes throw on the exception object.

\begin{figure}[ht!]
\centering
\caption{Reuse of the \textit{ISOException} object through \textit{throwIt()}.}
\label{1}
\end{figure}

\subsection{JCML - Java Card Modeling Language Description}

JCML \cite{CostaMMN09} \cite{CostaMMN12} annotates Java Card programs to produce runtime verification code which can be executed on devices with severe memory and processing restrictions. JCML includes all JML \cite{jml} \cite{Leavens_2007} \cite{Bhorkar} constructs which can be translated into Java Card compliant code. JCML specifications are Java Card programs annotated with specification constructs. The specification part of a JCML program is defined using a special kind of comment. Java Card constructs defined within the annotations are treated by the JCML compiler and can be used in the pre- and post-conditions and invariants. The code generated by the JCML compiler is smaller and faster than equivalent code generated by the original JML compiler. This is due to the Java Card restrictions, which also impose some limitations on JCML itself.

For each condition in the JCML annotations, a checking method is generated. The compiler uses the wrapper approach proposed in \cite{Cacho:2008}. In this approach, the code of each (annotated) method of the program is embedded in a new method, whose task is to verify the assertions and call the original method. The embedded methods are renamed and made private. The wrapper method has the same name and signature as the original method it wraps. The wrapper method checks the preconditions and invariants and then executes the original method. After that, the wrapper checks the invariant and the existing post-conditions. Auxiliary methods are generated for the wrapper and will signal exceptions whenever the assertions are violated. JCML defines three reason codes to be passed as parameter to the \textit{throwIt()} method of the \textit{ISOException} class. \textit{throwIt()} is invoked whenever any violation of a precondition, postcondition or invariant occurs.

This can be clarified by the example of a Java class \textit{UserAccess} that manages user access to credits; a user can be either a professor or a student. One of its methods is \textit{void addCredits(short)}, for which the following JCML annotations were defined:

\begin{figure}[ht!]
\centering
\scriptsize
\begin{verbatim}
/*@
  requires value >= 0 &&
    (value + getCredits()) <= MAX_CREDITS &&
    (userType == STUDENT ==>
      (value + getCredits()) <= STUDENT_MAX_CREDITS);
  ensures printerCredits >= value;
@*/
\end{verbatim}
\end{figure}

This specification defines as preconditions that the value to be added cannot be negative or exceed the maximum number of credits, \textit{e.g.} 200 Euros, and that, if the user is a student, the value should not exceed the maximum credit defined for students, \textit{e.g.} 100 Euros. As a postcondition, the specification defines that the current credit value is greater than or equal to the value just added. The addCredits wrapper method (Figure \ref{2}), generated by the JCML compiler, wraps the original method call in a try-catch block that (i) checks the invariant and precondition; (ii) calls the original method and (iii) checks the invariant and postcondition.
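To make the wrapper approach concrete, the following hand-written sketch approximates what such generated code might look like. It is an illustration only, not the actual output of the JCML compiler: the field declarations, the reason codes and the name of the renamed original method are assumptions, and the invariant checks and try-catch bookkeeping are omitted.

\begin{figure}[ht!]
\centering
\scriptsize
\begin{verbatim}
import javacard.framework.ISOException;

// Illustrative sketch of the wrapper approach (not actual generated code).
public class UserAccessJCML {

    // Constants and fields assumed for this example only.
    static final short MAX_CREDITS = 200;
    static final short STUDENT_MAX_CREDITS = 100;
    static final byte STUDENT = 1;
    static final short SW_PRE_VIOLATED = (short) 0x6301;  // assumed reason code
    static final short SW_POST_VIOLATED = (short) 0x6302; // assumed reason code

    private byte userType;
    private short printerCredits;

    short getCredits() { return printerCredits; }

    // Wrapper: same name and signature as the original method.
    public void addCredits(short value) {
        checkPre$addCredits$(value);
        orig$addCredits$(value);      // renamed original method (assumed name)
        checkPost$addCredits$(value);
    }

    // Renamed original method, made private.
    private void orig$addCredits$(short value) {
        printerCredits = (short) (printerCredits + value);
    }

    // Signals an exception when the precondition is violated.
    private void checkPre$addCredits$(short value) {
        boolean ok = value >= 0
            && (short) (value + getCredits()) <= MAX_CREDITS
            && (userType != STUDENT
                || (short) (value + getCredits()) <= STUDENT_MAX_CREDITS);
        if (!ok) {
            ISOException.throwIt(SW_PRE_VIOLATED);
        }
    }

    // Signals an exception when the postcondition is violated.
    private void checkPost$addCredits$(short value) {
        if (!(printerCredits >= value)) {
            ISOException.throwIt(SW_POST_VIOLATED);
        }
    }
}
\end{verbatim}
\end{figure}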
Figure \ref{2} also shows the generated code for \textit{checkPre\$addCredits\$} and \textit{checkPost\$addCredits\$}.

\begin{figure}[ht!]
\centering
\caption{The generated wrapper and checking methods for \textit{addCredits}.}
\label{2}
\end{figure}

As the initial goal of the JCML language and compiler was the generation of runtime verification code for Java Card programs, the treatment of error messages and condition violations remains one of its limitations.

\section{Limitations of Handling Contract Violations in Java Card}
\label{sec:limitation}

As described in the previous section, every time a contract is violated, an \textit{ISOException} is signaled by the applet with a specific reason code. In order to show the flow of control created by the propagation of an \textit{ISOException}, Figure \ref{3} depicts part of the architecture of a Java Card application. A black arrow from square box \textbf{a} to square box \textbf{b} indicates that method \textbf{a} invoked method \textbf{b}. As would be expected, each arrow also indicates that control flow is passed from one method to the other.

According to Figure \ref{3}, the host application (\textit{App.addCredits} method) calls the \textit{CardChannel.transmit()} method via a \textit{Proxy} class with a \textit{CommandAPDU} as parameter. Notice that smart cards communicate using a packet mechanism called Application Protocol Data Units (APDUs). Hence, the host application sends a \textit{CommandAPDU} to the card application, which performs the processing requested and returns a response APDU. If an exception is thrown on the card side, the status word field of the \textit{ResponseAPDU} contains the error code, which can be verified by the host application.

At this point, we identify some limitations in handling contract violations in Java Card applications. First, there is no guarantee that the exception signaled by the card application is going to be handled by the host application. This may occur because immediately upon return from invoking the \textit{transmit} method (inside \textit{Proxy.setField} at line 3), the developer must check the status word of the \textit{ResponseAPDU} to identify the occurrence of an exception. If this check is not performed, the exception is ignored by the \textit{setField} method. Therefore, the Java Card specification demands that the error handling code must be extensively intermingled with the code for normal processing, making the logic for both cases harder to maintain.

Second, it is not possible to know which contract was violated, since the \textit{ResponseAPDU} does not provide the stack trace of the exception signaled by the card application. Stack traces are a useful programming construct to support developers in debugging tasks. Consequently, the lack of stack traces makes the task of identifying the cause of a defect/violation even more difficult in Java Card applications.

A more serious difficulty occurs if any other exception (apart from \textit{ISOException}) flows out of the \textit{Applet.process()} method. In this case, the JCRE catches this outgoing exception and returns a ResponseAPDU with \textit{ISO7816.SW\_UNKNOWN} as the status word field. In other words, the developer knows that something went wrong, but not specifically what.

In our view, all the aforementioned problems stem from the fact that the Java Card exception handling mechanism does not appropriately take global \cite{RobillardM00} \cite{Robillard:2003} exceptions into account. In other words, the Java Card exception handling mechanism provides some constructs for raising, propagating and handling exceptions.
However, not much explicit support is provided to the tasks of defining and understanding the (un)desirable paths of global exceptions from the raising site to the final handler. \begin{figure} \end{figure} The main consequence of this limitation of Java Card exception handling is that global exceptions introduce implicit control flows. An implicit control flow occurs whenever a developer can not directly observe what code will be executed during run-time. As we described above, implicit control flow created by the Java Card exception handling mechanism makes it difficult to explicitly analyze: if an exception is going to be handled and where, and which components are affected by an exception control flow. \section{Explicit Exception Channels in Java Card} \label{sec:ChannelInJC} In order to address the problems described above, we propose a new implementation for the EFlow model. EFlow \textit{Cacho:2008,Cacho:2008b} is a platform-independent model for exception handling whose major goal is to make exception flow explicit, safe, and understandable by means of explicit exception channels and pluggable handlers. Section \ref{subsec:eflow} describes the concepts that underpin EFlow model. In the following, Section \ref{subsec:limitEflow} illustrates how the abstractions defined by EFlow model are concretized in terms of JCML annotations. Finally, Section \ref{subsec:detailEflow} describes some implementation details of the proposed solution. \subsection{EFlow Model} \label{subsec:eflow} EFlow model is grounded on mechanisms to explicitly represent global exceptional behavior. This specification enables software developers to establish constraints governing the implementation of non-local exceptions flows. An exception handling specification is composed of two abstractions: \textit{explicit exception channel} and \textit{pluggable handlers}. An \textit{explicit exception channel} (\textit{channel}, for short) is an abstract duct through which exceptions flow from a raising site to a handling site. More precisely, an explicit exception channel \textit{EEC} is a 5-tuple consisting of: (1) a set of exception types \textit{E}, (2) a set of raising sites \textit{RS}; (3) a set of handling sites \textit{HS}; (4) a set of intermediate sites IS; and (5) a function \textit{EI} that specifies the channel's exception interface. \textit{Exception types}, as the name indicates, are types that, at runtime, are instantiated to exceptions that flow through the channel. The \textit{raising sites} are loci of computation where exceptions from E can be raised. The actual erroneous condition that must be detected to raise an exception depends on the semantics of the application and on the assumed failure model. For reasoning about exception flow, the fault that caused an exception to be raised is not important, just the fact that the exception was raised. The \textit{handling sites} of an explicit exception channel are loci of computation where exceptions from E are handled, potentially being re-raised or resulting in the raising of new exceptions. In languages such as Java, both raising and handling sites are methods, the program elements that throw and handle exceptions. If an explicit exception channel has no associated handlers for one or more of the exceptions that flow through it, it is necessary to define its exception interface. The latter is a statically verifiable list of exceptions that a channel signals to its enclosing context, similarly to Java's \textit{throws} clause. 
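For ease of reference, the elements above can be summarized compactly. The display below is only a restatement of the prose definition (the tuple notation itself is ours, not taken from the original EFlow papers):
\[
EEC = (E,\ RS,\ HS,\ IS,\ EI),
\]
where $E$ is the set of exception types, $RS$ the raising sites, $HS$ the handling sites, $IS$ the intermediate sites, and $EI$ the exception interface.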
In our model, the exception interface is defined as a function ($Ex_1 \mapsto Ex_2$) that translates exceptions flowing ($Ex_1$) through the channel to exceptions signaled ($Ex_2$) to the enclosing EHC. Raising and handling sites are the two ends of an explicit exception channel. Handling sites can be potentially any node in the method call graph that results from concatenating all maximal chains of method invocations starting in elements from HS and ending in elements from RS. All the nodes in such graph that are neither handling nor raising sites are considered \textit{intermediate sites}. Intermediate sites comprise the loci of computation through which an exception passes from the raising site on its way to the handling site. Intermediate sites in Java are methods that indicate in their interfaces the exceptions that they throw, \textit{i.e.} exceptions are just propagated through them, without side effects to program behavior. Note that the notions of handling, raising, and intermediate site are purely conceptual and depend on the specification of the explicit exception channel. They are also inherently recursive. For example, an intermediate site of an explicit exception channel can be considered the raising site of another channel. A \textit{pluggable handler} is an exception handler that can be associated to arbitrary EHCs, thus separating error handling code from normal code. A single pluggable handler can be associated, for example, to a method call in a class \textit{C1}, two different method declarations in another class, \textit{C2}, and all methods in a third class \textit{C3}. In this sense, they are an improvement over traditional notions of exception handler. Another difference is that a pluggable handler exists independently of the EHCs to which it is associated. Therefore, these handlers can be reused both within an application and across different applications. We can define several different explicit exception channels in terms of the elements of Figure \ref{2}. For example, let \textit{EEC1} be the explicit exception channel defined by the tuple (\textit{\{ISOException\},\{UserAccessJCML. checkPre\$addCredits\$\},\{App.addCredits\},\{Proxy.addCredits\},\{\}}). Only exception \textit{ISOException} flows through this channel. Explicit exception channel \textit{EEC1} has method \textit{checkPre\$addCredits\$} as its sole raising site and Proxy.addCredits as its only intermediate site. Method App.addCredits is handling site, since one pluggable handler can be bound to it. \textit{EEC1} does not define an exception interface, as it includes handlers for all of its exceptions. \subsection{Proposed Implementation of EFlow Model} \label{subsec:limitEflow} This section presents EJCFlow, a JCML extension that implements the EFlow model to support Java Card applications. EJCFlow provides means for developers to define explicit exception channels and pluggable handlers in terms of the abstractions supported by JCML. In order to support the definition of Explicit Exception Channels and Pluggable Handlers, EJCFlow provides a notation to define \textit{contexts}, which in practice are methods that can represent raising, intermediate or handling sites. Contexts are defined as the following: \begin{figure}[ht!] 
\centering \scriptsize \begin{verbatim} /*@ econtext(RS, withincode( UserAccessJCML.checkPre$addCredits()) @*/ /*@econtext(HS, withincode(App.addCredits) @*/ \end{verbatim} \end{figure} Here we defined a context called RS bounded to the method \textit{checkPre\$addCredits\$()} and a context called \textit{HS} bounded to the method \textit{addCredits()}. After defining context, EJCFlow provides a way to define Explicit Exception Handling by a notation that takes 3 parameters: (1) the channels's name; (2) the exception type; and (3) the raising context. Explicit channels are defined as the following: \begin{figure}[ht!] \centering \scriptsize \begin{verbatim} /*@ echannel(ECC, ISOException, RS) @*/ \end{verbatim} \end{figure} The above example defines one explicit exception channel called \textit{ECC}, which represents the exception \textit{ISOException} flowing out of the context \textit{RS}, previously defined. In order to specify the handling site of an explicit exception channel, EJCFlow provides a notation that consists of: (1) the associated channel's name; (2) the handling exception type (3) the handling context; and (4) the body of the actual handler implementation. The following code snippet presents a simple example: \begin{figure}[ht!] \centering \scriptsize \begin{verbatim} /*@ ehandler(ECC, ISOException, HS) @*/ public static void aHandler(ISOException e) { //do something } \end{verbatim} \end{figure} The above code snippet defines a method \textit{aHandler()} which handles exceptions of type \textit{ISOException} flowing through explicit exception channel \textit{EEC} specifically when the exception reaches the context HS. When a program cannot handle all the exceptions that flow through an explicit exception channel, it is necessary to declare these exceptions in the channel's exception interface. The einterface annotation serves this purpose. The following code snippet illustrates the definition of exception interfaces: \begin{figure}[ht!] \centering \scriptsize \begin{verbatim} /*@ econtext (rS1, withincode (UserAccessJCML.addCredits( short )) ) @*/ /*@ econtext (rI1, withincode( Applet.addCredits)) @*/ /*@ echannel (ECC1, ISOException , rS1) @*/ /*@ einterface (ECC1 , ISOException, rI1) @*/ \end{verbatim} \end{figure} This above explicitly indicates the exception to be declared in the exception interface of the channel. \subsection{Implementation Details} \label{subsec:detailEflow} EJCFlow consists of two parts: (1) a parser to support EFlow model on Java Card environment; and (2) a bytecode analyzer to make the needed transformations on the applications bytecode. The first is built utilizing the tool JavaCC (Java Compiler Compiler) version 5.0 and the latter is built on top of Soot \cite{Vallee-RaiGHLPS00}. 
The entire process (Figure \ref{fig:sootEchannel}), from a pure Java Card program to a program transformed by the JCML compiler and EJCFlow, can be divided into three stages: (1) annotate the Java Card application with JCML and run JCMLc, producing new .class files with runtime condition verifications; (2) specify the contexts, channels and handlers, which will be parsed and verified by our JavaCC-based tool, in order to ensure the validity and correlation of application code with EFlow definitions, generating a representation of the EFlow model data (channels, contexts and handlers); and (3) with the EFlow data, the host side application binaries and the card side JCMLc-generated binaries, our Soot-based tool makes the needed bytecode transformations, generating new card side and host side application binaries, which are able to communicate and properly handle JCML-specified behavior by means of the EFlow definitions.

\begin{figure}[ht!]
\centering
\includegraphics[width=0.45 \textwidth]{figs/SootEchannelProcess}
\caption{High-level overview of the components of the implementation}
\label{fig:sootEchannel}
\end{figure}

\section{Related Work}
\label{sec:relatedWorks}

There are a few approaches that try to handle contract violations by means of improving the exception handling behavior. The Krakatoa tool \cite{Krakatoa} for certification of Java and Java Card programs translates the Java program, annotated with JML, into the WHY \cite{FilliatreM07} input language. The WHY input language is an ML-like minimal language, with very limited imperative features, such as references and exceptions. On the other hand, \cite{Rebelo} defines a set of types of faults that can happen in the interplay between the Java exception handling mechanism and runtime assertion checkers (RACs) for JML. It presents an error recovery technique for RACs that tackles such limitations. The proposed approach makes use of aspect-orientation to monitor the exceptions that represent contract violations and force such exceptions to be signaled, preventing exception handling contexts from negatively affecting RAC-generated exceptions.

The EJCFlow implementation has three advantages with respect to the aforementioned related work. First, EJCFlow abstractions allow the explicit representation of properties governing global exception behaviors. As a consequence, the separation between normal and exceptional behavior is preserved, resulting in untangled software artefacts, reusable handlers, and improved maintainability. Second, EJCFlow abstractions make it possible to understand the potential paths of global exceptions from an end-to-end perspective by looking at a single part of the program. Third, EJCFlow improves reliability in evolving software by enforcing that harmful exceptional behaviors are not added between the two ends of a given explicit exception channel. Also, the EFlow model makes it possible to verify whether or not all exceptions flowing through an explicit exception channel are correctly handled at pluggable handlers.

\section{Conclusion}
\label{sec:conclusion}

This paper has presented an implementation of the EFlow model whose main purpose is to improve the exception handling mechanisms of Java Card applications. We leverage the JCML specification-level annotations to promote improved separation between normal and error handling code, while keeping track of exception control flows. Our ongoing work encompasses the empirical evaluation of the error proneness of using EJCFlow when compared to the conventional proposals discussed in this paper.
% conference papers do not normally have an appendix % trigger a \newpage just before the given reference % number - used to balance the columns on the last page % adjust value as needed - may need to be readjusted if % the document is modified later %\IEEEtriggeratref{8} % The "triggered" command can be changed if desired: %\IEEEtriggercmd{\enlargethispage{-5in}} % Better way for balancing the last page: \balance % references section % can use a bibliography generated by BibTeX as a .bbl file % BibTeX documentation can be easily obtained at: % http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/ % The IEEEtran BibTeX style support page is at: % http://www.michaelshell.org/tex/ieeetran/bibtex/ %\bibliographystyle{IEEEtran} % argument is your BibTeX string definitions and bibliography database(s) %\bibliography{IEEEabrv,../bib/paper} % % <OR> manually copy in the resultant .bbl file % set second argument of \begin to the number of references % (used to reserve space for the reference number labels box) \bibliographystyle{plain} \bibliography{weh'12} % that's all folks \end{document}
{ "alphanum_fraction": 0.7950567387, "avg_line_length": 44.9231843575, "ext": "tex", "hexsha": "994ebc303f8e0083bb8d097aaa415918c917ba3d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mmusicante/PlacidoConfArtigos", "max_forks_repo_path": "WEH2012/src/weh'12.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mmusicante/PlacidoConfArtigos", "max_issues_repo_path": "WEH2012/src/weh'12.tex", "max_line_length": 1038, "max_stars_count": null, "max_stars_repo_head_hexsha": "06e4ff673766438a1c6c61731036454e2087e240", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mmusicante/PlacidoConfArtigos", "max_stars_repo_path": "WEH2012/src/weh'12.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7429, "size": 32165 }
\chapter{Variables for history output} \label{app:vari_hist} %\begin{table}[htb] % \label{tb:vari_hist} % \caption{Variables assigned to list of history output} %\end{table} \subsubsection{mod\_atmos\_phy\_ae\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|CCN| & cloud condensation nucrei & \\ \hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_ch\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|Ozone| & Ozone & $PPM$ \\\hline \verb|CBMFX| & cloud base mass flux & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_cp\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|CBMFX| & cloud base mass flux & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_mp\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|RAIN| & surface rain rate & $kg/m2/s$ \\\hline \verb|SNOW| & surface snow rate & $kg/m2/s$ \\\hline \verb|PREC| & surface precipitation rate & $kg/m2/s$ \\\hline \verb|DENS_t_MP| & tendency DENS in MP & $kg/m3/s$ \\\hline \verb|MOMZ_t_MP| & tendency MOMZ in MP & $kg/m2/s2$ \\\hline \verb|MOMX_t_MP| & tendency MOMX in MP & $kg/m2/s2$ \\\hline \verb|MOMY_t_MP| & tendency MOMY in MP & $kg/m2/s2$ \\\hline \verb|RHOT_t_MP| & tendency RHOT in MP & $K*kg/m3/s$ \\\hline \verb|QV_t_MP| & tendency QV in MP & $kg/kg$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_rd\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|SOLINS| & solar insolation & $W/m2$ \\\hline \verb|COSZ| & cos(solar zenith angle) & $0-1$ \\\hline \verb|SFLX_LW_up| & SFC upward longwave radiation flux & $W/m2$ \\\hline \verb|SFLX_LW_dn| & SFC downward longwave radiation flux & $W/m2$ \\\hline \verb|SFLX_SW_up| & SFC upward shortwave radiation flux & $W/m2$ \\\hline \verb|SFLX_SW_dn| & SFC downward shortwave radiation flux & $W/m2$ \\\hline \verb|TOAFLX_LW_up| & TOA upward longwave radiation flux & $W/m2$ \\\hline \verb|TOAFLX_LW_dn| & TOA downward longwave radiation flux & $W/m2$ \\\hline \verb|TOAFLX_SW_up| & TOA upward shortwave radiation flux & $W/m2$ \\\hline \verb|TOAFLX_SW_dn| & TOA downward shortwave radiation flux & $W/m2$ \\\hline \verb|SLR| & SFC net longwave radiation flux & $W/m2$ \\\hline \verb|SSR| & SFC net shortwave radiation flux & $W/m2$ \\\hline \verb|OLR| & TOA net longwave radiation flux & $W/m2$ \\\hline \verb|OSR| & TOA net shortwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_LWUP| & upward longwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_LWDN| & downward longwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_LW| & net longwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_SWUP| & upward shortwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_SWDN| & downward shortwave radiation flux & $W/m2$ \\\hline \verb|RADFLUX_SW| & net shortwave radiation flux & $W/m2$ \\\hline \verb|TEMP_t_rd_LW| & tendency of temp in rd(LW) & $K/day$ \\\hline \verb|TEMP_t_rd_SW| & tendency of temp in rd(SW) & $K/day$ \\\hline \verb|TEMP_t_rd| & tendency of temp in rd & $K/day$ \\\hline \verb|RHOT_t_RD| & tendency of RHOT in rd & $K.kg/m3/s$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_sf\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|SFC_DENS| & surface 
atmospheric density & $kg/m3$ \\\hline \verb|SFC_PRES| & surface atmospheric pressure & $Pa$ \\\hline \verb|SFC_TEMP| & surface skin temperature (merged) & $K$ \\\hline \verb|SFC_ALB_LW| & surface albedo (longwave,merged) & $0-1$ \\\hline \verb|SFC_ALB_SW| & surface albedo (shortwave,merged) & $0-1$ \\\hline \verb|SFC_Z0M| & roughness length (momentum) & $m$ \\\hline \verb|SFC_Z0H| & roughness length (heat) & $m$ \\\hline \verb|SFC_Z0E| & roughness length (vapor) & $m$ \\\hline \verb|MWFLX| & w-momentum flux (merged) & $kg/m2/s$ \\\hline \verb|MUFLX| & u-momentum flux (merged) & $kg/m2/s$ \\\hline \verb|MVFLX| & v-momentum flux (merged) & $kg/m2/s$ \\\hline \verb|SHFLX| & sensible heat flux (merged) & $W/m2$ \\\hline \verb|LHFLX| & latent heat flux (merged) & $W/m2$ \\\hline \verb|GHFLX| & ground heat flux (merged) & $W/m2$ \\\hline \verb|Uabs10| & 10m absolute wind & $m/s$ \\\hline \verb|U10| & 10m x-wind & $m/s$ \\\hline \verb|V10| & 10m y-wind & $m/s$ \\\hline \verb|T2 | & 2m temperature & $K$ \\\hline \verb|Q2 | & 2m water vapor & $kg/kg$ \\\hline \verb|MSLP| & mean sea-level pressure & $Pa$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_phy\_tb\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|TKE| & turburent kinetic energy & $m2/s2$ \\\hline \verb|NU| & eddy viscosity & $m2/s$ \\\hline \verb|Ri| & Richardson number & $NIL$ \\\hline \verb|Pr| & Prantle number & $NIL$ \\\hline \verb|MOMZ_t_TB| & MOMZ tendency (TB) & $kg/m2/s2$ \\\hline \verb|MOMX_t_TB| & MOMX tendency (TB) & $kg/m2/s2$ \\\hline \verb|MOMY_t_TB| & MOMY tendency (TB) & $kg/m2/s2$ \\\hline \verb|RHOT_t_TB| & RHOT tendency (TB) & $K.kg/m3/s$ \\\hline \verb|QV_t_TB| & QV tendency (TB) & $kg/kg$ \\\hline \verb|QC_t_TB| & QC tendency (TB) & $kg/kg$ \\\hline \verb|QR_t_TB| & QR tendency (TB) & $kg/kg$ \\\hline \verb|QI_t_TB| & QI tendency (TB) & $kg/kg$ \\\hline \verb|QS_t_TB| & QS tendency (TB) & $kg/kg$ \\\hline \verb|QG_t_TB| & QG tendency (TB) & $kg/kg$ \\\hline \verb|SGS_ZFLX_MOMZ| & SGS Z FLUX of MOMZ & $kg/m/s2$ \\\hline \verb|SGS_XFLX_MOMZ| & SGS X FLUX of MOMZ & $kg/m/s2$ \\\hline \verb|SGS_YFLX_MOMZ| & SGS Y FLUX of MOMZ & $kg/m/s2$ \\\hline \verb|SGS_ZFLX_MOMX| & SGS Z FLUX of MOMX & $kg/m/s2$ \\\hline \verb|SGS_XFLX_MOMX| & SGS X FLUX of MOMX & $kg/m/s2$ \\\hline \verb|SGS_YFLX_MOMX| & SGS Y FLUX of MOMX & $kg/m/s2$ \\\hline \verb|SGS_ZFLX_MOMY| & SGS Z FLUX of MOMY & $kg/m/s2$ \\\hline \verb|SGS_XFLX_MOMY| & SGS X FLUX of MOMY & $kg/m/s2$ \\\hline \verb|SGS_YFLX_MOMY| & SGS Y FLUX of MOMY & $kg/m/s2$ \\\hline \verb|SGS_ZFLX_RHOT| & SGS Z FLUX of RHOT & $K*kg/m2/s$ \\\hline \verb|SGS_XFLX_RHOT| & SGS X FLUX of RHOT & $K*kg/m2/s$ \\\hline \verb|SGS_YFLX_RHOT| & SGS Y FLUX of RHOT & $K*kg/m2/s$ \\\hline \verb|SGS_ZFLX_QV| & SGS Z FLUX of QV & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QV| & SGS X FLUX of QV & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QV| & SGS Y FLUX of QV & $kg/m2/s$ \\\hline \verb|SGS_ZFLX_QC| & SGS Z FLUX of QC & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QC| & SGS X FLUX of QC & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QC| & SGS Y FLUX of QC & $kg/m2/s$ \\\hline \verb|SGS_ZFLX_QR| & SGS Z FLUX of QR & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QR| & SGS X FLUX of QR & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QR| & SGS Y FLUX of QR & $kg/m2/s$ \\\hline \end{tabularx}  \\ \indent *次ページへつづく \subsubsection{mod\_atmos\_phy\_tb\_driver:つづき} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|SGS_ZFLX_QI| & SGS Z FLUX 
of QI & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QI| & SGS X FLUX of QI & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QI| & SGS Y FLUX of QI & $kg/m2/s$ \\\hline \verb|SGS_ZFLX_QS| & SGS Z FLUX of QS & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QS| & SGS X FLUX of QS & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QS| & SGS Y FLUX of QS & $kg/m2/s$ \\\hline \verb|SGS_ZFLX_QG| & SGS Z FLUX of QG & $kg/m2/s$ \\\hline \verb|SGS_XFLX_QG| & SGS X FLUX of QG & $kg/m2/s$ \\\hline \verb|SGS_YFLX_QG| & SGS Y FLUX of QG & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{mod\_atmos\_vars} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|DENS| & density & $kg/m3$ \\\hline \verb|MOMZ| & momentum z & $kg/m2/s$ \\\hline \verb|MOMX| & momentum x & $kg/m2/s$ \\\hline \verb|MOMY| & momentum y & $kg/m2/s$ \\\hline \verb|RHOT| & rho * theta & $kg/m3*K$ \\\hline \verb|QV| & Water Vapor mixing ratio & $kg/kg$ \\\hline \verb|QC| & Cloud Water mixing ratio & $kg/kg$ \\\hline \verb|QR| & Rain Water mixing ratio & $kg/kg$ \\\hline \verb|QI| & Cloud Ice mixing ratio & $kg/kg$ \\\hline \verb|QS| & Snow mixing ratio & $kg/kg$ \\\hline \verb|QG| & Graupel mixing ratio & $kg/kg$ \\\hline \verb|W| & velocity w & $m/s$ \\\hline \verb|U| & velocity u & $m/s$ \\\hline \verb|V| & velocity v & $m/s$ \\\hline \verb|PT| & potential temp. & $K$ \\\hline \verb|QDRY| & dry air & $kg/kg$ \\\hline \verb|QTOT| & total water & $kg/kg$ \\\hline \verb|QHYD| & total hydrometeors & $kg/kg$ \\\hline \verb|QLIQ| & total liquid water & $kg/kg$ \\\hline \verb|QICE| & total ice water & $kg/kg$ \\\hline \verb|LWP| & liquid water path & $g/m2$ \\\hline \verb|IWP| & ice water path & $g/m2$ \\\hline \verb|RTOT| & Total gas constant & $J/kg/K$ \\\hline \verb|CPTOT| & Total heat capacity & $J/kg/K$ \\\hline \verb|PRES| & pressure & $Pa$ \\\hline \verb|T| & temperature & $K$ \\\hline \verb|LWPT| & liq. potential temp. & $K$ \\\hline \verb|RHA| & relative humidity(liq+ice) & $\%$ \\\hline \verb|RH| & relative humidity(liq) & $\%$ \\\hline \verb|RHI| & relative humidity(ice) & $\%$ \\\hline \verb|VOR| & vertical vorticity & $1/s$ \\\hline \verb|DIV| & divergence & $1/s$ \\\hline \verb|HDIV| & horizontal divergence & $1/s$ \\\hline \verb|Uabs| & absolute velocity & $m/s$ \\\hline \end{tabularx}  \\ \indent *次ページへつづく \subsubsection{mod\_atmos\_vars:つづき} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\ \hline \verb|DENS_PRIM| & horiz. deviation of density & $kg/m3$ \\\hline \verb|W_PRIM| & horiz. deviation of w & $m/s$ \\\hline \verb|U_PRIM| & horiz. deviation of u & $m/s$ \\\hline \verb|V_PRIM| & horiz. deviation of v & $m/s$ \\\hline \verb|PT_PRIM| & horiz. deviation of pot. temp. 
& $K$ \\\hline \verb|W_PRIM2| & variance of w & $m2/s2$ \\\hline \verb|PT_W_PRIM| & resolved scale heat flux & $W/s$ \\\hline \verb|W_PRIM3| & skewness of w & $m3/s3$ \\\hline \verb|TKE_RS| & resolved scale TKE & $m2/s2$ \\\hline \verb|ENGT| & total energy & $J/m3$ \\\hline \verb|ENGP| & potential energy & $J/m3$ \\\hline \verb|ENGK| & kinetic energy & $J/m3$ \\\hline \verb|ENGI| & internal energy & $J/m3$ \\\hline \end{tabularx} \subsubsection{mod\_land\_phy\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|LAND_TEMP_t| & tendency of LAND\_TEMP & $K$ \\\hline \verb|LAND_WATER_t| & tendency of LAND\_WATER & $m3/m3$ \\\hline \verb|LAND_SFC_TEMP_t| & tendency of LAND\_SFC\_TEMP & $K$ \\\hline \verb|LAND_ALB_LW_t| & tendency of LAND\_ALB\_LW & $0-1$ \\\hline \verb|LAND_ALB_SW_t| & tendency of LAND\_ALB\_SW & $0-1$ \\\hline \verb|LP_WaterLimit| & LAND PROPERTY maximum soil moisture & $m3/m3$ \\\hline \verb|LP_WaterCritical| & LAND PROPERTY critical soil moisture & $m3/m3$ \\\hline \verb|LP_ThermalCond| & LAND PROPERTY thermal conductivity for soil & $W/K/m$ \\\hline \verb|LP_HeatCapacity| & LAND PROPERTY heat capacity for soil & $J/K/m3$ \\\hline \verb|LP_WaterDiff| & LAND PROPERTY moisture diffusivity in the soil & $m2/s$ \\\hline \verb|LP_Z0M| & LAND PROPERTY roughness length for momemtum & $m$ \\\hline \verb|LP_Z0H| & LAND PROPERTY roughness length for heat & $m$ \\\hline \verb|LP_Z0E| & LAND PROPERTY roughness length for vapor & $m$ \\\hline \end{tabularx} \subsubsection{mod\_land\_vars} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|LAND_TEMP| & temperature at each soil layer & $K$ \\\hline \verb|LAND_WATER| & moisture at each soil layer & $m3/m3$ \\\hline \verb|LAND_SFC_TEMP| & land surface skin temperature & $K$ \\\hline \verb|LAND_ALB_LW| & land surface albedo (longwave) & $0-1$ \\\hline \verb|LAND_ALB_SW| & land surface albedo (shortwave) & $0-1$ \\\hline \verb|LAND_SFLX_MW| & land surface w-momentum flux & $kg/m2/s$ \\\hline \verb|LAND_SFLX_MU| & land surface u-momentum flux & $kg/m2/s$ \\\hline \verb|LAND_SFLX_MV| & land surface v-momentum flux & $kg/m2/s$ \\\hline \verb|LAND_SFLX_SH| & land surface sensible heat flux & $J/m2/s$ \\\hline \verb|LAND_SFLX_LH| & land surface latent heat flux & $J/m2/s$ \\\hline \verb|LAND_SFLX_GH| & land surface ground heat flux & $J/m2/s$ \\\hline \verb|LAND_SFLX_evap| & land surface water vapor flux & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{mod\_ocean\_phy\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|OCEAN_TEMP_t| & tendency of OCEAN\_TEMP & $K$ \\\hline \verb|OCEAN_SFC_TEMP_t| & tendency of OCEAN\_SFC\_TEMP & $K$ \\\hline \verb|OCEAN_ALB_LW_t| & tendency of OCEAN\_ALB\_LW & $0-1$ \\\hline \verb|OCEAN_ALB_SW_t| & tendency of OCEAN\_ALB\_SW & $0-1$ \\\hline \verb|OCEAN_SFC_Z0M_t| & tendency of OCEAN\_SFC\_Z0M & $m$ \\\hline \verb|OCEAN_SFC_Z0H_t| & tendency of OCEAN\_SFC\_Z0H & $m$ \\\hline \verb|OCEAN_SFC_Z0E_t| & tendency of OCEAN\_SFC\_Z0E & $m$ \\\hline \end{tabularx} \subsubsection{mod\_ocean\_vars} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|OCEAN_TEMP| & temperature at uppermost ocean layer & $K$ \\\hline \verb|OCEAN_SFC_TEMP| & ocean surface skin temperature & $K$ \\\hline \verb|OCEAN_ALB_LW| & ocean surface albedo (longwave) & $0-1$ \\\hline \verb|OCEAN_ALB_SW| & 
ocean surface albedo (shortwave) & $0-1$ \\\hline \verb|OCEAN_SFC_Z0M| & ocean surface roughness length (momentum) & $m$ \\\hline \verb|OCEAN_SFC_Z0H| & ocean surface roughness length (heat) & $m$ \\\hline \verb|OCEAN_SFC_Z0E| & ocean surface roughness length (vapor) & $m$ \\\hline \verb|OCEAN_SFLX_MW| & ocean surface w-momentum flux & $kg/m2/s$ \\\hline \verb|OCEAN_SFLX_MU| & ocean surface u-momentum flux & $kg/m2/s$ \\\hline \verb|OCEAN_SFLX_MV| & ocean surface v-momentum flux & $kg/m2/s$ \\\hline \verb|OCEAN_SFLX_SH| & ocean surface sensible heat flux & $J/m2/s$ \\\hline \verb|OCEAN_SFLX_LH| & ocean surface latent heat flux & $J/m2/s$ \\\hline \verb|OCEAN_SFLX_WH| & ocean surface water heat flux & $J/m2/s$ \\\hline \verb|OCEAN_SFLX_evap| & ocean surface water vapor flux & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{mod\_urban\_phy\_driver} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|URBAN_TR_t| & tendency of URBAN\_TR & $K$ \\\hline \verb|URBAN_TB_t| & tendency of URBAN\_TB & $K$ \\\hline \verb|URBAN_TG_t| & tendency of URBAN\_TG & $K$ \\\hline \verb|URBAN_TC_t| & tendency of URBAN\_TC & $K$ \\\hline \verb|URBAN_QC_t| & tendency of URBAN\_QC & $kg/kg$ \\\hline \verb|URBAN_UC_t| & tendency of URBAN\_UC & $m/s$ \\\hline \verb|URBAN_TRL_t| & tendency of URBAN\_TRL & $K$ \\\hline \verb|URBAN_TBL_t| & tendency of URBAN\_TBL & $K$ \\\hline \verb|URBAN_TGL_t| & tendency of URBAN\_TGL & $K$ \\\hline \verb|URBAN_RAINR_t| & tendency of URBAN\_RAINR & $K$ \\\hline \verb|URBAN_RAINB_t| & tendency of URBAN\_RAINB & $K$ \\\hline \verb|URBAN_RAING_t| & tendency of URBAN\_RAING & $K$ \\\hline \verb|URBAN_ROFF_t| & tendency of URBAN\_ROFF & $K$ \\\hline \end{tabularx} \subsubsection{mod\_urban\_vars} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|URBAN_TR| & urban surface temperature of roof & $K$ \\\hline \verb|URBAN_TB| & urban surface temperature of wall & $K$ \\\hline \verb|URBAN_TG| & urban surface temperature of road & $K$ \\\hline \verb|URBAN_TC| & urban canopy air temperature & $K$ \\\hline \verb|URBAN_QC| & urban canopy humidity & $kg/kg$ \\\hline \verb|URBAN_UC| & urban canopy wind & $m/s$ \\\hline \verb|URBAN_TRL| & urban temperature in layer of roof & $K$ \\\hline \verb|URBAN_TBL| & urban temperature in layer of wall & $K$ \\\hline \verb|URBAN_TGL| & urban temperature in layer of road & $K$ \\\hline \verb|URBAN_RAINR| & urban rain strage on roof & $kg/m2$ \\\hline \verb|URBAN_RAINB| & urban rain strage on wall & $kg/m2$ \\\hline \verb|URBAN_RAING| & urban rain strage on road & $kg/m2$ \\\hline \verb|URBAN_ROFF| & urban runoff & $kg/m2$ \\\hline \verb|URBAN_SFC_TEMP| & urban grid average of temperature & $K$ \\\hline \verb|URBAN_ALB_LW| & urban grid average of albedo LW & $0-1$ \\\hline \verb|URBAN_ALB_SW| & urban grid average of albedo SW & $0-1$ \\\hline \verb|URBAN_SFLX_MW| & urban grid average of w-momentum flux & $kg/m2/s$ \\\hline \verb|URBAN_SFLX_MU| & urban grid average of u-momentum flux & $kg/m2/s$ \\\hline \verb|URBAN_SFLX_MV| & urban grid average of v-momentum flux & $kg/m2/s$ \\\hline \verb|URBAN_SFLX_SH| & urban grid average of sensible heat flux & $W/m2$ \\\hline \verb|URBAN_SFLX_LH| & urban grid average of latent heat flux & $W/m2$ \\\hline \verb|URBAN_SFLX_GH| & urban grid average of ground heat flux & $W/m2$ \\\hline \verb|URBAN_SFLX_evap| & urban grid average of water vapor flux & $kg/m2/s$ \\\hline \end{tabularx} 
\subsubsection{scale\_atmos\_dyn} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|DENS_t_phys| & tendency of density due to physics & $kg/m3/s$ \\\hline \verb|MOMZ_t_phys| & tendency of momentum z due to physics & $kg/m2/s2$ \\\hline \verb|MOMX_t_phys| & tendency of momentum x due to physics & $kg/m2/s2$ \\\hline \verb|MOMY_t_phys| & tendency of momentum y due to physics & $kg/m2/s2$ \\\hline \verb|RHOT_t_phys| & tendency of rho*theta temperature due to physics & $K kg/m3/s$ \\\hline \verb|QV_t_phys| & tendency of QV due to physics & $kg/kg$ \\\hline \verb|QC_t_phys| & tendency of QC due to physics & $kg/kg$ \\\hline \verb|QR_t_phys| & tendency of QR due to physics & $kg/kg$ \\\hline \verb|QI_t_phys| & tendency of QI due to physics & $kg/kg$ \\\hline \verb|QS_t_phys| & tendency of QS due to physics & $kg/kg$ \\\hline \verb|QG_t_phys| & tendency of QG due to physics & $kg/kg$ \\\hline \verb|DENS_t_damp| & tendency of density due to rayleigh damping & $kg/m3/s$ \\\hline \verb|MOMZ_t_damp| & tendency of momentum z due to rayleigh damping & $kg/m2/s2$ \\\hline \verb|MOMX_t_damp| & tendency of momentum x due to rayleigh damping & $kg/m2/s2$ \\\hline \verb|MOMY_t_damp| & tendency of momentum y due to rayleigh damping & $kg/m2/s2$ \\\hline \verb|RHOT_t_damp| & tendency of rho*theta temperature due to rayleigh damping & $K kg/m3/s$ \\\hline \verb|QV_t_damp| & tendency of QV due to rayleigh damping & $kg/kg$ \\\hline \verb|QC_t_damp| & tendency of QC due to rayleigh damping & $kg/kg$ \\\hline \verb|QR_t_damp| & tendency of QR due to rayleigh damping & $kg/kg$ \\\hline \verb|QI_t_damp| & tendency of QI due to rayleigh damping & $kg/kg$ \\\hline \verb|QS_t_damp| & tendency of QS due to rayleigh damping & $kg/kg$ \\\hline \verb|QG_t_damp| & tendency of QG due to rayleigh damping & $kg/kg$ \\\hline \end{tabularx} \subsubsection{scale\_atmos\_dyn (if CHECK\_MASS is defined in compilation process)} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|MFLXZ| & momentum flux of z-direction & $kg/m2/s$ \\\hline \verb|MFLXX| & momentum flux of x-direction & $kg/m2/s$ \\\hline \verb|MFLXY| & momentum flux of y-direction & $kg/m2/s$ \\\hline \verb|TFLXZ| & potential temperature flux of z-direction & $K*kg/m2/s$ \\\hline \verb|TFLXX| & potential temperature flux of x-direction & $K*kg/m2/s$ \\\hline \verb|TFLXY| & potential temperature flux of y-direction & $K*kg/m2/s$ \\\hline \verb|ALLMOM_lb_hz| & horizontally total momentum flux from lateral boundary & $kg/m2/s$ \\\hline \end{tabularx} \subsubsection{scale\_atmos\_dyn\_rk (if HIST\_TEND is defined in compilation process)} \begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline \verb|DENS_t_advcv| & tendency of density (vert. advection) & $kg/m3/s$ \\\hline \verb|MOMZ_t_advcv| & tendency of momentum z (vert. advection) & $kg/m2/s2$ \\\hline \verb|MOMX_t_advcv| & tendency of momentum x (vert. advection) & $kg/m2/s2$ \\\hline \verb|MOMY_t_advcv| & tendency of momentum y (vert. advection) & $kg/m2/s2$ \\\hline \verb|RHOT_t_advcv| & tendency of rho*theta (vert. advection) & $K kg/m3/s$ \\\hline \verb|DENS_t_advch| & tendency of density (horiz. advection) & $kg/m3/s$ \\\hline \verb|MOMZ_t_advch| & tendency of momentum z (horiz. advection) & $kg/m2/s2$ \\\hline \verb|MOMX_t_advch| & tendency of momentum x (horiz. advection) & $kg/m2/s2$ \\\hline \verb|MOMY_t_advch| & tendency of momentum y (horiz. 
advection) & $kg/m2/s2$ \\\hline
\verb|RHOT_t_advch| & tendency of rho*theta (horiz. advection) & $K kg/m3/s$ \\\hline
\verb|MOMZ_t_pg| & tendency of momentum z (pressure gradient) & $kg/m2/s2$ \\\hline
\verb|MOMX_t_pg| & tendency of momentum x (pressure gradient) & $kg/m2/s2$ \\\hline
\verb|MOMY_t_pg| & tendency of momentum y (pressure gradient) & $kg/m2/s2$ \\\hline
\verb|MOMZ_t_ddiv| & tendency of momentum z (divergence damping) & $kg/m2/s2$ \\\hline
\verb|MOMX_t_ddiv| & tendency of momentum x (divergence damping) & $kg/m2/s2$ \\\hline
\verb|MOMY_t_ddiv| & tendency of momentum y (divergence damping) & $kg/m2/s2$ \\\hline
\verb|MOMX_t_cf| & tendency of momentum x (Coriolis force) & $kg/m2/s2$ \\\hline
\verb|MOMY_t_cf| & tendency of momentum y (Coriolis force) & $kg/m2/s2$ \\\hline
\end{tabularx}

\subsubsection{scale\_atmos\_phy\_mp\_kessler}
\begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline
\verb|Vterm_QR| & terminal velocity of QR & $m/s$ \\\hline
\end{tabularx}

\subsubsection{scale\_atmos\_phy\_mp\_suzuki10}
\begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline
\verb|aerosol| & aerosol mass & $kg/m^3$ \\\hline
\end{tabularx}

\subsubsection{scale\_atmos\_phy\_mp\_tomita08}
\begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline
\verb|Vterm_QR| & terminal velocity of QR & $m/s$ \\\hline
\verb|Vterm_QI| & terminal velocity of QI & $m/s$ \\\hline
\verb|Vterm_QS| & terminal velocity of QS & $m/s$ \\\hline
\verb|Vterm_QG| & terminal velocity of QG & $m/s$ \\\hline
\end{tabularx}

\subsubsection{scale\_atmos\_sub\_boundary}
\begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline
\verb|DENS_BND| & Boundary Density & $kg/m3$ \\\hline
\verb|VELX_BND| & Boundary velocity x-direction & $m/s$ \\\hline
\verb|VELY_BND| & Boundary velocity y-direction & $m/s$ \\\hline
\verb|POTT_BND| & Boundary potential temperature & $K$ \\\hline
\verb|VELZ_BND| & Boundary velocity z-direction & $m/s$ \\\hline
\verb|QV_BND| & Boundary QV & $kg/kg$ \\\hline
\verb|QC_BND| & Boundary QC & $kg/kg$ \\\hline
\verb|QR_BND| & Boundary QR & $kg/kg$ \\\hline
\verb|QI_BND| & Boundary QI & $kg/kg$ \\\hline
\verb|QS_BND| & Boundary QS & $kg/kg$ \\\hline
\verb|QG_BND| & Boundary QG & $kg/kg$ \\\hline
\end{tabularx}

\subsubsection{scale\_atmos\_sub\_boundary}
\begin{tabularx}{150mm}{|l|X|c|} \hline \rowcolor[gray]{0.9} Item/Variable & Description & Unit \\\hline
\verb|BND_ref_DENS| & reference DENS & $kg/m3$ \\\hline
\verb|BND_ref_VELZ| & reference VELZ & $m/s$ \\\hline
\verb|BND_ref_VELX| & reference VELX & $m/s$ \\\hline
\verb|BND_ref_VELY| & reference VELY & $m/s$ \\\hline
\verb|BND_ref_POTT| & reference POTT & $K$ \\\hline
\end{tabularx}
{ "alphanum_fraction": 0.611758821, "avg_line_length": 60.2993039443, "ext": "tex", "hexsha": "4ebec4435a7b8b5c3fb6e7b50f203fcca9145b7d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-01-07T16:29:20.000Z", "max_forks_repo_forks_event_min_datetime": "2020-01-07T16:29:20.000Z", "max_forks_repo_head_hexsha": "4eaf4f74aa03d091d9778eff373b816f178a962f", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Shima-Lab/SCALE-SDM_mixed-phase_Shima2019", "max_forks_repo_path": "scale-les/test/tutorial/doc/A3_variables_for_history.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4eaf4f74aa03d091d9778eff373b816f178a962f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Shima-Lab/SCALE-SDM_mixed-phase_Shima2019", "max_issues_repo_path": "scale-les/test/tutorial/doc/A3_variables_for_history.tex", "max_line_length": 103, "max_stars_count": 2, "max_stars_repo_head_hexsha": "4eaf4f74aa03d091d9778eff373b816f178a962f", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Shima-Lab/SCALE-SDM_mixed-phase_Shima2019", "max_stars_repo_path": "scale-les/test/tutorial/doc/A3_variables_for_history.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-17T06:05:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:06:36.000Z", "num_tokens": 10420, "size": 25989 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % KUIP - Reference Manual -- LaTeX Source % % % % Front material % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Editor: Alfred Nathaniel CN/ASD % Last Mod.: $Date: 1996/01/23 13:44:22 $ $Author: goossens $ % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Title page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \def\Ptitle#1{\special{ps: /Printstring (#1) def} \epsfbox{/afs/cern.ch/project/cnas_doc/sources/cnasall/cnastit.eps}} \begin{titlepage} \vspace*{-23mm} \mbox{\epsfig{file=/usr/local/lib/tex/ps/cern15.eps,height=30mm}} \hfill \raise8mm\hbox{\Large\bf CERN Program Library Long Writeup I102} \hfill\mbox{} \begin{center} \mbox{}\\[10mm] \mbox{\Ptitle{KUIP}}\\[2cm] {\LARGE Kit for a User Interface Package}\\[2cm] {\LARGE Version 2.05}\\[3cm] {\Large Application Software Group}\\[1cm] {\Large Computers and Network Division}\\[2cm] \end{center} \vfill \begin{center}\Large CERN Geneva, Switzerland\end{center} \end{titlepage} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Copyright page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \thispagestyle{empty} \framebox[.97\textwidth][t]{\hfill\begin{minipage}{0.92\textwidth}% \vspace*{3mm}\begin{center}Copyright Notice\end{center} \parskip\baselineskip {\bf KUIP -- Kit for a User Interface Package} CERN Program Library entry {\bf I102} \copyright{} Copyright CERN, Geneva 1993 Copyright and any other appropriate legal protection of these computer programs and associated documentation reserved in all countries of the world. These programs or documentation may not be reproduced by any method without prior written consent of the Director-General of CERN or his delegate. Permission for the usage of any programs described herein is granted a priori to those scientific institutes associated with the CERN experimental program or with whom CERN has concluded a scientific collaboration agreement. Requests for information should be addressed to: \vspace*{-.5\baselineskip} \begin{center} \tt\begin{tabular}{l} CERN Program Library Office \\ CERN-CN Division \\ CH-1211 Geneva 23 \\ Switzerland \\ Tel. +41 22 767 4951 \\ Fax. +41 22 767 7155 \\ Bitnet: CERNLIB@CERNVM \\ DECnet: VXCERN::CERNLIB (node 22.190) \\ Internet: [email protected] \end{tabular} \end{center} \vspace*{2mm} \end{minipage}\hfill}%end of minipage in framebox \vspace{6mm} {\bf Trademark notice: All trademarks appearing in this guide are acknowledged as such.} \vfill \begin{tabular}{l@{\quad}l@{\quad}>{\tt}l} {\em Contact Person\/}: & Nicole Cremel /CN & (N.Cremel\atsign{}cern.ch)\\[1mm] {\em Technical Realization\/}: & Alfred Nathaniel /CN & (A.Nathaniel\atsign{}cern.ch)\\[1mm] {\em Edition -- October 1994} \end{tabular} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Introductory material % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagenumbering{roman} \setcounter{page}{1} \section*{Acknowledgments} Many people participated in the design and the implementation of \KUIP{}. The first version of \KUIP{} released in 1987 was designed and implemented by Ren\'e Brun and Pietro Zanarini. Many basic features stem from the package ZCEDEX\cite{bib-ZCEDEX}, implemented at CERN in 1982 by R.\,Brun, C.\,Kersters, D.\,Moffat and A.\,Petrilli, which offered already command parsing, macros, vectors, and functions. 
The development of \KUIP{} was started in the context of \PAW{}, the \textem{Physics Analysis Workstation} project, and therefore everybody in the \PAW{} team must be acknowledged. Olivier Couet implemented the graphics menus and the graphics interface of \KUIP{} to \HIGZ{}. Achille Petrilli, Fons Rademakers, and Federico Carminati wrote the original routines for break interception on various platforms. Carlo Vandoni integrated the functionality of \SIGMA{}\cite{bib-SIGMA} (\textem{System for Interactive Mathematical Applications}). Colin Caughie (Edinburgh) implemented new features in macro flow control. Ilias Goulas (Turin) provided \KUIB{} (\KUIP{} Interface Builder) as a replacement for the original \KUIPC{} (\KUIP{} Compiler). Alain Michalon (Strasbourg) and Harald Butenschoen (DESY) ported \KUIP{} to the MVS/TSO and NEWLIB environments. Valeri Fine (Dubna) ported \KUIP{} to MSDOS and Windows/NT. C.W.\,Hobbs (DEC) provided the terminal communication routines for the VMS version of \KUIPMotif{}. The maintenance of the overall package and developments for the basic part and KUIPC is in the hands of Alfred Nathaniel. Nicole Cremel is responsible for the \KUIP{} interface to \OSFMotif{}. Fons Rademakers was in charge of the overall maintenance before that and made many important contributions to the \KUIPMotif{} interface. \section*{About this manual} Many features described in this manual edition are available only since the \Lit{94b} release of the CERN libraries. For example, there were many enhancements added to the macro interpreter. The coding of some features (e.g.\ global variables) did not meet the deadline for the \Lit{94b} release. They are already documented here with the remark that they are implemented only \Lit{95a} pre-release in the \Lit{new} area. Therefore, if you find something described in this manual does not work in your program, please check that you are using an up-to-date version of \KUIP{}. The manual is structured in the following way: \begin{UL} \item Chapter~1 gives a short overview of what \KUIP{} is doing. \item Chapter~2 is intended for all application \textem{users} describing the user interface provided by \KUIP{}. \end{UL} The remaining parts of the manual are intended for application \textem{writers}: \begin{UL} \item Chapter~3 describes in the first part how to define the commands to be handled by the application. The second part explains how to use the features provided by \KUIPMotif{}. \item Chapter~4 is a reference for the calling sequences of \KUIP{} routines. \item Chapter~5 is a reference for \KUIP{} built-in commands. \item Appendix~A gives an example for a simple \KUIP{}-based application program. \item Appendix~B gives some useful information about the system dependencies in interfacing Fortran and~C. \end{UL} Throughout this manual we use \texttt{mono-type face} for examples. Since \KUIP{} is mostly case-insensitive, we use \texttt{UPPERCASE} for keywords while the illustrative parts are written in \texttt{lowercase}. When mixing program output and user-typed commands the user input is \texttt{\underline{underlined}}. In the index the page where a command or routine is defined is in {\bf bold}, page numbers where they are referenced are in normal type. This document was produced using \LaTeX~\cite{bib-LATEX} with the {\tt cernman} style option, developed at CERN. 
A PostScript file {\tt kuip.ps}, containing a printable version of this manual, can be obtained by anonymous ftp as follows (commands to be typed by the user are underlined): \begin{XMP} \Ucom{ftp asis01.cern.ch} Connected to asis01.cern.ch. 220 asis01 FTP server (...) ready Name (asis01.cern.ch:\textsl{your-name}): \Ucom{ftp} Password: \Ucom{\textsl{your-name@your-host}} ftp> \Ucom{cd cernlib/doc/ps.dir} ftp> \Ucom{bin} ftp> \Ucom{get kuip.ps} ftp> \Ucom{quit} \end{XMP} Note that a printer with a fair amount of memory is needed in order to print this manual. There is a Usenet news group \texttt{cern.heplib} being the forum for discussions about \KUIP{} and the other packages of the CERN program library. Bug reports, requests for new features, and questions of general interest should be sent there. People without Usenet access can subscribe to a distribution list by sending a mail to \Lit{[email protected]} containing the message \begin{XMP} SUBSCRIBE HEPLIB \end{XMP} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Tables of contents ... % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \tableofcontents \newpage \listoftables \newpage \listoffigures
{ "alphanum_fraction": 0.6465126812, "avg_line_length": 39.4285714286, "ext": "tex", "hexsha": "5dd02df49e6f2699f8c25cdfc50d0db68493c4e9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "paw/kuifront.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "paw/kuifront.tex", "max_line_length": 92, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "paw/kuifront.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 2195, "size": 8832 }
\documentclass[11pt, oneside]{article}   % use "amsart" instead of "article" for AMSLaTeX format
% \usepackage{draftwatermark}
% \SetWatermarkText{Draft}
% \SetWatermarkScale{5}
% \SetWatermarkLightness {0.85}
% \SetWatermarkColor[rgb]{0.7,0,0}
\usepackage{geometry}                    % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper}                   % ... or a4paper or a5paper or ...
%\geometry{landscape}                    % Activate for rotated page geometry
%\usepackage[parfill]{parskip}           % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}                    % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
                                         % TeX will automatically convert eps --> pdf in pdflatex
\usepackage{amssymb}
\usepackage{mathrsfs}
\usepackage{hyperref}
\usepackage{url}
\usepackage{authblk}
\usepackage{amsmath}
\usepackage{fixltx2e}
\usepackage{alltt}
\usepackage{color}
\usepackage{bigints}

\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\newcommand{\argmin}{\operatornamewithlimits{argmin}}

\title{Notes on some basic probability stuff}
\author{David Meyer \\ dmm@\{1-4-5.net,uoregon.edu,brocade.com,...\}}
\date{September 30, 2014}                % Activate to display a given date or no date

\begin{document}
\maketitle

\section{Introduction}
\label{sec:intro}

Note well: There are likely to be many mistakes in this document. That said...

\bigskip
\noindent Much of what is described here follows from two simple rules:
\begin{flalign}
\label{eqn:sum_rule}
\text{Sum Rule:} & \qquad P(\mathcal{X}) = \sum\limits_{y}{}P(\mathcal{X},\mathcal{Y})\\
\label{eqn:product_rule}
\text{Product Rule:} & \qquad P(\mathcal{X},\mathcal{Y}) = P(\mathcal{X}|\mathcal{Y}) P(\mathcal{Y})
\end{flalign}

\bigskip
\noindent
\begin{itemize}
\item{The Sum Rule is sometimes called marginalization}
\item{The Product Rule is part of the proof of the Hammersley-Clifford Theorem}
\end{itemize}

\bigskip
\noindent Remember also that $\mathcal{X} = \{x_i\}_{i = 1}^{|\mathcal{X}|}$, where each $x_i$ is a realization of the random variable $x$\footnote{Each observation $x_i$ is, in general, a data point in a multidimensional space.}. Let's also say that the set $\Theta$ of probability distribution parameters can be used to explain the \emph{evidence} $\mathcal{X}$. Then we say that the ``manner in which the evidence $\mathcal{X}$ depends on the parameters $\Theta$'' is the \emph{observation model}. The analytic form of the observation model is the likelihood $P(\mathcal{X} | \Theta)$.
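\bigskip
\noindent As a concrete, admittedly toy, illustration (this example is not in the original notes, and the joint probabilities below are made up purely for illustration), the following Python snippet builds a small discrete joint distribution and checks both rules numerically:

\begin{verbatim}
# Numerical check of the sum and product rules on a small, made-up
# discrete joint distribution P(x, y); the entries sum to 1.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.20,
    ("x2", "y1"): 0.30, ("x2", "y2"): 0.40,
}

# Sum rule: marginalize the joint over the other variable.
P_x, P_y = {}, {}
for (x, y), p in joint.items():
    P_x[x] = P_x.get(x, 0.0) + p
    P_y[y] = P_y.get(y, 0.0) + p
print(P_x)   # P(x1) ~ 0.3, P(x2) ~ 0.7
print(P_y)   # P(y1) ~ 0.4, P(y2) ~ 0.6

# Product rule: P(x, y) = P(x|y) P(y) = P(y|x) P(x) for every cell.
for (x, y), p in joint.items():
    P_x_given_y = p / P_y[y]     # conditional obtained from the joint
    P_y_given_x = p / P_x[x]
    assert abs(P_x_given_y * P_y[y] - p) < 1e-12
    assert abs(P_y_given_x * P_x[x] - p) < 1e-12
\end{verbatim}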
\section{Estimating the parameters $\Theta$ with Bayes Rule}

Note that
\begin{flalign}
P(\Theta, \mathcal{X}) &= P(\mathcal{X},\Theta) \\
P(\Theta, \mathcal{X}) &= P(\Theta|\mathcal{X}) P(\mathcal{X}) \\
P(\mathcal{X}, \Theta) &= P(\mathcal{X} | \Theta) P(\Theta) \\
P(\Theta | \mathcal{X}) P(\mathcal{X}) &= P(\mathcal{X}|\Theta) P(\Theta)
\end{flalign}

\bigskip
\noindent Solving for $P(\Theta | \mathcal{X})$ we get Bayes rule
\begin{flalign}
P(\Theta | \mathcal{X}) &= \frac{P(\mathcal{X}|\Theta) P(\Theta)}{P(\mathcal{X})}
\end{flalign}

\bigskip
\noindent Said another way
\begin{flalign}
\text{posterior} &= \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}}
\end{flalign}

\bigskip
\noindent You might also see Bayes rule written using the \emph{Law of Total Probability}\footnote{The Law of Total Probability is a combination of the Sum and Product Rules} which is sometimes written as follows:
\begin{flalign}
P(A) &= \sum\limits_{n}{} P(A \cap B_{n}) \qquad \qquad \mathbin{\#} \text{by the \emph{Sum Rule} (Equation \ref{eqn:sum_rule})}\\
&= \sum\limits_{n}{} P(A , B_{n}) \qquad \qquad \; \; \, \mathbin{\#} \text{in the notation used in Equation \ref{eqn:sum_rule}}\\
&= \sum\limits_{n}{} P(A|B_{n}) P(B_{n}) \qquad \mathbin{\#} \text{by the \emph{Product Rule} (Equation \ref{eqn:product_rule})}
\end{flalign}

\bigskip
\noindent so that the posterior distribution $P(\mathcal{C}_{1} | \mathbf{x})$ for two classes $\mathcal{C}_1$ and $\mathcal{C}_2$ given input vector $\mathbf{x}$ would look like
\begin{flalign}
P(\mathcal{C}_{1} | \mathbf{x}) = \frac{P(\mathbf{x} | \mathcal{C}_1) P(\mathcal{C}_1)} {P(\mathbf{x}|\mathcal{C}_1)P(\mathcal{C}_1) + P(\mathbf{x}|\mathcal{C}_2)P(\mathcal{C}_2)}
\end{flalign}

\bigskip
\noindent Interestingly, the posterior distribution is related to logistic regression as follows: First recall that the posterior $P(\mathcal{C}_{1} | \mathbf{x})$ is
\begin{flalign}
P(\mathcal{C}_{1} | \mathbf{x}) &= \frac{P(\mathbf{x} | \mathcal{C}_1) P(\mathcal{C}_1)} {P(\mathbf{x}|\mathcal{C}_1)P(\mathcal{C}_1) + P(\mathbf{x}|\mathcal{C}_2)P(\mathcal{C}_2)}
\end{flalign}

\bigskip
\noindent Now, if we set
\begin{flalign}
a &= \ln \frac{P(\mathbf{x}|\mathcal{C}_1)P(\mathcal{C}_1)} {P(\mathbf{x}|\mathcal{C}_2)P(\mathcal{C}_2)}
\end{flalign}
\noindent we can see that
\begin{flalign}
P(\mathcal{C}_{1} | \mathbf{x}) &= \frac{1}{1 + e^{-a}} = \sigma(a)
\end{flalign}
\noindent that is, the sigmoid function.
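\bigskip
\noindent Before moving on to maximum likelihood estimation, here is a quick numerical check of the identity just derived. This snippet is not part of the original notes, and the priors and likelihood values are arbitrary, made-up numbers; it simply confirms that the two-class posterior computed directly from Bayes rule coincides with $\sigma(a)$:

\begin{verbatim}
import math

# Arbitrary illustrative numbers: priors and class-conditional
# likelihoods evaluated at some fixed input x.
P_C1, P_C2 = 0.3, 0.7          # P(C1), P(C2)
p_x_C1, p_x_C2 = 0.05, 0.02    # P(x|C1), P(x|C2)

# Posterior via Bayes rule (law of total probability in the denominator).
posterior = (p_x_C1 * P_C1) / (p_x_C1 * P_C1 + p_x_C2 * P_C2)

# Same quantity as the sigmoid of the log-odds a.
a = math.log((p_x_C1 * P_C1) / (p_x_C2 * P_C2))
sigma_a = 1.0 / (1.0 + math.exp(-a))

print(posterior, sigma_a)      # both ~ 0.517
assert abs(posterior - sigma_a) < 1e-12
\end{verbatim}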
\subsection{Maximum Likelihood Estimation (MLE)}

Given all of that, for the MLE we seek the value of $\Theta$ that maximizes the likelihood $P(\mathcal{X}|\Theta)$ for our observations $\mathcal{X}$. Remembering that $\mathcal{X} = \{x_{1}, x_{2},\hdots\}$ and that the $x_i$ are iid, the value of $\Theta$ we seek maximizes
\begin{flalign}
\prod\limits_{x_{i} \in \mathcal{X}}^{}P(x_{i}|\Theta)
\end{flalign}
\noindent Because of the product, it is easier to work with the log\footnote{and since $\log(x)$ is monotonically increasing it doesn't affect the argmax}, so we use the log likelihood $\mathcal{L}$:
\begin{flalign}
\mathcal{L} &= \sum\limits_{x_{i} \in \mathcal{X}}^{} \log P(x_{i}| \Theta)
\end{flalign}
\noindent and define $\hat{\Theta}_{ML}$ as follows:
\begin{flalign}
\hat{\Theta}_{ML} &= \argmax\limits_{\Theta}^{}\mathcal{L}
\end{flalign}
\noindent The maximization is obtained by (calculus tricks):
\begin{equation}
\frac{\partial \mathcal{L}}{\partial \theta_{i}} = 0 \quad \forall \theta_{i} \in \Theta
\end{equation}

\bigskip
\noindent Note finally that in a Generalized Linear Regression setting, we have
\begin{flalign}
\eta = \mathbf{w}^{T}\mathbf{x} + b \\
p(y|\mathbf{x}) = p(y|g(\eta); \theta)
\end{flalign}
\noindent where $g(.)$ is an \emph{inverse link function}, also referred to as an activation function. For example, if the link function is the logit function, then the inverse link function is the logistic (sigmoid) function $g(\eta) = \frac{1}{1 + e^{-\eta}}$, and the negative log-likelihood $\mathcal{L}$ is
\begin{flalign}
\mathcal{L} &= - \log p(y | g(\eta); \theta)
\end{flalign}

\subsection{Maximum a Posteriori (MAP) Estimation of $\Theta$}

Recall that
\begin{flalign}
P(\Theta | \mathcal{X}) &= \frac{P(\mathcal{X}|\Theta) P(\Theta)}{P(\mathcal{X})}
\end{flalign}

\bigskip
\noindent We are seeking the value of $\Theta$ that maximizes $P(\Theta|\mathcal{X})$, so the solution can be stated as
\begin{flalign}
\hat{\Theta}_{MAP} &= \argmax\limits_{\Theta}^{}P(\Theta|\mathcal{X}) \\
&= \argmax\limits_{\Theta}^{} \frac{P(\mathcal{X}|\Theta) \cdot P(\Theta)}{P(\mathcal{X})}
\end{flalign}

\bigskip
\noindent However, since $P(\mathcal{X})$ does not depend on $\Theta$, we can write
\begin{flalign}
\hat{\Theta}_{MAP} &= \argmax\limits_{\Theta}^{} P(\mathcal{X}|\Theta) \cdot P(\Theta) \\
&= \argmax\limits_{\Theta}^{} \prod\limits_{x_{i} \in \mathcal{X}}^{} P(x_{i} |\Theta) \cdot P(\Theta)
\end{flalign}

\bigskip
\noindent If we again take the log, we get
\begin{flalign}
\hat{\Theta}_{MAP} &= \argmax\limits_{\Theta}^{} \Bigg (\sum\limits_{x_{i} \in \mathcal{X}}^{} \log P(x_{i}|\Theta) + \log P(\Theta) \Bigg)
\end{flalign}

\subsection{Notes}
\begin{itemize}
\item{Both MLE and MAP are point estimates for $\Theta$ (in contrast with full probability distributions)}
\item{MLE notoriously overfits}
\item{MAP allows us to take into account knowledge about the prior (which is a sort of a regularizer)}
\item{Bayesian estimation, by contrast, calculates the full posterior distribution $P(\Theta | \mathcal{X})$}
\end{itemize}

\subsection{Bayesian Estimation}
\label{sec:be}

Recall that Bayesian estimation calculates the full posterior distribution $P(\Theta | \mathcal{X})$, where
\begin{flalign}
P(\Theta | \mathcal{X}) &= \frac{P(\mathcal{X}|\Theta) \: P(\Theta)}{P(\mathcal{X})}
\end{flalign}

\bigskip
\noindent In this case, however, the denominator $P(\mathcal{X})$ cannot be ignored, and we know from the \emph{sum} and \emph{product} rules that
\begin{flalign}
P(\mathcal{X}) &= \int_{\Theta}^{} P(\mathcal{X},\Theta) \; d \Theta \\
&=\int_{\Theta}^{} P(\mathcal{X}|\Theta) \: P(\Theta) \: d\Theta
\end{flalign}
\noindent putting it all together we get
\begin{flalign}
P(\Theta | \mathcal{X}) &= \frac{P(\mathcal{X}|\Theta) \: P(\Theta)}{\int_{\Theta}^{} P(\mathcal{X}|\Theta) \: P(\Theta) \: d\Theta}
\end{flalign}

\bigskip
\noindent If we want to be able to derive an algebraic form for the posterior $P(\Theta|\mathcal{X})$, the most challenging part will be finding the integral in the denominator. This is where the idea of \emph{conjugate priors} and approximate inference approaches (\emph{Monte Carlo Integration} and \emph{Variational Bayesian methods}\footnote{Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed ``data'') as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model.}) are useful. Need to further expand this...

\section{Monte Carlo Integration}

Suppose we have a distribution $p(\theta)$ (perhaps a posterior) that we want to sample quantities of interest from. To do this analytically, we need to take an integral of the form
\begin{flalign}
I = \int_{\Theta}^{}g(\theta) \: p(\theta) \:d\theta
\end{flalign}
where $g(\theta)$ is some function of $\theta$ (typically $g(\theta) = \theta$ (the mean), etc.).

\noindent Need a deeper analysis here (note to self), but the punchline is that you can estimate \emph{I} using \emph{Monte Carlo Integration} as follows: Sample $M$ values ($\theta^{i}$) from $p(\theta)$ and calculate
\begin{equation}
\hat{I}_{M} = \frac{1}{M} \sum\limits_{i = 1}^{M} g(\theta^{i})
\end{equation}
\noindent Note that this works fine if the samples from $p(\theta)$ are iid\footnote{We know this by the Strong Law of Large Numbers, see Section \ref{sec:slln}.} but if not, we can use a Markov Chain to draw ``slightly dependent'' samples and depend on the \emph{Ergodic Theorem} (see Section \ref{sec:ergodic}).

\section{Acknowledgements}

\newpage
% \cite{test}
\bibliographystyle{plain}
\bibliography{/Users/dmm/papers/bib/ml}

\section{Appendix}

\subsection{Strong Law of Large Numbers}
\label{sec:slln}

Let $X_{1}, X_{2}, \hdots, X_{M}$ be a sequence of \textbf{independent} and \textbf{identically distributed} random variables, each having a finite mean $\mu_i = E[X_{i}]$.

\bigskip
\noindent Then with probability 1
\begin{equation}
\frac{1}{M}\sum\limits_{i=1}^{M} X_i \rightarrow E[X]
\end{equation}
as $M \rightarrow \infty$.

\subsection{Ergodic Theorem}
\label{sec:ergodic}

Let $\theta^{(1)}, \theta^{(2)}, \hdots, \theta^{(M)}$ be $M$ samples from a Markov chain that is \emph{aperiodic}, \emph{irreducible}, and \emph{positive recurrent}\footnote{In this case, the chain is said to be \emph{ergodic}.}, and $E[g(\theta)] < \infty$.

\bigskip
\noindent Then with probability 1
\begin{equation}
\frac{1}{M}\sum\limits_{i = 1}^{M} g(\theta_{i}) \rightarrow E[g(\theta)] = \int_{\Theta}^{}g(\theta) \: \pi(\theta) \:d\theta
\end{equation}
as $M \rightarrow \infty$ and where $\pi$ is the stationary distribution of the Markov chain.

\end{document}
{ "alphanum_fraction": 0.6985079603, "avg_line_length": 38.7, "ext": "tex", "hexsha": "c0f3a52e1db1cf9298f4aeaa4aa4c94195cf3b21", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "davidmeyer/davidmeyer.github.io", "max_forks_repo_path": "_my_stuff/papers/ml/ps/ps.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "davidmeyer/davidmeyer.github.io", "max_issues_repo_path": "_my_stuff/papers/ml/ps/ps.tex", "max_line_length": 812, "max_stars_count": null, "max_stars_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "davidmeyer/davidmeyer.github.io", "max_stars_repo_path": "_my_stuff/papers/ml/ps/ps.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4020, "size": 11997 }
\chapter{Structural linguistics in Copenhagen: Louis Hjelmslev and his circle} \label{ch.hjelmslev} Until the 1960s or so, many American linguists held a somewhat caricatural picture of the difference between their own work and that of their European colleagues. In North America, according to a common view, linguistic research was heavily oriented toward the description and analysis of concrete linguistic data from real languages. Theoretical proposals, if not actually arrived at inductively from such practical study, were at least constantly confronted with as wide an array of factual material as possible. In Europe, on the other hand, much research on language was seen to fall more within the province of speculative philosophy than that of empirical linguistics. Linguistic theories were spun out of essentially aprioristic considerations, with only an occasional nod toward one of a small range of embarrassingly obvious standard examples. If a paper on `the morphosyntax of medial suffixes in Kickapoo', bursting with unfamiliar forms and descriptive difficulties, was typical of American linguistics, its European counterpart was likely to be a paper on `L'arbitraire du signe' whose factual basis was limited to the observation that \emph{tree} means `tree' in \ili{English}, while \emph{arbre} has essentially the same {meaning} in \ili{French}. The gross distortions in this picture (which is obviously unfair to both sides) nonetheless conceal a grain of truth. Much European work in the theory of language through the first half and more of the 20th century was concerned with philosophical problems of the nature of language; and in part for reasons growing out of the historical development of the field in America (see chapters~\ref{ch.boas}--\ref{ch.structuralists} below), much American work of the period focused on problems of fieldwork and the description of a wide array of linguistic structures. If there is one major figure in the history of linguistics that Americans saw as closest to embodying the sort of thing they have expected of Europeans, it is surely Louis {\Hjelmslev}. His views were primarily known to American linguists through \name{Francis}{Whitfield}'s (\citeyear{hjelmslev53:osg.whitfield}) translation of \citealt{hjelmslev43:prolegomena}, and this work is almost exclusively concerned with questions of Theory (with a capital T): philosophical discussions of the nature of language and arcane discussions of the proper application of unfamiliar terms, proceeding with very little reference to actual linguistic material. It is not unfair to suggest that much of what {\Hjelmslev} wrote in this work is close to impenetrable for the modern (especially North American) reader. This results in part from his exuberant coining of new terminology, combined with frequent highly idiosyncratic uses assigned to familiar words. All of this terminological apparatus is quite explicit and internally consistent, but the extremely dense and closely connected nature of his prose and the lack of reference to concrete factual material which might facilitate understanding makes the reader's task an arduous one—with few obvious rewards along the way. {\Hjelmslev} was, however, regarded generally with considerable respect, and a citation (at least in passing) of his name and of the theory of glossematics became a near-obligatory part of any discussion of fundamental views on the nature of language and linguistic theory. 
Especially during the 1950s, his work was widely praised (both in Europe and North America) for its `rigorous logic', his demand for `explicit formulation', and the extent to which he developed certain Saussurean (or at least Saussure-like) ideas to their ultimate conclusions. Despite the wide range of work in which {\Hjelmslev} is cited, however, and the generally positive terms of such references, as well as the number of languages into which his work has been translated, there is very little evidence that the actual practice of linguists (apart from his immediate students and colleagues, as well as \ili{Danish} dialectologists more generally) has ever been significantly influenced, at least in North America, by specifically Hjelmslevian ideas.\footnote{\posscitet{lamb66:stratificational.grammar,lamb66:epilogomena} Stratificational Grammar is asserted to have its foundations in Glossematics, although the resemblances are limited. \name{Michael}{Halliday}'s Systematic Functional Linguistics is also claimed to be related to {\Hjelmslev}'s ideas, though \citet{bache10:halliday.hjelmslev} argues that these references are quite superficial and in some instances misleading.} Indeed, much of the praise to be found has the character of lip service. Perhaps the favorable references to {\Hjelmslev}'s work are due to a sense of awe inspired by the undoubted elaborateness of the structure, combined with a lack of understanding of just what he was getting at (but a feeling that it must be very significant), rather than representing respect born of profound appreciation of his ideas.
{\Hjelmslev}'s view of the structure of language deserves to be better understood than it has been; not, perhaps, because his views and formulations would be assented to if discussed in detail but, rather, because he did raise some important fundamental issues in ways no one else did at the time. His discussion of these issues can be argued to suffer from important limitations. In part, these limitations stem from a vision of linguistic structure which he in his turn inherited from others. The study of this relationship may shed light on the way in which even rather independent work is shaped by the context of assumptions in which it develops. The other side of the same coin is the extent to which that context determined the reception of his work by others: again, the reaction to {\Hjelmslev}'s views by his contemporaries is worth considering.
In addition to these considerations of a narrowly historical nature, {\Hjelmslev}'s work independently merits examination by phonologists. Despite the generally abstract emphasis of his writings, he also did a certain amount of \isi{linguistic description}. Much of his work besides \citet{hjelmslev43:prolegomena} was essentially unknown outside Denmark until the publication of his \textsl{Essais linguistiques} (\citealt{hjelmslev59:essais.1,hjelmslev73:essais.2,hjelmslev85:nouveaux.essais}). When we take this work into consideration, it becomes clear that {\Hjelmslev}'s place in the canon is not undeserved. His treatments of \ili{Danish} and \ili{French} phonology and Baltic accentuation, although rather summary and incomplete, show clearly that he had interesting ideas concerning what a phonological description should consist of, and what relation should obtain between such a description and the data it is based on, ideas that were quite at variance with much other work of the time.
The discussion below will thus focus on relations between {\Hjelmslev}'s views and those of others, and on the novel features to be found in his descriptive practice. This chapter certainly does not form part of a strict linear sequence with the immediately surrounding ones. Instead, it aims to present an alternative view of the proper development of a `structural linguistics', representing an approach distinct to a considerable extent both from that represented by {\Trubetzkoy} and {\Jakobson}, though not entirely independent of them. Finally, in section~\ref{sec:eli}, I will touch briefly on the life and work of \name{Eli}{Fischer-Jørgensen}, known primarily for her work in phonetics but a student of {\Hjelmslev}'s in phonology and an important bridge between linguistics in Copenhagen and in Prague, especially the work of {\Jakobson}. \section{Hjelmslev's life and career} {\Hjelmslev} is clearly the most notable figure in the development of structural linguistics in Denmark, but he is far from isolated in the linguistic history of that country. Especially in relation to its size, Denmark has produced a remarkable number of distinguished linguists: among names from the past one can mention \name{Rasmus}{Rask}, \name{Karl}{Verner}, \name{Holger}{Pedersen}, \name{Vilhelm}{Thomsen}, and \name{Otto}{Jespersen}, to cite only those that would figure in any general history of the field. More recent scholars of international reputation include \name{Viggo}{Brøndal}, \name{Paul}{Diderichsen}, \name{Søren}{Egerod}, \name{Jørgen}{Rischel}, \name{Hans}{Basbøll}, and especially {\Eli} Fischer-Jørgensen (section~\ref{sec:eli},). More important for an understanding of {\Hjelmslev}'s work than any of these individuals, perhaps, is the general fact that a `critical mass' of scholars interested in general linguistics has long existed in the country. {\Hjelmslev} thus had a constant supply of colleagues and students with whom to exchange ideas and encouragement in the development of his own rather individual views. Louis {\Hjelmslev}\footnote{This section is based primarily on the accounts of \citet{efj65:hjelmslev.obit,fischer-jorgensen75:trends} and \citet{jensen.gregersen21:hjelmslev.jakobson}. I am grateful to \name{Frans}{Gregersen} for comments on my account of {\Hjelmslev}'s career.} was born in 1899 in Copenhagen. His father was a mathematician and a prominent figure in {Danish} academic administration at the time, who served as rector of Copenhagen University in 1928-29. It is superficially appealing to credit {\Hjelmslev}'s inclination toward highly abstract and formal theory, described by some as `algebraic', to his father's influence; yet not only did {\Hjelmslev} himself deny such influence, but the sort of work he did seems rather at odds with the specifics of his father's research (which sought precisely to provide a less abstract foundation for geometry, grounded more directly in experience than in purely theoretical constructs). In addition, {\Hjelmslev}'s own use of mathematical terms in ways far removed from their technical acceptation in that field suggests that any influence from his father was in the form of a general intellectual atmosphere rather than any specific mathematical training. More important, perhaps, was the influence from Carnap\ia{Carnap, Rudolf} and others in the Vienna Circle. Their {Danish} pupil \name{Jørgen}{Jørgensen}, logician and professor of philosophy, was a close associate of {\Hjelmslev}'s. 
\begin{wrapfigure}[14]{l}{.35\textwidth} \includegraphics[width=.8\textwidth]{figures/Hjelmslev_young.jpg} \caption{Louis Trolle Hjelmslev as a young MA} \label{fig:ch.hjelmslev.young_hjelmslev} \end{wrapfigure} In 1917, {\Hjelmslev} entered Copenhagen University, where he studied Romance and (later) comparative philology with a number of distinguished figures, especially \name{Holger}{Pedersen}. Through {\Pedersen}'s influence he became interested in \ili{Lithuanian}, and spent the year 1921 doing research in Lithuania, which resulted in his 1923 master's degree for a thesis on \ili{Lithuanian} phonetics. The year after he received his MA was spent in Prague, where his knowledge of traditional \ili{Indo-European} studies was developed. This travel was somewhat against his will: he had just been engaged to be married to \name{Vibeke}{Mackeprang}, his future wife, and was very much in love. He was much happier to spend 1926 and 1927 in Paris, where he studied with {\Meillet}, {\Vendryes}, and others; the attachment to things \ili{French} formed at this time was a lasting one, as shown in the fact that during his entire career the bulk of his writing in languages other than \ili{Danish} was in \ili{French}. In 1928 he produced a book (\citealt{hjelmslev28:principes}, \textsl{Principes de grammaire générale}) which aimed ambitiously at providing a general theoretical foundation for the study of language. The continuity between this book and his later work is evident from its goal of developing an abstract formal ``system within which the concrete categories are found as possibilities, each having an exact location defined by the conditions for its realization and its combination with other categories'' \citep{efj65:hjelmslev.obit}. This work was quite uncompromisingly theoretical in nature: having read it, {\Pedersen} advised {\Hjelmslev} also to produce something that would allow him to qualify for the doctorate in his own field, \ili{Indo-European} Comparative Philology. he also supported the publication of the \textsl{Principes} in the prestigious series published by the Royal Academy, which also gave {\Hjelmslev} a number of copies to send to other linguists he knew (about), a huge advantage at the time. {\Hjelmslev} thus used the \textsl{Principes} to make his views known as a structuralist and theoretician, but also wrote \textsl{Études baltiques} \citep{hjelmslev32:thesis}, a rather traditional work of historical phonology dealing with Baltic phonology and especially with the principles governing suprasegmental factors in these languages: \isi{tone}, \isi{accent}, and quantity. He defended this for the doctorate with {\Pedersen}'s active help. {\Pedersen}'s advice in 1928, 9 years before it became relevant, had the effect of enabling {\Hjelmslev} to claim to be qualified as a fully trained historical linguist when {\Pedersen}'s chair became available (in 1937 when {\Pedersen} turned 70). Since the doctoral degree had been granted by the University of Copenhagen with {\Pedersen} on the committee, this was an impeccable qualification. Later \name{Viggo}{Brøndal} would try to prevent {\Hjelmslev} from being appointed to the chair, precisely by pointing to his interest in general linguistics, but {\Pedersen} could point to his doctoral degree as proof that he was a qualified \ili{Indo-European} scholar (as well). During the same period, he also undertook (by request) the editing of the manuscripts and other writings of \name{Rasmus}{Rask}. 
He published three volumes of {\Rask}'s manuscripts \citep{hjelmslev:rask} with commentary and two volumes of letters in 1941. A final volume, \citet{bjerrum68:rask}, consisting of a manuscript catalog and further commentary, was published much later by his student \name{Marie}{Bjerrum}. {\Hjelmslev} was obviously fascinated by {\Rask} both personally and intellectually: he considered that the general evaluation of this scholar was completely misguided, and argued in \citet{hjelmslev:rask.comentaire}, a paper given in Paris in 1950, that the major goal of {\Rask}'s work, especially toward the end of his rather short life, was not the development of historical linguistics (the connection in which his name is generally cited), but the development of a general \isi{typology} of linguistic structure in terms of which a basically ahistorical comparison of languages would be possible. There is a certain amount of anachronism in the resulting picture of {\Rask} as a pioneer of \isi{structuralism}, but probably less than is claimed by \citet{diderichsen60:rask} in his attack on {\Hjelmslev}'s interpretation. The central issue in this controversy has been whether {\Rask} had a clear notion of the difference between typological and genetic comparison as the basis for discussing linguistic relationships. Though he probably did not, and thus should not be credited with an explicit theory of synchronic linguistic structure, his interest seems clearly to have been in the question of how languages are to be compared with one another, and not simply in how they evolve. Unfortunately, {\Rask} fits too conveniently into the conventional wisdom about the development of comparative historical linguistics in the nineteenth century, and (outside of a narrow circle of specialists) {\Hjelmslev}'s view, based on a serious and extended study of all of the available material, has not been seriously integrated into standard histories of the field. {\Hjelmslev}'s work in phonology can be said to date from 1931, the year of the {International Congress of Linguists} in Geneva. At that meeting the phonologists of the Prague school were actively proselytizing for their novel approach to \isi{sound structure} (see chapter~\ref{ch.prague}). One result of this was the formation of `phonological committees' in various research centers; and {\Hjelmslev} participated in the creation of such a committee in Copenhagen under the auspices of the \isi{Linguistic Circle of Copenhagen} (founded on the Prague model in 1931, on {\Hjelmslev}'s initiative: see \citet{jensen.gregersen21:hjelmslev.jakobson} for a detailed account of its aims and activities). The initial goal of this committee was to produce a phonological description of \ili{Danish} according to Praguian principles and as part of the \emph{Internationale Phonologische Arbeitsgemeinschaft,} as {\Hjelmslev} had promised {\Jakobson} when they met at the {Second International Congress of Linguists} in Geneva. Subsequently, however, {\Hjelmslev}'s work tended more toward the creation of a general theory of \isi{sound structure} (and of language in general), especially after he began to work together with \name{Hans Jørgen}{Uldall}. {\Uldall}, born in 1907, had studied {English} in Copenhagen with {\Jespersen} and, in 1927, in London with \name{Daniel}{Jones}. After teaching briefly in Capetown (where he substituted for \name{D. 
M.}{Beach} at the remarkably young age of twenty-two) and London, he went to the United States in 1930 to do fieldwork on American Indian languages under {\Boas}. He spent 1931--32 in California working closely together with \name{Alfred}{Kroeber}. He worked especially on Nisenan (``Southern Maidu''), and is said to have become fluent in the language. He received his MA in Anthropology from Columbia for this work under {\Boas}'s supervision in 1933 (though he never submitted his thesis), and returned to Copenhagen (where he had no real job awaiting him, a problem that was to plague him for most of his professional life). The collaboration between {\Hjelmslev} and {\Uldall} began shortly after his return, within the context of the `phonological committee'. Its first concrete result was a paper \citep{hjelmslev.uldall35:phonematics} `On the Principles of Phonematics', delivered to the Second International Congress of Phonetic Sciences in London in 1935 (figure~\ref{fig:ch.firth.icphs_1935}) by {\Hjelmslev} and accompanied by \posscitet{uldall36:london} presentation of the phonematics of \ili{Danish}. While the picture of `phonematics' presented by {\Hjelmslev} is close in spirit to Praguian `phonology', it also diverges quite clearly in significant details. Importantly, {\Hjelmslev} and {\Uldall} reject both the sort of psychological definition of phonemes (as the `psychological equivalent of a \isi{speech sound}' or as the `intention' underlying realized speech) characteristic of the very earliest Prague school work under the influence of {\DeCourtenay}, and also any sort of purely phonetic definition which would identify phonemes with external physical properties of the speech event. Instead, they require that phonemes be defined exclusively by criteria of distribution, \isi{alternation}, etc., within the linguistic pattern, as foreshadowed already in \posscitet{hjelmslev28:principes} \textsl{Principes de grammaire générale}.
\begin{wrapfigure}[12]{r}{.5\textwidth}
\includegraphics[width=.9\textwidth]{figures/Hjelmslev_and_uldall.jpg}
\caption{Hans Jørgen Uldall and Louis Hjelmslev}
\label{fig:ch.hjelmslev.uldall-hjelmslev}
\end{wrapfigure}
The differences between {\Hjelmslev}'s views and those of the Prague phonologists were quite explicit; indeed, this is a point {\Hjelmslev} insisted on many times. Virtually all of his papers dealing with sound structure contain at least as an aside, and sometimes as the main point, a reproof of `phonology' as making an important conceptual mistake in basing its analysis on considerations of substance—especially on phonetic properties. {\Hjelmslev}'s interaction with both {\Trubetzkoy} and {\Jakobson} involved a considerable amount of mutual criticism.
This was never explicitly bitter or personal in tone on either side, although \citet[17]{martinet85:hjelmslev} reports that ``[l]e refus de reconnaître toute dette envers Prague était, chez {\Hjelmslev}, au moins partiellement déterminé par une hostilité personnelle – le mot n'est pas trop fort – envers Troubetzkoy''.\footnote{``The refusal to recognize any debts to Prague was, for {\Hjelmslev}, at least partially determined by a personal hostility – this word is not too strong – towards {\Trubetzkoy}''} The feeling seems to have been mutual: after the presentation by \citet{hjelmslev.uldall35:phonematics} at the Congress in London, in which the basis of phonemic entities in phonetic substance was strongly rejected in favor of purely formal criteria, {\Trubetzkoy} wrote \citep[248]{liberman01:trubetzkoy.anthology} to {\Jakobson}, who had not been present at the Congress, that ``To a certain extent, {\Hjelmslev} is an enemy. [\ldots] I believe that {\Hjelmslev} is trying to ``out-Herod Herod,'' that is, us.'' (See also \citealt[22]{early.years:efj-rj.letters}; \citealt[19]{efj97:jakobson}). As well as in his letter to {\Jakobson}, \citet[83]{trubetzkoy39:grundzuge} explicitly criticizes the point of view of {\Hjelmslev}'s London paper. Relations between glossematics and other forms of structural linguistics seem never to have been particularly warm either, although {\Jakobson} himself enjoyed warm personal relations with {\Hjelmslev} over many years \citep{efj97:jakobson}.
Since 1934, {\Hjelmslev} had been a reader in comparative linguistics in Aarhus, where {\Uldall} had joined him in order to continue their joint work. In 1937, {\Hjelmslev} succeeded {\Pedersen} in the chair of general linguistics in Copenhagen (al\-though {\Uldall} was still without a regular job). By this time, the two had decided that their views on phonematics could be combined with {\Hjelmslev}'s earlier work on grammatical categories (represented in his \textsl{Principes}, and also by his work \citep{hjelmslev:case} on case) into a general theory of language. Both felt that this was the first approach to language that treated it in itself and for its own sake rather than as a combination of the objects of other, non-linguistic disciplines—such as psychology, physiological and acoustic phonetics, etc. A distinct name seemed warranted to emphasize this difference from previous `linguistics', and thus was born the field of \emph{glossematics}.
In order to give substance to glossematics, {\Hjelmslev} and {\Uldall} wanted to provide a complete set of definitions and concepts that would constitute a rigorous, internally consistent framework of principles, founded on a bare minimum of terms from outside the system. Such a theoretical apparatus would specify the sorts of formal system that count as `languages' in the most general terms, and also what constitutes an `analysis' of a language. The latter notion is described in glossematic writings as a set of `procedures' of analysis—probably an unfortunate term, since it suggested the sort of field procedures a linguist not knowing a given language might actually apply to arrive at an analysis of it. In fact, the notion of `procedure' in glossematics is a specification of the form a finished analysis takes, not the way one arrives at it.
To say that texts are made up of paragraphs, which are made up of sentences, which are made up of clauses, etc., is to say nothing at all about how to go about dividing up an actual text in practice, and glossematics had no real practical hints to offer on this score. Rather, it was assumed that the linguist went about learning and analyzing a language using any methods or shortcuts that turned out to be convenient: only after arriving at an analysis was it to be organized so as to conform to the glossematic `procedure'. {\Hjelmslev} and {\Uldall} kept developing and elaborating their analytic framework and system of definitions, with the hopes of publishing soon a detailed Outline of Glossematics. In 1936 at the International Congress of Linguists in Copenhagen, they distributed a pamphlet of a few pages \citep{hjelmslev.uldall36:outline}, identified as a sample from a work of this title ``to be published in the autumn.'' No year was specified for this ``autumn'' however, and it became a standing joke among linguists in Copenhagen.\footnote{Such a long-delayed but much-referred-to work, supplying the conceptual underpinning for a good deal of other work, cannot fail to remind linguists of a more recent vintage of the \textsl{Sound Pattern of English}.} In 1939, as the war was beginning, {\Uldall} finally was offered a more secure position—in Greece, with the British Council. His departure effectively severed the glossematic collaboration during the war years, but the two continued to work independently on what they still considered their joint project. {\Hjelmslev} completed a sort of outline of the theory, but felt he ought not to publish it in {\Uldall}'s absence (it was ultimately published as \citet{hjelmslev75:resume}, \textsl{Resume of a Theory of Language}). Instead, he produced \citet{hjelmslev43:prolegomena}, a sort of introduction to the theory and its conceptual basis, under the title \textsl{Omkring sprogteoriens grundlæggelse} (translated into {English} in 1953 with some minor revisions as \citet{hjelmslev53:osg.whitfield}, \textsl{Prolegomena to a Theory of Language}, further revised in collaboration with Whitfield\ia{Whitfield, Francis} as \citet{hjelmslev61:prolegomena}). Though {\Hjelmslev} at least claimed to regard this as a sort of `popular' work, indeed a work of ``vulgarization'', it is surely one of the densest and least readable works ever produced in linguistics. It is largely through this book (and reviews of it), however, that linguists outside {\Hjelmslev}'s immediate circle came to know anything about the substance of glossematics. In 1952, he taught in the {Linguistic Society of America}'s Summer {Linguistic Institute}, where he had an opportunity to present his views to a North American audience. This event certainly made glossematics better known outside Europe, but does not appear to have produced a great many converts to the theory. {\Hjelmslev} and {\Uldall} continued to work independently on the theory over the following years, but were unable to spend much time together. {\Uldall} was briefly in London, and held a succession of positions in Argentina, Edinburgh, and later in Nigeria; he was able to spend 1951-52 in Copenhagen, but by this time it appears that his and {\Hjelmslev}'s views had come to diverge significantly. 
They still hoped to bring out a unified Outline of Glossematics; in fact, {\Uldall} published part 1 of such a work \citep{uldall57:outline}, but {\Hjelmslev} found himself unable to write his proposed part 2 on the basis of {\Uldall}'s presentation. {\Uldall} himself died of a heart attack in 1957; and {\Hjelmslev}'s own time during the 1950s and early 1960s was increasingly devoted to university administrative tasks rather than to the further development of glossematics. Though he produced a number of papers on particular topics, including at least one \citep{hjelmslev54:stratification} with a general scope, he never published any more comprehensive description of his theory beyond that in the \textsl{Prolegomena}. He was very ill in his last years, and died in 1965. \section{Hjelmslev's notion of an `immanent' Linguistics} Following {\Saussure}, {\Hjelmslev} regarded languages as a class of sign systems: the essence of a language is to define a system of correspondences between sound and sense. The analysis of a language, then, involves describing each of these two \emph{planes} and their interconnections. The domain of the Saussurian \emph{\isi{signifié}}—the `meanings' of signs—{\Hjelmslev} calls the plane of \emph{content}, while the domain of the \emph{\isi{signifiant}} is the plane of \emph{expression}. Each of these planes in any given language has its own structure: words (or morpheme-sized units, to reduce attention to elements the size of a minimal sign) are realized by a sequence of segments in the expression plane; and their meanings can be regarded as combinations of smaller componential units in the plane of content. Importantly, these two analyses of the sign are not conformal, in the sense that units of expression are not related in a one-to-one fashion to units of content. The word \emph{ram} /ræm/ in \ili{English} can be regarded as a sequence of /r/ plus /æ/ plus /m/ in the \isi{plane of expression}, and as the combination of \{male\} and \{sheep\} in the \isi{plane of content}, but there is no exact one-to-one correspondence between the two analyses. {\Hjelmslev} considered that previous (and contemporary) linguistics had failed to provide an analysis of either content or expression in terms of its own, strictly linguistic or \emph{immanent} structure. In particular, the linguistic analysis of content had been directed toward an account of linguistic categories of \isi{meaning} based on general aspects of human mental or psychological organization; while the analysis of expression had attempted to reduce this aspect of linguistic structure to the study of general \isi{acoustics} or physiological phonetics. In his opinion, other linguists were attempting to study the categories of language as special cases of more general domains, each of which (in particular, psychology and phonetics) constituted a more comprehensive field that was in principle independent of the special properties of language. For {\Hjelmslev}, all such moves are fundamentally mistaken, in that they obscure or deny the specifically \emph{linguistic} character of language. The only way to study language in its own right, according to him, was to develop a notion of linguistic structure completely independent of the specifics of either phonetic realization or concrete intentional meanings. 
The radicalism of {\Hjelmslev}'s project lies in its seemingly paradoxical proposal to study systems of correspondence between \isi{sound and meaning} with methods that are to be completely independent of either sounds or meanings. His critics, needless to say, did not fail to point out and even exaggerate the apparent contradictions of such an approach. I will discuss the basis and justifiability of this program below. At this point it is worth pointing out, however, that in embracing it {\Hjelmslev} became the first modern linguist to campaign specifically against the notion that `\isi{naturalness}' in linguistics is to be achieved by reducing facts of linguistic structure to facts from other, not specifically linguistic, domains. This issue was often ignored in later structuralist discussion; or misstated, as when {\Hjelmslev} is cited simply as advocating the analysis of linguistic structure without appeal to \isi{meaning}, ignoring the fact that phonetic facts are in his terms just as irrelevant as semantic ones. Indeed, in the strongly positivistic atmosphere of scientific studies in the 1930s, 1940s and 1950s, it seemed hard to take seriously an approach to language that renounced at the start a foundation in operational, verifiable external facts. It would, in fact, be a mistake to equate the kind of program against which {\Hjelmslev} was actually reacting, one that saw the goal of linguistics as the reduction of language to non-linguistic principles, with empiricist approaches to the field in general. In advocating the complete independence of linguistics from both semantics and (more importantly) phonetics, however, {\Hjelmslev} left few substantive points of contact between his view of glossematics and the empiricist linguistics of his time—except for an appeal to rigor and explicitness, a kind of `motherhood' issue that no scientist could possibly fail to applaud. The fundamental presuppositions about the role of `\isi{naturalness}' in linguistic structure against which {\Hjelmslev} aimed his appeal for an immanent linguistics were not really discussed by most of his critics. These concerns would reappear later, however, in the context of post-\textsl{Sound Pattern of English} generative phonology under conditions that make substantive discussion easier to engage (see chapter~\ref{ch.spe} below, and \citealt{sra81:unnatural}). {\Hjelmslev} argues for the independence of linguistics from external considerations (at least from the phonetic facts that might be thought to play an essential role in the \isi{plane of expression}) by claiming that in fact the same linguistic \emph{system} can be realized in radically different media. In particular, the \isi{linguistic system} of a given language can be realized either orally, in the sounds studied by phoneticians, or orthographically, by symbols of an alphabet. Even within the limits of phonetic realization, he suggests that arbitrary replacements of one phonetic segment by another (so long as the same number of contrasts remain, and the pattern of distribution, \isi{alternation}, etc., of contrasting elements stays the same) would have no effect on the system. If [t] and [m] were systematically interchanged in all \ili{German} words, he suggests, the result would still be identically the same system as that of standard \ili{German}. 
Additionally, the same system could be realized in systems of manual signs, flag signals, Morse code, etc.: a potentially limitless range of ways to express what would remain in its essence the same \isi{linguistic system}. If this is indeed the case, the system itself as we find it can have no intrinsic connection with phonetic reality (to the exclusion of other possible realizations). Reviewers and others discussing glossematics replied that (a) orthographic and other systems are obviously secondary in character, parasitic on the nature of spoken language and developed only long after spoken language had arisen; and (b) in any event, such systems do not in general display the `same' system as spoken language. To the first of these objections {\Hjelmslev} replied that the historically second\-ary character of writing, etc., is irrelevant, because what is important is the \emph{possibility} of realizing the system in another medium, not the fact of whether this possibility was or was not realized at some specific time. As to the supposedly derivative character of writing, {\Hjelmslev} quite simply denied that writing was invented as a way of representing (phonetically prior) speech: he maintained that writing represented an independent analysis of the expression system of language. If sound and writing both serve as realizations of the same system of elements composing the expression side of signs, it is only natural that they should show close correspondences; but the lack of detailed isomorphism between the concrete facts of \isi{phonetics} and \isi{orthography} in all known writing systems, together with our obvious lack of knowledge of the specific motivation and procedures of the inventors of writing, make the point at least moot. The content of the second objection noted above is that when one studies the system of, for example, \ili{English} writing in \ili{Latin} characters, one arrives at a rather different system than when one studies \ili{English} phonetics. Written \ili{English} does not have contrastive \isi{stress} (or any \isi{stress} at all, for that matter); it makes some contrasts spoken \ili{English} does not (e.g., \emph{two} vs. \emph{too} vs. \emph{to}), and \emph{vice versa} (e.g. \emph{read} [rijd] vs. \emph{read} [rɛd]); the \isi{distinctive features} of the letters involved (insofar as this notion can be carried over into such a domain) establish rather different candidates for the status of \isi{natural class} than do phonetic criteria; and so forth. Again, it could be argued that this is beside the point: {\Hjelmslev} was quite ready to concede that in practice rather different systems of expression form (e.g., those corresponding to phonetic and to written norms) might be matched to the same system of content form as variants of the `same' language; but what matters is the fact that in principle it would be \emph{possible} to develop a writing system that would mirror the same system of expression as that operative in a given spoken language. Indeed, an adequate system of \isi{phonemic transcription} (perhaps representing phonemes as graphic feature complexes, somewhat along the lines of the Korean Hangeul \isi{orthography}) would serve to make {\Hjelmslev}'s point in principle. Discussion of {\Hjelmslev}'s views in the literature, thus, cannot really be said to have effectively refuted his position on the independence of linguistic structure from external considerations. 
It would have been to the point, perhaps, to question whether it is really accurate to say that the character of the system would remain unchanged if arbitrary substitutions were made in the realizations of its elements. After all, the whole thrust of Neogrammarian \isi{explanation} had been that the character of a synchronic state of language results from the cumulative history of its accidental details. If such a state represents a system, indeed, that fact must in some way grow out of a combination of specific particulars; and those particulars must have an influence on the development and maintenance of the system's internal equilibrium. Though such an exclusively historical view of language largely disappeared (outside of some Indo-Europeanist circles, at least) with the rise of \isi{structuralism}, most linguists would still agree that the working of \isi{sound change} and \isi{analogy} (both crucially, though not exclusively, based in the details of the external form of signs) contributes to the formation of the linguistic system. If that is so, arbitrary changes in the external form of its signs could not be said to leave the system of a language essentially unchanged. The principal objections made to {\Hjelmslev}'s radical position do not seem to have been based on grounds such as these, however. \section{Basic terms of glossematic analysis} To make explicit the separation he intended between the system and its manifestation, {\Hjelmslev} proposed a system of terms that has not always been well understood by later writers: \citet{efj66:form.and.substance,fischer-jorgensen75:trends} discusses and clarifies this terminology and its history. First of all, he proposed to distinguish between linguistic \emph{form} and linguistic \emph{substance}: `form' is the array of purely abstract, relational categories that make up the systems of expression and of content in a given language, while `substance' is constituted by some specific manifestation of these formal elements. Since the system itself is independent of any concrete manifestation, and any such manifestation only has a \emph{linguistic} reality insofar as there is a system underlying it, {\Hjelmslev} maintained that ``substance presupposes form but not vice versa.'' Although by the logic of the terms in question this proposition is essentially tautologous, it was considered one of his most controversial assertions. This is, of course, because it is in this claim that the independence of linguistic structure from phonetic (and semantic) reality becomes concrete. A particular linguistic substance is regarded as the manifestation of a given linguistic form in a particular \emph{purport}. This latter is a kind of `raw material' subject to being used for linguistic purposes, but which has no linguistic character in itself unless shaped by a linguistic form into a linguistic substance. {\Hjelmslev} uses the image of a net (representing form) casting shadows on a surface (the \isi{purport}) and thereby dividing it into individual cells or areas (the elements of substance). The complete range of human vocal possibilities (considered as a multidimensional continuum) constitutes one sort of linguistic \isi{purport}, which can substantiate the manifestation of a linguistic form (e.g., the sound pattern of \ili{English}) in a substance (roughly, the `phonemic' system of \ili{English} in structuralist terms).
In the nature of things, the same \isi{purport} may be formed into different substance by different systems (e.g., the same space of vocal possibilities is organized differently by different languages), just as the same form may be `projected' onto different \isi{purport} to yield different substances (as when both phonetic and orthographic manifestations can serve to substantiate the system of expression of the same language). The notion of \isi{purport} is reasonably clear in the domain of expression (given the ideas of `form' and `substance' in their glossematic sense), but it is not so obvious that there is a range of potentially different `purports' available to substantiate the content plane of language, although the putative independence of language from semantics implies that there ought to be. The analysis of each plane, content as well as expression, involves a search for the set of constitutive elements of signs within that plane and for the principles governing the organization of these elements into larger units. The specific glossematic implementation of this search is the `\isi{commutation test}', according to which two elements of linguistic substance in a given plane manifest different elements of linguistic form if the substitution of one for the other leads to a change in the other plane. In one direction, this is quite standard structuralist procedure: phonemic \isi{contrast} exists between two phonetic elements when substitution of one for the other leads to change of \isi{meaning}. An innovation in glossematics consists in the fact that the same procedure is supposed to be applicable in looking for minimal elements of content as well: thus, substitution of \{male\}+\{sheep\} for \{female\}+\{sheep\} leads to a change in expression (from \emph{ewe} to \emph{ram}), and so establishes \{male\} and \{female\} as different elements of content form in \ili{English}. It must be added that this program of analyzing content form as well as expression form by essentially the same procedure remained a purely theoretical one, with no substantial, extended descriptions of the glossematic content form of particular languages ever having been produced. In general, the complete symmetry of the two planes (expression and content) was a major tenet of \isi{glossematic theory}; but in the absence of serious studies of content form, it remained a point of principle with little empirical content. The `\isi{commutation test}' sounds like an eminently practical procedure; indeed it resembles in its essentials the sort of thing students of field linguistics in North America were being told to do in studying unfamiliar languages. Seen in that light, however, it would seem to compromise the claim that substance presupposes form but not vice versa: if the only way form can be elucidated is by such a manipulation of the elements of substance, its independence seems rather limited. But here it is important to note that {\Hjelmslev} did not at all mean the \isi{commutation test} to serve in this way: as observed above, he felt linguists engaged in field description should make use of whatever expedients helped them arrive at an analysis (including an operational analog of the \isi{commutation test}, if that proved useful), but that the validation of the analysis was completely \emph{ex post facto}, and not to be found in the procedures by which it was arrived at. 
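The two-sided character of the test can be illustrated with a small sketch. What follows is my own schematic rendering, not a piece of glossematic practice; the toy signs, the crude transcriptions, and the content features \{male\}, \{female\}, \{sheep\} are assumptions made for the example. A sign is modeled as a pair of an expression form and a content form, and two elements of a given plane are taken to commute if exchanging them in some sign changes the other plane.
\begin{verbatim}
# Sketch of the commutation test applied in both planes.  The lexicon is a
# toy assumption; transcriptions are deliberately rough.

signs = [
    (('r', 'æ', 'm'), frozenset({'male', 'sheep'})),    # ram
    (('j', 'u', 'w'), frozenset({'female', 'sheep'})),  # ewe
    (('r', 'æ', 't'), frozenset({'rat'})),               # rat
]

def commutes_in_expression(a, b):
    """a and b commute if exchanging them in some expression form yields
    the expression of a sign with a different content."""
    for expr, content in signs:
        swapped = tuple(b if seg == a else seg for seg in expr)
        for expr2, content2 in signs:
            if expr2 == swapped and content2 != content:
                return True
    return False

def commutes_in_content(a, b):
    """a and b commute if exchanging them in some content form changes
    the associated expression."""
    for expr, content in signs:
        if a in content:
            swapped = (content - {a}) | {b}
            for expr2, content2 in signs:
                if content2 == swapped and expr2 != expr:
                    return True
    return False

print(commutes_in_expression('m', 't'))       # True: ram vs. rat
print(commutes_in_content('male', 'female'))  # True: ram vs. ewe
\end{verbatim}
The same schematic check could equally well be run over an orthographic or any other manifestation of the same forms, which is the sense in which it tests form rather than substance.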
In other words, the analysis could perfectly well come to the analyst fully formed in a dream: the role of the \isi{commutation test} was to demonstrate its correctness as a formal system underlying a particular association of content substance and expression substance. The goal of linguistics, in glossematic terms, is the development of an `algebra' (or \isi{notational system}) within which all possible linguistic systems can be expressed. Such a theory specifies the range of abstract possibilities for the systems of expression form and content form in all languages, independent of particular manifestations of such systems in specific substance. Each of the `grammars' specified by such a theory is simply a network of relationally defined formal elements: a set of categories available for forming suitable \isi{purport} into substance. The elements of such a network are themselves defined entirely by their distinctness within the system (their commutability), and by their possibilities of combination, distribution, \isi{alternation}, etc. The element of \ili{English} expression form which we identify with the \isi{phoneme} /t/, thus, is not definitionally a voiceless dental stop but, rather, something that is distinct from /p/, /d/, /n/, etc.; that occurs initially, finally, after /s/, etc.; that alternates with /d/ (in the dental preterite ending), etc. The labels attached to such minimal elements of expression form (and to corresponding elements of content form) are completely arbitrary, as far as the system is concerned: their identity resides entirely in their relations to other elements, not in their own positive properties. Such a view is clearly (and explicitly) an attempt to realize {\Saussure}'s notion of \emph{langue} as form and not substance. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.9\textwidth]{figures/hjelmslev_later.jpg} \caption{Louis Hjelmslev} \label{fig:ch.hjelmslev.hjelmslev2} \end{wrapfigure} As we have emphasized repeatedly above, it is this complete independence between linguistic form and its manifestation in substance that is both the hallmark and the most controversial aspect of {\Hjelmslev}'s view of language. Taken in a maximally literal sense, for instance, this separation seems to preclude any sort of even halfway coherent analysis of actual languages: if we ignore `substance', how are we to identify the initial and final variants of a single phonetic type (e.g., [k])? Indeed, how can we even identify initial [k] when followed by [i] with the [k] which is followed by [u]? If we carry out a consistent analysis based on identifying elements only by their possibility of commuting with others under given distributional conditions, we arrive at an analysis in which, for example, there are ten contrasting units initially before [i], eight initially before [u], and six finally; but what basis do we have for identifying the units found in one position with those found in another, except their substantive (phonetic) resemblance? The issue is reminiscent of the approach taken to the \isi{phoneme} by {\Twaddell} (section~\ref{sec:twaddell} below). For {\Hjelmslev}, the answer to this problem did not lie in conditions on the possible form of grammars. In the general case, a variety of different forms will be available for the same set of concrete linguistic facts. 
The theoretical validity of any proposed formal interpretation of a \isi{linguistic system} is assured by the fact that (a) it satisfies the \isi{commutation test} (in that exactly those changes in one plane that result in changes in the other are registered as changes between distinct elements of the system), and (b) it satisfies the oddly named `\isi{empirical principle}' in that the system itself is internally consistent, exhaustive (i.e., accounts for all of the facts), and as simple as possible (in that it posits a minimal number of constituted elements in each plane). We will return to the `\isi{empirical principle}' (and especially to the notion of \isi{simplicity} it contains) below; for the moment, it is sufficient to note that this principle considerably under-determines the formal interpretation of a given linguistic usage. The solution to the problem of providing a phonetically plausible formal interpretation of a given usage lies rather in the way in which a linguist matches a potential formal system (selected from the range of possibilities given by the theory) to that usage. The linguist chooses that one of the formal possibilities which is most appropriate to the substance, in that it provides the best and most straightforward match between formal and substantive categories. Thus, there is nothing in the nature of linguistic form that requires the linguist to choose the `right' system (as long as the system he chooses is one that satisfies the \isi{empirical principle}, and accounts for commutation) — but there is nothing in the theory that prevents him from doing so, either. The principles that govern the appropriateness of particular formal interpretations of linguistic usage fall outside of the study of form \emph{per se}, as they must if substance is to presuppose form but not \emph{vice versa}. This answer, while logically adequate, is unlikely to satisfy those who feel {\Hjelmslev}'s separation of form from substance is too radical. On the one hand, he is undoubtedly correct in insisting that the system of language is centrally governed by properly \emph{linguistic} principles, principles which cannot be reduced to special cases of the laws of physiology, physics, general psychology, logic, etc. But on the other hand, the categories of linguistic form show too close a correspondence to those of substance to allow linguists to treat this relationship as some sort of extra-systemic consideration, or even as a colossal accident. By and large, the \isi{regularities} of distribution, \isi{alternation}, and similar properties of linguistic elements operate with reference to \emph{phonetically} natural classes, have \emph{phonetic} explanations (at least in part), etc. Further, we see that linguistic systems when expressed in other media than the phonetic show a similar dependence on, and determination by, the properties of that medium. Striking demonstration of this has come in the research on manual (or `sign') languages conducted since roughly the 1960s. When {\Hjelmslev} wrote, the only such systems that were generally considered by linguists were systems of finger spelling, in which manual signs serve in a more or less direct way to represent letters of an established \isi{orthography}—itself, in turn, representing a spoken language.
With increasing attention to signed languages in their more general form has come the realization that their structure falls within the range of systems known from spoken languages, and that they are grounded in the same cognitive and neural bases as spoken languages, \emph{modulo} physical differences of modality. On the other hand, they typically represent unique, autonomous systems that are quite different in structure from (and not essentially parasitic on) the spoken languages of the communities within which they are used. An introduction to some basic properties of manual languages and their distinctness from spoken languages is provided by \citet{bellugi.klima79:sign}; a collection of results from the more recent (massive) literature is provided by \citet{brentari10:sign-languages}. While these systems fall well within the class of human natural languages, their organizing principles, the natural classes of elements that function in linguistic \isi{regularities}, and the principles of \isi{historical change} operating on the elements, etc., can only be understood in terms of the specific characteristics of their manual implementation—suggesting that a similar understanding of phonetic implementation is essential to an account of spoken languages. Paradoxically, language seems to be subject in its essence to its own proper set of organizing principles, while its concrete details can be largely related to extra-linguistic factors. This contradiction is nowhere resolved (or even admitted) by {\Hjelmslev}, but his work has the merit of stressing one side of the question so strongly as at least to raise the issue. Many other investigators have asserted the autonomy of linguistic structure, but few have been willing to follow this proposition in its most absolute form nearly so far. Probably the only view of phonology to pose the problem and offer a concrete solution to it is that associated with {Baudouin de Courtenay} and {\Kruszewski} (cf. above, chapter~\ref{ch.kazan}): here the extra-linguistic factors serve as \isi{constraints} on the raw material that enters the \isi{linguistic system}, while the system itself is subject to its own distinct set of principles. As we have already noted, this is more a program for research than a concretely articulated theory, but it does propose an account of what must be considered the most central issue raised by {\Hjelmslev}: the relation between form and substance in linguistic structure. There are numerous other issues in general linguistics that are addressed by {\Hjelmslev}'s work, but considerations of space preclude further discussion here. On the basis of the overall account given above of the conceptual foundation and goals of glossematics, I now move on to the proposals made within that theory concerning sound structure, and their instantiation in particular descriptions. \section{Hjelmslev's approach to the description of sound structure} The abstract character of the issues treated thus far in the present chapter and their distance from actual empirical descriptions of particular language data are completely typical of the writings for which {\Hjelmslev} is known. His study of such theoretical topics was not, however, carried out in as near-total an isolation from concrete factual material as is sometimes believed. His early training in {Indo-European} studies, for example, involved the study of a range of languages necessary to pursue that kind of research.
His work on Baltic (especially \ili{Lithuanian}) for his doctorate involved direct fieldwork, and forced him to pay attention to a set of descriptive problems in the domain of \isi{accent} to which he would return numerous times in his later, theoretical writings. \begin{wrapfigure}{r}{.4\textwidth} \includegraphics[width=.8\textwidth]{figures/Hjelmslev3.jpg} \caption{Louis Hjelmslev} \label{fig:ch.hjelmslev.hjelmslev3} \end{wrapfigure} In addition, he developed descriptive analyses of at least two other languages in some detail: \ili{French} and \ili{Danish}. The description of \ili{French} is known primarily from a summary by \name{Eli}{Fischer-Jørgensen} of lectures {\Hjelmslev} gave in 1948-49 \citep{hjelmslev70:french}. The analysis of \ili{Danish} is presented in an outline by {\Hjelmslev} himself \citep{hjelmslev51:danish}, again representing lecture material rather than a finished paper \emph{per se}. Despite its incompleteness and inconsistencies, the analysis presented of \ili{Danish} is quite interesting and substantial; it has remained little known because it appeared only in \ili{Danish} in a comparatively obscure publication. An {English} translation has, however, been published as \citealt[247--266]{hjelmslev73:essais.2}. Of special help to present-day readers is the fact that {\Hjelmslev}'s analysis of \ili{Danish} has been presented and extended by {\Basbøll} in a series of two articles (\citealt{basboll71:hjelmslev1,basboll72:hjelmslev2}; cf. also \citealt{efj72:note}). {\Basbøll}'s aim is to demonstrate the potential descriptive scope of a strictly glossematic analysis of sound structure, and he stays explicitly within that framework in explicating, improving, and further developing {\Hjelmslev}'s analysis. According to \citet{efj72:note}, a number of {\Basbøll}'s proposed modifications represent points that {\Hjelmslev} and others had discussed informally and with which {\Hjelmslev} was more or less in agreement. More recently, \citet{basboll17:hjelmslev.french} provides similar treatment of {\Hjelmslev}'s account of \ili{French}, and \citet{basboll21:glossematics} updates and compares the two glossematic analyses. The sketchy analyses of \ili{Danish}, \ili{French}, and, to some extent, \ili{Lithuanian} that we find in {\Hjelmslev}'s work perhaps raise more questions about the descriptive methodology that should be attributed to the theory of glossematics than they answer. Nonetheless, it is possible to gain a reasonable idea of what such descriptions would look like in practice from a study of the material referred to above, especially that dealing with \ili{Danish}. The few other descriptions that have been produced under the label of glossematics are not, unfortunately, reliable as indicators of {\Hjelmslev}'s own views \citep{fischer-jorgensen75:trends}. Within the scope of this chapter, we cannot address all of the points of interest raised in {\Hjelmslev}'s work. I attempt here only to give a notion of the dimensions along which his views differed from those of his contemporaries, and especially to present his views as they bear on the central issues of this book. For {\Hjelmslev}, the analysis of the expression system of a given language starts from the set of elements that commute (or \isi{contrast}) with each other. 
These are all at least candidates for the status of elementary constituents of the expression system; as we will see below, however, the inventory may later be reduced if there are reasons to represent some items as combinations or variants of others. Within each of the two planes of language, the elementary constituents of linguistic form are called \emph{taxemes}. These are the minimal units that can be arrived at in any particular analysis: in the plane of expression they are roughly the `size' of a segment (or \isi{phoneme}). The point of introducing this terminology was (at least in principle) to emphasize the independence of glossematic notions of linguistic form, and especially its relation to substance, from their `phonological' counterparts (primarily the views of the Prague school and those of \name{Daniel}{Jones} --- see chapter~\ref{ch.firth}). The essential difference is supposed to lie in the fact that \isi{taxemes} are elements of pure linguistic form, having no necessary connection with substance. The \isi{taxemes} could, of course, be manifested phonetically: in that case, the units of phonetic substance that manifest them are called \emph{phonematemes} by {\Hjelmslev}. These are roughly units similar to structuralist phonemes, if we construe these as segments given a `broad phonetic' characterization from which most or all non-distinctive phonetic detail is omitted. The \isi{taxemes} can be further dissolved into combinations of prime factors called \emph{glossemes}. In scope, these units are comparable (in the \isi{plane of expression}) to \isi{distinctive features}; but their analysis is purely formal and universal, and depends in no way on the actual phonetic content of the segments manifesting the \isi{taxemes}.\footnote{At least in principle, although it is striking how much {\Hjelmslev} refers to phonetic substance in the end.} The \isi{glossemes} in the \isi{plane of expression} are called \emph{cenemes} and those in the \isi{plane of content} \emph{pleremes}. {\Hjelmslev} sometimes refers more generally to elements as `cenematic' or `plerematic' (i.e., as units of expression and content, respectively); and to `cenematics' and `plerematics' as the study of expression and the study of content. Since the analysis of \isi{taxemes} into \isi{glossemes} has much less systematic significance for the questions of interest to us here, I will ignore these terms below and treat \isi{taxemes} as the minimal units of linguistic form in each of the two planes. The \isi{taxemes} of expression form are themselves defined by the network of relations into which they enter. In his \citeyear{hjelmslev.uldall35:phonematics} (preglossematic) treatment, {\Hjelmslev} divides the {rules} characterizing these into three classes: (a) {rules} of grouping, which specify the distributional, clustering, etc., properties of elements; (b) {rules} of \isi{alternation}, which specify the replacement of one element by another under specified grammatical conditions; and (c) {rules} of implication, which specify replacements that take place under phonematic conditions. This last definition cannot be taken literally, since phonematic realization is only one possible manifestation of linguistic form (others being orthographic, etc., as discussed in previous sections). 
The distinction being made is nonetheless clear: alternations involve two or more distinct expression-forms that correspond to the same content, where the choice between them is determined by conditions only represented on the plane of content; while implications involve conditions for the occurrence of one or another form that are present in the expression plane itself. These three classes of {rules}, incidentally, are asserted by {\Hjelmslev} to be mutually exclusive in governing the relation between particular \isi{phonematemes}. This would entail, if correct, the claim that two segments which alternate (under either grammatical or phonological conditions) cannot be systematically related in cluster formation. He illustrates this by arguing that in \ili{German} the voiced and voiceless obstruents which alternate in syllable-final position do not co-occur in clusters. No one has ever actually examined this claim in any detail; if true, it would be a remarkable fact indeed about the sound patterns of natural languages. The notion that the units of a linguistic analysis are to be defined in terms of their role in a network of {rules} is maintained in {\Hjelmslev}'s later, more strictly glossematic work, though much heavier emphasis there is put on {rules} governing distribution than on the principles of \isi{alternation}. The basic idea is clearly related to (and in part derived from) the same proposal by {\Sapir}, discussed below in chapter~\ref{ch.sapir}. Another influence on {\Hjelmslev} in this regard can be traced in his papers on linguistic reconstruction, an enterprise in which he believed that the purely relational character of \isi{taxemes} is strikingly shown. The reconstruction of earlier, unattested stages of a language (or family) proceeds in a way that is completely independent of any actual claims about the pronunciation of that ancestral language, at least in principle. The result is the establishment of a system of pure relations, whose terms are correspondences among phonological elements in related systems, but are not themselves phonetic realities. In this connection, he invokes the notion of `\isi{phoneme}' as used by {\Saussure} in his \textsl{Mémoire}: an element in the system of a reconstructed language, as attested by a unique set of correspondences in the daughter languages. As we have seen above (chapter~\ref{ch.saussure_sound}), {\Saussure}'s later use of the term \emph{phonème} in the sense of `\isi{speech sound}' was from {\Hjelmslev}'s point of view diametrically opposed to this, but in fact {\Hjelmslev} had read the \textsl{Mémoire} and been impressed with it long before he devoted serious attention to {\Saussure}'s work in general linguistics. While the distinctions among expression \isi{taxemes} are purely formal and relational, they usually correspond to surface phonetic differences as well. This is not always the case, however, since substance (here, phonetics) does not alone indicate what is most important about an element of the \isi{linguistic system}: its function, or role in the system of relations. For instance, in his description of \ili{French}, {\Hjelmslev} notes that \isi{schwa} must be kept phonologically apart from {[œ]}, not because they differ in any phonetic way (they do not, at least in `standard', conservative \ili{French}), but rather because \isi{schwa} can be latent (deleted) or facultative (optionally inserted) under specified conditions, while the presence of {[œ]} is constant in a given form. 
It is precisely its behavior with respect to certain {rules} that establishes \isi{schwa} as a distinct element of the system of \ili{French} expression form. Differences of this sort between formal and substantive categories in language show up most clearly when we consider the role in {\Hjelmslev}'s system of (a) \isi{neutralization} or \isi{syncretism}; and (b) reductions in the inventory of \isi{taxemes} due to representing certain elements as combinations or variants of others. I discuss these two aspects of glossematic description below. Neutralization is defined as the ``suspension of a commutation'' under some specifiable conditions. The result of the fact that certain (otherwise contrastive) elements fail to \isi{contrast} under the conditions in question is an \emph{overlapping}; the element that occurs in this position is called a \emph{syncretism}. For example, syllable-final voiced and voiceless obstruents fail to \isi{contrast} in \ili{German}, and so the final element of words like \emph{Bund} `association' and \emph{bunt} `colorful' (both phonetically {[bʊnt]}) is the \isi{syncretism} `t/d'. Clearly, a \isi{syncretism} in this sense is similar to an \isi{archiphoneme} in the Prague school sense (cf. chapter~\ref{ch.prague}), but there are also several differences between the two concepts. For one thing, syncretisms are not limited (as archiphonemes are) to cases in which the elements which fail to \isi{contrast} share certain properties to the exclusion of all other phonological elements in the language. Such a condition would make no sense in {\Hjelmslev}'s system, since \isi{syncretism}s involve elements of linguistic form and not substance, and phonetic features are aspects of substance. Also, \isi{syncretism}s are not limited to the \isi{neutralization} of binary \isi{oppositions}, a condition on archiphonemes imposed somewhat arbitrarily by {\Trubetzkoy}, as noted above in chapter~\ref{ch.prague}. Finally, and perhaps most importantly, \isi{syncretism}s are only posited when there is an actual \isi{alternation} involved (as in the case of \ili{German} \isi{final devoicing}), and not in cases where a particular \isi{contrast} simply fails to appear in a given environment (as in the case of \ili{English} \isi{stops} after {[s]}, where only phonetically voiceless, unaspirated elements occur). The latter are treated simply as instances of \isi{defective distribution} of certain phonological elements: it is a fact about \ili{English} \isi{stops} that, while the voiceless ones appear after {[s]}, the voiced ones do not. Although {\Hjelmslev} maintained that evidence from alternations was necessary to the positing of a \isi{syncretism}, he did not in fact always adhere to this in his practice. Thus, he posits an abstract consonant `h' in \ili{French} (to account for the well-known class of \emph{h-aspiré} words, which begin phonetically with a vowel but behave in liaison as if they began with a consonant). This segment is uniformly syncretic with ∅ (i.e., it is never realized phonetically), despite the fact that there are no alternations to support the \isi{syncretism}. On the other hand, once a \isi{syncretism} is established between certain elements in a certain position, the same analysis is extended to other forms which do not happen to show any \isi{alternation}. 
Thus, \ili{German} \emph{ab} is said to end in the \isi{syncretism} `p/b' (not simply in `p') even though it does not alternate, since other alternating forms establish the syncretisms of voiced and voiceless obstruents in this position. This treatment leads to a difference between two conditions under which a \isi{syncretism} may occur. In the case of alternating forms, related words provide evidence of which element is basic to the \isi{syncretism} (thus, \emph{Bunde} establishes that `d' underlies the \isi{syncretism} `d/t' of \emph{Bund} `bund/t'); while no such evidence is available for non-alternating forms (e.g., \emph{ab}). The \isi{syncretism} in the latter case is said to be \emph{irresoluble}, as opposed to the \emph{resoluble} \isi{syncretism} in `bund/t'. Naturally, the question of whether a given \isi{syncretism} is resoluble or not is a property of individual forms, not of syncretisms themselves, since a \isi{syncretism} that was irresoluble everywhere would lack the sort of basis in actual alternations necessary to establish it in the first place. Syncretisms are divided into several sorts, though the difference is primarily terminological. When the opposition between two elements is suspended in favor of one of them (as when both voiced and voiceless obstruents are represented syllable-finally in \ili{German} by voiceless ones), the \isi{syncretism} is called an \emph{implication}. When the representative of a \isi{syncretism} is distinct from either element (e.g., the \isi{neutralization} of various unstressed vowels in \ili{English} as \isi{schwa}), it is called \emph{fusion}; the same term is applied to a \isi{syncretism} represented by either of the neutralized elements, in free \isi{variation}. An example of this latter situation is furnished by \ili{Danish}, where syllable-final `p' and `b' do not normally \isi{contrast}, but where the \isi{syncretism} can be pronounced as either aspirated or not. A special case of a \isi{syncretism} is a \emph{latency}: this is a \isi{syncretism} between an overt taxeme and ∅. A \ili{French} form such as the adjective \emph{petit}, for example, ends in a `latent' `t'. In fact, in {\Hjelmslev}'s analysis, all final \isi{consonants} in \ili{French} are latent (unless followed by a vowel, such as a \isi{schwa}—itself latent in final position under most circumstances). That is, there is an implication of a consonant to ∅ in this position. Syncretisms form a part of the \isi{phonological system} of a language, and a representation on the \isi{plane of expression} in which all \isi{syncretism}s are indicated has a systematic status. When all possible \isi{syncretism}s are resolved (including the supplying of latent elements, a process called \emph{encatalysis}), we obtain another expression representation which also has systematic status. Such a notation, in which all possible resoluble \isi{syncretism}s are resolved, is called \emph{ideal}, while the notation with \isi{syncretism}s indicated is called \emph{actualized}. It is the \isi{actualized notation} that is directly manifested in substance as a series of \isi{phonematemes}, but the \isi{ideal notation} that serves as the basic expression form of a sign. Diagrammatically, the relation among these elements of a description is as in figure~\ref{fig:glossematic.description}. 
\begin{figure}[ht] \centering \begin{tabular}{cccll} &\isi{ideal notation}&\rdelim\}{3}{1em}&&`bund'\\ \isi{syncretism} {rules}&↓&&(Form)\\ &\isi{actualized notation}&&&`bund/t'\\ manifestation {rules}&↓\\ &\isi{phonematemes}&\}&(Substance)&{[bʊnt]} \end{tabular} \caption{Components of a glossematic description: German \emph{Bund}} \label{fig:glossematic.description} \end{figure} {\Hjelmslev}'s \isi{ideal notation} for expression form is certainly rather abstract. It clearly cannot be recovered uniquely from surface forms, for example, which is a condition of great importance in most other schools of structuralist phonology. His descriptions make clear that, in practice, it is quite similar to \isi{representations} that in other schools were called morphophonemic, or to the underlying \isi{representations} of generative phonology. A number of important differences separate {\Hjelmslev}'s picture of \isi{sound structure} from that of generative phonology, however. One of these concerns the actualized notation, which corresponds to nothing in a generative description, but which is assigned systematic significance in glossematics. On the other hand, the multiple (unsystematic) intermediate \isi{representations} in a classical generative or morphophonemic description have no correspondents in {\Hjelmslev}'s picture, since his {rules} all apply simultaneously rather than in an ordered sequence. Another difference lies in the fact that no syntactic or other grammatical information is in principle allowed in ideal notations—leading, as \citet{basboll71:hjelmslev1,basboll72:hjelmslev2} observes, to rather labored analyses in cases where different word classes show systematically different phonological behavior. This constraint follows, of course, from the fact that such information concerns content, while ideal \isi{representations} are an aspect of linguistic expression, and the two planes are quite distinct in {\Hjelmslev}'s view. As a result, grammatically conditioned alternations are represented as the relation of two systematically different expressions that correspond to the same content, while phonologically conditioned \isi{variation} is represented as a relation between a single ideal expression form and its various actualized correspondents. \section{The role of simplicity in a glossematic description} The distance between {\Hjelmslev}'s `cenematic' \isi{representations} and phonetic ones is further increased by the fact that he makes every possible effort to reduce the taxeme inventory by treating some elements as variants or combinations of others. To this end, he makes extensive use of aspects of \isi{representations} that others might consider arbitrary. An important role in this respect is played by the notion of `\isi{syllable}' (cf. section~\ref{sec:nonseg-struct}). Since {\Hjelmslev} explicitly denied that any phonetic definition of syllables was relevant to their identification and delimitation, he was largely free to posit them wherever they seemed useful. For example, he noted that, in \ili{German}, only {[z]} and not {[s]} occurs in undoubted syllable-initial position (e.g., when word-initial); while {[s]} occurs to the exclusion of {[z]} in undoubted word-final position. Medially, the two \isi{contrast} in, e.g., \emph{reisen} `to travel' (with {[z]}) \emph{vs}. \emph{reißen} `to tear' (with {[s]}); but here {\Hjelmslev} proposes to treat the \isi{contrast} as a matter of the location of \isi{syllable} \isi{boundaries}—`rai.sən' vs. `rais.ən'. 
In this way the two segments are reduced to positional variants of a single one. Similarly, the small number of superficially contrastive instances of the `ich-Laut' [ç] in \ili{German} (in, e.g., \emph{Kuhchen} `little cow', as opposed to \emph{Kuchen} `cake', with {[x]}) are treated as differing in syllabic position (`ku.xən' vs. `kux.ən'), removing the need to posit a rather counterintuitive phonological difference between palatal and velar \isi{fricatives} (see chapter~\ref{ch.structuralists} below for the attempt to represent this difference within American structuralist theory as depending on another sort of inaudible boundary element). Similarly, a single segment may be represented as the manifestation of a cluster. In \ili{Danish} (and other languages), {[ŋ]} can be represented as manifesting `n' before `k' or `g', where `g' is itself often latent in this situation. Thus, apparently distinctive {[ŋ]} can be treated as the only overt manifestation of the cluster `ng'. In this instance, the single segment does not actually manifest a cluster \emph{per se}; but the difference between {[n]} and {[ŋ]} is the difference between simple `n' and `n' forming a cluster with a latent `g'. Somewhat different formally is {\Hjelmslev}'s proposed reduction of aspirated \isi{stops} {[p]}, {[t]}, and {[k]} in \ili{Danish} to clusters of `b', `d', and `g' with `h'. In fact, in initial position, the \isi{stops} \emph{p}, \emph{t} and \emph{k} in \ili{Danish} are aspirated, and thus an analysis of `p' as `bh', etc., would be phonetically realistic; but there are two objections to this. First, the phonetic fact is entirely a matter of substance, and as such strictly irrelevant to the analysis of form. More importantly, though, {\Hjelmslev} in fact writes `hb', `hd', and `hg' in most cases rather than `bh', etc., and there is no phonetic justification whatever for this. Ultimately, he resolves this issue by an appeal to distributional \isi{regularities}, as will be described below. Apart from the choice of `hb' over `bh' for `p', however, this analysis raises a classic problem which is as real for any other theory as for {\Hjelmslev}'s: how far should an analysis go in reducing surface diversity to a small number of basic elements? In its most extreme form, such reduction would allow every language to be reduced to a system of one or two underlying elements, such as the `dot' and `dash' of the Morse code. {\Hjelmslev} explicitly renounces any such reduction, saying reductions should only be made when they are not `arbitrary'; but the problem is precisely to provide a suitable notion of `arbitrary' to constrain analyses. Intuitively, the reduction of {[ŋ]} to `ng' is less arbitrary than that of {[p]} to `hb', but (especially in the absence of considerations of substance) it is hard to make this intuition precise. It cannot be said that {\Hjelmslev} provided any explicit criterion for when a proposed reduction is allowed, and when it is disallowed as `arbitrary'. One might argue that, in fact, the principle {\Hjelmslev} appeals to in distinguishing \isi{syncretism}s from \isi{defective distribution} is of relevance here: that {rules} have to be founded in alternations (in the most general sense of the term) in order to be justified. This condition would allow the representation of \isi{nasal vowels} in \ili{French} as ideal sequences of vowel plus \isi{nasal consonant}, for example, as argued by {\Hjelmslev}, and perhaps the representation of \ili{Danish} {[ŋ]} as `ng'.
It would, however, prohibit the representation of \ili{Danish} \emph{p}, \emph{t} and \emph{k} as combinations of `b', `d' and `g' preceded by `h' (though {\Hjelmslev} does attempt to adduce evidence from alternations in loanwords such as \emph{lak} {[lagͦ]} `lacquer (n.)' \emph{vs}. \emph{lakere} {[lagͦhe':rə]} `to lacquer (vb.)'). More drastically, it would prevent any sort of analysis in which two segments that are in \isi{complementary distribution} (but do not alternate) are represented as the same underlying element: for instance, {\Hjelmslev}'s treatment of \ili{Danish} \isi{syllable} initial {[t]} and final {[d]} as representing the expression taxeme `t', while initial {[d]} and final {[ð]} represent `d'; or his elimination of the vowel {[œ]} from the \ili{Danish} vowel system, as a variant of `ø'. In fact, while the requirement of an \isi{alternation} to support a rule makes obvious sense in the case of \isi{syncretism}s, it is not clear how such a condition could be coherently formulated with reference to {rules} of manifestation—and most of the problematic cases of possibly spurious reductions fall within this area. {\Hjelmslev} does indeed argue for the plausibility of certain reductions to manifestation {rules} by invoking productive {rules}. For example, in discussing quantity in \ili{Lithuanian}, he notes that there are certain semi-productive alternations between long and short vowels; and then observes that the same \isi{alternation}, in terms of its grammatical conditioning, relates the vowels \emph{a} and \emph{o}, and \emph{e} and \emph{ė} (with the first of each of these pairs occurring in the categories that show short vowels, and the second in categories that show long vowels). On the basis of this rule, he concludes that \emph{o} and \emph{ė} should be treated as the long correspondents of \emph{a} and \emph{e}, respectively. Since his analysis treats long vowels as clusters of short vowels, this allows him to eliminate \emph{o} and \emph{ė} completely from the inventory of \ili{Lithuanian} expression \isi{taxemes} (treating them as `a͡a' and `e͡e', respectively). It is by no means clear, however, that similar arguments can be provided for all cases in which some segment is eliminated from the inventory by treating it as a manifestation of another. For {\Hjelmslev}, the overriding condition motivating the reduction of taxeme inventories is that portion of the `\isi{empirical principle}' referred to above that requires a description to be as simple as possible. In fact, the \isi{meaning} of `\isi{simplicity}' here is quite clear and explicit: the simplest description is the one that posits the minimum number of elements. On this basis, subject to the essential but unclarified prohibition of `arbitrary' analyses, the linguist must obviously make every reduction possible. Interestingly enough, the notion of minimizing the number of elements posit\-ed in an analysis has two quite distinct senses for {\Hjelmslev}, with rather different implications. On the one hand, of course, it refers to minimizing the taxeme inventory: it is in this way that the reduction of \ili{Danish} {[p]} to `hb' is motivated by the principle of \isi{simplicity}. In addition, however, the principle is used to motivate the positing of \isi{syncretism}s. 
This is because an analysis which assigns two different expression forms to the same content form (i.e., which treats an \isi{alternation} as grammatically conditioned or suppletive) is taken to posit more `elements' (here, in the sense of sign expressions) than an analysis which assigns a single, constant expression form to a single content form. Thus, the principle that morphological elements should be given unitary underlying forms wherever possible (often taken to be a hallmark of generative phonology, and of morphophonemic analysis) is a governing one in glossematic analyses. Of course, not all differences in the expression of a single element of content can be so treated, as {\Hjelmslev} recognizes: \ili{English} \emph{be}, \emph{am}, \emph{are}, etc., cannot be described by {rules} of \isi{syncretism} in the plane of expression. A considerable range of (often, rather idiosyncratic) alternations \emph{are} so treated, however: in his analysis of \ili{French}, {\Hjelmslev} proposes a specific rule that makes the sequences `sə', `fə' latent before `z' to account for the small number of unusual words like \emph{os} ({[ɔs]} `bone', pl. {[o]} with the ideal notation `osəz') and \emph{bœuf} ({[bœf]} `ox', pl. {[bø]} with the \isi{ideal notation} `bœfəz'). It is essential to recognize that the notion of \isi{simplicity} invoked by {\Hjelmslev} is only applicable to inventories, whether of \isi{taxemes} or sign expressions, and not at all to the {rules} or other statements of \isi{systematic relations} that enter into the analysis. This is at first sight somewhat at variance with his overall theoretical premises: after all, the units (\isi{taxemes}, etc.) of the analysis only have their existence insofar as they are defined by the network of \isi{rules} and relations into which they enter; so it would seem reasonable to claim that the primary way in which an analysis is `simple' is in having simple {rules}. Nonetheless, it is clear that \isi{simplicity} of {rules} plays little systematic role in {\Hjelmslev}'s thinking. For example, the elimination of {[œ]} from the expression taxeme inventory of \ili{Danish} is a comparatively limited gain in comparison with the complexity of the {rules} which are necessary to predict it as a variant of `ø', but this consideration is never raised in relation to the analysis, and it would appear that it was quite irrelevant to the decision to make the reduction in question. Given that the elements of the analysis are supposed to derive their reality only from the {rules} that relate them, a condition such as that part of the \isi{empirical principle} which requires their number to be minimized seems quite unfounded; but we must recognize that {\Hjelmslev} probably approached the issue from a rather different vantage point. After all, previous linguistics (including the earlier forms of \isi{structuralism} with which he was familiar) had discussed sound structure in terms of a theory of \emph{phonemes}, elements which represented in one way or another the minimal contrastive constituents of sound patterns. {\Hjelmslev} argued that these minimal elements of form should be derived in a purely immanent way from the relations among them, and to that extent emphasized the role of the {rules} in grounding an analysis; but he does not seem to have escaped in his own thought from the tendency to hypostatize the terms of these relations.
Though his is clearly a theory that differs from others in the ontological status assigned to `phonemes', it is still primarily a theory of units rather than a theory of relations in its actual content and application. As such, the notion that \isi{simplicity} of relations (and not simply of inventories) should play a role in the theory does not seem to have occurred to him. To the claim that \isi{simplicity} of {rules} played no part in {\Hjelmslev}'s notion of phonological theory, two partial exceptions come to mind—one more significant than the other. On the one hand, he maintained consistently that the following constraint applied to consonant clusters in all (or at least nearly all) languages: if C$_{1}$C$_{2}$C$_{3}$ is a possible cluster, then both C$_{1}$C$_{2}$ and C$_{2}$C$_{3}$ must be possible as well. In other words, all clusters of more than two elements must be made up of sequences that are well formed in a local, pairwise fashion as well. It is this constraint, in fact, that causes him to represent \ili{Danish} {[p]} as `hb' rather than `bh' in many cases: if a cluster such as {[pl]} were represented as `bhl', this would violate the condition because `hl' is not otherwise possible in \ili{Danish}. The relevance of this constraint to the \isi{simplicity} issue comes from the fact that it might be seen as a requirement that the {rules} of consonant clustering be `simple' in the particular sense that the {rules} for long clusters be reducible to the {rules} for shorter ones. The notion that this is a question of \isi{simplicity} of {rules}, however, does not seem very plausible. {\Hjelmslev} fairly clearly regarded this generalization about clusters as having the status of an independent principle of linguistic structure, and not at all as a theorem to be derived from the requirement of \isi{simplicity} applied to the {rules} of an analysis. Another fact is perhaps more significant. Recall that, as stated above, a large number of formal analyses are typically provided by the theory of glossematics for any particular language, where the choice among them is to be made on the basis of which analysis is most appropriate to the substance in which the language is realized. This cannot be interpreted otherwise than as the requirement that the {rules} of manifestation be maximally simple. Of course, since the {rules} of manifestation are not an aspect of linguistic form at all \emph{per se}, it is clear that the requirement of \isi{simplicity} in the empirical principle (a principle that governs the range of possible linguistic forms) cannot be responsible for this; but it is nonetheless a consideration which would have a role to play in a more fully elaborated \isi{glossematic theory} of \isi{sound structure}. Simplicity of {rules}, then, may play a role in the relation between linguistic form and its manifestation, but apparently not in the {rules} underlying linguistic form itself. From this, and the expulsion of all phonetic (or `substance') considerations from the analysis of form, it might seem that the theory is hopelessly unable to deal adequately with the nature of linguistic structure. One could maintain, for example, that no analysis of \ili{German} can be said to describe the sound pattern of the language unless it treats the changes of /b/ to /p/, /d/ to /t/, etc. 
in \isi{syllable} final position as aspects of a unitary fact; and it is only the requirement of \isi{simplicity} of {rules}, combined with the definition of the segments involved as a substantive natural class, that has this consequence. To this objection, {\Hjelmslev} would probably have replied that it states the issue backwards. In his terms, that is, it is the fact that /b/, /d/, /g/, etc. undergo syllable-\isi{final devoicing} that establishes the link among them, not their phonetic similarity. Different languages containing the same segments may assign them quite different phonological properties (a point made most explicitly by {\Sapir}; cf. chapter~\ref{ch.sapir} below), and so phonetic properties cannot be taken as diagnostic in themselves of whether or not a class of segments is phonologically `natural'. The very fact that the implications affecting all of the voiced obstruents in \ili{German} have the same form is what constitutes the similarity among the segments involved—not the substance property of voiced implementation. Of course, to make this notion more explicit it is still necessary to develop a notion of when a set of implications (or other {rules}) are relevantly `similar' to one another; but it is clear that such a notion of rule similarity, implicit in glossematic notions, could be formulated in a purely immanent way—that is, based on the form of {rules} rather than on the substantial content of either {rules} or segments. In fact, it was just such a principle of formally based rule collapsing that underlay the notion of \isi{simplicity} characteristic of early work in generative phonology (chapter~\ref{ch.spe}). In this theory it was in effect proposed that the only intrusion of substance into this question was through a universal phonetically based notation system (the set of \isi{distinctive features}) for linguistic forms. A perceived failure of this theory due directly to its attempt to disregard the substance of {rules} (as opposed to \isi{representations}) was responsible for the addition of the notion of `\isi{markedness}' in the final chapter of \citet{spe}. These issues take us increasingly away from the theory of glossematics, at least as far as it exists in {\Hjelmslev}'s writings and analyses. It is worth noting, however, that that theory raises rather directly a number of questions that are less evident in relation to other forms of \isi{structuralism}, and that would only be treated systematically in much later work. \section{Nonsegmental structure in glossematic phonology} \label{sec:nonseg-struct} Another aspect of {\Hjelmslev}'s theory which distinguishes it from most of its contemporaries, and which has considerable relevance to present-day work is the attention he paid to phonological structure and properties that cannot be localized within the scope of a single segment. Of course, other theories of phonology recognized the existence of a certain range of `suprasegmental' properties, and in fact the major contribution of the British school of \isi{prosodic analysis} (chapter~\ref{ch.firth}) was precisely in this area. Nonetheless, {\Hjelmslev} is unique among structuralists in the importance he accorded to questions of \isi{syllable} structure and prosodic phenomena within a primarily segmental framework. 
\begin{wrapfigure}{l}{.35\textwidth} \includegraphics[width=.9\textwidth]{figures/Hjelmslev.jpg} \caption{Louis Hjelmslev} \label{fig:ch.hjelmslev.hjelmslev} \end{wrapfigure} Centrally, {\Hjelmslev} regarded a text as organized in a hierarchical fashion: into paragraphs each of which can be divided into sentences, which can in turn be divided into clauses that are divisible into phrases, etc. Of particular interest for phonological analysis, a phrase (which can represent a complete utterance by itself) is divided into syllables, and each \isi{syllable} is divided into segments. Syllables thus play an important role in organizing the utterance: they are the building blocks of phrases, and the domains within which the distribution of segments is to be specified. It might be possible to give a phonetic definition of the \isi{syllable}, but (as noted above), this would be irrelevant to the analysis of linguistic form even if such a substance-based characterization were available. What matters to the analysis of linguistic form is a functional definition of the \isi{syllable}, and {\Hjelmslev} proposes several at various stages in his writings. The one he finally settles on, which appears prominently in his descriptive work, is the following: a \isi{syllable} is the hierarchical unit of organization that bears one and only one \isi{accent}. To understand this, we must of course ask how an `\isi{accent}' is to be defined. The answer rests on {\Hjelmslev}'s view of the nature of phonologically relevant properties. In the course of an analysis (i.e., a division of a text into successively smaller hierarchical units), one eventually arrives at units that cannot be further subdivided (essentially, at the division of the utterance into segments). These units can be said to constitute the chain that is a text. But in addition to these, other relevant properties appear in the text which are not localized uniquely in a single such unit. Examples of such properties are intonations (which occur over an entire utterance), \isi{stress} (which occurs over an entire phonetic \isi{syllable}), \isi{pitch} accents in languages like \ili{Lithuanian}, which occur over a sequence of vowel(s) and following sonorants, etc. Another example is the \emph{stød} in \ili{Danish}: though this is realized in its strongest form \citep{gronnum.basboll01:stod,gronnum.etal13:stod} as a quasi-segmental element (a glottal stop, with associated perturbation of laryngeal activity), {\Hjelmslev} analyzes it as a signal for certain types of \isi{syllable} structure. The stød is a property of certain segmental patterns within a larger hierarchical unit, rather than a segment \emph{per se}. Such an element that ``characterizes the chain without constituting it'' is called a \emph{prosodeme}: these are in turn divided into two types: \emph{modulations}, which characterize an entire utterance as their minimal domain (e.g., intonation patterns that characterize questions), and \emph{accents}, which do not (e.g., \isi{stress}, \isi{pitch} \isi{accent}, the stød, etc.). It is in terms of this notion that {\Hjelmslev} defines the \isi{syllable} as a hierarchical unit bearing one and only one \isi{accent}. Interestingly, since (on his analysis) none of these elements are present in \ili{French}, he concludes that in the absence of accents, \ili{French} has no syllables \citep{hjelmslev39:syllable}. 
Such a conclusion is typical of {\Hjelmslev}'s intellectual style: he liked very much to use the consequences of a rigorous set of definitions to derive striking, indeed shocking, conclusions. This theory bears some resemblances to the proposals of Metrical Phonology, at least in outline (chapter~\ref{ch.otlabphon}). Both depend essentially on the notion that utterances are organized into hierarchical units, and both claim that certain properties are associated with units at one level while others are associated with units at another level. The view of \isi{stress} as a property of syllables, for example, can be opposed to \posscitet{spe} view of \isi{stress}, which treats it as a property of individual vowels (just as, e.g., height, backness, or rounding is a property of a vowel). As opposed to Metrical Phonology, {\Hjelmslev} appears to treat \isi{stress} not as a relation between syllables but as a property (typically `strong \isi{stress}' vs. `weak \isi{stress}') which is assigned to a particular \isi{syllable}. Since `weak \isi{stress}' presupposes `strong \isi{stress}', though, and this relation might appear at several \isi{levels}, it may well be that apparent differences between {\Hjelmslev}'s view of the nature of \isi{stress} and that characteristic of Metrical Phonology are merely a matter of notation. An interesting aspect of {\Hjelmslev}'s theory of \isi{syllable} structure is that he uses it to define the notions of vowel and consonant. A vowel is defined as a segment that can constitute a \isi{syllable} by itself, or as one that has the same distribution as such a segment. Consonants are segments that do not fall into this category and that can appear in various positions dependent on vowels.\footnote{For \ili{French}, for which, as noted above, his account mandates the absence of syllables, {\Hjelmslev} establishes a category of ``pseudo-vowels'' on the basis of words consisting of only a single segment (\emph{à, ai, y, eau, ou, eu}) in order to make the relevant distinctions \citep[217]{hjelmslev70:french}.} The definitions offered are not always as precise and adequate as one might wish, but the view of syllabicity that is involved is fairly clear. Syllables, that is, contain an obligatory \isi{nucleus} and various optional peripheral positions (which depend on the particular language). A segment occupying the position of the \isi{nucleus} is \emph{ipso facto} a vowel (regardless, as {\Hjelmslev} points out, of its articulatory properties: liquids, nasals, and even some obstruents can be `vowels' if they occupy the appropriate position in the \isi{syllable}), while a segment whose dependence on such a \isi{nucleus} has to be specified within the \isi{syllable} (i.e., part of an onset or syllable-final margin) is \emph{ipso facto} a consonant. The \isi{syllable} as a hierarchical unit thus has properties (e.g., accents) associated directly with it, rather than with its constituent segments; it also has an internal structure which is essential to defining the traditional notions of vowel and consonant. As a hierarchical unit of linguistic form, it must of course first be identified in terms of some relational properties it exhibits, and the basis of the unit in these terms is its role in the statement of segmental distributions.
For {\Hjelmslev}, this was an extremely important fact: the \isi{syllable} is the domain within which the grouping properties of segments are defined, and (aside from special restrictions that refer to \isi{boundaries} of the larger unit, the utterance) this is the only such unit. He explicitly denies, for example, that there are any grouping restrictions that apply precisely to units of the size of a morpheme, insofar as this is not coextensive with a \isi{syllable}. This impossibility is claimed to follow from the separation of the planes of expression and content: morphemes, as content units, have no autonomous existence on the expression plane. Any limitations on the distribution of expression units must thus be stated in terms of properties of the expression plane, and it is exactly there that syllables have their reality. Actually, since {\Hjelmslev} appears to count sign expressions as units whose inventory should be minimized, it is not clear that this consequence follows; but it is clear that {\Hjelmslev} intended the \isi{syllable} to have this central role as the locus of distributional restrictions. One other such strange conclusion results from the definition of a `consonant' . A segment only qualifies for this status insofar as its distribution within the \isi{syllable} has to be specified; thus, a hypothetical language with exclusively (C)V syllables, in which the distribution of any consonant is completely determined by the fact of its belonging to a specific \isi{syllable}, would have no `\isi{consonants}'. The consonantal segments, that is, could be treated as a sort of prosodic property of their syllables. In fact, he describes the earliest reconstructable stage of \ili{Indo-European} as having this property: all syllables were open, and instead of having `\isi{consonants}' in the strict sense, the language had a large inventory of `converted prosodemes'. This is claimed to be an unstable situation, which led to radical restructuring of the \ili{Indo-European} \isi{phonological system}. Though there is considerably more to be said about {\Hjelmslev}'s view of non-segmental structure in phonology, I will close the discussion here. The point of citing this aspect of \isi{glossematic theory} is not to argue that {\Hjelmslev} had important insights here that have otherwise been lost, but simply to point out the central role he assigned to such structure. Most other forms of structuralist phonology concentrated their attention on segmental structure alone, and attempted insofar as possible to treat other phenomena (such as \isi{stress} and \isi{accent}) as segmental properties. Here as elsewhere, the theory of glossematics occupies a unique position within the structuralist tradition, one which is closer in some ways to that of present-day phonology than to those of his contemporaries. \section{Eli Fischer-Jørgensen} \label{sec:eli} There is mild irony in the fact that following {\Hjelmslev}'s insistence that the fundamental questions of linguistics concerned matters of form and not substance, the international reputation of Denmark as a center of important work in the field should have been carried on by a research community primarily known as phoneticians. Nonetheless, it is undeniable that {\Eli},\footnote{Throughout her career, Eli Fischer-Jørgensen preferred to be called simply ``Eli''. In the present section, I follow that usage. 
For a rich and detailed account of Eli's life and career, see \citealt{skytte16:eli}.} a phonetician, was Denmark's most prominent general linguist in the years following {\Hjelmslev}'s death, and it was her associated students and colleagues who continued the tradition of {Danish} linguistics. The transition was not a completely abrupt one, however. {\Eli} was born in Nakskov in Lolland on 11 February, 1911. At the age of 8, her family moved to Fåborg in Funen where she spent her early school days, going to the gymnasium in nearby Svendborg. In 1929, she entered the University of Copenhagen where she began her studies in \ili{German} and \ili{French}. She already had some interest in linguistics and in literature, but her first experiences in phonetics were somewhat off-putting: the courses in \ili{German} and \ili{French} phonetics she took consisted mainly of learning physiological descriptions and making transcriptions from written texts, rather than actually producing any sounds. Courses in \ili{German} linguistics with \name{Louis}{Hammerich} and in \ili{French} with \name{Kristian}{Sandfeld} and \name{Viggo}{Brøndal} were somewhat better, and {\Brøndal}'s course in \ili{French} phonetics was more appealing than her earlier experiences. It was only when she took a course in \ili{Danish} phonetics with \name{Poul}{Andersen}, however, that she began to engage with that field. Her primary interests at this time, though, were in general linguistics, and she was eager to read the work of {\Saussure}, {\Meillet}, Schuchardt\ia{Schuchardt, Hugo} and {\Jespersen}, among others. When the initial volumes of the Prague \textsl{Travaux} appeared, {\Hammerich} drew her attention to them, and loaned her the books. These introduced her to the work of {\Trubetzkoy} and {\Jakobson}, which greatly appealed to her, although she was critical of the extent to which the early phonologists ``passed too lightly over the phonetic substance which at the start was pushed somewhat aside as belonging to natural science'' \citep[62f.]{fischer-jorgensen81:causerie}, a reaction which would constitute a recurring theme in her relation with other linguists. Already as a student she became a member of the recently established \isi{Linguistic Circle of Copenhagen}, and greatly enjoyed the discussions there involving {\Brøndal}, {\Hjelmslev} and others. She also attended {\Hjelmslev}'s lectures on \name{Rasmus}{Rask} and on \posscitet{grammont33:traite} historical phonetics. After receiving her MA in 1936 for a thesis on the importance of dialect geography for an understanding of \isi{sound change}, she received a scholarship to study in Germany. She spent two terms in Marburg, a center for dialect studies, but found the work there of little interest. She was by now greatly interested in phonology, in part because of her appreciation for the \isi{Prague School} phonologists and in part as a rejection of an earlier interest in syntax (she had written a prize essay in Copenhagen on the definition of the sentence, and ``was fed up with \ldots\ all the pseudo-philosophical twaddle {[she]} had had to read for this purpose'' \citealt[63]{fischer-jorgensen81:causerie}). She wrote to {\Trubetzkoy} and proposed studying with him in Vienna; he replied very positively (according to {\Jakobson}, the last letter in his hand before his death), but shortly afterwards died so that she never actually met him.
\begin{wrapfigure}{r}{.3\textwidth} \includegraphics[width=.9\textwidth]{figures/EFJ_young.jpg} \caption{Eli Fischer-Jørgensen (1949)} \label{fig:ch.hjelmslev.efj-young} \end{wrapfigure} Instead, she went to Paris, where she studied phonology with \name{André}{Martinet} (chapter~\ref{ch.martinet} below) and experimental phonetics with \name{Pierre}{Fouché} and \name{Marguérite}{Durand}, as well as attending lectures by \name{Émile}{Benveniste} on \ili{Indo-European}. Just before going to Paris, in July of 1938, she attended the Third International Congress of Phonetic Sciences in Ghent, where she met {\Jakobson} for the first time. Following her time in Paris, she was invited to work on phonetics with \name{Eberhardt}{Zwirner} in Berlin. She was greatly impressed with {\Zwirner}, but his work was interrupted as a result of political difficulties, and he was called up for service in the {German} army. She left Berlin to return to Denmark just two weeks before war broke out. {\Hammerich} hired {\Eli} as a teaching assistant in \ili{German}, and she was able to continue her studies in phonetics. In 1943, she was appointed as a Lecturer in Phonetics, a new post under the chair of linguistics occupied by {\Hjelmslev}. Opportunities for experimental work in phonetics were initially quite limited, due to a lack of equipment and facilities, but gradually through her own efforts and with {\Hjelmslev}'s support, she developed a laboratory adequate to support her program. During the {German} occupation of Denmark in World War II, {\Eli} was engaged in rather dangerous work with the resistance group led by Prof. \name{Carsten}{Høeg} in assembling files on Nazi collaborators for prosecution after the war (\citealt[67ff.]{skytte16:eli}, \citealt{efj.ege05:resistance}). {\Hjelmslev} was already the primary figure in {Danish} linguistics at the time, and his book \citealt{hjelmslev43:prolegomena}, the main basis on which his ideas would be known outside Denmark, was the center of discussion in the Copenhagen circle for many years. Her review of the book \citep{efj43:rvw.hjelmslev} in the same year served as a landmark both in the exposition of {\Hjelmslev}'s ideas, introducing these to students and others in somewhat more readable form, and in their critical evaluation. After the war, {\Eli} received a scholarship for a year's study abroad. On {\Hjelmslev}'s advice, she went to London where she could receive serious training in phonetics. While there, she attended lectures by \name{Daniel}{Jones} at University College and also ones by J. R. {\Firth} at the School of Oriental Studies, as well as doing practical phonetics work on \ili{English} and \ili{French} with \name{Hélène}{Coustenoble}. In 1949, she renewed her acquaintance with \name{Roman}{Jakobson}, initiating a correspondence that continued until his death in 1982. These letters, collected as \citealt{early.years:efj-rj.letters}, document an intellectual interaction at times quite sharp but always cordial. {\Jakobson} was quite interested in discussing the relation between {\Hjelmslev}'s views and his own theory of distinctive features. They met in person again when {\Jakobson} visited Copenhagen in May, 1950, and in 1952 {\Eli} received a Rockefeller Scholarship to visit the US. She spent five weeks at MIT, where she had a visiting appointment arranged with {\Jakobson}'s support. He was very eager for her to spend the rest of the year working on a project with him, but she found it impossible to stay on. 
The circumstances are not entirely clear, but after her departure from Cambridge, her subsequent letters to him went unanswered until they met again in 1957 at the Eighth {International Congress of Linguists} in Oslo. After her stay at MIT, she went to New York to work with the group of phoneticians at Haskins Laboratories. She was particularly taken with the possibilities offered for speech synthesis research by the pattern-playback machine. From there, she went to Oklahoma to study briefly with \name{Kenneth}{Pike} before returning to Copenhagen. Other collaborations included a stay in Stockholm in 1954 to work with \name{Gunnar}{Fant}. {\Eli} always downplayed her \isi{competence} in the domain of physical \isi{acoustics}, but she was particularly delighted to have been the official opponent at {\Fant}'s defense of his doctoral thesis \citep{fant60:acoustic.theory}. \begin{wrapfigure}[16]{l}{.35\textwidth} \includegraphics[width=.9\textwidth]{figures/Eli Fischer-Jørgensen.jpg} \caption{Eli Fischer-Jørgensen (1968)} \label{fig:ch.hjelmslev.efj} \end{wrapfigure} Over the following years, she devoted herself to a rich program of teaching and research in phonetics. She was appointed to a personal professorship in phonetics in 1966, associated with the establishment of her own department, the Institut for Fonetik. The program thrived, and with it her research and that of her colleagues. In 1968 she was admitted to the Royal {Danish} Academy of Sciences and Letters, the first woman to become a domestic member. In 1979, the Ninth International Congress of Phonetic Sciences was held in Copenhagen, largely organized by {\Eli}. In 1981, she reached the mandatory retirement age and withdrew somewhat (but not totally) from academic life. She died in February, 2010, soon after her 99th birthday. It would be hard to argue that \name{Eli}{Fischer-Jørgensen} was a major innovative figure in the development of phonological theory. As she says, from her first participation in the discussions of the \isi{Linguistic Circle of Copenhagen} she ``realized the great difference between being able to understand a theory and being able to create a theory'' \citep[63]{fischer-jorgensen81:causerie}. And she contributed greatly to the field by being able to understand a wide variety of related but quite different theoretical positions, and to bring that understanding to bear in confronting those theories with one another's insights and with the hard facts elicited in the phonetics laboratory. Familiar at first hand with the work and opinions of {\Hjelmslev}, {\Jakobson}, {\Martinet}, {\Jones}, {\Firth}, {\Pike} and others, she was uniquely positioned to provide a balanced view of the comparative merits and deficiencies of a variety of notions of what ``phonology'' ought to be, as illustrated in her history of the field \citep{fischer-jorgensen75:trends}. This was particularly true of her position between {\Hjelmslev} and {\Jakobson}, two important figures whose views were quite at odds but who maintained very friendly relations (perhaps dating to the cordial reception {\Hjelmslev} and other {Danish} linguists offered to {\Jakobson} in his initial escape from the Germans at the beginning of the war). {\Eli} was able to bring the arguments of each to bear on the other, and also to argue against what she saw as the weaknesses in each of their positions. In both cases, her objections follow from her experience as a phonetician.
\begin{wrapfigure}[16]{r}{.4\textwidth} \includegraphics[width=.9\textwidth]{figures/eli_lab.jpg} \caption{Eli Fischer-Jørgensen in the Phonetics Lab (1981)} \label{fig:ch.hjelmslev.efj_lab} \end{wrapfigure} In the case of {\Hjelmslev}'s views, she points out in a series of papers (including \citealt{efj43:rvw.hjelmslev,efj52:distribution} and elsewhere) that the Glossematic attempt to analyze linguistic structure as purely algebraic in character, with no grounding in phonetic reality on the ``expression'' side, dating back already to \posscitet{hjelmslev.uldall35:phonematics} London Congress presentation, could not really be carried out. For one thing, the only way the realizations of formal elements in different structural positions could be satisfactorily identified with one another (e.g. initial {[t]} and post-vocalic {[d]} in \ili{Danish}) must involve at least an implicit appeal to their phonetic identity. Other specific problems with important aspects of the theory such as the Commutation Test \citep{efj56:commutation}, and the overall relation between the notions of Form and Substance \citep{efj66:form.and.substance} emerged from her extensive discussions with {\Hjelmslev}, in meetings of the Copenhagen Linguistic Circle and elsewhere. These differences, along with areas of agreement between them, are surveyed in \citealt[chap. 7]{fischer-jorgensen75:trends}. {\Eli}'s relation to {\Jakobson}'s views was somewhat more nuanced, but again grounded in her respect for phonetic facts. One point of contention between them concerned {\Jakobson}'s insistence that phonological features had to be binary in character, though they agreed to disagree about this. \begin{quotation} I remember once meeting him in America --- it was in the beginning of our acquaintance --- and his first words were ``I know you are my enemy!'' I was very astonished and asked him how he had got that idea, and he answered: ``You do not believe in the binary principle''. So I explained that I found the binary principle very important, but speaking a language with four degrees of aperture in the vowels it was somewhat difficult for me to accept the binary principle as an absolute universal. He understood this, and we have been friends ever since. \citep[23]{efj97:jakobson} \end{quotation} {\Eli}'s respect for {\Jakobson}'s ideas went back to her initial enthusiasm for the publications of the Prague phonologists, though from the beginning she had reservations about the extent to which the phonological perspective was claimed to exclude phonetics, relegating this to the realm of the natural sciences. It was particularly \posscitet{jakobson:kindersprache} \textsl{Kindersprache} that fascinated her, and {\Eli} was generally approving of his efforts to ground the \isi{distinctive features} in phonetic definitions. Nonetheless, she had a number of objections to {\Jakobson}'s specific program in this regard, summarized in \citealt[162ff.]{fischer-jorgensen75:trends}. Over the years, as documented in the letters collected by \citet{early.years:efj-rj.letters}, she showed no reluctance to challenge {\Jakobson}'s position from the secure basis of her work as a phonetician. It seems reasonable to suggest that just as {\Eli}'s view of phonology was strongly shaped by her training and research in phonetics, so also the phonetic topics on which she worked extensively (e.g. \ili{Danish} \emph{stød}, \citealt{efj87:stod}; the properties distinguishing ``voiced'' and ``voiceless'' \isi{stops}, \citealt{efj68:stops} and elsewhere, etc.)
were informed by insights from phonological analysis. This productive interplay between a variety of approaches to phonology and the hard data obtained in the phonetics lab characterized her research program and that of her institute, and was continued in the way linguistics in Denmark maintained its significance, through {\Eli}'s students and colleagues such as Jørgen {\Rischel}, \name{Nina}{Grønnum}, \name{Hans}{Basbøll} and others. %%% Local Variables: %%% mode: latex %%% TeX-master: "/Users/sra/Dropbox/Docs/Books/P20C_2/LSP/main.tex" %%% End:
{ "alphanum_fraction": 0.7950792104, "avg_line_length": 60.9017857143, "ext": "tex", "hexsha": "1a6ed7a6e0a0473c8bd37dcbf16f7b6fc5811710", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/327", "max_forks_repo_path": "chapters/07_hjelmslev.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/327", "max_issues_repo_path": "chapters/07_hjelmslev.tex", "max_line_length": 144, "max_stars_count": null, "max_stars_repo_head_hexsha": "3793c564a7fc6dd5b201a2479f0bcd21ca265c87", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/327", "max_stars_repo_path": "chapters/07_hjelmslev.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 28901, "size": 115957 }
\section{Instance models and instance graphs} \label{sec:transformation_framework:instance_models_and_instance_graphs} In the previous section, the structure of the framework was applied to type models and type graphs. In this section, the structure will be applied to instance models and instance graphs. Since instance models and instance graphs directly depend on type models and type graphs, some definitions will be borrowed from the previous section. First, the general structure of the framework applied to instance models and instance graphs is discussed. Then the required definitions and theorems are given. \begin{figure} \centering \begin{tikzpicture} \path (-3,4) node[circle,draw,minimum size=10mm,inner sep=0pt](ME) {$O$} (-4.5,2) node[circle,draw,minimum size=10mm,inner sep=0pt](MA) {$Im_A$} (-1.5,2) node[circle,draw,minimum size=10mm,inner sep=0pt](MB) {$Im_B$} (-3,0) node[circle,draw,minimum size=10mm,inner sep=0pt](MAB) {$Im_{AB}$} (3,4) node[circle,draw,minimum size=10mm,inner sep=0pt](GN) {$N$} (1.5,2) node[circle,draw,minimum size=10mm,inner sep=0pt](GA) {$IG_A$} (4.5,2) node[circle,draw,minimum size=10mm,inner sep=0pt](GB) {$IG_B$} (3,0) node[circle,draw,minimum size=10mm,inner sep=0pt](GAB) {$IG_{AB}$}; \path[] (ME) [-, black, out=240, in=90] edge node[above] {} (MA) (ME) [-, black, out=300, in=90] edge node[above] {} (MB) (MA) [-{Latex[width=5]}, black, out=270, in=90] edge node[above] {} (MAB) (MB) [-{Latex[width=5]}, black, out=270, in=90] edge node[above] {} (MAB) (GN) [-, black, out=240, in=90] edge node[above] {} (GA) (GN) [-, black, out=300, in=90] edge node[above] {} (GB) (GA) [-{Latex[width=5]}, black, out=270, in=90] edge node[above] {} (GAB) (GB) [-{Latex[width=5]}, black, out=270, in=90] edge node[above] {} (GAB) (ME) [-{Latex[width=5]}, black, out=25, in=155] edge node[above] {$f$} (GN) (GN) [-{Latex[width=5]}, black, out=165, in=15] edge node[above] {} (ME) (MA) [-{Latex[width=5]}, black, out=35, in=145] edge node[above] {$f_A$} (GA) (GA) [-{Latex[width=5]}, black, out=155, in=25] edge node[above] {} (MA) (MB) [-{Latex[width=5]}, black, out=35, in=145] edge node[above] {$f_B$} (GB) (GB) [-{Latex[width=5]}, black, out=155, in=25] edge node[above] {} (MB) (MAB) [-{Latex[width=5]}, black, out=25, in=155] edge node[above] {$f_{A} \sqcup f_{B}$} (GAB) (GAB) [-{Latex[width=5]}, black, out=165, in=15] edge node[above] {} (MAB) ; \end{tikzpicture} \caption{Structure for transforming between instance models and instance graphs} \label{fig:transformation_framework:instance_models_and_instance_graphs:structure_instance_models_graphs} \end{figure} \cref{fig:transformation_framework:instance_models_and_instance_graphs:structure_instance_models_graphs} shows one more variation of the structure proposed in \cref{sec:transformation_framework:structure}. This version of the structure is applied to instance models and instance graphs. As before, instance model $Im_A$ represents the partially built model which corresponds to instance graph $IG_A$ under the transformation function $f_A$. Instance model $Im_B$ represents the next building block to add to this model. It corresponds to instance graph $IG_B$ under the bijective transformation function $f_B$. Instance models $Im_A$ and $Im_B$ are entirely distinct except for a set of objects $O$, which means $O \subseteq Object_{Im_A} \land O \subseteq Object_{Im_B}$. In a similar way, instance graphs $IG_A$ and $IG_B$ are entirely distinct except for a set of nodes $N$, so $N \subseteq N_{IG_A} \land N \subseteq N_{IG_B}$.
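To give some intuition for the arrow labelled $f_{A} \sqcup f_{B}$ in \cref{fig:transformation_framework:instance_models_and_instance_graphs:structure_instance_models_graphs}, the merged transformation function can be pictured, at the level of objects, as the piecewise union of the two original mappings. This is only an illustrative sketch of the general shape of the construction, not a restatement of the formal definitions given in the remainder of this section: \[ (f_{A} \sqcup f_{B})(x) = \begin{cases} f_{A}(x) & \text{if } x \in Object_{Im_A}\\ f_{B}(x) & \text{if } x \in Object_{Im_B} \end{cases} \] Such a union can only be well-defined if $f_A$ and $f_B$ agree on the shared objects $O$, which corresponds in the figure to $O$ being mapped to the shared nodes $N$ by the arrow $f$ at the top.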
Instance models $Im_A$ and $Im_B$ are combined into instance model $Im_{AB}$ using \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_models:combine}. In a similar way instance graphs $IG_A$ and $IG_B$ are combined into instance graph $IG_{AB}$ using \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_graphs:combine}. \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_models:imod_combine_merge_correct} and \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_instance_graphs:ig_combine_merge_correct} respectively show that $Im_{AB}$ and $IG_{AB}$ are valid. Then \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:combination_transformation_function_instance_model_instance_graph} and \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:combination_transformation_function_instance_graph_instance_model} can be used to merge the transformation functions $f_A$ and $f_B$ into $f_{A} \sqcup f_{B}$, where \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:ig_combine_mapping_correct} and \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:ig_combine_mapping_function_correct} show that $f_{A} \sqcup f_{B}$ is again a valid transformation function transforming $Im_{AB}$ to $IG_{AB}$. Similarly, \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:imod_combine_mapping_correct} and \cref{defin:transformation_framework:instance_models_and_instance_graphs:combining_transformation_functions:imod_combine_mapping_function_correct} show that the inverse function of $f_{A} \sqcup f_{B}$ is again a valid transformation function transforming $IG_{AB}$ to $Im_{AB}$. \input{tex/04_transformation_framework/04_instance_models_and_instance_graphs/01_combining_instance_models.tex} \input{tex/04_transformation_framework/04_instance_models_and_instance_graphs/02_combining_instance_graphs.tex} \input{tex/04_transformation_framework/04_instance_models_and_instance_graphs/03_combining_transformation_functions.tex}
{ "alphanum_fraction": 0.7633487146, "avg_line_length": 101.1333333333, "ext": "tex", "hexsha": "cdc584447de8b2c1d6911c1c1568e6f080dfbe37", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_forks_repo_licenses": [ "AFL-3.0" ], "max_forks_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_forks_repo_path": "thesis/tex/04_transformation_framework/04_instance_models_and_instance_graphs.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AFL-3.0" ], "max_issues_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_issues_repo_path": "thesis/tex/04_transformation_framework/04_instance_models_and_instance_graphs.tex", "max_line_length": 1999, "max_stars_count": null, "max_stars_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_stars_repo_licenses": [ "AFL-3.0" ], "max_stars_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_stars_repo_path": "thesis/tex/04_transformation_framework/04_instance_models_and_instance_graphs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1744, "size": 6068 }
%!TEX root = ../main.tex \clearpage \chapter*{Nondisclosure Statement} \thispagestyle{scrheadings} The \textit{Bachelor Thesis} at hand \begin{center}{\itshape{} \thetitle\/}\end{center} contains internal and confidential data of \textit{\thecompany}. It is intended solely for inspection by the assigned examiner, the \textit{Digital Service Innovation} department and, if necessary, the Audit Committee of the \theuniversity in \theplace. It is strictly forbidden \begin{itemize} \item to distribute the content of this paper (including data, figures, tables, charts, etc.) as a whole or in extracts, \item to make copies or transcripts of this paper or of parts of it, \item to display this paper or make it available in digital, electronic or virtual form. \end{itemize} Exceptional cases may be considered, provided permission is granted in written form by the author and the company. \vspace{3em} \theplace, \thedate \vspace{4em} \rule{6cm}{0.4pt}\\ \theauthor
{ "alphanum_fraction": 0.7692307692, "avg_line_length": 41.1666666667, "ext": "tex", "hexsha": "94453ff90d6ee1aacdf72d88bba71096f2143c05", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d4478cb06264133c9e86eff98428320047d11b5e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Quintinius97/dhbw-latex-template", "max_forks_repo_path": "adds/nondisclosurenotice.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d4478cb06264133c9e86eff98428320047d11b5e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Quintinius97/dhbw-latex-template", "max_issues_repo_path": "adds/nondisclosurenotice.tex", "max_line_length": 287, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d4478cb06264133c9e86eff98428320047d11b5e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Quintinius97/dhbw-latex-template", "max_stars_repo_path": "adds/nondisclosurenotice.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-13T15:31:51.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-19T08:57:50.000Z", "num_tokens": 251, "size": 988 }
% Copyright 2006 INRIA % $Id: messages.tex,v 1.2 2006-03-01 14:39:03 doligez Exp $ \chapter{Warnings and error messages}\label{chap:messages}
{ "alphanum_fraction": 0.7310344828, "avg_line_length": 29, "ext": "tex", "hexsha": "a6f36712e515f2c9a53acca0dde81283eb488176", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2021-11-12T22:18:25.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-26T19:58:37.000Z", "max_forks_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "damiendoligez/tlapm", "max_forks_repo_path": "zenon/doc/messages.tex", "max_issues_count": 49, "max_issues_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_issues_repo_issues_event_max_datetime": "2022-02-07T17:43:24.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-04T18:13:13.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "damiendoligez/tlapm", "max_issues_repo_path": "zenon/doc/messages.tex", "max_line_length": 60, "max_stars_count": 31, "max_stars_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "damiendoligez/tlapm", "max_stars_repo_path": "zenon/doc/messages.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-19T18:38:07.000Z", "max_stars_repo_stars_event_min_datetime": "2016-08-16T14:58:40.000Z", "num_tokens": 53, "size": 145 }
% Autogenerated translation of publications.md by Texpad % To stop this file being overwritten during the typeset process, please move or remove this header \documentclass[12pt]{book} \usepackage{graphicx} \usepackage{fontspec} \usepackage[utf8]{inputenc} \usepackage[a4paper,left=.5in,right=.5in,top=.3in,bottom=0.3in]{geometry} \setlength\parindent{0pt} \setlength{\parskip}{\baselineskip} \setmainfont{Helvetica Neue} \usepackage{hyperref} \pagestyle{plain} \begin{document} \hrule layout: archive title: "Publications" permalink: /publications/ \section*{author\_profile: true} \{\% if author.googlescholar \%\} You can also find my articles on \texttt{$<$u$>$}\texttt{$<$a href="\{\{author.googlescholar\}\}"$>$}my Google Scholar profile\texttt{$<$/a$>$}.\texttt{$<$/u$>$} \{\% endif \%\} \{\% include base\_path \%\} \{\% for post in site.publications reversed \%\} \{\% include archive-single.html \%\} \{\% endfor \%\} \end{document}
{ "alphanum_fraction": 0.7114375656, "avg_line_length": 28.0294117647, "ext": "tex", "hexsha": "2a93198946db6c24220e28b1cf25eb3a8284cf68", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d8300d6f255c8a9e363335b7291c252dd0d52b35", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PangChung/pangchung.github.io", "max_forks_repo_path": "_pages/publications.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d8300d6f255c8a9e363335b7291c252dd0d52b35", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PangChung/pangchung.github.io", "max_issues_repo_path": "_pages/publications.tex", "max_line_length": 163, "max_stars_count": null, "max_stars_repo_head_hexsha": "d8300d6f255c8a9e363335b7291c252dd0d52b35", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PangChung/pangchung.github.io", "max_stars_repo_path": "_pages/publications.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 300, "size": 953 }
\documentclass[a4paper,10pt]{report} \title{User Manual \\ Kinematics \& Dynamics Library} \author{Ruben Smits, Herman Bruyninckx} \usepackage{listings} \usepackage{amsmath} \usepackage{url} \usepackage{todo} \usepackage{hyperref} \begin{document} \lstset{language=c++} \maketitle \chapter{Introduction to KDL} \label{cha:introduction-kdl} \section{What is KDL?} \label{sec:what-kdl} The Kinematics \& Dynamics Library is a C++ library that offers: \begin{itemize} \item classes for geometric primitives and their operations \item classes for kinematic descriptions of chains (robots) \item (realtime) kinematic solvers for these kinematic chains \end{itemize} \section{Getting support - the Orocos community} \label{sec:gett-supp-oroc} \begin{itemize} \item This document! \item The website: \url{http://www.orocos.org/kdl}. \item The mailing list. \todo{add link to mailinglist} \end{itemize} \section{Getting Orocos KDL} \label{sec:getting-orocos-kdl} First of all you need to successfully install the KDL library. This is explained in the \textbf{Installation Manual}, which can be found at \url{http://www.orocos.org/kdl/Installation_Manual}. \chapter{Tutorial} \label{cha:tutorial} \paragraph{Important remarks} \label{sec:important-remarks} \begin{itemize} \item All geometric primitives and their operations can be used in realtime. None of the operations leads to dynamic memory allocation, and all of the operations are deterministic in time. \item All values are in [m] for translational components and [rad] for rotational components. \end{itemize} \section{Geometric Primitives} \label{sec:geometric-primitives} \paragraph{Headers} \label{sec:headers} \begin{itemize} \item \lstinline$frames.hpp$: Definition of all geometric primitives and their transformations/operators. \item \lstinline$frames_io.hpp$: Definition of the input/output operators. \end{itemize} The following primitives are available; a short usage example follows this list: \begin{itemize} \item Vector~\ref{sec:vector} \item Rotation~\ref{sec:rotation} \item Frame~\ref{sec:frame} \item Twist~\ref{sec:twist} \item Wrench~\ref{sec:wrench} \end{itemize}
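To give a first impression of how these headers and primitives are used together, here is a minimal example program. It is only a sketch: the \lstinline$kdl/$ include prefix and the way the example is compiled and linked are assumptions that depend on how the library is installed on your system.
\begin{lstlisting}
#include <kdl/frames.hpp>    //the geometric primitives
#include <kdl/frames_io.hpp> //the stream output operators
#include <iostream>

int main()
{
  KDL::Vector v(0.1,0.2,0.3);                         //a position in [m]
  KDL::Rotation r = KDL::Rotation::RPY(0.0,0.0,1.57); //an orientation in [rad]
  std::cout << v << std::endl; //printing is provided by frames_io.hpp
  std::cout << r << std::endl;
  return 0;
}
\end{lstlisting}
The individual primitives are discussed in the following subsections.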
\subsection{Vector} \label{sec:vector} Represents the 3D position of a point/object. It contains three values, representing the X,Y,Z position of the point/object with respect to the reference frame: $\mathrm{Vector} = \left[\begin{array}{c} x \\ y \\ z \end{array} \right]$ \paragraph{Creating Vectors} \label{sec:creating-vectors} There are different ways to create a vector:
\begin{lstlisting}
Vector v1; //The default constructor, X-Y-Z are initialized to zero
Vector v2(x,y,z); //X-Y-Z are initialized with the given values
Vector v3(v2); //The copy constructor
Vector v4=Vector::Zero(); //All values are set to zero
\end{lstlisting}
\paragraph{Get/Set individual elements} \label{sec:gets-indiv-elem} The operators \lstinline$[]$ and \lstinline$()$ use indices from 0..2; index checking is enabled/disabled by the DEBUG/NDEBUG definitions:
\begin{lstlisting}
v1[0]=v2[1];//copy y value of v2 to x value of v1
v2(1)=v3(2);//copy z value of v3 to y value of v2
v3.x( v4.y() );//copy y value of v4 to x value of v3
\end{lstlisting}
\paragraph{Multiply/Divide with a scalar} \label{sec:mult-with-scal} You can multiply or divide a Vector with a double using the operators \lstinline$*$ and \lstinline$/$:
\begin{lstlisting}
v2=2*v1;
v3=v1/2;
\end{lstlisting}
\paragraph{Add and subtract vectors} \label{sec:add-subtract-vectors} You can add or subtract one vector to/from another:
\begin{lstlisting}
v2+=v1;
v3-=v1;
v4=v1+v2;
Vector v5=v2-v3;
\end{lstlisting}
\paragraph{Cross and scalar product} \label{sec:cross-scalar-product} You can calculate the cross product of two vectors, which results in a new vector, or calculate the scalar (dot) product of two vectors:
\begin{lstlisting}
v3=v1*v2; //Cross product
double a=dot(v1,v2);//Scalar product
\end{lstlisting}
\paragraph{Resetting} \label{sec:resetting} You can reset the values of a vector to zero:
\begin{lstlisting}
SetToZero(v1);
\end{lstlisting}
\paragraph{Comparing vectors} \label{sec:comparing-vectors} You can compare two vectors, with or without giving an accuracy:
\begin{lstlisting}
v1==v2;
v2!=v3;
Equal(v3,v4,eps);//with accuracy eps
\end{lstlisting}
\subsection{Rotation} \label{sec:rotation} Represents the 3D rotation of an object with respect to the reference frame. Internally it is represented by a 3x3 matrix, which is a non-minimal representation: $\mathrm{Rotation} = \left[\begin{array}{ccc} Xx&Yx&Zx\\ Xy&Yy&Zy\\ Xz&Yz&Zz \end{array}\right] $ \paragraph{Creating Rotations} \label{sec:creating-rotations} \subparagraph{Safe ways to create a Rotation} \label{sec:safe-ways-create} The following always result in consistent Rotations. This means the rows/columns are always normalized and orthogonal:
\begin{lstlisting}
Rotation r1; //The default constructor, initializes to a 3x3 identity matrix
r1 = Rotation::Identity();//Identity Rotation = zero rotation
Rotation r2 = Rotation::RPY(roll,pitch,yaw); //Rotation built from Roll-Pitch-Yaw angles
Rotation r3 = Rotation::EulerZYZ(alpha,beta,gamma); //Rotation built from Euler Z-Y-Z angles
Rotation r4 = Rotation::EulerZYX(alpha,beta,gamma); //Rotation built from Euler Z-Y-X angles
Rotation r5 = Rotation::Rot(vector,angle); //Rotation built from an equivalent axis (vector) and an angle.
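//Further safe constructors: rotations around a single frame axis,
//with the angle given in [rad]:
Rotation rx = Rotation::RotX(angle); //rotation around the X-axis
Rotation ry = Rotation::RotY(angle); //rotation around the Y-axis
Rotation rz = Rotation::RotZ(angle); //rotation around the Z-axis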
\end{lstlisting}
\subparagraph{Other ways} \label{sec:other-ways} The following should be used with care: they can result in inconsistent rotation matrices, since there is no checking whether the columns/rows are normalized or orthogonal:
\begin{lstlisting}
Rotation r6( Xx,Yx,Zx,Xy,Yy,Zy,Xz,Yz,Zz);//Give each individual element (Column-Major)
Rotation r7(vectorX,vectorY,vectorZ);//Give each individual column
\end{lstlisting}
\paragraph{Getting values} \label{sec:getting-values} Individual values can be accessed with the \lstinline$()$ operator, using row and column indices from 0..2 (for example, \lstinline$r6(0,2)$ gives the element \lstinline$Zx$). \subsection{Frame} \label{sec:frame} \subsection{Twist} \label{sec:twist} \subsection{Wrench} \label{sec:wrench} \section{Kinematic Structures} \label{sec:kinematic-structures} \subsection{Joint} \label{sec:joint} \subsection{Segment} \label{sec:segment} \subsection{Chain} \label{sec:chain} \section{Kinematic Solvers} \label{sec:kinematic-solvers} \subsection{Forward position kinematics} \label{sec:forw-posit-kinem} \subsection{Forward velocity kinematics} \label{sec:forw-veloc-kinem} \subsection{Jacobian calculation} \label{sec:jacobian-calculation} \subsection{Inverse velocity kinematics} \label{sec:inverse-veloc-kinem} \subsection{Inverse position kinematics} \label{sec:inverse-posit-kinem} \section{Motion Specification Primitives} \label{sec:moti-spec-prim} \subsection{Path} \label{sec:path} \subsection{Velocity profile} \label{sec:velocity-profile} \subsection{Trajectory} \label{sec:trajectory} \end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7581180546, "avg_line_length": 26.4465648855, "ext": "tex", "hexsha": "f3dd15b07d788dc7426cd8222c729b5c52a9e2ac", "lang": "TeX", "max_forks_count": 425, "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:59:06.000Z", "max_forks_repo_forks_event_min_datetime": "2017-07-04T22:03:29.000Z", "max_forks_repo_head_hexsha": "8f359c8d00dd4a98f56ec2276c5663cb6c100e47", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "numberen/apollo-platform", "max_forks_repo_path": "ros/orocos_kinematics_dynamics/orocos_kdl/doc/tex/UserManual.tex", "max_issues_count": 73, "max_issues_repo_head_hexsha": "8f359c8d00dd4a98f56ec2276c5663cb6c100e47", "max_issues_repo_issues_event_max_datetime": "2022-03-07T08:07:07.000Z", "max_issues_repo_issues_event_min_datetime": "2017-07-06T12:50:51.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "numberen/apollo-platform", "max_issues_repo_path": "ros/orocos_kinematics_dynamics/orocos_kdl/doc/tex/UserManual.tex", "max_line_length": 110, "max_stars_count": 742, "max_stars_repo_head_hexsha": "8f359c8d00dd4a98f56ec2276c5663cb6c100e47", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "numberen/apollo-platform", "max_stars_repo_path": "ros/orocos_kinematics_dynamics/orocos_kdl/doc/tex/UserManual.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T12:55:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-05T02:49:36.000Z", "num_tokens": 1993, "size": 6929 }
%Abstract \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} %The structures and energies of different types of clusters are studied %computationally. For this, a combination of graph theoretical methods, %approximate interaction potentials and accurate quantum chemical calculations is %used to investigate gold clusters, Lennard-Jones (LJ) clusters and clusters %bound by the sticky-hard-sphere (SHS) interaction potential. The structures and stabilities of hollow gold clusters are investigated by means of density functional theory (DFT) as topological duals of carbon fullerenes. Fullerenes can be constructed by taking a graphene sheet and wrapping it around a sphere, which requires the introduction of exactly 12 pentagons. In the dual case, a (111) face-centred cubic (fcc) gold sheet can be deformed in the same way, introducing 12 vertices of degree five, to create hollow gold nano-cages. This one-to-one relationship follows trivially from Euler's polyhedral formula and there are as many golden dual fullerene isomers as there are carbon fullerenes. Photoelectron spectra of the clusters are simulated and compared to experimental results to investigate the possibility of detecting other dual fullerene isomers. The stability of the hollow gold cages is compared to compact structures and a clear energy convergence towards the (111) fcc sheet of gold is observed. The relationship between the Lennard-Jones (LJ) and sticky-hard-sphere (SHS) potential is investigated by means of geometry optimisations starting from the SHS clusters. It is shown that the number of non-isomorphic structures resulting from this procedure depends strongly on the exponents of the LJ potential. Not all LJ minima, that have been discovered in previous work, can be retrieved this way and the mapping from the SHS to the LJ structures is therefore non-injective and non-surjective. The number of missing structures is small and they correspond to energetically unfavourable minima on the energy landscape. The optimisations are also carried out for an extended Lennard-Jones potential derived from coupled-cluster calculations for the xenon dimer, and, although the shape of the potential is not too different from a regular (6,12)-LJ potential, the number of minima increases substantially. Gregory-Newton clusters, which are clusters where 12 spheres surround and touch a central sphere, are obtained from the complete set of SHS clusters. All 737 structures result in an icosahedron, when optimised with a (6,12)-LJ potential. Furthermore, the contact graphs, consisting only of atoms from the outer shell of the clusters, are all edge-induced sub-graphs of the icosahedral graph. For higher LJ exponents the symmetry of the potential energy surface breaks away from the icosahedral motif towards the SHS landscape, which does not support a perfect icosahedron for energetic reasons. This symmetry breaking is mainly governed by the shape of the potential in the repulsive region, with the long-range attractive region having little influence.
{ "alphanum_fraction": 0.8200522534, "avg_line_length": 58.8846153846, "ext": "tex", "hexsha": "70c2297637d460f695d496ceac68be881736721a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0eb1d2a109a0f91271fbdd6d85b4706b2e281d93", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Trombach/thesis", "max_forks_repo_path": "abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0eb1d2a109a0f91271fbdd6d85b4706b2e281d93", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Trombach/thesis", "max_issues_repo_path": "abstract.tex", "max_line_length": 81, "max_stars_count": null, "max_stars_repo_head_hexsha": "0eb1d2a109a0f91271fbdd6d85b4706b2e281d93", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Trombach/thesis", "max_stars_repo_path": "abstract.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 671, "size": 3062 }
\chapter{Introduction} \label{introduction} Getting a college degree is an investment that many people make, with the expectation that once they get their degree, they will have the basic skills needed to start a career in the field of their choice. However, what happens if a student does everything they are told to do (takes required courses, gets a high GPA, and graduates on time), but once they start applying for jobs, they are constantly turned down and told that their coursework and high GPA are not enough? If colleges promise to prepare students for the workforce, they should be doing everything they can to deliver this promise, even if it means redefining what it means for a student to be ``successful.'' For ICS students in 2017, it is hard to land a job when all they have on their resume is a couple of standard programming courses and a 3.8 GPA. Several ICS alumni have told me that they realized too late that employers are scouring incoming resumes for other things like internships, independent projects, and hackathons. It is also difficult for ICS students to find jobs in new industries that may not have existed, or may have been far less prominent, four years ago when they began their ICS degree. How can these students prepare for these new fields, when they had no concept of them during their degree experience? Furthermore, is it reasonable to expect colleges to create new courses each time there is an advancement in technology? These initial observations made me wonder if other ICS students over the past decade were experiencing similar problems. To answer this question, I navigated to the Hawaii technology community site, TechHui \cite{TechHuiQuestions}, and found a forum question entitled ``Three bad things about being an ICS student.'' I gathered responses from 199 ICS students from 2008 to 2016, and compiled a list of the ten most common complaints over the past 8 years: \begin{enumerate} \item The ICS department needs to offer classes more frequently. \item The ICS department needs to offer a wider variety of classes. \item The ICS department needs a better sense of community. \item Some of the professors in the ICS department need to improve their teaching. \item The ICS department should offer more focused areas of study. \item ICS classes are too time-consuming and take up more time than anticipated. \item The ICS department should offer more classes that meet focus requirements. \item ICS books are too expensive. \item ICS courses should involve more group work. \item ICS should encourage more interaction among students. \end{enumerate} Complaints 1, 2, 5, 6, 7, and 8 suggest problems with degree planning and coursework itself, and complaints 3, 4, 9, and 10 suggest social and communication-related problems within the department. There were also some other complaints among students on TechHui that were not as common but stuck out to me nonetheless. There were at least eight students who mentioned that they felt intimidated when they started out in ICS, due to the impressions they got from their classmates and the major overall. This caused them to feel discouraged and had an overall negative impact on their ICS experience. These sentiments further suggest social problems with the ICS community, as well as with how the department is perceived outside of the community. Overall, the TechHui complaints reveal several academic and social problems in the ICS department, while recent alumni struggling to find jobs suggest problems with helping ICS students to develop professionally.
As ideal as it would be, it is hard to meet the needs of all past and present students in a department. However, after taking student and alumni feedback into consideration, it appears that several of these problems could be alleviated by creating an online platform that provides students with the help they need to become truly successful students--academically, professionally, and socially. Creating a useful degree planner that helps students get all the information they need to create an ideal plan for their personal goals could help students both academically and professionally. Creating an online social network for the ICS community could help encourage students to connect with others in the department and become more socially supported, which could potentially lead to both academic and professional advancements. Adding gamification aspects to the degree experience could give students the extra incentives they need to go beyond the UHM graduation requirements, and take the initiative to become better prepared, both academically and professionally, for finding a job after graduation. In the following section, I investigate existing degree planners, social networks, and games. I summarize what is currently available for students, and whether their features have the potential to meet students' academic, professional, and social engagement needs. Next, I describe the baseline survey I designed and deployed in Spring 2017 to establish initial values for students before using RadGrad. Finally, from September 2016 to May 2017, I worked with the RadGrad team to develop the initial version of the RadGrad system, and the last section of the paper describes how RadGrad combines degree planning, social networking, and gamification in a new way that potentially addresses many of the aforementioned student problems and needs.
{ "alphanum_fraction": 0.8123167155, "avg_line_length": 181.8666666667, "ext": "tex", "hexsha": "d9efef70e9ec5fb7200d5a6ff1c16a874b5a0f5f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fcea41b4bae02e99515d45cc8911a9f5e55d8047", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "amymalia/radgrad-thesis", "max_forks_repo_path": "introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fcea41b4bae02e99515d45cc8911a9f5e55d8047", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "amymalia/radgrad-thesis", "max_issues_repo_path": "introduction.tex", "max_line_length": 1337, "max_stars_count": 1, "max_stars_repo_head_hexsha": "fcea41b4bae02e99515d45cc8911a9f5e55d8047", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "amymalia/radgrad-thesis", "max_stars_repo_path": "introduction.tex", "max_stars_repo_stars_event_max_datetime": "2017-07-20T00:25:18.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-20T00:25:18.000Z", "num_tokens": 1082, "size": 5456 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % some commonly used commands \definecolor{darkblue}{rgb}{0,0,0.6} \definecolor{darkgreen}{rgb}{0,0.6,0} \definecolor{darkred}{rgb}{0.7,0,0} \newcommand{\mynote}[1]{ \ensuremath{\left\langle\right.\!\!\left\langle\right.\!}\bf #1% \ensuremath{\!\left.\right\rangle\!\!\left.\right\rangle}} \newcommand{\DC}[1]{\textcolor{blue}{\mynote{DC: #1}}} \newcommand{\RM}[1]{\textcolor{darkgreen}{\mynote{RM: #1}}} \newcommand{\JP}[1]{\textcolor{darkred}{\mynote{JP: #1}}} \newcommand{\JS}[1]{\textcolor{violet}{\mynote{JS: #1}}} \newcommand{\muTVL}{\ensuremath{\mu\textrm{TVL}}\xspace} \newcommand{\fid}{\ensuremath{\mathsf{FID}}\xspace} \newcommand{\aid}{\ensuremath{\mathsf{AID}}\xspace} \newcommand{\name}{\ensuremath{\mathsf{Name}}\xspace} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Concrete syntax used in D1.2 \newcommand{\NT}[1]{\textit{#1}} %\newcommand{\TR}[1]{\texttt{\underline{\raisebox{-2.2pt}{\rule{0pt}{0pt}}#1}}} \newcommand{\TR}[1]{\ensuremath{\mathtt{#1}}} \newcommand{\TRS}[1]{\texttt{#1}} \newcommand{\VAR}[1]{\textsc{#1}} %\newcommand{\MANY}[1]{\ensuremath{\bnfmany{#1}}} %\newcommand{\MANYG}[1]{\ensuremath{\bnfmany{(#1)}}} \newcommand{\MANY}[1]{#1\ensuremath{^*}} \newcommand{\MANYG}[1]{\ensuremath{(}#1\ensuremath{)^*}} \newcommand{\PLUS}[1]{\bnfplus{#1}} \newcommand{\PLUSG}[1]{\bnfplus{(#1)}} \newcommand{\OPT}[1]{\ensuremath{[}#1\ensuremath{]}} \newcommand{\OPTG}[1]{\ensuremath{[}#1\ensuremath{]}} %\raisebox{-2pt}{\rule{0pt}{0pt} %\newcommand{\defNT}[1]{\NT{#1} &::=&} \newcommand{\defn}{&::=&} \newcommand{\defc}{\\&\multicolumn{1}{r}{\ensuremath{|}}&} \newcommand{\concrDefn}[1]{\defn #1} \newcommand{\concrCont}[1]{\defc #1} \newcommand{\concrNewline}[1]{\\&& #1} \newenvironment{bnfgrammar} {\begin{tabular}{lrl@{\hspace{3mm}}l}} {\end{tabular}} \newenvironment{abssyntax} {~\\[1ex]\noindent\textbf{Syntax:}\\[1ex] \begin{bnfgrammar}} {\end{bnfgrammar}\\[1ex]} \newenvironment{abstractgrammar} {\[ \begin{array}[t]{@{}lrl@{\hspace{3mm}}l@{}}} {\end{array}\]} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Making grammars consistent \newcommand{\bnfoptional}[1] {{ [} #1 { ]}\, } \newcommand{\bnfplus}[1]{#1\ensuremath{^+}}%one or more occurrences, separated by comma \newcommand{\bnfdef} {::=} \newcommand{\bnfbar} {\mathrel{|}} \newcommand{\seqof}[1]{\MANY{#1}} \newcommand{\bnfmany}[1]{\MANY{#1}} \newcommand{\many}[1]{\MANY{#1}} \newcommand{\manymath}[1]{\ensuremath{\overline{#1}}} \newcommand{\kw}[1]{\ensuremath{\mathtt{#1}}} % same as \TR \newcommand{\seqofg}[1]{\MANYG{#1}} \newcommand{\bnfmanyg}[1]{\MANYG{#1}} \newcommand{\manyg}[1]{\MANYG{#1}} \newcommand{\grshape}{\slshape} %\newcommand{\grface}{\texttt} \newcommand{\gmany}[1] {\{#1\}} %%% %\newcommand{\gopt}[1] {\textup{[}{#1}\textup{]}} \newcommand{\gopt}[1] {[{#1}]} \newcommand{\gbar}{\ensuremath{|\ }} \newcommand{\posix}[1] {\textup{[:\,}{#1}\textup{\,:]}} %%% %\newcommand{\grface}{\textsl} %\newcommand{\grface}{\texttt} \newcommand{\sourcefont}{\ttfamily} \newcommand{\commentfont}{\slshape\rmfamily\color{black!70}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \colorlet{keywordcolor}{blue!50!black} \lstdefinelanguage{ABS}{ language=Java, deletekeywords={void,long,int,byte,boolean,true,false}, morekeywords={type, data, def, export, import, get, let, local, then, in, await, assert,suspend,module,from,now,duration,deadline, 
delta,uses,adds,modifies,removes,original,core,productline,corefeatures,optionalfeatures,after,when,product,hasAttribute,hasMethod}, } %\def\codesize{\fontsize{9}{10}} \lstnewenvironment{abs}{% \lstset{language=ABS,columns=fullflexible,mathescape=true,% keywordstyle=\bf\sffamily,commentstyle=\sl\sffamily,% basicstyle=\footnotesize\normalfont\sffamily,inputencoding=latin1, % i would prefer utf8 extendedchars,xleftmargin=2em,showstringspaces=false}% \csname lst@SetFirstLabel\endcsname} {\csname lst@SaveFirstLabel\endcsname} %%%% mTVL \lstdefinelanguage{MTVL}{keywords= {root,group,extension,oneof,allof,opt}, sensitive=true, morecomment=[l]{//}, morestring=[b]"} %\def\codesize{\fontsize{9}{10}} \lstnewenvironment{mtvl}{% \lstset{language=MTVL,columns=fullflexible,mathescape=true,% keywordstyle=\bf\sffamily,commentstyle=\sl\sffamily,% basicstyle=\footnotesize\normalfont\sffamily,inputencoding=latin1, % i would prefer utf8 extendedchars,xleftmargin=2em}% \csname lst@SetFirstLabel\endcsname} {\csname lst@SaveFirstLabel\endcsname} %%%% Spec lang \lstdefinelanguage{SPEC}{keywords={interface,spec,in,out,calls,returns}, sensitive=true, comment=[l]{//}, morecomment=[s]{/*}{*/}, morestring=[b]"} \lstnewenvironment{spec}{% \lstset{language=SPEC,columns=fullflexible,mathescape=true,% keywordstyle=\bf\sffamily,commentstyle=\sl\sffamily,% basicstyle=\footnotesize\normalfont\sffamily,inputencoding=latin1, % i would prefer utf8 extendedchars,xleftmargin=2em}% \csname lst@SetFirstLabel\endcsname} {\csname lst@SaveFirstLabel\endcsname} \lstdefinestyle{absgrammar}{ basicstyle=\ttfamily, % moredelim=[is][\bfseries]{'}{'}, % morecomment=[s][\bfseries]{'}{'}, % stringstyle=\bfseries, morestring=[b]', columns = fullflexible,%spaceflexible,%fullflexible, columns = fixed, fontadjust = true, keepspaces = true, % do not summarize spaces mathescape={false}, xleftmargin=1.5em, showstringspaces = false, } \lstdefinestyle{absnobg} { language=ABS, basicstyle=\sourcefont\upshape, commentstyle=\commentfont, keywordstyle=\bfseries\color{keywordcolor}\upshape, % deletekeywords={static,public,private} classoffset=1, % standard types: morekeywords={Unit,Int, Rat, Bool, List, Set, Pair, Fut, Maybe, String, Triple, Either, Map}, keywordstyle=\color{violet}, classoffset=0, mathescape={true}, escapechar={\#}, columns = fullflexible,%spaceflexible,%fullflexible, fontadjust = true, keepspaces = true, % do not summarize spaces showstringspaces = false, % inputencoding=utf8, % numbers=left, xleftmargin=1.5em, framexleftmargin=1em, framextopmargin=0.5ex, % breaklines=true, % lineskip=0pt, % abovecaptionskip=-1ex, % belowcaptionskip=-1ex, % breakautoindent = true, % breakindent = 2em, % breaklines = true, emptylines=10 } \lstdefinestyle{abs} { style=absnobg, framerule=1pt, backgroundcolor=\color{blue!5}, rulecolor=\color{gray!50}, frame=tblr } % make abs the default style %\lstset{ %style=abs %} \lstdefinestyle{absexample}{ style=abs, %backgroundcolor=\color{green!80!black!10}, } \newcommand{\absinline}[1]{\lstinline[style=abs]{#1}} \newcommand{\absinlinesmall}[1]{\lstinline[style=abs,basicstyle=\sourcefont\footnotesize\upshape, commentstyle=\commentfont\footnotesize, keywordstyle=\bfseries\color{keywordcolor}\footnotesize\upshape]{#1}} \lstnewenvironment{abscode}{ \lstset{style=abs}} {} \lstnewenvironment{abscodesmall}{ \lstset{style=abs, basicstyle=\sourcefont\footnotesize\upshape, commentstyle=\commentfont\footnotesize, keywordstyle=\bfseries\color{keywordcolor}\footnotesize\upshape }} {} \lstnewenvironment{abscodesmallnobg}{ 
\lstset{style=absnobg, basicstyle=\sourcefont\footnotesize\upshape, commentstyle=\commentfont\footnotesize, keywordstyle=\bfseries\color{keywordcolor}\footnotesize\upshape }} {} \lstnewenvironment{absgrammar}{% ~\\[1ex]\noindent\textbf{Syntax:} \lstset{style=absgrammar}} {} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\grsh}[1]{{\grshape #1}} \newenvironment{grprode}[2] {{\grshape #1}\,: \\\indent\indent{\grshape #2}} {\vspace{.75\baselineskip}} \newcommand{\gprod}[2]{\begin{grprode}{#1}{#2}\end{grprode}} \newcommand{\gnewline}{\\\mbox{}\hspace{4em}} \newcommand{\gexnewline}{\\\mbox{}\indent} %\newcommand{\gnlbar}{\gnewline\mbox{}\hspace{-1em}\gbar} %or \newcommand{\gnlbar}{\gnewline\gbar} \lstnewenvironment{absexample} {\subsubsection*{Example:} \lstset{style=absexample}} {} \newcommand{\kwdecl}{\upshape\ttfamily} \newcommand{\mcode}[1]{{\texttt{#1}}} %mycode \newcommand{\kwlbrace} {\kw{\{\,}} \newcommand{\kwrbrace} {\kw{\}}} \newcommand{\kwlparen} {\kw{(\,}} \newcommand{\kwrparen} {\kw{)}} \newcommand{\kwdot} {\kw{.}} \newcommand{\kwassign} {$\,\kw{=}\,$} \newcommand{\kwsemi} {$\,\kw{;}\,$} \newcommand{\kwbar} {$\,\kw{|}\,$} \newcommand{\kwcomma} {$\,\kw{,}\,$} \newcommand{\kwlt} {$\,\kw{<}$} \newcommand{\kwgt} {$\kw{>}\,$} \newcommand{\kwclass} {\kw{class}} \newcommand{\kwinterface} {\kw{interface}} \newcommand{\kwextends} {\kw{extends}} \newcommand{\kwdata} {\kw{data}} \newcommand{\kwdef} {\kw{def}} \newcommand{\kwimplements} {\kw{implements}} \newcommand{\kwwhile} {\kw{while}} \newcommand{\kwassert} {\kw{assert}} \newcommand{\kwreturn} {\kw{return}} \newcommand{\kwfut} {\kw{Fut}} \newcommand{\kwskip} {\kw{skip}} \newcommand{\kwget} {\kw{get}} \newcommand{\kwnull} {\kw{null}} \newcommand{\kwawait} {\kw{await}} \newcommand{\kwif} {\kw{if}} \newcommand{\kwthen} {\kw{then}} \newcommand{\kwelse} {\kw{else}} \newcommand{\kwsuspend} {\kw{suspend}} \newcommand{\kwnew} {\kw{new}} \newcommand{\kwthis} {\kw{this}} \newcommand{\kwcase} {\kw{case}} \newcommand{\kwlet} {\kw{let}} \newcommand{\kwin} {\kw{in}} \newcommand{\kwcog} {\kw{cog}} \newcommand{\kwtype} {\kw{type}} \newcommand{\kwguardand} {\kw{\&}} \newcommand{\kwbang} {\kw{!}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% tikz styles \tikzstyle{box} = [draw=black,thick,bottom color=black!20,top color=white] \tikzstyle{bx2} = [draw=black,thick,bottom color=blue!40,top color=white] \tikzstyle{art} = [draw=black,rounded corners=7pt, thick, fill=blue!20, text=black, inner sep=2mm,font=\sffamily] \tikzstyle{art2}= [art, bottom color=blue!10!gray!40, top color=white] \pgfdeclarelayer{background} \pgfdeclarelayer{cogs} \pgfdeclarelayer{main} \pgfsetlayers{background,cogs,main} \newcommand{\diagramfont}{\sffamily} \tikzstyle{lightshadow}=[drop shadow={shadow scale=1, fill=black!20, %opacity=.5, shadow xshift=1.2pt, shadow yshift=-1.2pt}] \tikzstyle{ultralightshadow}=[ drop shadow={shadow scale=1, fill=black!10, shadow xshift=2.5pt, shadow yshift=-2.5pt}] \tikzstyle{diagramnodeWithoutShadow}=[draw=black!50,fill=white,font=\diagramfont, line width=0.8pt, text depth=0ex] \tikzstyle{diagramnode}=[diagramnodeWithoutShadow, lightshadow] \tikzstyle{object}=[diagramnode,rectangle,rounded corners=2pt, inner sep=2mm,fill=black!2,fill=Lavender!80,%fill=blue!20!white, %shadow xshift=0.2ex, shadow yshift=-0.2ex} %path fading=south, shading angle=45} ] \tikzstyle{mainobject}=[object,font=\diagramfont\bfseries] \tikzstyle{cogform}=[rectangle,line width=1.5pt,draw=black!20, text depth=0ex,inner sep=6pt,rounded corners] 
\tikzstyle{cog}=[cogform, ultralightshadow, fill=black!2, %draw=blue!50,fill=blue!10, ] % inner sep = 0.45cm \tikzstyle{legendtext}=[right,text depth=0,font=\diagramfont\footnotesize] \tikzstyle{legendsymbol}=[left,text depth=0,font=\diagramfont\footnotesize] \tikzstyle{myarrow}=[single arrow,draw=gray!25,fill=gray!10,very thick] %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End:
{ "alphanum_fraction": 0.6784289387, "avg_line_length": 31.0716253444, "ext": "tex", "hexsha": "4b5fb6f4927320ef9a14774cda3242107fc3ae02", "lang": "TeX", "max_forks_count": 33, "max_forks_repo_forks_event_max_datetime": "2022-01-26T08:11:55.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-23T09:08:09.000Z", "max_forks_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "oab/abstools", "max_forks_repo_path": "abs-docs/ReferenceManual/abslanguage_definitions.tex", "max_issues_count": 271, "max_issues_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_issues_repo_issues_event_max_datetime": "2022-03-28T09:05:50.000Z", "max_issues_repo_issues_event_min_datetime": "2015-07-30T19:04:52.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "oab/abstools", "max_issues_repo_path": "abs-docs/ReferenceManual/abslanguage_definitions.tex", "max_line_length": 132, "max_stars_count": 38, "max_stars_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "oab/abstools", "max_stars_repo_path": "abs-docs/ReferenceManual/abslanguage_definitions.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T19:26:34.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-23T09:08:06.000Z", "num_tokens": 3884, "size": 11279 }
\documentclass[twocolumn]{article} \usepackage{usenix} %\documentclass[times,10pt,twocolumn]{article} %\usepackage{latex8} %\usepackage{times} \usepackage{url} \usepackage{graphics} \usepackage{amsmath} \usepackage{epsfig} \pagestyle{empty} \renewcommand\url{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url} \newcommand\emailaddr{\begingroup \def\UrlLeft{<}\def\UrlRight{>}\urlstyle{tt}\Url} \newcommand{\workingnote}[1]{} % The version that hides the note. %\newcommand{\workingnote}[1]{(**#1)} % The version that makes the note visible. % If an URL ends up with '%'s in it, that's because the line *in the .bib/.tex % file* is too long, so break it there (it doesn't matter if the next line is % indented with spaces). -DH %\newif\ifpdf %\ifx\pdfoutput\undefined % \pdffalse %\else % \pdfoutput=1 % \pdftrue %\fi \newenvironment{tightlist}{\begin{list}{$\bullet$}{ \setlength{\itemsep}{0mm} \setlength{\parsep}{0mm} % \setlength{\labelsep}{0mm} % \setlength{\labelwidth}{0mm} % \setlength{\topsep}{0mm} }}{\end{list}} % Cut down on whitespace above and below figures displayed at head/foot of % page. \setlength{\textfloatsep}{3mm} % Cut down on whitespace above and below figures displayed in middle of page \setlength{\intextsep}{3mm} \begin{document} %% Use dvipdfm instead. --DH %\ifpdf % \pdfcompresslevel=9 % \pdfpagewidth=\the\paperwidth % \pdfpageheight=\the\paperheight %\fi \title{Tor: The Second-Generation Onion Router} %\\DRAFT VERSION} % Putting the 'Private' back in 'Virtual Private Network' \author{Roger Dingledine \\ The Free Haven Project \\ [email protected] \and Nick Mathewson \\ The Free Haven Project \\ [email protected] \and Paul Syverson \\ Naval Research Lab \\ [email protected]} \maketitle \thispagestyle{empty} \begin{abstract} We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. % that has been running for several months. We close with a list of open problems in anonymous communication. \end{abstract} %\begin{center} %\textbf{Keywords:} anonymity, peer-to-peer, remailer, nymserver, reply block %\end{center} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Overview} \label{sec:intro} Onion Routing is a distributed overlay network designed to anonymize TCP-based applications like web browsing, secure shell, and instant messaging. Clients choose a path through the network and build a \emph{circuit}, in which each node (or ``onion router'' or ``OR'') in the path knows its predecessor and successor, but no other nodes in the circuit. Traffic flows down the circuit in fixed-size \emph{cells}, which are unwrapped by a symmetric key at each node (like the layers of an onion) and relayed downstream. The Onion Routing project published several design and analysis papers \cite{or-ih96,or-jsac98,or-discex00,or-pet00}. 
While a wide area Onion Routing network was deployed briefly, the only long-running public implementation was a fragile proof-of-concept that ran on a single machine. Even this simple deployment processed connections from over sixty thousand distinct IP addresses from all over the world at a rate of about fifty thousand per day. But many critical design and deployment issues were never resolved, and the design has not been updated in years. Here we describe Tor, a protocol for asynchronous, loosely federated onion routers that provides the following improvements over the old Onion Routing design: \textbf{Perfect forward secrecy:} In the original Onion Routing design, a single hostile node could record traffic and later compromise successive nodes in the circuit and force them to decrypt it. Rather than using a single multiply encrypted data structure (an \emph{onion}) to lay each circuit, Tor now uses an incremental or \emph{telescoping} path-building design, where the initiator negotiates session keys with each successive hop in the circuit. Once these keys are deleted, subsequently compromised nodes cannot decrypt old traffic. As a side benefit, onion replay detection is no longer necessary, and the process of building circuits is more reliable, since the initiator knows when a hop fails and can then try extending to a new node. \textbf{Separation of ``protocol cleaning'' from anonymity:} Onion Routing originally required a separate ``application proxy'' for each supported application protocol---most of which were never written, so many applications were never supported. Tor uses the standard and near-ubiquitous SOCKS~\cite{socks4} proxy interface, allowing us to support most TCP-based programs without modification. Tor now relies on the filtering features of privacy-enhancing application-level proxies such as Privoxy~\cite{privoxy}, without trying to duplicate those features itself. \textbf{No mixing, padding, or traffic shaping (yet):} Onion Routing originally called for batching and reordering cells as they arrived, assumed padding between ORs, and in later designs added padding between onion proxies (users) and ORs~\cite{or-ih96,or-jsac98}. Tradeoffs between padding protection and cost were discussed, and \emph{traffic shaping} algorithms were theorized~\cite{or-pet00} to provide good security without expensive padding, but no concrete padding scheme was suggested. Recent research~\cite{econymics} and deployment experience~\cite{freedom21-security} suggest that this level of resource use is not practical or economical; and even full link padding is still vulnerable~\cite{defensive-dropping}. Thus, until we have a proven and convenient design for traffic shaping or low-latency mixing that improves anonymity against a realistic adversary, we leave these strategies out. \textbf{Many TCP streams can share one circuit:} Onion Routing originally built a separate circuit for each application-level request, but this required multiple public key operations for every request, and also presented a threat to anonymity from building so many circuits; see Section~\ref{sec:maintaining-anonymity}. Tor multiplexes multiple TCP streams along each circuit to improve efficiency and anonymity. \textbf{Leaky-pipe circuit topology:} Through in-band signaling within the circuit, Tor initiators can direct traffic to nodes partway down the circuit. 
This novel approach allows traffic to exit the circuit from the middle---possibly frustrating traffic shape and volume attacks based on observing the end of the circuit. (It also allows for long-range padding if future research shows this to be worthwhile.) \textbf{Congestion control:} Earlier anonymity designs do not address traffic bottlenecks. Unfortunately, typical approaches to load balancing and flow control in overlay networks involve inter-node control communication and global views of traffic. Tor's decentralized congestion control uses end-to-end acks to maintain anonymity while allowing nodes at the edges of the network to detect congestion or flooding and send less data until the congestion subsides. \textbf{Directory servers:} The earlier Onion Routing design planned to flood state information through the network---an approach that can be unreliable and complex. % open to partitioning attacks. Tor takes a simplified view toward distributing this information. Certain more trusted nodes act as \emph{directory servers}: they provide signed directories describing known routers and their current state. Users periodically download them via HTTP. \textbf{Variable exit policies:} Tor provides a consistent mechanism for each node to advertise a policy describing the hosts and ports to which it will connect. These exit policies are critical in a volunteer-based distributed infrastructure, because each operator is comfortable with allowing different types of traffic to exit from his node. \textbf{End-to-end integrity checking:} The original Onion Routing design did no integrity checking on data. Any node on the circuit could change the contents of data cells as they passed by---for example, to alter a connection request so it would connect to a different webserver, or to `tag' encrypted traffic and look for corresponding corrupted traffic at the network edges~\cite{minion-design}. Tor hampers these attacks by verifying data integrity before it leaves the network. %\textbf{Improved robustness to failed nodes:} A failed node %in the old design meant that circuit building failed, but thanks to %Tor's step-by-step circuit building, users notice failed nodes %while building circuits and route around them. Additionally, liveness %information from directories allows users to avoid unreliable nodes in %the first place. %% Can't really claim this, now that we've found so many variants of %% attack on partial-circuit-building. -RD \textbf{Rendezvous points and hidden services:} Tor provides an integrated mechanism for responder anonymity via location-protected servers. Previous Onion Routing designs included long-lived ``reply onions'' that could be used to build circuits to a hidden server, but these reply onions did not provide forward security, and became useless if any node in the path went down or rotated its keys. In Tor, clients negotiate {\it rendezvous points} to connect with hidden servers; reply onions are no longer required. Unlike Freedom~\cite{freedom2-arch}, Tor does not require OS kernel patches or network stack support. This prevents us from anonymizing non-TCP protocols, but has greatly helped our portability and deployability. %Unlike Freedom~\cite{freedom2-arch}, Tor only anonymizes %TCP-based protocols---not requiring patches (or built-in support) in an %operating system's network stack has been valuable to Tor's %portability and deployability. We have implemented all of the above features, including rendezvous points. 
Our source code is available under a free license, and Tor %, as far as we know, is unencumbered by patents. is not covered by the patent that affected distribution and use of earlier versions of Onion Routing. We have deployed a wide-area alpha network to test the design, to get more experience with usability and users, and to provide a research platform for experimentation. As of this writing, the network stands at 32 nodes %in thirteen %distinct administrative domains spread over two continents. We review previous work in Section~\ref{sec:related-work}, describe our goals and assumptions in Section~\ref{sec:assumptions}, and then address the above list of improvements in Sections~\ref{sec:design},~\ref{sec:rendezvous}, and~\ref{sec:other-design}. We summarize in Section~\ref{sec:attacks} how our design stands up to known attacks, and talk about our early deployment experiences in Section~\ref{sec:in-the-wild}. We conclude with a list of open problems in Section~\ref{sec:maintaining-anonymity} and future work for the Onion Routing project in Section~\ref{sec:conclusion}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Related work} \label{sec:related-work} Modern anonymity systems date to Chaum's {\bf Mix-Net} design~\cite{chaum-mix}. Chaum proposed hiding the correspondence between sender and recipient by wrapping messages in layers of public-key cryptography, and relaying them through a path composed of ``mixes.'' Each mix in turn decrypts, delays, and re-orders messages before relaying them onward. %toward their destinations. Subsequent relay-based anonymity designs have diverged in two main directions. Systems like {\bf Babel}~\cite{babel}, {\bf Mixmaster}~\cite{mixmaster-spec}, and {\bf Mixminion}~\cite{minion-design} have tried to maximize anonymity at the cost of introducing comparatively large and variable latencies. Because of this decision, these \emph{high-latency} networks resist strong global adversaries, but introduce too much lag for interactive tasks like web browsing, Internet chat, or SSH connections. Tor belongs to the second category: \emph{low-latency} designs that try to anonymize interactive network traffic. These systems handle a variety of bidirectional protocols. They also provide more convenient mail delivery than the high-latency anonymous email networks, because the remote mail server provides explicit and timely delivery confirmation. But because these designs typically involve many packets that must be delivered quickly, it is difficult for them to prevent an attacker who can eavesdrop both ends of the communication from correlating the timing and volume of traffic entering the anonymity network with traffic leaving it~\cite{SS03}. These protocols are similarly vulnerable to an active adversary who introduces timing patterns into traffic entering the network and looks for correlated patterns among exiting traffic. Although some work has been done to frustrate these attacks, most designs protect primarily against traffic analysis rather than traffic confirmation (see Section~\ref{subsec:threat-model}). The simplest low-latency designs are single-hop proxies such as the {\bf Anonymizer}~\cite{anonymizer}: a single trusted server strips the data's origin before relaying it. These designs are easy to analyze, but users must trust the anonymizing proxy. 
Concentrating the traffic to this single point increases the anonymity set (the people a given user is hiding among), but it is vulnerable if the adversary can observe all traffic entering and leaving the proxy. More complex are distributed-trust, circuit-based anonymizing systems. In these designs, a user establishes one or more medium-term bidirectional end-to-end circuits, and tunnels data in fixed-size cells. Establishing circuits is computationally expensive and typically requires public-key cryptography, whereas relaying cells is comparatively inexpensive and typically requires only symmetric encryption. Because a circuit crosses several servers, and each server only knows the adjacent servers in the circuit, no single server can link a user to her communication partners. The {\bf Java Anon Proxy} (also known as JAP or Web MIXes) uses fixed shared routes known as \emph{cascades}. As with a single-hop proxy, this approach aggregates users into larger anonymity sets, but again an attacker only needs to observe both ends of the cascade to bridge all the system's traffic. The Java Anon Proxy's design calls for padding between end users and the head of the cascade~\cite{web-mix}. However, it is not demonstrated whether the current implementation's padding policy improves anonymity. {\bf PipeNet}~\cite{back01, pipenet}, another low-latency design proposed around the same time as Onion Routing, gave stronger anonymity but allowed a single user to shut down the network by not sending. Systems like {\bf ISDN mixes}~\cite{isdn-mixes} were designed for other environments with different assumptions. %XXX please can we fix this sentence to something less demeaning In P2P designs like {\bf Tarzan}~\cite{tarzan:ccs02} and {\bf MorphMix}~\cite{morphmix:fc04}, all participants both generate traffic and relay traffic for others. These systems aim to conceal whether a given peer originated a request or just relayed it from another peer. While Tarzan and MorphMix use layered encryption as above, {\bf Crowds}~\cite{crowds-tissec} simply assumes an adversary who cannot observe the initiator: it uses no public-key encryption, so any node on a circuit can read users' traffic. {\bf Hordes}~\cite{hordes-jcs} is based on Crowds but also uses multicast responses to hide the initiator. {\bf Herbivore}~\cite{herbivore} and $\mbox{\bf P}^{\mathbf 5}$~\cite{p5} go even further, requiring broadcast. These systems are designed primarily for communication among peers, although Herbivore users can make external connections by requesting a peer to serve as a proxy. Systems like {\bf Freedom} and the original Onion Routing build circuits all at once, using a layered ``onion'' of public-key encrypted messages, each layer of which provides session keys and the address of the next server in the circuit. Tor as described herein, Tarzan, MorphMix, {\bf Cebolla}~\cite{cebolla}, and Rennhard's {\bf Anonymity Network}~\cite{anonnet} build circuits in stages, extending them one hop at a time. Section~\ref{subsubsec:constructing-a-circuit} describes how this approach enables perfect forward secrecy. Circuit-based designs must choose which protocol layer to anonymize. They may intercept IP packets directly, and relay them whole (stripping the source address) along the circuit~\cite{freedom2-arch,tarzan:ccs02}. Like Tor, they may accept TCP streams and relay the data in those streams, ignoring the breakdown of that data into TCP segments~\cite{morphmix:fc04,anonnet}. 
Finally, like Crowds, they may accept application-level protocols such as HTTP and relay the application requests themselves. Making this protocol-layer decision requires a compromise between flexibility and anonymity. For example, a system that understands HTTP can strip identifying information from requests, can take advantage of caching to limit the number of requests that leave the network, and can batch or encode requests to minimize the number of connections. On the other hand, an IP-level anonymizer can handle nearly any protocol, even ones unforeseen by its designers (though these systems require kernel-level modifications to some operating systems, and so are more complex and less portable). TCP-level anonymity networks like Tor present a middle approach: they are application neutral (so long as the application supports, or can be tunneled across, TCP), but by treating application connections as data streams rather than raw TCP packets, they avoid the inefficiencies of tunneling TCP over TCP. Distributed-trust anonymizing systems need to prevent attackers from adding too many servers and thus compromising user paths. Tor relies on a small set of well-known directory servers, run by independent parties, to decide which nodes can join. Tarzan and MorphMix allow unknown users to run servers, and use a limited resource (like IP addresses) to prevent an attacker from controlling too much of the network. Crowds suggests requiring written, notarized requests from potential crowd members. Anonymous communication is essential for censorship-resistant systems like Eternity~\cite{eternity}, Free~Haven~\cite{freehaven-berk}, Publius~\cite{publius}, and Tangler~\cite{tangler}. Tor's rendezvous points enable connections between mutually anonymous entities; they are a building block for location-hidden servers, which are needed by Eternity and Free~Haven. % didn't include rewebbers. No clear place to put them, so I'll leave % them out for now. -RD \section{Design goals and assumptions} \label{sec:assumptions} \noindent{\large\bf Goals}\\ Like other low-latency anonymity designs, Tor seeks to frustrate attackers from linking communication partners, or from linking multiple communications to or from a single user. Within this main goal, however, several considerations have directed Tor's evolution. \textbf{Deployability:} The design must be deployed and used in the real world. Thus it must not be expensive to run (for example, by requiring more bandwidth than volunteers are willing to provide); must not place a heavy liability burden on operators (for example, by allowing attackers to implicate onion routers in illegal activities); and must not be difficult or expensive to implement (for example, by requiring kernel patches, or separate proxies for every protocol). We also cannot require non-anonymous parties (such as websites) to run our software. (Our rendezvous point design does not meet this goal for non-anonymous users talking to hidden servers, however; see Section~\ref{sec:rendezvous}.) \textbf{Usability:} A hard-to-use system has fewer users---and because anonymity systems hide users among users, a system with fewer users provides less anonymity. Usability is thus not only a convenience: it is a security requirement~\cite{econymics,back01}. Tor should therefore not require modifying familiar applications; should not introduce prohibitive delays; and should require as few configuration decisions as possible. 
Finally, Tor should be easily implementable on all common platforms; we cannot require users to change their operating system to be anonymous. (Tor currently runs on Win32, Linux, Solaris, BSD-style Unix, MacOS X, and probably others.) \textbf{Flexibility:} The protocol must be flexible and well-specified, so Tor can serve as a test-bed for future research. Many of the open problems in low-latency anonymity networks, such as generating dummy traffic or preventing Sybil attacks~\cite{sybil}, may be solvable independently from the issues solved by Tor. Hopefully future systems will not need to reinvent Tor's design. %(But note that while a flexible design benefits researchers, %there is a danger that differing choices of extensions will make users %distinguishable. Experiments should be run on a separate network.) \textbf{Simple design:} The protocol's design and security parameters must be well-understood. Additional features impose implementation and complexity costs; adding unproven techniques to the design threatens deployability, readability, and ease of security analysis. Tor aims to deploy a simple and stable system that integrates the best accepted approaches to protecting anonymity.\\ \noindent{\large\bf Non-goals}\label{subsec:non-goals}\\ In favoring simple, deployable designs, we have explicitly deferred several possible goals, either because they are solved elsewhere, or because they are not yet solved. \textbf{Not peer-to-peer:} Tarzan and MorphMix aim to scale to completely decentralized peer-to-peer environments with thousands of short-lived servers, many of which may be controlled by an adversary. This approach is appealing, but still has many open problems~\cite{tarzan:ccs02,morphmix:fc04}. \textbf{Not secure against end-to-end attacks:} Tor does not claim to completely solve end-to-end timing or intersection attacks. Some approaches, such as having users run their own onion routers, may help; see Section~\ref{sec:maintaining-anonymity} for more discussion. \textbf{No protocol normalization:} Tor does not provide \emph{protocol normalization} like Privoxy or the Anonymizer. If senders want anonymity from responders while using complex and variable protocols like HTTP, Tor must be layered with a filtering proxy such as Privoxy to hide differences between clients, and expunge protocol features that leak identity. Note that by this separation Tor can also provide services that are anonymous to the network yet authenticated to the responder, like SSH. Similarly, Tor does not integrate tunneling for non-stream-based protocols like UDP; this must be provided by an external service if appropriate. \textbf{Not steganographic:} Tor does not try to conceal who is connected to the network. \subsection{Threat Model} \label{subsec:threat-model} A global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. But like all practical low-latency systems, Tor does not protect against such a strong adversary. Instead, we assume an adversary who can observe some fraction of network traffic; who can generate, modify, delete, or delay traffic; who can operate onion routers of his own; and who can compromise some fraction of the onion routers. In low-latency anonymity systems that use layered encryption, the adversary's typical goal is to observe both the initiator and the responder. 
By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these \emph{traffic confirmation} attacks, we aim to prevent \emph{traffic analysis} attacks, where the adversary uses traffic patterns to learn which points in the network he should attack. Our adversary might try to link an initiator Alice with her communication partners, or try to build a profile of Alice's behavior. He might mount passive attacks by observing the network edges and correlating traffic entering and leaving the network---by relationships in packet timing, volume, or externally visible user-selected options. The adversary can also mount active attacks by compromising routers or keys; by replaying traffic; by selectively denying service to trustworthy routers to move users to compromised routers, or denying service to users to see if traffic elsewhere in the network stops; or by introducing patterns into traffic that can later be detected. The adversary might subvert the directory servers to give users differing views of network state. Additionally, he can try to decrease the network's reliability by attacking nodes or by performing antisocial activities from reliable nodes and trying to get them taken down---making the network unreliable flushes users to other less anonymous systems, where they may be easier to attack. We summarize in Section~\ref{sec:attacks} how well the Tor design defends against each of these attacks. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The Tor Design} \label{sec:design} The Tor network is an overlay network; each onion router (OR) runs as a normal user-level process without any special privileges. Each onion router maintains a TLS~\cite{TLS} connection to every other onion router. %(We discuss alternatives to this clique-topology assumption in %Section~\ref{sec:maintaining-anonymity}.) % A subset of the ORs also act as %directory servers, tracking which routers are in the network; %see Section~\ref{subsec:dirservers} for directory server details. Each user runs local software called an onion proxy (OP) to fetch directories, establish circuits across the network, and handle connections from user applications. These onion proxies accept TCP streams and multiplex them across the circuits. The onion router on the other side of the circuit connects to the requested destinations and relays data. Each onion router maintains a long-term identity key and a short-term onion key. The identity key is used to sign TLS certificates, to sign the OR's \emph{router descriptor} (a summary of its keys, address, bandwidth, exit policy, and so on), and (by directory servers) to sign directories. %Changing %the identity key of a router is considered equivalent to creating a %new router. The onion key is used to decrypt requests from users to set up a circuit and negotiate ephemeral keys. The TLS protocol also establishes a short-term link key when communicating between ORs. Short-term keys are rotated periodically and independently, to limit the impact of key compromise. Section~\ref{subsec:cells} presents the fixed-size \emph{cells} that are the unit of communication in Tor. We describe in Section~\ref{subsec:circuits} how circuits are built, extended, truncated, and destroyed. 
Section~\ref{subsec:tcp} describes how TCP streams are routed through the network. We address integrity checking in Section~\ref{subsec:integrity-checking}, and resource limiting in Section~\ref{subsec:rate-limit}. Finally, Section~\ref{subsec:congestion} talks about congestion control and fairness issues. \subsection{Cells} \label{subsec:cells} Onion routers communicate with one another, and with users' OPs, via TLS connections with ephemeral keys. Using TLS conceals the data on the connection with perfect forward secrecy, and prevents an attacker from modifying data on the wire or impersonating an OR. Traffic passes along these connections in fixed-size cells. Each cell is 512 bytes, %(but see Section~\ref{sec:conclusion} for a discussion of %allowing large cells and small cells on the same network), and consists of a header and a payload. The header includes a circuit identifier (circID) that specifies which circuit the cell refers to (many circuits can be multiplexed over the single TLS connection), and a command to describe what to do with the cell's payload. (Circuit identifiers are connection-specific: each circuit has a different circID on each OP/OR or OR/OR connection it traverses.) Based on their command, cells are either \emph{control} cells, which are always interpreted by the node that receives them, or \emph{relay} cells, which carry end-to-end stream data. The control cell commands are: \emph{padding} (currently used for keepalive, but also usable for link padding); \emph{create} or \emph{created} (used to set up a new circuit); and \emph{destroy} (to tear down a circuit). Relay cells have an additional header (the relay header) at the front of the payload, containing a streamID (stream identifier: many streams can be multiplexed over a circuit); an end-to-end checksum for integrity checking; the length of the relay payload; and a relay command. The entire contents of the relay header and the relay cell payload are encrypted or decrypted together as the relay cell moves along the circuit, using the 128-bit AES cipher in counter mode to generate a cipher stream. The relay commands are: \emph{relay data} (for data flowing down the stream), \emph{relay begin} (to open a stream), \emph{relay end} (to close a stream cleanly), \emph{relay teardown} (to close a broken stream), \emph{relay connected} (to notify the OP that a relay begin has succeeded), \emph{relay extend} and \emph{relay extended} (to extend the circuit by a hop, and to acknowledge), \emph{relay truncate} and \emph{relay truncated} (to tear down only part of the circuit, and to acknowledge), \emph{relay sendme} (used for congestion control), and \emph{relay drop} (used to implement long-range dummies). We give a visual overview of cell structure plus the details of relay cell structure, and then describe each of these cell types and commands in more detail below. %\begin{figure}[h] %\unitlength=1cm %\centering %\begin{picture}(8.0,1.5) %\put(4,.5){\makebox(0,0)[c]{\epsfig{file=cell-struct,width=7cm}}} %\end{picture} %\end{figure} \begin{figure}[h] \centering \mbox{\epsfig{figure=cell-struct,width=7cm}} \end{figure} \subsection{Circuits and streams} \label{subsec:circuits} Onion Routing originally built one circuit for each TCP stream. Because building a circuit can take several tenths of a second (due to public-key cryptography and network latency), this design imposed high costs on applications like web browsing that open many TCP streams. In Tor, each circuit can be shared by many TCP streams. 
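
As an aside on the cell format described in Section~\ref{subsec:cells}, the short sketch below shows one illustrative way such a relay cell might be packed and unpacked in Python. Only the fixed 512-byte cell size and the list of header fields come from the design; the individual field widths and the numeric command codes are assumptions made purely for the example.

\begin{verbatim}
# Illustrative packing of a Tor-style relay cell.
# Field widths and command codes are assumptions;
# only the 512-byte size and the field list come
# from the design text.
import struct

CELL_LEN = 512
HDR = struct.Struct(">HB")    # circID, command
RLY = struct.Struct(">BHIH")  # relay command,
                              # streamID, digest,
                              # length
CMD_RELAY = 3                 # assumed code

def pack_relay(circ, cmd, sid, dig, data):
    body = RLY.pack(cmd, sid, dig, len(data))
    body += data
    body = body.ljust(CELL_LEN - HDR.size, b"\0")
    return HDR.pack(circ, CMD_RELAY) + body

def unpack_relay(cell):
    circ, command = HDR.unpack_from(cell, 0)
    fields = RLY.unpack_from(cell, HDR.size)
    cmd, sid, dig, n = fields
    off = HDR.size + RLY.size
    return circ, cmd, sid, dig, cell[off:off + n]

cell = pack_relay(7, 1, 42, 0, b"relay begin")
assert len(cell) == CELL_LEN
\end{verbatim}

\noindent Padding every cell to the same 512-byte length means that all cells on a connection look alike on the wire.
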
To avoid delays, users construct circuits preemptively. To limit linkability among their streams, users' OPs build a new circuit periodically if the previous ones have been used, and expire old used circuits that no longer have any open streams. OPs consider rotating to a new circuit once a minute: thus even heavy users spend negligible time building circuits, but a limited number of requests can be linked to each other through a given exit node. Also, because circuits are built in the background, OPs can recover from failed circuit creation without harming user experience.\\ \begin{figure}[h] \centering \mbox{\epsfig{figure=interaction,width=8.75cm}} \caption{Alice builds a two-hop circuit and begins fetching a web page.} \label{fig:interaction} \end{figure} \noindent{\large\bf Constructing a circuit}\label{subsubsec:constructing-a-circuit}\\ %\subsubsection{Constructing a circuit} A user's OP constructs circuits incrementally, negotiating a symmetric key with each OR on the circuit, one hop at a time. To begin creating a new circuit, the OP (call her Alice) sends a \emph{create} cell to the first node in her chosen path (call him Bob). (She chooses a new circID $C_{AB}$ not currently used on the connection from her to Bob.) The \emph{create} cell's payload contains the first half of the Diffie-Hellman handshake ($g^x$), encrypted to the onion key of Bob. Bob responds with a \emph{created} cell containing $g^y$ along with a hash of the negotiated key $K=g^{xy}$. Once the circuit has been established, Alice and Bob can send one another relay cells encrypted with the negotiated key.\footnote{Actually, the negotiated key is used to derive two symmetric keys: one for each direction.} More detail is given in the next section. To extend the circuit further, Alice sends a \emph{relay extend} cell to Bob, specifying the address of the next OR (call her Carol), and an encrypted $g^{x_2}$ for her. Bob copies the half-handshake into a \emph{create} cell, and passes it to Carol to extend the circuit. (Bob chooses a new circID $C_{BC}$ not currently used on the connection between him and Carol. Alice never needs to know this circID; only Bob associates $C_{AB}$ on his connection with Alice to $C_{BC}$ on his connection with Carol.) When Carol responds with a \emph{created} cell, Bob wraps the payload into a \emph{relay extended} cell and passes it back to Alice. Now the circuit is extended to Carol, and Alice and Carol share a common key $K_2 = g^{x_2 y_2}$. To extend the circuit to a third node or beyond, Alice proceeds as above, always telling the last node in the circuit to extend one hop further. This circuit-level handshake protocol achieves unilateral entity authentication (Alice knows she's handshaking with the OR, but the OR doesn't care who is opening the circuit---Alice uses no public key and remains anonymous) and unilateral key authentication (Alice and the OR agree on a key, and Alice knows only the OR learns it). It also achieves forward secrecy and key freshness. More formally, the protocol is as follows (where $E_{PK_{Bob}}(\cdot)$ is encryption with Bob's public key, $H$ is a secure hash function, and $|$ is concatenation): \begin{equation*} \begin{aligned} \mathrm{Alice} \rightarrow \mathrm{Bob}&: E_{PK_{Bob}}(g^x) \\ \mathrm{Bob} \rightarrow \mathrm{Alice}&: g^y, H(K | \mathrm{``handshake"}) \\ \end{aligned} \end{equation*} \noindent In the second step, Bob proves that it was he who received $g^x$, and who chose $y$. 
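
To make the exchange above concrete, the following toy walk-through (an illustration, not part of the Tor specification) replays the handshake with a deliberately tiny group; the modulus, the generator, and the omission of the onion-key encryption of $g^x$ are simplifications chosen only so the example is self-contained and runnable.

\begin{verbatim}
# Toy walk-through of the handshake above.  The
# tiny group and the missing onion-key encryption
# of g^x are simplifications for illustration.
import hashlib, secrets

P = 2**127 - 1   # toy prime (far too small
                 # for real use)
G = 3            # toy generator

def confirm(k):  # H(K | "handshake")
    data = k.to_bytes(16, "big") + b"handshake"
    return hashlib.sha1(data).digest()

# Alice -> Bob: g^x mod P
x = secrets.randbelow(P - 2) + 1
gx = pow(G, x, P)

# Bob -> Alice: g^y mod P, H(K | "handshake")
y = secrets.randbelow(P - 2) + 1
gy = pow(G, y, P)
h_bob = confirm(pow(gx, y, P))

# Alice derives K = g^(xy) and checks the hash;
# K is then used to derive one symmetric key
# for each direction of this hop.
K = pow(gy, x, P)
assert confirm(K) == h_bob
\end{verbatim}

\noindent In the real protocol the first message additionally travels inside a \emph{create} (or \emph{relay extend}) cell and is encrypted to Bob's onion key, which is what keeps anyone but Bob from learning $g^x$.
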
We use PK encryption in the first step (rather than, say, using the first two steps of STS, which has a signature in the second step) because a single cell is too small to hold both a public key and a signature. Preliminary analysis with the NRL protocol analyzer~\cite{meadows96} shows this protocol to be secure (including perfect forward secrecy) under the traditional Dolev-Yao model.\\ \noindent{\large\bf Relay cells}\\ %\subsubsection{Relay cells} % Once Alice has established the circuit (so she shares keys with each OR on the circuit), she can send relay cells. %Recall that every relay cell has a streamID that indicates to which %stream the cell belongs. %This streamID allows a relay cell to be %addressed to any OR on the circuit. Upon receiving a relay cell, an OR looks up the corresponding circuit, and decrypts the relay header and payload with the session key for that circuit. If the cell is headed away from Alice the OR then checks whether the decrypted cell has a valid digest (as an optimization, the first two bytes of the integrity check are zero, so in most cases we can avoid computing the hash). %is recognized---either because it %corresponds to an open stream at this OR for the given circuit, or because %it is the control streamID (zero). If valid, it accepts the relay cell and processes it as described below. Otherwise, the OR looks up the circID and OR for the next step in the circuit, replaces the circID as appropriate, and sends the decrypted relay cell to the next OR. (If the OR at the end of the circuit receives an unrecognized relay cell, an error has occurred, and the circuit is torn down.) OPs treat incoming relay cells similarly: they iteratively unwrap the relay header and payload with the session keys shared with each OR on the circuit, from the closest to farthest. If at any stage the digest is valid, the cell must have originated at the OR whose encryption has just been removed. To construct a relay cell addressed to a given OR, Alice assigns the digest, and then iteratively encrypts the cell payload (that is, the relay header and payload) with the symmetric key of each hop up to that OR. Because the digest is encrypted to a different value at each step, only at the targeted OR will it have a meaningful value.\footnote{ % Should we just say that 2^56 is itself negligible? % Assuming 4-hop circuits with 10 streams per hop, there are 33 % possible bad streamIDs before the last circuit. This still % gives an error only once every 2 million terabytes (approx). With 48 bits of digest per cell, the probability of an accidental collision is far lower than the chance of hardware failure.} This \emph{leaky pipe} circuit topology allows Alice's streams to exit at different ORs on a single circuit. Alice may choose different exit points because of their exit policies, or to keep the ORs from knowing that two streams originate from the same person. When an OR later replies to Alice with a relay cell, it encrypts the cell's relay header and payload with the single key it shares with Alice, and sends the cell back toward Alice along the circuit. Subsequent ORs add further layers of encryption as they relay the cell back to Alice. To tear down a circuit, Alice sends a \emph{destroy} control cell. Each OR in the circuit receives the \emph{destroy} cell, closes all streams on that circuit, and passes a new \emph{destroy} cell forward. 
But just as circuits are built incrementally, they can also be torn down incrementally: Alice can send a \emph{relay truncate} cell to a single OR on a circuit. That OR then sends a \emph{destroy} cell forward, and acknowledges with a \emph{relay truncated} cell. Alice can then extend the circuit to different nodes, without signaling to the intermediate nodes (or a limited observer) that she has changed her circuit. Similarly, if a node on the circuit goes down, the adjacent node can send a \emph{relay truncated} cell back to Alice. Thus the ``break a node and see which circuits go down'' attack~\cite{freedom21-security} is weakened. \subsection{Opening and closing streams} \label{subsec:tcp} When Alice's application wants a TCP connection to a given address and port, it asks the OP (via SOCKS) to make the connection. The OP chooses the newest open circuit (or creates one if needed), and chooses a suitable OR on that circuit to be the exit node (usually the last node, but maybe others due to exit policy conflicts; see Section~\ref{subsec:exitpolicies}.) The OP then opens the stream by sending a \emph{relay begin} cell to the exit node, using a new random streamID. Once the exit node connects to the remote host, it responds with a \emph{relay connected} cell. Upon receipt, the OP sends a SOCKS reply to notify the application of its success. The OP now accepts data from the application's TCP stream, packaging it into \emph{relay data} cells and sending those cells along the circuit to the chosen OR. There's a catch to using SOCKS, however---some applications pass the alphanumeric hostname to the Tor client, while others resolve it into an IP address first and then pass the IP address to the Tor client. If the application does DNS resolution first, Alice thereby reveals her destination to the remote DNS server, rather than sending the hostname through the Tor network to be resolved at the far end. Common applications like Mozilla and SSH have this flaw. With Mozilla, the flaw is easy to address: the filtering HTTP proxy called Privoxy gives a hostname to the Tor client, so Alice's computer never does DNS resolution. But a portable general solution, such as is needed for SSH, is an open problem. Modifying or replacing the local nameserver can be invasive, brittle, and unportable. Forcing the resolver library to prefer TCP rather than UDP is hard, and also has portability problems. Dynamically intercepting system calls to the resolver library seems a promising direction. We could also provide a tool similar to \emph{dig} to perform a private lookup through the Tor network. Currently, we encourage the use of privacy-aware proxies like Privoxy wherever possible. Closing a Tor stream is analogous to closing a TCP stream: it uses a two-step handshake for normal operation, or a one-step handshake for errors. If the stream closes abnormally, the adjacent node simply sends a \emph{relay teardown} cell. If the stream closes normally, the node sends a \emph{relay end} cell down the circuit, and the other side responds with its own \emph{relay end} cell. Because all relay cells use layered encryption, only the destination OR knows that a given relay cell is a request to close a stream. This two-step handshake allows Tor to support TCP-based applications that use half-closed connections. % such as broken HTTP clients that close their side of the %stream after writing but are still willing to read. 
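
Before turning to integrity checking, the following self-contained sketch illustrates the layered relay-cell processing of Section~\ref{subsec:circuits}: the OP adds a short digest for the intended exit hop and then applies one layer of encryption per hop; each OR strips one layer and delivers the payload only if the digest is recognized. To keep the sketch dependency-free, a SHA-1 based keystream stands in for Tor's AES counter mode, and a simple keyed digest stands in for the running SHA-1 digest described in the next section; both substitutions, and the key handling, are illustrative assumptions.

\begin{verbatim}
# Sketch of leaky-pipe relay-cell processing.
# A SHA-1 keystream replaces AES counter mode,
# and a keyed digest replaces the running
# digest, purely to stay self-contained.
import hashlib, os

def keystream(key, n):
    out, ctr = b"", 0
    while len(out) < n:
        c = ctr.to_bytes(8, "big")
        out += hashlib.sha1(key + c).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def digest4(key, payload):
    return hashlib.sha1(key + payload).digest()[:4]

hop_keys = [os.urandom(16) for _ in range(3)]
exit_hop = 2            # stream exits at hop 2

# OP side: prepend the exit hop's digest, then
# add one keystream layer per hop, starting
# with the exit hop's key.
payload = b"relay data: GET / HTTP/1.0"
cell = digest4(hop_keys[exit_hop], payload)
cell += payload
for k in reversed(hop_keys[:exit_hop + 1]):
    cell = xor(cell, keystream(k, len(cell)))

# Each OR strips its own layer and checks the
# digest; only the exit hop recognizes it.
for hop, k in enumerate(hop_keys):
    cell = xor(cell, keystream(k, len(cell)))
    if cell[:4] == digest4(k, cell[4:]):
        print("delivered at hop", hop, cell[4:])
        break
\end{verbatim}

\noindent Because every layer it cannot remove looks like uniformly random bytes, a middle OR cannot tell whether a given relay cell is addressed to it or to a later hop, which is what makes the leaky-pipe topology possible.
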
\subsection{Integrity checking on streams} \label{subsec:integrity-checking} Because the old Onion Routing design used a stream cipher without integrity checking, traffic was vulnerable to a malleability attack: though the attacker could not decrypt cells, any changes to encrypted data would create corresponding changes to the data leaving the network. This weakness allowed an adversary who could guess the encrypted content to change a padding cell to a destroy cell; change the destination address in a \emph{relay begin} cell to the adversary's webserver; or change an FTP command from {\tt dir} to {\tt rm~*}. (Even an external adversary could do this, because the link encryption similarly used a stream cipher.) Because Tor uses TLS on its links, external adversaries cannot modify data. Addressing the insider malleability attack, however, is more complex. We could do integrity checking of the relay cells at each hop, either by including hashes or by using an authenticating cipher mode like EAX~\cite{eax}, but there are some problems. First, these approaches impose a message-expansion overhead at each hop, and so we would have to either leak the path length or waste bytes by padding to a maximum path length. Second, these solutions can only verify traffic coming from Alice: ORs would not be able to produce suitable hashes for the intermediate hops, since the ORs on a circuit do not know the other ORs' session keys. Third, we have already accepted that our design is vulnerable to end-to-end timing attacks; so tagging attacks performed within the circuit provide no additional information to the attacker. Thus, we check integrity only at the edges of each stream. (Remember that in our leaky-pipe circuit topology, a stream's edge could be any hop in the circuit.) When Alice negotiates a key with a new hop, they each initialize a SHA-1 digest with a derivative of that key, thus beginning with randomness that only the two of them know. Then they each incrementally add to the SHA-1 digest the contents of all relay cells they create, and include with each relay cell the first four bytes of the current digest. Each also keeps a SHA-1 digest of data received, to verify that the received hashes are correct. To be sure of removing or modifying a cell, the attacker must be able to deduce the current digest state (which depends on all traffic between Alice and Bob, starting with their negotiated key). Attacks on SHA-1 where the adversary can incrementally add to a hash to produce a new valid hash don't work, because all hashes are end-to-end encrypted across the circuit. The computational overhead of computing the digests is minimal compared to doing the AES encryption performed at each hop of the circuit. We use only four bytes per cell to minimize overhead; the chance that an adversary will correctly guess a valid hash %, plus the payload the current cell, is acceptably low, given that the OP or OR tear down the circuit if they receive a bad hash. \subsection{Rate limiting and fairness} \label{subsec:rate-limit} Volunteers are more willing to run services that can limit their bandwidth usage. To accommodate them, Tor servers use a token bucket approach~\cite{tannenbaum96} to enforce a long-term average rate of incoming bytes, while still permitting short-term bursts above the allowed bandwidth. % Current bucket sizes are set to ten seconds' worth of traffic. %Further, we want to avoid starving any Tor streams. 
Entire circuits %could starve if we read greedily from connections and one connection %uses all the remaining bandwidth. We solve this by dividing the number %of tokens in the bucket by the number of connections that want to read, %and reading at most that number of bytes from each connection. We iterate %this procedure until the number of tokens in the bucket is under some %threshold (currently 10KB), at which point we greedily read from connections. Because the Tor protocol outputs about the same number of bytes as it takes in, it is sufficient in practice to limit only incoming bytes. With TCP streams, however, the correspondence is not one-to-one: relaying a single incoming byte can require an entire 512-byte cell. (We can't just wait for more bytes, because the local application may be awaiting a reply.) Therefore, we treat this case as if the entire cell size had been read, regardless of the cell's fullness. Further, inspired by Rennhard et al's design in~\cite{anonnet}, a circuit's edges can heuristically distinguish interactive streams from bulk streams by comparing the frequency with which they supply cells. We can provide good latency for interactive streams by giving them preferential service, while still giving good overall throughput to the bulk streams. Such preferential treatment presents a possible end-to-end attack, but an adversary observing both ends of the stream can already learn this information through timing attacks. \subsection{Congestion control} \label{subsec:congestion} Even with bandwidth rate limiting, we still need to worry about congestion, either accidental or intentional. If enough users choose the same OR-to-OR connection for their circuits, that connection can become saturated. For example, an attacker could send a large file through the Tor network to a webserver he runs, and then refuse to read any of the bytes at the webserver end of the circuit. Without some congestion control mechanism, these bottlenecks can propagate back through the entire network. We don't need to reimplement full TCP windows (with sequence numbers, the ability to drop cells when we're full and retransmit later, and so on), because TCP already guarantees in-order delivery of each cell. %But we need to investigate further the effects of the current %parameters on throughput and latency, while also keeping privacy in mind; %see Section~\ref{sec:maintaining-anonymity} for more discussion. We describe our response below. \textbf{Circuit-level throttling:} To control a circuit's bandwidth usage, each OR keeps track of two windows. The \emph{packaging window} tracks how many relay data cells the OR is allowed to package (from incoming TCP streams) for transmission back to the OP, and the \emph{delivery window} tracks how many relay data cells it is willing to deliver to TCP streams outside the network. Each window is initialized (say, to 1000 data cells). When a data cell is packaged or delivered, the appropriate window is decremented. When an OR has received enough data cells (currently 100), it sends a \emph{relay sendme} cell towards the OP, with streamID zero. When an OR receives a \emph{relay sendme} cell with streamID zero, it increments its packaging window. Either of these cells increments the corresponding window by 100. If the packaging window reaches 0, the OR stops reading from TCP connections for all streams on the corresponding circuit, and sends no more relay data cells until receiving a \emph{relay sendme} cell. 
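The circuit-level window bookkeeping just described is compact enough to
sketch directly. The toy model below (Python, for exposition only; the class
and method names are ours, not Tor's) uses the constants given above---1000-cell
windows and 100-cell \emph{relay sendme} increments---to show when an OR stops
reading from its TCP connections and when it sends or consumes a sendme.

{\small
\begin{verbatim}
CIRC_WINDOW_START = 1000      # initial packaging/delivery windows
CIRC_SENDME_INCREMENT = 100   # cells acknowledged per relay sendme

class CircuitThrottle:
    def __init__(self):
        self.package_window = CIRC_WINDOW_START
        self.deliver_window = CIRC_WINDOW_START
        self.delivered_since_sendme = 0

    def can_package(self):
        # When False, the OR stops reading from TCP connections
        # for all streams on this circuit.
        return self.package_window > 0

    def packaged_cell(self):
        assert self.can_package()
        self.package_window -= 1

    def delivered_cell(self, send_sendme):
        self.deliver_window -= 1
        self.delivered_since_sendme += 1
        if self.delivered_since_sendme == CIRC_SENDME_INCREMENT:
            send_sendme()   # relay sendme with streamID zero
            self.deliver_window += CIRC_SENDME_INCREMENT
            self.delivered_since_sendme = 0

    def received_sendme(self):
        self.package_window += CIRC_SENDME_INCREMENT

throttle = CircuitThrottle()
for _ in range(CIRC_WINDOW_START):
    throttle.packaged_cell()
print(throttle.can_package())      # False: must wait for a sendme
throttle.received_sendme()
print(throttle.package_window)     # 100

sent = []
for _ in range(250):
    throttle.delivered_cell(lambda: sent.append("sendme"))
print(len(sent))                   # 2 sendme cells sent back
\end{verbatim}
}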
The OP behaves identically, except that it must track a packaging window and a delivery window for every OR in the circuit. If a packaging window reaches 0, it stops reading from streams destined for that OR. \textbf{Stream-level throttling}: The stream-level congestion control mechanism is similar to the circuit-level mechanism. ORs and OPs use \emph{relay sendme} cells to implement end-to-end flow control for individual streams across circuits. Each stream begins with a packaging window (currently 500 cells), and increments the window by a fixed value (50) upon receiving a \emph{relay sendme} cell. Rather than always returning a \emph{relay sendme} cell as soon as enough cells have arrived, the stream-level congestion control also has to check whether data has been successfully flushed onto the TCP stream; it sends the \emph{relay sendme} cell only when the number of bytes pending to be flushed is under some threshold (currently 10 cells' worth). %% Maybe omit this next paragraph. -NM %Currently, non-data relay cells do not affect the windows. Thus we %avoid potential deadlock issues, for example, arising because a stream %can't send a \emph{relay sendme} cell when its packaging window is empty. These arbitrarily chosen parameters seem to give tolerable throughput and delay; see Section~\ref{sec:in-the-wild}. \section{Rendezvous Points and hidden services} \label{sec:rendezvous} Rendezvous points are a building block for \emph{location-hidden services} (also known as \emph{responder anonymity}) in the Tor network. Location-hidden services allow Bob to offer a TCP service, such as a webserver, without revealing his IP address. This type of anonymity protects against distributed DoS attacks: attackers are forced to attack the onion routing network because they do not know Bob's IP address. Our design for location-hidden servers has the following goals. \textbf{Access-control:} Bob needs a way to filter incoming requests, so an attacker cannot flood Bob simply by making many connections to him. \textbf{Robustness:} Bob should be able to maintain a long-term pseudonymous identity even in the presence of router failure. Bob's service must not be tied to a single OR, and Bob must be able to migrate his service across ORs. \textbf{Smear-resistance:} A social attacker should not be able to ``frame'' a rendezvous router by offering an illegal or disreputable location-hidden service and making observers believe the router created that service. \textbf{Application-transparency:} Although we require users to run special software to access location-hidden servers, we must not require them to modify their applications. We provide location-hiding for Bob by allowing him to advertise several onion routers (his \emph{introduction points}) as contact points. He may do this on any robust efficient key-value lookup system with authenticated updates, such as a distributed hash table (DHT) like CFS~\cite{cfs:sosp01}.\footnote{ Rather than rely on an external infrastructure, the Onion Routing network can run the lookup service itself. Our current implementation provides a simple lookup system on the directory servers.} Alice, the client, chooses an OR as her \emph{rendezvous point}. She connects to one of Bob's introduction points, informs him of her rendezvous point, and then waits for him to connect to the rendezvous point. 
This extra level of indirection helps Bob's introduction points avoid problems associated with serving unpopular files directly (for example, if Bob serves material that the introduction point's community finds objectionable, or if Bob's service tends to get attacked by network vandals). The extra level of indirection also allows Bob to respond to some requests and ignore others. \subsection{Rendezvous points in Tor} The following steps are %We give an overview of the steps of a rendezvous. These are performed on behalf of Alice and Bob by their local OPs; application integration is described more fully below. \begin{tightlist} \item Bob generates a long-term public key pair to identify his service. \item Bob chooses some introduction points, and advertises them on the lookup service, signing the advertisement with his public key. He can add more later. \item Bob builds a circuit to each of his introduction points, and tells them to wait for requests. \item Alice learns about Bob's service out of band (perhaps Bob told her, or she found it on a website). She retrieves the details of Bob's service from the lookup service. If Alice wants to access Bob's service anonymously, she must connect to the lookup service via Tor. \item Alice chooses an OR as the rendezvous point (RP) for her connection to Bob's service. She builds a circuit to the RP, and gives it a randomly chosen ``rendezvous cookie'' to recognize Bob. \item Alice opens an anonymous stream to one of Bob's introduction points, and gives it a message (encrypted with Bob's public key) telling it about herself, her RP and rendezvous cookie, and the start of a DH handshake. The introduction point sends the message to Bob. \item If Bob wants to talk to Alice, he builds a circuit to Alice's RP and sends the rendezvous cookie, the second half of the DH handshake, and a hash of the session key they now share. By the same argument as in Section~\ref{subsubsec:constructing-a-circuit}, Alice knows she shares the key only with Bob. \item The RP connects Alice's circuit to Bob's. Note that RP can't recognize Alice, Bob, or the data they transmit. \item Alice sends a \emph{relay begin} cell along the circuit. It arrives at Bob's OP, which connects to Bob's webserver. \item An anonymous stream has been established, and Alice and Bob communicate as normal. \end{tightlist} When establishing an introduction point, Bob provides the onion router with the public key identifying his service. Bob signs his messages, so others cannot usurp his introduction point in the future. He uses the same public key to establish the other introduction points for his service, and periodically refreshes his entry in the lookup service. The message that Alice gives the introduction point includes a hash of Bob's public key % to identify %the service, along with and an optional initial authorization token (the introduction point can do prescreening, for example to block replays). Her message to Bob may include an end-to-end authorization token so Bob can choose whether to respond. The authorization tokens can be used to provide selective access: important users can get uninterrupted access. %important users get tokens to ensure uninterrupted access. %to the %service. During normal situations, Bob's service might simply be offered directly from mirrors, while Bob gives out tokens to high-priority users. If the mirrors are knocked down, %by distributed DoS attacks or even %physical attack, those users can switch to accessing Bob's service via the Tor rendezvous system. 
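The handshake at the heart of the steps above can be illustrated with a toy
walk-through. The sketch below (Python, exposition only) uses a deliberately
tiny Diffie-Hellman group and leaves out all cell framing and
encryption---including the encryption of Alice's introduction request under
Bob's public key---so it shows only the message flow and the key-hash check by
which Alice knows she shares a key with Bob; none of the names correspond to
Tor's implementation.

{\small
\begin{verbatim}
import hashlib, secrets

P = 0xFFFFFFFB   # toy prime (2^32 - 5); real Tor uses a large DH group
G = 5

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Alice: pick a rendezvous cookie and begin a DH handshake.
cookie = secrets.token_bytes(20)
a_priv, a_pub = dh_keypair()
intro_request = {"rendezvous_point": "OR_7",   # illustrative name
                 "cookie": cookie, "dh": a_pub}

# Bob (reached through an introduction point): finish the handshake,
# then go to Alice's rendezvous point with the cookie and a key hash.
b_priv, b_pub = dh_keypair()
shared_b = pow(intro_request["dh"], b_priv, P)
key_hash = hashlib.sha1(str(shared_b).encode()).digest()
rendezvous_reply = {"cookie": cookie, "dh": b_pub, "key_hash": key_hash}

# Alice, at the rendezvous point: derive the same key, verify the hash.
shared_a = pow(rendezvous_reply["dh"], a_priv, P)
check = hashlib.sha1(str(shared_a).encode()).digest()
assert check == rendezvous_reply["key_hash"]
print("rendezvous complete; shared key material agreed")
\end{verbatim}
}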
Bob's introduction points are themselves subject to DoS---he must open many introduction points or risk such an attack. He can provide selected users with a current list or future schedule of unadvertised introduction points; this is most practical if there is a stable and large group of introduction points available. Bob could also give secret public keys for consulting the lookup service. All of these approaches limit exposure even when some selected users collude in the DoS\@. \subsection{Integration with user applications} Bob configures his onion proxy to know the local IP address and port of his service, a strategy for authorizing clients, and his public key. The onion proxy anonymously publishes a signed statement of Bob's public key, an expiration time, and the current introduction points for his service onto the lookup service, indexed by the hash of his public key. Bob's webserver is unmodified, and doesn't even know that it's hidden behind the Tor network. Alice's applications also work unchanged---her client interface remains a SOCKS proxy. We encode all of the necessary information into the fully qualified domain name (FQDN) Alice uses when establishing her connection. Location-hidden services use a virtual top level domain called {\tt .onion}: thus hostnames take the form {\tt x.y.onion} where {\tt x} is the authorization cookie and {\tt y} encodes the hash of the public key. Alice's onion proxy examines addresses; if they're destined for a hidden server, it decodes the key and starts the rendezvous as described above. \subsection{Previous rendezvous work} %XXXX Should this get integrated into the earlier related work section? -NM Rendezvous points in low-latency anonymity systems were first described for use in ISDN telephony~\cite{jerichow-jsac98,isdn-mixes}. Later low-latency designs used rendezvous points for hiding location of mobile phones and low-power location trackers~\cite{federrath-ih96,reed-protocols97}. Rendezvous for anonymizing low-latency Internet connections was suggested in early Onion Routing work~\cite{or-ih96}, but the first published design was by Ian Goldberg~\cite{ian-thesis}. His design differs from ours in three ways. First, Goldberg suggests that Alice should manually hunt down a current location of the service via Gnutella; our approach makes lookup transparent to the user, as well as faster and more robust. Second, in Tor the client and server negotiate session keys with Diffie-Hellman, so plaintext is not exposed even at the rendezvous point. Third, our design minimizes the exposure from running the service, to encourage volunteers to offer introduction and rendezvous services. Tor's introduction points do not output any bytes to the clients; the rendezvous points don't know the client or the server, and can't read the data being transmitted. The indirection scheme is also designed to include authentication/authorization---if Alice doesn't include the right cookie with her request for service, Bob need not even acknowledge his existence. \section{Other design decisions} \label{sec:other-design} \subsection{Denial of service} \label{subsec:dos} Providing Tor as a public service creates many opportunities for denial-of-service attacks against the network. While flow control and rate limiting (discussed in Section~\ref{subsec:congestion}) prevent users from consuming more bandwidth than routers are willing to provide, opportunities remain for users to consume more network resources than their fair share, or to render the network unusable for others. 
First of all, there are several CPU-consuming denial-of-service attacks wherein an attacker can force an OR to perform expensive cryptographic operations. For example, an attacker can %\emph{create} cell full of junk bytes can force an OR to perform an RSA %decrypt. %Similarly, an attacker can fake the start of a TLS handshake, forcing the OR to carry out its (comparatively expensive) half of the handshake at no real computational cost to the attacker. We have not yet implemented any defenses for these attacks, but several approaches are possible. First, ORs can require clients to solve a puzzle~\cite{puzzles-tls} while beginning new TLS handshakes or accepting \emph{create} cells. So long as these tokens are easy to verify and computationally expensive to produce, this approach limits the attack multiplier. Additionally, ORs can limit the rate at which they accept \emph{create} cells and TLS connections, so that the computational work of processing them does not drown out the symmetric cryptography operations that keep cells flowing. This rate limiting could, however, allow an attacker to slow down other users when they build new circuits. % What about link-to-link rate limiting? Adversaries can also attack the Tor network's hosts and network links. Disrupting a single circuit or link breaks all streams passing along that part of the circuit. Users similarly lose service when a router crashes or its operator restarts it. The current Tor design treats such attacks as intermittent network failures, and depends on users and applications to respond or recover as appropriate. A future design could use an end-to-end TCP-like acknowledgment protocol, so no streams are lost unless the entry or exit point is disrupted. This solution would require more buffering at the network edges, however, and the performance and anonymity implications from this extra complexity still require investigation. \subsection{Exit policies and abuse} \label{subsec:exitpolicies} % originally, we planned to put the "users only know the hostname, % not the IP, but exit policies are by IP" problem here too. Not % worth putting in the submission, but worth thinking about putting % in sometime somehow. -RD Exit abuse is a serious barrier to wide-scale Tor deployment. Anonymity presents would-be vandals and abusers with an opportunity to hide the origins of their activities. Attackers can harm the Tor network by implicating exit servers for their abuse. Also, applications that commonly use IP-based authentication (such as institutional mail or webservers) can be fooled by the fact that anonymous connections appear to originate at the exit OR. We stress that Tor does not enable any new class of abuse. Spammers and other attackers already have access to thousands of misconfigured systems worldwide, and the Tor network is far from the easiest way to launch attacks. %Indeed, because of its limited %anonymity, Tor is probably not a good way to commit crimes. But because the onion routers can be mistaken for the originators of the abuse, and the volunteers who run them may not want to deal with the hassle of explaining anonymity networks to irate administrators, we must block or limit abuse through the Tor network. To mitigate abuse issues, each onion router's \emph{exit policy} describes to which external addresses and ports the router will connect. On one end of the spectrum are \emph{open exit} nodes that will connect anywhere. 
On the other end are \emph{middleman} nodes that only relay traffic to other Tor nodes, and \emph{private exit} nodes that only connect to a local host or network. A private exit can allow a client to connect to a given host or network more securely---an external adversary cannot eavesdrop traffic between the private exit and the final destination, and so is less sure of Alice's destination and activities. Most onion routers in the current network function as \emph{restricted exits} that permit connections to the world at large, but prevent access to certain abuse-prone addresses and services such as SMTP. The OR might also be able to authenticate clients to prevent exit abuse without harming anonymity~\cite{or-discex00}. %The abuse issues on closed (e.g. military) networks are different %from the abuse on open networks like the Internet. While these IP-based %access controls are still commonplace on the Internet, on closed networks, %nearly all participants will be honest, and end-to-end authentication %can be assumed for important traffic. Many administrators use port restrictions to support only a limited set of services, such as HTTP, SSH, or AIM. This is not a complete solution, of course, since abuse opportunities for these protocols are still well known. We have not yet encountered any abuse in the deployed network, but if we do we should consider using proxies to clean traffic for certain protocols as it leaves the network. For example, much abusive HTTP behavior (such as exploiting buffer overflows or well-known script vulnerabilities) can be detected in a straightforward manner. Similarly, one could run automatic spam filtering software (such as SpamAssassin) on email exiting the OR network. ORs may also rewrite exiting traffic to append headers or other information indicating that the traffic has passed through an anonymity service. This approach is commonly used by email-only anonymity systems. ORs can also run on servers with hostnames like {\tt anonymous} to further alert abuse targets to the nature of the anonymous traffic. A mixture of open and restricted exit nodes allows the most flexibility for volunteers running servers. But while having many middleman nodes provides a large and robust network, having only a few exit nodes reduces the number of points an adversary needs to monitor for traffic analysis, and places a greater burden on the exit nodes. This tension can be seen in the Java Anon Proxy cascade model, wherein only one node in each cascade needs to handle abuse complaints---but an adversary only needs to observe the entry and exit of a cascade to perform traffic analysis on all that cascade's users. The hydra model (many entries, few exits) presents a different compromise: only a few exit nodes are needed, but an adversary needs to work harder to watch all the clients; see Section~\ref{sec:conclusion}. Finally, we note that exit abuse must not be dismissed as a peripheral issue: when a system's public image suffers, it can reduce the number and diversity of that system's users, and thereby reduce the anonymity of the system itself. Like usability, public perception is a security parameter. Sadly, preventing abuse of open exit nodes is an unsolved problem, and will probably remain an arms race for the foreseeable future. The abuse problems faced by Princeton's CoDeeN project~\cite{darkside} give us a glimpse of likely issues. 
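To make the notion of an exit policy concrete, the sketch below (Python, for
exposition only) evaluates a destination against an ordered list of
accept/reject rules, first match wins. Both the rule syntax and the
fall-through default are our simplifications for illustration, not Tor's exact
policy grammar.

{\small
\begin{verbatim}
import ipaddress

def parse_rule(rule):
    action, spec = rule.split(" ", 1)
    addr, port = spec.rsplit(":", 1)
    net = None if addr == "*" else ipaddress.ip_network(addr)
    port = None if port == "*" else int(port)
    return action, net, port

def exit_allows(policy, dest_ip, dest_port):
    ip = ipaddress.ip_address(dest_ip)
    for rule in policy:
        action, net, port = parse_rule(rule)
        if (net is None or ip in net) and \
           (port is None or port == dest_port):
            return action == "accept"
    return True   # fall-through default chosen for this sketch

restricted_exit = [
    "reject *:25",           # refuse SMTP to limit spam abuse
    "reject 10.0.0.0/8:*",   # refuse connections into private space
    "accept *:*",
]
middleman = ["reject *:*"]   # relays only within the Tor network

print(exit_allows(restricted_exit, "93.184.216.34", 80))  # True
print(exit_allows(restricted_exit, "93.184.216.34", 25))  # False
print(exit_allows(middleman, "93.184.216.34", 80))        # False
\end{verbatim}
}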
\subsection{Directory Servers} \label{subsec:dirservers} First-generation Onion Routing designs~\cite{freedom2-arch,or-jsac98} used in-band network status updates: each router flooded a signed statement to its neighbors, which propagated it onward. But anonymizing networks have different security goals than typical link-state routing protocols. For example, delays (accidental or intentional) that can cause different parts of the network to have different views of link-state and topology are not only inconvenient: they give attackers an opportunity to exploit differences in client knowledge. We also worry about attacks to deceive a client about the router membership list, topology, or current network state. Such \emph{partitioning attacks} on client knowledge help an adversary to efficiently deploy resources against a target~\cite{minion-design}. Tor uses a small group of redundant, well-known onion routers to track changes in network topology and node state, including keys and exit policies. Each such \emph{directory server} acts as an HTTP server, so clients can fetch current network state and router lists, and so other ORs can upload state information. Onion routers periodically publish signed statements of their state to each directory server. The directory servers combine this information with their own views of network liveness, and generate a signed description (a \emph{directory}) of the entire network state. Client software is pre-loaded with a list of the directory servers and their keys, to bootstrap each client's view of the network. % XXX this means that clients will be forced to upgrade as the % XXX dirservers change or get compromised. argue that this is ok. When a directory server receives a signed statement for an OR, it checks whether the OR's identity key is recognized. Directory servers do not advertise unrecognized ORs---if they did, an adversary could take over the network by creating many servers~\cite{sybil}. Instead, new nodes must be approved by the directory server administrator before they are included. Mechanisms for automated node approval are an area of active research, and are discussed more in Section~\ref{sec:maintaining-anonymity}. Of course, a variety of attacks remain. An adversary who controls a directory server can track clients by providing them different information---perhaps by listing only nodes under its control, or by informing only certain clients about a given node. Even an external adversary can exploit differences in client knowledge: clients who use a node listed on one directory server but not the others are vulnerable. Thus these directory servers must be synchronized and redundant, so that they can agree on a common directory. Clients should only trust this directory if it is signed by a threshold of the directory servers. The directory servers in Tor are modeled after those in Mixminion~\cite{minion-design}, but our situation is easier. First, we make the simplifying assumption that all participants agree on the set of directory servers. Second, while Mixminion needs to predict node behavior, Tor only needs a threshold consensus of the current state of the network. Third, we assume that we can fall back to the human administrators to discover and resolve problems when a consensus directory cannot be reached. Since there are relatively few directory servers (currently 3, but we expect as many as 9 as the network scales), we can afford operations like broadcast to simplify the consensus-building protocol. 
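The client-side acceptance rule---trust a directory only if a threshold of the
directory servers has signed it---can be sketched as follows (Python, for
exposition only). We substitute keyed MACs for public-key signatures so that
the example is self-contained; the server names and keys are placeholders, not
Tor's.

{\small
\begin{verbatim}
import hmac, hashlib

DIR_SERVER_KEYS = {            # shipped with the client software
    "dirserv1": b"key-one",
    "dirserv2": b"key-two",
    "dirserv3": b"key-three",
}

def sign(key, directory):      # stand-in for a public-key signature
    return hmac.new(key, directory, hashlib.sha256).digest()

def client_accepts(directory, signatures):
    valid = sum(
        1 for name, sig in signatures.items()
        if name in DIR_SERVER_KEYS and
           hmac.compare_digest(sig, sign(DIR_SERVER_KEYS[name], directory))
    )
    return valid > len(DIR_SERVER_KEYS) // 2   # strict majority

directory = b"router alpha ...\nrouter beta ...\n"
sigs = {n: sign(k, directory) for n, k in DIR_SERVER_KEYS.items()}
print(client_accepts(directory, sigs))                            # True
print(client_accepts(directory, {"dirserv1": sigs["dirserv1"]}))  # False
\end{verbatim}
}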
To avoid attacks where a router connects to all the directory servers but refuses to relay traffic from other routers, the directory servers must also build circuits and use them to anonymously test router reliability~\cite{mix-acc}. Unfortunately, this defense is not yet designed or implemented. Using directory servers is simpler and more flexible than flooding. Flooding is expensive, and complicates the analysis when we start experimenting with non-clique network topologies. Signed directories can be cached by other onion routers, so directory servers are not a performance bottleneck when we have many users, and do not aid traffic analysis by forcing clients to announce their existence to any central point. \section{Attacks and Defenses} \label{sec:attacks} Below we summarize a variety of attacks, and discuss how well our design withstands them.\\ \noindent{\large\bf Passive attacks}\\ \emph{Observing user traffic patterns.} Observing a user's connection will not reveal her destination or data, but it will reveal traffic patterns (both sent and received). Profiling via user connection patterns requires further processing, because multiple application streams may be operating simultaneously or in series over a single circuit. \emph{Observing user content.} While content at the user end is encrypted, connections to responders may not be (indeed, the responding website itself may be hostile). While filtering content is not a primary goal of Onion Routing, Tor can directly use Privoxy and related filtering services to anonymize application data streams. \emph{Option distinguishability.} We allow clients to choose configuration options. For example, clients concerned about request linkability should rotate circuits more often than those concerned about traceability. Allowing choice may attract users with different %There is economic incentive to attract users by %allowing this choice; needs; but clients who are in the minority may lose more anonymity by appearing distinct than they gain by optimizing their behavior~\cite{econymics}. \emph{End-to-end timing correlation.} Tor only minimally hides such correlations. An attacker watching patterns of traffic at the initiator and the responder will be able to confirm the correspondence with high probability. The greatest protection currently available against such confirmation is to hide the connection between the onion proxy and the first Tor node, by running the OP on the Tor node or behind a firewall. This approach requires an observer to separate traffic originating at the onion router from traffic passing through it: a global observer can do this, but it might be beyond a limited observer's capabilities. \emph{End-to-end size correlation.} Simple packet counting will also be effective in confirming endpoints of a stream. However, even without padding, we may have some limited protection: the leaky pipe topology means different numbers of packets may enter one end of a circuit than exit at the other. \emph{Website fingerprinting.} All the effective passive attacks above are traffic confirmation attacks, which puts them outside our design goals. There is also a passive traffic analysis attack that is potentially effective. Rather than searching exit connections for timing and volume correlations, the adversary may build up a database of ``fingerprints'' containing file sizes and access patterns for targeted websites. He can later confirm a user's connection to a given site simply by consulting the database. 
This attack has been shown to be effective against SafeWeb~\cite{hintz-pet02}. It may be less effective against Tor, since streams are multiplexed within the same circuit, and fingerprinting will be limited to the granularity of cells (currently 512 bytes). Additional defenses could include larger cell sizes, padding schemes to group websites into large sets, and link padding or long-range dummies.\footnote{Note that this fingerprinting attack should not be confused with the much more complicated latency attacks of~\cite{back01}, which require a fingerprint of the latencies of all circuits through the network, combined with those from the network edges to the target user and the responder website.}\\ \noindent{\large\bf Active attacks}\\ \emph{Compromise keys.} An attacker who learns the TLS session key can see control cells and encrypted relay cells on every circuit on that connection; learning a circuit session key lets him unwrap one layer of the encryption. An attacker who learns an OR's TLS private key can impersonate that OR for the TLS key's lifetime, but he must also learn the onion key to decrypt \emph{create} cells (and because of perfect forward secrecy, he cannot hijack already established circuits without also compromising their session keys). Periodic key rotation limits the window of opportunity for these attacks. On the other hand, an attacker who learns a node's identity key can replace that node indefinitely by sending new forged descriptors to the directory servers. \emph{Iterated compromise.} A roving adversary who can compromise ORs (by system intrusion, legal coercion, or extralegal coercion) could march down the circuit compromising the nodes until he reaches the end. Unless the adversary can complete this attack within the lifetime of the circuit, however, the ORs will have discarded the necessary information before the attack can be completed. (Thanks to the perfect forward secrecy of session keys, the attacker cannot force nodes to decrypt recorded traffic once the circuits have been closed.) Additionally, building circuits that cross jurisdictions can make legal coercion harder---this phenomenon is commonly called ``jurisdictional arbitrage.'' The Java Anon Proxy project recently experienced the need for this approach, when a German court forced them to add a backdoor to their nodes~\cite{jap-backdoor}. \emph{Run a recipient.} An adversary running a webserver trivially learns the timing patterns of users connecting to it, and can introduce arbitrary patterns in its responses. End-to-end attacks become easier: if the adversary can induce users to connect to his webserver (perhaps by advertising content targeted to those users), he now holds one end of their connection. There is also a danger that application protocols and associated programs can be induced to reveal information about the initiator. Tor depends on Privoxy and similar protocol cleaners to solve this latter problem. \emph{Run an onion proxy.} It is expected that end users will nearly always run their own local onion proxy. However, in some settings, it may be necessary for the proxy to run remotely---typically, in institutions that want to monitor the activity of those connecting to the proxy. Compromising an onion proxy compromises all future connections through it. 
\emph{DoS non-observed nodes.} An observer who can only watch some of the Tor network can increase the value of this traffic by attacking non-observed nodes to shut them down, reduce their reliability, or persuade users that they are not trustworthy. The best defense here is robustness. \emph{Run a hostile OR.} In addition to being a local observer, an isolated hostile node can create circuits through itself, or alter traffic patterns to affect traffic at other nodes. Nonetheless, a hostile node must be immediately adjacent to both endpoints to compromise the anonymity of a circuit. If an adversary can run multiple ORs, and can persuade the directory servers that those ORs are trustworthy and independent, then occasionally some user will choose one of those ORs for the start and another as the end of a circuit. If an adversary controls $m>1$ of $N$ nodes, he can correlate at most $\left(\frac{m}{N}\right)^2$ of the traffic---although an adversary could still attract a disproportionately large amount of traffic by running an OR with a permissive exit policy, or by degrading the reliability of other routers. \emph{Introduce timing into messages.} This is simply a stronger version of passive timing attacks already discussed earlier. \emph{Tagging attacks.} A hostile node could ``tag'' a cell by altering it. If the stream were, for example, an unencrypted request to a Web site, the garbled content coming out at the appropriate time would confirm the association. However, integrity checks on cells prevent this attack. \emph{Replace contents of unauthenticated protocols.} When relaying an unauthenticated protocol like HTTP, a hostile exit node can impersonate the target server. Clients should prefer protocols with end-to-end authentication. \emph{Replay attacks.} Some anonymity protocols are vulnerable to replay attacks. Tor is not; replaying one side of a handshake will result in a different negotiated session key, and so the rest of the recorded session can't be used. \emph{Smear attacks.} An attacker could use the Tor network for socially disapproved acts, to bring the network into disrepute and get its operators to shut it down. Exit policies reduce the possibilities for abuse, but ultimately the network requires volunteers who can tolerate some political heat. \emph{Distribute hostile code.} An attacker could trick users into running subverted Tor software that did not, in fact, anonymize their connections---or worse, could trick ORs into running weakened software that provided users with less anonymity. We address this problem (but do not solve it completely) by signing all Tor releases with an official public key, and including an entry in the directory that lists which versions are currently believed to be secure. To prevent an attacker from subverting the official release itself (through threats, bribery, or insider attacks), we provide all releases in source code form, encourage source audits, and frequently warn our users never to trust any software (even from us) that comes without source.\\ \noindent{\large\bf Directory attacks}\\ \emph{Destroy directory servers.} If a few directory servers disappear, the others still decide on a valid directory. So long as any directory servers remain in operation, they will still broadcast their views of the network and generate a consensus directory. 
(If more than half are destroyed, this directory will not, however, have enough signatures for clients to use it automatically; human intervention will be necessary for clients to decide whether to trust the resulting directory.) \emph{Subvert a directory server.} By taking over a directory server, an attacker can partially influence the final directory. Since ORs are included or excluded by majority vote, the corrupt directory can at worst cast a tie-breaking vote to decide whether to include marginal ORs. It remains to be seen how often such marginal cases occur in practice. \emph{Subvert a majority of directory servers.} An adversary who controls more than half the directory servers can include as many compromised ORs in the final directory as he wishes. We must ensure that directory server operators are independent and attack-resistant. \emph{Encourage directory server dissent.} The directory agreement protocol assumes that directory server operators agree on the set of directory servers. An adversary who can persuade some of the directory server operators to distrust one another could split the quorum into mutually hostile camps, thus partitioning users based on which directory they use. Tor does not address this attack. \emph{Trick the directory servers into listing a hostile OR.} Our threat model explicitly assumes directory server operators will be able to filter out most hostile ORs. % If this is not true, an % attacker can flood the directory with compromised servers. \emph{Convince the directories that a malfunctioning OR is working.} In the current Tor implementation, directory servers assume that an OR is running correctly if they can start a TLS connection to it. A hostile OR could easily subvert this test by accepting TLS connections from ORs but ignoring all cells. Directory servers must actively test ORs by building circuits and streams as appropriate. The tradeoffs of a similar approach are discussed in~\cite{mix-acc}.\\ \noindent{\large\bf Attacks against rendezvous points}\\ \emph{Make many introduction requests.} An attacker could try to deny Bob service by flooding his introduction points with requests. Because the introduction points can block requests that lack authorization tokens, however, Bob can restrict the volume of requests he receives, or require a certain amount of computation for every request he receives. \emph{Attack an introduction point.} An attacker could disrupt a location-hidden service by disabling its introduction points. But because a service's identity is attached to its public key, the service can simply re-advertise itself at a different introduction point. Advertisements can also be done secretly so that only high-priority clients know the address of Bob's introduction points or so that different clients know of different introduction points. This forces the attacker to disable all possible introduction points. \emph{Compromise an introduction point.} An attacker who controls Bob's introduction point can flood Bob with introduction requests, or prevent valid introduction requests from reaching him. Bob can notice a flood, and close the circuit. To notice blocking of valid requests, however, he should periodically test the introduction point by sending rendezvous requests and making sure he receives them. \emph{Compromise a rendezvous point.} A rendezvous point is no more sensitive than any other OR on a circuit, since all data passing through the rendezvous is encrypted with a session key shared by Alice and Bob. 
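As a sanity check on the $\left(\frac{m}{N}\right)^2$ figure from the ``run a
hostile OR'' discussion above, the short simulation below (Python, exposition
only) assumes uniform path selection without repeated nodes and ignores exit
policies and node capacities, so it lands slightly below the bound; the
parameters are arbitrary.

{\small
\begin{verbatim}
import random

def fraction_compromised(n_nodes=50, n_hostile=5,
                         circuits=200_000, hops=3):
    hostile = set(range(n_hostile))
    hits = 0
    for _ in range(circuits):
        path = random.sample(range(n_nodes), hops)  # no repeats
        if path[0] in hostile and path[-1] in hostile:
            hits += 1
    return hits / circuits

print(f"simulated fraction: {fraction_compromised():.4f}")  # ~0.008
print(f"(m/N)^2 bound:      {(5 / 50) ** 2:.4f}")           # 0.0100
\end{verbatim}
}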
\section{Early experiences: Tor in the Wild} \label{sec:in-the-wild} As of mid-May 2004, the Tor network consists of 32 nodes (24 in the US, 8 in Europe), and more are joining each week as the code matures. (For comparison, the current remailer network has about 40 nodes.) % We haven't asked PlanetLab to provide %Tor nodes, since their AUP wouldn't allow exit nodes (see %also~\cite{darkside}) and because we aim to build a long-term community of %node operators and developers.} Each node has at least a 768Kb/768Kb connection, and many have 10Mb. The number of users varies (and of course, it's hard to tell for sure), but we sometimes have several hundred users---administrators at several companies have begun sending their entire departments' web traffic through Tor, to block other divisions of their company from reading their traffic. Tor users have reported using the network for web browsing, FTP, IRC, AIM, Kazaa, SSH, and recipient-anonymous email via rendezvous points. One user has anonymously set up a Wiki as a hidden service, where other users anonymously publish the addresses of their hidden services. Each Tor node currently processes roughly 800,000 relay cells (a bit under half a gigabyte) per week. On average, about 80\% of each 498-byte payload is full for cells going back to the client, whereas about 40\% is full for cells coming from the client. (The difference arises because most of the network's traffic is web browsing.) Interactive traffic like SSH brings down the average a lot---once we have more experience, and assuming we can resolve the anonymity issues, we may partition traffic into two relay cell sizes: one to handle bulk traffic and one for interactive traffic. Based in part on our restrictive default exit policy (we reject SMTP requests) and our low profile, we have had no abuse issues since the network was deployed in October 2003. Our slow growth rate gives us time to add features, resolve bugs, and get a feel for what users actually want from an anonymity system. Even though having more users would bolster our anonymity sets, we are not eager to attract the Kazaa or warez communities---we feel that we must build a reputation for privacy, human rights, research, and other socially laudable activities. As for performance, profiling shows that Tor spends almost all its CPU time in AES, which is fast. Current latency is attributable to two factors. First, network latency is critical: we are intentionally bouncing traffic around the world several times. Second, our end-to-end congestion control algorithm focuses on protecting volunteer servers from accidental DoS rather than on optimizing performance. % Right now the first $500 \times 500\mbox{B}=250\mbox{KB}$ %of the stream arrives %quickly, and after that throughput depends on the rate that \emph{relay %sendme} acknowledgments arrive. To quantify these effects, we did some informal tests using a network of 4 nodes on the same machine (a heavily loaded 1GHz Athlon). We downloaded a 60 megabyte file from {\tt debian.org} every 30 minutes for 54 hours (108 sample points). It arrived in about 300 seconds on average, compared to 210s for a direct download. We ran a similar test on the production Tor network, fetching the front page of {\tt cnn.com} (55 kilobytes): % every 20 seconds for 8952 data points while a direct download consistently took about 0.3s, the performance through Tor varied. Some downloads were as fast as 0.4s, with a median at 2.8s, and 90\% finishing within 5.3s. 
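The headline figures above are easy to re-derive; the snippet below (Python,
our arithmetic only) checks them against the 512-byte cell size and 498-byte
payload given earlier, and restates the measured slowdowns as ratios.

{\small
\begin{verbatim}
cells_per_week = 800_000
print(f"{cells_per_week * 512 / 1e9:.2f} GB/week")  # 0.41: under half a GB
payload = 498
print(f"{0.80 * payload:.0f} useful bytes/cell toward the client")
print(f"{0.40 * payload:.0f} useful bytes/cell from the client")
print(f"60 MB file: {300 / 210:.1f}x slower than direct")
print(f"55 KB page: median {2.8 / 0.3:.0f}x slower than direct")
\end{verbatim}
}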
It seems that as the network expands, the chance of building a slow circuit (one that includes a slow or heavily loaded node or link) is increasing. On the other hand, as our users remain satisfied with this increased latency, we can address our performance incrementally as we proceed with development. %\footnote{For example, we have just begun pushing %a pipelining patch to the production network that seems to decrease %latency for medium-to-large files; we will present revised benchmarks %as they become available.} %With the current network's topology and load, users can typically get 1-2 %megabits sustained transfer rate, which is good enough for now. %Indeed, the Tor %design aims foremost to provide a security research platform; performance %only needs to be sufficient to retain users~\cite{econymics,back01}. %We can tweak the congestion control %parameters to provide faster throughput at the cost of %larger buffers at each node; adding the heuristics mentioned in %Section~\ref{subsec:rate-limit} to favor low-volume %streams may also help. More research remains to find the %right balance. % We should say _HOW MUCH_ latency there is in these cases. -NM %performs badly on lossy networks. may need airhook or something else as %transport alternative? Although Tor's clique topology and full-visibility directories present scaling problems, we still expect the network to support a few hundred nodes and maybe 10,000 users before we're forced to become more distributed. With luck, the experience we gain running the current topology will help us choose among alternatives when the time comes. \section{Open Questions in Low-latency Anonymity} \label{sec:maintaining-anonymity} In addition to the non-goals in Section~\ref{subsec:non-goals}, many questions must be solved before we can be confident of Tor's security. Many of these open issues are questions of balance. For example, how often should users rotate to fresh circuits? Frequent rotation is inefficient, expensive, and may lead to intersection attacks and predecessor attacks~\cite{wright03}, but infrequent rotation makes the user's traffic linkable. Besides opening fresh circuits, clients can also exit from the middle of the circuit, or truncate and re-extend the circuit. More analysis is needed to determine the proper tradeoff. %% Duplicated by 'Better directory distribution' in section 9. % %A similar question surrounds timing of directory operations: how often %should directories be updated? Clients that update infrequently receive %an inaccurate picture of the network, but frequent updates can overload %the directory servers. More generally, we must find more %decentralized yet practical ways to distribute up-to-date snapshots of %network status without introducing new attacks. How should we choose path lengths? If Alice always uses two hops, then both ORs can be certain that by colluding they will learn about Alice and Bob. In our current approach, Alice always chooses at least three nodes unrelated to herself and her destination. %% This point is subtle, but not IMO necessary. Anybody who thinks %% about it will see that it's implied by the above sentence; anybody %% who doesn't think about it is safe in his ignorance. % %Thus normally she chooses %three nodes, but if she is running an OR and her destination is on an OR, %she uses five. 
Should Alice choose a random path length (e.g.~from a geometric distribution) to foil an attacker who uses timing to learn that he is the fifth hop and thus concludes that both Alice and the responder are running ORs? Throughout this paper, we have assumed that end-to-end traffic confirmation will immediately and automatically defeat a low-latency anonymity system. Even high-latency anonymity systems can be vulnerable to end-to-end traffic confirmation, if the traffic volumes are high enough, and if users' habits are sufficiently distinct~\cite{statistical-disclosure,limits-open}. Can anything be done to make low-latency systems resist these attacks as well as high-latency systems? Tor already makes some effort to conceal the starts and ends of streams by wrapping long-range control commands in identical-looking relay cells. Link padding could frustrate passive observers who count packets; long-range padding could work against observers who own the first hop in a circuit. But more research remains to find an efficient and practical approach. Volunteers prefer not to run constant-bandwidth padding; but no convincing traffic shaping approach has been specified. Recent work on long-range padding~\cite{defensive-dropping} shows promise. One could also try to reduce correlation in packet timing by batching and re-ordering packets, but it is unclear whether this could improve anonymity without introducing so much latency as to render the network unusable. A cascade topology may better defend against traffic confirmation by aggregating users, and making padding and mixing more affordable. Does the hydra topology (many input nodes, few output nodes) work better against some adversaries? Are we going to get a hydra anyway because most nodes will be middleman nodes? Common wisdom suggests that Alice should run her own OR for best anonymity, because traffic coming from her node could plausibly have come from elsewhere. How much mixing does this approach need? Is it immediately beneficial because of real-world adversaries that can't observe Alice's router, but can run routers of their own? To scale to many users, and to prevent an attacker from observing the whole network, it may be necessary to support far more servers than Tor currently anticipates. This introduces several issues. First, if approval by a central set of directory servers is no longer feasible, what mechanism should be used to prevent adversaries from signing up many colluding servers? Second, if clients can no longer have a complete picture of the network, how can they perform discovery while preventing attackers from manipulating or exploiting gaps in their knowledge? Third, if there are too many servers for every server to constantly communicate with every other, which non-clique topology should the network use? (Restricted-route topologies promise comparable anonymity with better scalability~\cite{danezis:pet2003}, but whatever topology we choose, we need some way to keep attackers from manipulating their position within it~\cite{casc-rep}.) Fourth, if no central authority is tracking server reliability, how do we stop unreliable servers from making the network unusable? Fifth, do clients receive so much anonymity from running their own ORs that we should expect them all to do so~\cite{econymics}, or do we need another incentive structure to motivate them? Tarzan and MorphMix present possible solutions. % advogato, captcha When a Tor node goes down, all its circuits (and thus streams) must break. 
Will users abandon the system because of this brittleness? How well does the method in Section~\ref{subsec:dos} allow streams to survive node failure? If affected users rebuild circuits immediately, how much anonymity is lost? It seems the problem is even worse in a peer-to-peer environment---such systems don't yet provide an incentive for peers to stay connected when they're done retrieving content, so we would expect a higher churn rate. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Future Directions} \label{sec:conclusion} Tor brings together many innovations into a unified deployable system. The next immediate steps include: \emph{Scalability:} Tor's emphasis on deployability and design simplicity has led us to adopt a clique topology, semi-centralized directories, and a full-network-visibility model for client knowledge. These properties will not scale past a few hundred servers. Section~\ref{sec:maintaining-anonymity} describes some promising approaches, but more deployment experience will be helpful in learning the relative importance of these bottlenecks. \emph{Bandwidth classes:} This paper assumes that all ORs have good bandwidth and latency. We should instead adopt the MorphMix model, where nodes advertise their bandwidth level (DSL, T1, T3), and Alice avoids bottlenecks by choosing nodes that match or exceed her bandwidth. In this way DSL users can usefully join the Tor network. \emph{Incentives:} Volunteers who run nodes are rewarded with publicity and possibly better anonymity~\cite{econymics}. More nodes means increased scalability, and more users can mean more anonymity. We need to continue examining the incentive structures for participating in Tor. Further, we need to explore more approaches to limiting abuse, and understand why most people don't bother using privacy systems. \emph{Cover traffic:} Currently Tor omits cover traffic---its costs in performance and bandwidth are clear but its security benefits are not well understood. We must pursue more research on link-level cover traffic and long-range cover traffic to determine whether some simple padding method offers provable protection against our chosen adversary. %%\emph{Offer two relay cell sizes:} Traffic on the Internet tends to be %%large for bulk transfers and small for interactive traffic. One cell %%size cannot be optimal for both types of traffic. % This should go in the spec and todo, but not the paper yet. -RD \emph{Caching at exit nodes:} Perhaps each exit node should run a caching web proxy~\cite{shsm03}, to improve anonymity for cached pages (Alice's request never leaves the Tor network), to improve speed, and to reduce bandwidth cost. On the other hand, forward security is weakened because caches constitute a record of retrieved files. We must find the right balance between usability and security. \emph{Better directory distribution:} Clients currently download a description of the entire network every 15 minutes. As the state grows larger and clients more numerous, we may need a solution in which clients receive incremental updates to directory state. More generally, we must find more scalable yet practical ways to distribute up-to-date snapshots of network status without introducing new attacks. \emph{Further specification review:} Our public byte-level specification~\cite{tor-spec} needs external review. We hope that as Tor is deployed, more people will examine its specification. 
\emph{Multisystem interoperability:} We are currently working with the designer of MorphMix to unify the specification and implementation of the common elements of our two systems. So far, this seems to be relatively straightforward. Interoperability will allow testing and direct comparison of the two designs for trust and scalability. \emph{Wider-scale deployment:} The original goal of Tor was to gain experience in deploying an anonymizing overlay network, and learn from having actual users. We are now at a point in design and development where we can start deploying a wider network. Once we have many actual users, we will doubtlessly be better able to evaluate some of our design decisions, including our robustness/latency tradeoffs, our performance tradeoffs (including cell size), our abuse-prevention mechanisms, and our overall usability. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% commented out for anonymous submission \section*{Acknowledgments} We thank Peter Palfrader, Geoff Goodell, Adam Shostack, Joseph Sokol-Margolis, John Bashinski, and Zack Brown for editing and comments; Matej Pfajfar, Andrei Serjantov, Marc Rennhard for design discussions; Bram Cohen for congestion control discussions; Adam Back for suggesting telescoping circuits; and Cathy Meadows for formal analysis of the \emph{extend} protocol. This work has been supported by ONR and DARPA. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \bibliographystyle{latex8} \bibliography{tor-design} \end{document} % Style guide: % U.S. spelling % avoid contractions (it's, can't, etc.) % prefer ``for example'' or ``such as'' to e.g. % prefer ``that is'' to i.e. % 'mix', 'mixes' (as noun) % 'mix-net' % 'mix', 'mixing' (as verb) % 'middleman' [Not with a hyphen; the hyphen has been optional % since Middle English.] % 'nymserver' % 'Cypherpunk', 'Cypherpunks', 'Cypherpunk remailer' % 'Onion Routing design', 'onion router' [note capitalization] % 'SOCKS' % Try not to use \cite as a noun. % 'Authorizating' sounds great, but it isn't a word. % 'First, second, third', not 'Firstly, secondly, thirdly'. % 'circuit', not 'channel' % Typography: no space on either side of an em dash---ever. % Hyphens are for multi-part words; en dashs imply movement or % opposition (The Alice--Bob connection); and em dashes are % for punctuation---like that. % A relay cell; a control cell; a \emph{create} cell; a % \emph{relay truncated} cell. Never ``a \emph{relay truncated}.'' % % 'Substitute ``Damn'' every time you're inclined to write ``very;'' your % editor will delete it and the writing will be just as it should be.' % -- Mark Twain
\iffalse meta-comment File: l3term-glossary.tex Copyright (C) 2018-2020 The LaTeX3 Project It may be distributed and/or modified under the conditions of the LaTeX Project Public License (LPPL), either version 1.3c of this license or (at your option) any later version. The latest version of this license is in the file https://www.latex-project.org/lppl.txt This file is part of the "l3kernel bundle" (The Work in LPPL) and all files in that bundle must be distributed together. The released version of this bundle is available from CTAN. \fi \documentclass{l3doc} \title{% Glossary of \TeX{} terms used to describe \LaTeX3 functions% } \author{% The \LaTeX3 Project\thanks {% E-mail: \href{mailto:[email protected]}% {[email protected]}% }% } \date{Released 2020-10-27} \newcommand{\TF}{\textit{(TF)}} \begin{document} \maketitle This file describes aspects of \TeX{} programming that are relevant in a \LaTeX3 context. \section{Reading a file} Tokenization. Treatment of spaces, such as the trap that \verb|\~~a| is equivalent to \verb|\~a| in expl syntax, or that \verb|~| fails to give a space at the beginning of a line. \section{Structure of tokens} We refer to the documentation of \texttt{l3token} for a complete description of all \TeX{} tokens. We distinguish the meaning of the token, which controls the expansion of the token and its effect on \TeX{}'s state, and its shape, which is used when comparing token lists such as for delimited arguments. At any given time two tokens of the same shape automatically have the same meaning, but the converse does not hold, and the meaning associated with a given shape change when doing assignments. Apart from a few exceptions, a token has one of the following shapes. \begin{itemize} \item A control sequence, characterized by the sequence of characters that constitute its name: for instance, \cs{use:n} is a five-letter control sequence. \item An active character token, characterized by its character code (between $0$ and $1114111$ for \LuaTeX{} and \XeTeX{} and less for other engines) and category code~$13$. \item A character token such as |A| or |#|, characterized by its character code and category code (one of $1$, $2$, $3$, $4$, $6$, $7$, $8$, $10$, $11$ or~$12$ whose meaning is described below). \end{itemize} The meaning of a (non-active) character token is fixed by its category code (and character code) and cannot be changed. We call these tokens \emph{explicit} character tokens. Category codes that a character token can have are listed below by giving a sample output of the \TeX{} primitive \tn{meaning}, together with their \LaTeX3 names and most common example: \begin{itemize} \item[1] begin-group character (|group_begin|, often |{|), \item[2] end-group character (|group_end|, often |}|), \item[3] math shift character (|math_toggle|, often |$|), % $ \item[4] alignment tab character (|alignment|, often |&|), \item[6] macro parameter character (|parameter|, often |#|), \item[7] superscript character (|math_superscript|, often |^|), \item[8] subscript character (|math_subscript|, often |_|), \item[10] blank space (|space|, often character code~$32$), \item[11] the letter (|letter|, such as |A|), \item[12] the character (|other|, such as |0|). \end{itemize} Category code~$13$ (|active|) is discussed below. Input characters can also have several other category codes which do not lead to character tokens for later processing: $0$~(|escape|), $5$~(|end_line|), $9$~(|ignore|), $14$~(|comment|), and $15$~(|invalid|). 
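As a rough illustration of this classification, the toy snippet below (Python,
purely for exposition) maps a handful of characters to the category code
numbers and \LaTeX3 names listed above, using the ``most common example''
characters. A real \TeX{} engine consults a mutable category code table while
reading, so this fixed table only mirrors the usual default-like assignments.

\begin{verbatim}
CATCODES = {
    "\\": (0, "escape"),       "{": (1, "group_begin"),
    "}": (2, "group_end"),     "$": (3, "math_toggle"),
    "&": (4, "alignment"),     "#": (6, "parameter"),
    "^": (7, "math_superscript"), "_": (8, "math_subscript"),
    " ": (10, "space"),        "~": (13, "active"),
    "%": (14, "comment"),
}

def catcode(char):
    if char in CATCODES:
        return CATCODES[char]
    if char.isalpha():
        return (11, "letter")
    return (12, "other")

for c in "\\a{1}~ %":
    print(repr(c), "->", catcode(c))
\end{verbatim}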
The meaning of a control sequence or active character can be identical to that of any character token listed above (with any character code), and we call such tokens \emph{implicit} character tokens. The meaning is otherwise in the following list: \begin{itemize} \item a macro, used in \LaTeX3 for most functions and some variables (|tl|, |fp|, |seq|, \ldots{}), \item a primitive such as \tn{def} or \tn{topmark}, used in \LaTeX3 for some functions, \item a register such as \tn{count}|123|, used in \LaTeX3{} for the implementation of some variables (|int|, |dim|, \ldots{}), \item a constant integer such as \tn{char}|"56| or \tn{mathchar}|"121|, used when defining a constant using \cs{int_const:Nn}, \item a font selection command, \item undefined. \end{itemize} Macros can be \tn{protected} or not, \tn{long} or not (the opposite of what \LaTeX3 calls |nopar|), and \tn{outer} or not (unused in \LaTeX3). Their \tn{meaning} takes the form \begin{quote} \meta{prefix} |macro:|\meta{argument}|->|\meta{replacement} \end{quote} where \meta{prefix} is among \tn{protected}\tn{long}\tn{outer}, \meta{argument} describes parameters that the macro expects, such as |#1#2#3|, and \meta{replacement} describes how the parameters are manipulated, such as~|\int_eval:n{#2+#1*#3}|. This information can be accessed by \cs{cs_prefix_spec:N}, \cs{cs_argument_spec:N}, \cs{cs_replacement_spec:N}. When a macro takes an undelimited argument, explicit space characters (with character code $32$ and category code $10$) are ignored. If the following token is an explicit character token with category code $1$ (begin-group) and an arbitrary character code, then \TeX{} scans ahead to obtain an equal number of explicit character tokens with category code $1$ (begin-group) and $2$ (end-group), and the resulting list of tokens (with outer braces removed) becomes the argument. Otherwise, a single token is taken as the argument for the macro: we call such single tokens \enquote{N-type}, as they are suitable to be used as an argument for a function with the signature~\texttt{:N}. When a macro takes a delimited argument \TeX{} scans ahead until finding the delimiter (outside any pairs of begin-group/end-group explicit characters), and the resulting list of tokens (with outer braces removed) becomes the argument. Note that explicit space characters at the start of the argument are \emph{not} ignored in this case (and they prevent brace-stripping). \section{Quantities and expressions} Integer denotations, dimensions, glue (including \texttt{fill} and \texttt{true pt} and the like). Syntax of integer expressions (including the trap that \verb|-(1+2)| is invalid). \section{\LaTeX3 terms} Terms like ``intexpr'' or ``seq var'' used in syntax blocks. \end{document}
{ "alphanum_fraction": 0.7305426357, "avg_line_length": 40.3125, "ext": "tex", "hexsha": "051c23c1bbdcb4106014e46ea1d8a715099b9cd0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f655fc53a166e82db4e8d44af9196f63de7cd196", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "zauguin/latex3", "max_forks_repo_path": "l3kernel/doc/l3term-glossary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f655fc53a166e82db4e8d44af9196f63de7cd196", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "zauguin/latex3", "max_issues_repo_path": "l3kernel/doc/l3term-glossary.tex", "max_line_length": 98, "max_stars_count": null, "max_stars_repo_head_hexsha": "f655fc53a166e82db4e8d44af9196f63de7cd196", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "zauguin/latex3", "max_stars_repo_path": "l3kernel/doc/l3term-glossary.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1785, "size": 6450 }
%\appendix \section{Appendix: The Initial Static Basis} \label{init-stat-bas-app} \insertion{\thelibrary}{ In this appendix (and the next) we define a minimal initial basis for execution. Richer bases may be provided by libraries.} \insertion{\thelibrary}{ We\index{73.1} shall indicate components of the initial basis by the subscript 0. The initial static basis is $\B_0 = \T_0,\F_0,\G_0,\E_0$, where $F_0 = \emptymap$, $\G_0 = \emptymap$ and $$\T_0\ =\ \{\BOOL,\INT,\REAL,\STRING,\CHAR,\WORD,\LIST,\REF,\EXCN\}$$ The members of $\T_0$ are type names, not type constructors; for convenience we have used type-constructor identifiers to stand also for the type names which are bound to them in the initial static type environment $\TE_0$. Of these type names, ~\LIST~ and ~\REF~ have arity 1, the rest have arity 0; all except $\EXCN$ \insertion{\thelibrary}{and $\REAL$} admit equality. Finally, $\E_0 = (\SE_0,\TE_0,\VE_0)$, where $\SE_0 = \emptymap$, while $\TE_0$ and $\VE_0$ are shown in Figures~\ref{stat-te} and \ref{stat-ve}, respectively. } \deletion{\thelibrary}{ We\index{73.1} shall indicate components of the initial basis by the subscript 0. The initial static basis is \[ \B_0\ =\ (\M_0,\T_0),\F_0,\G_0,\E_0\] where \begin{itemize} \item $\M_0\ =\ \emptyset$ \item $\T_0\ =\ \{\BOOL,\INT,\REAL,\STRING,\LIST,\REF,\EXCN,\INSTREAM,\OUTSTREAM\}$ \item $\F_0\ =\ \emptymap$ \item $\G_0\ =\ \emptymap$ \item $\E_0\ =\ \longE{0}$ \end{itemize} The members of $\T_0$ are type names, not type constructors; for convenience we have used type-constructor identifiers to stand also for the type names which are bound to them in the initial static type environment $\TE_0$. Of these type names, ~\LIST~ and ~\REF~ have arity 1, the rest have arity 0; all except \EXCN, \INSTREAM~ and ~\OUTSTREAM~ admit equality. The components of $\E_0$ are as follows: \begin{itemize} \item $\SE_0\ =\ \emptymap$ \item $\VE_0$ is shown in Figures~\ref{stat-ve} and \ref{stat-veio}. Note that $\Dom\VE_0$ contains those identifiers ({\tt true}, {\tt false}, {\tt nil}, \verb+::+, {\tt ref}) which are basic value constructors, for reasons discussed in Section~\ref{stat-proj}. $\VE_0$ also includes $\EE_0$, for the same reasons. \item $\TE_0$ is shown in Figure~\ref{stat-te}. Note that the type structures in $\TE_0$ contain the type schemes of all basic value constructors. \item $\Dom\EE_0\ =\ \BasExc$~, the set of basic exception names listed in Section~\ref{bas-exc}. In each case the associated type is ~\EXCN~, except that ~$\EE_0({\tt Io})=\STRING\rightarrow\EXCN$. 
\end{itemize} } \deletion{\thelibrary}{ \begin{figure} \begin{center} \begin{tabular}{|rl|} \hline $\var$ & $\mapsto\ \tych$\\ \hline {\tt std\_in} & $\mapsto\ \INSTREAM$\\ {\tt open\_in} & $\mapsto\ \STRING\to\INSTREAM$\\ {\tt input} & $\mapsto\ \INSTREAM\ \ast\ \INT\to\STRING$\\ {\tt lookahead} & $\mapsto\ \INSTREAM\to\STRING$\\ {\tt close\_in} & $\mapsto\ \INSTREAM\to\UNIT$\\ {\tt end\_of\_stream} & $\mapsto\ \INSTREAM\to\BOOL$\\ \multicolumn{2}{|c|}{}\\ {\tt std\_out} & $\mapsto\ \OUTSTREAM$\\ {\tt open\_out} & $\mapsto\ \STRING\to\OUTSTREAM$\\ {\tt output} & $\mapsto\ \OUTSTREAM\ \ast\ \STRING\to\UNIT$\\ {\tt close\_out} & $\mapsto\ \OUTSTREAM\to\UNIT$\\ \hline \end{tabular} \end{center} \vspace{3pt} \caption{Static $\VE_0$ (Input/Output)\index{75.1}} \label{stat-veio} \end{figure}} % \begin{figure} % \begin{center} % \begin{tabular}{|rll|} % \hline % $\tycon$ & $\mapsto\ \{\ \typefcn$, & $\{\con_1\mapsto\tych_1,\ldots,\con_n\mapsto\tych_n\}\ \}\quad (n\geq0)$\\ % \hline % \UNIT & $\mapsto\ \{\ \Lambda().\{ \}$, % & $\emptymap\ \}$ \\ % \BOOL & $\mapsto\ \{\ \BOOL$, & $\{\TRUE\mapsto\BOOL, % \ \FALSE\mapsto\BOOL\}\ \}$\\ % \INT & $\mapsto\ \{\ \INT$, & $\{\}\ \}$\\ % \REAL & $\mapsto\ \{\ \REAL$, & $\{\}\ \}$\\ % \STRING & $\mapsto\ \{\ \STRING$, & $\{\}\ \}$\\ % \LIST & $\mapsto\ \{\ \LIST$, & $\{\NIL\mapsto\forall\atyvar\ .\ \atyvar\ \LIST$,\\ % & & \ml{::}$\mapsto\forall\atyvar\ . % \ \atyvar\ast\atyvar\ \LIST % \to\atyvar\ \LIST\}\ \}$\\ % %\LIST & \multicolumn{2}{l|}{$\mapsto\ \{\ \LIST, \{\NIL\mapsto % % \forall\alpha.\alpha\LIST,\ % % \ml{::}\mapsto\forall\alpha. % % \alpha\ast\alpha\LIST % % \to\alpha\LIST\}\ \}$ % % }\\ % \REF & $\mapsto\ \{\ \REF$, & \ adhocreplacementl{\thenoimptypes}{6cm}{$\{\REF\mapsto\forall\ \aityvar\ .\ % \aityvar\to\aityvar\ \REF\}\ \}$}{$\{\REF\mapsto\forall\ \atyvar\ .\ % \atyvar\to\atyvar\ \REF\}\ \}$}\\ % \EXCN & $\mapsto\ \{\ \EXCN$, & $\emptymap\ \}$\\ % \INSTREAM & $\mapsto\ \{\ \INSTREAM$,& $\emptymap\ \}$ \\ % \OUTSTREAM & $\mapsto\ \{\ \OUTSTREAM$,& $\emptymap\ \}$ \\ % \hline % \end{tabular} % \end{center} % \caption{Static $\TE_0$\index{75.2}} % \label{stat-te} % \end{figure} \vskip-5mm \insertion{\thelibrary}{ \begin{figure}[h] \begin{center} \begin{tabular}{|rll|} \hline $\tycon$ & $\mapsto\ (\ \typefcn$, & $\{\vid_1\mapsto(\tych_1,\is_1),\ldots,\vid_n\mapsto(\tych_n,\is_n)\}\ )\quad (n\geq0)$\\ \hline \UNIT & $\mapsto\ (\ \Lambda().\{ \}$, & $\emptymap\ )$ \\ \BOOL & $\mapsto\ (\ \BOOL$, & $\{\TRUE\mapsto(\BOOL,\isc), \ \FALSE\mapsto(\BOOL,\isc)\}\ )$\\ \INT & $\mapsto\ (\ \INT$, & $\{\}\ )$\\ \WORD & $\mapsto\ (\ \WORD$, & $\{\}\ )$\\ \REAL & $\mapsto\ (\ \REAL$, & $\{\}\ )$\\ \STRING & $\mapsto\ (\ \STRING$, & $\{\}\ )$\\ %\UNISTRING & $\mapsto\ (\ \UNISTRING$, & $\{\}\ )$\\ \CHAR & $\mapsto\ (\ \CHAR$, & $\{\}\ )$\\ %\UNICHAR & $\mapsto\ (\ \UNICHAR$, & $\{\}\ )$\\ \LIST & $\mapsto\ (\ \LIST$, & $\{\NIL\mapsto(\forall\atyvar\ .\ \atyvar\ \LIST, \isc)$,\\ & & \ml{::}$\mapsto(\forall\atyvar\ . 
\ \atyvar\ast\atyvar\ \LIST \to\atyvar\ \LIST, \isc)\}\ )$\\ \REF & $\mapsto\ (\ \REF$, & $\{\REF\mapsto(\forall\ \atyvar\ .\ \atyvar\to\atyvar\ \REF,\isc)\}\ )$\\ \EXCN & $\mapsto\ (\ \EXCN$, & $\emptymap\ )$\\ \hline \end{tabular} \end{center} \vskip-3mm \caption{Static $\TE_0$\index{75.2}} \vskip-3mm \label{stat-te} \end{figure}} \note{\thelibrary}{Figure (Static $VE_0$): many entries removed (now in library/basis)} \insertion{\thelibrary}{\begin{figure}[h] \begin{tabular}{|rl|rl|} \multicolumn{2}{c}{NONFIX}& \multicolumn{2}{c}{INFIX}\\ \hline $\vid$ & $\mapsto\ (\tych,\is)$ & $\vid$ & $\mapsto\ (\tych,\is)$\\ \hline \REF & $\mapsto\ (\forall\ \atyvar\ .\ \atyvar\to\atyvar\ \REF$, \isc) & \multicolumn{2}{l|}{Precedence 5, right associative :} \\ {\tt nil} & $\mapsto\ (\forall\atyvar.\ \atyvar\ \LIST$, \isc) & \boxml{::} & $\mapsto\ (\forall\atyvar. \atyvar\;{\ast}\;\atyvar\;\LIST \to\atyvar\;\LIST$, \isc)\\ {\tt true} & $\mapsto\ (\BOOL,\isc)$ & \multicolumn{2}{l|}{Precedence 4, left associative :}\\ {\tt false} & $\mapsto\ (\BOOL,\isc)$ & \boxml{=} & $\mapsto\ (\forall\aetyvar.\ \aetyvar\ \ast\ \aetyvar\to\BOOL, \isv)$\\ {\tt Match} & $\mapsto\ (\EXCN,\ise)$ & \multicolumn{2}{l|}{Precedence 3, left associative :} \\ {\tt Bind} & $\mapsto\ (\EXCN, \ise)$ & \boxml{:=} & $\mapsto\ (\forall\atyvar.\ \atyvar\ \REF\ \ast\ \atyvar\to\{\}, \isv)$\\ \hline \end{tabular} \vspace{3pt} Note: In type schemes we have taken the liberty of writing $\ty_1\ast\ty_2$ in place of $\{\mbox{\tt 1}\mapsto\ty_1,\mbox{\tt 2}\mapsto\ty_2\}$. \vskip-4mm \caption{Static $\VE_0$\index{74}} \vskip-2mm \label{stat-ve} \end{figure} }
{ "alphanum_fraction": 0.5154795158, "avg_line_length": 42.7462686567, "ext": "tex", "hexsha": "2c48ecf82835c59b4c8fe94c775a204c5af8d384", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8aaa3f3c23be43210be064cd0c0bf4c56c6c50cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "baguette/emblem-sandbox", "max_forks_repo_path": "doc/definition/app3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8aaa3f3c23be43210be064cd0c0bf4c56c6c50cf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "baguette/emblem-sandbox", "max_issues_repo_path": "doc/definition/app3.tex", "max_line_length": 128, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8aaa3f3c23be43210be064cd0c0bf4c56c6c50cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "baguette/emblem-sandbox", "max_stars_repo_path": "doc/definition/app3.tex", "max_stars_repo_stars_event_max_datetime": "2016-01-11T20:01:15.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-11T20:01:15.000Z", "num_tokens": 2955, "size": 8592 }
% (c) Jakub Stejskal
% Master Thesis
% Performance Testing and Analysis of Qpid-Dispatch Router
% Chapter 6

\chapter{Experimental Evaluation}
\label{Experimental Evaluation}

This chapter summarizes the results of the performance testing and experimental evaluation of Maestro. We split the experiments into two parts. The first performs basic measurements with Maestro\,1.3.0, which includes the Maestro Agent and the AMQP Inspector. During these experiments we focused on finding the highest possible throughput of a singlepoint topology of Qpid-Dispatch (message router) and \emph{Apache ActiveMQ Artemis} (message broker), and of multipoint topologies with three nodes of Qpid-Dispatch or with Apache ActiveMQ Artemis in the middle. These experimental topologies are depicted in the Figure \ref{fig:basic_topologies}. The latter series of experiments is focused on behavior testing of the topologies, which involves message router reliability and recovery testing.

For the experimental evaluation we used Qpid-Dispatch stable version 1.0.0 and Apache ActiveMQ Artemis stable version 2.3.0. Note that Qpid-Dispatch will be referred to as message router and Apache ActiveMQ Artemis as message broker in this chapter. Since the testing was executed over multiple topology types, we used the Topology Generator for quick automatic changes of the topology. Each test was executed against an established topology where all components were newly installed and restarted between each test scenario; this was done during the cleaning stage.

For the experimental evaluation we used the machines specified in the Table \ref{tab:machines}. The reason why the clients run on the more powerful machines is that we needed at least three machines for the SUT nodes, but only two IBM Xeon machines were available during the experimental evaluation, and for a proper comparison all SUTs have to run on the same machine type.

% Please add the following required packages to your document preamble:
% \usepackage[table,xcdraw]{xcolor}
% If you use beamer only pass "xcolor=table" option, i.e. \documentclass[xcolor=table]{beamer}
\begingroup
\setlength{\tabcolsep}{10pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.35} % Default value: 1
\begin{table}[H]
\centering
\begin{tabular}{|l|r|r|r|}
\hline
\rowcolor[HTML]{C5E3DF}
\textbf{Component} & \textbf{Machine} & \textbf{CPU} & \textbf{RAM [$GB$]} \\ \hline
SUT & Opteron & 8 & 8 \\ \hline
Clients & IBM Xeon & 16 & 16 \\ \hline
\end{tabular}
\caption{Machines and their properties, which were used for the experimental evaluation.}
\label{tab:machines}
\end{table}
\endgroup

\section{Basic Performance Measurements}
\label{Basic Performance Measurements}
Maestro works as an orchestration system and requires a proper infrastructure before one can run any test for the experimental evaluation. The architecture of Maestro, described in the Chapter \ref{Messaging Performance Tool}, specifies that in the ideal scenario one needs at least four machines for running a simple test: the Maestro broker, sender, receiver, and the SUT. The number of needed machines obviously rises with more complex scenarios and larger networks. Examples of the generated experimental topologies are depicted in the Figure \ref{fig:basic_topologies}. For these configurations we compared the throughput and latency of the individual combinations. During all measurements we used a Maestro Inspector to inspect one of the SUTs, depending on the topology type. Note that for the message router we used the AMQP Inspector and for the message broker the ActiveMQ Inspector.
The topologies were picked based on current performance testing and known topologies, where some performance degradation was already found during the previous testing. \begin{figure}[h] \centering \begin{minipage}{0.45\linewidth} \subfloat[Topology with a single router node.\label{fig:basic_topology_router}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_router_single.pdf}} \end{minipage} \begin{minipage}{0.45\linewidth} \subfloat[Topology with a single Broker node.\label{fig:basic_topology_broker}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_broker_single.pdf}} \end{minipage} \begin{minipage}{0.45\linewidth} \subfloat[Topology consisting of routers nodes only.\label{fig:basic_topology_router_line}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_router.pdf}} \end{minipage} \begin{minipage}{0.45\linewidth} \subfloat[Topology with Broker in the middle.\label{fig:basic_topology_broker_line}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_broker.pdf}} \end{minipage} \caption[Examples of experimental topologies created for basic performance testing and experiments with Maestro.]{Examples of experimental topologies created for basic performance testing and experiments with Maestro. The arrows indicates the communication path between topology components.}\label{fig:basic_topologies} \end{figure} Each test case has specific parameters which can be defined by the user. The summary of available parameter is in the following list: \begin{itemize} \setlength\itemsep{0em} \item \textbf{MESSAGE\_SIZE}\,---\,message size in bytes. \item \textbf{PARALLEL\_COUNT}\,---\,number of connected clients to the SUT during the test. \item \textbf{TEST\_DURATION}\,---\,test duration specified as time value (e.g. \texttt{120s}, \texttt{10m}) or number of messages (10,000,000) to transfer. \item \textbf{RATE}\,---\,rate of each connected client; 0 represents unbounded test. \item \textbf{INSPECTOR\_NAME}\,---\,name of inspector implementation (ActivemqInspector or InterconnectInspector). \item \textbf{MANAGEMENT\_INTERFACE}\,---\,URL where inspector will inspect the SUT. \item \textbf{MAESTRO\_BROKER}\,---\,URL to Maestro Broker. \item \textbf{SEND\_RECEIVE\_URL} (singlepoint only)\,---\,URL where sender and receiver connects. \item \textbf{SEND\_URL}\,---\,URL where sender connects. \item \textbf{RECEIVE\_URL}\,---\,URL where receiver connects. \item \textbf{EXT\_POINT\_SOURCE}\,---\,URL to public git repository with code handlers. \item \textbf{EXT\_POINT\_BRANCH}\,---\,branch which should be used for ext point repository. \item \textbf{EXT\_POINT\_COMMAND}\,---\,command executed by the Agent. \end{itemize} \subsection{Throughput} \label{Throughput} We measured throughput only by load generators\,---\,\emph{Maes\-tro Sender} and \emph{Maestro Receiver}. Load generation depends on the test properties as one can see the test properties for each test case in the Table \ref{tab:test_case_throughput}. Maestro is able to create an unbounded rate test, during which it generates as much load as it can. This type of test was used to reach the maximum handled rate of message router and Message Broker. The unbound rate during the test is achieved by setting the environment variable \emph{RATE} to value 0. The throughput test cases are focused on maximum throughput of simple or complex topologies. 
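For illustration, the unbounded singlepoint test against the message router can be parameterized by exporting the properties listed above with the values from the Table \ref{tab:test_case_throughput}; the sketch below does this from Python before the Maestro test scripts are started. The endpoint URLs are placeholders that depend on the deployed environment, and the exact invocation of the test scripts is not shown here.

\begin{verbatim}
import os

# Unbounded singlepoint throughput test against the message router.
# The URLs below are placeholders for the actual lab environment.
os.environ.update({
    "MESSAGE_SIZE": "256",
    "PARALLEL_COUNT": "5",
    "TEST_DURATION": "15m",
    "RATE": "0",  # 0 = unbounded rate
    "MAESTRO_BROKER": "mqtt://maestro-broker:1883",
    "SEND_RECEIVE_URL": "amqp://sut-router:5672/test.queue",
    "INSPECTOR_NAME": "InterconnectInspector",
    "MANAGEMENT_INTERFACE": "amqp://sut-router:5673",
})
# ...the Maestro test scripts are then launched with this environment.
\end{verbatim}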
% Please add the following required packages to your document preamble: % \usepackage{multirow} % \usepackage[table,xcdraw]{xcolor} % If you use beamer only pass "xcolor=table" option, i.e. \documentclass[xcolor=table]{beamer} \begingroup \setlength{\tabcolsep}{10pt} % Default value: 6pt \renewcommand{\arraystretch}{1.35} % Default value: 1 % \setlength{\arrayrulewidth}{2pt} \begin{table}[H] \centering \begin{tabular}{|l|r|r|r|r|} \hline \rowcolor[HTML]{C5E3DF} & \multicolumn{2}{c|}{\textbf{Singlepoint}} & \multicolumn{2}{c|}{\textbf{Multipoint}} \\ \hline \rowcolor[HTML]{C5E3DF} \textbf{Test Property} & \textbf{Router} & \textbf{Broker} & \textbf{Full Router} & \textbf{With Broker} \\ \hline \textbf{MESSAGE\_SIZE [$B$]} & \multicolumn{4}{r|}{256} \\ \hline \textbf{PARALLEL\_COUNT} & \multicolumn{4}{r|}{5} \\ \hline \textbf{TEST\_DURATION [$min.$]} & \multicolumn{4}{r|}{15} \\ \hline \textbf{RATE [$msg \cdot s^{-1}$]} & \multicolumn{4}{r|}{0} \\ \hline \end{tabular} \caption{Test case settings for throughput measurements.} \label{tab:test_case_throughput} \end{table} \endgroup \subsubsection*{Single Node} The first tests were ran against the single node topologies, which are depicted in the Figures \ref{fig:basic_topology_router} and \ref{fig:basic_topology_broker}. These topologies contains only one SUT node, which is forwarding messages from sender to receiver. During the test the SUT node is inspected by the proper Maestro Inspector. The measured throughput is depicted in the Figure \ref{fig:rate-single} where one can see the comparison of tests with 15\,minutes duration, which tries to achieve the highest possible throughput. One can see that the maximum throughput of message router, as a standalone network component, can reach around 90,000 messages per second. On the other hand, the lone Messaging Broker reaches only about 30,000 messages per second. This throughput difference is caused by the fact, that Broker stores all of the messages in the memory until clients want them. This is the main feature of the broker, because it operates as an message distributor in the network. On contrary the router only routes the messages to the destination so it does not need to store message in the memory. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-throughput.pdf} \caption{Chart of the maximum throughput of router and broker during the singlepoint test case. One can see the significant difference between those two components.} \label{fig:rate-single} \end{figure} In the Figure \ref{fig:router-single-memory} we can see the memory usage of message router during the test. We can see here, that the totally allocated memory is around 45\,kB from which it is used only around 13-28\,kB. If we compare this with the memory allocation for the Broker, we can see the huge difference between these values. The memory allocated for the Broker is depicted in the Figure \ref{fig:broker-single-memory} and we can see that the allocated memory is around 2\,GB of memory and used memory is around 300-900\,MB. This is caused by messages being stored in the memory. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-router-throughput-memory.pdf} \caption{The total allocated memory and memory-in-use by message router during the test. 
The data was collected by the inspector every 5\,seconds.} \label{fig:router-single-memory} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-broker-throughput-memory.pdf} \caption{The total memory allocation for the message broker service. One can see that the broker allocates more memory compared to message router in the Figure \ref{fig:router-single-memory}.} \label{fig:broker-single-memory} \end{figure} \subsubsection*{Multipoint Topology} For the multipoint experiments we used topologies depicted in the Figures \ref{fig:basic_topology_router_line} and \ref{fig:basic_topology_broker_line}. The network throughput can naturally be influenced by other devices connected to the topology. So the singlepoint topology was extended by another components by adding two other routers around the original SUT. The versions of the additional SUTs are the same as the original ones. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-throughput.pdf} \caption{Measured throughput of message router and message broker during the multipoint case study. One can see the performance degradation of message router and improvements of message broker on that Figure.} \label{fig:rate-multipoint-router} \end{figure} In the Figure \ref{fig:rate-multipoint-router} one can see, that adding routers to the broker node raises achievable throughput to the 48,000 messages per second. On the other hand, the topology consisting only of the routers shows significant performance degradation. The throughput falls from the 90,000 messages per second to the approximately 23,000 messages per second. This degradation is caused by the interior flow-control mechanism, which should prevent the overload of the network. However, in this case study we can see that the performance degradation is too high and the mechanism used in the Qpid-Dispatch should be re-implemented. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-router-only-throughput-memory.pdf} \caption{Message router's memory usage during the multipoint case study. Used memory is higher than in the single-point.} \label{fig:router-multipoint-memory} \end{figure} Based on that mechanism, the memory usage of the middle router depicted in the Figure \ref{fig:router-multipoint-memory} is higher than during the previous case study. Memory used by all threads is around two times higher and the mean value is around 43\,kB. On the other hand, the memory allocated for the broker component remains the same as in the previous case study. The memory monitoring for this case study is depicted in the Figure \ref{fig:broker-multipoint-memory}. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-router-broker-throughput-memory.pdf} \caption{Memory usage for Broker remains almost the same as in the single-point case, but with less spikes.} \label{fig:broker-multipoint-memory} \end{figure} \subsubsection*{Conclusion} The collected data during the throughput measurements revealed unexpected and considerable performance degradation in the serial connection of the message router. The comparison between the single and multi-point case study is in the Figure \ref{fig:basic-throughput-comparison}, which groups together all throughput measurements data into one chart. 
Here one can see the performance improvement between single instance Broker test and the test of topology with the broker (yellow and green color), and performance degradation between router topologies (red and blue color). The summary of results is also available in the Table \ref{tab:throughput-summary}. % Please add the following required packages to your document preamble: % \usepackage{multirow} % \usepackage[table,xcdraw]{xcolor} % If you use beamer only pass "xcolor=table" option, i.e. \documentclass[xcolor=table]{beamer} \begingroup \setlength{\tabcolsep}{10pt} % Default value: 6pt \renewcommand{\arraystretch}{1.35} % Default value: 1 \begin{table}[h] \centering \begin{tabular}{|l|r|r|r|r|} \hline \rowcolor[HTML]{C5E3DF} & \multicolumn{2}{c|}{\textbf{Throughput [$msg \cdot s^{-1}$]}} & \multicolumn{2}{c|}{\textbf{Memory}} \\ \hline \rowcolor[HTML]{C5E3DF} \textbf{Test Type} & \textbf{Expected} & \textbf{Measured} & \textbf{Total} & \textbf{Used max} \\ \hline \textbf{Single Router} & - & 90,000 & 45\,kB & 28\,kB \\ \hline \textbf{Single Broker} & - & 30,000 & 2\,GB & 0.9\,GB \\ \hline \textbf{Line of Routers} & 90,000 & \cellcolor[HTML]{FFCCC9}23,000 & 49\,kB & 43\,kB \\ \hline \textbf{Line with Broker} & 30,000 & \cellcolor[HTML]{9AFF99}48,000 & 2\,GB & 0.9\,GB \\ \hline \end{tabular} \caption{Table with collected data with highlighted performance improvements and degradations.} \label{tab:throughput-summary} \end{table} \endgroup \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{obrazky-figures/charts/basic-throughput.pdf} \caption{The comparison of all measured throughputs for different components and topologies.} \label{fig:basic-throughput-comparison} \end{figure} \subsection{Latency} \label{Latency} Latency is measured only by Maestro Receiver from certain load samples. Since the broker is a distribution service, which needs to store messages for some time, or create and keep queues for clients, it has higher requirements for system resources. On the other hand message router has only one purpose\,---\,to route the messages. This makes it more faster than the Broker. So high load can be unprofitable if one wants better latency during the communication, especially in the case of topology with the broker. The broker can handle less messages than router, but using router can raise broker's throughput since it can control the load. Thus it gives more time to broker to process messages even with higher load. The test cases for latency measurements has slightly different settings than throughput measurement. The settings for this measurements are shown in the Table \ref{tab:test_case_latency}. Note, that \emph{RATE} and \emph{TEST\_DURATION} are sets for each of five connected clients, which means that test is finished after sending 10,000,000 messages. % Please add the following required packages to your document preamble: % \usepackage{multirow} % \usepackage[table,xcdraw]{xcolor} % If you use beamer only pass "xcolor=table" option, i.e. 
% \documentclass[xcolor=table]{beamer}
\begingroup
\setlength{\tabcolsep}{10pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.35} % Default value: 1
\begin{table}[H]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
\rowcolor[HTML]{C5E3DF}
 & \multicolumn{2}{c|}{\textbf{Singlepoint}} & \multicolumn{2}{c|}{\textbf{Multipoint}} \\ \hline
\rowcolor[HTML]{C5E3DF}
\textbf{Test Property} & \textbf{Router} & \textbf{Broker} & \textbf{Full Router} & \textbf{With Broker} \\ \hline
\textbf{MESSAGE\_SIZE [$B$]} & \multicolumn{4}{r|}{256} \\ \hline
\textbf{PARALLEL\_COUNT} & \multicolumn{4}{r|}{5} \\ \hline
\textbf{TEST\_DURATION [$msg$]} & \multicolumn{4}{r|}{2,000,000} \\ \hline
\textbf{RATE [$msg \cdot s^{-1}$]} & 15,000 & 4,600 & 3,600 & 7,600 \\ \hline
\end{tabular}
\caption{Test case settings for latency measurements.}
\label{tab:test_case_latency}
\end{table}
\endgroup

\subsubsection*{Single Node}
The latency measurements are done at 80\,\% of the maximum rates discussed in the Subsection \ref{Throughput}. In the Figure \ref{fig:latency-single-router} one can see the latency difference that we measured between the message router and the message broker. In the single node measurements, the router's latency is slightly higher in most of the cases. Even after discussion we did not find a conclusive reason why the router is slower than the Broker in this case.
%This is caused by the Maestro Sender and Receiver technologies, because they are implemented in Java and Maestro Sender and Receiver are using JMS for sending and receiving messages, which is same approach as the Broker uses. For better latency comparison it is necessary to measure Broker's latency with current implementations and Qpid-Dispatch's latency with Python clients. However, the current version of Maestro offers only the JMS clients.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-latency.pdf}
	\caption{Latency chart showing the difference between the router and the broker latency at 80\,\% of the maximum rate.}
	\label{fig:latency-single-router}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-latency-18k.pdf}
	\caption{Latency chart showing the difference between the router and the broker latency at the same load. The router's latency is significantly better than in the previous case.}
	\label{fig:latency-single-same-load}
\end{figure}

We then reran the latency measurements with the same load for both test cases. The load was set to 4,500 messages per second for each connected client. The output is depicted in the Figure \ref{fig:latency-single-same-load}: the router's latency is significantly better than in the previous case, but still slightly worse than the Broker's. This is probably caused by some Maestro internal processes.

The memory used by the message router is slightly lower and much more stable than in the case of maximum throughput, as one can see in the Figure \ref{fig:latency-single-router-memory}. This confirms that the used memory depends on the load: if the load on the router is higher, it needs more memory for proper routing.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-router-latency-memory.pdf}
	\caption{Memory usage of the message router is much more stable when the router is not under the maximum load.
The spikes are caused by some unexpected events in the topology.}
	\label{fig:latency-single-router-memory}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/singlepoint-broker-latency-memory.pdf}
	\caption{The Broker's memory usage has fewer spikes when the load is only about 80\,\% of the maximum.}
	\label{fig:latency-single-broker-memory}
\end{figure}

In the Figure \ref{fig:latency-single-broker-memory} one can see the Inspector output for the Broker's used memory. The used memory here is much more stable than in the previous cases, which is caused, as in the router case, by the lower load on the Broker. The maximum used memory stays the same as in the previous cases.

\subsubsection*{Multipoint Topology}
In the Figure \ref{fig:latency-multipoint-router} one can see the measured latency for the multinode topology of three routers and for the topology of two routers with the Broker in the middle. The latency curves show that the routers are able to deliver messages to their destination faster than the topology with the Broker, again because the Broker needs to store the messages in its memory. The latency of the topology with the Broker reaches around 16\,$ms$ for 90\,\% of the samples; on the other hand, the topology consisting only of routers has a significantly better latency of around 1\,$ms$ for 90\,\% of the samples. The conclusion is that the collected data shows the router to be much faster than the broker under these circumstances.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-latency.pdf}
	\caption{Latency comparison between the topology with only routers and the topology with the middle Broker. The router network is significantly faster here.}
	\label{fig:latency-multipoint-router}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-router-only-latency-memory.pdf}
	\caption{The memory usage of the router is affected by the throughput.}
	\label{fig:latency-multiple-router-memory}
\end{figure}

The collected data about the memory usage supports the previous statements. In the Figure \ref{fig:latency-multiple-router-memory} we show the memory used by the message router. The curve is very stable and the values move around 9\,MB of used memory. The memory used by the Broker is shown in the Figure \ref{fig:latency-multiple-broker-memory} and is very similar to the previous measurements.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/multipoint-router-broker-latency-memory.pdf}
	\caption{Chart of the memory allocation on the Broker node.}
	\label{fig:latency-multiple-broker-memory}
\end{figure}

\subsubsection*{Conclusion}
\enlargethispage{2em}
During the latency measurements we collected and compared data for the message router and message broker topologies. The summary of the latency measurements is available in the Table \ref{tab:latency-summary}. As already mentioned, the message router is faster than the message broker in this model environment.

% Please add the following required packages to your document preamble:
% \usepackage{multirow}
% \usepackage[table,xcdraw]{xcolor}
% If you use beamer only pass "xcolor=table" option, i.e.
\documentclass[xcolor=table]{beamer} \begingroup \setlength{\tabcolsep}{10pt} % Default value: 6pt \renewcommand{\arraystretch}{1.35} % Default value: 1 \begin{table}[H] \centering \begin{tabular}{|l|r|r|r|r|r|} \hline \rowcolor[HTML]{C5E3DF} & \multicolumn{2}{c|}{\textbf{Latency [$ms$]}} & \multicolumn{2}{c|}{\textbf{Memory}} & \\ \hline \rowcolor[HTML]{C5E3DF} \textbf{Test Type} & \textbf{90 \%} & \textbf{99 \%} & \textbf{Total} & \textbf{Used max} & \textbf{Duration} [$s$] \\ \hline \textbf{Single Router} & 2.263 & 12.495 & 38 kB & 28 kB & \cellcolor[HTML]{9AFF99}136 \\ \hline \textbf{Single Broker} & 0.386 & 181.759 & 2 GB & 0.9 GB & \cellcolor[HTML]{FFCCC9}425 \\ \hline \textbf{Line of Routers} & \cellcolor[HTML]{9AFF99}1.292 & 50.815 & 46 kB & 8 kB & \cellcolor[HTML]{FFCCC9}540 \\ \hline \textbf{Line with Broker} & \cellcolor[HTML]{FFCCC9}15.487 & 1031.167 & 2 GB & 0.9 GB & \cellcolor[HTML]{9AFF99}250 \\ \hline \end{tabular} \caption{The summary table with collected latency data with highlighted performance improvements and degradations.} \label{tab:latency-summary} \end{table} \endgroup \section{Behavior Measurements} \label{Behavior Measurements} Moreover, we present some results collected during the behavioral testing using the Maestro Agent extension. The topologies used in the following scenarios are depicted in the Figure \ref{fig:agent_topologies}. The topology depicted in the Figure \ref{fig:agent_line} is used to demonstrate Agent functions and message loss during the crash. The other topology depicted in the Figure \ref{fig:agent_redundant} represent a basic line link with redundant router\,3 which is configured as a slave and root router\,2 which is configured as a master. Here we demonstrate the recovery time of Qpid-Dispatch. \begin{figure}[h] \centering \begin{minipage}{0.45\linewidth} \subfloat[Line topology with connected Inspector and Agent.\label{fig:agent_line}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_router_agent_line.pdf}} \end{minipage} \begin{minipage}{0.45\linewidth} \subfloat[Topology with redundant router.\label{fig:agent_redundant}]{\includegraphics[width=\linewidth]{obrazky-figures/basic_topology_router_agent_redundant.pdf}} \end{minipage} \caption[Examples of experimental topologies created for behavioral performance testing and experiments with Maestro.]{Examples of experimental topologies created for behavioral performance testing and experiments with Maestro. The arrows indicate the communication path between topology components and the numbers represent the cost of the path.}\label{fig:agent_topologies} \end{figure} On each topology four tests were executed with different actions performed by the Agent. The test properties remains the same as during the latency testing for router line topology with the difference in test duration, which was set to 1,500,000 messages per connected client. The following actions, with additional parameter such as duration, were performed during the test: \begin{itemize} \setlength\itemsep{0em} \item \textbf{Restart}\,---\,simple router restart. \item \textbf{Shutdown}\,---\,simple shutdown and restart for different time duration. \end{itemize} \subsection{Agent Demonstration} \label{Agent Demonstration} The agent performed specific action in the third minute of the test scenario (there can be a small delay caused by the repository download on the Agent). The shutdown actions have specific duration, which was set to 10, 60 and 120 seconds. 
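To make these actions concrete, the following sketch shows the kind of handler the Agent could execute through the ext point mechanism described earlier. The service name and the use of \texttt{systemctl} are assumptions about the SUT machines; the actual handlers used for these tests live in the referenced ext point repository.

\begin{verbatim}
#!/usr/bin/env python3
# Sketch of an ext point handler interrupting the middle router.
import subprocess
import sys
import time

SERVICE = "qdrouterd"  # assumed name of the Qpid-Dispatch service

def shutdown(duration_s: int) -> None:
    """Stop the router, wait, then start it again.

    A duration of 0 corresponds to the plain restart action."""
    subprocess.run(["systemctl", "stop", SERVICE], check=True)
    time.sleep(duration_s)
    subprocess.run(["systemctl", "start", SERVICE], check=True)

if __name__ == "__main__":
    shutdown(int(sys.argv[1]) if len(sys.argv) > 1 else 0)
\end{verbatim}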
Since the topology used for this type of test has neither a redundant path to the destination nor a Broker to work as a message store, messages got lost during the actions. Note that the test was run without message acknowledgment enabled for the router and the clients. In the Figure \ref{fig:agent-throughput} one can see the throughput affected by the restart and shutdown actions in every case study. The magnitude of the impact depends on the action duration, hence a longer shutdown loses more messages than a short restart. However, the chart shows that the router can re-establish the lost connections with the clients without problems once it is started again. The different test durations point to the fact that Maestro detected the connection issues and waited for the connections to be re-established.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/agent-throughput.pdf}
	\caption{Maestro Agent demonstration against a simple topology with restart and shutdown in the third minute of the test.}
	\label{fig:agent-throughput}
\end{figure}

The latency is affected as well: for a significant share of the messages the latency rises from 1\,$ms$ to 64\,$ms$. Note, however, that some messages were lost, which leads to a smaller number of samples for the latency computation. The message loss ratio is captured in the Table~\ref{tab:agent_demonstration}. One can see that the message router lost 39,518 messages; at the aggregate offered rate of 18,000 messages per second (five clients at 3,600 messages per second each) this corresponds to roughly 2,195\,$ms$ worth of traffic. Regarding this, we can say that a router restart interrupts the link for about 2,195\,$ms$.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/agent-latency.pdf}
	\caption{Latency diagram affected by the actions simulating the connection issues.}
	\label{fig:agent-latency}
\end{figure}

% Please add the following required packages to your document preamble:
% \usepackage{multirow}
% \usepackage[table,xcdraw]{xcolor}
% If you use beamer only pass "xcolor=table" option, i.e. \documentclass[xcolor=table]{beamer}
\begingroup
\setlength{\tabcolsep}{10pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.35} % Default value: 1
\begin{table}[H]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
\rowcolor[HTML]{C5E3DF}
\textbf{Action} & \textbf{Duration [$s$]} & \textbf{Expected [$msg$]} & \textbf{Lost [$msg$]} & \textbf{Percent} \\ \hline
Restart & 0 &  & 39,518 & 0.53 \% \\ \cline{1-2} \cline{4-5}
 & 10 &  & 220,445 & 2.94 \% \\ \cline{2-2} \cline{4-5}
 & 60 &  & 871,661 & 11.62 \% \\ \cline{2-2} \cline{4-5}
\multirow{-3}{*}{Shutdown} & 120 & \multirow{-4}{*}{7,500,000} & 918,266 & 12.25 \% \\ \hline
\end{tabular}
\caption{Table with summary of lost messages during the specific actions on the middle router node.}
\label{tab:agent_demonstration}
\end{table}
\endgroup

\subsection{Measurement With Redundant Router}
During this experiment the Agent performed the same actions as in the previous test cases. The difference is that the topology now has a slave router connected to the network, which is ready to route the messages when the master router crashes. In the Figure \ref{fig:agent-redundant-throughput} the throughput is depicted for all tests on this topology. The Agent performs the actions in the third minute, which causes a spike under the otherwise stable load curve, but the throughput rises back quickly. This spike is caused by a small delay before the redundant router starts its job. It needs some time for warm-up, which involves the memory allocation depicted in the Figure \ref{fig:agent-redundant-memory}.
As one can see, there are no additional spikes after the master is turned back on, hence the first spike is caused only by the redundant router routing traffic for the first time.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/agent-redundant-throughput.pdf}
	\caption{Throughput comparison between the test cases with different Agent executions. The spike is caused by the warm-up period of the redundant router.}
	\label{fig:agent-redundant-throughput}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/restart-redundant-agent-memory.pdf}
	\caption{Allocated memory of the redundant router during the restart. One can see that the router allocated new memory when the master router crashed and the slave had to handle the load. This memory stays allocated until the tear down.}
	\label{fig:agent-redundant-memory}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/agent-redundant-latency.pdf}
	\caption{Latency diagram of the redundant router topology where the Agent performs different actions. The latency remains the same for all the test cases, which points to good routing between the routers.}
	\label{fig:agent-redundant-latency}
\end{figure}

Since we want to know how long it takes the router to re-establish connections after the crash, we can find the answer in the Figure \ref{fig:agent-redundant-unpressetled}. One can see the detail of the test case with the router restart action, which is executed three minutes after the test starts. The monitored router is the redundant one, so we can see that it handled the load for two seconds, which is the time needed for the restart. After this time the master router was able to route the load again and the slave router just awaited further communication. This statement is also supported by the results collected and discussed in the Section \ref{Agent Demonstration}.

\begin{figure}[H]
	\centering
	\includegraphics[width=1\linewidth]{obrazky-figures/charts/restart-redundant-agent-routerLink.pdf}
	\caption{The chart captures unsettled messages on the redundant router node. The slave router handled the load for two seconds.}
	\label{fig:agent-redundant-unpressetled}
\end{figure}

The conclusion is that Qpid-Dispatch is able to recover from a crash in less than three seconds, as long as nothing blocks the service from starting. When the router is down, the topology is updated and the previous hop no longer has a path to the crashed router, so the clients cannot affect the router start after the crash. However, even with a redundant path there is a chance that some messages are lost, as captured in the Table \ref{tab:agent_redundant_lost}. To avoid this, it is necessary to turn on the acknowledgment mechanism for AMQP messages, which should prevent message loss but will affect the performance.

% Please add the following required packages to your document preamble:
% \usepackage{multirow}
% \usepackage[table,xcdraw]{xcolor}
% If you use beamer only pass "xcolor=table" option, i.e.
% \documentclass[xcolor=table]{beamer}
\begingroup
\setlength{\tabcolsep}{10pt} % Default value: 6pt
\renewcommand{\arraystretch}{1.35} % Default value: 1
\begin{table}[H]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
\rowcolor[HTML]{C5E3DF}
\textbf{Action} & \textbf{Duration [$s$]} & \textbf{Expected [$msg$]} & \textbf{Lost [$msg$]} & \textbf{Percent} \\ \hline
Restart & 0 &  & 21,804 & 0.29 \% \\ \cline{1-2} \cline{4-5}
 & 10 &  & 13,359 & 0.18 \% \\ \cline{2-2} \cline{4-5}
 & 60 &  & 16,205 & 0.22 \% \\ \cline{2-2} \cline{4-5}
\multirow{-3}{*}{Shutdown} & 120 & \multirow{-4}{*}{7,500,000} & 22,042 & 0.29 \% \\ \hline
\end{tabular}
\caption{Table with summary of lost messages while the specific actions were performed on the master router node in the topology with a redundant path.}
\label{tab:agent_redundant_lost}
\end{table}
\endgroup
{ "alphanum_fraction": 0.7725594276, "avg_line_length": 77.548098434, "ext": "tex", "hexsha": "0b6949556177af8dc4650cbd2ba01ccc2294de75", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3cce932e8a04b7223155a6624bf1dfc2e45d4e95", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Frawless/Master-thesis", "max_forks_repo_path": "2017-performance-testing/chapters/xstejs24-performance-06-chapter.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3cce932e8a04b7223155a6624bf1dfc2e45d4e95", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Frawless/Master-thesis", "max_issues_repo_path": "2017-performance-testing/chapters/xstejs24-performance-06-chapter.tex", "max_line_length": 1068, "max_stars_count": null, "max_stars_repo_head_hexsha": "3cce932e8a04b7223155a6624bf1dfc2e45d4e95", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Frawless/Master-thesis", "max_stars_repo_path": "2017-performance-testing/chapters/xstejs24-performance-06-chapter.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9063, "size": 34664 }
% !TEX root =../thesis-letomes.tex \chapter{Software Engineering} The project has been defined by a relatively wide scope of disciplines that we have worked on. Mathematic derivations, astrophysics intuition, machine learning, high-performance computing, and software engineering. The last of those particularly because of the simultaneous need for both rapid prototyping and strong performance characteristics. Essentially, we needed to be able to test a plethora of different ideas to answer all our questions, but since the computations are so relatively expensive, we wanted to build a few solid modules that could be treated as black boxes for our overarching purposes. Essentially, we rewrote the simulator from the precursor project \cite{Saxe2015} from the ground up, with a focus on modularity and extensibility. This was done on basis of the model inherent in the original, to give us a consistent "ground truth" with which to compare our new simulator. \section{Software Architecture Overview} Our system is organized as two general spheres with an interface between them. One sphere is the ES algorithm and its various auxiliary functions, and the other is the orbsim module, which takes care of the astrophysics aspect; turning launch parameters into trajectories. Between them is an interface that turns machine learning parameters into astrophysics parameters, and the reverse. Visualized in \cref{fig:software_architecture}, this architecture lets us keep variable names consistent with the literature for both domains. The multidisciplinary nature of this project and its authors made this a boon. \begin{figure} \centering \includegraphics[width=\linewidth]{fig/software_architecture} \caption{The architecture of the project, ES module inputs parameters to orbital simulator, which returns scores and trajectories. Not pictured are the many auxiliary analysis scripts which also use orbsim} \label{fig:software_architecture} \end{figure} \section{Software Architecture} \subsection{Orbsim Module} We have arranged our code into a python module that offers us some nice modularity and extensibility, at least within the bounds of our relatively specific problem. What follows is an explanation of the architecture of this module. We packaged the new simulator into its own python module, installable with pip, to ensure proper reproducibility across machines and different python configurations. We then implemented the simulator into our ES framework, which we had previously developed using mock data, and tuned its launch parameters to let it function as a decent search strategy for the lunar case. As we pursued this goal, performance became a severe issue. The ES algorithm wants to compute trajectories thousands or millions of times, and that was not feasible with each trajectory taking \SIrange{60}{300}{\second} to compute, so we delved into the simulator again in order to optimize it with regards to runtime. \subsubsection{Abstractions} The simulator is a numerical solver for an analytical problem, implementing algorithms that have been defined through that analysis. Thus, we have taken care to keep nomenclature consistent between the simulator and its analytical foundations (i.e. consistency between ``code and paper''). This is complicated somewhat by the fact that we are applying Evolution Strategies as our search strategy, which carries with it its own set of nomenclature from the machine learning world. 
We have kept the two things largely separate, with the ES module interacting with the simulator through a relatively simple interface. This works fine, since ES is a black-box optimizer anyway.

\subsubsection{Simulator module}
Here, we have the interface between the ES portion and the simulator.

\noindent The function \texttt{launch\_sim} takes launch parameters that the ES algorithm uses for its optimization (in the form of a decision vector \(\psi\)), reformulates them, and starts a simulation based on them. It then returns a delta-v for that single run, along with the associated saved path. That delta-v is, as mentioned before, our fitness function, so based on that result, ES can continue to do its job.

\subsubsection{Integrators Module}
This contains the main loop of our simulator algorithm, along with subfunctions for the individual steps.

\subsubsection{Equations of Motion Module}
This module contains all coupled equations of motion and a ton of convenience functions that compute intermediate expressions for use in the main algorithm. They are hidden away here to reduce clutter in \texttt{Integrators.py}.

\subsection{The ES module}
There are several different variations of the ES module, each executing the algorithm in a slightly different way. They all conduct black-box optimization on a decision vector that describes the initial parameters for a trajectory.

The basic (and first) variant works by calling the orbsim module every time the algorithm wants to find the fitness of a given point in our parameter space. It then waits a few seconds, depending on how long the given simulation takes, and uses the returned value as the function to minimize. The algorithm is parallelized to work on however many CPU cores are available, with different starting conditions.

We found out quickly that this approach was not reasonable for rapid iteration on the parameters of the ES algorithm, so we leveraged the fact that the angle of the burn vector is relatively unimportant, and is theoretically sub-optimal at any value but 0, to flatten the problem space to two dimensions and eventually compute the matrix seen in \cref{fig:golf_course_s1024}. We then made another implementation that simply performs a lookup in that matrix, truncating its $\psi$ coordinates to integers. This let us actually make changes and see their effect quickly. This was the critical step necessary to make progress on the question of whether ES is a good method for the problem, and it did in fact not happen until relatively late into the project, with more of our effort in this regard having gone towards optimizing and developing the CUDA version.

The algorithm defines a list of starting points $\psi_{0..k}$, corresponding to $k$ multi-start individuals. They are sampled from a set of disjoint bounds and passed to the evolution function, which is described below.

The fitness function takes the decision vector \(\psi\) as argument, calls on orbsim to run a simulation with the contained parameters, and returns the result of that simulation: the \(\Delta v\) value, or, if the rocket did not hit the target, how close it came at the minimum, \(d_{min}\). The fitness is a single scalar, so the fact that this scalar can mean two different things (a change in velocity or a distance) causes some problems for estimating gradients in our space. We mitigate this by penalizing an unsuccessful mission heavily, adding an upward bias and weight to its score.
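To make this interface concrete, the following sketch shows the general shape of such a fitness wrapper. The tuple assumed to be returned by \texttt{launch\_sim} and the penalty constants are illustrative assumptions; the actual implementation differs in its details.

\begin{verbatim}
# Minimal sketch of the fitness wrapper around the simulator call.
# The return shape of launch_sim and the penalty constants are
# illustrative assumptions, not the values used in the project.

MISS_BIAS = 100.0    # upward bias added to every unsuccessful mission
MISS_WEIGHT = 10.0   # weight on the closest-approach distance d_min

def make_fitness(launch_sim):
    """Wrap the simulator entry point into a scalar function of psi."""
    def fitness(psi):
        hit, delta_v, d_min, _path = launch_sim(psi)  # assumed return shape
        if hit:
            return delta_v                      # successful: minimize delta-v
        return MISS_BIAS + MISS_WEIGHT * d_min  # miss: heavily penalized
    return fitness
\end{verbatim}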
We then define the ES algorithm with an \texttt{evolve} function, which executes a single run of the algorithm. This function takes a population \(\psi\): a collection of decision vectors. For each vector at step \(t\) \((\psi_{i,t})\), \texttt{evolve} creates several jittered copies \(\epsilon\) of that vector, akin to creating a Gaussian point cloud in the 3-dimensional problem space that the vectors span. Each of the points in \(\epsilon\) is then run through the fitness function, and the weighted average over \(\epsilon\) -- according to fitness score -- is taken as the direction for the next point \(\psi_{i,t+1}\). We then take a step in that direction, scaled by our learning rate \(\alpha\). This process repeats until the algorithm has converged upon a local minimum, or until some maximum number of steps has been taken.

For the points in \(\epsilon\), we took an adaptive approach, in the interest of performance. \(\epsilon_i\) should contain enough points to get a decent estimation of the gradient around \(\psi_i\). That number is of the order \num{1e3}. This implies a large number of fitness evaluations, and with our relatively expensive fitness function, we needed to compromise a bit. We let the fitness function accept a custom limit on its runtime, and let the evaluations at the jittered points, \(F(\psi + \sigma\epsilon_i)\), terminate after a number of steps that was two orders of magnitude lower than that for the evaluation at the central point, \(F(\psi)\). For such evaluations, we also increased the error tolerance commensurately, since it would otherwise mean that those paths would be only a fraction of the length, which would have been completely useless.

This adaptive precision approach was not a reasonable compromise, we found, since the paths diverged much too quickly from each other, and in particular because they diverged in their absolute length as well, puzzlingly.

With the creation of the pre-computed fitness matrix, this method became obsolete.

This function is run in parallel on many cores, each with a different set of starting conditions \(\psi\). When all threads have finished, the best results are extracted and plotted, giving their \(\Delta v\).

Originally, we took care of parallelization with PyGMO's method of creating an archipelago, i.e.\ a collection of islands. Each island would then run the algorithm on its own separate population, and each island is assigned its own thread of execution. This convenient metaphor and easy implementation was the main draw of PyGMO for us, but we found that it introduced a significant performance overhead, and furthermore did not allow customization to the degree that we required. We therefore reimplemented the parallelization with less abstraction towards the end of the project.

\subsection{Early Adaptive Measures}
With our original, relatively unrestrained boundaries (two full circles, and a wide range of thrust power), we naturally saw a very large number of failures. Many were computationally cheap, such as those that shot directly into the earth. Others were very expensive, like ones that were only barely above escape velocity, and just orbited the earth until the iteration counter maxed out.

The model also wasted a lot of time looking at paths that were irredeemably bad: shooting at full power directly away from the moon, for example. With small perturbations of $\psi$, those paths see very little in the way of improvement, so for an inordinate number of evolution cycles, such paths would languish in that neighborhood, never getting better.
Our human intuition tells us, of course, that it is more feasible simply to kill such a languishing candidate and re-randomize it. However, this has the effect of selecting for individuals that converge very quickly. That, as it turns out, essentially means selecting for Hohmann transfers, since they are much less sensitive to perturbations. We are not looking for the straightforward solutions, but rather the unexpected ones. The paths that were successful in \cite{Saxe2015} were a hair's breadth in either direction from spinning off into oblivion, at least from the perspective of an algorithm working with a black-box simulator. This means that, as a matter of principle, we should not discard individuals for performing poorly initially, given the very `all-or-nothing' nature of the problem space.

We did, however, in the interest of computing time, try to nudge them on a little, by implementing a relatively naive system of adaptive learning rate and adaptive jitter spread. If an individual performs very poorly, we use our knowledge that its neighbors in a wide area will likely perform similarly, and increase its search radius $\sigma$ by some factor. This means that it will have a higher chance of selecting some jitter points that show some improvement, either by getting it closer to the target or by hitting it. Then, once the cloud of jittered points $\epsilon$ has been weighted, and our direction has been found, the step $\alpha$ that we take in that direction is commensurately increased as well. The idea is essentially a softer version of the notion of killing under-performing individuals, mentioned above, since if the scaling factor for $\alpha$ and $\sigma$ were very large, that would be essentially equivalent to re-initializing the point entirely. This method at least retains the idea of giving the mavericks a chance to find that one crazy solution that comes out of nowhere. This was a naive, non-standard way of conducting variational optimization; the idea is good, but there are formal, empirically proven ways to do this properly.

We also restricted the search boundaries, though we were hesitant to do so overly. Shooting directly into the earth is not computationally expensive (it terminates in one cycle), but it is pretty pointless to try, so we restricted the burn angle to only shoot outwards from earth. Additionally, we scaled back the maximum power of our burn so we wouldn't go rocketing into space with an impulse that would see us never coming back. Okay, not exactly rocket science so far (it is, actually, but not idiomatically at least). One other bad type of path that ended up taking a lot of our computing time was the kind that fired roughly perpendicular to our starting angular velocity. If our burn vector was too short, we would be recaptured by earth, but instead of going back into an orbit, we would `stall', and come straight back down, again hitting the earth. If the burn vector had a high magnitude, we would either sail off into space, or, in the lucky event that we were captured by the moon, simply have conducted a really inefficient Hohmann-like transfer. This would become the flattened, pre-computed search space seen in \cref{fig:golf_course_s1024}. Restricting the search space with outside knowledge of the problem was, and always is, valuable when doing machine learning.

\section{High Performance Computing}
ES is an approach that requires a large number of evaluations of its fitness function with each iteration.
This means that, for a problem like ours, where each fitness evaluation requires a significant numerical simulation to complete, performance optimization is critical. The update function is as follows:
\begin{empheq}{align}
\label{eq:Feval}
\psi_{t+1} = \psi_t + \alpha\frac{1}{n\sigma}\sum_{i=1}^{n}F(\psi_t + \sigma\epsilon_i)\epsilon_i
\end{empheq}
with $F(x)$ corresponding to a fitness evaluation with parameters $x$. As such, each iteration requires $n$ fitness evaluations. The proper choice of $n$ varies with problem dimensionality; the higher it is, the better a gradient estimator we get. For our low-dimensional problem, 30 is decent, but 100--200 is noticeably smoother.

Performance optimization is two-dimensional in this case: decrease the time taken by each fitness evaluation, and parallelize, i.e.\ execute several fitness evaluations at the same time. The good thing about ES is that the many fitness evaluations are independent, within an iteration of the algorithm at least. Since we are multi-starting, we evaluate multiple sets of iterations in parallel as well. The algorithm is thus incredibly well suited for parallelism: we can utilize thousands of parallel execution threads with an approximately linear efficiency gain.

\subsection{Numba}
Python is a notoriously slow language, since it is interpreted rather than compiled to machine code ahead of time; with Numba, however, we can mitigate this problem without sacrificing too much of the very high productivity that Python offers. We attempted to have Numba optimize our code for us, but found that our ``proper software engineering'' philosophy was preventing this from working to any meaningful effect. Numba is only able to give significant performance increases if working with basic types: integers, floats, strings, and lists of those, more or less. As soon as complicated data structures are introduced into the mix, Numba drops to `object mode', where it attempts to find subsections of the function that are purely defined with basic types. This mode was useless for us, simply making the code run slower due to the overhead of the compilation.

In order to get the performance increases that the problem required, we had to make some sacrifices in our idealized architecture, ditching any complicated return types, any use of exceptions, and the \texttt{Planet} class that we were using to contain our data. This was a painful sacrifice, since it prevented us from simply calling the algorithm with a named celestial body and instantly having all the relevant parameters in place for computing a trajectory. Instead, we were forced to split it into several functions with duplicated code and a significant increase in visual and lexical complexity; avoiding exactly this had been a large reason for the redesign in the first place. Regrettable, but a valuable learning experience.

All this heart-wrenching work was not for nothing, however, since with the pre-compilation of the symplectic Störmer-Verlet algorithm in place, we were seeing speedups of a factor of 200--300. Once this architecture was in place and giving the same results as the original simulator, we were able to make the first running versions of the ES algorithm on real data, though as it turned out it was still too slow. Fitness evaluations took on the order of 3--5 seconds each, prohibiting any true at-scale ES.

\subsection{CUDA}
Attempting to gain additional performance from our simulator, we tried to run it on GPUs.
Given that the fitness evaluations are completely independent in ES, it is a great candidate for massive parallelization. This is also doable with Numba, but requires an even more granular refactoring of the simulator code, and conscious management of data transfer between CPU and GPU, since the communication overhead associated with GPU work can very easily eclipse the performance gained by the massive parallelization. The amount of data to transfer is very small for the ES algorithm, however, since we do not need to communicate the actual path, but rather just the fitness and a boolean hit/no-hit value.

We began by looking at a performance-optimized CPU version, but its speed was not enough, even on the DTU HPC CPU clusters, which have two \SI{2.8}{\GHz} 10-core Intel Xeon processors. They are nothing to scoff at by themselves, but performance was still a limiting factor for much of the project, so it made sense to build a GPU version. In the end, most of the experimentation was conducted on a pre-computed fitness grid to improve the iteration time, with the experiments also being run against real-valued versions of the algorithm on the GPU.

\section{Unit Testing}
\label{sec:unit-testing}
For quite a few weeks of this MSc thesis project, our 3D simulator gave nonsense results when we tried running a simple circular orbit around the sun. Instead of staying in the circular orbit, it immediately plummeted into the sun within 2160 iterations of the program, moving faster than the speed of light, as shown in \cref{fig:r4b-bug}. Something was clearly wrong with the program, and with so many lines of code containing lengthy equations of motion, coordinate transformations, initial conditions, etc., it was very difficult to spot exactly where the problem was. After spending days looking for the mistake, we decided to take a more systematic approach: we started \emph{unit testing} the critical parts of the code and steadily increased the \emph{test coverage} of the program.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.50\linewidth]{fig/r4b-bug.png}
    \caption{3D plot of an early buggy 3D simulator. The yellow dot in the middle marks the Sun, Earth's orbit is traced in blue, Mars' orbit in red, and the spacecraft trajectory in black. Instead of staying in orbit around the Earth, the spacecraft accelerated hard towards the sun in the very first time step iteration, crashing the program after 1473 iterations, just before reaching the sun faster than the speed of light.}
    \label{fig:r4b-bug}
\end{figure}

\subsection{What is unit testing?}
Complex computer software usually undergoes various kinds of testing during its development cycle. For a computer program such as ours, with lots of implemented mathematical formulae, the most relevant kind of systematic testing is \emph{unit testing}. In unit testing, various units of code are tested in isolation against some ``ground truth'', i.e.\ some test values that are obtained externally or manually written up. The units are often, as in our case, the \emph{functions} of the program.

\subsection{Our Mathematica-Python Unit Testing Workflow}
Our simulator program is written in the Python programming language, most of it inside functions. For example, we have a function that iterates a single coordinate or momentum a single time step forward, a function that converts position or velocity vectors between different coordinate systems, and a function that calculates circular orbital speed or period.
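Purely as an illustration of the kind of function involved (the names and signatures below are examples, not the project's exact API), such helpers are typically only a few lines each:

\begin{verbatim}
import numpy as np

# Illustrative examples of the kind of small helper functions described
# above; names and signatures are not the project's actual API.
def circular_orbital_speed(mu, r):
    """Speed of a circular orbit of radius r around a body with
    gravitational parameter mu = G*M."""
    return np.sqrt(mu / r)

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical coordinates (radius r, polar angle theta,
    azimuth phi) to Cartesian coordinates (x, y, z)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z
\end{verbatim}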
The whole point of re-implementing these functions in Mathematica is that \textbf{Mathematica provides nice visual formatting that allows mathematical equations to be pretty much identical to the complicated mathematical formulae written by hand or in LaTeX}. Thus we can easily visually compare the implemented equations in Mathematica with those in handwriting / LaTeX, and see that they match up.

To give an example, here in \cref{fig:unit-testing-sympletic-euler-step-python-1,fig:unit-testing-sympletic-euler-step-python-2} is the implementation of the \texttt{symplectic\_euler\_step()} function in Python:

\begin{figure}[H]
    \centering
    \includegraphics[width=0.90\linewidth]{fig/unit-testing-sympletic-euler-step-python-1.png}
    \caption{The main function of \texttt{symplectic\_euler\_step()} in Python.}
    \label{fig:unit-testing-sympletic-euler-step-python-1}
\end{figure}

\begin{figure}[H]
    \centering
    \includegraphics[width=0.90\linewidth]{fig/unit-testing-sympletic-euler-step-python-2.png}
    \caption{Function \texttt{get\_Bdot\_R()}, which is just one of the sub-routines of \texttt{symplectic\_euler\_step()}, in Python.}
    \label{fig:unit-testing-sympletic-euler-step-python-2}
\end{figure}

And here in \cref{fig:unit-testing-sympletic-euler-step-mathematica} is the same function in Mathematica:

\begin{figure}[H]
    \centering
    \includegraphics[width=1.0\linewidth]{fig/unit-testing-sympletic-euler-step-mathematica.pdf}
    \caption{The same function \texttt{symplecticEulerStep()} in Mathematica, which is obviously way more compact and readable than the Python counterpart.}
    \label{fig:unit-testing-sympletic-euler-step-mathematica}
\end{figure}

The point of the comparison of \cref{fig:unit-testing-sympletic-euler-step-python-1,fig:unit-testing-sympletic-euler-step-python-2,fig:unit-testing-sympletic-euler-step-mathematica} above is the \emph{compactness} and \emph{readability} of the Mathematica version, thanks in large part to the 2D input syntax; Mathematica is simply a language built for handling mathematics really well. On the other hand, Mathematica doesn't scale well to larger and more complicated programs, running on GPUs, using version control, etc., which is where Python excels. But it does mean that Mathematica is a great application for testing your Python code -- or rather, for providing test values that the Python code can be tested against.

\subsection{The Python-Mathematica Testing Workflow}
The testing workflow can be described as follows:
\begin{enumerate}
    \item \textbf{Write some piece of functionality as a function in Python.} It could be a mathematical formula, an algorithm or just a data-processing function.
    \item \textbf{Re-implement the same functionality in Mathematica.} Ideally, one can use a built-in function if possible, such as when converting between spherical coordinates and Cartesian coordinates. The benefit of Mathematica over Python is the aforementioned 2D display of input, which makes mathematical expressions much more compact and readable compared to Python code.
    \item \textbf{Export the test results (JSON) of various pairs of input/output.} Depending on the function, the list of inputs can be a mix of random inputs, interesting edge-case inputs and invalid inputs. For functions with many input arguments, Mathematica can vary each input according to a list of numbers, and then generate all possible combinations of input arguments. \texttt{JSON} stands for JavaScript Object Notation, and is a standard file format for storing structured data. In this case, we used it for storing pairs of input and output for various functions.
    \item \textbf{Import the test results (JSON) in a Python test script.} We used a Python test framework called pytest to set up automated tests that imported the test data, and ran the test ``Output from Python-version-of-function(input) == Output from Mathematica-version-of-function(input)''.
    \item \textbf{Run tests using pytest and Coverage.py.} Coverage.py is a tool for measuring the \emph{code coverage} of our Python program, meaning it can show which parts of the code are executed when running all unit tests, both percentage-wise at an overall level, at file level, and even line-by-line within code files; see \cref{fig:Coverage-py,fig:Coverage-py-file}. This was useful for systematically ensuring that all critical parts of the 3D simulator had been tested, and allowed everyone to directly see the progress in the debugging process.
\end{enumerate}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.90\linewidth]{fig/Coverage-py.png}
    \caption{Coverage.py generates an HTML file showing the \emph{code coverage} of the testing suite on a subset of the code. When we reached 70\% coverage, we were satisfied with the degree of certainty about the correctness of the critical parts of the code, and could verify that the 3D simulator now gave the expected results in various scenarios, so we stopped testing at that point.}
    \label{fig:Coverage-py}
\end{figure}

\begin{figure}[ht]
    \centering
    \includegraphics[width=0.90\linewidth]{fig/Coverage-py-file.png}
    \caption{Clicking a file shown in the previous \cref{fig:Coverage-py} shows details of which lines have test coverage in the given file. Here, no test covers the case of $v > 180$ in the function \texttt{radians\_in\_domain()}.}
    \label{fig:Coverage-py-file}
\end{figure}

During the debugging process we found a $+$-sign and a $*$-sign that were mixed up, and a parenthesis that was closed in the wrong place but was still syntactically correct. Following this, the 3D simulator started to work as expected during basic tests of closed circular orbits, etc.
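To make the workflow above concrete, a minimal sketch of what one of these pytest tests might look like is shown below. The file name, JSON layout, and tested function are illustrative assumptions rather than the project's actual test suite.

\begin{verbatim}
import json
import numpy as np
import pytest

# Sketch of the Mathematica-to-pytest workflow; the JSON layout, file
# name, and imported function are illustrative assumptions only.
from simulator import circular_orbital_speed  # hypothetical module

with open("tests/data/circular_orbital_speed.json") as f:
    CASES = json.load(f)  # e.g. [{"input": [mu, r], "output": v}, ...]

@pytest.mark.parametrize("case", CASES)
def test_matches_mathematica(case):
    mu, r = case["input"]
    expected = case["output"]       # value exported from Mathematica
    assert np.isclose(circular_orbital_speed(mu, r), expected)
\end{verbatim}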
{ "alphanum_fraction": 0.7996676863, "avg_line_length": 162.4601226994, "ext": "tex", "hexsha": "2a20418fc86957d9a348cedf5879738b28c0b326", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "GandalfSaxe/letomes", "max_forks_repo_path": "report/chapters/5-Software-Engineering.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "GandalfSaxe/letomes", "max_issues_repo_path": "report/chapters/5-Software-Engineering.tex", "max_line_length": 1445, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f73a4066fcf69260cb538c105acf898b22e756d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "GandalfSaxe/letomes", "max_stars_repo_path": "report/chapters/5-Software-Engineering.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5631, "size": 26481 }
%---------------------------Maximum Aspect Frobenius--------------------------- \section{Maximum Aspect Frobenius} For hexahedra, there is not a unique definition of the aspect Frobenius. Instead, we use the aspect Frobenius defined for tetrahedra (see section~\S\ref{s:tet-aspect-Frobenius}), but choose the reference $W$ element to be right isosceles at the hexahedral corner. Consider the eight tetrahedra formed by edges incident to the corner of a hexahedron. Given a corner vertex $i$ and its three adjacent vertices $j$, $k$, and $\ell$ ordered in a clockwise manner (so that $ijk\ell$ is a positively oriented tetrahedron), denote the tetrahedral aspect frobenius of that corner as $F_{ijk\ell}$. To obtain a single value for the metric, we take the maximum value of the eight unique tetrahedral aspects \[ q = \max\left(F_{0134}, F_{1205}, F_{2316}, F_{3027}, F_{4750}, F_{5461}, F_{6572}, F_{7643} \right). \] In the past, this metric was called the condition number and computed in terms of the Jacobian matrices $A_i$ and their determinants $\alpha_i$ as in \S\ref{s:hex}. We provide that method of computation below for reference purposes. First, define \[ \kappa(A_i) = \left|A_i\right| \left|A_i^{-1}\right| = \frac {\left|A_i\right| \left|\mathrm{adj}(A_i)\right|}{\alpha_i}. \] There are 9 of these matrices and we evaluate the condition number at each and take a third of the maximum: \[ q = \frac {1}{3} \max\left\{ \kappa(A_0), \kappa(A_1), \ldots, \kappa(A_8) \right\} \] The first 8 matrices represent the condition at the corners and the last represents the condition number at the element's center. Note that if $\alpha_i \leq DBL\_MIN$, for any $i$, then $q = DBL\_MAX$. \hexmetrictable{maximum aspect frobenius}% {$1$}% Dimension {$[1,3]$}% Acceptable range {$[1,DBL\_MAX]$}% Normal range {$[1,DBL\_MAX]$}% Full range {$1$}% Cube {\cite{knu:00}}% Citation {v\_hex\_max\_aspect\_frobenius \textnormal{or} % v\_hex\_condition$^*$}% Verdict function name \noindent\,$^*$ indicates a function that is deprecated and may be removed in future versions of \verd.
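For reference, the legacy computation described above can be sketched in a few lines of NumPy-style pseudocode; the function and variable names below are illustrative only and do not correspond to the actual \verd\ implementation.

\begin{verbatim}
import numpy as np

# Illustrative sketch of the legacy condition-number computation described
# above; names are examples, not the library's API.
DBL_MIN = np.finfo(float).tiny
DBL_MAX = np.finfo(float).max

def kappa(A):
    """Frobenius condition number of a 3x3 Jacobian matrix A, or DBL_MAX
    if its determinant alpha is not safely positive."""
    alpha = np.linalg.det(A)
    if alpha <= DBL_MIN:
        return DBL_MAX
    return np.linalg.norm(A, "fro") * np.linalg.norm(np.linalg.inv(A), "fro")

def hex_condition(jacobians):
    """q = (1/3) * max kappa(A_i) over the nine Jacobian matrices
    (eight corners plus the element center)."""
    kappas = [kappa(A) for A in jacobians]
    if any(k == DBL_MAX for k in kappas):
        return DBL_MAX
    return max(kappas) / 3.0
\end{verbatim}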
{ "alphanum_fraction": 0.6443388073, "avg_line_length": 49.2340425532, "ext": "tex", "hexsha": "89bf36136dc75a5c513125850559b120b507bd8f", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexMaxAspectFrobenius.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexMaxAspectFrobenius.tex", "max_line_length": 107, "max_stars_count": 8, "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexMaxAspectFrobenius.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "num_tokens": 642, "size": 2314 }
\section{Heterogeneous Systems} \noindent Now that we know how to solve the ``easy case'' of homogeneous systems of linear differential equations, we can tackle the general case of heterogeneous systems. Much like for single equations, we'll look for a homogeneous solution and particular solution, and we'll use undetermined coefficients or variation of parameters to find the particular solution. These methods will then lead into seeing how systems of first order equations relate to single higher order equations. % Undetermined coefficients \input{./linearSystems/heterogeneousSystems/undeterminedCoefficients.tex} % Variation of Parameters \input{./linearSystems/heterogeneousSystems/variationOfParameters.tex}
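In summary, for a heterogeneous linear system \(\vec{x}' = A\vec{x} + \vec{f}(t)\), the general solution is the sum of the homogeneous and particular solutions,
\[
\vec{x}(t) = \vec{x}_h(t) + \vec{x}_p(t),
\]
and once a fundamental matrix \(X(t)\) of the homogeneous system is known, variation of parameters yields a particular solution
\[
\vec{x}_p(t) = X(t)\int X(t)^{-1}\vec{f}(t)\,dt
\]
(the notation here may differ slightly from that used in the subsections above).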
{ "alphanum_fraction": 0.8236914601, "avg_line_length": 72.6, "ext": "tex", "hexsha": "72fc7263a4f64e540b5f625a5a59425c71db8a3e", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_issues_count": 26, "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_line_length": 195, "max_stars_count": 39, "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "num_tokens": 139, "size": 726 }
\documentclass[letterpaper]{article}

\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{listings}
\usepackage[hidelinks]{hyperref}

\hypersetup{colorlinks, allcolors=blue}

\lstset
{
    breaklines=true,
    postbreak=\mbox{\textcolor{red}{$\hookrightarrow$}\space},
    basicstyle=\ttfamily,
    numbers=left,
    numberstyle=\normalsize,
    numbersep=10pt,
    frame=single,
}

\setlength\parindent{0pt}

\begin{document}

\pagenumbering{gobble}
\vspace*{\fill}
\begin{center}
    \Large Documentation for Financial Transactions HTML Page

    \large Jason N.

    April 26, 2020
\end{center}
\vspace*{\fill}
\newpage

\pagenumbering{roman}
\tableofcontents
\newpage

\pagenumbering{arabic}
\parskip 10pt

\section{Foreword}
Some of the code samples in this document were copied by hand. If there are any discrepancies between code in this document and in the source files, refer to the source files. This does not apply to the appendix. Code in the appendix was generated directly from the source files.

\section{HTML}\label{HTML}
\subsection{Preamble and head}
This line declares that the document is an HTML5 document.
\begin{lstlisting}[firstnumber=1]
<!DOCTYPE html>
\end{lstlisting}

\lstinline{<head>} tags are used to contain meta information about the document.
\begin{lstlisting}[firstnumber=2]
<head>
    <meta charset = "UTF-8"/>
    <link rel="stylesheet" type="text/css" href="./style.css"/>
    <script src="./script.js"></script>
</head>
\end{lstlisting}

Within the \lstinline{head} element:
\begin{itemize}
    \item The first line defines the character set of the document.
    \item The second line defines the source of an external CSS document.
    \item The third line defines the source of an external Javascript document.
\end{itemize}

\subsection{Inputs}
The input section of this page is contained within \lstinline{<article>} tags for the purpose of organisation. This can be used to facilitate styling this part of the page with CSS if desired.
\begin{lstlisting}[firstnumber=10]
<article id="inputFields">
\end{lstlisting}
The \lstinline{article} element has been assigned a unique id for the purpose of styling. Specifically, this id is used to define padding and overflow. This is described in further detail in section~\ref{overflow-x} of this document.

All input fields and buttons are contained within \lstinline{<form>} tags. Although this is not strictly necessary for the purpose of this project, it is useful for organising data and specifying the fields from which data should be submitted.
\begin{lstlisting}[firstnumber=11]
<form onsubmit="return false" autocomplete="off">
\end{lstlisting}
The attribute \lstinline{onsubmit} is used to define a Javascript function to be executed when the form is submitted. The form expects that \lstinline{true} is returned when data is successfully submitted. If so, the default behaviour is to clear the fields and enter the data in the browser URL bar as arguments. To prevent this behaviour, \lstinline{onsubmit} is set to \lstinline{return false}.

The attribute \lstinline{autocomplete} can be used to specify whether user input from a previous session should be used to populate input fields. This attribute also determines whether or not suggestions are displayed when the user enters data. In this case, \lstinline{autocomplete} has been set to \lstinline{off} to prevent these actions from occurring. This does not affect the functionality of the program.

The buttons and input fields within the \lstinline{form} element are contained within \lstinline{<section>} tags for organisation.
This is primarily done to allow elements to be positioned properly by the CSS file.

\subsubsection{Common attributes}
All \lstinline{input} elements in this \lstinline{form} have been assigned a \lstinline{name} attribute. The \lstinline{name} attribute is not strictly relevant in this case, but is often used to identify the data when submitting to a database.

All \lstinline{input} elements have the \lstinline{required} attribute. Normally this prevents a \lstinline{form} from being submitted unless all \lstinline{required} fields contain data. This does not apply to our case, as we have disabled the built-in submit function. However, it does still outline missing fields in red.

\subsubsection{Labels}
Each of the inputs is given a label to specify to a user the type of information which should be entered in the given field. This is done with the \lstinline{label} element.
\begin{lstlisting}[firstnumber=12]
<label for="date">Date:</label><br/>
\end{lstlisting}
The \lstinline{for} attribute is used to specify an element which corresponds to this label. This is done by setting the attribute to the id of the other element. Labels allow a user to select an input field by clicking the label rather than the field itself. Labels are also used to facilitate the use of assistive technologies.

\subsubsection{Date}
The date of a transaction is specified through the use of an \lstinline{input} element with a \lstinline{type} attribute of \lstinline{date}. This can be used to effectively restrict the input to a valid date format and provides an intuitive method for inputting data.
\begin{lstlisting}[firstnumber=11]
<section>
    <label for="date">Date:</label><br/>
    <input id="date" name="date" type="date" required/>
</section>
\end{lstlisting}
This type of input field is also useful for interpreting dates in Javascript, as it provides methods which return the date in various formats to facilitate displaying and comparing dates.

\subsubsection{Text}
\lstinline{input} elements with a \lstinline{type} attribute of \lstinline{text} can be used to retrieve a string from a user. This is also the field used for numbers, as these can be easily verified and converted in Javascript.
\begin{lstlisting}[firstnumber=16]
<section>
    <label for="account">Account Number:</label><br/>
    <input id="account" name="account" type="text" placeholder="Account Number" required/>
</section>
\end{lstlisting}
The advantage of taking numbers from an input field is that it allows for characters such as \$ to be included. In the case of this project, users are able to submit Dollar Amounts as purely numeric values, or in a currency format. Currently, the program only accepts dollars as a currency; however, it is possible to allow and store any number of currencies. These characters, of course, have to be filtered out before the number is interpreted and re-inserted before displaying the value.

\subsubsection{List}
Dropdown lists are created using \lstinline{<select>} tags containing \lstinline{option} elements. Each \lstinline{option} element represents a possible value; the first element is selected by default.
\begin{lstlisting}[firstnumber=21]
<section>
    <label for="type">Transaction Type:</label><br/>
    <select id="type" name="type">
        <option value=""></option>
        <option value="BUY">BUY</option>
        <option value="SELL">SELL</option>
        <option value="DIVIDEND">DIVIDEND</option>
        <option value="INTEREST">INTEREST</option>
        <option value="WITHDRAW">WITHDRAW</option>
        <option value="DEPOSIT">DEPOSIT</option>
    </select>
</section>
\end{lstlisting}
The \lstinline{innerHTML} of an \lstinline{option} element is the text that will be displayed to the user. The \lstinline{value} attribute of the element is the value that will be read by Javascript. For this project, the \lstinline{value} and \lstinline{innerHTML} were made to be identical so that the text in the table would be the same as the text the user had seen in the list.

\subsubsection{Buttons}
\lstinline{button} elements are clickable elements which can execute Javascript code specified by an \lstinline{onclick} attribute. Text within the \lstinline{innerHTML} of the \lstinline{button} will be displayed as text within the button, which is useful for communicating the purpose of the button.
\begin{lstlisting}[firstnumber=49]
<section>
    <button id="add" type="submit" onclick="addTransactionButton();">Add Transaction</button>
    <button id="save" type="submit" hidden="true" onclick="saveChanges();">Save</button>
    <button id="discard" type="button" hidden="true" onclick="discardChanges();">Discard</button>
</section>
\end{lstlisting}
In this case, three buttons are present, each set to execute a different Javascript function when clicked.

Two of the three buttons have a \lstinline{type} attribute of \lstinline{submit}. This causes each of these buttons to trigger the \lstinline{submit} event in addition to calling its Javascript function. However, for this project, this event has been disabled by the \lstinline{form} \lstinline{onsubmit="return false"} attribute. Thus, the only difference is that this causes missing fields to be outlined in red when the button is pressed.

The last button is of \lstinline{type} \lstinline{button}. This button functions in exactly the same way, except that it does not trigger the \lstinline{submit} event. For this project, this means that missing fields will not be highlighted in red, as this is not necessary for the `Discard Changes' button.

Two of the three buttons also have the \lstinline{hidden="true"} attribute. This causes the page to render as if these elements did not exist, as these elements are only relevant when editing a row. All three buttons are given unique ids so that the \lstinline{hidden} attributes can be added or removed as needed.

\subsection{Table}
\subsubsection{thead}
The header of the table is enclosed in \lstinline{<thead>} tags. This element includes the first row of the table, denoted by \lstinline{<tr>} tags, which contains the headers for each column.

Every cell in the header is denoted by \lstinline{<th>} tags. These cells differ from normal cells, such as those in the body of the table, in how they format their contents. Using this element for header cells makes them stand out slightly, as well as making them easier to differentiate when styling with CSS.
\begin{lstlisting}[firstnumber=61]
<th>
    <section>
        Transaction ID
    </section>
    <section class="sort">
        <button type="button" onclick="sortTable(0, true)">^</button>
        <button type="button" onclick="sortTable(0, false)">V</button>
    </section>
</th>
\end{lstlisting}
The first 8 header cells are split into two separate \lstinline{section} elements.
This was done to allow for the proper positioning of the header text and the sort buttons. For this reason, the latter \lstinline{section} element is given the class \lstinline{sort} to differentiate between the two.

Each of the first 8 header cells contains two buttons for sorting. All sorting buttons call the same function, \lstinline{sortTable(column, ascending)}; however, they pass different arguments to this function. The first argument is the column number, starting from 0, which allows the Javascript function to determine which column to use when comparing rows. The second argument defines whether data should be sorted in ascending or descending order.

The last header cell contains nothing but text. This column is used to contain the delete and edit buttons created for each row.
\begin{lstlisting}[firstnumber=133]
<th>Actions</th>
\end{lstlisting}

\subsubsection{tbody}
The table body is enclosed in \lstinline{<tbody>} tags. This element is meant to be the main container of data in a table.
\begin{lstlisting}[firstnumber=136]
<tbody id="tableBody">
\end{lstlisting}
The table body is important for this project as it is the parent element of all data which will be manipulated. For this reason, it has been given a unique id to reference in Javascript. This was not strictly necessary, as it is also possible to reference this element by its tag name, it being the only \lstinline{tbody} element. Nevertheless, I consider this to be good practice, as it makes clear which element is being referred to in Javascript and allows for other tables to be added in the future if necessary without breaking the current functionality.

\newpage
\section{Javascript}\label{JS}
The following section describes all Javascript functions used in this project. Functions have been grouped according to their purpose; some functions have been omitted for being too similar to other functions. Each section also contains a comparison to an equivalent function from the Google Sheets project.

\subsection{getData()}
This function is used to retrieve and format data from the input fields.
\begin{lstlisting}[firstnumber=1]
function getData() {
    var date = document.getElementById("date");
    var account = document.getElementById("account").value;
    var type = document.getElementById("type").value;
    var security = document.getElementById("security").value;
    var amount = document.getElementById("amount").value;
    var dAmount = document.getElementById("dAmount").value;

    amount = Number(amount);

    if(dAmount[0] == '$') {
        dAmount = dAmount.substr(1);
    }
    dAmount = Number(dAmount);

    if(validate(date, account, type, security, amount, dAmount)) {
        var costBasis = calculateCostBasis(amount, dAmount);
        date = date.value;
        dAmount = '$' + dAmount.toFixed(2);
        return [
            date, account, type, security, amount, dAmount, costBasis
        ];
    }
    else return false;
}
\end{lstlisting}
The function checks whether the data is valid by calling the \lstinline{validate()} function. If so, the data is formatted and sent to the function which called \lstinline{getData()}. Currently, the caller is either \lstinline{addTransactionButton()} or \lstinline{saveChanges()}.

The function first stores the \lstinline{date} element and the raw values of the other input fields. \lstinline{date} is treated differently, as the element includes useful methods for comparing the date in different formats.
\begin{lstlisting}[firstnumber=2] var date = document.getElementById("date"); var account = document.getElementById("account").value; var type = document.getElementById("type").value; var security = document.getElementById("security").value; var amount = document.getElementById("amount").value; var dAmount = document.getElementById("dAmount").value; \end{lstlisting} Next, some of the data is processed. \lstinline{amount} is converted from a string, as it originated from a text field, to a number. This is done with the built-in \lstinline{Number()} function, which takes a string as an argument and returns it as a numeric value when possible. If the argument cannot be converted, the function returns \lstinline{NaN} or `Not a Number'. We are not concerned with validating that the value can be converted at this stage, as we can simply check if the value is \lstinline{NaN} during the validation stage, therefore it is safe to convert to a number here. \begin{lstlisting}[firstnumber=9] amount = Number(amount); \end{lstlisting} A similar conversion is performed on the \lstinline{dAmount} value. However, before this occurs, we check whether the first character in the string is a dollar sign. If so, we remove the dollar sign by taking a substring of \lstinline{dAmount} which includes everything including and after the second character. This effectively removes the dollar sign from the string, allowing it to be converted to a numeric value. \begin{lstlisting}[firstnumber=9] if(dAmount[0] == '$') { dAmount = dAmount.substr(1); } dAmount = Number(dAmount); \end{lstlisting} The function then calls \lstinline{validate} and passes all the stored variables as arguments to determine whether all the data is valid. If not, the function will return \lstinline{false} and exit, preventing subsequent steps from occuring. \begin{lstlisting}[firstnumber=16] if(validate(date, account, type, security, amount, dAmount)) { var costBasis = calculateCostBasis(amount, dAmount); date = date.value; dAmount = '$' + dAmount.toFixed(2); return [ date, account, type, security, amount, dAmount, costBasis ]; } else return false; \end{lstlisting} If all data is valid, the function calculates and stores the costBasis by calling \lstinline{calculateCostBasis()} and passing the necessary values. The function also formats the date and dollar amount in the correct formats to be exported to the table. The \lstinline{toFixed()} method is used to fix the value to 2 decimal places, and a dollar sign is added to the front of the value. Lastly, the function returns a list including all the data to the caller. \textbf{Comparison to Google Sheets project} There is no equivalent function in the Google Sheets project. The \lstinline{getData()} function is required to store input values in memory. Google Apps Script had a built-in function to move or copy cells and did not require most values to be stored like this. \subsection{validate()} The \lstinline{validate()} function is used to verify that all fields include valid data. \begin{lstlisting}[firstnumber=26] function validate(date, account, type, security, amount, dAmount) { if(!validateDate(date)) return false; if(!validateAccount(account)) return false; if(!validateType(type)) return false; if(!validateSecurity(security)) return false; if(!validateAmount(amount)) return false; if(!validateDAmount(dAmount)) return false; return true; } \end{lstlisting} The function calls several functions, each of which validates a different input field. 
If any of the calls returns false, this function returns false. If none of the calls returned false, the function returns true, allowing the caller to proceed.

\textbf{Comparison to Google Sheets project}

The function uses the same method as the Google Sheets project for validating data, by calling different functions which return \lstinline{true} or \lstinline{false}. The difference here is that the data is taken as arguments and passed to the validating functions, as this data is no longer read from the sheet. The return values of this function have also been standardised such that \lstinline{false} always indicates an invalid value; this is done mostly for readability.

\subsubsection{Check empty}
In cases where the only check necessary is that the field is not empty, the function simply compares the value to an empty string. If the value is equal to an empty string, the function prints an alert and returns false; otherwise it returns true.
\begin{lstlisting}[firstnumber=54]
function validateAccount(account) {
    if(account == '') {
        alert('Error: Missing Account Number');
        return false;
    }
    return true;
}
\end{lstlisting}
This is the template used to check account, transaction type, and security, as these are all strings. Although the transaction type field is not a text box, by setting the default empty option to have a value of an empty string, this template still applies.

\textbf{Comparison to Google Sheets project}

The check for an empty field is now done by the same function that checks that a field is valid. This was done to better organise the validation process and allow functions to be modified more easily. The Google Sheets project used the \lstinline{isBlank()} method of a cell, as no data was stored and passed to it. This project does pass values to the function; therefore, the check can be simplified by comparing the value to an empty string.

\subsubsection{Check NaN}
The function to check whether or not a value is a number is identical to the functions that check only for empty values, except for one key difference. In addition to checking if the value is empty, the function checks if the value is \lstinline{NaN}. This is done using the \lstinline{isNaN()} function, which takes a value as an argument and returns true if the value is \lstinline{NaN}.
\begin{lstlisting}[firstnumber=87]
if(isNaN(amount)) {
    alert('Error: Invalid Amount');
    return false;
}
\end{lstlisting}

\textbf{Comparison to Google Sheets project}

This check was not performed in the Google Sheets project; however, if one were to validate a number in the Google Sheets project, one would check that the value was a numeric type, similar to how the date was validated in Google Sheets. In this case, the number can be validated much more simply, by checking whether or not the conversion was successful.

\subsubsection{Check date}
Currently, a valid date is a date that is not in the future. In order to check this, the function gets the current date by creating a new \lstinline{Date} object and storing it as a variable. The function also stores the value of the \lstinline{date} element in a number format which can be easily compared. This is done by storing the \lstinline{valueAsNumber} property of the element.
\begin{lstlisting}[firstnumber=37]
function validateDate(date) {
    realDate = new Date();
    inputDate = date.valueAsNumber;

    if(!date.checkValidity()) {
        alert('Error: Invalid date');
        return false;
    }
    if(realDate.valueOf() < inputDate) {
        alert('Error: Date is in the future');
        return false;
    }
    return true;
}
\end{lstlisting}
The first check performed is whether or not the user has entered a date that exists. If the date field was left empty or incomplete, or the date is non-existent (e.g.\ November 31), the \lstinline{date.checkValidity()} function will return \lstinline{false}. Therefore, we can reuse the statement that checks whether or not a field contains a valid number.

Next, the function must check that the date is not in the future. This is done by simply comparing the dates in number format, with a greater value indicating a later date.

\textbf{Comparison to Google Sheets project}

In Google Sheets, a date was validated by checking that it was of the \lstinline{[object Date]} type. In this project, we can be confident that the object is of the correct type, as it was created by a specific type of input; therefore, it is not necessary to validate the type.

To check that the date was not in the future, the Google Sheets project directly compared two date objects. This was not done for the current project, as we have two different types of data: an element and a \lstinline{Date} object. In order to compare these, both are converted to the same numeric format.

\subsection{generateId()}
A unique id is generated with the length specified by \lstinline{idLength} and with characters taken from the set specified by \lstinline{characters}.
\begin{lstlisting}[firstnumber=109]
function generateId() {
    var id = '';
    var idLength = 6;
    var characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
    var charactersLength = characters.length;
    var unique = false;

    while(!unique) {

        for(var i = 0; i < idLength; i++) {
            id += characters.charAt(Math.floor(Math.random() * charactersLength));
        }

        unique = true;
        for(var i = 0; i < document.getElementsByClassName('idCell').length; i++) {
            if(document.getElementsByClassName('idCell')[i].innerText == id) {
                unique = false;
                break;
            }
        }
    }
    return id;
}
\end{lstlisting}
The \lstinline{unique} variable is used to store whether the generated id is unique. A \lstinline{while} loop continuously generates and checks ids until a unique one is found.
\begin{lstlisting}[firstnumber=119]
for(var i = 0; i < idLength; i++) {
    id += characters.charAt(Math.floor(Math.random() * charactersLength));
}
\end{lstlisting}
For every character in the id, a random character is chosen from the character set. A number between 0 and the number of possible characters is generated by \lstinline{Math.random() * charactersLength}, as \lstinline{Math.random()} generates a number $ 0 \leq n < 1 $. This number is converted to an integer by rounding down using the \lstinline{Math.floor()} function. This integer is then used as the index from which to take a character using the \lstinline{charAt()} method.
\begin{lstlisting}[firstnumber=123]
unique = true;
for(var i = 0; i < document.getElementsByClassName('idCell').length; i++) {
    if(document.getElementsByClassName('idCell')[i].innerText == id) {
        unique = false;
        break;
    }
}
\end{lstlisting}
Every generated id is initially assumed to be unique. The function loops through every element with a class of \lstinline{'idCell'}, comparing the \lstinline{innerText} of each such cell to the generated id. If the values match, the id is not unique and another id must be generated.
If no matching id has been found, the id is considered unique and the loop exits, passing the id to the caller.

\textbf{Comparison to Google Sheets project}

The process for generating an id is exactly the same as in the Google Sheets project. The id is compared to the \lstinline{innerText} of cells exactly as the id was compared to other cells in the first column of the Google Sheet. The two processes are essentially identical, except the process is much faster outside of Google Sheets.

\subsection{calculateCostBasis()}
The function receives two numeric values as arguments, calculates the quotient of the two, and formats the result as a dollar value.
\begin{lstlisting}[firstnumber=134]
function calculateCostBasis(amount, dAmount) {
    costBasis = '$' + (dAmount / amount).toFixed(2);
    return costBasis;
}
\end{lstlisting}
The \lstinline{toFixed()} method is used to fix the value to 2 decimal places, and a dollar sign is added to the front of the value.

\textbf{Comparison to Google Sheets project}

This function is essentially the same as that of the Google Sheets project, except the data is passed to this function, rather than being read from the sheet. Another minor difference is in how data is formatted, as Google Apps Script formatted data by setting a number format, while that format does not exist in pure Javascript and must be set manually.

\subsection{addTransaction()}
The process of adding a transaction is initiated when the \lstinline{addTransactionButton()} function is called. This function calls the \lstinline{getData()} function and stores the result. If the result is not \lstinline{false}, the function generates a unique id for the transaction and adds this to the front of the \lstinline{data} array using the \lstinline{unshift()} method. The function then passes the \lstinline{data} array to the \lstinline{addTransaction()} function, which creates and populates the row.
\begin{lstlisting}[firstnumber=156]
function addTransactionButton() {
    var data = getData();
    if(data) {
        var id = generateId();
        data.unshift(id);
        addTransaction(data);
    }
}
\end{lstlisting}
The \lstinline{addTransaction} function gets all necessary data as an array argument. It stores the body of the table in a variable by referencing the element using \lstinline{document.getElementById('tableBody')}. A new row is created using the \lstinline{insertRow()} method of the table body and stored as a variable so that contents can be added. The row is assigned a class of \lstinline{bodyRow} for reference by other functions or styling.
\begin{lstlisting}[firstnumber=139]
function addTransaction(data) {
    var tableBody = document.getElementById('tableBody');
    var newRow = tableBody.insertRow(0);
    newRow.classList += "bodyRow";

    var actionsContent = "<button type='button' onclick='editRow(this)'>Edit</button> <button type='button' onclick='deleteRow(this)'>Delete</button>";
    data.push(actionsContent);

    for(var i = 0; i < data.length; i++) {
        var newCell = newRow.insertCell(i);
        newCell.innerHTML = data[i];
        if(i == 0) {
            newCell.classList += "idCell";
        }
    }
}
\end{lstlisting}
The function creates a string containing HTML code for a delete and an edit button and pushes this to the end of the \lstinline{data} array. The \lstinline{onclick} functions pass \lstinline{this} as an argument, which specifies the element that called the function. This is done so that the row to edit or delete can be selected.
\begin{lstlisting}[firstnumber=144]
var actionsContent = "<button type='button' onclick='editRow(this)'>Edit</button> <button type='button' onclick='deleteRow(this)'>Delete</button>";
data.push(actionsContent);
\end{lstlisting}
For each item in the array, the function calls the \lstinline{insertCell()} method of the row to create a new cell. The contents of this cell are defined by the \lstinline{innerHTML} property, which is set to the corresponding element of the array. For the first element, the cell is also given a special class to identify it as the cell containing the transaction id.
\begin{lstlisting}[firstnumber=148]
var newCell = newRow.insertCell(i);
newCell.innerHTML = data[i];
if(i == 0) {
    newCell.classList += "idCell";
}
\end{lstlisting}

\textbf{Comparison to Google Sheets project}

The Google Sheets project copied data from the input range to the output range, whereas this function stores the values in memory and then writes the new cells. As the table is within a self-contained element, the more complicated process of moving all contents to a separate area is not required. Therefore, the process of adding a row is significantly simpler in HTML and JS.

\subsection{deleteRow()}
The function takes an element as an argument; this element is supposed to be the button which called the function. The function gets the parent of the parent of the delete button and removes this element. The first parent of the delete button is the cell; the second parent is the row. By selecting the second parent, we are selecting the row containing the button. The function uses the \lstinline{removeChild()} method of the table body to remove the specified row element.
\begin{lstlisting}[firstnumber=166]
function deleteRow(button) {
    var row = button.parentElement.parentElement;
    document.getElementById("tableBody").removeChild(row);

    if(document.getElementsByClassName('editing').length == 0) {
        document.getElementById('add').removeAttribute('hidden');
        document.getElementById('save').setAttribute('hidden', true);
        document.getElementById('discard').setAttribute('hidden', true);
    }
}
\end{lstlisting}
It is possible that the delete button is used on the row being edited. If this were the case, the save and discard buttons would remain visible, although there would be no row being edited. To resolve this, the function checks whether any rows with the class \lstinline{'editing'} remain. If none are found, the function hides the save and discard buttons and unhides the add button. This check is performed on every deletion, but it only has a visible effect when the row being deleted was the one being edited.

\textbf{Comparison to Google Sheets project}

Deleting is far simpler, as it is not necessary to transfer the rows to a separate sheet. HTML and JS also allow for a new button to be easily created for every row. The Google Sheets project did not hide and show buttons depending on whether or not a row was being edited; however, this is done here, as the process is much simpler. If one were to do this in Google Sheets, the buttons would have to be indexed beforehand so they could be referenced in code.

\subsection{editRow()}
This function is responsible for highlighting the row being edited and moving the values in the row to the input fields.
\begin{lstlisting}[firstnumber=177]
function editRow(button) {
    if(document.getElementsByClassName('editing').length > 0)
        document.getElementsByClassName('editing')[0].classList = "bodyRow";

    var row = button.parentElement.parentElement;
    var rowContent = row.getElementsByTagName('td');
    row.classList = "bodyRow editing";

    document.getElementById('date').value = rowContent[1].innerText;
    document.getElementById('account').value = rowContent[2].innerText;
    document.getElementById('type').value = rowContent[3].innerText;
    document.getElementById('security').value = rowContent[4].innerText;
    document.getElementById('amount').value = rowContent[5].innerText;
    document.getElementById('dAmount').value = rowContent[6].innerText;

    document.getElementById('add').setAttribute('hidden', true);
    document.getElementById('save').removeAttribute('hidden');
    document.getElementById('discard').removeAttribute('hidden');
}
\end{lstlisting}
First, the function checks whether or not there is a row already highlighted for editing. This is done by getting a list of all elements with the \lstinline{'editing'} class. If there is such an element, its class is reset to \lstinline{'bodyRow'}. This is done to ensure that at most one row is being edited at a time.
\begin{lstlisting}[firstnumber=178]
if(document.getElementsByClassName('editing').length > 0)
    document.getElementsByClassName('editing')[0].classList = "bodyRow";
\end{lstlisting}
The function uses the same method as the \lstinline{deleteRow()} function to get the parent row of the button. The function finds the second parent of the calling button and stores this as a variable for reference. The cells in this row are stored in an array by getting all children of the row with the \lstinline{td} tag. The row is changed to have the classes \lstinline{bodyRow} and \lstinline{editing}. Each input field is then identified using its id and given the value of the \lstinline{innerText} of the corresponding cell.
\begin{lstlisting}[firstnumber=181]
var row = button.parentElement.parentElement;
var rowContent = row.getElementsByTagName('td');
row.classList = "bodyRow editing";

document.getElementById('date').value = rowContent[1].innerText;
document.getElementById('account').value = rowContent[2].innerText;
document.getElementById('type').value = rowContent[3].innerText;
document.getElementById('security').value = rowContent[4].innerText;
document.getElementById('amount').value = rowContent[5].innerText;
document.getElementById('dAmount').value = rowContent[6].innerText;
\end{lstlisting}
Lastly, the add button is hidden by assigning it a \lstinline{hidden} attribute. The save and discard buttons are revealed by removing their \lstinline{hidden} attributes.
\begin{lstlisting}[firstnumber=192]
document.getElementById('add').setAttribute('hidden', true);
document.getElementById('save').removeAttribute('hidden');
document.getElementById('discard').removeAttribute('hidden');
\end{lstlisting}

\textbf{Comparison to Google Sheets project}

The process is essentially the same, except that the class is used to mark a row as being edited and for formatting, whereas the Google Sheets project marked this by placing a 1 in the I column. The process of hiding and revealing buttons is unique to this project, as it is made much simpler by the fact that elements can be referred to using a unique id.

\subsection{saveChanges()}
The function calls \lstinline{getData()} to retrieve data from the input fields and stores this as a variable.
If \lstinline{getData()} does not return false, the function gets the row being edited by getting the first item with a class of \lstinline{'editing'}. The function stores the cells of this row as an array by getting all child elements with \lstinline{'td'} tags. Every element in the \lstinline{data} array is written to a cell in the row, starting from the second cell so that the id is not replaced. The function then sets the class to \lstinline{bodyRow}, removing the \lstinline{editing} class.
\begin{lstlisting}[firstnumber=197]
function saveChanges() {
    data = getData();
    if(data) {
        rowToEdit = document.getElementsByClassName('editing')[0];
        cellsToEdit = rowToEdit.getElementsByTagName('td');
        for(var i = 0; i < data.length; i++) {
            cellsToEdit[i + 1].innerHTML = data[i];
        }
        rowToEdit.classList = "bodyRow";
    }
    document.getElementById('add').removeAttribute('hidden');
    document.getElementById('save').setAttribute('hidden', true);
    document.getElementById('discard').setAttribute('hidden', true);
}
\end{lstlisting}
Lastly, the function reveals the `add transaction' button and hides the `save' and `discard' buttons.

\textbf{Comparison to Google Sheets project}

The function differs in how data is transferred. Instead of using a built-in \lstinline{copyTo()} method, each cell is individually given its data. This project hides and reveals buttons based on whether or not a row is being edited, as HTML and Javascript make this process much simpler. This step was not taken in the Google Sheets project.

\subsection{discardChanges()}
This function is responsible for unmarking the row being edited and changing the state of the buttons. Firstly, the element with a class of \lstinline{'editing'} is reset to only have the \lstinline{'bodyRow'} class. Then the `add transaction' button is revealed, and the `save' and `discard' buttons are hidden.
\begin{lstlisting}[firstnumber=214]
function discardChanges() {
    document.getElementsByClassName('editing')[0].classList = "bodyRow";
    document.getElementById('add').removeAttribute('hidden');
    document.getElementById('save').setAttribute('hidden', true);
    document.getElementById('discard').setAttribute('hidden', true);
}
\end{lstlisting}

\textbf{Comparison to Google Sheets project}

The Google Sheets project cleared formatting and removed the edit indicator by clearing the entire I column and resetting formatting for all rows. With Javascript, the process is much simpler, as it is easy to select the specific row to reset by referencing its class.

The Google Sheets project also cleared all input fields when changes were discarded. This can be done by setting all input fields to an empty string, but I felt that this was unnecessary and that entering similar data would be easier without this feature.

\subsection{sortTable()}
This function is responsible for sorting each column. The function takes two arguments: the column number, and a true or false value to determine whether the data should be sorted in ascending or descending order.

This function utilises a simple sorting algorithm known as `bubble sort' or `sinking sort'. The function steps through every row in the table, comparing it to the adjacent row, and swapping the two when necessary. This is done until a pass occurs in which no swaps were made. Although the algorithm is considered inefficient compared to other available algorithms, it is far easier to implement and is more than sufficient for this project.
\begin{lstlisting}[firstnumber=222] function sortTable(column, ascending) { var rows = document.getElementsByClassName('bodyRow'); var sorting = true; while(sorting) { sorting = false; for(var i = 0; i < (rows.length - 1); i++) { rowA = rows[i].getElementsByTagName('td')[column]; rowB = rows[i + 1].getElementsByTagName('td')[column]; var swap = false; if(ascending && rowA.innerHTML.toLowerCase() > rowB.innerHTML.toLowerCase()) swap = true; else if(!ascending && rowA.innerHTML.toLowerCase() < rowB.innerHTML.toLowerCase()) swap = true; if(swap) { sorting = true; document.getElementById('tableBody').insertBefore(rows[i + 1], rows[i]); } } } } \end{lstlisting} First, all the rows are stored in a list of elements for reference. This is done by taking a list of all elements with the \lstinline{'bodyRow'} class. It would also be possible to do this without the class, as all body rows are contained within \lstinline{<tbody>} tags. One could get a list of all rows that are children of the \lstinline{tbody} element with \lstinline{document.getElementsByTagName('tbody')[0].getElementsByTagName('tr');}. However, I believe that the approach above makes the function less readable, and therefore, I have gone with the approach using a separate class. \begin{lstlisting}[firstnumber=223] var rows = document.getElementsByClassName('bodyRow'); \end{lstlisting} Next, the main loop of the function begins. A variable \lstinline{sorting} is given a value of \lstinline{true} so that the function is allowed to start. With every loop, \lstinline{sorting} is set to \lstinline{false}. The value changes during the loop if any rows have been swapped, otherwise, the rows are considered to be in the correct order and the function terminates as the \lstinline{while} loop ends. Every loop, the function steps through every row excluding the last row. It stores the cell of the current row and the row below in the specified column. \begin{lstlisting}[firstnumber=225] var sorting = true; while(sorting) { sorting = false; for(var i = 0; i < (rows.length - 1); i++) { rowA = rows[i].getElementsByTagName('td')[column]; rowB = rows[i + 1].getElementsByTagName('td')[column]; \end{lstlisting} A variable \lstinline{swap} is initially set to false. All text is converted to lowercase for comparison using the \lstinline{toLowerCase()} method. The values in the cells defined by \lstinline{rowA} and \lstinline{rowB} are compared using a \lstinline{>} or \lstinline{<} operator depending on whether \lstinline{ascending} is \lstinline{true} or \lstinline{false}. If the values are determined to be in the wrong order, \lstinline{swap} is set to \lstinline{true}. If the rows are required to be swapped, the \lstinline{sorting} variable is set to true so that the \lstinline{while} loop does not terminate. The \lstinline{insertBefore()} method can be used to change the order of the children of an element. This method needs to be called by the parent element, hence why the \lstinline{tableBody} element is chosen. The method takes two elements of arguments, and places the first element above the second element. \lstinline{rows[i]} is always above \lstinline{rows[i + 1]}, therefore to swap them, we list \lstinline{rows[i + 1]} as the first argument. 
\begin{lstlisting}[firstnumber=232] swap = false; if(ascending && rowA.innerHTML.toLowerCase() > rowB.innerHTML.toLowerCase()) swap = true; else if(!ascending && rowA.innerHTML.toLowerCase() < rowB.innerHTML.toLowerCase()) swap = true; if(swap) { sorting = true; document.getElementById('tableBody').insertBefore(rows[i + 1], rows[i]); } \end{lstlisting} \textbf{Comparison to Google Sheets project} In Google Sheets, when buttons are assigned to a function, the arguments are not defined. For this reason, it was necessary to have a different button for ascending and descending orders. In this project, we can define both the column and whether it should sort in ascending or descending by passing different arguments. Therefore, only one function is required to sort any column. This function is somewhat more complex than the Google Sheets equivalent. For the Google Sheets project, there existed a built-in function to sort a column, whereas this had to be written from scratch for HTML and Javascript. \newpage \section{CSS}\label{CSS} \subsection{Vertical Scrolling Table} The table id refers to the \lstinline{article} that contains the table, rather than the table itself. This article was given a max height of 80 visual heights, or approximately 80\% of the screen height. \lstinline{overflow: auto;} specifies that, if necessary, a scroll bar should be present, this is true for both horizontal and vertical scrolling. \begin{lstlisting}[firstnumber=29] #table { max-height: 80vh; overflow: auto; } \end{lstlisting} The last three properties below are important for keeping the header in place while scrolling. \begin{lstlisting}[firstnumber=39] th { min-width: 200px; width: 10%; position: sticky; background: white; top: 0; } \end{lstlisting} \begin{itemize} \item \lstinline{position: sticky;} is used to keep the object in place when scrolling. \item \lstinline{background: white;} is used to give the element a non-transparent background, so that data cannot be seen through the header. \item \lstinline{top: 0;} is used to specify that the element should remain at the top of its parent element with no offset. \end{itemize} \subsection{Horizontal Scrolling on Overflow}\label{overflow-x} The inputFields id is used to identify the \lstinline{article} element that contains the \lstinline{form}. The important property here is the \lstinline{overflow-x: auto;} line, which specifies that, if the child element is wider than this element, a horizontal scroll bar should be present. The \lstinline{form} element, which is a child of the \lstinline{article} is given a minimum width to ensure that the scroll bar is created rather than reducing the width of the element. \begin{lstlisting}[firstnumber=5] #inputFields { padding: 10px 0; overflow-x: auto; } form { min-width: 1900px; } \end{lstlisting} \subsection{Miscellaneous} \subsubsection{Sort buttons} The header cells contain two \lstinline{sections}, one of which has a special class. Both sections are given a \lstinline{margin} and \lstinline{padding} of 0 to minimise wasted space. Both sections are also set to \lstinline{display: inline-block} to specify that they should be arranged horizontally. Both sections are given a \lstinline{width} of 80\% of the parent element. However, this is overruled for the element with a class of \lstinline{sort}, which is assigned a \lstinline{width} of 10\% to ensure that both elements fit horizontally in the parent. To further reduce wasted space, the \lstinline{border} and \lstinline{padding} of the buttons are set to 0. 
The button's \lstinline{display} property is set to \lstinline{block} as otherwise, it would inherit the \lstinline{inline-block} property from its parent and attempt to arrange horizontally. Lastly, the buttons are set to take up the entire width of the parent element. \begin{lstlisting}[firstnumber=47] th > section { width: 80%; display: inline-block; padding: 0; margin: 0; } .sort { width: 10%; } .sort > button { padding: 0; border: 0; display: block; width: 100%; } \end{lstlisting} \subsubsection{Editing highlight} The current row selected for highlighting is specified using a class. As such, it is possible to give this row unique styling. For example, currently the row is given a yellow background. \begin{lstlisting}[firstnumber=65] .editing { background-color: yellow; } \end{lstlisting} \subsubsection{Table borders} Table borders do not render properly with a scrolling body and fixed header. To resolve this, borders are rendered using the \lstinline{box-shadow} property. Two box shadows are defined, one extends outwards from the right and bottom by one pixel in each direction. The other is given the \lstinline{inset} property, so that it extends inwards from the left and top. This creates a full border with a width of 1 pixel in each direction. As these borders are offset from the element, adjacent elements will overlap borders, preventing borders from combining into extra thick borders. \begin{lstlisting}[firstnumber=69] #table, table, td, th { box-shadow: 1px 1px black, inset 1px 1px black; } \end{lstlisting} \newpage \appendix \section{HTML Source Code} \lstinputlisting{../src/main.html} \newpage \section{Javascript Source Code} \lstinputlisting{../src/script.js} \newpage \section{CSS Source Code} \lstinputlisting{../src/style.css} \end{document}
{ "alphanum_fraction": 0.7580723502, "avg_line_length": 45.0980952381, "ext": "tex", "hexsha": "7563820c6e41b0d7d6b56fb770ebd084081bc19e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2fb8e033099aff16338bcd7a1c96abe556e7088f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Green-Avocado/financial-transaction-web-app", "max_forks_repo_path": "docs_legacy/docs.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2fb8e033099aff16338bcd7a1c96abe556e7088f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Green-Avocado/financial-transaction-web-app", "max_issues_repo_path": "docs_legacy/docs.tex", "max_line_length": 237, "max_stars_count": null, "max_stars_repo_head_hexsha": "2fb8e033099aff16338bcd7a1c96abe556e7088f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Green-Avocado/financial-transaction-web-app", "max_stars_repo_path": "docs_legacy/docs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10719, "size": 47353 }
%\subsection{Overview}\label{filter_impactsExecutable_filter_impactsOverview} The \code{filter\_\-impacts} script filters out the low-\/impact incidents from an impact file. The \code{filter\_\-impacts} command reads an impact file, filters out the low-\/impact incidents, rescales the impact values and outputs another impact file. \subsection{Usage}\label{filter_impactsExecutable_filter_impactsUsage} \begin{unknownListing} filter_impacts [options...] <impact-file> <out-file> \end{unknownListing} \subsection{Options}\label{filter_impactsExecutable_filter_impactsOptions} \begin{unknownListing} --threshold=<val> The contamination incidents with undetected impacts above a specified threshold should be kept. --percent=<num> The percentage of contamination incidents with the worst undetected impact that should be kept. --num=<num> The number of contamination incidents with the worst undetected impact that should be kept. --rescale Rescale the impacts using a log10 scale. --round Round input values to the nearest integer. \end{unknownListing} \subsection{Arguments}\label{filter_impactsExecutable_filter_impactsArguments} \begin{unknownListing} <impact-file> The input impact file. <out-file> The output impact file. \end{unknownListing} %\subsection{Description}\label{filter_impactsExecutable_filter_impactsDescription} % %The \code{filter\_\-impacts} command reads an impact file, filters out the %low-\/impact incidents, rescales the impact values, and writes out another %impact file. % %\subsection{Notes}\label{filter_impactsExecutable_filter_Notes} % %None.
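\subsection{Example}

As an illustration (the file names used here are placeholders, not files distributed with the toolkit), an invocation that keeps the 10\% of contamination incidents with the worst undetected impact and rescales the remaining impact values could look like the following:

\begin{unknownListing}
filter_impacts --percent=10 --rescale Net3.impact Net3_filtered.impact
\end{unknownListing}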
{ "alphanum_fraction": 0.762892709, "avg_line_length": 35.1458333333, "ext": "tex", "hexsha": "1615570f3e94144d669dc6ff54b57d35321d8990", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-05T18:11:43.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-24T19:04:14.000Z", "max_forks_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "USEPA/Water-Security-Toolkit", "max_forks_repo_path": "doc/wst/executables/filter_impactsExecutable.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "USEPA/Water-Security-Toolkit", "max_issues_repo_path": "doc/wst/executables/filter_impactsExecutable.tex", "max_line_length": 100, "max_stars_count": 3, "max_stars_repo_head_hexsha": "6b6b68e0e1b3dcc8023b453ab48a64f7fd740feb", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "USEPA/Water-Security-Toolkit", "max_stars_repo_path": "doc/wst/executables/filter_impactsExecutable.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-05T18:11:40.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-10T18:04:14.000Z", "num_tokens": 386, "size": 1687 }
\section{Outlook}
Recommender systems seem to have a bright future. More and more e-commerce sites use them to boost both user satisfaction and sales \citep[p.~4-7]{ricci:2011}. But the application of recommender systems is not restricted to websites only. They will be integrated further into everyday life by being built into operating systems: Ubuntu, for example, directs search queries to e-commerce providers such as Amazon.com to help users find items \citep{ubuntu:2014}.

%Future research
This thesis restricted itself to implementing and evaluating an RS utilizing Rocchio's algorithm. Future work may test Rocchio's algorithm in combination with other recommender systems, as this may result in better recommendations. It would also be interesting to investigate how well Rocchio's algorithm for relevance feedback works as support for a recommender system, pre-selecting items which could be relevant. This would require using the algorithm in its original use-case, with a search query as input.

This thesis only considered e-commerce. However, it should be possible to adopt some recommender techniques in real-world shops, and all technical preconditions for realizing this are already satisfied. A content-based RS in the context of an apparel shop could be built using RFID tags on every piece of clothing. Every time a user enters a changing room, the RS could look up which clothes the user took into the cubicle and generate recommendations from this information. All generated recommendations could be displayed on a small screen mounted in the changing cubicle. Testing users' acceptance of an RS in a physical shop, and how well it works, may also be an interesting field of research.
{ "alphanum_fraction": 0.8179207352, "avg_line_length": 66.9615384615, "ext": "tex", "hexsha": "d3d0322f18b8846f39d8a495b6353c6b86b6a797", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "be06aaeb1b4d73f727a19029a3416a9b8043194d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dustywind/bachelor-thesis", "max_forks_repo_path": "thesis/inc/outlook/outlook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "be06aaeb1b4d73f727a19029a3416a9b8043194d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dustywind/bachelor-thesis", "max_issues_repo_path": "thesis/inc/outlook/outlook.tex", "max_line_length": 199, "max_stars_count": null, "max_stars_repo_head_hexsha": "be06aaeb1b4d73f727a19029a3416a9b8043194d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dustywind/bachelor-thesis", "max_stars_repo_path": "thesis/inc/outlook/outlook.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 340, "size": 1741 }
\subsection{Raising and lowering indices}

We showed that the inner product between two vectors expanded in the same basis can be written as:

\(\langle v, w\rangle =\langle \sum_i e_iv^i, \sum_j e_jw^j\rangle \)
\(\langle v, w\rangle =v^i\overline {w^j}\langle e_i, e_j\rangle\)

Defining the metric as:

\(g_{ij}:=\langle e_i,e_j\rangle \)
\(\langle v, w\rangle =v^i\overline {w^j}g_{ij}\)

\subsubsection{Metric inverse}

We define the inverse metric as the matrix inverse of the metric:

\(g^{ij}:=(g_{ij})^{-1}\)

We can use the metric and its inverse to lower and raise the indices of vectors:

\(v_i:=v^jg_{ij},\ v^i:=v_jg^{ij}\)

\subsubsection{Raising and lowering indices of tensors}

If we have a tensor:

\(T_{ij}\)

we can define:

\(T_i^k=T_{ij}g^{jk}\)
\(T^{kl}=T_{ij}g^{ik}g^{jl}\)

\subsubsection{Tensor contraction}

If we have the product:

\(T_{ij}x^j\)

we can contract it over the repeated index to get:

\(T_{ij}x^j=v_i\)

Similarly we can have:

\(T^{ij}x_j=v^i\)
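As a short worked example (added here purely for illustration), consider two-dimensional polar coordinates, where the metric is diagonal:

\(g_{11}=1,\ g_{22}=r^2,\ g_{12}=g_{21}=0,\ \) so \(\ g^{11}=1,\ g^{22}=1/r^2\)

Lowering the index of a vector with components \(v^i=(v^r, v^\theta)\) then gives:

\(v_r=g_{rr}v^r=v^r,\ v_\theta=g_{\theta\theta}v^\theta=r^2v^\theta\)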
{ "alphanum_fraction": 0.6659064994, "avg_line_length": 16.8653846154, "ext": "tex", "hexsha": "c14c6a18900eec8e3f3d5acc64b9b487d12b854d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/tensors/02-02-juggling.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/tensors/02-02-juggling.tex", "max_line_length": 91, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/tensors/02-02-juggling.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 306, "size": 877 }
\documentclass{beamer} \usefonttheme[onlymath]{serif} \usepackage[english]{babel} %For internationalization \usepackage[utf8]{inputenc} %For character encoding \usepackage{amsmath} %For mathematical typesetting \usepackage{amssymb} %For mathematical typesetting \usepackage{graphicx} %For handling graphics \newcommand{\be}{\begin{equation}} \newcommand{\bea}{\begin{equation*}} \newcommand{\ben}[1]{\begin{equation}\label{#1}} \newcommand{\ee}{\end{equation}} \newcommand{\eea}{\end{equation*}} \newcommand{\aq}{\overset{\sim}{q}} \mathchardef\mhyphen="2D \title {An Introduction to Discontinuous Galerkin Methods} \subtitle{Module 3B: To Higher-Orders - Discrete System} \author[Bevan] {J.~Bevan} \institute[UMass Lowell] { Department of Mechanical Engineering, Grad Student\\ University of Massachusetts at Lowell } \date[Fall 2014] {} \subject{Discontinuous Galerkin} \begin{document} \frame{\titlepage} \frame{\frametitle{Module 3B: To Higher-Orders - Discrete System}\tableofcontents} %NEW SECTION \section{Numerical Quadrature (Gauss)} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item We now have a method for generating a robust arbitrary order solution approximation, but unlike before it isn't practical to analytically pre-calculate all the integrals. \item We can use a numerical quadrature technique to do this instead, able to integrate arbitrary functions \item If we call the interpolation of a function $In f$ then we assume that for a sufficiently accurate interpolation, we can use the interpolation of the function wherever we could use the function itself before. This is in general M-1 order accurate. \be f \approx In f =\sum_{i=0}^M f(x_i)L_i(x) \ee \end{itemize} \be \int f \approx \int In f = \int \sum_{i=0}^M f(x_i)L_i(x) = \sum_{i=0}^M f(x_i) \int L_i(x) = \sum_{i=0}^M f(x_i)w_i\ee } %NEW SECTION \section{Hermite Interpolation (and quadrature)} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item Recall: Hermite interpolation includes derivatives of interpolated function as well \item Consider a Hermite interpolation polynomial that includes first derivatives as well, the interpolation would be $2M-1$ accurate. The quadrature using this polynomial would look like: \be \int f \, dx \approx \sum_{i=0}^M f(x_i)w_i+\sum_{i=0}^M\left[ f'(x_i)\int (x-x_i)L_i^2(x) \, dx \right] \ee \item It turns out if we choose our quadrature/interpolation points to be the Legendre roots, the integral for the second term is zero. Thus no first derivative terms are needed for the Hermite quadrature, even though it is $2M-1$ order accurate. \end{itemize} } %NEW SECTION \section{Truncation error/exact quadrature} \frame[shrink]{\frametitle{\textbf{\secname}} \begin{itemize} \item It is important to consider error sources from the approximation to the solution and integrals, these can affect convergence \item Three main error sources in quadrature: aliasing, truncation, and inexact quadrature \item Aliasing occurs if the function is not sampled frequently enough, it is assumed that the sol'n is sufficiently smooth and the discretization suitably fine to avoid this in most cases \item Truncation is unavoidable except where the exact function is of equal or lesser order than the interpolation/quadrature. Higher order terms present in the exact function are left off. \item Inexact quadrature occurs when the total polynomial order of the product of the interpolated functions undergoing quadrature exceeds the exactness of the quadrature. 
For Gauss-Legendre quadrature this isn't a problem for one and even two functions in the integrand. Each function is of order M-1 and the quadrature is exact for 2M-1, so the quadrature is able to exactly integrate the interpolation \end{itemize} } %NEW SECTION \section{GL Lagrange Orthogonality} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item A final useful property of the Lagrange basis with Legendre interpolation points is orthogonality \item The product of two $M-1$ order Lagrange bases can be rearranged to be a Legendre poly of order $M$ and a remainder polynomial of order $M-2$ \item The remainder polynomial can be expressed as a linear combination of Legendre polys all of order $<M$, all are orthogonal to the order $M$ Legendre, so \be \int_{-1}^1 L_i(x)L_j(x) \,dx = \delta_{ij}w_i \ee \end{itemize} } %NEW SECTION \section{Local Mapping Function (Jacobian)} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item The domain of orthogonality for Legendre polynomials (and by extension GL Lagrange) is $[-1, 1]$ \item Elements may be arbitrary sizes though, we'd like to be able to transform $x \in [x_L, x_R] \rightarrow X \in [-1, 1]$ by means of a mapping $x=g(X)$ and it's inverse $X=G(x)$ \be x=g(X)=\frac{X+1}{2}\Delta x + x_L\quad ,\quad X=G(x)=\frac{2(x-x_L)}{\Delta x}+1 \ee \item Applying a change of variables for the mapping \be \int_{x_L}^{x_R} L_i(x) L_j(x) \,dx \rightarrow \int_{-1}^1 L_i(g(X))L_j(g(X)) g' \, dX \ee \item $J=g'=\frac{\Delta x}{2}$ is called the determinant of the Jacobian matrix \end{itemize} } %NEW SECTION \section{Mass Matrix- Diagonalization} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item We have done what seems like quite a bit of tangential work, but it now pays off \be \sum_{i=0}^M \frac{d \aq_i}{dt} \int_k \psi_i(x) \phi_j(x) \,dx = \sum_{i=0}^M \frac{d \aq_i}{dt} \int_I L_i(X) L_j(X) \frac{\Delta x}{2} \,dX \ee \be= \frac{\Delta x}{2} \sum_{i=0}^M \frac{d q_i}{dt} \delta_{ij}w_i = \frac{\Delta x}{2}q_j'w_j \quad for\,all\, j \ee \item We've reduced the full mass matrix into a diagonal mass matrix with all other terms zero, compared to Module 2 solver case the mass matrix is trivially invertible. We have: $\frac{\Delta x}{2}\mathbf{q' \,M}$ where $\mathbf{M}_{jj} = w_j$ \end{itemize} } %NEW SECTION \section{Log differentiation} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item We can get a closed form expression for $L_j'(x)$ in the stiffness term by using logarithmic differentiation \item The main idea is that in general $f'/f = ln(f)$ so applying this to our Lagrange basis \be L_j'(x) = L_j \sum_{r=0,r\neq j}^N \frac{1}{x-x_r} \ee \item using our previous function code for "\textit{Lag(x)}" we can get a general expression \textit{dLag= @(x,nv) Lag(x,nv).*sum(1./bsxfun(@minus,x,nn(nv,:,:)),3)} \item We need a more involved method for evaluation at the interp points (see dLagrange.m) \end{itemize} } %NEW SECTION \section{Stiffness Integral} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item We now have a routine for calculating $L_j'$, and suitable quadrature; we can evaluate the stiffness integral \be \int_k \, \sum_{i=0}^M\left[ c\aq_i \psi_i(x)\right] \phi'_j(x) \,dx = \sum_{i=0}^M c\aq_i \int_I L_i(X) L_j'(X) \,dX \ee \item No Jacobian term from the mapping. The derivative in the integrand produces a complementary $1/J$ that cancels due to the change of variables. \item No tricks to be had for reducing the stiffness matrix, it is a full matrix. 
We have: $c\mathbf{K}_{ji} \, \mathbf{\aq}$ \end{itemize} } %NEW SECTION \section{Numerical Flux (Extrapolated)} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item One downside of using Gauss-Legendre points is there are no points on the boundary \item It is easy to calculate the boundary solution values from the solution interpolation at the left end (same idea for the right) \be \aq(x_L) = \sum_{i=0}^M \aq_i L_i(x_L) \ee \item Call $L_i(x_L) = L_{iL}$, the vector notation is then $\aq_L = \mathbf{L}_{iL}^T \, \mathbf{\aq}$ \item So that our numerical flux vector is $\mathbf{\hat{f}} =c(\overset{k}{q}_R\mathbf{L}_{jR}-\overset{k-1}{q_R}\mathbf{L}_{jL})$ \item Lobatto alternative gives boundary points, but would make the mass matrix a full matrix. Inversion is likely more expensive than interpolation \end{itemize} } %NEW SECTION \section{Assembly of System} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item We can now combine the mass, stiffness, and numerical flux terms \be \frac{\Delta x}{2}\mathbf{q' \,M} + \mathbf{\hat{f}} - c\mathbf{K}_{ji} \, \mathbf{\aq}=0 \ee \item solving for $q'$ \be \mathbf{\aq}' = \frac{2}{\Delta x} (c\mathbf{K}_{ji} \, \mathbf{\aq} -\mathbf{\hat{f}}) \mathbf{M}^{-1} \ee \item it is also possible to represent it componentwise easily thanks to the diagonal mass matrix \be \aq_j' = \frac{2}{w_j\Delta x }(c \mathbf{K}_{j\mhyphen}\mathbf{\aq}-\hat{f}_j)\ee \end{itemize} } %NEW SECTION \section{RK4 Time discretization} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item Compared to the simple linear DG solver, we'd like to use a higher order time discretization, Runge-Kutta 4th order: RK4. We can express as a function: $\mathbf{\aq}'(\mathbf{\aq})$ from our discrete system. RK4 consists of 4 trial steps: $k_1 = \mathbf{\aq}'(\mathbf{\aq}) \, \Delta t$\\ $k_2 = \mathbf{\aq}'(\mathbf{\aq}+\frac{k_1}{2}) \, \Delta t$\\ $k_3 = \mathbf{\aq}'(\mathbf{\aq}+\frac{k_2}{2}) \, \Delta t$\\ $k_4 = \mathbf{\aq}'(\mathbf{\aq}+k_3) \, \Delta t$\\ $\mathbf{q}(t+1) = \mathbf{q} + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3}+ \frac{k_4}{6}$ \end{itemize} } %NEW SECTION \section{Investigate p-Convergence} \frame{\frametitle{\textbf{\secname}} \begin{itemize} \item How does the p-convergence rate compare with h-convergence? \item Does the smoothness of the initial sol'n seem to effect the rate of convergence? (e.g. sin(x) vs gaussian curve) \item Does h or p refinement seem to be more efficient? \end{itemize} } \end{document}
{ "alphanum_fraction": 0.7294696098, "avg_line_length": 52.8121546961, "ext": "tex", "hexsha": "a9c08db9f3da8cf44587f5f4a1c200d09a683d17", "lang": "TeX", "max_forks_count": 22, "max_forks_repo_forks_event_max_datetime": "2021-12-14T09:03:21.000Z", "max_forks_repo_forks_event_min_datetime": "2016-04-30T00:20:42.000Z", "max_forks_repo_head_hexsha": "34527e2fa77a193a58bd2f47f781833d70c59315", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "userjjb/22_6xx-DG-for-PDEs", "max_forks_repo_path": "Module 3B/22_6xx-Module3B.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "e9463ab6faee98af26102949e04ab8be75405154", "max_issues_repo_issues_event_max_datetime": "2016-03-20T08:48:32.000Z", "max_issues_repo_issues_event_min_datetime": "2016-03-15T11:50:32.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "LuciaZhang9/22_6xx-DG-for-PDEs", "max_issues_repo_path": "Module 3B/22_6xx-Module3B.tex", "max_line_length": 404, "max_stars_count": 31, "max_stars_repo_head_hexsha": "34527e2fa77a193a58bd2f47f781833d70c59315", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "sconde/22_6xx-DG-for-PDEs", "max_stars_repo_path": "Module 3B/22_6xx-Module3B.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-20T00:41:34.000Z", "max_stars_repo_stars_event_min_datetime": "2016-03-15T11:40:20.000Z", "num_tokens": 3035, "size": 9559 }
%!TEX root = ../thesis.tex
%*******************************************************************************
%****************************** Third Chapter **********************************
%*******************************************************************************

\chapter{Methodology}
\label{ch:method}

% **************************** Nomenclature **********************************

\nomenclature[d-1-XTrain]{$X_\mathrm{train}$}{Datasets used for training the algorithms}
\nomenclature[d-1-XTest]{$X_\mathrm{test}$}{Datasets used to provide a score for the algorithms}
\nomenclature[d-2-xunknown]{$x_\mathrm{unknown}$}{Data points where the true label is not available to the algorithms used}
\nomenclature[d-2-xknown]{$x_\mathrm{known}$}{Data points where the true label is available to the algorithms used}
\nomenclature[d-3-yknown]{$y_\mathrm{known}$}{True labels available to the algorithms used}
\nomenclature[d-3-yunknown]{$y_\mathrm{unknown}$}{True labels unavailable to the algorithms used}
\nomenclature[d-4-n]{$n$}{The number of samples per iteration}
% \nomenclature[u-3-ypredict]{$y_\mathrm{predict}$}{Predicted labels by the algorithm}

% **************************** Define Graphics Path **************************
\graphicspath{{Chapter3/Figs/Vector/}{Chapter3/Figs/}}

\section{Data}

Each dataset used consists of 1024-bit Morgan fingerprints for the features and the associated pChEMBL values for the labels. The sets used for parameter fitting and score reporting make up a set of 2094 files from \textcite{CHEMBL}, compiled by \textcite{king18}. These were filtered to prevent datasets with fewer than 1000 entries from being admitted into the main script. Columns were added with the scoring limits, as will be discussed later within the chapter. Consideration was given to reducing larger datasets to 1000 data points, although this notion was disregarded as the data was seen as too valuable to ignore. The datasets used within the scripts are given at \url{https://github.com/rjb255/researchProject/tree/master/data/big/qsar_with_lims}.

Morgan fingerprints were chosen due to the ease with which the vectors can be calculated, their popularity within the chemoinformatics sphere, and the success enjoyed by others when using them for predictive purposes. It was decided that physical properties would not be used as features, as this would increase the onus on data sanitation and preparation rather than active learning, although using physical data for the labels is unavoidable. Here, pChEMBL, as defined in (\ref{eq:pChEMBL}), is used due to its comparability, easy interface with \textcite{CHEMBL}, and perceived informativeness.

% \section{Custom Algorithms}
% As well as the algorithms mentioned in Chapter~\ref{ch:2}, several custom algorithms were developed and added to the testing set. These methods do use parameters, and so require the minimisation technique. Additionally, these algorithms take a composite methodology, using other active learning methods in order to reach a conclusion, so some concepts will be assumed knowledge from Chapter~\ref{ch:2}.

\section{Computational Methodology}

The methodology presents a novel means of assessing different parametrised batch active learning methods on existing datasets, allowing for a robust answer on the use of active learning in drug rediscovery. Results can thus be reported with an associated degree of confidence. This approach takes principles commonly used in machine learning and applies them to more traditional algorithmic methods.
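For concreteness, the 1024-bit Morgan fingerprints described in the Data section can be generated with RDKit. The snippet below is a minimal illustrative sketch rather than the project's actual preprocessing code; in particular, the fingerprint radius and the example SMILES string are assumptions, as neither is specified in this chapter.

\begin{lstlisting}[language=Python]
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_bits(smiles, radius=2, n_bits=1024):
    # Radius 2 is an assumed value; the thesis does not state it.
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)  # bit vector -> numpy 0/1 array
    return arr

# Example molecule (aspirin); any SMILES string could be used here.
features = morgan_bits("CC(=O)Oc1ccccc1C(=O)O")
\end{lstlisting}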
Python was used as the scripting language, with the source code provided at \url{https://github.com/rjb255/researchProject/tree/master/purePython}. Firstly, a collection of pre-existing datasets, $X$, is used. $X$ is then split into two subsets: $X_{\mathrm{train}}$ and $X_\mathrm{test}$. Similarly to classical machine learning methods, the former of these subsets is used in fitting the parameters of the algorithms, and the latter is used to provide a result without the risk of data leakage into the training set. Parallelisation is used to train the algorithms efficiently, allowing the training time to scale as $\sim{}\mathcal{O}(c)$ provided an unrestricted number of processors. Datasets used have at least 1000 entries, resulting in 164 datasets used for training and a further 42 used for testing.

Examining the smaller details, each algorithm is provided with the sets $x_\mathrm{known}$, $y_\mathrm{known}$, and $x_\mathrm{unknown}$. The algorithms are allowed to select a subset of $x_\mathrm{unknown}$ to be added into $x_\mathrm{known}$, with the corresponding labels added to $y_\mathrm{known}$. This can then repeat until a predefined stopping point is reached. Scores are reported using a weighted mean squared error, given later in (\ref{eq:wmse}), based upon $y_\mathrm{predict}$ for all $x$.

This is similar to a standard machine learning methodology with a couple of differences. Most notably, no distinction is made between the training and testing set within a dataset, contrary to standard practice. This is due to two reasons. Firstly, the datasets are not large enough for an accurate representation of the data within a testing set, and secondly, the score given to each dataset is not used within the machine learning algorithms to fit parameters, as is usually the case. All algorithms used rely upon a simple custom composite model to allow for flexibility and consistency.

\subsection{Model}

The machine learning model is the only custom class used. Here, a structure similar to Scikit's machine learning models is used \cite{scikit}, as is demonstrated in Table~\ref{tab:Model}. To manage this, it has four methods: \lstinline{__init__}, \lstinline{fit}, \lstinline{predict}, and \lstinline{predict_error}. The last of these is not seen in all of Scikit's machine learning models and is usually reserved for those which can report a certainty of prediction. Here, this was achieved by taking the standard deviation of the individual models' predictions.

\begin{table}[H]
    \centering
    \begin{tabular}{@{}ccc@{}}
    \toprule
                             & Name                                    & Description \\ \midrule
    Attributes               & Models: List                            & List of models to be used in composite \\ \midrule
    \multirow{3}{*}{Methods} & fit(X: int[][], Y: double[])            & Fits the models in Models \\
                             & predict(X: int[][]): double[]           & \begin{tabular}[c]{@{}c@{}}Takes a set of feature vectors and returns\\ the mean predicted label from all the models.\end{tabular} \\
                             & predict\_error(X: int[][]): double[][]  & \begin{tabular}[c]{@{}c@{}}Takes a set of feature vectors and returns the\\ mean predicted label from all the models and\\ the standard deviations of the model predictions.\end{tabular} \\ \bottomrule
    \end{tabular}
    \caption{Schema for the Model Class.}
    \label{tab:Model}
\end{table}

The models used for the composite model were Bayesian-ridge, k-nearest neighbours, random forest regressor, stochastic gradient descent regressor with Huber-loss, epsilon-support vector regression, and AdaBoost regressor \cite{scikit}. This was kept consistent during testing, allowing for direct comparison of the algorithms without influence from model selection.
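The project's actual class is not reproduced here; the following is a minimal sketch of how such a composite could be implemented, using a reduced model list for brevity (in practice the full list of six regressors above would be supplied).

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.neighbors import KNeighborsRegressor

class CompositeModel:
    """Mean prediction across several regressors; the spread between
    the individual predictions serves as an uncertainty estimate."""

    def __init__(self, models=None):
        self.models = models or [BayesianRidge(),
                                 KNeighborsRegressor(),
                                 RandomForestRegressor()]

    def fit(self, X, Y):
        # Fit every constituent model on the known data.
        for model in self.models:
            model.fit(X, Y)
        return self

    def predict(self, X):
        # Mean of the individual model predictions.
        return np.mean([m.predict(X) for m in self.models], axis=0)

    def predict_error(self, X):
        # Mean prediction plus the between-model standard deviation.
        predictions = np.array([m.predict(X) for m in self.models])
        return predictions.mean(axis=0), predictions.std(axis=0)
\end{lstlisting}

A model of this form can be fitted to $x_\mathrm{known}$ and $y_\mathrm{known}$ and then passed to the sampling algorithms described below.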
\subsection{Scoring}

Scoring is implemented as a weighted mean squared error ($\mathrm{wmse}$), given in (\ref{eq:wmse}), where $\hat{y}_i$ is the predicted label and $w_i$ is a normalisation of the true labels such that $\sum{w_i}=1$ and ${0\leq{}w_i\leq{}1}$. A further modification ensures that the base case with five data points gives a $\mathrm{score}=1$ and that a model fitted to the entire dataset gives a $\mathrm{score}=0$.

\begin{equation}
    \mathrm{wmse}=\frac{1}{n}\sum_{i=0}^{n-1}{w_i(y_i-\hat{y}_i)^2}
    \label{eq:wmse}
\end{equation}

This achieves several goals. Firstly, it targets the higher values of pChEMBL, as these are the most beneficial for drug development. Secondly, it reduces the natural spread in results between datasets, preventing those that the model predicts poorly from distorting the comparison of the algorithms. Finally, it allows the results to be given as a fractional improvement, so that a target such as ``85\%'' could be given as a stopping criterion if desired.

It is also useful to have a learning-rate metric, as defined in (\ref{eq:rate}), where $N$ is the total number of samples taken. This provides a measure of the rate of learning.

\begin{equation}
    \dot{\mathrm{score}}=-\frac{\Delta\mathrm{score}}{\Delta{}N}\times{}10^4
    \label{eq:rate}
\end{equation}

\subsection{Active Learning Algorithms}

The algorithms tested are all provided with $x_\mathrm{known}$, $x_\mathrm{unknown}$, $y_\mathrm{known}$, a model fitted to $x_\mathrm{known}$ and $y_\mathrm{known}$, and a memory object to allow for information to be kept through iterations if required. This is useful for clustering, where online training is possible. It is also within the memory object that parameters may be provided. As a result, it is impossible for the suppressed $y_\mathrm{unknown}$ to influence an algorithm's scoring process. The algorithms then return a list of scores in the same order as $x_\mathrm{unknown}$, with the lowest scores designating the highest priority in sampling. This allows uniformity across algorithms and the amalgamation of different algorithms without the duplication of code.

\subsubsection{Monte Carlo}

The Monte Carlo algorithm employs random sampling. This represents the least computationally expensive approach, and is thus used as a baseline for comparing other algorithms. Since the datasets are shuffled prior to being used, the algorithm is extremely simple, as demonstrated in Algorithm~\ref{alg:MC}.

\begin{algorithm}[H]
    \KwData{$X_\mathrm{unknown}$}
    \KwResult{An array of priority-scores for sampling}
    \Return{$\mathrm{ones\_like}(X_\mathrm{unknown})$}
    \caption{Monte Carlo Sampling}
    \label{alg:MC}\SetAlgoLined
\end{algorithm}

\subsubsection{Greed}

Since the largest activity is sought, one proposed methodology is simply to seek the highest predicted labels. Here, the predict() method (see Table~\ref{tab:Model}) was used to return a prediction for each unknown point. The indices of $x_\mathrm{unknown}$ were then returned, ordered descending with respect to the aforementioned predictions. The algorithm used is given in Algorithm~\ref{alg:greedy}.
\begin{algorithm}[H]
    \KwData{$X_\mathrm{known}$, $Y_\mathrm{known}$, $X_\mathrm{unknown}$, Model}
    \KwResult{An array of priority-scores for sampling}
    Model.fit($X_\mathrm{known}$, $Y_\mathrm{known}$)\;
    prediction = Model.predict($X_\mathrm{unknown}$)\;
    \Return{$-\mathrm{prediction}$}
    \caption{Greed Sampling Selection}
    \label{alg:greedy}\SetAlgoLined
\end{algorithm}

\subsubsection{Region of Disagreement}

\textbf{R}egion \textbf{o}f \textbf{D}isagreement (RoD) uses the predict\_error() method (see Table~\ref{tab:Model}) to return a prediction and a standard deviation. The prediction is ignored. The negative of the standard deviation is returned to ensure that the largest uncertainty receives the lowest ``score''. This is shown in Algorithm~\ref{alg:rod}.

\begin{algorithm}[H]
    \KwData{$X_\mathrm{known}$, $Y_\mathrm{known}$, $X_\mathrm{unknown}$, Model}
    \KwResult{$X$ ordered according to priority for sampling}
    Model.fit($X_\mathrm{known}$, $Y_\mathrm{known}$)\;
    \_, error = Model.predict\_error($X_\mathrm{unknown}$)\;
    \Return{$-\mathrm{error}$}
    \caption{RoD Sampling Selection}
    \label{alg:rod}\SetAlgoLined
\end{algorithm}

\subsubsection{Hotspot Clusters}

Three clustering algorithms were trialled, all based upon the ideology presented in Section~\ref{sec:litRevDH}. The function shared by all three algorithms is shown in Algorithm~\ref{alg:cluster1}. Here, $c$ is the number of clusters sought, and is a parameter that requires fitting. Bounds can be placed upon this. The lower limit can be set as the number of known data points, and the upper as the total number of data points in the dataset, although it is hypothesised that values beyond the sum of the known points and the samples sought would make little to no difference. To test this hypothesis, the upper limit will be set at $\mathrm{length}(X_\mathrm{unknown})+1.5n$. The combined limits are shown in (\ref{eq:limsClust1}). It is important to note the algorithm used for clustering: K-Means with a Huber loss function. This follows recommendations from \textcite{SciClus} for scalability.

\begin{equation}
    \label{eq:limsClust1}
    {\mathrm{length}(X_\mathrm{known})<c<\mathrm{length}(X_\mathrm{unknown})+1.5n}
\end{equation}

\begin{algorithm}[H]
    \KwData{$z_\mathrm{known}$, $z_\mathrm{unknown}$, $c$}
    \KwResult{Score of data points}
    combined\_z = concat($z_\mathrm{known}$, $z_\mathrm{unknown}$)\;
    clusters = cluster(number\_of\_clusters=$c$)\;
    clusters.fit(combined\_z)\;
    predicted\_clusters = clusters.predict($z_\mathrm{unknown}$)\;
    distances = clusters.distance\_to\_nearest\_centroid($z_\mathrm{unknown}$)\;
    indices = $z_\mathrm{unknown}$.index\;
    sorted\_indices = sort(indices -> By cluster size followed by distance to centroid) \;
    high\_priority, low\_priority = split(sorted\_indices, if cluster contains $z_\mathrm{known}$)\;
    high\_priority.riffle()\;
    low\_priority.riffle()\;
    order = join(high\_priority, low\_priority)\;
    \Return{order}
    \caption{Hotspot Cluster Sampling Selection}
    \label{alg:cluster1}\SetAlgoLined
\end{algorithm}

Several key steps are involved within the algorithm to fit the ideology. Firstly, clusters containing samples from $x_\mathrm{known}$ are given lower priority. These are perceived as known clusters, so ideally they would not undergo further testing. Secondly, the sorting needs to be addressed. Here, the samples are sorted into their cluster groups. These groups are then ordered by size, with larger clusters favoured. Samples within each cluster are sorted by distance to the corresponding centroid.
The clusters are then split into those that contain sampled points and those that do not. Within each of these groups, a riffling procedure is used. Named after the common card-shuffling technique, this ensures that priority alternates between clusters, with the highest priority going to the point that is closest to the centroid of the most populated cluster. The two groups of clusters are then concatenated.

The three versions of clusterisation differ by the $z$ provided. In Cluster I, $z\equiv{}x$, whereas in Cluster II, $y_\mathrm{known}$ and $y_\mathrm{unknown}$ are joined to $x_\mathrm{known}$ and $x_\mathrm{unknown}$ respectively. Cluster III takes this a step further by combining ${s_g}_\mathrm{unknown}$ into $z_\mathrm{unknown}$, with 0 being the equivalent value used for $z_\mathrm{known}$.

\subsubsection{Region of Disagreement with Greed}

The first composite algorithm explored is \textbf{R}egion \textbf{o}f \textbf{D}isagreement with \textbf{G}reed (RoDG), combining the greedy sampling and uncertainty sampling algorithms. This metric is shown in (\ref{eq:rodAndGreed}).

\begin{equation}
    \label{eq:rodAndGreed}
    {\mathrm{score}_\mathrm{RoDG}=\mathrm{score}_\mathrm{Greed}^{\alpha}\mathrm{score}_\mathrm{RoD}^{1-\alpha}}
\end{equation}

Here, $\alpha$ is a parameter which needs to be found, bounded as $0<\alpha{}<1$. Note that at the limits, the algorithm reduces to the RoD algorithm (as $\alpha\to0$) and the Greed algorithm (as $\alpha\to1$).

\subsubsection{Region of Disagreement with Greed and Clustering}

\textbf{R}egion \textbf{o}f \textbf{D}isagreement with \textbf{G}reed and Clust\textbf{er}ing (RoDGer) is a second-order composite function, involving RoDG and Cluster III, as shown in (\ref{eq:holyTrinity}).

\begin{equation}
    \label{eq:holyTrinity}
    {\mathrm{score}_\mathrm{RoDGer}=\mathrm{score}_\mathrm{Cluster III}^{\alpha}\mathrm{score}_\mathrm{RoDG}^{1-\alpha}}
\end{equation}

Both of the constituent algorithms are parameterised, implying a total of three parameters. Bounds on initial estimates will be provided by the results of these algorithms taken individually.

\subsection{Parallelisation}

The large number of datasets used presents a problem: time. Each iteration sees a new fitting of a machine learning model; within the training stage, this corresponds to a minimum of 1000 models trained, a considerable number. By exploiting parallelisation, the execution time can be reduced: given an unrestricted number of processes, the training and testing framework would scale as $\mathcal{O}(c)$. This requires circumventing Python's global interpreter lock, accomplished using Pathos due to several shortcomings found with the default multiprocessing package \cite{pathos1,pathos2}.

\subsection{Minimisation}

Due to the available parallelisation, only one iteration of minimisation was performed. This consisted of generating a uniform distribution of candidate parameters, testing them upon the datasets in one parallelised step, and selecting the best-performing parameter combination.
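A minimal sketch of this single-iteration search is given below; the function names, the shape of the \lstinline{score} callable, and the candidate count are illustrative assumptions rather than the project's actual code.

\begin{lstlisting}[language=Python]
import numpy as np
from pathos.multiprocessing import ProcessingPool

def fit_parameters(score, bounds, datasets, n_candidates=32, nodes=8):
    # Draw candidate parameter sets uniformly within the given bounds.
    low, high = np.array(bounds, dtype=float).T
    candidates = np.random.uniform(low, high,
                                   size=(n_candidates, len(low)))

    def total_score(params):
        # Lower is better, matching the weighted-mean-squared-error score.
        return sum(score(params, dataset) for dataset in datasets)

    # Score every candidate on all training datasets in one parallel step.
    pool = ProcessingPool(nodes=nodes)
    scores = pool.map(total_score, list(candidates))

    # Keep the best-performing parameter combination.
    return candidates[int(np.argmin(scores))]
\end{lstlisting}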
{ "alphanum_fraction": 0.7320650928, "avg_line_length": 100.2988505747, "ext": "tex", "hexsha": "3f4c83204c4e8197836c6057eb25a5a88be970c9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "rjb255/researchProject", "max_forks_repo_path": "finalWriteup/Chapter3/chapter3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "rjb255/researchProject", "max_issues_repo_path": "finalWriteup/Chapter3/chapter3.tex", "max_line_length": 1068, "max_stars_count": null, "max_stars_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "rjb255/researchProject", "max_stars_repo_path": "finalWriteup/Chapter3/chapter3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4082, "size": 17452 }
% -*- root: Main.tex -*- \section{Essentials} \subsection*{Matrix/Vector} \begin{compactdesc} \item[Orthogonal:] (i.e. columns are orthonormal!) $\mathbf{A}^{-1} = \mathbf{A}^\top$, $\mathbf{A} \mathbf{A}^\top = \mathbf{A}^\top \mathbf{A} = \mathbf{I}$, $\operatorname{det}(\mathbf{A}) \in \{+1, -1\}$, $\operatorname{det}(\mathbf{A}^\top \mathbf{A}) = 1$ \item[Inner Product:] $\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^\top \mathbf{y} = \sum_{i=1}^{N} \mathbf{x}_i \mathbf{y}_i$. \begin{inparaitem} \item $\langle \mathbf{x} \pm \mathbf{y}, \mathbf{x} \pm \mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{x} \rangle \pm 2 \langle \mathbf{x}, \mathbf{y} \rangle + \langle \mathbf{y}, \mathbf{y} \rangle$ \item $\langle \mathbf{x}, \mathbf{y} + \mathbf{z} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle + \langle \mathbf{x}, \mathbf{z} \rangle$ \item $\langle \mathbf{x} + \mathbf{y}, \mathbf{z} \rangle = \langle \mathbf{x}, \mathbf{z} \rangle + \langle \mathbf{y}, \mathbf{z} \rangle$ \item $\langle \mathbf{x}, \mathbf{y} \rangle = \|\mathbf{x}\|_2 \cdot \|\mathbf{y}\|_2 \cdot \cos(\theta)$ \item If $\mathbf{y}$ is a unit vector then $\langle \mathbf{x}, \mathbf{y} \rangle$ projects $\mathbf{x}$ onto $\mathbf{y}$ \end{inparaitem} \item[Outer Product:] $\mathbf{u} \mathbf{v}^\top$, $(\mathbf{u} \mathbf{v}^\top)_{i, j} = \mathbf{u}_i \mathbf{v}_j$ \item[Transpose:] $(\mathbf{A}^\top)^{-1} = (\mathbf{A}^{-1})^\top$, $(\mathbf{A}\mathbf{B})^\top= \mathbf{B}^\top\mathbf{A}^\top$, $(\mathbf{A}+\mathbf{B})^\top= \mathbf{A}^\top + \mathbf{B}^\top$ \item[Cross product:] $\vec{a}\times\vec{b}=(a_2b_3-a_3b_2, a_3b_1-a_1b_3, a_1b_2-a_2b_1)^\top$ \end{compactdesc} \subsection*{Norms} \begin{inparaitem} \item $\|\mathbf{x}\|_0 = |\{i | x_i \neq 0\}|$ \item $\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^{N} \mathbf{x}_i^2} = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}$ \item $\|\mathbf{u}-\mathbf{v}\|_2 = \sqrt{(\mathbf{u}-\mathbf{v})^\top(\mathbf{u}-\mathbf{v})}$ \item $\|\mathbf{x}\|_p = \left( \sum_{i=1}^{N} |x_i|^p \right)^{\frac{1}{p}}$ \item $\mathbf{A} \in \mathbb{R}^{N \times N}, \|\mathbf{A}\|_F =\allowbreak \sqrt{trace(\mathbf{A}\mathbf{A}^\top)}$ \item $\|\mathbf{M}\|_F =\allowbreak \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n}\mathbf{m}_{i,j}^2} =\allowbreak \sqrt{\sum_{i=1}^{\min\{m, n\}} \sigma_i^2}$ \item $\|\mathbf{M}\|_1 = \sum_{i,j} | m_{i,j}|$ \item $\|\mathbf{M}\|_2 = \sigma_{\text{max}}(\mathbf{M})$\\ $\|\mathbf{M}\|_p = \max_{\mathbf{v} \neq 0} \frac{\|\mathbf{M}\mathbf{v}\|_p}{\|\mathbf{v}\|_p}$ \item $\|\mathbf{M}\|_\star = \sum_{i=1}^{\min(m, n)} \sigma_i$ \end{inparaitem} \subsection*{Derivatives} $\frac{\partial}{\partial \mathbf{x}}(\mathbf{b}^\top \mathbf{x}) = \frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^\top \mathbf{b}) = \mathbf{b}$ \quad $\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^\top \mathbf{x}) = 2\mathbf{x}$\\ $\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^\top \mathbf{A}\mathbf{x}) = (\mathbf{A}^\top + \mathbf{A})\mathbf{x}$ \quad $\frac{\partial}{\partial \mathbf{x}}(\mathbf{b}^\top \mathbf{A}\mathbf{x}) = \mathbf{A}^\top \mathbf{b}$\\ $\frac{\partial}{\partial \mathbf{X}}(\mathbf{c}^\top \mathbf{X} \mathbf{b}) = \mathbf{c}\mathbf{b}^\top$ \quad $\frac{\partial}{\partial \mathbf{X}}(\mathbf{c}^\top \mathbf{X}^\top \mathbf{b}) = \mathbf{b}\mathbf{c}^\top$\\ $\frac{\partial}{\partial \mathbf{x}}(\| \mathbf{x}-\mathbf{b} \|_2) = \frac{\mathbf{x}-\mathbf{b}}{\|\mathbf{x}-\mathbf{b}\|_2}$ \quad $\frac{\partial}{\partial \mathbf{x}}(\|\mathbf{x}\|^2_2) = \frac{\partial}{\partial 
\mathbf{x}} (\mathbf{x}^\top \mathbf{x}) = 2\mathbf{x}$\\ $\frac{\partial}{\partial \mathbf{X}}(\|\mathbf{X}\|_F^2) = 2\mathbf{X}$ \quad $\frac{\partial}{\partial \mathbf{x}}\log(x) = \frac{1}{x}$ \subsection*{Eigenvalue / -vectors} Eigenvalue Problem: $\mathbf{Ax} = \lambda \mathbf{x}$\\ 1. solve $\operatorname{det}(\mathbf{A} - \lambda \mathbf{I}) \overset{!}{=} 0$ resulting in $\{\lambda_i\}_i$\\ 2. $\forall \lambda_i$: solve $(\mathbf{A} - \lambda_i \mathbf{I}) \mathbf{x}_i = \mathbf{0}$, for $\mathbf{x}_i$. \subsection*{Eigendecomposition} $\mathbf{A} \in \mathbb{R}^{N \times N}$ then $\mathbf{A} = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^{-1}$ with $\mathbf{Q} \in \mathbb{R}^{N \times N}$.\\ if fullrank: $\mathbf{A}^{-1} = \mathbf{Q} \boldsymbol{\Lambda}^{-1} \mathbf{Q}^{-1}$ and $(\boldsymbol{\Lambda}^{-1})_{i,i} = \frac{1}{\lambda_i}$.\\ if $\mathbf{A}$ symmetric: $A = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q^\top}$ ($\mathbf{Q}$ orthogonal). \subsection*{Probability / Statistics} \begin{inparaitem} \item $P(x) := Pr[X = x] := \sum_{y \in Y} P(x, y)$ \item $P(x|y) := Pr[X = x | Y = y] := \frac{P(x,y)}{P(y)},\quad \text{if } P(y) > 0$ \item $\forall y \in Y: \sum_{x \in X} P(x|y) = 1$ (property for any fixed $y$) \item $P(x, y) = P(x|y) P(y)$ \item $P(x|y) = \frac{P(y|x)P(x)}{P(y)}$ (Bayes' rule) \item $P(x|y) = P(x) \Leftrightarrow P(y|x) = P(y)$ (iff $X$, $Y$ independent) \item $P(x_1, \ldots, x_n) = \prod_{i=1}^n P(x_i)$ (iff IID) \item Variance $Var[X]:= E[(X-\mu_x)^2]:=\sum_{x \in X}(x-\mu_x)^2P(x)$ \item expectation $\mu_x := E[X]:=\sum_{x \in X}xP(x)$ \item standard deviation $\sigma_x := \sqrt{Var[X]}$ \end{inparaitem}
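Quick Bayes example (illustrative numbers): for binary $X$ with $P(x_1)=0.01$, $P(x_2)=0.99$, $P(y|x_1)=0.9$, $P(y|x_2)=0.1$: $P(y)=\sum_{x}P(y|x)P(x)=0.108$, so $P(x_1|y)=\frac{0.9\cdot0.01}{0.108}\approx0.083$.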
{ "alphanum_fraction": 0.5945278358, "avg_line_length": 74.1267605634, "ext": "tex", "hexsha": "f6bfb4db64d59610ab3f306ea5b146e2bce78c12", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-08-03T18:47:02.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-03T18:47:02.000Z", "max_forks_repo_head_hexsha": "c9e7c4b1e6b5aecc2f8b37ee18956517a8d44bc4", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "ABBDVD/eth-cil-exam-summary", "max_forks_repo_path": "Essentials.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c9e7c4b1e6b5aecc2f8b37ee18956517a8d44bc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "ABBDVD/eth-cil-exam-summary", "max_issues_repo_path": "Essentials.tex", "max_line_length": 262, "max_stars_count": null, "max_stars_repo_head_hexsha": "c9e7c4b1e6b5aecc2f8b37ee18956517a8d44bc4", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "ABBDVD/eth-cil-exam-summary", "max_stars_repo_path": "Essentials.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2365, "size": 5263 }
\BoSSSopen{shortTutorialMatlab/tutorialMatlab} \graphicspath{{shortTutorialMatlab/tutorialMatlab.texbatch/}} \BoSSScmd{ /// In this short tutorial we want to use common Matlab commands within the \BoSSS{} framework. /// \section{Problem statement} /// For our matrix analysis we use the following random matrix: /// \begin{equation*} /// A = \begin{bmatrix} /// 1 & 2 & 3\\ /// 4 & 5 & 6\\ /// 7 & 8 & 9 /// \end{bmatrix} /// \end{equation*} /// and the symmetric matrix: /// \begin{equation*} /// S = \begin{bmatrix} /// 1 & 2 & 3\\ /// 2 & 3 & 2\\ /// 3 & 2 & 1 /// \end{bmatrix} /// \end{equation*} /// We are going to evaluate some exemplary properties of the matrices and check if the matrices are symmetric, both in the \BoSSS{} framework and in Matlab. /// \section{Solution within the \BoSSS{} framework} /// First, we have to initialize the new project: } \BoSSSexeSilent \BoSSScmd{ restart; } \BoSSSexeSilent \BoSSScmd{ using ilPSP.LinSolvers;\newline using ilPSP.Connectors.Matlab; } \BoSSSexe \BoSSScmd{ /// We want to implement the two 3x3 matrices in \BoSSSpad{}: int Dim = 3;\newline MsrMatrix A = new MsrMatrix(Dim,Dim);\newline MsrMatrix S = new MsrMatrix(Dim,Dim);\newline double[] A\_firstRow = new double[]\{1,2,3\};\newline double[] A\_secondRow = new double[]\{4,5,6\};\newline double[] A\_thirdRow = new double[]\{7,8,9\};\newline \newline double[] S\_firstRow = new double[]\{1,2,3\};\newline double[] S\_secondRow = new double[]\{2,3,2\};\newline double[] S\_thirdRow = new double[]\{3,2,1\};\newline \newline for(int i=0; i<Dim; i++)\{\newline \btab A[0, i] = A\_firstRow[i];\newline \btab S[0, i] = S\_firstRow[i];\newline \}\newline \newline for(int i=0; i<Dim; i++)\{\newline \btab A[1, i] = A\_secondRow[i];\newline \btab S[1, i] = S\_secondRow[i];\newline \}\newline \newline for(int i=0; i<Dim; i++)\{\newline \btab A[2, i] = A\_thirdRow[i];\newline \btab S[2, i] = S\_thirdRow[i];\newline \} } \BoSSSexe \BoSSScmd{ /// \paragraph{Test for symmetry in \BoSSS{}:}$~~$\\ /// To analyze if the matrices are symmetric, we need to compare the original matrix with the transpose: MsrMatrix AT = A.Transpose();\newline MsrMatrix ST = S.Transpose();\newline bool SymmTest\_A;\newline bool SymmTest\_S;\newline for(int i = 0; i<Dim; i++)\{\newline \btab for(int j = 0; j<Dim; j++)\{\newline \btab \btab if(A[i,j] == AT[i,j])\{\newline \btab \btab \btab SymmTest\_A = true;\newline \btab \btab \btab \}\newline \btab \btab else\{\newline \btab \btab \btab SymmTest\_A = false;\newline \btab \btab \btab break;\newline \btab \btab \btab \}\newline \btab \btab \}\newline \btab \}\newline for(int i = 0; i<Dim; i++)\{\newline \btab for(int j = 0; j<Dim; j++)\{\newline \btab \btab if(S[i,j] == ST[i,j])\{\newline \btab \btab \btab SymmTest\_S = true;\newline \btab \btab \btab \}\newline \btab \btab else\{\newline \btab \btab \btab SymmTest\_S = false;\newline \btab \btab \btab break;\newline \btab \btab \btab \}\newline \btab \btab \}\newline \btab \}\newline if(SymmTest\_A == true)\{\newline Console.WriteLine("Matrix A seems to be symmetric.");\newline \}\newline else\{\newline Console.WriteLine("Matrix A seems NOT to be symmetric.");\newline \}\newline if(SymmTest\_S == true)\{\newline Console.WriteLine("Matrix S seems to be symmetric.");\newline \}\newline else\{\newline Console.WriteLine("Matrix S seems NOT to be symmetric.");\newline \} } \BoSSSexe \BoSSScmd{ /// \paragraph{The interface to Matlab:}$~~$\\ /// The \code{BatchmodeConnector} initializes an interface to Matlab: Console.WriteLine("Calling 
MATLAB/Octave...");\newline BatchmodeConnector bmc = new BatchmodeConnector();\newline /// We have to transfer out matrices to Matlab: bmc.PutSparseMatrix(A, "Matrix\_A");\newline bmc.PutSparseMatrix(S, "Matrix\_S");\newline /// Now we can do calculations in Matlab within the \BoSSSpad{} using the \code{Cmd} command. It commits the Matlab commands as a string. We can calculate e.g. the rank of the matrix or the eigenvalues: bmc.Cmd("Full\_A = full(Matrix\_A)");\newline bmc.Cmd("Full\_S = full(Matrix\_S)");\newline bmc.Cmd("Rank\_A = rank(Full\_A)");\newline bmc.Cmd("Rank\_S = rank(Full\_S)");\newline bmc.Cmd("EV\_A = eig(Full\_A)");\newline bmc.Cmd("EV\_S = eig(Full\_S)");\newline bmc.Cmd("Det\_A = det(Full\_A)");\newline bmc.Cmd("Det\_S = det(Full\_S)");\newline bmc.Cmd("Trace\_A = trace(Full\_A)");\newline bmc.Cmd("Trace\_S = trace(Full\_S)");\newline /// We can transfer matrices or arrays from Matlab to \BoSSSpad{} as well, here we want to have the results: MultidimensionalArray Results = MultidimensionalArray.Create(2, 3);\newline bmc.Cmd("Results = [Rank\_A, Det\_A, Trace\_A; Rank\_S, Det\_S, Trace\_S]");\newline bmc.GetMatrix(Results, "Results");\newline /// and the eigenvalues: MultidimensionalArray EV\_A = MultidimensionalArray.Create(3, 1);\newline bmc.GetMatrix(EV\_A, "EV\_A");\newline MultidimensionalArray EV\_S = MultidimensionalArray.Create(3, 1);\newline bmc.GetMatrix(EV\_S, "EV\_S");\newline /// After finishing using Matlab we need to close the interface to Matlab: bmc.Execute(false);\newline Console.WriteLine("MATLAB/Octave closed, return to BoSSSPad"); } \BoSSSexe \BoSSScmd{ /// And here are our results back in the \BoSSSpad{}: double Rank\_A = Results[0,0];\newline double Rank\_S = Results[1,0];\newline double Det\_A = Results[0,1];\newline double Det\_S = Results[1,1];\newline double Trace\_A = Results[0,2];\newline double Trace\_S = Results[1,2];\newline Console.WriteLine("The results of matrix A are: rank: " + Rank\_A + ", trace: " + Trace\_A + ", dterminant: " + Det\_A);\newline Console.WriteLine("The results of matrix S are: rank: " + Rank\_S + ", trace: " + Trace\_S + ", determinant: " + Det\_S);\newline Console.WriteLine();\newline Console.WriteLine("The eigenvalues of matrix A are: " + EV\_A[0,0] + ", " + EV\_A[1,0] + " and " + EV\_A[2,0]);\newline Console.WriteLine("The eigenvalues of matrix S are: " + EV\_S[0,0] + ", " + EV\_S[1,0] + " and " + EV\_S[2,0]); } \BoSSSexe \BoSSScmd{ /// \paragraph{Test for symmetry within Matlab using the \code{BatchmodeConnector}:}$~~$\\ /// We do the same test for symmetry for both matrices. 
In Matlab we can use the convenient command \code{isequal}: Console.WriteLine("Calling MATLAB/Octave...");\newline BatchmodeConnector bmc = new BatchmodeConnector();\newline bmc.PutSparseMatrix(A, "Matrix\_A");\newline bmc.PutSparseMatrix(S, "Matrix\_S");\newline bmc.Cmd("Full\_A = full(Matrix\_A)");\newline bmc.Cmd("Full\_S = full(Matrix\_S)");\newline bmc.Cmd("A\_Transpose = transpose(Full\_A)");\newline bmc.Cmd("S\_Transpose = transpose(Full\_S)");\newline bmc.Cmd("SymmTest\_A = isequal(Full\_A, A\_Transpose)");\newline bmc.Cmd("SymmTest\_S = isequal(Full\_S, S\_Transpose)");\newline \newline MultidimensionalArray SymmTest\_A = MultidimensionalArray.Create(1, 1);\newline bmc.GetMatrix(SymmTest\_A, "SymmTest\_A");\newline MultidimensionalArray SymmTest\_S = MultidimensionalArray.Create(1, 1);\newline bmc.GetMatrix(SymmTest\_S, "SymmTest\_S");\newline bmc.Execute(false);\newline Console.WriteLine("MATLAB/Octave closed, return to BoSSSPad"); } \BoSSSexe \BoSSScmd{ if(SymmTest\_A[0,0] == 1)\{\newline Console.WriteLine("Matrix A seems to be symmetric.");\newline \}\newline else\{\newline Console.WriteLine("Matrix A seems NOT to be symmetric.");\newline \} \newline if(SymmTest\_S[0,0] == 1)\{\newline Console.WriteLine("Matrix S seems to be symmetric.");\newline \}\newline else\{\newline Console.WriteLine("Matrix S seems NOT to be symmetric.");\newline \} } \BoSSSexe
{ "alphanum_fraction": 0.6849740933, "avg_line_length": 39.793814433, "ext": "tex", "hexsha": "afaf81017bcc26e0d8e4cb9835c2493d938ed018", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "leyel/BoSSS", "max_forks_repo_path": "doc/handbook/shortTutorialMatlab/tutorialMatlab.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "leyel/BoSSS", "max_issues_repo_path": "doc/handbook/shortTutorialMatlab/tutorialMatlab.tex", "max_line_length": 202, "max_stars_count": 1, "max_stars_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "leyel/BoSSS", "max_stars_repo_path": "doc/handbook/shortTutorialMatlab/tutorialMatlab.tex", "max_stars_repo_stars_event_max_datetime": "2018-12-20T10:55:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-12-20T10:55:58.000Z", "num_tokens": 2552, "size": 7720 }
\chapter{Incorporation of $n$-component BEC simulations on graphics processing units} \label{ch-multicomp} \section{Abstract syntax tree / memory management} \section{Heat engine?}
{ "alphanum_fraction": 0.7967032967, "avg_line_length": 30.3333333333, "ext": "tex", "hexsha": "8382027fd63c3698b604ef59f451a388571d5970", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-02-07T08:07:50.000Z", "max_forks_repo_forks_event_min_datetime": "2019-02-07T08:07:50.000Z", "max_forks_repo_head_hexsha": "0fdbfdc9b42a967a9b3492be4ab788bdf76d4ce5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "leios/thesis", "max_forks_repo_path": "MainText/multi-comp.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0fdbfdc9b42a967a9b3492be4ab788bdf76d4ce5", "max_issues_repo_issues_event_max_datetime": "2019-02-07T08:22:21.000Z", "max_issues_repo_issues_event_min_datetime": "2019-02-07T08:22:21.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "leios/thesis", "max_issues_repo_path": "MainText/multi-comp.tex", "max_line_length": 85, "max_stars_count": 9, "max_stars_repo_head_hexsha": "0fdbfdc9b42a967a9b3492be4ab788bdf76d4ce5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "leios/thesis", "max_stars_repo_path": "MainText/multi-comp.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-03T09:46:28.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-12T02:41:25.000Z", "num_tokens": 46, "size": 182 }
% \includegraphics[scale=0.95]{competences}

\section{Calculate}
Calculate cleverly, detailing your calculations

\begin{questions}
% \question[2]  $\num{5.5} + 4 + \num{2.5} + 8$
% \fillwithdottedlines{2cm}
%
%
% \begin{solution}
%
% \end{solution}

\question[2]  $\num{3.3} + \num{7.4} + \num{2.7} + \num{2.6} + 8 $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $\num{3.2} + \num{7.5} + \num{2.8} + \num{5.5} + 15 $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $\num{5} \times 25 \times 11 \times \num{4} \times 2$
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $\num{2.5} \times 4 \times 3 \times 7$
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $\num{12.5} \times 25 \times \num{8} \times 4$
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\newpage

\question[2]  $\num{5} \times 50 \times \num{4} \times 2$
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $\num{3} \times 45 \times 20 \times \num{5} $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $(2 + 5) \times (7 + 2) $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $2 + 8 \times 7 + 2 $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\question[2]  $(2 + 8 \times 7) \times 2 $
\fillwithdottedlines{2cm}

\begin{solution}
\end{solution}

\end{questions}
{ "alphanum_fraction": 0.5965021861, "avg_line_length": 17.5934065934, "ext": "tex", "hexsha": "b90d1562b713beba04f0b6a285e28d78fcde206c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "malhys/maths_projects", "max_forks_repo_path": "college/6e/3_addition/interros/interro3/v1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "malhys/maths_projects", "max_issues_repo_path": "college/6e/3_addition/interros/interro3/v1.tex", "max_line_length": 69, "max_stars_count": null, "max_stars_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "malhys/maths_projects", "max_stars_repo_path": "college/6e/3_addition/interros/interro3/v1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 725, "size": 1601 }
% Options: [twoside, leqno, 11pt], etc.. leqno is "number equations on the left hand side" \RequirePackage{tikz} \documentclass[12pt]{thesis} \usepackage{setspace} \usepackage{amsmath} \usepackage{graphicx} \usepackage{wrapfig} \usepackage{longtable} \usepackage{subcaption} \usepackage{setspace} \usepackage{float} \usepackage{listings} \usepackage{rotating} \usepackage{cancel} \usepackage{array} \usepackage{tikz} \usepackage{multirow} \usetikzlibrary{matrix, shapes} \interfootnotelinepenalty=10000 \graphicspath{ {C:/Users/Marissa/network-similarity/} } \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}\vspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}\vspace{0pt}}m{#1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DOCUMENT PROPERTIES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \author{Marissa Graham} % Titles must be in mixed case. Style guide: https://www.grammarcheck.net/capitalization-in-titles-101/. \title{A computationally driven comparative survey of network alignment, graph matching, and network comparison in pattern recognition and systems biology} \degree{Master of Science} \university{Brigham Young University} \department{Department of Mathematics} \committeechair{Emily Evans} %% These are fields that are stored in the PDF but are not visible in the document itself. They are optional. \memberA{Benjamin Webb} \memberB{Christopher Grant} \subject{Writing a thesis using LaTeX} % Subject of your thesis, e.g. algebraic geometry \keywords{LaTeX, PDF, BYU, Math, Thesis} \month{June} \year{2018} \pdfbookmarks \makeindex %%%%%%%%%%%%%%%%%%%%%%%%% THEOREM DEFINITIONS AND CUSTOM COMMANDS %%%%%%%%%%%%%%%%%%%%%%%%%%% %% Define the theorem styles and numbering \theoremstyle{plain} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem*{remark}{Remark} %% Create shortcut commands for various fonts and common symbols \newcommand{\s}[1]{\mathcal{#1}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} %% Declare custom math operators \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\rank}{rank} %% Sets and systems \newcommand{\br}[1]{\left\langle #1 \right\rangle} \newcommand{\paren}[1]{\left(#1\right)} \newcommand{\sq}[1]{\left[#1\right]} \newcommand{\set}[1]{\left\{\: #1 \:\right\}} \newcommand{\setp}[2]{\left\{\, #1\: \middle|\: #2 \, \right\}} \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\system}[1]{\left\{ \begin{array}{rl} #1 \end{array} \right.} %% referencing commands \newcommand{\thmref}[1]{Theorem \ref{#1}} \newcommand{\corref}[1]{Corollary \ref{#1}} \newcommand{\lemref}[1]{Lemma \ref{#1}} \newcommand{\propref}[1]{Proposition \ref{#1}} \newcommand{\defref}[1]{Definition \ref{#1}} \newcommand{\exampleref}[1]{Example \ref{#1}} \newcommand{\exerref}[1]{Exercise \ref{#1}} \renewcommand{\labelenumi}{(\roman{enumi})} \begin{document} 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FRONT MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frontmatter

\maketitle

\begin{abstract}\index{abstract}

Comparative graph and network analysis plays an important role in both systems biology and pattern recognition, but existing surveys on the topic have historically ignored or underserved one or the other of these fields. We present an integrative introduction to the key objectives and methods of graph and network comparison in each, with the intent of remaining accessible to relative novices in order to mitigate the barrier to interdisciplinary idea crossover. To guide our investigation, and to quantitatively justify our assertions about what the key objectives and methods of each field are, we have constructed a citation network containing 5,793 vertices from the full reference lists of over two hundred relevant papers, which we collected by searching Google Scholar for ten different network comparison-related search terms. This dataset is freely available on GitHub. We have investigated its basic statistics and community structure, and framed our presentation around the papers found to have high importance according to five different standard centrality measures. We have also made the code framework used to create and analyze our dataset available as documented Python classes and template Mathematica notebooks, so it can be used for a similarly computationally-driven investigation of any field.

\vskip 2.25in

\noindent Keywords: % Keywords need to be as close to the bottom of the page as possible without moving to a new page.
graph matching, graph alignment, graph comparison, graph similarity, graph isomorphism, network matching, network alignment, network comparison, network similarity, comparative analysis, local alignment, global alignment, protein network, computational biology, biological networks, protein-protein interactions, computational graph theory, pattern recognition, exact graph matching, inexact graph matching, graph edit distance, graphlets, network motifs, graph matching algorithms, bipartite graph matching, node similarity, graph similarity search, attributed relational graphs
\end{abstract}

%\begin{acknowledgments}
%\end{acknowledgments}

\tableofcontents

\listoftables

\listoffigures

\mainmatter

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Introduction}\label{chapter:introduction_and_background}

\section{Motivation}

\begin{wrapfigure}{L}{0.4\textwidth}
\centering
\vspace{-20pt}
\includegraphics[width=0.38\textwidth]{foodweb.png}
\scriptsize ``SimpleFoodWeb" Mathematica sample network.
\caption{Using a network to represent relationships in a food web.}
\vspace{-20pt}
\label{fig:foodweb}
\end{wrapfigure}

Networks are first and foremost a way to model the relationships between objects, which we do by representing objects as vertices and their relationships as edges. For example, we might use a graph to represent the relationships in a food web, or between characters in a successful movie franchise. %\footnote{The term \textit{network} is often used somewhat interchangeably with the term \textit{graph}.
% While they both refer to the same mathematical object, we attempt to follow the heuristic throughout of using the term \textit{graph} to refer to a purely mathematical object and \textit{network} to refer to a real-world system.}
In some cases, using this representation simply to visualize relationships is useful, but we generally would also like to computationally exploit it in order to gain further insight about the system we are modeling.

\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\vspace{-15pt}
\includegraphics[width=0.48\textwidth,height=0.28\textwidth]{avengers.png}
\scriptsize Diagram from the mic.com article ``Here's How the Marvel Cinematic Universe's Heroes Connect--in One Surprising Map".
\caption{Using a network to represent relationships between movie characters.}
\vspace{-20pt}
\label{fig:avengers}
\end{wrapfigure}

As a first example, consider what questions we might ask about an infrastructure network such as a road network, phone lines, power grid, or the routers and fiber optic connections of the Internet itself. How do we efficiently get from here to there? How much traffic can flow through the network? What happens if an intersection is clogged, or a power plant fails?

We can also ask questions about social networks representing relationships between people. Who is the most important? To what extent do the people you consider your friends consider themselves \textit{your} friend? To what extent are your friends also friends with each other? To what extent do people form relationships with people who are like them? What do communities and strong friend groups look like, mathematically?

One common question across all of mathematics is how similar objects are to each other. With networks, we can ask this question about individual vertices in a network, but we also frequently want to ask it about networks themselves. For example, we might ask ``How similar are you to other students, based on your friendships at school?", but we can also ask ``Which proteins, protein interactions and groups of interactions are likely to have equivalent functions across species?" \cite{Sharan_2006}.

We may also want to define statistics that allow us to classify networks according to certain meaningful properties. For objects as combinatorially complex as networks, these kinds of comparisons present a difficult problem, the study of which has its roots in the 60s and 70s \cite{Conte_2004} and, as illustrated in Figure \ref{fig:year_distributions}, has gained significant attention in the past twenty years as interesting network data has become more readily available and the problem has become more computationally feasible.

\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{year_distribution.png}
\caption{Distribution of papers published by year in our citation network of network similarity-related papers. Interestingly, year cutoffs for significant percentiles seem to roughly correspond to the spread of computers, personal computers, and the Internet, respectively.}
\label{fig:year_distributions}
\end{figure}

The goal of this project is to provide a broad outline of the comparative study of networks. Without the help of prior expertise in the field, however, it is difficult to determine which works are important and which are irrelevant, and we therefore use the tools of network theory to study the network of citations between scientific papers on this topic.
This allows us to use standard network analysis techniques to determine which papers are the most important or influential, and has the additional advantage of bringing transparency to the process. That is, we can quantitatively justify our assertions using standard centrality and community detection measures, rather than relying on existing expertise in the field to give weight to our claims. The remainder of this work is structured as follows: In the remainder of this chapter, we introduce the necessary mathematical background to inform our analysis of our citation network dataset. In Chapters \ref{chapter:dataset_creation_and_analysis} and \ref{chapter:partitioning}, we introduce our citation network dataset. We analyze its basic structure, find two main fields of application within the study of network comparison, and choose a reading list from the high centrality vertices in our dataset. In Chapters \ref{chapter:pattern_recognition} and \ref{chapter:systems_biology}, we discuss our findings from this reading, and then conclude by discussing potential cross-applications between our two observed fields in Chapter 6. \section{Background} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{basic_properties_demo.png} \caption{Basic types of networks} \label{fig:basic_properties_demo} \end{figure} In this section we introduce the definitions and notation required to give context to our analysis of the citation network. Our presentation follows Newman's \textit{Networks: An Introduction} \cite{newman2010} closely, with the remainder of definitions not otherwise cited sourced from \textit{Algorithms and Models for Network Data and Link Analysis} \cite{fouss2016}. A \textbf{graph}\index{graph} $G(V,E)$ is formally defined as a finite, nonempty set $V$ of \textbf{nodes}\index{nodes} or \textbf{vertices}\index{vertices}, combined with a set $E\subset V\times V$ of \textbf{edges}\index{edges} representing relationships between pairs of vertices. Throughout this work, we denote the number of vertices in a graph by $n$ and the number of edges by $m$ where not otherwise specified. In this work we will deal with \textbf{simple graphs}\index{simple graph}, which are those that do not have more than one edge between any pair of vertices (that is, a \textbf{multiedge}\index{multiedge}\footnote{A multigraph without edge loops is not simple, but if we represented the number of edges between pairs of vertices as edge weights instead of multiedges, that would be a simple graph.}), and do not have any edges from a vertex to itself (a \textbf{self-edge}\index{self-edge} or \textbf{self-loop}). We also are concerned with whether a graph is \textbf{directed}\index{directed graph} or \textbf{undirected}\index{undirected graph}. In an undirected graph, we have an edge \textit{between} two vertices, whereas in a directed graph we have edges \textit{from} one vertex \textit{to} another vertex. Throughout this work, we will use the notation $v_i \leftrightarrow v_j$ for an undirected edge between vertices $v_i$ and $v_j$, and $v_i \rightarrow v_j$ for a directed edge. In either case, the edge $v_i\leftrightarrow v_j$ or $v_i\rightarrow v_j$ is \textbf{incident}\index{incident} to vertices $v_i$ and $v_j$, and vertices $v_i$ and $v_j$ are therefore considered \textbf{neighbors}\index{neighbor}. 
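To make these definitions concrete, the following is a minimal sketch of how a small directed graph can be represented and queried in Python. The use of the \texttt{networkx} library here is purely illustrative and is an assumption of this sketch; it is not necessarily part of the code framework described later.

\begin{lstlisting}[language=Python]
import networkx as nx

# A toy directed citation graph: an edge u -> v means "paper u cites paper v".
toy = nx.DiGraph()
toy.add_edges_from([
    ("survey", "method_A"),
    ("survey", "method_B"),
    ("method_A", "classic"),
    ("method_B", "classic"),
])

# Outdegree counts the references a paper cites; indegree counts how often it is cited.
print(toy.out_degree("survey"))          # 2
print(toy.in_degree("classic"))          # 2

# The neighbors reached along outgoing edges are the papers that "survey" cites.
print(list(toy.successors("survey")))    # ['method_A', 'method_B']
\end{lstlisting}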
A graph can also be \textbf{weighted}\index{weighted graph}, meaning each edge is assigned some real, generally positive value $w_{ij}$ representing the ``strength" of the connection between vertices $v_i$ and $v_j$. In an undirected graph, the \textbf{degree}\index{degree} of a vertex is the sum of the weights of the incident edges, and in a directed graph, the \textbf{indegree}\index{indegree} and \textbf{outdegree}\index{outdegree} are the total weight of a vertex's incoming and outgoing edges, respectively. If the graph is unweighted, this is simply the number of adjacent, incoming, or outgoing edges, as the weight of each edge is one.

When studying real-world networks, we also make a distinction between \textbf{deterministic}\index{deterministic} and \textbf{random}\index{random} networks. This distinction is roughly the same as that between a variable and a random variable. The vertices and edges in a deterministic network are ``fixed", while in a random network, they need to be inferred from data using statistical inference methods. For example, our citation network is deterministic, but a network of protein interactions for a given species is not, as it must be inferred from experimental data on a limited number of members of that species.

\subsection{Computational network properties}

Whether a network is directed or weighted or simple will inform our approach to its analysis, but these properties are generally included as metadata rather than computationally determined. The remainder of the properties we consider in network analysis are determined computationally, and their calculation involves varying degrees of algorithmic complexity.

\subsubsection{Connectivity}

\begin{wrapfigure}{L}{0.3\textwidth}
\centering
\vspace{-20pt}
\includegraphics[width=0.28\textwidth]{path_demo.png}
\caption{A path of length three on a small network.}
\vspace{-20pt}
\label{fig:path_demo}
\end{wrapfigure}

One fairly intuitive but particularly relevant network property is the question of whether a network is \textbf{connected}\index{connected graph}, as illustrated in Figure \ref{fig:connectivity_demo}; that is, whether there is a \textbf{path}\index{path} between any pair of vertices, where a path is defined to be a sequence of vertices such that consecutive vertices are connected by an edge. The \textbf{length}\index{path length} of a path is the number of edges connecting the vertices in the sequence. In the case of a directed graph, we make the distinction between weak and strong connectivity. A \textbf{weakly connected graph}\index{weakly connected graph} is one which is connected when each edge is considered as undirected, while a \textbf{strongly connected graph}\index{strongly connected graph} requires a path from every vertex to every other vertex, even while respecting edge directions.

\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{connectivity_demo.png}
\caption{Simple examples of different types of network connectivity.}
\label{fig:connectivity_demo}
\end{figure}

If an undirected network is not connected, or a directed network is not weakly connected, the network has multiple \textbf{connected components}\index{connected component} or \textbf{weakly connected components}\index{weakly connected component}. Each component is a subset of vertices such that there is a path between every pair of member vertices, and no paths between any member and a nonmember. For example, the disconnected graph in Figure \ref{fig:connectivity_demo} has two components.
The weakly connected components of a directed graph are the components in the corresponding undirected network. In a typical real-world network, there is generally a single large component or weakly connected component which contains most of the vertices, with the rest of the vertices contained in many small disconnected components. We refer to this large component as the \textbf{giant component}\index{giant component}, and its relative size gives us a measure for how ``close" a network is to being connected; the higher the percentage of vertices in the giant component, the closer the network is to being connected.

\subsubsection{Assortativity}

\begin{wrapfigure}{L}{0.6\textwidth}
\centering
\vspace{-20pt}
\includegraphics[width=0.58\textwidth]{assortativity_demo.png}
\scriptsize ``USPoliticsBooks" Mathematica sample network.
\caption{A network of U.S. politics books. Vertices categorized as liberal, conservative, or neutral are colored blue, red, and white, respectively, and edges that run between different categories are bolded.}
\label{fig:assortativity_demo}
\end{wrapfigure}

We can also consider whether a network is \textbf{assortative}\index{assortative}. That is, if the vertices in the network have some discrete-valued property, we ask whether the edges in the network are more likely to run between vertices of the same type. If all of the edges run between vertices of the same type, the assortativity of the network is 1; if all edges run between vertices of different types, the assortativity is $-1$.

For example, the network of U.S. politics books in Figure \ref{fig:assortativity_demo} is strongly assortative with an assortativity value of $0.72$. Most of the connections are between books with the same political classification, which we can visually confirm by coloring the vertices accordingly and highlighting the few edges of the graph that run between books with different classifications.

\subsubsection{Acyclic networks}

\begin{figure}[t]
\centering
\includegraphics[width=0.58\textwidth]{acyclic_demo.png}
\caption{An acyclic directed network (left) vs. one which contains a cycle (right).}
\label{fig:acyclic_demo}
\end{figure}

We also consider whether a directed network is \textbf{acyclic}\index{directed acyclic network}, meaning that it contains no \textbf{cycles}\index{cycle} or nontrivial paths from any vertex to itself. Our most important example of a directed acyclic network is a \textbf{citation network}\index{citation network}. In a citation network, we include an edge from a paper to each reference it cites. Any cycle in this network would require edges both from a newer paper to an older paper, and from an older paper to a newer paper. If we only cite papers which have already been written, which is the case for the papers in our dataset and generally true for academia as a whole, we cannot have any cycles.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Creating the citation network}\label{chapter:dataset_creation_and_analysis}

\section{Approach}

Citation network creation is not a trivial task. Although some journals and databases provide a citation network of the references in their own domain, the field we are considering is highly interdisciplinary, so investigating only the citations within a single database or journal would discard large sections of the desired network.
As a result of this, and since intellectual property restrictions preclude simply scraping an entire citation network, we constructed the dataset manually by collecting reference lists for relevant papers and then building the network accordingly. Relevant papers were found by searching Google Scholar for ``graph" or ``network" + ``alignment", ``comparison", ``similarity", ``isomorphism", or ``matching"\footnote{Future work should probably include ``graph kernel(s)" in this list.}. Topic-relevant papers were initially collected from the first five pages of results for each of these ten search terms on May 4th, 2018, after which new papers published through June 25th, 2018 were collected from a Google Scholar email alert for those same ten search terms. For each of these papers, we stored the plaintext reference list in a standardized format which could be easily split into the individual freeform citations. Any paper for which we have a reference list is referred to as a ``parent" record, and the references are referred to as the ``child" records. In total, we collected 7,790 child references from 221 parent papers.

In order to create the network, we needed to parse the freeform citations for each reference list to obtain metadata and recognize records as repeatedly cited. This is a difficult problem, as the records in the database span several hundred years\footnote{Not an exaggeration. I've got an Euler paper from 1736 in there.} and represent a wide variety of citation styles and languages, as well as significant optical character recognition and Unicode-related challenges. Instead of attempting to parse a citation into component parts, we used the CrossRef REST API to search for each record in the CrossRef database, which already has the metadata parsed for any record it includes. We marked results as duplicate if their metadata matched and both were known to be correct, or if both their metadata and original freeform citation matched exactly. We considered the results given by the CrossRef API to be correct if the title of the record could be found in the original freeform citation, and unverified otherwise.

We were able to automatically verify results for about 75\% of the parent records, and about half of their children. We were conservative about marking records as duplicate, which meant that having so many unverified records dramatically misrepresents the structure of the network. We therefore went through the unverified records by hand. For unverified parents, we manually corrected or found title, year, author, DOI, and URL information as well as reference and citation counts.

For unverified child references, we first went through and marked any correct but unverified results. We found about half of the unverified child results to be correct even though they were unable to be automatically verified due to punctuation discrepancies, misspellings, Unicode issues, or citation styles that do not include the title. Next, results were counted as correct (but noted as ``half-right") if the CrossRef API returned a review, purchase listing or similar for the correct record. For the remaining incorrect references, we manually parsed the author, title, and year from the citation, or looked them up if not included. Finally, we deleted any records which did not refer to a written work of some kind; specifically, references simply citing a website, web service, database, software package/library, programming language, or ``personal communication".
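The lookup step just described can be sketched as follows. This is a simplified illustration only: it assumes the public CrossRef REST API and the Python \texttt{requests} library, the example citation string is purely illustrative, and the verification rule is reduced to the title-in-citation check described above, whereas the actual pipeline handles many more edge cases.

\begin{lstlisting}[language=Python]
import requests

def lookup_crossref(freeform_citation):
    """Query the CrossRef REST API for the best match to a freeform citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": freeform_citation, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return None, False
    best = items[0]
    title = best["title"][0] if best.get("title") else ""
    # A result counts as automatically verified only if its title appears
    # verbatim (up to case) in the original freeform citation.
    verified = bool(title) and title.lower() in freeform_citation.lower()
    return best, verified

# Example usage with a single freeform reference string:
record, ok = lookup_crossref(
    "J. R. Ullmann. An Algorithm for Subgraph Isomorphism. Journal of the ACM, 1976.")
print(record.get("DOI") if record else None, ok)
\end{lstlisting}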
We then wrote the entire citation network to a GML file which could be loaded in Mathematica. By default, the code used to generate the GML file includes the title, year, reference and citation counts for each record as vertex properties. Including further metadata as vertex properties is not difficult, but additional string-valued properties dramatically slow Mathematica's ability to load such a large network\footnote{The network contains 5,793 vertices and 7,491 edges, and takes almost exactly two minutes to load on a 2.6GHz 6th-gen quad core Intel Core i7 CPU with 16GB of RAM running Windows 10 using Mathematica 11.2.}, and a network as large as ours cannot be loaded at all, and so we do not include any more of them than strictly necessary. The dataset itself and the code and source files used to generate it can be found at \url{https://github.com/marissa-graham/network-similarity}, as well as documentation and instructions for using it to generate a similar dataset for any collection of properly-formatted reference list files. \begin{figure}[p] \centering \includegraphics[width=0.9\textwidth]{full_citation_network.png} \caption{The full citation network of the dataset used for the project.} \label{fig:full_database} \end{figure} \section{Basic statistics} \subsubsection{Construction-related issues} Our full citation network contains a total of 7,491 references between 5,793 papers. This results in a fairly low mean degree, or average number of references per paper, which is an inherent limitation in the construction of almost any citation network. We can include all the references for a small group of papers, but including all the references for \textit{their} references and so on is an exponentially more expensive task, and we therefore generally only include children for a small fraction of the total vertices. Since the edges in our network are hand-constructed using individual reference lists, it includes an abnormally small fraction of vertices with children. A typical citation network, such as the SciMet and Zewail datasets which are displayed in Appendix Figures \ref{fig:sciMet} and \ref{fig:zewail}, respectively, and whose analysis is included in Table \ref{tab:network_table}, is constructed by scraping a single database. This results in a much higher fraction of children whose references are included, but any references which are not in the database in question are missed, so the mean degree is still quite low. 
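The summary statistics reported in the table below can be computed directly once the citation network has been loaded. The following is a minimal sketch assuming the \texttt{networkx} library and a GML file containing a directed graph; the file name is illustrative, and the thesis code framework is not assumed to be organized this way.

\begin{lstlisting}[language=Python]
import networkx as nx

def summary_statistics(G):
    """Summary statistics for a directed citation network G (a networkx DiGraph)."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    components = list(nx.weakly_connected_components(G))
    giant = max(components, key=len)
    return {
        "vertices": n,
        "edges": m,
        "mean degree": m / n,
        "fraction with children": sum(1 for v in G if G.out_degree(v) > 0) / n,
        # Diameter of the giant component, ignoring edge direction.
        "diameter": nx.diameter(G.subgraph(giant).to_undirected()),
        "connected components": len(components),
        "fraction in giant component": len(giant) / n,
    }

# Illustrative file name; the file is assumed to store a directed graph.
G = nx.read_gml("citation_network.gml")
print(summary_statistics(G))
\end{lstlisting}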
\begin{table}[ht] \centering \begin{tabular}{|p{0.31\linewidth}||R{0.07\linewidth}|R{0.07\linewidth}|R{0.075\linewidth}|R{0.07\linewidth}|R{0.07\linewidth}|R{0.07\linewidth}|} \hline & $G$ & $G_p$ & sciMet & zewail & $R$ & $R_d$ \\ \hline\hline% & $R_{d,p}$ \\ \hline\hline Vertices & 5793 & 1062 & 1092 & 3145 & 5793 & 5793 \\ \hline % & 1077 \\ \hline % Edges & 7491 & 2775 & 1308 & 3743 & 7491 & 7491\\ \hline % & 2775 \\ \hline Mean degree & 1.29 & 2.61 & 1.20 & 1.19 & 1.29 & 1.29 \\ \hline %& 2.58 \\ \hline Fraction with children & 0.038 & 0.193 & 0.523 & 0.599 & 0.733 & 0.038 \\ \hline %& 0.202\\ \hline Diameter & 10 & 9 & 14 & 22 & 21 & 9\\ \hline % & 7 \\ \hline Connected components & 16 & 1 & 114 & 281 & 504 & 3 \\ \hline %& 3\\ \hline Fraction in giant component & 0.960 & 1.000 & 0.784 & 0.797 & 0.900 & 0.999 \\ \hline %& 1.000 \\ \hline %Assortativity by indegree & 0.113 & 0.016 & 0.055 & 0.158 & 0.007 & -0.008\\ \hline % & -0.007 \\ \hline %Assortativity by outdegree & -0.014 & -0.014 & -0.025 & 0.056 & -0.018 & -0.007 \\ \hline %& -0.013 \\ \hline \end{tabular} \caption{Comparing statistics for our dataset to other networks.} \label{tab:network_table} \end{table} \subsubsection{The pruned network $G_p$} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{subnetwork.png} \caption{The pruned citation network.} \label{fig:pruned_network} \end{figure} We also consider the \textit{pruned network}, shown in Figure \ref{fig:pruned_network}, which is defined to be the giant component of the subnetwork of vertices with positive outdegree or indegree greater than one. That is, we discard any papers which are not part of the giant component or are not cited by multiple parents. We do so because the main purpose of our dataset is to determine which papers are important in the field of network similarity, but the vast majority of references in the database are only cited by one paper and frequently have very little relevance to network similarity itself. To reduce the influence of off-topic papers on our results, we restrict our network to our parent vertices, which we have hand-curated to be relevant to network similarity, and all vertices which are cited by more than one parent paper. This shrinks the number of vertices by a factor of almost six, correspondingly raises the fraction of vertices with children, and approximately doubles the mean degree. \subsubsection{Comparison to other networks} In Table \ref{tab:network_table}, we calculate the mean degree, fraction of vertices with children, diameter, number of connected components, and fraction of vertices in the giant component for six different networks: our full network $G$, its pruned version $G_p$, two datasets from the Garfield citation network collection, a uniformly generated directed random graph $R$, and a random graph $R_d$ with the same degree sequences as $G$. The random network $R$ is generated from a uniform distribution. The other, $R_d$, is constructed\footnote{We note that the construction method of $R_d$ does not exclude multiedges. This is not really a problem here, but allowing multiedges does cause issues with our analysis of graphlet degree distributions of PPI networks as shown in Figure \ref{fig:GDD_demo}. In that case, we do exclude multiedges in our construction of the random networks matching the degree distribution.} to match the degree sequence of $G$. 
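The two random comparison networks can be generated along the following lines, again using \texttt{networkx} purely for illustration and continuing from the graph \texttt{G} loaded in the previous sketch; the construction actually used for the table may differ in details such as random seeding.

\begin{lstlisting}[language=Python]
import networkx as nx

n, m = G.number_of_nodes(), G.number_of_edges()

# R: a uniformly random directed graph with the same number of vertices and edges as G.
R = nx.gnm_random_graph(n, m, directed=True)

# R_d: a random directed graph matching the in- and out-degree sequences of G.
# The configuration model may produce multiedges and self-loops; as noted in the
# footnote above, multiedges are tolerated for this particular comparison.
in_seq = [d for _, d in G.in_degree()]
out_seq = [d for _, d in G.out_degree()]
R_d = nx.directed_configuration_model(in_seq, out_seq)
\end{lstlisting}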
\subsubsection{Connectivity} Our full network displays a high level of connectivity; 96\% of vertices are contained in the giant component, and it has only 16 connected components, compared to 90\% containment in the giant component and 504 connected components for a randomly generated network of the same size. The diameter is also low compared to a random network and to our choices of real-world network. Since our network consists of papers collected on a specific topic, which have an outside reason to cite the same papers, this high level of connectivity is not surprising. The construction of the network also explains the high connectivity compared to the real-world datasets, which only have about 80\% of their vertices in their giant components; if the generation of the citation network is limited to a single database, as the sciMet and zewail datasets are, cocitation connections in other databases will be lost. This makes it more difficult for the giant component to fill the network, and results in longer paths between connected vertices. The only dataset tested with better connectivity than ours is the random network $R_d$, which has almost complete containment--$99.9\%$--in the giant component. This is not surprising. Approximately speaking, in order for a parent vertex to be disconnected from the giant component, its children must all have exactly one parent, and no children. In a real-world network, this is easier to find, since a paper's references are not randomly selected. A single work from an author or topic which is relatively disconnected to the rest of the network similarity academic community can generate a significant number of references which are not in the giant component. By contrast, since about 82\% of the vertices in $G$ have exactly one parent and no children, the probability of a parent vertex with outdegree $n$ being disconnected from the giant component is $(0.82)^n$, or $(0.82)^{26.2} \approx 0.5\%$ for the mean outdegree of the parents. This probability shrinks further as the number of disconnected parent vertices grows, as fewer single-parent vertices become available compared to the rest. \section{Centrality}\label{section:centrality} Our main goal in creating this citation network was to determine which papers are most important or \textbf{central}\index{centrality}. The question of which vertices are the most central to a network is widely researched in network theory, and there correspondingly exists a wide variety of centrality measures used to quantify different ideas about importance. For this project, we chose five centrality measures that we expect to coincide with an intuitive definition of which papers in the network are the most relevant. Our definitions of these come from \cite{newman2010}. \subsubsection{Indegree} This measures the number of times each paper was cited by the parent papers in our network. Since the parent papers approximately represent everything determined to be most relevant by google scholar in a search for network comparison-related search terms\footnote{This is somewhat skewed by our inclusion of newly published papers collected from an email alert after the initial search, which may not be the most relevant overall.}, the top values for indegree should give us a rough idea of which papers are formative for the field and therefore more frequently cited by the parent papers. 
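For instance, the most frequently cited papers can be read off directly from the indegree sequence of the pruned network. In the sketch below, \texttt{G\_p} is assumed to be the pruned network loaded as a \texttt{networkx} directed graph carrying the \texttt{title} vertex attribute described in the previous chapter.

\begin{lstlisting}[language=Python]
# The ten papers cited most often by the parent papers in the pruned network.
top_cited = sorted(G_p.in_degree(), key=lambda pair: pair[1], reverse=True)[:10]
for paper, k_in in top_cited:
    print(k_in, G_p.nodes[paper].get("title", paper))
\end{lstlisting}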
\subsubsection{Outdegree} Since the pruned network consists of the parent papers as well as the papers that appear in more than one parent's reference list, this measures the number of a parent's references which have been cited by other parents in the network. The top values for outdegree should therefore give us an idea of which papers survey the most well-known topics in the field. \subsubsection{Betweenness} Betweenness centrality measures the extent to which a vertex lies on paths between other vertices. Intuitively, this should correspond to papers which make uncommon connections between other works; either those that are applicable to a wide variety of fields and applications, or those which build on disparate ideas in an original way. Unsurprisingly, we see some overlap between the papers with the highest outdegree and those with the highest betweenness. \subsubsection{Closeness} Recall that a path between two vertices is a sequence of vertices such that consecutive vertices are connected by an edge; a \textbf{geodesic path}\index{geodesic path} is the shortest possible path between any two vertices, and its length is the geodesic distance. We define \textbf{closeness}\index{closeness centrality} to be the inverse of the average geodesic distance between a vertex and all other vertices in the network. Closeness centrality therefore takes on the highest values for a vertex which has a short average distance from all other vertices. For example, in Figure \ref{fig:closeness_demo} the highlighted vertex on the left has closeness centrality 1, since it has a distance of 1 from all other vertices. On the right, the average geodesic distance from the highlighted vertex to the other six is $\frac{1}{6}(1+1+2+2+2+2)=\frac{5}{3}$, so the closeness centrality is 0.6. \begin{wrapfigure}{r}{0.6\textwidth} \centering \vspace{-15pt} \includegraphics[width=0.55\textwidth]{closeness_demo.png} \vspace{-10pt} \caption{Two graphs with vertices labeled by their geodesic distance from the highlighted vertex.} \vspace{-15pt} \label{fig:closeness_demo} \end{wrapfigure} A paper in our citation network will have higher closeness centrality if it only takes a few steps through a paper's reference list or citations to another paper's reference list or citations to get to every other paper in the network. \subsubsection{HITS} For a directed network such as the citation networks used here, we may want to separately consider the notion that a vertex is important if it points to other important vertices, and the notion that a vertex is important if it is pointed to by other important vertices. This is the goal of the HITS or hyperlink-induced topic search algorithm. We define two different types of centrality for each vertex. We have \textbf{authority centrality}\index{authority centrality}, which measures whether a specific vertex is being pointed to by vertices with high \textbf{hub centrality}\index{hub centrality}, which in turn measures whether a specific vertex points to vertices with high authority centrality. By defining the hub and authority centralities of a vertex to be proportional to the sum of the authority and hub centralities, respectively, of its neighbors, this definition reduces to a pair of eigenvalue equations which can be easily solved numerically. 
That is, if $x_i$ is the authority centrality of the $i$-th vertex in a network, $y_i$ is the hub centrality of the $j$-th vertex, $A_{ij}$ is the weight of the edge from $j$ to $i$ if it exists, and 0 otherwise, and $\alpha, \beta$ are proportionality constants, we have \[x_i = \alpha \sum_j A_{ij}y_j\text{ and } y_i = \beta \sum_j A_{ji}x_j.\] \subsection{High centrality vertices}\label{section:high_centrality_vertices} We can observe that the pruned network, shown in Figure \ref{fig:pruned_network}, seems to contain two clusters of more tightly connected vertices\footnote{We discuss the motivation behind and significance of this partition in detail in Chapter \ref{chapter:partitioning}.}. We would like to collect the important papers for both the network as a whole and for the distinct communities it contains, so we partition the dataset in half using a modularity maximizing partition; that is, we choose two groups of vertices such that the fraction of edges running between vertices in different groups is minimized. For both the pruned network and the two halves of our partition, we collect the top ten papers according to these five different centrality measures and summarize the results in the following tables. Since the numerical values for indegree and outdegree have intuitive meaning, we report the value itself. However, the values for betweenness, closeness, and the two HITS centralities are unintuitive, context-free real numbers, so we report the rank of each paper with respect to each measure rather than the actual value. We also calculate betweenness and closeness for the undirected version of the network, to allow those rankings to be based on citing relationships in either direction. The papers in each table are sorted from maximum to minimum with respect to \[ f(p) = \frac{k^{in}_p}{k^{in}_{max}} + \frac{k^{out}_p}{ k^{out}_{max}} + \sum_{i=1}^4 \frac{1}{r_i(p)}, \] where $k^{in}_p$ and $k^{out}_p$ are the indegree and outdegree of a paper $p$, the maximums of which are taken with respect to the pruned network or partition half in question, and $r_i(p)$ is the rank of a paper $p$ according to the $i$-th of our four centrality metrics, which is defined to be infinity if a paper is not in the top ten for that metric. In all three tables, a $^*$ indicates that a paper's indegree or outdegree is ranked top ten within the relevant network. 
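A sketch of this ranking procedure is given below, again using \texttt{networkx} for the individual centrality computations and the variable \texttt{G\_p} for the pruned network. Mirroring the convention above, betweenness and closeness are computed on the undirected version of the network, HITS on the directed network, and ranks outside the top ten contribute nothing to $f(p)$; this is an illustrative reimplementation, not the thesis code itself.

\begin{lstlisting}[language=Python]
import networkx as nx

def ranking(G, top=10):
    """Order the papers in G by the composite score f(p) defined above."""
    UG = G.to_undirected()
    k_in, k_out = dict(G.in_degree()), dict(G.out_degree())
    hubs, authorities = nx.hits(G)
    metrics = [nx.betweenness_centrality(UG), nx.closeness_centrality(UG),
               authorities, hubs]
    # Precompute the top-`top` papers for each of the four rank-based metrics.
    tops = [sorted(m, key=m.get, reverse=True)[:top] for m in metrics]

    def rank_term(order, p):
        # 1/rank if p is ranked within the top `top`, and zero (1/infinity) otherwise.
        return 1.0 / (order.index(p) + 1) if p in order else 0.0

    def f(p):
        return (k_in[p] / max(k_in.values()) + k_out[p] / max(k_out.values())
                + sum(rank_term(order, p) for order in tops))

    return sorted(G, key=f, reverse=True)

top_papers = ranking(G_p)[:30]
\end{lstlisting}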
\begin{table}[H] \centering \vspace{-.5cm} {\setstretch{1}\fontsize{10}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Thirty Years of Graph Matching in Pattern Recognition \cite{Conte_2004} & 20* & 109* & 1 & 2 & & 1 \\ \hline $\dagger$Fifty years of graph matching, network alignment and network comparison \cite{Emmert_Streib_2016} & 6 & 71* & 2 & 1 & & 3 \\ \hline $\dagger$Networks for systems biology: conceptual connection of data and function \cite{Emmert_Streib_2011} & 2 & 102* & 3 & 3 & & 2 \\ \hline $^\Diamond$An Algorithm for Subgraph Isomorphism \cite{Ullmann_1976} & 20* & 4 & 7 & 4 & 1 & \\ \hline $\dagger$Modeling cellular machinery through biological network comparison \cite{Sharan_2006} & 9 & 41* & 8 & & & \\ \hline $^\Diamond$Computers and Intractability: A Guide to the Theory of NP-Completeness \cite{Hartmanis_1982} & 16* & 0 & 4 & 5 & & \\ \hline $^\Diamond$The graph matching problem \cite{Livi_2012} & 2 & 55* & 5 & 6 & & 7 \\ \hline $\dagger$A new graph-based method for pairwise global network alignment \cite{Klau_2009} & 9 & 13 & & 8 & & \\ \hline $\dagger$On Graph Kernels: Hardness Results and Efficient Alternatives \cite{Gartner_2003} & 11 & 10 & 6 & & & \\ \hline $^\Diamond$Error correcting graph matching: on the influence of the underlying cost function \cite{Bunke_1999} & 10 & 16 & & 7 & 7 & 8 \\ \hline $^\Diamond$A graduated assignment algorithm for graph matching \cite{Gold_1996} & 18* & 0 & & & 5 & \\ \hline $^\Diamond$The Hungarian method for the assignment problem \cite{Kuhn_1955} & 17* & 0 & & & & \\ \hline $^\Diamond$An eigendecomposition approach to weighted graph matching problems \cite{Umeyama_1988} & 15* & 5 & & & 6 & \\ \hline $^\Diamond$Recent developments in graph matching \cite{Bunke_2000} & 1 & 51* & & & & 4 \\ \hline $\dagger$MAGNA: Maximizing Accuracy in Global Network Alignment \cite{Saraph_2014} & 5 & 35* & & & & \\ \hline $^\Diamond$A distance measure between attributed relational graphs for pattern recognition \cite{Sanfeliu_1983} & 14* & 0 & & & 3 & \\ \hline $\dagger$Pairwise Global Alignment of Protein Interaction Networks by Matching Neighborhood Topology \cite{Singh_2007} & 13* & 0 & & & & \\ \hline $\dagger$Topological network alignment uncovers biological function and phylogeny \cite{Bunke_1998} & 12* & 0 & & & & \\ \hline A graph distance metric based on the maximal common subgraph \cite{Kuchaiev_2010} & 10 & 0 & & 10 & 4 & \\ \hline $^\Diamond$Efficient Graph Matching Algorithms \cite{Messmer_1995} & 0 & 43* & & & & 5 \\ \hline Local graph alignment and motif search in biological networks \cite{Berg_2004} & 8 & 10 & 10 & & & \\ \hline $\dagger$Global alignment of multiple protein interaction networks with application to functional orthology detection \cite{Singh_2008} & 11* & 0 & & & & \\ \hline On a relation between graph edit distance and maximum common subgraph \cite{Bunke_1997} & 11 & 0 & & & 2 & \\ \hline $^\Diamond$Graph matching applications in pattern recognition and image processing \cite{Conte_2003} & 0 & 40* & & & & 6 \\ \hline $^\Diamond$Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \cite{Park_2014} & 0 & 41* & 9 & & & \\ \hline $^\Diamond$Structural matching in computer vision using probabilistic relaxation 
\cite{Christmas_1995} & 9 & 0 & & & 10 & \\ \hline $^\Diamond$A new algorithm for subgraph optimal isomorphism \cite{El_Sonbaty_1998} & 2 & 21 & & & & 9 \\ \hline BIG-ALIGN: Fast Bipartite Graph Alignment \cite{Koutra_2013} & 2 & 21 & & 9 & & \\ \hline $^\Diamond$A graph distance measure for image analysis \cite{Eshera_1984} & 8 & 0 & & & 8 & \\ \hline A new algorithm for error-tolerant subgraph isomorphism detection \cite{Messmer_1998} & 8 & 0 & & & 9 & \\ \hline $^\Diamond$A (sub)graph isomorphism algorithm for matching large graphs \cite{Cordella_2004} & 3 & 16 & & & & 10 \\ \hline \end{tabular} \vspace{-.03cm} $\dagger$Also top for Group 1 (biology dominated); $^\Diamond$Also top for Group 2 (computer science dominated) } \vspace{-.25cm} \caption{Highest centrality papers for the entire pruned network.} \label{tab:toppapers_all} \end{table} \begin{table}[H] {\setstretch{1}\fontsize{10}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Networks for systems biology: conceptual connection of data and function \cite{Emmert_Streib_2011} & 2 & 90* & 1 & 2 & & 1 \\ \hline $^\Diamond$Fifty years of graph matching, network alignment and network comparison \cite{Emmert_Streib_2016} & 4 & 56* & 2 & 1 & & 2 \\ \hline $^\Diamond$Modeling cellular machinery through biological network comparison \cite{Sharan_2006} & 9 & 40* & 4 & 3 & 10 & 9 \\ \hline $^\Diamond$MAGNA: Maximizing Accuracy in Global Network Alignment \cite{Saraph_2014} & 5 & 35* & 7 & 6 & & 3 \\ \hline $^\Diamond$On Graph Kernels: Hardness Results and Efficient Alternatives \cite{Gartner_2003} & 10* & 9 & 3 & 8 & & \\ \hline Biological network comparison using graphlet degree distribution \cite{Przulj_2007} & 11* & 0 & & 7 & 4 & 7 \\ \hline $^\Diamond$A new graph-based method for pairwise global network alignment \cite{Klau_2009} & 8 & 12 & 9 & 4 & 6 & \\ \hline Network Motifs: Simple Building Blocks of Complex Networks \cite{Milo_2002} & 11* & 0 & & 9 & 8 & \\ \hline $^\Diamond$Pairwise Global Alignment of Protein Interaction Networks by Matching Neighborhood Topology \cite{Singh_2007} & 12* & 0 & & & 3 & \\ \hline $^\Diamond$Topological network alignment uncovers biological function and phylogeny \cite{Kuchaiev_2010} & 12* & 0 & & & 2 & \\ \hline NETAL: a new graph-based method for global alignment of protein-protein interaction networks \cite{Neyshabur_2013} & 6 & 26* & & & & 5 \\ \hline Collective dynamics of ``small-world" networks \cite{Watts_1998} & 10* & 0 & & 10 & 5 & \\ \hline Global network alignment using multiscale spectral signatures \cite{Patro_2012} & 11* & 0 & & & 9 & \\ \hline $^\Diamond$Global alignment of multiple protein interaction networks with application to functional orthology detection \cite{Singh_2008} & 10* & 0 & & & & \\ \hline Conserved patterns of protein interaction in multiple species \cite{Sharan_2005} & 10* & 0 & & & 7 & \\ \hline Pairwise Alignment of Protein Interaction Networks \cite{Koyuturk_2006} & 10* & 0 & & & 1 & \\ \hline Alignment-free protein interaction network comparison \cite{Ali_2014} & 2 & 22 & 6 & 5 & & \\ \hline Graphlet-based measures are suitable for biological network comparison \cite{Hayes_2013} & 1 & 30* & & & & 8 \\ \hline Survey on the Graph Alignment Problem and a Benchmark of Suitable Algorithms 
\cite{Dopmann_2013} & 0 & 26 & & & & 4 \\ \hline Predicting Graph Categories from Structural Properties \cite{Canning_2018} & 0 & 30* & 5 & & & \\ \hline Fast parallel algorithms for graph similarity and matching \cite{Kollias_2014} & 1 & 23 & & & & 6 \\ \hline Complex network measures of brain connectivity: Uses and interpretations \cite{Rubinov_2010} & 0 & 28* & 8 & & & \\ \hline Graph-based methods for analysing networks in cell biology \cite{Aittokallio_2006} & 0 & 30* & & & & 10 \\ \hline Demadroid: Object Reference Graph-Based Malware Detection in Android \cite{Wang_2018} & 0 & 25 & 10 & & & \\ \hline Early Estimation Model for 3D-Discrete Indian Sign Language Recognition Using Graph Matching \cite{Kumar_2018a} & 0 & 29* & & & & \\ \hline Indian sign language recognition using graph matching on 3D motion captured signs \cite{Kumar_2018b} & 0 & 29* & & & & \\ \hline \end{tabular} \vspace{-.03cm} $^\Diamond$Also a top-centrality paper for the entire network} \caption{Highest centrality papers for Group 1 (biology dominated) in our partition of the pruned network.} \label{tab:toppapers_bio} \end{table} \begin{table}[H] {\setstretch{1}\fontsize{10}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Thirty Years of Graph Matching in Pattern Recognition \cite{Conte_2004} & 17* & 107* & 1 & 1 & & 1 \\ \hline $^\Diamond$An Algorithm for Subgraph Isomorphism \cite{Ullmann_1976} & 15* & 2 & 10 & 5 & 2 & \\ \hline $^\Diamond$A graduated assignment algorithm for graph matching \cite{Gold_1996} & 18* & 0 & 7 & 4 & 3 & \\ \hline $^\Diamond$An eigendecomposition approach to weighted graph matching problems \cite{Umeyama_1988} & 15* & 5 & & 2 & 4 & \\ \hline $^\Diamond$The graph matching problem \cite{Livi_2012} & 2 & 36* & 3 & 3 & & 8 \\ \hline $^\Diamond$A distance measure between attributed relational graphs for pattern recognition \cite{Sanfeliu_1983} & 13* & 0 & & 7 & 1 & \\ \hline $^\Diamond$Recent developments in graph matching \cite{Bunke_2000} & 0 & 50* & 8 & & & 2 \\ \hline $^\Diamond$Error correcting graph matching: on the influence of the underlying cost function \cite{Bunke_1999} & 9* & 16 & & 8 & & 6 \\ \hline $^\Diamond$Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \cite{Park_2014} & 0 & 41* & 2 & & & \\ \hline $^\Diamond$Efficient Graph Matching Algorithms \cite{Messmer_1995} & 0 & 42* & 5 & & & 4 \\ \hline $^\Diamond$Computers and Intractability: A Guide to the Theory of NP-Completeness \cite{Hartmanis_1982} & 11* & 0 & 6 & & & \\ \hline $^\Diamond$The Hungarian method for the assignment problem \cite{Kuhn_1955} & 14* & 0 & & & & \\ \hline $^\Diamond$Graph matching applications in pattern recognition and image processing \cite{Conte_2003} & 0 & 40* & & & & 3 \\ \hline Efficient Graph Similarity Search Over Large Graph Databases \cite{Zheng_2015} & 0 & 28* & 4 & 6 & & \\ \hline A linear programming approach for the weighted graph matching problem \cite{Almohamad_1993} & 8 & 8 & & 9 & 9 & \\ \hline $^\Diamond$Structural matching in computer vision using probabilistic relaxation \cite{Christmas_1995} & 9* & 0 & & & 5 & \\ \hline $^\Diamond$A graph distance measure for image analysis \cite{Eshera_1984} & 8 & 0 & & & 6 & \\ \hline Inexact graph matching for structural pattern 
recognition \cite{Bunke_1983} & 10* & 0 & & & & \\ \hline
$^\Diamond$A new algorithm for subgraph optimal isomorphism \cite{El_Sonbaty_1998} & 2 & 21 & & & & 5 \\ \hline
Approximate graph edit distance computation by means of bipartite graph matching \cite{Riesen_2009} & 9 & 0 & & & & \\ \hline
Linear time algorithm for isomorphism of planar graphs (Preliminary Report) \cite{Hopcroft_1974} & 9 & 0 & & & & \\ \hline
Structural Descriptions and Inexact Matching \cite{Shapiro_1981} & 9 & 0 & & & 7 & \\ \hline
$^\Diamond$A (sub)graph isomorphism algorithm for matching large graphs \cite{Cordella_2004} & 3 & 16 & & & & 7 \\ \hline
A Probabilistic Approach to Spectral Graph Matching \cite{Egozi_2013} & 0 & 25* & 9 & 10 & & \\ \hline
Hierarchical attributed graph representation and recognition of handwritten Chinese characters \cite{Lu_1991} & 6 & 0 & & & 8 & \\ \hline
Exact and approximate graph matching using random walks \cite{Gori_2005} & 1 & 14 & & & & 9 \\ \hline
A shape analysis model with applications to a character recognition system \cite{Rocha_1994} & 5 & 0 & & & 10 & \\ \hline
Fast computation of Bipartite graph matching \cite{Serratosa_2014} & 1 & 23* & & & & \\ \hline
Graph Matching Based on Node Signatures \cite{Jouili_2009} & 0 & 17 & & & & 10 \\ \hline
Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching \cite{Das_2018} & 0 & 22* & & & & \\ \hline
\end{tabular}
\vspace{-.03cm}
$^\Diamond$Also a top-centrality paper for the entire network}
\caption{Highest centrality papers for Group 2 (computer science dominated) in our partition of the pruned network.}
\label{tab:toppapers_CS}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Partitioning the citation network}\label{chapter:partitioning}
\section{The tagging and partitioning process}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{subnetwork_partition.png}
\caption{The two halves of our partition of the pruned network.}
\label{fig:partitioned_network}
\end{figure}
In the process of collecting relevant papers for our citation network, we noticed that network similarity applications seem to be almost exclusively found in the fields of biology and computer science, and we would therefore like to investigate the structure of the network with respect to these two categories.
%\footnote{While the ``computer science" papers in our reading list all fall into the category of ``pattern recognition", this is not necessarily the case for their references, and we therefore use the broader label in our subject tagging process. Similarly, we use ``biology" instead of ``systems biology" when describing our category labels.}
Unfortunately, the metadata for the papers in our network does not include the subject information we would need to simply partition the network based on these categories; while the CrossRef API does sometimes include a ``subject" category, it is present in less than 1\% of items, and with almost six thousand references in our database, it is impractical to categorize their subjects by hand. Instead, we partition the network into two groups of equal size. If we choose our partition in a way that minimizes the fraction of edges running between its two groups, it will preserve and separate the two clusters of more densely connected vertices first observed in Figure \ref{fig:pruned_network}, as we can see in Figure \ref{fig:partitioned_network}.
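For readers who would like to experiment with a similar bisection, a minimal sketch in Python using networkx's Kernighan--Lin heuristic is shown below. This is an illustrative choice of tooling, not necessarily the procedure used to produce Figure \ref{fig:partitioned_network}, and the file name is hypothetical.
\begin{verbatim}
# Minimal sketch of an equal-size, minimum-cut-style bisection of the pruned
# network, using networkx's Kernighan--Lin heuristic. The file name and the
# choice of networkx are illustrative assumptions.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G_p = nx.read_graphml("pruned_network.graphml")   # hypothetical file containing G_p
undirected = G_p.to_undirected()                  # the heuristic requires an undirected graph

group1, group2 = kernighan_lin_bisection(undirected, seed=0)

# Fraction of edges running between the two groups (the quantity being minimized).
cut_edges = sum(1 for u, v in undirected.edges() if (u in group1) != (v in group1))
print(len(group1), len(group2), cut_edges / undirected.number_of_edges())
\end{verbatim}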
We then set out to determine whether these two halves of the network correspond to the two fields of study we noticed while constructing the dataset. To do this, we need some way to roughly tag papers by their subject. We chose to do so according to their journal(s) of publication, since this information is available for over 97\% of the papers in our network\footnote{Journal information was not manually corrected for the papers for which the CrossRef API returned an incorrect result. However, in the vast majority of those cases, the result returned was very similar to the correct one--i.e., written by most of the same authors, or an older paper on the same subject. The subject information should therefore still be accurate enough for our purposes.}. There were a total of 2,285 unique journal names, which we tagged as ``Computer Science", ``Biology", and ``Mathematics" according to the keywords listed in Table \ref{tab:tagging_keywords}. This strategy allowed us to quickly tag the majority of the papers as at least one of these three subjects. Our journal-based tagging is a drastic improvement over the subject information provided by CrossRef, giving us information for about 67\% of the total papers, and 53\% of those in the pruned network. We found this to be sufficient to confirm our initial suspicions that the two communities observed in the network do in fact correspond to the fields of computer science (primarily pattern recognition) and systems biology. In the remainder of this chapter, we show how these categories are reflected in the structure of the dataset, both overall and with respect to our partition, and then discuss the advantages that our dataset and analysis provide in our reading and writing process.
\begin{figure}[p]
\centering
\begin{minipage}[c]{0.23\textwidth}
\includegraphics[width=\textwidth]{color_key.png}
\end{minipage}
\hfill
\begin{minipage}[c]{0.7\textwidth}
a)\includegraphics[width=\textwidth]{color_coded_full.png}
\end{minipage}
\begin{minipage}[c]{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{color_coded_left.png} b)
\vspace{-16pt}
\end{minipage}
\hfill
\begin{minipage}[c]{0.49\textwidth}
c)\includegraphics[width=0.95\textwidth]{color_coded_right.png}
\end{minipage}
\caption{a) The pruned network $G_p$, and b)-c) two halves of its partition $G_p^{(1)}$ and $G_p^{(2)}$, with vertices colored according to their subject label.}
\label{fig:subject_color_coded}
\end{figure}
\section{Results}
Our first step is to color code the vertices in the pruned network according to their subject, as shown in Figure \ref{fig:subject_color_coded}, so we can get a visual sense of how our tagged subjects are spread through the network. While the main two categories we are interested in are computer science and biology, we have also tagged the mathematics papers, so that we have a third category of similar size and generality to the other two. This serves as a control group, and allows us to consider and reject the hypothesis that there are three main subnetworks of similar papers instead of two. Intuitively, we can see that the red and blue vertices are mostly clustered together on the two halves of the pruned network, confirming our suspicion that the two dense clusters of vertices we see in Figure \ref{fig:pruned_network} correspond to the two categories we observed while constructing the dataset.
We also notice that the cluster of blue vertices only fills about half of its side of the partition, indicating that the biology category is significantly smaller than the computer science category. The yellow vertices seem to be spread about evenly across the two halves, meaning that we do not have three distinct, meaningful subnetworks of papers. There also appear to be significantly more untagged vertices on the biology side of the partition. It is likely that this is not only because the computer science category is inherently larger, but also because its papers are more likely to be tagged as such. A full half of the papers in the computer science category are published in an ACM, IEEE, or SIAM\footnote{Both ``SIAM" and ``algorithm" were used as keywords for both math and computer science, which accounts for about half of the overlap between the two categories.} journal (IEEE journals alone represent 35\% of the CS-tagged papers), all of which are easily tagged using these acronyms as keywords. There were no analogous dominant organizations with acronym keywords for the biology journals in our dataset, so the tagging relies on topical keywords and can therefore identify fewer of the biology papers with a reasonable number of keywords in our search. As a result, the computer science subnetwork is more strongly identified as such, and therefore more structurally visible to the partitioning algorithm.
\begin{table}[t]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
 & $G$ & $G_p$ & $G_p^{(1)}$ & $G_p^{(2)}$ \\ \hline
Total vertices & 5793 & 1062 & 531 & 531 \\ \hline
Untagged & 1922 & 502 & 311 & 191 \\ \hline
Tagged & 3871 & 560 & 220 & 340 \\ \hline
CS & 2533 & 405 & 93 & 312 \\ \hline
Biology & 984 & 122 & 108 & 14 \\ \hline
Math & 787 & 97 & 44 & 53 \\ \hline
Both CS and biology & 108 & 13 & 9 & 4 \\ \hline
Both CS and math & 305 & 49 & 15 & 34 \\ \hline
Both biology and math & 24 & 3 & 2 & 1 \\ \hline
All three & 4 & 1 & 1 & 0 \\ \hline
\end{tabular}
\caption{Number of vertices tagged as computer science, biology, math, or some combination of these in $G$, $G_p$, and the two halves of the partition $G_p^{(1)}$ and $G_p^{(2)}$.}
\label{tab:subject_counts}
\end{table}
We then count the number of vertices in each color-coded category, as shown in Table \ref{tab:subject_counts}. This confirms what we can intuitively see in Figure \ref{fig:subject_color_coded}. That is, almost all of the biology papers are found on one side of the partition, the majority of the computer science papers are found on the other side, and the math papers are fairly evenly spread between the two. The number of biology papers is much smaller than the number of computer science papers, which helps explain why there are more untagged papers on the biology side, and why there are significantly more computer science papers on the biology side than there are biology papers on the computer science side, both by percentage and by total number. Color codings for the full network and the subnetwork of high centrality papers can be found in Figures \ref{fig:full_subject_color_coded} and \ref{fig:reading_list_subject_colored}, and a table of vertex counts similar to Table \ref{tab:subject_counts} for the high centrality subnetwork can be found in Table \ref{tab:reading_list_subject_counts}.
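The journal-based tagging described above amounts to simple keyword matching over journal names. A minimal sketch follows; the keyword lists shown are abbreviated and partly hypothetical (the full lists are those given in Table \ref{tab:tagging_keywords}), and the acronym keywords are the ones mentioned in the text.
\begin{verbatim}
# Illustrative sketch of tagging papers by keywords in their journal names.
# The keyword lists below are abbreviated, partly hypothetical examples;
# "IEEE", "ACM", "SIAM", and "algorithm" are keywords mentioned in the text.
KEYWORDS = {
    "Computer Science": ["ieee", "acm", "siam", "algorithm"],
    "Biology":          ["biolog", "bioinformatic", "protein"],   # illustrative
    "Mathematics":      ["siam", "algorithm", "mathemat"],        # overlaps with CS
}

def tag_journal(journal_name):
    """Return the set of subject tags whose keywords appear in the journal name."""
    name = journal_name.lower()
    return {subject for subject, words in KEYWORDS.items()
            if any(word in name for word in words)}

print(tag_journal("IEEE Transactions on Pattern Analysis and Machine Intelligence"))
# -> {'Computer Science'}
\end{verbatim}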
\subsection{Assortativity results}
\begin{table}[h]
\centering
\begin{tabular}{|l|r|r|}
\hline
 & $G$ & $G_p$ \\ \hline\hline
Outdegree & -0.0178 & -0.0141 \\ \hline
Publication year & 0.0067 & 0.0041 \\ \hline
Citation count & 0.0006 & 0.0654 \\ \hline
Reference count & 0.0193 & -0.0061 \\ \hline
Tagged with any subject & 0.1089 & -0.0094 \\ \hline
Subject & 0.1837 & 0.0712 \\ \hline
Subject is CS & 0.2624 & 0.1529 \\ \hline
Subject is biology & 0.3354 & 0.1773 \\ \hline
Subject is math & 0.0732 & 0.0164 \\ \hline
Subject is CS or biology & 0.1500 & 0.0188 \\ \hline
Subject is CS or math & 0.2458 & 0.1256 \\ \hline
Subject is biology or math & 0.1713 & 0.0414 \\ \hline
\end{tabular}
\caption{Assortativity of the full and pruned citation networks with respect to various network properties.}
\label{tab:assortativity}
\end{table}
We would like to calculate the assortativity of the network with respect to our subject tagging, to measure the degree to which papers on a certain topic cite other papers on the same topic. It is unclear how to do so, however, since our vertices can belong to multiple categories, while the assortativity algorithm requires categories to be exclusive. To handle this issue, we report results in Table \ref{tab:assortativity} with respect to multiple strategies for dealing with multiple category membership. We can either define category intersections to be their own, separate category, which was the approach for the ``Subject" row in Table \ref{tab:assortativity}, or we can calculate assortativity with respect to whether a vertex is or is not tagged as a certain subject or group of subjects, which was the approach for the ``Subject is --" lines in Table \ref{tab:assortativity}. For non-subject properties, our assortativity values are all very low in absolute value, meaning that vertices are neither more nor less likely to cite vertices whose outdegree, publication year, citation count, or reference count is similar to their own than vertices with different values. We do, however, notice nontrivial assortativity with respect to several of our subject-based properties. The values are much lower than what we observed for the example in Figure \ref{fig:assortativity_demo}, which had an assortativity of 0.72, but this is not surprising. Many of the papers on each of our topics could not be tagged as such, so the assortativity is not as high as it likely would be with perfect subject tagging. We also would not expect to see as much assortativity in the citation network of an interdisciplinary academic research area as we would in a network of non-academic political books, especially when there is significant overlap between our categories. Survey papers in particular will lower the assortativity, as they draw connections between work on similar topics in different disciplines. We first notice that the assortativity values in $G_p$ are lower than their corresponding values in all of $G$. That is, the papers with only one parent, which we have removed in $G_p$, are more likely to have the same subject tag as their parent than those cited by multiple papers. We also observe that there is very little assortativity with respect to whether a paper's subject is mathematics, which justifies our hypothesis that the assortativity with respect to computer science and biology is noteworthy, and not something that would be observed for just any subject classification.
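As an aside, both strategies can be reproduced with standard library routines. The sketch below uses Python and networkx on a toy graph; the library, the attribute names, and the toy data are illustrative assumptions rather than the tooling used to produce Table \ref{tab:assortativity}.
\begin{verbatim}
# Sketch of categorical assortativity with respect to subject tags, on a toy
# citation graph. Attribute names ("subject") and the toy data are illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_node("p1", subject=frozenset({"CS"}))
G.add_node("p2", subject=frozenset({"CS", "Math"}))
G.add_node("p3", subject=frozenset({"Biology"}))
G.add_edge("p1", "p2")
G.add_edge("p1", "p3")

# Strategy 1 ("Subject" row): treat each combination of tags as its own category.
print(nx.attribute_assortativity_coefficient(G, "subject"))

# Strategy 2 ("Subject is CS" row): treat membership in one category as a boolean.
nx.set_node_attributes(
    G, {n: "CS" in d["subject"] for n, d in G.nodes(data=True)}, "is_cs")
print(nx.attribute_assortativity_coefficient(G, "is_cs"))
\end{verbatim}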
Finally, we note that the assortativity with respect to whether a paper is either computer science or biology, or neither, is much lower than with respect to either category on its own, and only somewhat higher than the assortativity with respect to whether a paper is tagged at all. Meanwhile, the assortativity with respect to whether a paper is computer science or math is only slightly lower than with respect to computer science by itself, while the assortativity with respect to whether a paper is computer science or biology is much lower than with respect to biology by itself. That is, the math category is more strongly linked, structurally, to computer science than to biology (which is unsurprising, given the relative sizes of its intersections with each category), and biology is the most structurally distinct category overall.
\section{How centrality and context inform a better survey}
In Section \ref{section:high_centrality_vertices}, we introduced a collection of 61 papers which were found to have high centrality either overall or within one of the sides of our partition of the pruned network. Our goal is to frame our presentation around the most important papers in the network, so they form our primary reading list. To facilitate our reading, we collected and tabulated metadata for the high centrality papers, including their author, year, title, DOI number, whether they are a parent in the network, their rank with respect to each of our centrality metrics overall and within each side of the partition, and their overall rank as discussed in Section \ref{section:high_centrality_vertices}. We also considered the subnetwork of our high centrality vertices, which we refer to as $G_R$, and visually organized it as shown in Figure \ref{fig:reading_list} and Figure \ref{fig:reading_list_neighborhood} to allow us to see at a glance the context of each paper in the wider reading list. In Table \ref{tab:neighborhood_partition_counts}, we show how many vertices in each paper's neighborhood within $G_R$ fall on either side of the partition, which we can use as a guide to which papers might be highly interdisciplinary either in the references they cite, or the papers they are cited by. Finally, we can use the Mathematica representation of the network to easily calculate how many and which of a paper's references are in the pruned network and on either side of the partition, and check the intersection of the neighborhoods of two or more papers.
\begin{sidewaysfigure}
\centering
\includegraphics[width=0.85\textwidth]{reading_list0pt9crop.png}
\caption{The subnetwork $G_R$ of high centrality papers, as listed in Tables \ref{tab:toppapers_all}, \ref{tab:toppapers_bio}, and \ref{tab:toppapers_CS}. Green vertices are in group 1 (biology dominated) of the partition of $G_p$, and blue vertices are in group 2 (CS dominated).}
\vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed.
\label{fig:reading_list}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\centering
\includegraphics[width=0.85\textwidth]{reading_list_neighborhood0pt9crop.png}
\caption{The subnetwork $G_R$ of high centrality vertices, highlighting the neighborhood of ``Fifty years of graph matching, network alignment, and network comparison" \cite{Emmert_Streib_2016}.}
\vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed.
\label{fig:reading_list_neighborhood}
\end{sidewaysfigure}
At this point, we have a substantial amount of context to guide our reading. We know that there are two main categories of application in our dataset, we know roughly how they inform its structure, and we know which papers are important in each category as well as overall. We know that we are only reading papers which are considered important in some way, we know that they are important within our topic of interest specifically, and we know exactly \textit{how} important they are considered to be in comparison to the rest and why. As we read, we can easily check a paper's neighbors against those we have already read to place it within the context of the overall shape of the field, and to compare our computational sense of which of the included references are relevant with the author's choices in how to frame them. We can choose to start by reading the papers with high authority centrality, in order to gain an understanding of concepts before building upon them, and follow both year and forward reference information to track the development of the field over time. We can check for connections in the form of cocitations between papers we might expect to be connected based on the ideas they discuss--which is interesting both when the connections are there, and when they are not--and thereby get a sense of what the idea transfer between the two disciplines has been up to this point, which ideas have found crossover and which have not, and why. Overall, we acquire a global sense of the shape of the field of network similarity, which we can trust to be less biased by our own or other authors' perceptions of what is important. From here, it is easy to choose additional papers to read and refer to as needed throughout the reading and writing process, and to use their reference lists to put them in the context of our existing understanding. As a last remark, we note that our context is based largely on the reading list subnetwork alone--which was formed using an arbitrary cutoff of ``top ten papers"--and that the network mathematics involved is quite basic. As helpful as we have found this context to be, we have likely only scratched the surface of the possible benefits of this method of exploration. In the following chapters, we outline the motivation and approach for our two main categories of application. We begin with pattern recognition in Chapter \ref{chapter:pattern_recognition} and then discuss biology in Chapter \ref{chapter:systems_biology}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Pattern Recognition}\label{chapter:pattern_recognition}
\section{Motivation}
The complex, combinatorial nature of graphs makes them computationally very difficult to work with, but it also makes them an incredibly powerful data structure for the representation of various objects and concepts. They are particularly useful in computer vision, where we would often like to recognize certain objects in an image (or across images) that seem different at the pixel level as a result of things like angles, lighting, and image scaling. Since graphs are invariant under positional changes including translations, rotations, and mirroring, they are well suited for this task.
Applications in the area of computer vision include optical character recognition \cite{Lu_1991,Rocha_1994}, biometric identification \cite{isenor1986fingerprint,deng2010retinal} and medical diagnostics \cite{sharma2012determining}, and 3D object recognition \cite{Christmas_1995}. In 2018 alone, work has been published in the computer vision-related areas of Indian sign language recognition \cite{Kumar_2018a,Kumar_2018b}, spotting subgraphs (e.g., certain characters) in comic book images \cite{le2018ssgci}, and stacking MRI image slices \cite{clough2018mri}. A timeline with more comprehensive counts of papers in various application areas of pattern recognition through 2002 can be found in \cite{Conte_2004}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{motion_capture_demo.png}
\scriptsize Source: http://ultimatefifa.com/2012/fifa-13-motion-capture-session/
\caption{A graph-based representation of a human body, with vertices corresponding to the markers on a motion capture suit.}
\label{motion_capture_demo}
\end{figure}
In computer vision applications, as well as for pattern recognition in general, we can create a graph representation for an image by decomposing it into parts and using edges to represent the relationships between these components. For example, we can describe a person using the relationships between various body parts--head, shoulders, knees, toes, and so on. This is the idea behind motion capture, as illustrated in Figure \ref{motion_capture_demo}. After we have a graph representation of the objects we would like to compare, the problem of recognition, and in particular of database search, is reduced to a graph matching problem. We must compare the input graph for an unknown object to our database of model graphs to determine which is the most similar.
\section{Graph matching}\label{section:defining_graph_matching}
In the literature, the term ``graph matching"\index{graph matching} is used significantly more often than it is explicitly defined. When a definition is given, it is usually tailored to the purposes of a particular author, and specific to a certain \textit{type} of graph matching; i.e. exact, inexact, error-correcting, bipartite, and so on. The distinctions between these can be subtle, and are typically only explicitly addressed in survey papers. Furthermore, we sometimes address the question of finding a matching \textit{in} a graph \cite{wikiMatchingInAGraph}, which is different from but still related to the problem of graph matching, in which we want to find a mapping \textit{between} two graphs. And finally, there is a significant presence in the literature of \textit{elastic} graph matching\index{elastic graph matching}, which is widely used in pattern recognition but is not in fact a form of graph matching \cite{Conte_2003}. Clearly a good taxonomy is needed. In this section, we give an overview of graph matching-related terms and summarize their distinctions.
\subsection{Preliminary definitions}
Graph isomorphism is the strictest form of graph matching and a natural place to begin our discussion. We therefore give definitions for graph isomorphism and the two main relevant generalizations of the graph isomorphism problem, as well as the computational complexity terms necessary to compare their difficulty. A decision problem (i.e.
one which can be posed as a yes or no question), of which the graph isomorphism problem is an example, is in \textbf{NP}\index{NP} or \textbf{nondeterministic polynomial time} if the instances where the answer is ``yes" have solutions that can be verified in polynomial time \cite{Hartmanis_1982,wikiNPComplexity}. It is \textbf{NP-hard}\index{NP-hard} if it is ``at least as difficult as every other NP problem"; that is, every problem which is in NP can be reduced to it in polynomial time. An NP-hard problem does not necessarily have to be in NP itself \cite{Hartmanis_1982,wikiNPHardness}. If a decision problem is both in NP and NP-hard, it is \textbf{NP-complete}\index{NP-complete} \cite{Hartmanis_1982,wikiNPCompleteness}. An \textbf{induced subgraph}\index{induced subgraph} of a graph is a graph formed from a subset of vertices in the larger graph, and all the edges between them \cite{wikiInducedSubgraph}. By contrast, a \textbf{subgraph}\index{subgraph} is simply a graph formed from a subset of the vertices and edges in the larger graph \cite{wikiSubgraph}.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{isomorphism_demos.png}
\caption{A visual summary of the distinctions between graph isomorphism, subgraph isomorphism, maximum common subgraph, and inexact matching.}
\label{fig:isomorphism_demos}
\end{figure}
A \textbf{graph isomorphism}\index{graph isomorphism} is a bijective mapping between the vertices of two graphs of the same size, which is \textbf{edge-preserving}\index{edge-preserving}; that is, if two vertices in the first graph are connected by an edge, they are mapped to two vertices in the second graph which are also connected by an edge \cite{Conte_2004}. The decision problem of determining whether two graphs are isomorphic is neither known to be NP-complete nor known to be solvable in polynomial time \cite{wikiGraphIsomorphism}. A \textbf{subgraph isomorphism}\index{subgraph isomorphism} is an edge-preserving injective mapping from the vertices of a smaller graph to the vertices of a larger graph. That is, there is an isomorphism between the smaller graph and some induced subgraph of the larger \cite{Conte_2004}. The decision problem of determining whether a graph contains a subgraph which is isomorphic to some smaller graph is known to be NP-complete \cite{wikiSubgraphIsomorphism}. Finally, a \textbf{maximum common induced subgraph}\index{maximum common subgraph} (MCS) of two graphs is a graph which is an induced subgraph of both, and has as many vertices as possible \cite{wikiMaximumCommonSubgraph}. Formulating the MCS problem as a graph matching problem can be done by defining the metric
\[d(G_1,G_2) = 1 - \frac{|MCS(G_1,G_2)|}{\max\{|G_1|,|G_2|\}},\]
where $|G|$ is the number of vertices in $G$ \cite{Bunke_1998,Bunke_1997}.
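To make this definition concrete, the following brute-force sketch computes the MCS-based distance for very small graphs directly from the definition, using networkx's induced-subgraph isomorphism tester. It is exponential in the size of the smaller graph, so it is an illustration of the definition rather than a practical algorithm, and the use of Python and networkx is our own illustrative choice.
\begin{verbatim}
# Brute-force illustration of the MCS-based distance d(G1, G2) defined above.
# Exponential in the size of the smaller graph; only usable on tiny examples.
from itertools import combinations
import networkx as nx
from networkx.algorithms import isomorphism

def mcs_size(g1, g2):
    """Number of vertices in a maximum common induced subgraph of g1 and g2."""
    if g1.number_of_nodes() > g2.number_of_nodes():
        g1, g2 = g2, g1
    for k in range(g1.number_of_nodes(), 0, -1):
        for nodes in combinations(g1.nodes(), k):
            h = g1.subgraph(nodes)                      # induced subgraph of g1
            if isomorphism.GraphMatcher(g2, h).subgraph_is_isomorphic():
                return k                                # h is also induced in g2
    return 0

def mcs_distance(g1, g2):
    return 1 - mcs_size(g1, g2) / max(g1.number_of_nodes(), g2.number_of_nodes())

print(mcs_distance(nx.cycle_graph(4), nx.path_graph(4)))  # 1 - 3/4 = 0.25
\end{verbatim}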
\begin{table}[h] \centering \begin{tabular}{|L{0.35\linewidth}|L{0.15\linewidth}|L{0.15\linewidth}|L{0.22\linewidth}|} \hline & Graph isomorphism & Subgraph isomorphism & Maximum common induced subgraph \\ \hline $G_1$ and $G_2$ must have the same number of vertices & X & & \\ \hline Mapping must include all vertices of either $G_1$ or $G_2$ & X & X & \\ \hline Mapping must be edge-preserving & X & X & X \\ \hline NP-complete & Unknown & X & X* \\ \hline \end{tabular} \flushleft\footnotesize *The associated decision problem of determining whether $G_1$ and $G_2$ have a common induced subgraph with at least $k$ vertices is NP-complete, but the problem of finding the maximum common induced subgraph (as required for graph matching) is NP-hard \cite{wikiMaximumCommonSubgraph}. \caption{A summary of exact graph matching problem formulations.} \label{NP_classifications} \end{table} \subsection{Exact and inexact matching}\label{section:exact_and_inexact_matching} \begin{table}[h] \centering \begin{tabular}{|L{0.16\linewidth}|L{0.13\linewidth}|L{0.11\linewidth}|L{0.12\linewidth}|L{0.105\linewidth}|L{0.165\linewidth}|} \hline & Edge preserving? & Result in? & Mapping seeking? & Optimal? & Complexity \\ \hline Graph isomorphism & Yes & \{0,1\} & Yes & Yes & Likely between P and NP \\ \hline Subgraph isomorphism & Yes & \{0,1\} & Yes & Yes & NP-complete \\ \hline MCS computation & Yes & [0,1] & Yes & Yes & NP-hard \\ \hline Edit distances (exact) & No & [0,1] & No & Yes & Generally exponential \\ \hline Edit distances (approximate) & No & [0,1] & No & No & Generally polynomial \\ \hline Other inexact formulations & No & [0,1] & Sometimes & No* & Generally polynomial \\ \hline \end{tabular} \caption{Summary of the distinctions between exact and inexact graph matching styles.} \footnotesize *The Hungarian algorithm can be used to find an optimal assignment in $O(n^3)$ time based on a given cost function, but the assignment problem minimizes a cost function which is only an approximation of the true matching cost. \label{exact_vs_inexact} \end{table} We define a graph matching method to be \textbf{exact}\index{exact matching} if it seeks to find a mapping between the vertices of two graphs which is edge preserving. Exact matching is also sometimes defined by whether a method seeks a \textit{boolean} evaluation of the similarity of two graphs \cite{Livi_2012,Emmert_Streib_2016}. For graph and subgraph isomorphism, this characterization is equivalent; either they are isomorphic/there is a subgraph in the larger which is isomorphic to the smaller, or they are not. Since the maximum common subgraph problem is edge preserving, we consider it in this work to be an exact matching problem. However, it does not seek a boolean evaluation, and it is therefore sometimes considered to be an inexact matching problem \cite{Livi_2012}. In \textbf{inexact matching}\index{inexact matching}, we allow mappings which are not edge-preserving. This allows us to compensate for the inherent variability of the data in an application, as well as the noise and randomness introduced by the process of constructing graph representations of that data. Instead of matchings between vertices being forbidden if edge-preservation requirements are unsatisfied, they are simply penalized in some way. We then seek to find a matching that minimizes the sum of this penalty cost. 
Instead of returning a value in $\{0,1\}$, we return a value in $[0,1]$ measuring the similarity or dissimilarity between two graphs\footnote{Returning 1 for an isomorphism is analogous to a boolean evaluation and would be considered a similarity measure. Most algorithms for inexact matching seek to minimize some function, so they would return 0 for an isomorphism and are therefore considered \textit{dis}similarity measures.}. Inexact matching algorithms which are based on an explicit cost function or edit distance are often called \textbf{error tolerant}\index{error tolerant} or \textbf{error correcting}\index{error correcting}.
\subsubsection{Optimal and approximate algorithms}
Generally, the problem formulations used for inexact matching seek to minimize some nonnegative cost function which should theoretically be zero for two graphs which are isomorphic. An \textbf{optimal}\index{optimal algorithm} algorithm is one which is guaranteed to find a global minimum of this cost function; it will find an isomorphism if it exists, while still handling the problem of graph variability. However, this comes at the cost of making optimal algorithms for inexact matching significantly more expensive than their exact counterparts \cite{Conte_2004}. Most inexact matching algorithms are therefore \textbf{approximate}\index{approximate algorithm} or \textbf{suboptimal}\index{suboptimal algorithm}. They only find a local minimum of the cost function (either by optimizing it directly, or approximating it by some other cost function), which may or may not be close to the true minimum. Whether this is acceptable depends on the application, but these algorithms are much less expensive, usually running in polynomial time \cite{Conte_2004}.
\subsubsection{Mapping-seeking and non-mapping-seeking algorithms}
Finally, we can draw the distinction of whether the algorithm seeks primarily to find a mapping between vertices (and returns a result in $\{0,1\}$ or $[0,1]$ as a byproduct), or whether it does not. All exact formulations seek a mapping, and many inexact formulations do as well. Mapping-seeking inexact matching is more commonly referred to as \textbf{alignment}\index{alignment}, and is one of two overwhelmingly dominant comparison strategies in biological applications. Alignment is discussed in more detail in Chapter \ref{chapter:systems_biology}.
\subsection{Graph kernels and embeddings}
``The Graph Matching Problem" \cite{Livi_2012}, published in 2012, claims that there are three main approaches to the inexact graph matching problem: edit distances, graph kernels, and graph embeddings. We did not observe this to be the case in our reading, but we still give a brief introduction to graph kernels and embeddings, and discuss our observations. Graph \textbf{embeddings}\index{graph embedding} are a general strategy of mapping a graph into some high-dimensional feature space, and performing comparisons there \cite{Emmert_Streib_2011}. For example, we could identify a graph with a vector in $\R^n$ containing the seven statistics reported in Table \ref{tab:network_table}, or the eigenvalues of its adjacency matrix, and compare them using Euclidean distance. Mapping a graph into Euclidean space certainly makes comparison easier, but it is not obvious how to create a mapping that preserves graph properties in a sensible way.
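As a deliberately naive illustration of this strategy, the sketch below represents a graph by the largest eigenvalues of its adjacency matrix (padded with zeros so that graphs of different sizes land in the same space) and compares graphs by Euclidean distance. The choice of spectrum and of Euclidean distance is purely illustrative.
\begin{verbatim}
# A deliberately simple embedding: represent an undirected graph by the k
# largest eigenvalues of its adjacency matrix, and compare graphs by the
# Euclidean distance between these vectors. Purely illustrative.
import numpy as np
import networkx as nx

def spectral_embedding(g, k=10):
    eigenvalues = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g)))[::-1]
    padded = np.zeros(k)
    padded[:min(k, eigenvalues.size)] = eigenvalues[:k]
    return padded

def embedding_distance(g1, g2, k=10):
    return np.linalg.norm(spectral_embedding(g1, k) - spectral_embedding(g2, k))

print(embedding_distance(nx.cycle_graph(5), nx.path_graph(5)))   # nonzero
print(embedding_distance(nx.cycle_graph(5), nx.cycle_graph(5)))  # exactly 0
\end{verbatim}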
Graph statistics and embeddings must be shown experimentally to be useful; they may allow us to distinguish between different classes of graphs, correlate with some other desirable property, narrow down matching candidates in a large database (hashing), and so on. Graph \textbf{kernels}\index{graph kernel} are a special kind of graph embedding, in which we have a continuous map $k:\mathcal{G}\times \mathcal{G}\rightarrow \R$, where $\mathcal{G}$ is the space of all possible graphs, such that $k$ is symmetric and positive definite or semidefinite \cite{Livi_2012}. Creating a kernel for graphs would allow us to take advantage of the techniques and theory of general kernel methods, but it has been shown that computing a strictly positive definite graph kernel is at least as hard as solving the graph isomorphism problem \cite{Gartner_2003}. We suspect that the amount of work involved in creating a (not necessarily strictly positive definite) graph kernel with enough desirable properties to take advantage of kernel methods is prohibitive enough in many cases to make this an impractical strategy. We note that the strategy of using the \textbf{graphlet degree distribution}\index{graphlet degree distribution} and other local and global graph statistics, which we discuss in Chapter \ref{chapter:systems_biology}, is a form of embedding. Furthermore, the graph kernel strategies described in the references of \cite{Livi_2012} seem to follow the assignment problem-style approach of calculating some notion of similarity between pairs of vertices in two graphs, and using the resulting matrix to create the desired alignment or kernel. We therefore consider the strategies of graph kernels and graph embeddings to be part of the families of other categories which we describe in this work, rather than being mainstream approaches in their own right.
\section{Exact matching and graph edit distance}
The field of graph matching is large and well-established, and we cannot hope to give a full overview of all existing techniques without sacrificing our focus on remaining accessible to the relative novice. If the reader is interested in a more comprehensive investigation, the definitive\footnote{This is clear from the way it is discussed by the authors who cite it, but it also has the highest indegree, outdegree, betweenness ranking, and HITS hub ranking in our pruned network, and is second for closeness.} source on graph matching developments through 2004 is ``Thirty Years of Graph Matching in Pattern Recognition" \cite{Conte_2004}. Two of its authors collaborated with various others on a similar survey in 2014 covering the ten years since the prior survey's publication \cite{foggia2014graph}, and in June of 2018 published a large-scale performance comparison of graph matching algorithms on huge graphs \cite{carletti2018comparing} that may also be of interest. We partition the field into three general approaches:
\begin{enumerate}
\item Exact matching methods, which are primarily based around some kind of pruning of the search space.
\item Edit distance-based methods for optimal inexact matching.
\item Continuous optimization-based methods for inexact matching.
\end{enumerate}
We present optimization methods in their own section, network alignment and comparison methods in Chapter \ref{chapter:systems_biology}, and in this section aim to introduce the concept of search space pruning (which is the dominant approach for exact matching), and the concept of a graph edit path and its corresponding graph edit distance. Our presentation of edit distances is primarily inspired by \cite{Livi_2012} and \cite{Riesen_2009}.
\subsection{Search space pruning}
Most algorithms for exact graph matching are based on some form of tree search with backtracking \cite{Conte_2004}. The process is analogous to solving a grid-based logic puzzle. We represent all possible matching options in a grid format, and then rule out infeasible possibilities based on clues or heuristics about the problem. When we get to the point where our clues can no longer rule out any further possibilities, we must arbitrarily choose from among the remaining options for a certain item and follow through the possibilities that choice rules out until we either complete the puzzle or reach a state where no possible solutions remain. In the latter case, we backtrack, rule out our initial arbitrary choice, and try the other possible options for the same item until we either find a solution or exhaust all possible choices. The seminal algorithm for exact matching is found in Ullmann's 1976 paper ``An Algorithm for Subgraph Isomorphism" \cite{Ullmann_1976}, and is applicable to both graph and subgraph isomorphism. We assume two graphs $g_1$ and $g_2$ with vertex counts $m$ and $n$, respectively, and assume without loss of generality that $m\leq n$. This allows us to represent all matching candidate possibilities in an $m\times n$ matrix of zeros and ones. The Ullmann algorithm\index{subgraph isomorphism algorithm} uses two principles to rule out matching possibilities:
\begin{enumerate}
\item In a subgraph isomorphism, a vertex in $g_1$ can only be mapped to a vertex in $g_2$ with the same or higher degree. This is used to rule out possibilities initially. In Example \ref{ex:ullmann}, degree comparison is able to reduce the number of possible matchings from $8^4=4096$ to $5\cdot5\cdot1\cdot8=200$, a drastic improvement found at a cost of at most $mn$ operations (comparing the degree of each vertex in $g_1$ against the degree of each vertex in $g_2$).
\item For any feasible matching candidate $v_2\in g_2$ for $v_1\in g_1$, the neighbors of $v_1$ must each have a feasible matching candidate among the neighbors of $v_2$. Testing this is called the \textbf{refinement}\index{refinement} procedure, and it forms the heart of the algorithm. In Example \ref{ex:ullmann}, after a single stage of the refinement process and before we begin backtracking, we have reduced the number of possible matchings down to $3\cdot3\cdot1\cdot3=27$.
\end{enumerate}
\begin{example}\label{ex:ullmann}
As Ullmann's presentation of his algorithm is very detail-oriented and does not aim to give a broad intuition for the method, we illustrate it by stepping through an example found on StackOverflow \cite{ullmannStackOverflow}. We have two graphs $g_1$ and $g_2$, as shown, and we want to determine if a subgraph isomorphism exists between them.
\vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} First, we use degree comparison to determine the initial candidates for mapping vertices in $g_1$ to vertices in $g_2$. Vertex $d$ has degree 1, so it can be mapped to anything in $g_2$. Vertices $a$ and $b$ have degree 2, so they cannot be mapped to vertices $3, 4$, or $8$ in $g_2$, as these vertices have degree 1. Finally, vertex $c$ has degree 3, so it can only be mapped to vertex 6. \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \end{tikzpicture} \footnotesize\singlespacing Candidate mapping pairs which satisfy degree requirements. \end{center} \vspace{-5pt} Next, we begin the refinement procedure. We illustrate two cases of the refinement procedure for the candidates of vertex $a$ in $g_1$: one where the candidate is valid, and another where it is ruled out. \vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} \begin{center} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \fill[gray,opacity=0.2] (m-3-1.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-1.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-3.south west) rectangle (m-1-3.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-1-7.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[black,thick,radius=7pt] (m-2-2) circle; \draw[green,thick,radius=7pt] (m-3-3) circle; \draw[green,thick,radius=7pt] (m-4-7) circle; \end{tikzpicture} \footnotesize\singlespacing Vertex 1 is a suitable candidate for vertex $a$. \end{minipage} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & \cancel{1} & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \fill[gray,opacity=0.2] (m-3-1.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-1.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-2.south west) rectangle (m-1-2.north east); \fill[gray,opacity=0.2] (m-5-4.south west) rectangle (m-1-4.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[black,thick,radius=7pt] (m-2-3) circle; \draw[green,thick,radius=7pt] (m-3-2) circle; \draw[red,thick,radius=7pt] (m-4-2) circle; \draw[red,thick,radius=7pt] (m-4-4) circle; \end{tikzpicture} \footnotesize\singlespacing Vertex 2 is not a suitable candidate for vertex $a$. 
\end{minipage} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \end{center} \bigskip On the left, we consider vertex 1 in $g_2$ as a candidate for vertex $a$ in $g_1$. We highlight the rows corresponding to $a$'s neighbors, and the columns corresponding to 1's neighbors. Each neighbor of $a$ must have a candidate among the neighbors of 1; i.e., there must be a 1 somewhere in the intersections of the highlighted columns with each highlighted row. Since this is the case on the left, 1 remains a candidate for $a$. On the right, however, we find that vertex 2 is not a valid candidate for $a$. While there is a candidate for $b$ among the neighbors of 2, there is not a candidate for $c$ among the neighbors of 2. After performing the refinement process for all candidate pairings, the remaining candidates for each vertex in $g_1$ are as shown below. At this point, it is time to begin backtracking. \vspace{5pt} \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ b & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \end{tikzpicture} \footnotesize\singlespacing Candidate mapping pairs after the initial refinement procedure. \end{center} \vspace{-5pt} For the backtracking procedure, we try mapping a vertex to each of its candidates in turn. At each stage, if we cannot find any viable candidates for a vertex among the neighbors of the candidate in question, we backtrack and try again. The algorithm stops when we either find a subgraph isomorphism, or eliminate all candidates for a vertex. \vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & \cancel{1} & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ b & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ }; \fill[gray,opacity=0.2] (m-3-1.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-1.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-3.south west) rectangle (m-1-3.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-1-7.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[black,thick,radius=7pt] (m-2-2) circle; \draw[green,thick,radius=7pt] (m-4-7) circle; \draw[red,thick,radius=7pt] (m-3-3) circle; \draw[red,thick,radius=7pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to $1$. There is no viable candidate for $b$ among the neighbors of $1$, so we backtrack and try again. 
\end{minipage}\hfill \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[matrix of math nodes,inner sep=4pt ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 0 & 0 & 0 & 0 & \cancel{1} & 0 & 1 & 0 \\ b & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \fill[gray,opacity=0.2] (m-3-1.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-1.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-5.south west) rectangle (m-1-5.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-1-7.north east); \draw[black,thick,radius=7pt] (m-2-6) circle; \draw[green,thick,radius=7pt] (m-4-7) circle; \draw[red,thick,radius=7pt] (m-3-5) circle; \draw[red,thick,radius=7pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to 5. There is no viable candidate for $b$ among the neighbors of $5$, so we backtrack and try again. \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[matrix of math nodes,inner sep=4pt ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 0 & 0 & 0 & 0 & 0 & 0 & \cancel{1} & 0 \\ b & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \fill[gray,opacity=0.2] (m-3-1.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-1.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-9.south west) rectangle (m-1-9.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-1-7.north east); \draw[black,thick,radius=7pt] (m-2-8) circle; \draw[green,thick,radius=7pt] (m-4-7) circle; \draw[red,thick,radius=7pt] (m-3-9) circle; \draw[red,thick,radius=7pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to 7. There is no viable candidate for $b$ among the neighbors of $7$, and no more candidates for $a$, so we stop. \end{minipage} \vspace{15pt} We cannot find a suitable candidate for $b$ among the neighbors of any candidate of $a$, so there is no subgraph isomorphism between $g_1$ and $g_2$. \end{example} \subsection{Graph edit distance} \begin{wrapfigure}{R}{0.4\textwidth} \begin{tabular}{|lcl|l|} \hline cat & $\rightarrow$ & ca\textit{r}t & Insertion \\ \hline \textit{c}art & $\rightarrow$ & \textit{d}art & Substitution \\ \hline \textit{d}art & $\rightarrow$ & art & Deletion \\ \hline \textit{ar}t & $\rightarrow$ & \textit{ra}t & Transposition \\ \hline \end{tabular} \caption{Edit operations for strings.} \vspace{-10pt} \label{fig:string_edit_operations} \end{wrapfigure} One way to measure the distance between two objects is to measure how much work it takes to turn the first into the second, and take the length of the \textbf{edit path}\index{edit path}. For example, in Figure \ref{fig:string_edit_operations}, we find an edit path of length 4 between ``cat" and ``rat". However, this should clearly not be the edit \textit{distance}\index{graph edit distance} between ``cat" and ``rat", as we can transform one into the other with a single substitution. The \textbf{edit distance} between two objects is therefore the \textit{minimum} over the lengths of all possible edit paths between them. 
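To make the idea of a minimum-cost edit path concrete, the sketch below computes the classic string edit distance with unit insertion, deletion, and substitution costs (transpositions, shown in Figure \ref{fig:string_edit_operations}, are omitted for simplicity). The same minimum-over-all-edit-paths idea underlies the graph edit distance discussed next, with graphs in place of strings.
\begin{verbatim}
# Dynamic-programming sketch of string edit distance with unit costs for
# insertion, deletion, and substitution (no transpositions). dist[i][j] is
# the edit distance between the first i characters of a and the first j
# characters of b.
def edit_distance(a, b):
    dist = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dist[i][0] = i                                  # delete all of a[:i]
    for j in range(len(b) + 1):
        dist[0][j] = j                                  # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + substitution)
    return dist[len(a)][len(b)]

print(edit_distance("cat", "rat"))   # 1: a single substitution
print(edit_distance("cat", "cart"))  # 1: a single insertion
\end{verbatim}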
\begin{figure}[!t]
\centering
\begin{tabular}{C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}}
\multicolumn{3}{c}{Vertex insertion} & & \multicolumn{3}{c}{Edge insertion} \\
\includegraphics[width=0.18\textwidth]{vertex_insertion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_insertion_right.png} & & \includegraphics[width=0.18\textwidth]{edge_insertion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{edge_insertion_right.png} \\
\multicolumn{3}{c}{Vertex deletion} & & \multicolumn{3}{c}{Edge deletion} \\
\includegraphics[width=0.18\textwidth]{vertex_deletion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_deletion_right.png} & & \includegraphics[width=0.18\textwidth]{edge_deletion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{edge_deletion_right.png} \\
\multicolumn{7}{c}{Vertex substitution} \\
 & & \includegraphics[width=0.18\textwidth]{vertex_substitution_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_substitution_right.png} & & \\
\end{tabular}
\caption{Edit operations for graphs.}
\label{fig:graph_edit_operations}
\end{figure}
For graphs, the relevant edit operations are \textbf{vertex substitution}\index{graph edit operations}, \textbf{vertex insertion}, \textbf{vertex deletion}, \textbf{edge insertion}, and \textbf{edge deletion}, as illustrated in Figure \ref{fig:graph_edit_operations}. Instead of simply taking the length of the edit path, however, each of these operations is associated with some nonnegative cost function $c(u,v)\in \R^+$ (the ``penalty" mentioned in Section \ref{section:exact_and_inexact_matching}) which avoids rewarding unnecessary edit operations by satisfying the triangle inequality
\[c(u,w)\leq c(u,v)+c(v,w),\]
where $u, v$, and $w$ are vertices or edges, or sometimes null vertices/edges in the case of insertion and deletion. We also assume that the cost of deleting a vertex with edges is equivalent to that of first deleting each of its edges and then deleting the resultant neighborless vertex.
%If two graphs are isomorphic to one another, the edit distance between them is the total cost of relabeling--i.e. substituting--all $n$ vertices.
The edit distance is then the minimum total cost over all possible edit paths, and it critically depends on the costs of the underlying edit operations \cite{Bunke_1998}. This can be helpful in some cases, as it allows us to easily tweak parameters in our notion of similarity, but it is also sometimes desirable to avoid this dependence on the cost function. This is one motivation for the formulation of inexact graph matching as a continuous optimization problem, which we discuss in the next section. Finally, we note that it was shown by Bunke in 1999 that the graph isomorphism, subgraph isomorphism, and maximum common subgraph problems are all special cases of the problem of calculating the graph edit distance under certain cost functions \cite{Bunke_1999}.
\section{Suboptimal methods for inexact matching}
We noted previously that optimal methods for inexact graph matching tend to be very expensive, and therefore only suitable for graphs of small size. To address this issue, Riesen and Bunke in 2009 introduced an algorithm for approximating the graph edit distance in a substantially faster way \cite{Riesen_2009}, of which Serratosa published an improved variant in 2014 \cite{Serratosa_2014}.
This is not the only suboptimal method for approximating the graph edit distance, but it provides an interesting connection between the seemingly disparate strategies of search space pruning and casting the problem as one of continuous optimization.
\subsection{The assignment problem}
The key to this connection is the idea of the assignment problem. Instead of searching the space of possible edit paths to find the graph edit distance, we approximate it with the solution to a certain matrix optimization problem. The following definition is due to Riesen and Bunke \cite{Riesen_2009}:
\begin{definition}
Consider two sets $A$ and $B$, each of cardinality $n$, together with an $n\times n$ cost matrix $C$ of real numbers, where the matrix elements $c_{i,j}$ correspond to the cost of assigning the $i$-th element of $A$ to the $j$-th element of $B$. The \textbf{assignment problem}\index{assignment problem} is that of finding a permutation $p=\{p_1,\dots,p_n\}$ of the integers $\{1,2,\dots,n\}$ which minimizes $\sum_{i=1}^n c_{i,p_i}$.
\end{definition}
A brute force algorithm for the assignment problem would require $O(n!)$ time, which is impractical. Instead, we can use the \textbf{Hungarian method}\index{Hungarian algorithm}\index{Munkres' algorithm}. This algorithm is originally due to Kuhn in 1955 \cite{Kuhn_1955} and solves the problem in $O(n^3)$ time by transforming the original cost matrix into an equivalent matrix with $n$ independent zero elements\footnote{Independent meaning that they are in distinct rows and columns.} which correspond to the optimal assignment pairs. The version of the algorithm described in \cite{Riesen_2009} is Munkres' 1957 refinement of the original Hungarian algorithm \cite{munkres1957algorithms}.
\subsubsection{Relationship to the bipartite graph matching problem}
\begin{figure}[!t]
\centering
\begin{tabular}{m{0.3\textwidth}m{0.05\textwidth}m{0.3\textwidth}}
$C = $\bordermatrix{ & 1 & 2 & 3 \cr a & 3 & 2 & 1 \cr b & 1 & 3 & 4 \cr c & 2 & 5 & 2 } & $\Leftrightarrow$ & \includegraphics[width=0.3\textwidth]{bipartite_assignment_problem.png} \\
\multicolumn{3}{c}{$A = \{a,b,c\}, B=\{1,2,3\}$}
\end{tabular}
\caption{Reformulating the assignment problem as that of finding an optimal matching in a bipartite graph. The edges and their weight labels in the bipartite graph are colored to make it easier to see which weights belong to which edges.}
\label{bipartite_reformulation}
\end{figure}
As noted in Section \ref{section:defining_graph_matching}, we sometimes must address the question of finding a matching \textit{in} a graph. This is defined as a set of edges without common vertices. It is straightforward to reformulate the assignment problem as one of finding an optimal matching within a \textbf{bipartite graph}\index{bipartite graph}, that is, a graph whose vertices can be divided into two disjoint independent sets such that no edges run between vertices in the same set. If $A$ and $B$ are two sets of cardinality $n$ as in the assignment problem, the elements of $A$ form one vertex group, the elements of $B$ form the other, and we define the edge weight between the $i$-th element of $A$ and the $j$-th element of $B$ to be the cost of that assignment, as shown in Figure \ref{bipartite_reformulation}. The assignment problem is therefore also referred to as the \textbf{bipartite graph matching problem}.
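As a concrete illustration, the small assignment problem in Figure \ref{bipartite_reformulation} can be solved with an off-the-shelf Hungarian-style solver. The sketch below uses SciPy's implementation; the choice of library is our own illustrative assumption.
\begin{verbatim}
# Solving the 3x3 assignment problem from the figure above with SciPy's
# Hungarian-style solver (an illustrative choice of library).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows correspond to A = {a, b, c}, columns to B = {1, 2, 3}.
C = np.array([[3, 2, 1],
              [1, 3, 4],
              [2, 5, 2]])

rows, cols = linear_sum_assignment(C)
for r, c in zip(rows, cols):
    print("abc"[r], "->", c + 1)
print("total cost:", C[rows, cols].sum())  # optimal assignment a->2, b->1, c->3, cost 5
\end{verbatim}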
\subsubsection{Relationship to graph edit distance}
To connect the assignment problem to graph edit distance computation, we define a cost matrix $C$ such that each $c_{i,j}$ entry corresponds to the cost of substituting the $i$-th vertex of our source graph with the $j$-th vertex of our target graph \cite{Riesen_2009}. We can generalize this approach to handle graphs with different numbers of vertices by considering vertex insertions and deletions as well as substitutions, using a modified version of the Hungarian method which applies to rectangular matrices \cite{bourgeois1971extension}. In this modified version of the Hungarian method, the resulting cost matrix (again, definition due to Riesen and Bunke \cite{Riesen_2009}) becomes
\[ C = \left[ \begin{array}{cccc|cccc}
c_{1,1} & c_{1,2} & \dots & c_{1,m} & c_{1,-} & \infty & \dots & \infty \\
c_{2,1} & c_{2,2} & \dots & c_{2,m} & \infty & c_{2,-} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \ddots & \infty \\
c_{n,1} & c_{n,2} & \dots & c_{n,m} & \infty & \dots & \infty & c_{n,-} \\ \hline
c_{-,1} & \infty & \dots & \infty & 0 & 0 & \dots & 0 \\
\infty & c_{-,2} & \ddots & \vdots & 0 & 0 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \infty & \vdots & \ddots & \ddots & 0\\
\infty & \dots & \infty & c_{-,m} & 0 & \dots & 0 & 0 \\
\end{array} \right], \]
where $n$ is the number of vertices in the source graph, $m$ the number of vertices in the target, and a $-$ is used to represent null values. The upper left corner of this matrix represents the cost of vertex substitutions, and the bottom left and top right corners represent the costs of vertex insertions and deletions, respectively. Since each vertex can be inserted or deleted at most once, the off-diagonal elements of these blocks are set to infinity. Finally, since substitutions of null values should not impose any costs, the bottom right corner of $C$ is set to zero. This is only a rough approximation of the true edit distance, as it does not consider any information about the costs of edge transformations. We can improve the approximation by adding the minimum sum of edge edit operation costs implied by a vertex substitution to that substitution's entry in the cost matrix, but we still only have a suboptimal solution for the graph edit distance problem, even though the assignment problem can be solved optimally in a reasonable amount of time.
\subsubsection{Other suboptimal graph matching methods using the assignment problem}
Approximating the graph edit distance is far from the only graph matching strategy which is based around the assignment problem. Instead of a cost matrix based around the cost function of a graph edit distance measure, we can incorporate other measures of similarity or affinity between vertices. The advantage of this approach is that we can incorporate both topological\footnote{That is, information derived directly from the structure of a network.} and external notions of similarity into our definition. To be effective, this strategy requires a relevant source of external information, and as a result is much more prevalent in biological applications, as we will see in the next chapter. In either case, we attempt to maximize the edges in the induced common subgraph only indirectly; a good cost function will be an effective proxy for how much a certain vertex pairing will contribute to a good overall mapping, but it does not directly maximize edge preservation.
\subsection{Weighted graph matching vs.
the assignment problem} Most of the suboptimal graph matching methods we observed are based around either the assignment problem, or around some formulation of the \textit{weighted graph matching problem}. \begin{definition} The \textbf{weighted graph matching problem}\index{weighted graph matching problem} (WGMP) is typically defined as finding an optimum permutation matrix which minimizes a distance measure between two weighted graphs; generally, if $A_G$ and $A_H$ are the adjacency matrices of these, both $n\times n$, we seek to minimize $||A_G - PA_HP^T||$ with respect to some norm \cite{Umeyama_1988, Koutra_2013, Almohamad_1993} , or minimize some similarly formulated energy function \cite{Gold_1996}. The specific norm and definition depends on the technique being used to solve the optimization problem. \end{definition} Weighted graph matching is an inexact graph matching method, and its techniques are generally suboptimal, searching for a \textit{local} minimum of the corresponding continuous optimization problem. There is a wide variety of techniques in use, including linear programming \cite{Almohamad_1993}, eigendecomposition \cite{Umeyama_1988}, gradient descent \cite{Koutra_2013}, and graduated assignment \cite{Gold_1996}. Other techniques mentioned in \cite{Almohamad_1993, Umeyama_1988, Gold_1996, Conte_2004} include Lagrangian relaxation, symmetric polynomial transformation, replicator equations, other spectral methods, neural networks, and genetic algorithms. The weighted graph matching problem is similar to the assignment problem in that we seek a permutation between the $n$ vertices of two graphs, but unlike the assignment problem, there is no need to define a cost or similarity matrix ahead of time. Instead, we directly measure the quality of a permutation assignment with respect to the structure of a graph, and optimize this quantity directly to find our matching. This allows us to avoid relying on the heuristics inherent in any formulation of a cost or similarity matrix, but it also means we cannot easily incorporate external information into our solution of the problem. We also cannot choose to favor alignments with desirable properties other than conserved edges; for example, we may prefer a connected alignment to a more scattered one, even if it conserves fewer edges. Whether weighted graph matching techniques are preferable to assignment problem-based strategies is therefore dependent on the specific problem to be solved. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Biology}\label{chapter:systems_biology} \section{Motivation} A fundamental goal in biology is to understand complex systems in terms of their interacting components. Network representations of these systems are particularly helpful at the cellular and molecular level, at which we have large-scale, experimentally determined data about the interactions between biomolecules such as proteins, genes, and metabolites. In the past twenty years, there has been an explosion of availability of large-scale interaction data between biomolecules, paralleling the surge of DNA sequence information availability that was facilitated by the Human Genome Project \cite{humanGenomeProject}. 
Sequence information comparison tools have been revolutionary in advancing our understanding of basic biological processes, including our models of evolutionary processes and disease \cite{HGPimpact}, and the comparative analysis of biological networks presents a similarly powerful method for organizing large-scale interaction data into models of cellular signaling and regulatory machinery \cite{Sharan_2005}. In particular, we can use network comparison to address fundamental biological questions such as ``Which proteins, protein interactions and groups of interactions are likely to have equivalent functions across species?", ``Can we predict new functional information about proteins and interactions that are poorly characterized, based on similarities in their interaction networks?", and ``What do these relationships tell us about the evolution of proteins, networks, and whole species?" \cite{Sharan_2006}. Comparison strategies and metrics are also key to developing mathematical models of biological networks which represent their structure in a meaningful way, which is a key step towards understanding them. For example, good comparison techniques allow us to model dynamical systems on biological networks \cite{Watts_1998} (e.g., the spread of infectious diseases), and create appropriate null hypotheses for drawing conclusions about experimental networks\footnote{Section 2.3.1 in \cite{Hayes_2013} is a brief and superb discussion of the importance of modeling biological networks.} \cite{Hayes_2013}. \section{Comparison to pattern recognition} As mentioned in Chapter \ref{chapter:pattern_recognition}, we observed two overwhelmingly dominant network comparison strategies in biology applications. We use the term \textit{comparison} strategies and not \textit{matching} or \textit{alignment} strategies because unlike pattern recognition, not all biological applications seek a mapping between two networks. This is a result of the difference between the typical networks with which each field is concerned; we summarize these distinctions in Table \ref{tab:bio_vs_CS_summary}. In pattern recognition, graphs are primarily a convenient data structure to represent the objects that we would like to compare, and do not necessarily represent inherent real-world relationships. The goal of comparison is typically to find the ``closest" object from a large database, and seeking an ``isomorphism with tolerance" is a viable strategy, because we typically have a small number of vertices, and we expect to be able to find a close structural match. By contrast, in biology we work with networks which are much larger, typically incompletely explored, non-deterministic\footnote{Since biological networks are constructed entirely based on experimental data, edges can only ever represent probabilities, and their analysis must take this into consideration.} and which we cannot expect to be ``almost the same" globally. Furthermore, we have highly significant external information about their vertices, such as BLAST scores (sequence similarities) or known functions of individual proteins, and every edge and subgraph corresponds to real-world interactions that have real-world meaning. Mapping-seeking comparison strategies for biology do not seek to give an overall measure of the similarity between two graphs, but to find regions which are conserved between two or more networks, under what Flannick et. al. 
\cite{flannick2006graemlin} calls ``perhaps the most important premise of modern biology": the assumption that evolutionary conservation implies functional significance. When comparing and analyzing biological networks as a whole, we can find meaning using strategies other than seeking a mapping between networks. These are generally based on investigating the frequencies and/or distributions of relevant subgraphs of a large network, i.e. \textit{graphlets} and \textit{motifs}, which we now introduce by way of generalization of the network statistics we discussed in Chapters 1 and 2. \begin{table}[h] \centering \begin{tabular}[t]{L{0.3\linewidth}L{0.3\linewidth}L{0.3\linewidth}} \textbf{Graph matching for pattern recognition} & \textbf{Biological network alignment} & \textbf{Alignment-free biological network comparison} \\ \hline\hline \noalign{\medskip} Seeks to recognize objects based on their graph representations & Seeks to find conserved regions between multiple networks & Seeks to compare networks on a global scale \\\noalign{\medskip} Attempts to find a mapping or near-mapping between graphs & Attempts to find a mapping or near-mapping between networks & Does not attempt to find a mapping \\\noalign{\medskip} Deals with large numbers of small graphs & Deals with small numbers of large or very large networks & Deals with a variable number of large or very large networks \\\noalign{\medskip} Expects to find graphs which are near-isomorphic & Networks are not generally near-isomorphic & Networks are not generally near-isomorphic \\\noalign{\medskip} Graphs are simply a convenient data structure to represent objects we would like to compare & Each vertex and edge independently represents meaningful information about the real world & Each vertex and edge independently represents meaningful information about the real world\\ \end{tabular} \caption{Summary of differences between graph matching for pattern recognition and biological network comparison. 
} \label{tab:bio_vs_CS_summary} \end{table} \section{Network statistics and measures} \begin{figure}[t] \renewcommand{\arraystretch}{1.5} \begin{tabular}[c]{ccccc} \multicolumn{5}{l}{\textbf{Network comparison with univariate measures}} \\ \noalign{\smallskip} Network & $\rightarrow$ & Something in $\R^n$ & \multirow{2}{0.04\linewidth}{$\Bigr\}\rightarrow$} & \multirow{2}{0.42\linewidth}{Similarity score derived from a metric or aggregation measure on $\R^n$} \\ \noalign{\smallskip} Network & $\rightarrow$ & Something in $\R^n$ \\ \noalign{\medskip} Examples: & \multicolumn{4}{p{0.8\textwidth}}{Any metric on $\R^n$ applied to number of vertices and edges, mean degree, diameter, connectivity, degree distribution, centrality distributions, local graphlet counts, graphlet degree distributions, etc.} \\ \end{tabular} \hfill \vspace{0.5cm} \begin{tabular}[c]{ccccc} \multicolumn{5}{l}{\textbf{Network comparison with bivariate measures}} \\ \noalign{\smallskip} Network & \multirow{2}{0.05\linewidth}{$\Bigr\}\rightarrow$} & \multirow{2}{0.19\linewidth}{Mapping process of some sort} & \multirow{2}{0.03\linewidth}{$\rightarrow$} & \multirow{2}{0.35\linewidth}{Similarity measure derived from the mapping in some way}\\ \noalign{\smallskip} Network & \\ \noalign{\medskip} Examples: & \multicolumn{4}{p{0.8\textwidth}}{Graph edit distance; node correctness, edge correctness, induced conserved structure \cite{Patro_2012}, or symmetric substructure score \cite{Saraph_2014} with respect to an alignment; MCS-related metrics (i.e. \cite{Bunke_1998})} \\ \end{tabular} \caption{A summary of the distinction between univariate and bivariate measures.} \label{fig:univariate_and_bivariate} \end{figure} In Table \ref{tab:network_table}, we compared various network statistics for our constructed citation network and its pruned version to two other citation networks and two other random networks. Our goal in doing so was to determine whether the calculated statistics were noteworthy in some way. Without some kind of context, we cannot tell, for example, whether a diameter of ten is above average, remarkably small, or somewhere in between, or whether it is typical to have 96\% of the vertices in the network contained in the giant component. That is, we know that \textit{our network statistics are only meaningful when we compare them to the same statistics for other networks}. Network statistics such as vertex count, edge count, and diameter, therefore, can be thought of as a similarity measure between two networks\footnote{See \cite{Aittokallio_2006} for a discussion of how these are relevant to biology specifically.}. This is generally a much weaker form of similarity than even suboptimal inexact graph matching, but the more important distinction is that these are \textbf{univariate}\index{univariate measure} measures. That is, they map from a single network to $\R^n$, in contrast to \textbf{bivariate}\index{bivariate measure} measures, which give us a single number representing the similarity between two networks. The distinction between univariate and bivariate measures is illustrated in Figure \ref{fig:univariate_and_bivariate}. We can treat univariate measures as bivariate by using some sort of metric or other aggregation measure on their output for two different networks, but they have the additional advantage of allowing us to classify networks based on their output, rank them on a spectrum, and so on. 
This is particularly important for creating random models of networks that share the key properties of the real-world networks we would like to study \cite{Hayes_2013, Watts_1998}. \subsection{Local and global network statistics} Network statistics like vertex and edge count, diameter, size of giant component, and assortativity values are all \textbf{global}\index{global network statistics}, meaning they give us a single value for the entire graph. The centrality measures we introduced in Section \ref{section:centrality}, in contrast, are \textbf{local}\index{local network statistics}, meaning they give us a value at each vertex. Examples of global and local network statistics can be found in Table \ref{tab:local_vs_global_statistics}. So far, we have seen these values used to rank vertices and choose a small number of them for further analysis, but we can gain insight into the graph as a whole by looking at \textit{distributions} of these values. The \textbf{degree distribution}\index{degree distribution} is the simplest example of this. We count the number of vertices with degree zero, one, two, and so on, and display the results in a histogram. For a directed graph, we have distributions for both indegree and outdegree. Further information about degree distributions and their relevance can be found in \cite{newman2010}. \begin{table}[!t] \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{L{0.13\linewidth}L{0.25\linewidth}L{0.45\linewidth}} \hline \textbf{Type} & \textbf{Description} & \textbf{Examples} \\ \hline Global & Single value for an entire network & Mean degree, maximum degree, diameter, edge density, assortativity, global clustering coefficient \\ Local & Value at each vertex in a network & Indegree, outdegree, graphlet degrees, betweenness centrality, closeness centrality, HITS centralities, local clustering coefficient \\ \hline \end{tabular} \caption{A summary of the distinction between local and global network statistics.} \label{tab:local_vs_global_statistics} \end{table} %\begin{figure}[h] %\centering %\includegraphics[width=\textwidth]{degree_distributions.png} %\caption{Indegree and outdegree distributions for our citation network. As a result of our construction methods, the vast majority of our papers have an %outdegree of zero and an indegree of one, so we use a log scale to better observe the spread in the data.} %\label{fig:degree_distributions} %\end{figure} \section{Graphlets and motifs} \textbf{Graphlets}\index{graphlets} are small connected non-isomorphic induced subgraphs of a simple undirected network \cite{Przulj_2007}, introduced by Nata\u{s}a Pr\u{z}ulj in the mid-2000s for the purpose of designing a new measure of local structural similarity between two networks based on their relative frequency distributions. \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{graphlets_figure.png} \caption{The 73 automorphism orbits for the 30 possible graphlets with 2-5 nodes. In each graphlet, vertices belonging to the same automorphism orbit are the same shade. Figure reproduced from \cite{Przulj_2007}.} \label{fig:graphlets} \end{figure} First, we recall that the degree distribution measures, for each value of $k$, the number of vertices of degree $k$. In other words, for each value of $k$, it gives the number of vertices touching $k$ edges. We note that a single edge is the only graphlet with two nodes, and call it $G_0$. 
The degree distribution can therefore be thought of as measuring how many vertices touch one $G_0$ graphlet, how many vertices touch two $G_0$ graphlets, and so on. We can generalize this idea to larger graphlets, and count how many vertices touch a certain number of each graphlet $G_0, \dots, G_{29}$, where $G_0,\dots, G_{29}$ are defined as in Figure \ref{fig:graphlets}. For example, the $G_2$ distribution measures how many vertices touch one triangle, two triangles, and so on. However, we notice that for most graphlets with more than two vertices, \textit{which} vertex of the graphlet we touch is topologically relevant; for example, touching the middle node of $G_1$ is different from touching an end node. This is because the end and middle vertices are in different \textbf{automorphism orbits}. Two vertices are in the same automorphism orbit if they can be mapped to each other in an isomorphism from the graph to itself. Intuitively, this means that we have no way of telling them apart without labeling the graph. If two vertices are in different automorphism orbits, we can tell them apart using their degree, the neighbors of their neighbors, and so on, as we did in our demonstration of the Ullmann subgraph isomorphism algorithm. The \textbf{graphlet degree} therefore measures how many of a certain graphlet \textit{in a specific automorphism orbit} each vertex touches. There are 73 different automorphism orbits for the thirty graphlets with 2-5 nodes, and we therefore obtain 73 \textbf{graphlet degree distributions (GDDs)} analogous to (and including) the degree distribution. Since these are based on the neighborhoods of each vertex, they measure the local structure of a graph. An example of the $G_0$ and $G_2$ distributions for the PPI networks of \textit{Mycoplasma genitalium} (a sexually transmitted bacterium) and \textit{Schizosaccharomyces pombe} (yeast), which we have chosen for their relatively small size and prevalence as model organisms, is shown in Figure \ref{fig:GDD_demo}, with corresponding statistics in Table \ref{tab:ppi_networks}. We obtained both networks from the STRING database \cite{szklarczyk2014string}, and included links with an interaction confidence score above 950, i.e. those representing at least a 95\% probability of an interaction between two proteins. \begin{table}[t] \centering {\setstretch{1}\fontsize{11}{13}\selectfont \begin{tabular}{L{0.25\linewidth}C{0.085\linewidth}C{0.065\linewidth}C{0.08\linewidth}C{0.11\linewidth}C{0.11\linewidth}C{0.11\linewidth}} \hline & Vertices & Edges & Edge density & Clustering & Maximum $G_0$ degree & Maximum $G_2$ degree \\ \hline \textit{Mycoplasma genitalium} & 444 & 1860 & 0.94\% & 0.758 & 66 & 1376 \\ Match degree sequence & 444 & 1860 & 0.94\% & 0.410 & 66 & 817 \\ Match size and density & 444 & 1860 & 0.94\% & 0.022 & 17 & 5 \\ \hline \textit{S. pombe} & 5100 & 30118 & 0.11\% & 0.750 & 213 & 14592 \\ Match degree sequence & 5100 & 30118 & 0.11\% & 0.148 & 213 & 3027 \\ Match size and density & 5100 & 30118 & 0.11\% & 0.002 & 27 & 5 \\ \hline \end{tabular} } \caption{Statistics for PPI networks of two small organisms and two comparable random graphs for each. The ``clustering'' value is the global clustering coefficient \cite{newman2010}, which measures the fraction of connected triplets in the network which are closed. Edge density is the fraction of edges compared to the total number of possible edges, i.e. 
$m/n^2$.} \label{tab:ppi_networks} \end{table} \begin{figure}[!ph] \centering \includegraphics[width=0.95\textwidth]{graphlet_degree_distributions.png} \caption{Visualizations and $G_0$ and $G_2$ distributions for the PPI networks of \textit{S. pombe}, \textit{Mycoplasma genitalium}, and two randomly generated networks comparable to the latter.} \label{fig:GDD_demo} \end{figure} Just by looking at the maximum $G_2$ degree, we can easily distinguish between a real network and a random network with the same degree sequence; we can also see different patterns in the shape of the $G_2$ distributions for the PPI networks of our two model organisms. It is easy to see how the varying shapes of all 73 GDDs could help us distinguish between different graphs in a meaningful way. In order to use these distributions for computational network comparison, however, we must somehow reduce this large quantity of information to a single measure. Pr\u{z}ulj introduces one possible method in \cite{Przulj_2007} by considering the Euclidean distance between each GDD for two networks, after appropriate scaling and normalization, and then taking the arithmetic or geometric mean over all 73. Other methods are of course possible, and potentially better, depending on the application in question. \subsection{Graphlets vs. Motifs} Network \textbf{motifs}\index{motif} are similar to graphlets in that they are both small induced subgraphs of large networks. Unlike graphlets, however, the definition of motifs requires these subgraphs to be \textit{statistically overrepresented} in the network \cite{Milo_2002}; that is, they are patterns of interactions which occur more frequently than you would reasonably expect due to random chance. In a random graph with the same degree sequence(s) as a real network, we are not likely to see connected triangles, for example--but as we can see in Table \ref{tab:ppi_networks} and Figure \ref{fig:GDD_demo}, connected triangles appear frequently in real biological networks, associated with feedback or feed-forward loops in transcription and neural networks, clusters in protein interaction networks, and so on \cite{Berg_2004}. We can generalize further to seek \textbf{topological motifs}, which are statistically overrepresented ``similar" network sub-patterns. Such patterns can be used as a first step towards understanding the basic structural elements particular to certain classes of networks \cite{Milo_2002}; different types of networks contain different types of elementary structures which reflect the underlying processes that generated them, and as discussed in \cite{Berg_2004}, motifs are in fact indicative of biological functions. These methods are a useful way to distinguish patterns of biological function in the topology of molecular interaction networks from random background, but they are not well-suited for full-scale comparison of multiple networks \cite{Przulj_2007}. They are sensitive to the choice of random network model used to determine statistical significance, and ignore subnetworks with low or average frequency. These low-frequency subgraphs may still be functionally important, especially if such subnetworks are consistently seen across multiple real networks despite occurring rarely within any individual one. Graphlets are one way to address this issue; alignment strategies are another. We also note that \cite{Przulj_2007} defines graphlets for undirected graphs only, while the motifs discussed in \cite{Milo_2002} and \cite{Berg_2004} are primarily directed. 
In both cases, however, the definition does not include multiedges or self-loops. \subsubsection{Computational complexity} We already know that subgraph isomorphism is a computationally expensive problem. Exhaustively finding all occurrences of small isomorphic or near-isomorphic subgraphs in a network is infeasible for the gene and protein-protein interaction (PPI) networks of all but small organisms such as \textit{E. coli} \cite{Emmert_Streib_2016}. As a result, motif search in large networks (which requires an exhaustive search over an entire \textit{ensemble} of random graphs in order to determine statistical significance) is generally limited to subgraphs of at most five nodes. Graphlet statistics are similarly expensive to compute, and the combinatorially increasing number of possible graphlets for an increasing number of nodes is an additional limitation on the size of graphlets we can reasonably consider. In order to process the interaction networks of higher organisms, various search heuristics and estimation procedures for motifs and graphlet statistics are used. \subsection{Netdis} Netdis \cite{Ali_2014}, introduced by Ali et. al. in 2014, is another method for network comparison which is primarily based on counting the occurrences of small subgraphs in a larger network. For each vertex, they count the number of occurrences of each possible graphlet of 3-5 nodes in a neighborhood of radius two around it. Each vertex is thereby associated with a vector of graphlet counts, which is normalized with respect to the same counts for a gold standard protein-protein interaction (PPI) network, as a proxy for the counts we would expect to see in a suitable random model for a typical PPI network. These centered counts are then combined into an overall statistic. This statistic is used to correctly separate random graph model types and to build the correct phylogenetic tree of species based on their protein interaction networks, showing that Netdis is a relevant comparison method for large networks. It is also highly tractable; since we only search for subgraphs in a given neighborhood, its computational complexity grows about linearly with the number of vertices in a network if neighborhood sizes stay relatively small, and it is therefore well-suited for full-scale comparison of many large networks. We note that although Netdis uses graphlet counts, it is not a generalization or special case of graphlet degrees. When counting graphlets in the two-step neighborhood of a vertex, we only need to consider the 29 possible graphlets of 3-5 nodes, rather than their 72 possible automorphism orbits. Not all the graphlets a vertex touches will necessarily be present in its two-step neighborhood, and we can also have graphlets in the two-step neighborhood of a vertex which do not directly touch the vertex itself. \section{Local and global network alignment} In graph matching applications and in subgraph counting, any mappings found by an algorithm have not been the primary result of interest in network comparison. When calculating graphlet statistics and network motifs, we do perform graph matching, but the subgraphs involved are so small that whether the matching algorithm is mapping-seeking is not a particularly relevant question; the computational difficulty is a result of the large number of individually trivial matches performed. 
In pattern recognition, we use inexact matching methods in order to allow for error in the comparison of slightly larger\footnote{Typically up to a hundred nodes at most, based on the experimental results reported by our references.} graphs that should represent the same or similar objects. There are enough nodes to make an exhaustive search difficult, so the matching method we use matters, but the mapping between two networks does not typically give us relevant real-world insights about the structures they represent. When comparing large biological networks, however, this is not the case, and we therefore frequently seek regions of similarity and dissimilarity in two or more large networks (thousands of nodes and tens of thousands of edges). Just as longer DNA sequences which are conserved across species indicate functional significance and can help classify evolutionary relationships, larger subnetworks of biomolecular interactions which are conserved across species are likely to represent true functional modules\footnote{Throughout the rest of this work, we will use \textit{module} to refer to a subgraph of a biological network which is larger than a motif, but still relatively small compared to a full-size network, and which is known to have functional significance.} \cite{Sharan_2006} and give insight into evolutionary processes \cite{Ali_2014}. To find these conserved regions, we seek an \textbf{alignment}, or a mapping between vertices in two or more networks which approximates the true structure they have in common. Alignment algorithms may or may not be able to find an isomorphism, if one exists, but this is not important when the networks being aligned are essentially guaranteed to not be isomorphic anyway. \subsection{Local vs. global alignment} \renewcommand{\topfraction}{0.65} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{local_alignment.png} \caption{Local alignment of a network. Vertices may be used for multiple ``pieces" of the overall mapping, i.e. the mapping is not required to be one-to-one.} \includegraphics[width=0.5\textwidth]{global_alignment.png} \caption{Global alignment of a network. Not all vertices must be mapped to a vertex in the other network, but the mapping must be one-to-one in both directions for those which are.} \label{fig:alignment} \end{figure} Alignment strategies for biological applications fall into two categories: local alignment, and global alignment. In both cases, we seek to find regions of similarity between networks, and mappings do not need to be defined for every vertex in each network. In \textbf{local alignment}\index{local alignment}, the goal is to find local regions of isomorphism between the two networks, where each region implies a mapping which is independent of the others. These mappings do not need to be mutually consistent; that is, a vertex can be mapped differently under different regions of the mapping, as illustrated in Figure \ref{fig:alignment}. We can choose a locally optimal mapping for each region of similarity, even if this results in overlap. In \textbf{global alignment}\index{global alignment}, by contrast, we must define a single mapping across all parts of the input, even if it is locally suboptimal in some regions of the networks \cite{Singh_2007}. 
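As a toy illustration of the difference in the two kinds of output (with hypothetical vertex names, and Python used purely for illustration), a global alignment can be represented as a single injective partial mapping, while a local alignment is a collection of region mappings that need not agree with one another:

\begin{verbatim}
# Hypothetical vertex sets of two networks
V1 = ["a", "b", "c", "d"]
V2 = ["w", "x", "y", "z"]

# Global alignment: one partial, one-to-one mapping V1 -> V2
global_alignment = {"a": "x", "b": "w", "c": "z"}        # "d" is left unaligned
assert len(set(global_alignment.values())) == len(global_alignment)

# Local alignment: independent region mappings, possibly overlapping
local_alignment = [
    {"a": "x", "b": "y"},    # conserved region 1
    {"b": "w", "c": "z"},    # conserved region 2: "b" is mapped differently here
]
\end{verbatim}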
To understand the distinctions between these two alignment strategies, we must understand the motivations behind their development, and the biological implications of their results\footnote{The first few paragraphs of \cite{flannick2006graemlin} are a concise and effective overview of the origins and goals of biological network alignment and its relationship to sequence alignment.}. The study of biological \textit{networks} has been guided first and foremost by the study of biological \textit{sequences}, primarily DNA sequences, and the development of biological network alignment closely parallels that of sequence alignment\footnote{A timeline comparing these developments as of 2006 can be found in \cite{Sharan_2006}.}. Sequence alignment is based on the assumptions that \begin{enumerate} \item Patterns which occur frequently (i.e. motifs) are likely to have functional significance. \item Sequence regions which are conserved across multiple species are likely to have functional significance. \item The degree to which sequences differ is related to their evolutionary distance. \end{enumerate} The study of (i) is roughly local sequence alignment, while the study of (ii) and (iii) is roughly global sequence alignment. Local network alignment, by analogy, searches for highly similar network regions that likely represent conserved functional structures (i.e. evolutionarily conserved building blocks of cellular machinery), which often results in relatively small mapped subnetworks and in some network regions not being a part of the alignment. Global network alignment, on the other hand, looks for the best superimposition of the entire input networks (i.e. a single comprehensive mapping between the sets of protein interactions from different species), which typically results in large but suboptimally conserved mapped subnetworks \cite{guzzi2017survey}. Sequence alignment cannot be perfectly generalized to network alignment; sequence complexity is linear, while network complexity is combinatorial. An alignment of two sequences of length $m$ and $n$ can be done in $O(mn)$ time using the Needleman-Wunsch algorithm \cite{Wunsch_time_is_over}, but the closest analogue of sequence alignment for networks is the maximum common induced subgraph problem, which we know to be NP-hard (see Table \ref{NP_classifications}). As a result, the analogy to sequence alignment must be compromised for the sake of tractability. This is least necessary in the case of very small subgraphs; searching for them exhaustively in a network is computationally expensive, but tractable. As a result, network motifs are a fairly straightforward generalization of the idea of sequence motifs\footnote{Graphlets can be thought of as a generalization of motifs; instead of counting all occurrences of a pattern and asking whether that number is abnormally high, we look at the \textit{distribution} of all occurrences of a small pattern, and take it as it is.}. For slightly larger subgraphs, a common compromise is to search for a predetermined pattern, rather than for \textit{all} frequently occurring patterns. The number of possible patterns on 2-5 vertices is small enough to make searching for all of them feasible, as in graphlets and motifs, but we can also search a less well-known network or the network of another species for a certain pathway or module already known to be significant. 
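For reference, the $O(mn)$ sequence alignment mentioned above is simple enough to sketch in full. The following minimal Needleman-Wunsch-style dynamic program scores a global alignment of two sequences; the match, mismatch, and gap scores are illustrative choices rather than biologically meaningful parameters.

\begin{verbatim}
def needleman_wunsch_score(s, t, match=1, mismatch=-1, gap=-1):
    # F[i][j] = best score for aligning s[:i] with t[:j]; O(mn) time and space
    m, n = len(s), len(t)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            pair = match if s[i - 1] == t[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + pair,   # align s[i-1] with t[j-1]
                          F[i - 1][j] + gap,        # gap in t
                          F[i][j - 1] + gap)        # gap in s
    return F[m][n]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
\end{verbatim}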
Another compromise is to construct an alignment based on a deterministic notion of which nodes are similar and which edges are conserved; since we have significant amounts of external information about the vertices (sequence similarity of proteins, etc.), this can yield useful results. Not all local alignment algorithms we observed make the compromise of searching only for predetermined patterns--although this is a common strategy--but they \textit{do} all define alignments deterministically through external information; the more general definition of local alignment compared to global alignment makes it too broad a question for network topology to answer on its own. Global alignment, on the other hand, makes the compromise of investigating the assignment problem rather than the maximum common induced subgraph problem. The assignment problem has the advantage of being fully solvable in low polynomial time via the Hungarian algorithm (and we can get good results with even cheaper methods), which allows us to handle much larger networks. The tradeoff is that our ability to find conserved subnetworks is highly dependent on the way we define our cost function for mapping pairs of nodes onto each other, and we introduce a need for heuristic measures of quality for the resulting assignment. The cost function is generally based on a combination of external (i.e. sequence-based and/or functional) and topological information about the vertices in a network. Unlike local alignment, the use of external information is frequently optional, and never the primary driver of the results. At the time IsoRank \cite{Singh_2007} was introduced in 2007, the global network alignment problem had received little attention in the literature; no papers referencing ``global alignment'' or ``global network alignment'' which address the topic as we have defined it were published any earlier than 2007. After IsoRank was introduced, however, global alignment began to receive significant attention, and most of the papers in our dataset which address biological network alignment address global alignment. The transition from local to global alignment seems to have occurred as biological network comparison strategies increasingly sought to learn from topological network structure rather than deterministically attempting to generalize sequence alignment, and as a result of increasingly accurate PPI network data and increased computational resources for network analysis. \subsection{Local alignment algorithms} \subsubsection{PathBLAST and NetworkBLAST} The PathBLAST algorithm introduced by Kelley et al. in 2004 \cite{kelley2004pathblast} and its successor NetworkBLAST \cite{Sharan_2005, Sharan_2006} both search for query pathways or query networks in a larger target network (or in the case of NetworkBLAST, multiple target networks). PathBLAST, as the name implies, is primarily designed to search for conserved pathways; it handles query networks by searching over conserved pathways. 
NetworkBLAST extends the algorithm to multiple networks by searching in an alignment network of multiple species, which it constructs deterministically by matching vertices according to the sequence similarity of their corresponding proteins and defining fixed conditions under which an edge is considered to be conserved, rather than searching over the space of possible alignments. \subsubsection{Graemlin} While NetworkBLAST does allow for searching within multiple networks, it is not an effective answer to the general problem of finding conserved modules of arbitrary topology (i.e. not just pathways) within an arbitrary number of networks. Graemlin (General and Robust Alignment of Multiple Large Interaction Networks) \cite{flannick2006graemlin} was published in 2006 with the goal of addressing this problem. Like all other local alignment algorithms, it defines an alignment between networks deterministically. In order to effectively search for an alignment across multiple networks, it uses a progressive strategy of successively aligning the closest pairs of networks, using a phylogenetic tree. At each level of the tree, it pairwise aligns the most closely related species, and uses the alignment results as the ``parent'' of each pair until it reaches the root of the tree. This is an effective and scalable method of searching for conserved structures across multiple networks, which the authors illustrate in their results; we note, however, that it depends on prior knowledge of the relationships between species. It cannot \textit{infer} a phylogenetic tree solely from topological data, as Netdis does. Graemlin 2.0, a later variant, appears as a point of comparison for PINALOG below. \subsubsection{MAWISH} 2006 also saw the introduction of an evolutionarily-inspired framework for the local alignment problem \cite{Koyuturk_2006}, due to Koyuturk et al., with the goal of extending the concepts of matches, mismatches and gaps in sequence alignment to networks. They construct an alignment graph between two networks, where each node represents a pair of ortholog proteins\footnote{Proteins in different species derived from a common ancestral gene.}, and edges between two pairs of orthologs are deterministically assigned weights which encode evolutionary information about the proteins in each pair. Instead of searching for small known patterns, however, the goal in this approach is to solve the \textit{maximum weight induced subgraph problem}: \begin{definition}\textbf{Maximum weight induced subgraph problem (MAWISH).} Given a weighted graph $G(V,E)$ with edge weights $w(v,v')$ for vertices $v,v'\in V$ and a constant $\epsilon>0$, find a subset of vertices $V^*\subset V$ such that the sum of the weights of the edges in the subgraph induced by $V^*$ is at least $\epsilon$; that is, $\sum_{v,v'\in V^*} w(v,v') \geq \epsilon$. \end{definition} Note that in order for this to be a nontrivial problem, we must have both positive and negative edge weights in $G$; otherwise the obvious solution is to simply choose all nodes in $G$. In the case of the alignment graph constructed in \cite{Koyuturk_2006}, negative edge weights are the result of evolutionarily-inspired matching penalties. This is also technically a decision problem, not an optimization problem; we aim to reach a certain goal sum of edge weights, not the absolute maximum. 
If we add the requirement that no edges in the result share a common vertex, this becomes the problem of finding a high-scoring matching within a graph, which is closely related to the bipartite graph matching problem and therefore the assignment problem. As stated, however, MAWISH is NP-complete, which can be shown by reduction from the maximum clique problem for the alignment graph defined by \cite{Koyuturk_2006} (for a non-complete graph, a maximum clique will not necessarily be a solution). Similarly to the assignment problem strategies we will discuss shortly, a reasonable solution can be found by seeding an alignment at high-scoring nodes and growing it in a greedy manner. \section{Global alignment algorithms} We previously introduced the idea of the assignment problem, where given a matrix whose entries represent the cost of assigning vertices in one network to vertices in another, we seek to find an overall assignment with a minimum total cost. In order to find an alignment using the assignment problem, we must: \begin{enumerate} \item Construct a cost matrix\footnote{We can trivially reformulate the problem to accommodate notions of either cost or similarity, and we use the terms interchangeably for the remainder of this chapter.}. \item Use that cost matrix to construct a mapping. \end{enumerate} The canonical example of (ii) is the Hungarian method, while strategies for (i) vary significantly but are usually based on some combination of topological and external information about the vertices in a network. All global alignment algorithms we observed follow this same basic template, and we frame our discussion of them accordingly. A summary of cost matrix and mapping construction strategies for global alignment algorithms can be found in Table \ref{tab:alignment_algorithms}. In this section, we present alignment algorithms from the field of pattern recognition as well as biology in order to facilitate comparison of their methods and highlight the distinctions between the two fields. This is not meant to be a comprehensive overview; we include all algorithms directly introduced in the high centrality vertices of our dataset, and have attempted to select the most important or influential of the algorithms those papers discuss. In our presentation of each algorithm, we highlight the similarity definition, the mapping construction strategy, and the advantages of each method over previous ones. 
\begin{table}[!hp] \centering {\setstretch{1}\fontsize{11}{13}\selectfont \begin{tabular}{|L{0.2\textwidth}|L{0.055\textwidth}|L{0.33\textwidth}|L{0.31\textwidth}|} \hline \textbf{Biology} & \textbf{Year} & \textbf{Similarity Scoring} & \textbf{Alignment Construction} \\ \hline\hline IsoRank \cite{Singh_2007} & 2007 & Convex combination of external information and eigenvalue problem-based topological node similarities & Maximum-weight bipartite matching OR Repeated greedy pairing of highest scores \\ \hline Natalie \cite{Klau_2009} & 2009 & Convex combination of external info-based node mapping scores and topology-based edge mapping scores & Cast as an integer linear programming problem and use Lagrangian relaxation \\ \hline GRAAL \cite{Kuchaiev_2010} & 2010 & Convex combination of graphlet signatures and local density & Greedy neighborhood alignment around highest-scoring pairs \\ \hline PINALOG \cite{phan2012pinalog} & 2012 & Only sequence and functional similarity of proteins initially, but includes topological similarity for extension mapping & Detect communities, pair similar proteins from communities, extend the mapping to their neighbors \\ \hline GHOST \cite{Patro_2012} & 2012 & Eigenvalue distributions of appropriately normalized neighborhood Laplacians & Seed-and-extend with approximate solutions to the QAP, then local search step \\ \hline SPINAL \cite{aladaug2013spinal} & 2013 & Convex combination of sequence similarity and neighbor matching-based topological similarity & Seed-and-extend with local swaps \\ \hline NETAL \cite{Neyshabur_2013} & 2013 & Update an initial scoring based on the fraction of common neighbors between matched pairs in its corresponding greedy alignment & Repeated greedy pairing of highest scores, while updating expected number of conserved interactions \\ \hline MAGNA \cite{Saraph_2014} & 2014 & Any & Improve a population of existing alignments with crossover and a fitness function \\ \hline Node fingerprinting \cite{radu2014node} & 2014 & Minimize degree differences and reward adjacency to already-matched pairs & Progressively add high-scoring pairings to an alignment and update scores\\ \hline \hline \textbf{Non-biology} & \textbf{Year} & \textbf{Similarity Scoring} & \textbf{Alignment Construction} \\ \hline\hline Node signatures \cite{Jouili_2009} & 2009 & Vertex degree and incident edge weights & Hungarian method \\ \hline Graph edit distance approximation \cite{Riesen_2009} & 2009 & Edit costs (vertex insertions, substitutions, deletions) & Generalized (non-square) Munkres' algorithm \\ \hline Modified GED approximation \cite{Serratosa_2014} & 2014 & Modification of edit costs when edit distance is a proper distance function & Generalized (non-square) Munkres' algorithm \\ \hline \end{tabular} } \caption{Broad summary of alignment algorithms discussed in this section. The distinctions between the various topological similarity scores used are discussed in each algorithm's individual section.} \label{tab:alignment_algorithms} \end{table} \subsection{Non-biology methods} In our initial introduction of the assignment problem, we discussed how an approximation of the graph edit distance (GED) can be found by searching for an optimal matching within a matrix of the costs of specific edit operations, using a modified version of the Hungarian method \cite{Riesen_2009} which accommodates graphs with different vertex counts. 
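As a concrete illustration of this recipe, the sketch below assembles the block cost matrix described earlier from user-supplied cost functions and passes it to a Hungarian-method solver. The toy unit costs are hypothetical, and a large finite constant stands in for the $\infty$ entries of the matrix.

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

LARGE = 1e9   # stands in for the infinite entries of the block matrix

def ged_cost_matrix(n, m, sub_cost, del_cost, ins_cost):
    # Layout as in the matrix shown earlier: substitutions (top left),
    # deletions (top right diagonal), insertions (bottom left diagonal),
    # zero-cost null-to-null substitutions (bottom right)
    C = np.zeros((n + m, n + m))
    for i in range(n):
        for j in range(m):
            C[i, j] = sub_cost(i, j)
    C[:n, m:] = LARGE
    C[n:, :m] = LARGE
    for i in range(n):
        C[i, m + i] = del_cost(i)
    for j in range(m):
        C[n + j, j] = ins_cost(j)
    return C

# Toy instance: 3-vertex source, 2-vertex target, unit edit costs
C = ged_cost_matrix(3, 2,
                    sub_cost=lambda i, j: 0 if i == j else 1,
                    del_cost=lambda i: 1,
                    ins_cost=lambda j: 1)
rows, cols = linear_sum_assignment(C)
print(C[rows, cols].sum())   # vertex-only approximation of the edit cost
\end{verbatim}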
An improved variant of Riesen and Bunke's 2009 algorithm was introduced by Serratosa in 2014, which uses the same modified Hungarian method, but defines a different and smaller cost matrix in the case where the edit costs result in an edit distance which is an actual distance function; that is, costs are nonnegative, substitution of identical-attribute nodes has zero cost, insertion and deletion have the same cost, and substitution costs no more than performing both an insertion and a deletion. In the pattern recognition portion of our reading list, we saw one other assignment-problem-style method, from Jouili and Tabbone in 2009 \cite{Jouili_2009}. It uses an extremely basic notion of node similarity and the Hungarian method, with decent though not particularly impressive results. More notable is the fact that it was published in 2009, two years after the introduction of IsoRank; it does not seem that the insights gained from the study of global alignment methods in biology were widely known by computer scientists at that time. This was less the case as of 2014, when Kollias et al. \cite{Kollias_2014} introduced an adapted, parallelized version of IsoRank itself, which they used to perform global alignment of networks two orders of magnitude larger than previously possible--up to about a million nodes--but the influence of biological strategies in computer science overall still seems to be limited. \subsection{IsoRank} IsoRank was the pioneering global network alignment method. Introduced by Singh et al. in 2007 for the alignment of two networks \cite{Singh_2007}, and extended to the alignment of multiple networks in 2008 \cite{Singh_2008}, it has remained a common benchmark for the performance of subsequent algorithms, with all other (biological) global alignment strategies discussed here using it for comparison. The authors calculate a similarity score between nodes by linearly interpolating between sequence similarity scores of proteins and topological similarity scores. The topological similarity score between vertex $i$ in the source network $V_1$ and vertex $j$ in the target network $V_2$ is defined to be the sum of the similarity scores for their neighbors, proportional to the total number of possible neighbor pairings. That is, we solve the eigenvalue problem \[R_{ij} = \sum_{u\in N(i)} \sum_{v\in N(j)} \frac{R_{uv}}{|N(u)||N(v)|}, \text{ } i\in V_1, j\in V_2, \] where $N(u)$ is the set of neighbors of vertex $u$. The overall similarity score between vertices $i$ and $j$ is then the solution of the eigenvalue problem \[R = \alpha AR + (1-\alpha) E, \alpha \in [0,1], \] where $E$ is a normalized vector of pairwise sequence similarity scores and $A$ is a doubly indexed $|V_1||V_2|\times |V_1||V_2|$ matrix where $A[i,j][u,v] = 1/(|N(u)||N(v)|)$ if there is an edge from vertex $i$ to $u$ in $V_1$ and from vertex $j$ to $v$ in $V_2$, and zero otherwise; $A[i,j][u,v]$ refers to the entry at row $(i,j)$ and column $(u,v)$. The parameter $\alpha$ controls the weight of the topological data compared to the sequence data in the overall similarity scores. This eigenvalue problem is solved via the power method, and Singh et al. discuss two methods to construct an alignment: either construction of the maximum-weight bipartite matching, or a greedy method which repeatedly removes the highest-scoring pairs from consideration until the alignment is finished, and which the authors found to sometimes perform even better than the more principled algorithm. 
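A minimal sketch of this score computation is given below, written directly on an $|V_1|\times|V_2|$ score matrix rather than on the doubly indexed matrix $A$. NetworkX and NumPy, the particular value of $\alpha$, the normalization step, and the guard against isolated vertices are all illustrative choices rather than details of the original algorithm.

\begin{verbatim}
import numpy as np
import networkx as nx

def isorank_scores(G1, G2, E, alpha=0.6, tol=1e-9, max_iter=1000):
    # E: |V1| x |V2| matrix of (normalized) pairwise sequence-similarity scores
    # Power-method style iteration of R = alpha*A*R + (1-alpha)*E, using the
    # identity (A1 (D1^-1 R D2^-1) A2^T)[i,j]
    #        = sum over u in N(i), v in N(j) of R[u,v] / (|N(u)||N(v)|)
    A1 = nx.to_numpy_array(G1)
    A2 = nx.to_numpy_array(G2)
    d1 = np.maximum(A1.sum(axis=1), 1)   # guard against isolated vertices
    d2 = np.maximum(A2.sum(axis=1), 1)
    R = np.full((len(d1), len(d2)), 1.0 / (len(d1) * len(d2)))
    for _ in range(max_iter):
        R_new = alpha * A1 @ (R / np.outer(d1, d2)) @ A2.T + (1 - alpha) * E
        R_new /= R_new.sum()             # renormalize, as in the power method
        if np.abs(R_new - R).max() < tol:
            return R_new
        R = R_new
    return R
\end{verbatim}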
Once all nodes are aligned, the conserved edges are simply those whose endpoints are both paired to each other in the mapping. A later variant, IsoRank-N, builds on this approach for the simultaneous alignment of multiple networks. \subsection{Natalie} In 2009, Gunnar Klau introduced the maximum structural matching formulation for pairwise global network alignment \cite{Klau_2009}, combined with a Lagrangian relaxation-based algorithm for solving it which was made available as the software tool NATALIE. Given two networks $G_1(V_1,E_1)$ and $G_2(V_2,E_2)$, a scoring function $\sigma:V_1\times V_2\rightarrow \R_{\geq 0}$ for mapping individual nodes onto each other, and a scoring function $\tau:(V_1\times V_2)\times (V_1\times V_2)\rightarrow\R_{\geq 0}$ for mapping pairs of nodes (i.e. edges) onto each other, a \textbf{maximum structural matching} of $G_1$ and $G_2$ is a mapping $M=\{M_i\}_{i=1}^n$ between the nodes (where each $M_i$, $i\in \{1,\dots,n\}$ and $n\leq \min\{|V_1|,|V_2|\}$, is a unique pair of nodes, one from each graph) which maximizes \[s(M) = \sum_{i=1}^n \sigma(M_i) + \sum_{i=1}^n \sum_{j=i+1}^n \tau(M_i,M_j). \] The maximization of $s(M)$ is then cast as a non-linear integer programming problem, which is then reformulated as an integer linear program using Lagrangian decomposition, and solved via Lagrangian relaxation. Like IsoRank, this is an initial paper which serves primarily as a proof of concept; the specific $\sigma$ and $\tau$ functions used for the NATALIE algorithm are not elaborate. The authors define $\sigma$ to be $-\infty$ for proteins which are not potential orthologs according to an arbitrary sequence similarity threshold (effectively forbidding such pairings), and zero otherwise, while $\tau$ is 1 for vertex pairs that correspond to an edge in both $V_1$ and $V_2$, and zero otherwise. An improved algorithm using the same integer linear programming framework was published in 2011 and made available as NATALIE 2.0 \cite{el2011lagrangian}. \subsection{GRAAL} Introduced by Kuchaiev et al. in 2010 \cite{Kuchaiev_2010}, GRAAL (GRAphlet ALigner) is unique in that it incorporates \textit{only} topological information into its node similarity scores. These similarity scores are based on the \textbf{graphlet degree signature} of each node, which is simply a vector of the number of each type of graphlet that the node touches. As with a graphlet degree distribution, we distinguish between different automorphism orbits of each graphlet. Each of these 73 orbits is assigned a weight $w_i$ that accounts for dependencies between orbits\footnote{For example, differences in counts of orbit 3 will imply differences in counts of all orbits that contain a triangle, and it is therefore assigned a higher weight.}, and the \textit{signature distance} between nodes $u\in G$ and $v\in H$ is then \[D(u,v) = \frac{1}{\sum_{i=0}^{72} w_i} \sum_{i=0}^{72} \left[ w_i \times \frac{|\log(u_i + 1) - \log(v_i+1) |}{\log (\max\{u_i, v_i\} + 2)} \right], \] where $u_i$ is the number of times a node $u$ is touched by orbit $i$ in a graph $G$; the signature similarity is then $1 - D(u,v)$. The overall similarity between two nodes is then their signature similarity, scaled according to their relative degrees in order to favor aligning the densest parts of the networks first. To construct an alignment using this similarity score matrix, GRAAL chooses an initial high-scoring pair, and then builds neighborhoods of all possible radii around each member of the pair. These are aligned using a greedy strategy. 
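The signature distance above reduces to a few lines of array arithmetic. In the sketch below, the orbit-count vectors and the orbit weights $w_i$ are taken as given; computing the counts themselves requires a separate graphlet-counting step that is not shown.

\begin{verbatim}
import numpy as np

def signature_distance(u_counts, v_counts, weights):
    # u_counts, v_counts: length-73 graphlet degree signatures of two nodes
    # weights: the orbit weights w_i (taken as given)
    u = np.asarray(u_counts, dtype=float)
    v = np.asarray(v_counts, dtype=float)
    w = np.asarray(weights, dtype=float)
    per_orbit = w * np.abs(np.log(u + 1) - np.log(v + 1)) \
                  / np.log(np.maximum(u, v) + 2)
    return per_orbit.sum() / w.sum()

def signature_similarity(u_counts, v_counts, weights):
    return 1.0 - signature_distance(u_counts, v_counts, weights)
\end{verbatim}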
If this greedy neighborhood alignment does not result in a match for all the vertices in the smaller network, the same strategy is repeated for the graph $G^2$ (whose edges run between nodes connected by a path of length up to 2 in $G$), $G^3$, and so on until all the nodes in the smaller network are aligned. This has the advantage of being well suited for networks of very different size, for which a typical greedy alignment such as that used in IsoRank is unlikely to produce a connected aligned subgraph in the larger graph. Most global alignment strategies published after GRAAL use some variation on a seed-and-extend-neighborhoods strategy like this one in order to favor connected components in the alignment result. The authors compare GRAAL to IsoRank and show greatly improved results for edge conservation and connected component size in the alignment of the PPI networks for yeast and fly. Later variants include H-GRAAL (2010), which constructs its alignment via the Hungarian algorithm, and MI-GRAAL (2011), which incorporates additional topological metrics and retains the seed-and-extend strategy but uses the Hungarian algorithm to compute the assignment between local neighborhoods. Like IsoRank, MI-GRAAL is a commonly used benchmark for later algorithms. \subsection{PINALOG} Introduced by Phan and Sternberg in 2012 \cite{phan2012pinalog}, PINALOG (Protein Interaction Network ALignment through Ontology of Genes, presumably) computes a global alignment between protein interaction networks in three steps: it detects communities within each network by merging adjacent cliques, maps highly similar proteins from these communities onto each other using the Hungarian method with vertex similarity scores based on the sequence and functional similarity of their corresponding proteins, and finally extends the mapping to these highly similar proteins' neighbors to obtain the remainder of the alignment. The extension of the mapping to the neighbors of similar communities incorporates topological similarity as well as the external information of sequence and functional similarity scores in order to get a matrix of similarity scores, from which optimal pairings are selected using the Hungarian method. The authors compare their method to IsoRank, Graemlin 2.0, and MI-GRAAL with respect to various metrics and for various tasks. PINALOG was able to conserve more interactions than IsoRank, but fewer than MI-GRAAL, and showed a much higher conservation of interactions with functional similarity than either, which is unsurprising given its incorporation of functional similarity scores into the algorithm. \subsection{GHOST} Introduced by Patro and Kingsford in 2012 \cite{Patro_2012}, GHOST\footnote{According to the author, this is wordplay based on ``spectral'' rather than an abbreviation or acronym, and intended to allow colloquial usage of the name as a verb, as is common with the BLAST algorithm.} is a pairwise network alignment strategy which interpolates between topological and sequence distance information to get its overall node similarity scores. Its topological distance scores are based on the density functions of the spectra of the normalized Laplacians of various-radius neighborhoods of a given vertex. Densities are used, rather than the spectra themselves, because the length of each spectrum varies according to the size of the neighborhood it originates from. The distances between spectral densities for two vertices are averaged over several neighborhood radii to produce the final topological distance between them. 
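A sketch of the spectral signature underlying these scores is given below; \texttt{ego\_graph} and \texttt{normalized\_laplacian\_matrix} are standard NetworkX routines, while the density estimation and the averaging over radii described above are omitted for brevity.

\begin{verbatim}
import numpy as np
import networkx as nx

def neighborhood_spectrum(G, v, radius):
    # Induced subgraph on all vertices within `radius` hops of v
    nbhd = nx.ego_graph(G, v, radius=radius)
    # Eigenvalues of the neighborhood's normalized Laplacian (all lie in [0, 2])
    L = nx.normalized_laplacian_matrix(nbhd).toarray()
    return np.sort(np.linalg.eigvalsh(L))

# A GHOST-style comparison would estimate a density from each spectrum and
# average the resulting distances over several radii.
\end{verbatim}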
To align two networks using these scores, GHOST seeds regions of an alignment with high-scoring pairs of nodes from the different networks and then extends the alignment around the neighborhoods of the two nodes, matching the neighborhoods by computing an approximate solution to the quadratic assignment problem\footnote{The QAP is just like the assignment problem, except that the objective function is quadratic in the assignment variables rather than linear.} (QAP). This process continues until all nodes from the smaller network have been aligned to a node in the larger network, at which point regions of the solution space around the initial result are explored to potentially find a better solution. The authors evaluate the performance of GHOST against IsoRank, GRAAL, MI-GRAAL, H-GRAAL, and NATALIE 2.0. They find that GHOST is able to compute alignments of good topological and biological quality between different species; IsoRank and MI-GRAAL generally achieve only one or the other, and while NATALIE 2.0 improves on both IsoRank and MI-GRAAL in this regard, it is dominated by GHOST in topological quality when biological quality levels are the same. The most distinct advantage of GHOST is its robustness to noise; it is able to maintain high node and edge correctness in its alignments of an increasingly noisy yeast PPI network, while the correctness of the other algorithms deteriorates. \subsection{SPINAL} In SPINAL (Scalable Protein Interaction Network ALignment), introduced by Alada\u{g} and Erten in 2013 \cite{aladaug2013spinal}, as in IsoRank and GHOST, similarity scores are a convex combination of a topological similarity score and a sequence similarity score for a pair of proteins. Topological similarity scores are computed based on the maximum number of potentially conserved edges between the neighbors of a potential matching pair, rather than simply being scaled by the product of their degrees as in IsoRank. The score matrix is calculated iteratively using a gradient-based method. The alignment is then constructed with a seed-and-extend method; we seed the alignment at the highest-scoring unaligned pairs, and then grow connected components of the alignment around them by constructing a maximum weight matching for their neighbors (i.e. a local solution to the bipartite graph matching problem). Finally, we check if any improvements can be made via local swaps. The authors extensively compare SPINAL to IsoRank and MI-GRAAL, showing improved accuracy on the PPI networks of various organisms and noting SPINAL's reasonable running times compared to both. \subsection{NETAL} NETAL (NETwork ALigner) \cite{Neyshabur_2013}, published in 2013, uses a fairly typical strategy for defining similarity scores, iteratively updating an initial scoring based on the fraction of common neighbors between matched pairs in its corresponding greedy alignment. The authors define their scoring schema for both topological and external biological information, but do not use the biological score matrix in the calculation of their results. In the example they give, the topological score between a vertex in one network and a vertex in the other appears to simply be the smaller degree of the two divided by the larger, but it is unclear from their description of the algorithm whether this is always the case. 
Their alignment construction is also typical; high-scoring node pairs are iteratively added to the alignment while their corresponding rows and columns are removed from the similarity scoring matrix. The novel feature of NETAL is the concept of an interaction score matrix, which approximates the expected number of conserved edges obtained from aligning a certain vertex pair and which is updated as we add node pairs to the alignment. The overall similarity matrix at each stage is then a convex combination $A = \lambda T + (1-\lambda) I$ of the topological similarity matrix $T$ and the interaction score matrix $I$. The authors compare NETAL to IsoRank, GRAAL, and MI-GRAAL, using $\lambda=0.0001$ (about the inverse of the number of nodes in the larger network), and show improved results with respect to edge correctness and with respect to the largest common connected (not necessarily induced) subgraph in the result, and improved robustness to noise compared to MI-GRAAL. The noise experiments are conducted on the same yeast dataset as GHOST's, and GHOST has much better noise robustness, but it is unclear if the results are truly comparable, as the noise results NETAL reports for MI-GRAAL are significantly worse than those reported by GHOST. The main advantage of NETAL is its speed. Its time complexity\footnote{This complexity result assumes $m \simeq n\log n$, which in the case of biological networks is an acceptable assumption.} is $O(n^2\log^2n)$, compared to $O(n^5)$ for the GRAAL family; as a result, NETAL's runtime was hundreds of times lower than that of MI-GRAAL and GRAAL for the experiments performed.
\subsection{MAGNA}
Introduced by Saraph and Milenkovi\'{c} in 2014 \cite{Saraph_2014}, MAGNA (Maximizing Accuracy in Global Network Alignment) is unique among the alignment strategies we observed in that it is not a specific alignment method, but rather a technique for improving upon existing alignments, which can be generated using any method. The key idea is the observation that existing alignment methods align similar nodes in hopes of maximizing the number of conserved edges in the final alignment, since directly maximizing the number of conserved edges is intractable. The authors use a genetic algorithm to improve upon a population of existing alignments, where the most ``fit" alignments are those which maximize the edge conservation in both the source and target networks being aligned. Their (novel) fitness function achieves this by penalizing both the mapping of sparse regions to dense regions and the mapping of dense regions to sparse regions, unlike previous alignment quality scoring methods. They also introduce a novel crossover method which takes the midpoint of the shortest path of transpositions between two parent alignments, which are sampled from a probability distribution based on the fitness of all alignments in the population. This method is shown by the authors to improve the results of initial populations of alignments generated randomly and by IsoRank, MI-GRAAL, and GHOST. Overall, it is able to outperform all these methods with respect to both node and edge conservation, and topological and biological alignment quality.
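To make the crossover operation concrete, the following Python sketch (our own illustration, not the authors' code) assumes the two networks have equal size so that each parent alignment can be viewed as a permutation of node indices; it finds a shortest sequence of transpositions turning one parent into the other by cycle-following and applies only the first half of that sequence to obtain the ``midpoint" child. Sampling the parents according to fitness and repeating this over a population gives the flavor of the genetic search described above.
\begin{verbatim}
def magna_style_crossover(p, q):
    # p and q are parent alignments encoded as permutations (lists of ints).
    pos = {v: i for i, v in enumerate(p)}   # current position of each value
    work = list(p)
    swaps = []                              # a shortest transposition path p -> q
    for i, target in enumerate(q):
        if work[i] != target:
            j = pos[target]
            swaps.append((i, j))
            pos[work[i]], pos[work[j]] = j, i
            work[i], work[j] = work[j], work[i]
    child = list(p)                         # apply half of the path: the midpoint
    for i, j in swaps[:len(swaps) // 2]:
        child[i], child[j] = child[j], child[i]
    return child
\end{verbatim}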
\subsection{Node Fingerprinting} Node fingerprinting (NF) was introduced by Radu and Charleston in 2014 \cite{radu2014node}, and it aims to quickly compute accurate alignments between two networks in a parallelizable manner without the need to rely on external information such as protein sequence alignment similarity scores or on tunable network alignment parameters which can introduce an increased computational overhead. Their approach allows for the inclusion of such external information, but the authors choose to run experiments without it in order to avoid the circularity of using sequence information to both carry out and validate an alignment. Like SPINAL and NETAL, the similarity scores used in node fingerprinting are updated throughout the process of constructing an alignment. These \textit{pairing scores} are based on the relative differences in the in and out degrees (or simply the degree for an undirected network) of the neighbors of a potential pair to be matched; this difference is meant to be minimized. The scoring function adds a bonus for node pairings that are adjacent to already mapped node pairs, and a penalty for nodes with differing in or out degrees. The algorithm then repeatedly adds the node pairings with above average scores to the alignment and recalculates pairing scores until a complete alignment has been reached. Like MAGNA, the authors compare their algorithm to IsoRank, MI-GRAAL, and GHOST. They run experiments on both real and synthetic data and show equal or improved accuracy, especially for large networks which contain more structural information than smaller ones. In some experiments using smaller \textit{Human Herpesvirus} networks, GHOST or MI-GRAAL is able to outperform NF, but at a greatly increased runtime and memory cost, while NF still returns reasonable results. The advantage of this method is thus primarily in the analysis of very large networks, where it is able to take advantage of increased structural information at a low overall computational cost. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 6 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Conclusion} In this work, we set out to present an integrative overview of network comparison in pattern recognition and in systems biology, and we conclude here by discussing the high-level distinctions between the two fields and potential cross-applications for further study. We also reiterate the value of our citation-network based approach in the process of writing this survey. In pattern recognition, we followed methods from the strictest possible definition of similarity between two graphs--subgraph isomorphism, which we determine by way of exact matching--through edit distances to approximate formulations and the assignment problem. We then introduced the differences in problem types between pattern recognition and biology, and followed biological network comparison strategies from univariate statistics to graphlets and motifs to local and global alignment. A clear theme in both fields is the degree to which the graphs and networks being investigated inform the approaches used. 
In the context of large, highly meaningful real-world networks which we know to have strong relationships with each other and expect to be composed of meaningful fundamental structures, it makes sense to investigate basic statistics and measure the occurrences of small patterns, but this might not help us much in finding the ``closest" graph to a specific target graph in a large database of many distinct graphs, all of which are a convenient data structure rather than having their own real-world meaning. Alignment strategies in pattern recognition do not even consider the idea of incorporating external information into their definition of node similarity, while in biological applications, it is indispensable--in constructing a notion of similarity or evaluating the quality of the result or both. In graph matching, the only measure of quality we have seen for a mapping is how well it tells us whether two graphs are isomorphic; if we can find an isomorphism, it is the only ``right answer" we need. In biology, however, alignments seek regions of similarity to find functional significance; even if we could easily find subgraph isomorphisms between two networks, they would be useless if they failed to reflect the conserved structures in the two species those networks represent. How can we connect the ideas in these fields, despite their disparate problem types and methods, and how might they learn from each other? The concept of graphlets could potentially be highly applicable to pattern recognition applications, and particularly towards the task of searching for a close match in a large database. We have seen how different two networks can be in one graphlet degree distribution despite being completely identical in another, and it would likely be quite difficult for two graphs, no matter how large or small, to match each other precisely in all 73 without being isomorphic. Enumerating the graphlet counts in a small graph and storing even a few relevant statistics from each distribution could give us a quick way to narrow down isomorphism candidates in a large database. At the time of this writing, this question does not appear to have been addressed in the pattern recognition literature; the term ``graphlets" is used, but not in the same sense. Similar ideas have likely been explored, but the details, theory, and available algorithms for graphlets as we have defined them could be a useful resource. In larger networks, graphlet degree distributions could inspire random network models as the original degree distribution inspired the idea of ``scale-free" networks, and corresponding generation models for them. When properties such as scale-freeness, small-worldness, and clustering appear in real-world networks across disparate applications, we can gain better insights into how those properties arise, improve our random network models, and gain a better sense of what \textit{does} distinguish different categories of networks. We also suspect that the heuristic strategies of global biological network alignment could prove useful for non-biological alignment algorithms. The use of the assignment problem in pattern recognition seems fairly niche, despite being a dominant and well-explored area of investigation in the study of PPI networks, and computer scientists could likely find inspiration in its techniques.
The graphs used in pattern recognition might share the property of biological networks that connectivity is a desirable property in the mapping result; the seed-and-extend strategies used to construct well-connected alignments might be an effective strategy for quickly ruling out or narrowing down search areas for a potential subgraph isomorphism between a small graph and a much larger one. A more far-fetched application could be to apply the techniques used to understand biological networks (protein interaction networks are the most studied, it seems, much as DNA is the dominant type of sequence studied, but the techniques discussed are certainly not limited to them) to understanding the results of deep neural nets, which--like biological neural networks--are frustratingly opaque. Alignment strategies are the clear analogue, but the computational resources involved may be prohibitive for such an analysis, especially as the size of networks used in deep learning grows in concert with the computational resources available. They are also regular enough in structure to make the prospects of interesting results based on their graphlet degree distributions or a motif search dim, but eliminating edges corresponding to low weights in a well-trained deep neural net to get a less deterministic structure could be interesting. All in all, the connections we have drawn and our organization of the ideas in this work would not have been possible without the context and guidance of the citation network we used to facilitate our reading. It made it immediately clear that there were two main fields we ought to discuss, and the exhaustive search of its construction process allowed us to be reasonably sure that we were looking at the field as a whole without missing any truly key ideas. Simply paying attention to how the other papers from our reading list were discussed by their neighbors--knowing at all times why and how each is relevant--allowed us to make our own judgments on various authors' claims about which methods and concepts in the field are most important and how they should be classified, and we believe this presentation has benefited greatly as a result.
%%%%%%%%%%%%%%%%%%%%%%%%% GLOSSARY, APPENDIX AND END MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\appendix
\chapter{Additional Figures and Tables}\label{chapter:appendices}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{display_sciMet.png}
\caption{The sciMet network dataset used in our Table \ref{tab:network_table} comparison to $G$. Note the high number of connected components, and low number of children per parent, in contrast to the ``blooming" behavior in $G$ created by the inclusion of ALL child references from each parent paper.}
\label{fig:sciMet}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{display_zewail.png}
\caption{The zewail citation network dataset used in our Table \ref{tab:network_table} comparison to $G$.
Note the high number of connected components, and low number of children per parent, in contrast to the ``blooming" behavior in $G$ created by the inclusion of ALL child references from each parent paper.} \label{fig:zewail} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{full_network_color_coded.png} \caption{The full network $G_p$, with vertices colored according to their subject label as in Figure \ref{fig:subject_color_coded}.} \label{fig:full_subject_color_coded} \end{figure} \begin{sidewaysfigure} \centering \includegraphics[width=0.85\textwidth]{reading_list_subject_colored_CROPPED.png} \caption{The subnetwork $S$ of high centrality papers, as listed in Tables \ref{tab:toppapers_all}, \ref{tab:toppapers_bio}, and \ref{tab:toppapers_CS}, with vertices colored according to their subject label. Grey vertices are unlabeled, pink is computer science, blue is biology, yellow is math, orange is both math and computer science, and purple is both computer science and biology.} \vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed. \label{fig:reading_list_subject_colored} \end{sidewaysfigure} \begin{singlespace} \begin{longtable}{|L{0.095\textwidth}|C{0.04\textwidth}|C{0.045\textwidth}|C{0.045\textwidth}|L{0.7\textwidth}|} \hline Subject & $N$ & $G_R^{(1)}$ & $G_R^{(2)}$ & Title \\ \hline \hline \endhead None & 22 & 15 & 7 & Fifty years of graph matching, network alignment and network comparison \\ \hline None & 15 & 10 & 5 & Networks for systems biology: conceptual connection of data and function \\ \hline Biology & 7 & 7 & 0 & Global network alignment using multiscale spectral signatures \\ \hline None & 13 & 1 & 12 & Error correcting graph matching: on the influence of the underlying cost function \\ \hline None & 10 & 10 & 0 & MAGNA: Maximizing Accuracy in Global Network Alignment \\ \hline None & 4 & 4 & 0 & Graphlet-based measures are suitable for biological network comparison \\ \hline None & 7 & 6 & 1 & On Graph Kernels: Hardness Results and Efficient Alternatives \\ \hline Biology & 8 & 8 & 0 & Pairwise Alignment of Protein Interaction Networks \\ \hline None & 6 & 6 & 0 & Alignment-free protein interaction network comparison \\ \hline Biology & 7 & 7 & 0 & Biological network comparison using graphlet degree distribution \\ \hline None & 10 & 10 & 0 & NETAL: a new graph-based method for global alignment of protein-protein interaction networks \\ \hline CS/Math & 10 & 4 & 6 & Computers and Intractability: A Guide to the Theory of NP-Completeness \\ \hline None & 16 & 1 & 15 & Recent developments in graph matching \\ \hline None & 10 & 10 & 0 & Modeling cellular machinery through biological network comparison \\ \hline CS & 7 & 2 & 5 & A graph distance metric based on the maximal common subgraph \\ \hline None & 24 & 1 & 23 & Thirty years of graph matching in pattern recognition \\ \hline None & 6 & 6 & 0 & Collective dynamics of ``small-world" networks \\ \hline None & 13 & 11 & 2 & A new graph-based method for pairwise global network alignment \\ \hline None & 12 & 4 & 8 & An Algorithm for Subgraph Isomorphism \\ \hline CS & 8 & 2 & 6 & On a relation between graph edit distance and maximum common subgraph \\ \hline None & 7 & 7 & 0 & Topological network alignment uncovers biological function and phylogeny \\ \hline CS & 7 & 0 & 7 & A linear programming approach for the weighted graph matching problem \\ \hline None & 10 & 0 & 10 & An 
eigendecomposition approach to weighted graph matching problems \\ \hline CS & 8 & 0 & 8 & A graduated assignment algorithm for graph matching \\ \hline None & 9 & 0 & 9 & A new algorithm for subgraph optimal isomorphism \\ \hline CS & 9 & 0 & 9 & A distance measure between attributed relational graphs for pattern recognition \\ \hline CS & 5 & 0 & 5 & Inexact graph matching for structural pattern recognition \\ \hline CS & 6 & 2 & 4 & A new algorithm for error-tolerant subgraph isomorphism detection \\ \hline CS & 6 & 0 & 6 & Structural Descriptions and Inexact Matching \\ \hline CS & 6 & 0 & 6 & A shape analysis model with applications to a character recognition system \\ \hline CS & 9 & 0 & 9 & A graph distance measure for image analysis \\ \hline CS & 8 & 0 & 8 & Structural matching in computer vision using probabilistic relaxation \\ \hline CS & 6 & 0 & 6 & Hierarchical attributed graph representation and recognition of handwritten chinese characters \\ \hline CS & 4 & 0 & 4 & Linear time algorithm for isomorphism of planar graphs (Preliminary Report) \\ \hline CS/Bio & 6 & 5 & 1 & Pairwise Global Alignment of Protein Interaction Networks by Matching Neighborhood Topology \\ \hline None & 6 & 6 & 0 & Conserved patterns of protein interaction in multiple species \\ \hline None & 5 & 0 & 5 & Approximate graph edit distance computation by means of bipartite graph matching \\ \hline None & 8 & 4 & 4 & Local graph alignment and motif search in biological networks \\ \hline None & 6 & 6 & 0 & Network Motifs: Simple Building Blocks of Complex Networks \\ \hline None & 8 & 1 & 7 & The Hungarian method for the assignment problem \\ \hline CS & 9 & 0 & 9 & Graph Matching Based on Node Signatures \\ \hline None & 7 & 0 & 7 & Exact and approximate graph matching using random walks \\ \hline None & 6 & 6 & 0 & Global alignment of multiple protein interaction networks with application to functional orthology detection \\ \hline None & 10 & 9 & 1 & Fast parallel algorithms for graph similarity and matching \\ \hline CS & 9 & 0 & 9 & Fast computation of Bipartite graph matching \\ \hline CS & 4 & 0 & 4 & Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \\ \hline CS & 4 & 0 & 4 & A Probabilistic Approach to Spectral Graph Matching \\ \hline CS & 3 & 0 & 3 & Graph matching applications in pattern recognition and image processing \\ \hline None & 10 & 1 & 9 & The graph matching problem \\ \hline Biology & 4 & 4 & 0 & Graph-based methods for analysing networks in cell biology \\ \hline CS & 10 & 4 & 6 & BIG-ALIGN: Fast Bipartite Graph Alignment \\ \hline None & 7 & 0 & 7 & A (sub)graph isomorphism algorithm for matching large graphs \\ \hline Biology & 4 & 4 & 0 & Complex network measures of brain connectivity: Uses and interpretations \\ \hline CS & 7 & 0 & 7 & Efficient Graph Similarity Search Over Large Graph Databases \\ \hline 3 & 4 & 3 & 1 & Demadroid: Object Reference Graph-Based Malware Detection in Android \\ \hline None & 2 & 2 & 0 & Indian sign language recognition using graph matching on 3D motion captured signs \\ \hline None & 6 & 5 & 1 & Predicting Graph Categories from Structural Properties \\ \hline None & 10 & 10 & 0 & Survey on the Graph Alignment Problem and a Benchmark of Suitable Algorithms \\ \hline CS/Math & 12 & 0 & 12 & Efficient Graph Matching Algorithms \\ \hline CS & 2 & 2 & 0 & Early Estimation Model for 3D-Discrete Indian Sign Language Recognition Using Graph Matching \\ \hline None & 1 & 0 & 1 & Unsupervised Domain Adaptation Using 
Regularized Hyper-Graph Matching \\ \hline \caption{Number of vertices in the neighborhood of each paper in $G_R$, and how many vertices in it lie on each side of the partition.} \label{tab:neighborhood_partition_counts} \end{longtable} \end{singlespace} \begin{table}[t] \centering \begin{tabular}{|l|r|r|r|} \hline & $G_R$ & $G_R^{(1)}$ & $G_R^{(2)}$ \\ \hline Total vertices & 61 & 27 & 34 \\ \hline Untagged & 30 & 8 & 22 \\ \hline Tagged & 31 & 19 & 12 \\ \hline CS & 24 & 2 & 22 \\ \hline Biology & 6 & 6 & 0 \\ \hline Math & 3 & 1 & 2 \\ \hline Both CS and biology & 1 & 1 & 0 \\ \hline Both CS and math & 2 & 0 & 2 \\ \hline Both biology and math & 0 & 0 & 0 \\ \hline All three & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Number of vertices tagged as computer science, biology, math, or some combination of these in the reading list subnetwork $G_R$, and its intersections $G_R^{(1)}$ and $G_R^{(2)}$ with the two halves of the partition $G_p^{(1)}$ and $G_p^{(2)}$. See Table \ref{tab:subject_counts}.} \label{tab:reading_list_subject_counts} \end{table} \begin{table}[t] \centering \begin{tabular}{| l | l |} \hline Usage & Package Name(s) \\ \hline Mathematical Computation & NumPy \\ & NetworkX \\ \hline Figure creation* & Matplotlib \\ \hline File I/O Handling & csv\\ & glob \\ & re \\ \hline API Request Handling & urllib \\ & Requests \\ & time \\ \hline Interfacing with Google Sheets & gspread$^\Diamond$ \\ & oauth2client$^\Diamond$ \\ \hline The \verb+defaultdict+ datatype & collections \\ \hline Interfacing my own modules with Jupyter notebooks & importlib$^\Diamond$ \\ \hline \end{tabular} \caption{Python packages used for the project. All but those marked with a $\Diamond$ are found in either the standard library or available in Anaconda for Python 3.6 on 64-bit Windows in mid-2018, and the remainder can be installed via pip.} \footnotesize *Only Figure \ref{fig:year_distributions} was created in Python. The remainder were made in Mathematica. 
\label{tab:python_packages} \end{table} %\begin{figure}[p] %\textbf{Initial loading of $G$, sciMet, and zewail GML files} %\begin{verbatim} %G = Import["citation_network.gml"] %sciMet = Import["sciMet_dataset.gml"] %zewail = Import["zewail_dataset.gml"] %\end{verbatim} %\textbf{Creating the pruned network $G_p$.} %\begin{verbatim} %parents = Position[VertexOutDegree[G], _?(# > 0 &)] // Flatten; %popular = Position[VertexInDegree[G], _?(# > 1 &)] // Flatten; %pruned = Subgraph[G, Union[parents, popular], Options[G]]; %Gp = Subgraph[pruned, WeaklyConnectedComponents[pruned][[1]], % Options[pruned]] %\end{verbatim} %\textbf{Creating the random network $R$.} %\begin{verbatim} %R = RandomGraph[{VertexCount[G], EdgeCount[G]}, DirectedEdges -> True] %\end{verbatim} %\textbf{Creating the random network $R_d$ with the same degree sequences as $G$.} %\begin{verbatim} %outStubs = VertexOutDegree[G]; %inStubs = VertexInDegree[G]; %vertexlist = Range[VertexCount[G]]; %edgelist = {}; %For[i = 1, i <= EdgeCount[g], i++, % source = RandomSample[outStubs -> vertexlist, 1][[1]]; % target = RandomSample[inStubs -> vertexlist, 1][[1]]; % outStubs[[source]] -= 1; % inStubs[[target]] -= 1; % AppendTo[edgelist, source -> target];] %Rd = Graph[vertexlist, edgelist]; %\end{verbatim} %\caption{The Mathematica code used to create the networks analyzed in Table \ref{tab:network_table}.} %\label{fig:network_creation_source_code} %\end{figure} \begin{table}[p] \centering \begin{tabular}{| l | l | l |} \hline \textbf{Computer Science} & \textbf{Biology} & \textbf{Mathematics} \\ \hline\hline ACM & Biochem- & Algebra \\ Algorithm & Biocomputing & Algorithm \\ Artificial Intelligence & Bioengineering & Chaos \\ CIVR & Bioinformatic & Combinatori- \\ Computational Intelligence & Biological & Fixed Point \\ Computational Linguistics & Biology & Fractal \\ Computer & Biomedic- & Functional Analysis \\ Computer Graphics & Biosystem & Geometr- \\ Computer Science & Biotechnology & Graph \\ Computer Vision & Brain & Kernel \\ Data & Cancer & Linear Regression \\ Data Mining & Cardiology & Markov \\ Document Analysis & Cell & Mathemati- \\ Electrical Engineering & Disease & Multivariate \\ Graphics & DNA & Network \\ IEEE & Drug & Optimization \\ Image Analysis & Endocrinology & Permutation Group \\ Image Processing & Epidemiology & Probability \\ Intelligent System & Genetic & Riemann Surface \\ Internet & Genome & SIAM \\ ITiCSE & Genomic & Statistic- \\ Language Processing & Medical & Topology \\ Learning & Medicinal & Wavelet \\ Machine Learning & Medicine & \\ Machine Vision & Metabolic & \\ Malware & Microbiology & \\ Neural Network & Molecular & \\ Pattern Recognition & Neuro- & \\ Robotic & Neurobiological & \\ Scientific Computing & Pathology & \\ SIAM & Pathogen & \\ Signal Processing & Pharma- & \\ Software & Plant & \\ World Wide Web & Protein & \\ & Proteom- & \\ & Psych- & \\ & Psychology & \\ & Virology & \\ & Virus & \\ \hline \end{tabular} \caption{Keywords used to tag journal names as various subjects.} \vspace{-6pt}\flushleft\footnotesize *Note: Both a term and its plural are considered a match, and hyphens indicate a word with several ending variations which were all considered to be associated with the tag. While the search process was case sensitive in order to avoid false positives for short words like ``ACM", case-insensitive duplicate words have been excluded from the table. The words ``algorithm" and ``SIAM" are considered to be both computer science and mathematics. 
\label{tab:tagging_keywords} \end{table} \bibliographystyle{plain} \bibliography{thesis_bibliography} % If you want to include an index, this prints the index at this location. You must have \makeindex uncommented in the preamble \printindex \end{document}
{ "alphanum_fraction": 0.7753981413, "avg_line_length": 100.0552995392, "ext": "tex", "hexsha": "093d7506ab826df6a0c029d1c3a5e2c00b27db0e", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2019-12-24T07:18:24.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-11T03:58:30.000Z", "max_forks_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "marissa-graham/network-similarity", "max_forks_repo_path": "ThesisClass/one_last_time.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "marissa-graham/network-similarity", "max_issues_repo_path": "ThesisClass/one_last_time.tex", "max_line_length": 1416, "max_stars_count": 20, "max_stars_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "marissa-graham/network-similarity", "max_stars_repo_path": "ThesisClass/one_last_time.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-19T10:13:17.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-17T21:57:52.000Z", "num_tokens": 48360, "size": 195408 }
\section{Output Size}
\label{sec:value-size}
Figure \ref{fig:test} gives the formula for calculating the size of a UTxO entry in the Alonzo era. In addition to the data found in the UTxO in the ShelleyMA era, the hash of a datum (or $\Nothing$) is added to the output type, which is accounted for in the size calculation.
\begin{figure*}[h]
\emph{Constants}
\begin{align*}
& \mathsf{JustDataHashSize} \in \MemoryEstimate \\
& \text{The size of a datum hash wrapped in $\DataHash^?$} \\~ \\
& \mathsf{NothingSize} \in \MemoryEstimate \\
& \text{The size of $\Nothing$ wrapped in $\DataHash^?$}
\end{align*}
%
\emph{Helper Functions}
\begin{align*}
& \fun{dataHashSize} \in \DataHash^? \to \MemoryEstimate \\
& \fun{dataHashSize}~ \Nothing = \mathsf{NothingSize} \\
& \fun{dataHashSize}~ \wcard = \mathsf{JustDataHashSize} \\
& \text{Return the size of $\DataHash^?$} \\~\\
& \fun{utxoEntrySize} \in \TxOut \to \MemoryEstimate \\
& \fun{utxoEntrySize}~\var{(a, v, d)} = \mathsf{utxoEntrySizeWithoutVal} + (\fun{size} (\fun{getValue}~(a, v, d))) + \fun{dataHashSize}~d \\
& \text{Calculate the size of a UTxO entry}
\end{align*}
\caption{Value Size}
\label{fig:test}
\end{figure*}
\begin{note}
The value of $\fun{dataHashSize}$ should be taken from the heap-word measurements in the code.
\end{note}
There is a change in the values of some constants from the ShelleyMA era, specifically:
\begin{itemize}
\item $\mathsf{adaOnlyUTxOSize} = 29$ instead of $27$
\item $\mathsf{k_0} = 2$ instead of $0$
\end{itemize}
Additionally, the new constants used in Alonzo have the following values:
\begin{itemize}
\item $\mathsf{JustDataHashSize} = 10$ words
\item $\mathsf{NothingSize} = 0$ words
\end{itemize}
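As a purely illustrative sketch (not part of the formal specification), the following Python code mirrors $\fun{dataHashSize}$ and $\fun{utxoEntrySize}$ as defined above; the value-size argument stands in for $\fun{size}~(\fun{getValue}~(a, v, d))$, and $\mathsf{utxoEntrySizeWithoutVal}$ is left as an explicit parameter because its value is defined elsewhere in the specification.
\begin{verbatim}
JUST_DATA_HASH_SIZE = 10   # words, size of a present datum hash
NOTHING_SIZE = 0           # words, size of an absent datum hash

def data_hash_size(datum_hash):
    # datum_hash is None when the output carries no datum hash.
    return NOTHING_SIZE if datum_hash is None else JUST_DATA_HASH_SIZE

def utxo_entry_size(value_size, datum_hash, utxo_entry_size_without_val):
    # value_size plays the role of size(getValue(a, v, d)).
    return utxo_entry_size_without_val + value_size + data_hash_size(datum_hash)
\end{verbatim}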
{ "alphanum_fraction": 0.6790560472, "avg_line_length": 33.9, "ext": "tex", "hexsha": "0251e775ce6ea4761f8bb3129cc3595716e29253", "lang": "TeX", "max_forks_count": 86, "max_forks_repo_forks_event_max_datetime": "2021-10-04T17:17:15.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-29T06:53:05.000Z", "max_forks_repo_head_hexsha": "1f3c6bd0bd65a6e5ba7b03a46afb58b14660c8e3", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "vpatov/cardano-ledger-specs", "max_forks_repo_path": "alonzo/formal-spec/value-size.tex", "max_issues_count": 1266, "max_issues_repo_head_hexsha": "1f3c6bd0bd65a6e5ba7b03a46afb58b14660c8e3", "max_issues_repo_issues_event_max_datetime": "2021-11-04T12:50:51.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-18T20:23:28.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "vpatov/cardano-ledger-specs", "max_issues_repo_path": "alonzo/formal-spec/value-size.tex", "max_line_length": 147, "max_stars_count": 108, "max_stars_repo_head_hexsha": "1f3c6bd0bd65a6e5ba7b03a46afb58b14660c8e3", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "vpatov/cardano-ledger-specs", "max_stars_repo_path": "alonzo/formal-spec/value-size.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-30T05:27:16.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-24T02:26:41.000Z", "num_tokens": 573, "size": 1695 }
\section{Exec}
This add-on allows us to execute external programs and to collect their output.
\subsection*{exec.bg}
\index{\verb+exec.bg+}
{\tt exec.bg} {\it word \verb?|? [word things...] string (thing)*}
\newline\newline
Execute the program given as the second input. All the inputs to the primitive after the second will be given as arguments to the program. The first input is the name of a function to execute each time the program outputs a line; that line will be given as the first input of the function. If a list is used instead of a word, its first element will be the name of the function to call, followed by the inputs to pass to it. The primitive outputs the id of the thread.
\begin{verbatimtab}
to process :line
print :line
end
@> exec.bg "process '/bin/ls'
\end{verbatimtab}
\subsection*{exec.wait}
\index{\verb+exec.wait+}
{\tt exec.wait} {\it word string (thing)*}
\newline\newline
Execute the program given as the second input. All the inputs to the primitive after the second will be given as arguments to the program. The first input is a variable name. This variable will contain a list of all the outputs from the program. The primitive returns when the program has finished and outputs the error code returned by the executed program.
\begin{verbatimtab}
@> exec.wait "term '/bin/sh' '-c' 'exec echo $TERM'
@> print :term
dumb
\end{verbatimtab}
\subsection*{launch}
\index{\verb+launch+}
{\tt launch} {\it string}
\newline\newline
Launch the preferred application for the file given as input (complete path).
\begin{verbatimtab}
@> launch 'mydoc.html'
\end{verbatimtab}
{ "alphanum_fraction": 0.7517241379, "avg_line_length": 40.8974358974, "ext": "tex", "hexsha": "ad83af19e929cc36e28a1c2dcb16c376faacceba", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-10-26T08:56:12.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-26T08:56:12.000Z", "max_forks_repo_head_hexsha": "09fb4cf9c26433b5bc1915dee1b31178222d9e81", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "clasqm/Squirrel99", "max_forks_repo_path": "documentation/Documents/Manuals/Guide/exec.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "09fb4cf9c26433b5bc1915dee1b31178222d9e81", "max_issues_repo_issues_event_max_datetime": "2018-12-12T17:04:17.000Z", "max_issues_repo_issues_event_min_datetime": "2018-12-12T17:04:17.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "clasqm/Squirrel99", "max_issues_repo_path": "documentation/Documents/Manuals/Guide/exec.tex", "max_line_length": 295, "max_stars_count": 2, "max_stars_repo_head_hexsha": "09fb4cf9c26433b5bc1915dee1b31178222d9e81", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "clasqm/Squirrel99", "max_stars_repo_path": "documentation/Documents/Manuals/Guide/exec.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-31T10:39:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-26T14:35:33.000Z", "num_tokens": 402, "size": 1595 }
\subsection{PACT analysis} \label{PACTAnalysis} This appendix documents a PACT (People, Activities, Context \& Technology) analysis performed for the project. The analysis is used during the understanding process in the Scenario-based design method.
{ "alphanum_fraction": 0.8192771084, "avg_line_length": 124.5, "ext": "tex", "hexsha": "d329b9e8d79a139201c7ef6bdc5bbe324828ca9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "amatt13/FoodPlanner-Report", "max_forks_repo_path": "Appendix/PactIntroduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "amatt13/FoodPlanner-Report", "max_issues_repo_path": "Appendix/PactIntroduction.tex", "max_line_length": 201, "max_stars_count": null, "max_stars_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "amatt13/FoodPlanner-Report", "max_stars_repo_path": "Appendix/PactIntroduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 51, "size": 249 }
%% -*- coding:utf-8 -*-
\chapter{Natural transformation}
The natural transformation is the most important part of category theory. It provides a standard tool for comparing \mynameref{def:functor}s.
\section{Definitions}
The natural transformation is not an easy concept compared with the other ones and requires some additional preparation before we can give the formal definition.
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node at (0,3) {$C$};
    \node at (4,3) {$D$};
    \node[ele,label=above:$a$] (a) at (0,2) {};
    \node[ele,label=above:$a_F$] (af) at (4,2) {};
    \node[ele,label=below:$a_G$] (ag) at (4,0) {};
    \node[draw,fit= (a),minimum width=2cm, minimum height=3.5cm] {} ;
    \node[draw,fit= (af) (ag),minimum width=2cm, minimum height=3.5cm] {} ;
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$F$} (af);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$G$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[right]{$\alpha_a$} (ag);
  \end{tikzpicture}
  \caption{Natural transformation: object mapping}
  \label{fig:nt_objects_mapping}
\end{figure}
Consider 2 categories $\cat{C}, \cat{D}$ and 2 \mynameref{def:functor}s $F: \cat{C} \tof \cat{D}$ and $G: \cat{C} \tof \cat{D}$. If we have an \mynameref{def:object} $a \in \catob{C}$ then it will be translated by the different functors into different objects of the category $\cat{D}$: $a_F = F(a), a_G = G(a) \in \catob{D}$ (see \cref{fig:nt_objects_mapping}). There are 2 possible options:
\begin{enumerate}
\item There is no \mynameref{def:morphism} that connects $a_F$ and $a_G$.
\item $\exists \alpha_a \in \hom\left(a_F, a_G\right) \subset \cathom{D}$.
\end{enumerate}
We can of course create an artificial morphism that connects the objects, but if we use \textit{natural} morphisms\footnote{The word natural means that already existing morphisms from the category $\cat{D}$ are used.} then we can get a special characteristic of the considered functors and categories. For instance, if we have such morphisms then we can say that the considered functors are related to each other. Conversely, if there are no such morphisms then the functors can be considered as unrelated to each other.
%% Another example if the
%% morphisms are \mynameref{def:isomorphism}s then the functors can be
%% considered as very close each other.
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node at (0,3) {$C$};
    \node at (4,3) {$D$};
    \node[ele,label=above:$a$] (a) at (0,2) {};
    \node[ele,label=below:$b$] (b) at (0,0) {};
    \node[ele,label=above:$a_F$] (af) at (4,2) {};
    \node[ele,label=below:$a_G$] (ag) at (4,0) {};
    \node[ele,label=above:$b_F$] (bf) at (5.5,2) {};
    \node[ele,label=below:$b_G$] (bg) at (5.5,0) {};
    \node[draw,fit= (a) (b),minimum width=2cm, minimum height=3.5cm] {} ;
    \node[draw,fit= (af) (ag) (bf) (bg),minimum width=3cm, minimum height=4cm] {} ;
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[left]{$f$} (b);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[below]{$f_F$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[above]{$f_G$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$F$} (af);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (b) to [out=45,in=135,looseness=1] node[above]{$F$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$G$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (b) to [out=-45,in=-135,looseness=1] node[above]{$G$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[left]{$\alpha_a$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bf) to node[right]{$\alpha_b$} (bg);
  \end{tikzpicture}
  \caption{Natural transformation: morphisms mapping}
  \label{fig:nt_morphisms_mapping}
\end{figure}
A functor is not just a mapping of objects but also a mapping of morphisms. If we have 2 objects $a$ and $b$ in the category $\cat{C}$ then we can potentially have a morphism $f \in \hom_{\cat{C}}(a, b)$. In this case the morphism is mapped by the functors $F$ and $G$ into 2 morphisms $f_F$ and $f_G$ in the category $\cat{D}$. As a result we have 4 morphisms: $\alpha_a, \alpha_b, f_F, f_G \in \cathom{D}$. It is natural to impose additional conditions on these morphisms, in particular that they form a \mynameref{def:commutative_diagram} (see \cref{fig:nt_def}):
\[
f_G \circ \alpha_a = \alpha_b \circ f_F.
\]
\begin{definition}[Natural transformation]
\label{def:nt}
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node[ele,label=above:$a_F$] (af) at (0,2) {};
    \node[ele,label=below:$a_G$] (ag) at (0,0) {};
    \node[ele,label=above:$b_F$] (bf) at (1.5,2) {};
    \node[ele,label=below:$b_G$] (bg) at (1.5,0) {};
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[below]{$f_F$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[above]{$f_G$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[left]{$\alpha_a$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bf) to node[right]{$\alpha_b$} (bg);
  \end{tikzpicture}
  \caption{Natural transformation: commutative diagram}
  \label{fig:nt_def}
\end{figure}
Let $F$ and $G$ be 2 \mynameref{def:functor}s from the category $\cat{C}$ to the category $\cat{D}$. The \textit{natural transformation} is a set of \mynameref{def:morphism}s $\alpha \subset \cathom{D}$ which satisfy the following conditions:
\begin{itemize}
\item For every \mynameref{def:object} $a \in \catob{C}$ $\exists \alpha_a \in \hom\left(a_F, a_G\right)$\footnote{ $a_F = F(a), a_G = G(a)$ }, a \mynameref{def:morphism} in the category $\cat{D}$. The morphism $\alpha_a$ is called the component of the natural transformation.
\item For every morphism $f \in \cathom{C}$ that connects 2 objects $a$ and $b$, i.e. $f \in \hom_{\cat{C}}(a,b)$, the corresponding components of the natural transformation $\alpha_a, \alpha_b \in \alpha$ should satisfy the following condition:
\begin{equation}
f_G \circ \alpha_a = \alpha_b \circ f_F,
\label{eq:nt_definition}
\end{equation}
where $f_F = F(f), f_G = G(f)$. In other words, the morphisms form the \mynameref{def:commutative_diagram} shown in \cref{fig:nt_def}.
\end{itemize}
We use the following notation (arrow with a dot) for the natural transformation between functors $F$ and $G$: $\alpha: F \tont G$.
\end{definition}
\section{Operations with natural transformations}
\begin{example}[\textbf{Fun} category]
\label{ex:fun_category}
\index{Object!\textbf{Fun} example}
\index{Morphism!\textbf{Fun} example}
\index{Category!\textbf{Fun} example}
The functors can be considered as objects in a special category \textbf{Fun}. The morphisms in the category are \mynameref{def:nt}s. To define a category we need to define a composition operation that satisfies \mynameref{axm:composition}, provide an identity morphism, and verify \mynameref{axm:associativity}.
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node at (0,3) {$C$};
    \node at (4,5) {$D$};
    \node[ele,label=above:$a$] (a) at (0,2) {};
    \node[ele,label=above:$a_F$] (af) at (4,4) {};
    \node[ele,label=right:$a_G$] (ag) at (4,2) {};
    \node[ele,label=below:$a_H$] (ah) at (4,0) {};
    \node[draw,fit= (a),minimum width=2cm, minimum height=3.5cm] {} ;
    \node[draw,fit= (af) (ag) (ah),minimum width=5cm, minimum height=5cm] {} ;
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$F$} (af);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$G$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$H$} (ah);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[right]{$\alpha_a$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[right]{$\beta_a$} (ah);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to [out=-45,in=45,looseness=1] node[right]{$\beta_a \circ \alpha_a$} (ah);
  \end{tikzpicture}
  \caption{Natural transformation vertical composition: object mapping}
  \label{fig:nt_objects_mapping_composition}
\end{figure}
For the composition, consider 2 \mynameref{def:nt}s $\alpha, \beta$ and how they act on an object $a \in \catob{C}$ (see \cref{fig:nt_objects_mapping_composition}). We can always construct the composition $\beta_a \circ \alpha_a$, i.e. we can define the composition of the natural transformations $\alpha, \beta$ as \( \beta \circ \alpha = \left\{ \beta_a \circ \alpha_a | a \in \catob{C} \right\} \).
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node[ele,label=above:$a_F$] (af) at (0,4) {};
    \node[ele,label=left:$a_G$] (ag) at (0,2) {};
    \node[ele,label=below:$a_H$] (ah) at (0,0) {};
    \node[ele,label=above:$b_F$] (bf) at (3,4) {};
    \node[ele,label=right:$b_G$] (bg) at (3,2) {};
    \node[ele,label=below:$b_H$] (bh) at (3,0) {};
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[above]{$f_F$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[above]{$f_G$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ah) to node[above]{$f_H$} (bh);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[right]{$\alpha_a$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[right]{$\beta_a$} (ah);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to [out=-135,in=135,looseness=1] node[left]{$\beta_a \circ \alpha_a$} (ah);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bf) to node[right]{$\alpha_b$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bg) to node[right]{$\beta_b$} (bh);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bf) to [out=-45,in=45,looseness=1] node[right]{$\beta_b \circ \alpha_b$} (bh);
  \end{tikzpicture}
  \caption{Natural transformation vertical composition: morphism mapping - commutative diagram}
  \label{fig:nt_morphism_mapping_composition}
\end{figure}
The natural transformation is not just an object mapping but also a morphism mapping. We will require that all the morphisms shown in \cref{fig:nt_morphism_mapping_composition} commute. The composition defined in such a way is called \mynameref{def:vertical_composition}. The functor category between categories $\cat{C}$ and $\cat{D}$ is denoted as $[\cat{C}, \cat{D}]$.
\end{example}
\begin{definition}[Vertical composition]
\label{def:vertical_composition}
\index{Natural transformation!Vertical composition}
Let $F,G,H$ be functors between categories $\cat{C}$ and $\cat{D}$. Suppose also that $\alpha : F \tont G$ and $\beta: G \tont H$ are natural transformations. We can compose $\alpha$ and $\beta$ as follows:
\[
\beta \circ \alpha: F \tont H.
\]
This composition is called \textit{vertical composition}.
\end{definition}
\begin{definition}[Horizontal composition]
\label{def:horizontal_composition}
\index{Natural transformation!Horizontal composition}
Suppose we have 2 pairs of functors: the first one $F,G: \cat{C} \to \cat{D}$ and another one $J,K: \cat{D} \tof \cat{E}$. We also have a natural transformation between each pair: $\alpha : F \tont G$ for the first one and $\beta : J \tont K$ for the second one. We can create a new transformation
\[
\alpha \star \beta: F \circ J \tont G \circ K
\]
that is called \textit{horizontal composition}. Note that we use a special symbol $\star$ for the composition.
\end{definition}
\begin{remark}[Bifunctor in category of functors]
\label{rem:bifunctor_fun_cat}
If we have the same pair of functors as in \cref{def:horizontal_composition} then we can consider the functors as objects of 3 categories: $\cat{\mathcal{A}} = \left[\cat{C}, \cat{D}\right], \cat{\mathcal{B}} = \left[\cat{D}, \cat{E}\right]$ and $\cat{\mathcal{C}} = \left[\cat{C}, \cat{E}\right]$. Next we want to construct a \mynameref{def:bifunctor} $\otimes: \cat{\mathcal{A}} \times \cat{\mathcal{B}} \tof \cat{\mathcal{C}}$ where for each pair of objects $F \in \catob{\mathcal{A}}, J \in \catob{\mathcal{B}}$ we get another object from $\cat{\mathcal{C}}$. The operation used is the ordinary composition of functors, i.e.
\[
\otimes: F \times J \to F \circ J \in \catob{\mathcal{C}}.
\]
The bifunctor is not just a map on objects. There is also a map between morphisms. Thus, if we have 2 \mynameref{def:morphism}s $\alpha : F \to G$ and $\beta : J \to K$, then we can construct the following mapping
\[
\otimes: \alpha \times \beta \to \alpha \star \beta \in \cathom{\mathcal{C}}.
\]
As a result, the introduced mapping $\otimes$ is a bifunctor.
\end{remark}
\begin{definition}[Left whiskering]
\label{def:lw}
If we have 3 categories $\cat{B}, \cat{C}, \cat{D}$, \mynameref{def:functor}s $F,G: \cat{C} \tof \cat{D}$, $H: \cat{B} \to \cat{C}$ and a \mynameref{def:nt} $\alpha: F \tont G$ then we can construct a new natural transformation:
\[
\alpha H : F \circ H \tont G \circ H
\]
that is called the \textit{left whiskering} of a functor and a natural transformation \cite{nlab:whiskering}.
\end{definition}
\begin{definition}[Right whiskering]
\label{def:rw}
If we have 3 categories $\cat{C}, \cat{D}, \cat{E}$, \mynameref{def:functor}s $F,G: \cat{C} \tof \cat{D}$, $H: \cat{D} \to \cat{E}$ and a \mynameref{def:nt} $\alpha: F \tont G$ then we can construct a new natural transformation:
\[
H \alpha : H \circ F \tont H \circ G
\]
that is called the \textit{right whiskering} of a functor and a natural transformation \cite{nlab:whiskering}.
\end{definition}
\begin{definition}[Identity natural transformation]
\label{def:idnt}
If $F: \cat{C} \tof \cat{D}$ is a \mynameref{def:functor} then we can define the \textit{identity natural transformation} $\idnt{F}$, which maps any \mynameref{def:object} $a \in \catob{C}$ into the \mynameref{def:id} $\idm{F(a)} \in \cathom{D}$.
\end{definition}
\begin{remark}[Whiskering]
\label{rem:whiskering}
With \mynameref{def:idnt} we can redefine \mynameref{def:lw} and \mynameref{def:rw} via \mynameref{def:horizontal_composition} as follows. For left whiskering:
\begin{equation}
\label{eq:lw}
\alpha H = \alpha \star \idnt{H}
\end{equation}
For right whiskering:
\begin{equation}
\label{eq:rw}
H \alpha = \idnt{H} \star \alpha
\end{equation}
\end{remark}
\section{Polymorphism and natural transformation}
Polymorphism plays an important role in programming languages, and category theory provides several important facts about polymorphic functions.
\begin{definition}[Parametrically polymorphic function]
\index{Parametric polymorphism}
\label{def:pp_function}
Polymorphism is parametric if all function instances behave uniformly, i.e. have the same realization. The functions which satisfy the parametric polymorphism requirements are parametrically polymorphic.
\end{definition}
\begin{definition}[Ad-hoc polymorphism]
\label{def:ad_hoc_polymorphism}
Polymorphism is ad hoc if the function instances can behave differently depending on the type they are being instantiated with.
\end{definition}
\begin{theorem}[Reynolds]
\label{thm:reynolds}
\mynameref{def:pp_function}s are \mynameref{def:nt}s.
\begin{proof}
TBD
\end{proof}
\end{theorem}
\subsection{\textbf{Hask} category}
In Haskell, most functions are \mynameref{def:pp_function}s\footnote{Really, at run time the functions are not \mynameref{def:pp_function}s.}.
\begin{example}[Parametrically polymorphic function][\textbf{Hask}]
\label{ex:nt_hask}
Consider the following function:
\begin{minted}{haskell}
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:xs) = Just x
\end{minted}
The function is parametrically polymorphic and, by \mynameref{thm:reynolds}, is a \mynameref{def:nt} (see \cref{fig:nt_pp_hask}).
\begin{figure}
  \centering
  \begin{tikzpicture}[ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner sep=-2pt}]
    % the texts
    \node[ele,label=above:$a$] (a) at (0,2) {};
    \node[ele,label=below:$b$] (b) at (0,0) {};
    \node[ele,label=above:$\mbox{[a]}$] (af) at (5,2) {};
    \node[ele,label=below:$\mbox{Maybe a}$] (ag) at (5,0) {};
    \node[ele,label=above:$\mbox{[b]}$] (bf) at (7.5,2) {};
    \node[ele,label=below:$\mbox{Maybe b}$] (bg) at (7.5,0) {};
    \node[draw,fit= (a) (b),minimum width=2cm, minimum height=3.5cm] {} ;
    \node[draw,fit= (af) (ag) (bf) (bg),minimum width=6.5cm, minimum height=5.5cm] {} ;
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[left]{$f$} (b);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[below]{$\mbox{fmap}_{[]}$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (ag) to node[above]{$\mbox{fmap}_{Maybe}$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$$} (af);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (b) to [out=45,in=135,looseness=1] node[above]{$$} (bf);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (a) to node[above]{$$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (b) to [out=-45,in=-135,looseness=1] node[above]{$$} (bg);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (af) to node[left]{$\mbox{safeHead}_a$} (ag);
    \draw[->,thick,shorten <=2pt,shorten >=2pt] (bf) to node[right]{$\mbox{safeHead}_b$} (bg);
  \end{tikzpicture}
  \caption{Haskell parametric polymorphism as a natural transformation}
  \label{fig:nt_pp_hask}
\end{figure}
Therefore from the definition of the natural transformation \eqref{eq:nt_definition} we have \mintinline{haskell}{fmap f . safeHead = safeHead . fmap f}. That is, it does not matter whether we first apply \mintinline{haskell}{fmap f} and then \mintinline{haskell}{safeHead} to the result, or first \mintinline{haskell}{safeHead} and then \mintinline{haskell}{fmap f}. The statement can be verified directly. For an empty list we have
\begin{minted}{haskell}
(fmap f . safeHead) []
-- equivalent to
fmap f Nothing
-- equivalent to
Nothing
\end{minted}
and from the other side
\begin{minted}{haskell}
(safeHead . fmap f) []
-- equivalent to
safeHead []
-- equivalent to
Nothing
\end{minted}
For a non-empty list we have
\begin{minted}{haskell}
(fmap f . safeHead) (x:xs)
-- equivalent to
fmap f (Just x)
-- equivalent to
Just (f x)
\end{minted}
and from the other side
\begin{minted}{haskell}
(safeHead . fmap f) (x:xs)
-- equivalent to
safeHead (f x : fmap f xs)
-- equivalent to
Just (f x)
\end{minted}
Using the fact that \mintinline{haskell}{fmap f} is an expensive operation when applied to the whole list, we can conclude that the second approach, i.e. applying \mintinline{haskell}{safeHead} first and then \mintinline{haskell}{fmap f} to the result, is the more efficient one. Such a transformation allows the compiler to optimize the code.\footnote{This does not apply directly to Haskell, because its lazy evaluation can perform this optimization anyway.}
\end{example}
{ "alphanum_fraction": 0.6682971568, "avg_line_length": 34.8596802842, "ext": "tex", "hexsha": "75e271f8db795e8a8807a90c2649a452f0d4f7bf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ivanmurashko/articles", "max_forks_repo_path": "cattheory/nt.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ivanmurashko/articles", "max_issues_repo_path": "cattheory/nt.tex", "max_line_length": 129, "max_stars_count": 1, "max_stars_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ivanmurashko/articles", "max_stars_repo_path": "cattheory/nt.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-27T08:59:55.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-27T08:59:55.000Z", "num_tokens": 6903, "size": 19626 }
\subsection{Implementation Language and Language Interoperability Strategy}
The ESMF will have C++ and F90 language bindings. It is possible to represent the fundamental features of object-oriented software -- polymorphism, inheritance and encapsulation -- in a variety of languages, including the usual choices for high-performance systems: C, C++ and F90.\footnote{Albeit with differing levels of difficulty and effectiveness.} Perhaps the best evidence for this claim is that widely used object-oriented libraries and frameworks have been written in each of these languages.
Given the above, we can assume that the architecture and design of the ESMF are largely independent of its implementation language. In practice we have decided to implement the lower layers of the ESMF and some of the higher-level control mechanisms in C++. The implementation of fields and grids will be in F90. We anticipate developing a prototype implementation of the fields and grids layer in C++ as well. The rationale for our decision is presented in the {\it ESMF Implementation Report}.
{ "alphanum_fraction": 0.798173516, "avg_line_length": 49.7727272727, "ext": "tex", "hexsha": "0cc15599da866bbce8e591cdb3addc2b08489557", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_forks_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_forks_repo_name": "joeylamcy/gchp", "max_forks_repo_path": "ESMF/src/doc/ESMF_language.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_issues_repo_issues_event_max_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_issues_event_min_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_issues_repo_name": "joeylamcy/gchp", "max_issues_repo_path": "ESMF/src/doc/ESMF_language.tex", "max_line_length": 91, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_stars_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_stars_repo_name": "joeylamcy/gchp", "max_stars_repo_path": "ESMF/src/doc/ESMF_language.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-05T16:48:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-05T16:48:58.000Z", "num_tokens": 223, "size": 1095 }
\subsection{Finding strong rules using the Apriori algorithm}
\subsubsection{Finding frequent patterns}
We can use search algorithms to find frequent patterns, starting at the empty set.
\subsubsection{Apriori algorithm}
The Apriori algorithm performs a breadth-first search to generate candidate \(k\)-itemsets whose support is above some threshold. It starts with \(1\)-itemsets and increases \(k\) once all frequent itemsets of the current size have been found.
Once we have found a frequent pattern, we can immediately identify the strong rules associated with it. We can do this by looking at confidence, not support.
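As a concrete illustration, the following minimal Python sketch (our own; the variable names and thresholds are arbitrary) generates frequent itemsets level by level in the Apriori style and then derives the strong rules whose confidence exceeds a threshold.
\begin{verbatim}
from itertools import combinations

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def apriori(transactions, min_support):
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if support(frozenset([i]), transactions) >= min_support}
    frequent = {}
    k = 1
    while current:
        frequent.update({s: support(s, transactions) for s in current})
        k += 1
        # Candidate k-itemsets are unions of frequent (k-1)-itemsets.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = {c for c in candidates
                   if support(c, transactions) >= min_support}
    return frequent

def strong_rules(frequent, min_confidence):
    # A rule A -> B is strong if support(A | B) / support(A) >= min_confidence.
    rules = []
    for itemset, sup in frequent.items():
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                confidence = sup / frequent[antecedent]
                if confidence >= min_confidence:
                    rules.append((set(antecedent),
                                  set(itemset - antecedent), confidence))
    return rules
\end{verbatim}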
{ "alphanum_fraction": 0.7937384899, "avg_line_length": 30.1666666667, "ext": "tex", "hexsha": "e87f6acc23cfc7c304e1a734015b8622087501a4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/association/01-06-apriori.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/association/01-06-apriori.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/association/01-06-apriori.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 114, "size": 543 }
% !TeX root = ../main.tex
\chapter{Introduction}

Over the last few years, the unprecedented demand for computing power from a large number of popular or emerging applications (e.g., neural networks and bioinformatics) has made the CGRA a promising platform. A CGRA (Coarse-Grained Reconfigurable Architecture, or Coarse-Grain Reconfigurable Array) is typically a 2D architecture. It consists of an array of a large number of processing elements interconnected by a mesh or mesh+ network. Typically, a CGRA contains
\begin{enumerate}
    \item Processing Elements: Typically, a processing element consists of a functional unit (FU) and a local register buffer (LRB). There are also two multiplexers that determine where the input data come from. Every cycle, the CGRA can issue a different instruction to each PE; it does not need to be the same instruction that the PE executed in the previous cycle, which is where the ``reconfigurable'' in the name comes from.
    \item Context Memory Controller: This controller is responsible for reconfiguring each PE. Some CGRAs, like ADRES, Silicon Hive, and MorphoSys, are fully dynamically reconfigurable: exactly one full reconfiguration takes place for every execution cycle\cite{ARC}. We will focus on the compilation of this type of CGRA.
    \item LRB (local register buffer): A register file that belongs to each PE.
    \item OMB (on-chip memory buffer): A memory buffer that is shared by all PEs.
    \item Network: An interconnection network links neighboring PEs, so each PE can communicate with its neighbors.
\end{enumerate}

Most CGRA systems (e.g., ADRES\cite{ADRES}, TRIPS\cite{TRIPS}, DySER\cite{DySER}) limit the size of the array to fewer than 64 PEs. There are several likely reasons for this. First, the early pioneering CGRAs typically focused on image, audio, or telecommunication applications\cite{SURVEY}. SRP, Samsung's proprietary DSP processor since 2005\cite{SRP}, is a variation of ADRES. Second, compilation is a major problem for this architecture even when the CGRA is not that large, and if compilation time matters, obtaining an acceptable mapping becomes much harder as the CGRA grows. We will discuss this topic further in the following chapter.

In a CGRA, each PE can consume the data it needs directly from what it or a neighboring PE produced in the previous cycle, or from the CGRA's routing resources. The common routing resources provided by a CGRA are the PEs, the LRBs, and the OMB. Through these routing resources, data can arrive at the right place at the right time. Each routing resource has its own advantages and disadvantages, so how to utilize these resources efficiently is the central issue in CGRA compilation.
\begin{enumerate}
    \item PE: If the compiler chooses a PE as a routing resource, that PE cannot be a computing resource in the same cycle. In CCF (the CGRA Compilation Framework)\cite{CCF}, a routing instruction on a PE is therefore simply implemented as an add-zero instruction. CCF is also the experimental environment used in this work.
    \item LRB: An LRB is accessible only by its master PE, so no other PE can read its data directly. If another PE wants data from an LRB that does not belong to it, that LRB's master PE must act as a routing PE and transfer the data to the consumer, which takes one additional cycle. Because of this constraint, the compiler should endeavour to map a datum's producer node and consumer node to the same PE whenever the data are routed through the LRB. The LRB has further constraints as well, which we discuss in detail when we describe how a program is mapped onto the CGRA.
    \item OMB: The OMB is a global buffer accessible to all PEs, but it is also the last choice for routing. Using it to route data requires additional load/store operations, and the energy consumption of the memory is not ideal compared with the other routing resources.
\end{enumerate}

TODO: add an architecture schematic figure.
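To make the trade-off concrete, here is a purely illustrative sketch of how a compiler might choose among the three routing resources for a single value. The cost numbers and function names are assumptions made for exposition; they are not taken from CCF or from any real CGRA cost model.

\begin{verbatim}
# Illustrative only: toy cost model for routing one value to its consumer.
ROUTING_COST = {
    #       extra cycles, occupies a PE compute slot, extra memory operations
    "PE":  dict(cycles=0, blocks_pe=True,  mem_ops=0),  # add-zero instruction
    "LRB": dict(cycles=0, blocks_pe=False, mem_ops=0),  # producer == consumer PE
    "OMB": dict(cycles=1, blocks_pe=False, mem_ops=2),  # store + load via shared buffer
}

def pick_route(producer_pe, consumer_pe, has_free_neighbour_pe):
    """Choose a routing resource for a value produced in an earlier cycle."""
    if producer_pe == consumer_pe:
        return "LRB"   # keep the value in the producer's own register buffer
    if has_free_neighbour_pe:
        return "PE"    # spend a neighbouring PE on an add-zero routing instruction
    return "OMB"       # last resort: route through the on-chip memory buffer
\end{verbatim}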
{ "alphanum_fraction": 0.7861003861, "avg_line_length": 133.9655172414, "ext": "tex", "hexsha": "c2ad47efc6e380252d5f9b21cdacc45366c84305", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1a4ffea2fb005effa10f17aef23dd92f5b21b984", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wub11/NTU-Thesis", "max_forks_repo_path": "contents/chapter01.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1a4ffea2fb005effa10f17aef23dd92f5b21b984", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wub11/NTU-Thesis", "max_issues_repo_path": "contents/chapter01.tex", "max_line_length": 694, "max_stars_count": null, "max_stars_repo_head_hexsha": "1a4ffea2fb005effa10f17aef23dd92f5b21b984", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wub11/NTU-Thesis", "max_stars_repo_path": "contents/chapter01.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 899, "size": 3885 }
\section{Application to color normalization} Color normalization is the process of imposing the same color palette on a group of images. This color palette is always somehow related to the color palettes of the original images. For instance, if the goal is to cancel the illumination of a scene (avoid color cast), then the imposed histogram should be the histogram of the same scene illuminated with white light. Of course, in many occasions this information is not available. Following Papadakis et al.~\cite{Papadakis_ip11}, we define an in-between histogram, which is chosen here as the regularized OT barycenter. %The advantage with respect to Papadakis et al.'s method is that the influence of the input histograms on the barycenter can be easily tuned by a change of parameters, in fact it has as a special case any of the original image histograms. %Some other examples for which color normalization is useful are the color balancing of videos, or as a preprocessing to register/compare several images taken with different cameras (see \cite{Papadakis_ip11} for more examples). %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Algorithm} Given a set of input images $(X^{0[r]})_{r \in R}$, the goal is to impose on all the images the same histogram $\mu_X$ associated to the barycenter $X$. As for the colorization problem tackled in Section~\ref{sec-appli-color}, the first step is to subsample the original cloud of points $X^{0[r]}$ to make the problem tractable. Thus, for every $X^{0[r]}$ we compute a smaller associated point set $X^{[r]}$ using K-means clustering. Then, we obtain the barycenter $X$ of all the point clouds $(X^{[r]})_{r \in R}$ with the algorithms presented in Section~\ref{algobar}. Figure~\ref{im:baryillu} first row, shows an example on two synthetic cloud of points, $X^{[1]}$ in blue and $X^{[2]}$ in red. The cloud of points in green corresponds to the barycenter $X$, which can change its position depending on the parameter $\rho=(\rho_1,\rho_2)$ in~\eqref{eqbar} from $X^{[1]}$ for $\rho=(1,0)$ to $X^{[2]}$ when $\rho=(0,1)$. This data set $X$ represents the 3-D histogram we want to impose on all the input images. Once we have $X$, we compute the regularized and relaxed OT transport maps $T^{[r]}$ between each $X^{[r]}$ and the barycenter $X$, by solving~\eqref{eq-symm-reg-energy}. The line segments in Figure~\ref{im:baryillu} represent the transport between point clouds, i.e. if $\Sig^{[1]}_{i,j}>0$, $X^{[1]}_i$ is linked to $X_j$, and similarly for $\Sig^{[2]}$. We apply $T^{[r]}$ to $X^{[r]}$, obtaining $\tilde X^{[r]}$, for all $r \in R$, that is to say, we obtain a set of point clouds $\tilde X^{[r]}$ with a color distribution close to $X$. Finally, to recover a set of high resolution images, we compute each $\tilde X^{0[r]}$ from $X^{0[r]}$ by up-sampling. A detailed description of the method is given in Algorithm~\ref{alg-norm}. \begin{algorithm}[ht!] \caption{Regularized OT Color Normalization} \label{alg-norm} % \begin{algorithmic}[1] \Require Images $\left( X^{0[r]} \right)_{r \in R} \in \RR^{N_0 \times d}$, $\la \in \RR^+ $, $\rho \in [0,1]^{|R|}$ and $k \in \RR^+$. \Ensure Images $\left( \tilde X^{0[r]} \right)_{r \in R} \in \RR^{N_0 \times d}$. % \Statex \begin{enumerate} \algostep{Histogram down-sample} Compute $X^{[r]}$ from $X^{0[r]}$ using K-means clustering. 
\algostep{Compute barycenter} Compute with either~\eqref{eq-bar-l2} or~\eqref{eq-bar-l1} a barycenter $\mu_X$ where $X$ is a local minimum of~\eqref{eqbar} using the block coordinate descent described in Section~\ref{algobar}, see Algorithm~\ref{algo-block-barycenters}. \algostep{Compute transport mappings} For all $r \in R$ compute $T^{[r]}$ between \\ $X$ and $X^{[r]}$ by solving~\eqref{eq-symm-reg-energy}, such that $T^{[r]}(X^{[r]}_i) = Z^{[r]}_i$, where $Z^{[r]}= \diag(\Sig^{[r]} \U)^{-1} \Sig^{[r]} X$. \algostep{Transport up-sample} For every $T^{[r]}$ compute $\tilde T^{0[r]}$ following~\eqref{eq-upsample}. \algostep{Obtain high resolution results} Compute $\foralls r, \tilde X^{0[r]} = \tilde T^{0[r]}(X^{0[r]})$. \end{enumerate} % \end{algorithmic} \end{algorithm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Results} We now show some example of color normalization using Algorithm~\ref{alg-norm}. \begin{figure*}[!h] \centering \setlength{\arrayrulewidth}{2pt} %%% OT %%%% \begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{}} $\rho=(1,0)$ & $\rho=(0.7,0.3)$ & $\rho=(0.4,0.6)$ & $\rho=(0,1)$ \\ \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-0-ksum1lambda0nnx4QP1png} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-03-ksum1lambda0nnx4QP1png} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-06-ksum1lambda0nnx4QP1png} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-1-ksum1lambda0nnx4QP1png} \\ \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-0-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-03-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-06-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-1-ksum1lambda0nnx4QP1}\\ \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-0-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-03-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-06-ksum1lambda0nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-1-ksum1lambda0nnx4QP1} \\\hline %%% Regularized \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-0-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-03-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-06-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth]{./syntheticbary/Barycenter_DiagsyntheticINVrho-1-ksum20lambda00005nnx4QP1} \\ \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-01-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-0307-ksum20lambda00005nnx4QP1}& \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-0604-ksum20lambda00005nnx4QP1} & 
\includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag2-syntheticrho-10-ksum20lambda00005nnx4QP1}\\ \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-01-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-0307-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-0604-ksum20lambda00005nnx4QP1} & \includegraphics[width=.23\linewidth,height=.2\linewidth]{./syntheticbary/Diag1-syntheticrho-10-ksum20lambda00005nnx4QP1} \\ \end{tabular} \caption{Comparison of classical OT (top 3 first rows) and relaxed/regularized OT (bottom 3 last rows). The original input images $X^{0,[1]}$ and $X^{0,[2]}$ are shown in Figure~\ref{im:synth}~(a). Rows \#1 and \#4 shows the 2-D projections of $X^{[1]}$ (blue) and $X^{[2]}$ (red), and in green the barycenter distribution for different values of $\rho$. We display a line between $X^{[r]}_i$ and $X_j$ if $\Sig^{[r]}_{i,j} > 0.1$. Rows \#2 and \#5 (resp. \#3 and \#6) show the resulting normalized images $\tilde X^{0[1]}$ (resp. $\tilde X^{0[2]}$), for each value of $\rho$. \textbf{Top 3 first rows:} classical OT corresponding to setting $k=1$ and $\la=0$. \textbf{Bottom 3 last rows:} regularized and relaxed OT, with parameters $k=20$ and $\lambda=0.0005$. See main text for comments. \vspace{0.5cm} } \label{im:baryillu} \end{figure*} %%%%%%%%%%%%%%%%%% \paragraph{Synthetic example} Figure~\ref{im:baryillu} shows a comparison of normalization of two synthetic images using classical OT and our proposed relaxed/regularized OT. The results obtained using Algorithm~\ref{alg-norm} (setting $p=q=2$), using the set of two images ($|R|=2$) already used in Figure~\ref{im:synth}~(a), denoting here $X^{0[1]}=X^0$ and $X^{0[2]}=Y^0$. Each column shows the same experiment but with different values of $\rho$, which allows to visualize the interpolation between the color palettes (the colors in the images evolve from the colors in $X^{[1]}$ towards the colors of $X^{[2]}$). With classical OT, the structure of the original data sets in not preserved as we change $\rho$, and the consequence on the final images (second and third row), is that the geometry of the original images changes in the barycenters. In contrast to classical OT, for all values of $\rho$ the relaxed/regularized barycenters $X$ have the same number of clusters of the original sets. Note that the consequence of having a transport that maintains the clusters of the original images, is that the geometry is preserved, while the histograms change. 
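For concreteness, the following minimal sketch illustrates Steps~1 and~5 of Algorithm~\ref{alg-norm} in Python (assuming NumPy and scikit-learn are available, and that pixel values lie in $[0,1]$). Here \texttt{Z} stands for the transported point set $Z^{[r]}$ computed in Step~3, and the up-sampling of~\eqref{eq-upsample} is replaced by a simpler nearest-neighbour rule, so this is an approximation of the actual pipeline rather than a reference implementation; all names are illustrative.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def downsample(X0, n_clusters=400):
    """Step 1: reduce the N0 x 3 pixel cloud X0 to a small point set X."""
    km = KMeans(n_clusters=n_clusters, n_init=4).fit(X0)
    return km.cluster_centers_, km.labels_   # X, cluster index of each pixel

def apply_mapping(X0, X, Z, labels):
    """Steps 4-5 (simplified): shift each pixel by the displacement of its
    cluster centre, X_i -> Z_i (nearest-neighbour up-sampling)."""
    return np.clip(X0 + (Z - X)[labels], 0.0, 1.0)
\end{verbatim}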
\newcommand{\sidecapY}[1]{ \begin{sideways}\parbox{.19\linewidth}{\centering #1}\end{sideways} } \newcommand{\myimgY}[1]{\includegraphics[width=.26\linewidth,height=.22\linewidth]{#1}} \begin{figure*}[!h] \centering \begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{} } \sidecapY{ Original $X^{0[1]}$ } & \myimgY{star/fleur_1} & \myimgY{star/wheat_1} & \myimgY{star/parrot_1} \\ \sidecapY{$\rho=(1,0)$ } & \myimgY{barycenter/DiagYfleurrho-1-ksum11lambda00009nnx4QP1} & \myimgY{barycenter/wheat/Diag1-wheatrho-1-ksum13lambda001nnx4QP1} & \myimgY{barycenter/parrot/Diag1-parrotrho-1-ksum1lambda0001nnx4QP1} \\ \sidecapY{$\rho=(0.7,0.3)$} & \myimgY{barycenter/DiagYfleurrho-06-ksum11lambda00009nnx4QP1} & \myimgY{barycenter/wheat/Diag1-wheatrho-06-ksum13lambda001nnx4QP1} & \myimgY{barycenter/parrot/Diag1-parrotrho-06-ksum1lambda0001nnx4QP1} \\ \sidecapY{ $\rho=(0.4,0.6)$ } & \myimgY{barycenter/DiagYfleurrho-03-ksum11lambda00009nnx4QP1} & \myimgY{barycenter/wheat/Diag1-wheatrho-03-ksum13lambda001nnx4QP1} & \myimgY{barycenter/parrot/Diag1-parrotrho-03-ksum1lambda0001nnx4QP1} \\ \sidecapY{ $\rho=(0,1)$ } & \myimgY{barycenter/DiagYfleurrho-0-ksum11lambda00009nnx4QP1} & \myimgY{barycenter/wheat/Diag1-wheatrho-0-ksum13lambda001nnx4QP1} & \myimgY{barycenter/parrot/Diag1-parrotrho-0-ksum1lambda0001nnx4QP1} \\ \sidecapY{ Original $X^{0[2]}$ } & \myimgY{star/fleur_2} & \myimgY{star/wheat_2} & \myimgY{star/parrot_2} \\ & (a) & (b) & (c) \end{tabular} \caption{Results for the barycenter algorithm on different images computed with the method proposed in Section~\ref{algobarysobolev}. The parameters were set to \textbf{(a)} $k=1.1,\la=0.0009$, \textbf{(b)} $k=1.3,\la=0.01$, and \textbf{(b)} $k=1,\la=0.001$. Note how as $\rho$ approaches $(0,1)$, the histogram of the barycenter image becomes similar to the histogram of $X^{0[2]}$.} \label{im:bar} \end{figure*} \paragraph{Example on natural images} Fig.~\ref{im:bar} shows the results of the same experiment as in Fig.~\ref{im:baryillu}, but on the natural images labeled as $X^{0[1]}$ and $X^{0[2]}$ in rows $\#1$ and $\#6$. In this case, we only show the transport from $X$ to $X^{0[1]}$, that is to say, we maintain the geometry of $X^{0[1]}$ (row $\#1$) and match its histogram to the barycenter distribution. As in the previous experiment, note how the colors change smoothly from $(1,0)$ to $(0,1)$ without generating artifacts and match the color and contrast of image $X^{0[2]}$ for $\rho=(0,1)$. The change in contrast is specially visible for the (b) wheat image. %%%%%%% \paragraph{Color Normalization} Computing the barycenter distribution of the histograms of a set of images is useful for color normalization. We show in Figures~\ref{barflower}, and~\ref{barclock} the results obtained with Algorithm~\ref{alg-norm}, and compare them with the standard OT and the method proposed by Papadakis et al.~\cite{Papadakis_ip11}. The improvement of the relaxation and regularization is specially noticeable in Figures~\ref{barflower} where OT creates artifacts such as coloring the leaves on violet for Figure~\ref{barflower}~(a), or introducing new colors on the background in Figure~\ref{barflower}~(c). 
In Figure~\ref{barclock}, OT and Papadakis et al.'s method introduce artifacts mostly on the sky of Figure~\ref{barclock}~(a) and Figure~\ref{barclock}~(b), while the relaxed and regularized version displays a smoother result for Figure~\ref{barclock}~(a) and~(c) and a more meaningful color transformation (all the clouds have the same color in the fourth row) for Figure~\ref{barclock}~(b). \newcommand{\myimgZ}[1]{\includegraphics[width=.28\linewidth,height=.26\linewidth]{#1}} \newcommand{\myTriTab}[1]{ \begin{tabular}{@{}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{} } #1 \end{tabular} } \begin{figure*}[ht] \centering \myTriTab{ \myimgZ{./barycenter/flowers/flowers-1} & \myimgZ{./barycenter/flowers/flowers-2} & \myimgZ{./barycenter/flowers/flowers-3} \\ \myimgZ{./barycenter/flowers/Diag1-OT} & \myimgZ{./barycenter/flowers/Diag2-OT} & \myimgZ{./barycenter/flowers/Diag3-OT} \\ \myimgZ{./barycenter/nicoflowers1} & \myimgZ{./barycenter/nicoflowers2} & \myimgZ{./barycenter/nicoflowers3} \\ \myimgZ{./barycenter/flowers/Diag1-flowersrho-033333-ksum2lambda0005nnx4QP1} & \myimgZ{./barycenter/flowers/Diag2-flowersrho-033333-ksum2lambda0005nnx4QP1} & \myimgZ{./barycenter/flowers/Diag3-flowersrho-033333-ksum2lambda0005nnx4QP1} \\ (a) & (b) & (c) } \caption{In the first row, we show the original images. In the following rows, we show the result of computing the barycenter histogram and imposing it on each of the original images, with different algorithms. In the second row, we use OT. In the third row, the results were obtained with the method proposed by Papadakis et al.~\cite{Papadakis_ip11}. On the last row, we show the results obtained with the relaxed and regularized OT barycenter with $k=2,\la=0.005$. Note how the proposed algorithm is the only one that does not produce artifacts on the final images such as (a) color artifacts on the leaves and (c) different colors on the background.} \label{barflower} \end{figure*} \begin{figure*}[ht] \centering \myTriTab{ \myimgZ{./barycenter/clockmontague-1} & \myimgZ{./barycenter/clockmontague-2} & \myimgZ{./barycenter/clockmontague-3} \\ \myimgZ{./barycenter/Diag1-clockmontaguerho-033333-ksum1lambda0nnx4QP1} & %OT \myimgZ{./barycenter/Diag2-clockmontaguerho-033333-ksum1lambda0nnx4QP1} & \myimgZ{./barycenter/Diag3-clockmontaguerho-033333-ksum1lambda0nnx4QP1} \\ \myimgZ{./barycenter/nicoclock1} & \myimgZ{./barycenter/nicoclock2} & \myimgZ{./barycenter/nicoclock3} \\ \myimgZ{./barycenter/Diag1-clockmontaguerho-033333-ksum13lambda00005nnx4QP1} & \myimgZ{./barycenter/Diag2-clockmontaguerho-033333-ksum13lambda00005nnx4QP1} & \myimgZ{./barycenter/Diag3-clockmontaguerho-033333-ksum13lambda00005nnx4QP1} \\ (a) & (b) & (c) } \caption{Experiment as in Figure~\ref{barflower} applied on the images of the first row. Our results, presented in the final row, were obtained with $k=1.3$ and $\la=0.0005$. 
Contrary to OT (second row) or the method proposed by Papadakis et al.~\cite{Papadakis_ip11} (third row), the proposed method does not create artifacts on the sky and the clock for images (a) and (c).} \label{barclock} \end{figure*} \begin{figure*}[ht] \centering \myTriTab{ \myimgZ{./barycenter/clockHD-1} & \myimgZ{./barycenter/clockHD-2} & \myimgZ{./barycenter/clockHD-3} \\ \myimgZ{./barycenter/Diag1-clockHDrho-033333-ksum11lambda00005nnx4QP1} & \myimgZ{./barycenter/Diag2-clockHDrho-033333-ksum11lambda00005nnx4QP1} & \myimgZ{./barycenter/Diag3-clockHDrho-033333-ksum11lambda00005nnx4QP1} \\ } \caption{The proposed method can be applied as a preprocessing step in a pipeline for objects detection or image registration, where canceling illumination is important. On the first row, we show a set of pictures of the same object taken at different hours of the day or night, and on the second row, the result of our algorithm setting $(p,q)=(2,2)$, $k=1$ and $\la=0.0005$. Note how the algorithm is able to normalize the illumination conditions of all the images.} \label{im:colornorm} \end{figure*} As a final example, we would like to show in Figure~\ref{im:colornorm} how this method can be applied as a preprocessing before comparing/registering images of the same object obtained under different illumination conditions.
{ "alphanum_fraction": 0.7426513989, "avg_line_length": 85.135678392, "ext": "tex", "hexsha": "2665ca36698f8616fa9e7367288937e7e64dee35", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-06-04T01:52:32.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-12T17:29:21.000Z", "max_forks_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_forks_repo_licenses": [ "CECILL-B" ], "max_forks_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_forks_repo_path": "paper/sections/sec-application-bary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CECILL-B" ], "max_issues_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_issues_repo_path": "paper/sections/sec-application-bary.tex", "max_line_length": 1014, "max_stars_count": 3, "max_stars_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_stars_repo_licenses": [ "CECILL-B" ], "max_stars_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_stars_repo_path": "paper/sections/sec-application-bary.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-19T17:21:04.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-27T03:15:19.000Z", "num_tokens": 5652, "size": 16942 }
\part*{Bibliography} \manualmark \markboth{\spacedlowsmallcaps{\bibname}}{\spacedlowsmallcaps{\bibname}} \refstepcounter{dummy} \addtocontents{toc}{\protect\vspace{\beforebibskip}} \addcontentsline{toc}{chapter}{\tocEntry{\bibname}} \chapter*{\bibname}
{ "alphanum_fraction": 0.7865612648, "avg_line_length": 31.625, "ext": "tex", "hexsha": "fd924fd2f4955ec06f08e0c7ca786a09b752b174", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-01-06T14:12:40.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-03T01:55:34.000Z", "max_forks_repo_head_hexsha": "a11b5e9e8eaf465005abb26cf3ec19540c73cb66", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "zorzalerrante/pandoc-thesis", "max_forks_repo_path": "ref-appendix/references.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a11b5e9e8eaf465005abb26cf3ec19540c73cb66", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "zorzalerrante/pandoc-thesis", "max_issues_repo_path": "ref-appendix/references.tex", "max_line_length": 71, "max_stars_count": 5, "max_stars_repo_head_hexsha": "a11b5e9e8eaf465005abb26cf3ec19540c73cb66", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "zorzalerrante/pandoc-thesis", "max_stars_repo_path": "ref-appendix/references.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-13T13:20:07.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-29T23:51:08.000Z", "num_tokens": 81, "size": 253 }
\subsection{Housing}
\label{sec:housing}

\textbf{Purpose}: \textit{Isolates} and \textit{insulates} the growth environment from its surroundings (heat, light, water vapour, air). Provides structural integrity and mounting points for other subsystems. Enables system extendability via a repeated ``unit cell'' topology.

\textbf{Method}:
\begin{enumerate}
    \item \textit{Setup}:
    \begin{enumerate}
        \item Construct frame and install panels;
        \item Mount control module (w/ subsystems), connect inputs and internal subsystem connections;
        \item Install tray mounts, insert trays (w/ subsystems);
    \end{enumerate}
    \item \textit{Testing}:
    \begin{itemize}
        \item Frame construction is rigid, level, and sturdy;
        \item Panels are insulating against temperature changes;
    \end{itemize}
    \item \textit{Process}:
    \begin{enumerate}
        \item Panels insulate against heat gain/loss, are opaque, and contain light and heat via reflection;
        \item Shell construction is tight, thus sealing against moisture;
        \item Internal vertical mounting channels for systems, horizontal plane ``trays'';
        \item \textbf{Extension} (can be repeated):
        \begin{enumerate}
            \item Add a second housing;
            \item Remove dividing panel from both housings;
            \item Remove ``shared'' skeleton extrusions from second housing;
            \item Join the two housings to form one larger 2x1 housing;
            \item \textbf{Extension Modes} (may be combined in any way to suit application):
            \begin{itemize}
                \item \textit{Option 1} (Smaller Housings): Operate the combined housing off \textbf{one} control module.
                \item \textit{Option 2} (Larger Housings): Add control modules to account for additional air volume, plant count, power requirements, etc. Operate in a \textbf{controller-follower topology}.
                \item \textit{Option 3} (Frame Connection Only): Leave the dividing panel, add a control module, and operate the two PeaPods \textbf{separately}.
            \end{itemize}
        \end{enumerate}
    \end{enumerate}
    \item \textit{Shutdown}:
    \begin{enumerate}
        \item Dismount all systems, remove trays;
        \item Disassemble housing;
    \end{enumerate}
\end{enumerate}

\textbf{Features}:
\begin{itemize}
    \item \textit{Frame}: T-slotted aluminum extrusion framing with aluminum face-mounted brackets forms a cubic skeleton for rigidity/strength (high strength-to-weight aluminum) and easy component mounting and repositioning (standard mounting channels). These extrusions form the ``edges'' of the cubic housing. %Todo: cite extrusion
    \item \textit{Panels}: Graphite-enhanced expanded polystyrene (GPS) rigid foam insulation panels \cite{insulation} with reflective mylar internal lamination increase energy efficiency (GPS RSI of 0.0328$\frac{m^2 \cdot \degree C}{W}$ per mm of thickness; mylar enables light/heat reflection), as well as safety against cross-contamination and pathogens. Panels slide into extrusion channels and form a ``seal'' for greater water vapour retention. These panels form the ``faces'' of the cube. %Todo: cite mylar
    \item \textit{Trays}: Horizontal plane subframes mounted to internal vertical extrusion channels for ease of repositioning. Trays slide in/out on permanent mounts. All connections are quick-connect (i.e.\ quick-disconnect tubing for grow trays, push connectors for lighting) for ease of removal. Trays include:
    \begin{itemize}
        \item \textit{Grow Trays}: Support plants (via grow cups), aeroponic nozzles, aeroponics container, and supply/recycling lines (see \ref{sec:aeroponics}).
        \item \textit{Lighting Trays}: Support LED boards, driver board (see \ref{sec:lighting}).
    \end{itemize}
\end{itemize}
{ "alphanum_fraction": 0.7222076903, "avg_line_length": 70.7962962963, "ext": "tex", "hexsha": "c676d96a4eae6ec12b26e16503970ccc6426b04f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-10-24T02:21:18.000Z", "max_forks_repo_forks_event_min_datetime": "2021-10-24T02:21:18.000Z", "max_forks_repo_head_hexsha": "19f956f4e7612d81c203a616ff505c4dc68b9964", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PeaPodTech/PeaPod", "max_forks_repo_path": "docs/solutionoverview/tex/subsystems/Housing.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "19f956f4e7612d81c203a616ff505c4dc68b9964", "max_issues_repo_issues_event_max_datetime": "2021-10-30T05:06:16.000Z", "max_issues_repo_issues_event_min_datetime": "2021-10-30T05:06:16.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PeaPodTech/PeaPod", "max_issues_repo_path": "docs/solutionoverview/tex/subsystems/Housing.tex", "max_line_length": 505, "max_stars_count": null, "max_stars_repo_head_hexsha": "19f956f4e7612d81c203a616ff505c4dc68b9964", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PeaPodTech/PeaPod", "max_stars_repo_path": "docs/solutionoverview/tex/subsystems/Housing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 931, "size": 3823 }
\section{Introduction}
\label{sec:introduction}

This specification models the \textit{conditions} that the different parts of
a transaction have to fulfill so that they can extend a ledger, which is
represented here as a list of transactions. In particular, we model the
following aspects:

\begin{description}
\item[Preservation of value] the relationship between the total value of the
  inputs and outputs of a new transaction, and the unspent outputs.
\item[Witnesses] authentication of parts of the transaction data by means of
  cryptographic entities (such as signatures and private keys) contained in
  these transactions.
\item[Delegation] validity of delegation certificates, which delegate
  block-signing rights.
\item[Update validation] the voting mechanism, which captures the
  identification of the voters and of the participants that can post update
  proposals.
\end{description}

The following aspects will not be modeled (since they are not part of the Byron
release):

\begin{description}
\item[Stake] staking rights associated with an address.
\end{description}
{ "alphanum_fraction": 0.8067542214, "avg_line_length": 41, "ext": "tex", "hexsha": "a99a651806debdba9e671f3eca116df59dee734e", "lang": "TeX", "max_forks_count": 86, "max_forks_repo_forks_event_max_datetime": "2021-10-04T17:17:15.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-29T06:53:05.000Z", "max_forks_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "ilap/cardano-ledger-specs", "max_forks_repo_path": "byron/ledger/formal-spec/intro.tex", "max_issues_count": 1266, "max_issues_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_issues_repo_issues_event_max_datetime": "2021-11-04T12:50:51.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-18T20:23:28.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "ilap/cardano-ledger-specs", "max_issues_repo_path": "byron/ledger/formal-spec/intro.tex", "max_line_length": 79, "max_stars_count": 108, "max_stars_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ilap/cardano-ledger-specs", "max_stars_repo_path": "byron/ledger/formal-spec/intro.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-30T05:27:16.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-24T02:26:41.000Z", "num_tokens": 227, "size": 1066 }
% !TeX root = ../phd-1st-year-presentation.tex % !TeX encoding = UTF-8 % !TeX spellcheck = en_GB \section{Analysis of assembly lines} \subsection{Assembly lines} \begin{frame}{Assembly line} \begin{center}\scalebox{0.9}{\input{img/assembly_line}}\end{center} \vspace{1em} \begin{minipage}{0.6\textwidth} $N$ sequential workstations $\pl{WS}_1, \dots, \pl{WS}_N$ \begin{itemize} \item with transfer blocking \item and no buffering capacity \end{itemize} \vspace{1em} Workstation $\pl{WS_k}$ can be in one of three states \begin{itemize} \item \textit{producing}: $\pl{WS_k}$ is working on a product \item \textit{done}: $\pl{WS_k}$ is done working on a product \item \textit{idling}: $\pl{WS_k}$ is waiting for a new product \end{itemize} \end{minipage} \begin{minipage}{0.35\textwidth} \begin{center}\scalebox{0.8}{\input{img/workstation_states.tex}}\end{center} \end{minipage} \end{frame} \begin{frame}{Workstation} Each workstation $\pl{WS}_k$ \begin{itemize} \item has no internal parallelism \begin{itemize} \item at most one item being processed in each workstation \end{itemize} \item can implement complex workflows \begin{itemize} \item sequential/alternative/cyclic phases with random choices \end{itemize} \item and has GEN phases' durations \end{itemize} \begin{center}\scalebox{0.7}{\input{img/workstation}}\end{center} The last phase has no duration and encodes the \textit{done} state \end{frame} \begin{frame}{Underlying stochastic process} The underlying stochastic process of each isolated workstation\\ is a Semi Markov Process (SMP) \begin{itemize} \item due to GEN durations \item and the absence of internal parallelism \end{itemize} \vspace{1.5em} The whole assembly line finds a renewal in any case where \begin{itemize} \item every \textit{done} station is in a queue before a bottleneck \item and everything else is \textit{idling} \end{itemize} \begin{center}\scalebox{0.8}{\input{img/assembly_line_bottleneck}}\end{center} \end{frame} \subsection{Inspection} \begin{frame}{Inspection with partial observability} The assembly line can be inspected by external observers \begin{itemize} \item the line can be considered at steady-state at inspection \item there can be ambiguity about the current phase \end{itemize} \vspace{1em} An observation is a tuple $\omega=\langle \omega_0, \omega_1,\ldots, \omega_N \rangle$ \begin{itemize} \item $\omega_0$ indicates if a new product is ready to enter the line or not \item $\omega_k = \langle \sigma_k, \phi_k \rangle$ refers to $\pl{WS}_k$ \begin{itemize} \item $\sigma_k$ indicates if $\pl{WS}_k$ is \textit{idle}/\textit{producing}/\textit{done} \item $\phi_k$ identifies the set of possible current phases \end{itemize} \end{itemize} \vspace{1em} \begin{minipage}{0.65\textwidth} Two kinds of uncertainty \begin{itemize} \item about the actual current phase \begin{itemize} \item discrete \end{itemize} \item about the remaining time in the current phase% \begin{itemize} \item continuous \end{itemize} \end{itemize} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center}\scalebox{0.65}{\input{img/workstation_observation}}\end{center} \end{minipage} \end{frame} \subsection{Performance measures} \begin{frame}{Performance measures} \relaxnewsetlength{\descriptionwidth}{0.6\textwidth} \relaxnewsetlength{\schemawidth}{0.35\textwidth} \newcommand{\schemascale}{0.45} \relaxnewsetlength{\vspacegap}{1.5em} \begin{minipage}{\descriptionwidth} \textbf{Time To Done} \begin{itemize} \item The remaining time until workstation $k$,\\ according to observation $\omega$,\\ reaches 
the \textit{done} state \end{itemize} \end{minipage} \begin{minipage}{\schemawidth} \begin{center}\scalebox{\schemascale}{\input{img/workstation_states_TTD}}\end{center} \end{minipage} \vspace{\vspacegap} \begin{minipage}{\descriptionwidth} \textbf{Time To Idle} \begin{itemize} \item The remaining time until workstation $k$,\\ according to observation $\omega$,\\ reaches the \textit{idling} state \end{itemize} \end{minipage} \begin{minipage}{\schemawidth} \begin{center}\scalebox{\schemascale}{\input{img/workstation_states_TTI}}\end{center} \end{minipage} \vspace{\vspacegap} \begin{minipage}{\descriptionwidth} \textbf{Time To Start Next} \begin{itemize} \item The remaining time until workstation $k$,\\ according to observation $\omega$,\\ starts the production of a new product \end{itemize} \end{minipage} \begin{minipage}{\schemawidth} \begin{center}\scalebox{\schemascale}{\input{img/workstation_states_TTSN}}\end{center} \end{minipage} \end{frame} \subsection{Evaluation of performance measures} \begin{frame}{Time To Done} \begin{equation*} \mbox{TTD}(k,\omega) := \left\{ \begin{array}{ll} \displaystyle \sum_{\gamma\in \phi_k} P_{k, \gamma,\omega} \cdot (R(k,\gamma) + Z(k,\gamma)), & \mbox{ if } \sigma_k=\mbox{\em producing} \medskip\\ \mbox{TTD}(k-1,\omega) + V(k), & \mbox{ if } \sigma_k=\mbox{\em idling} \medskip\\ 0, & \mbox{ if } \sigma_k=\mbox{\em done} \end{array} \right. \label{eq:ttd} \end{equation*} \begin{itemize} \item $P_{k,\gamma,\omega}$ probability weight that $\pl{WS}_k$ is in phase $\gamma$ according to $\omega$ \item $R(k,\gamma)$ \textit{remaining time} in phase $\gamma$ of $\pl{WS}_k$ \item $Z(k,\gamma)$ \textit{execution time} of phases of $\pl{WS}_k$ that follow $\gamma$ \item $V(k)$ \textit{production time} of $\pl{WS}_k$ \end{itemize} \begin{center}\hspace{-0.5cm}\scalebox{0.8}{\input{img/assembly_line_TTD}}\end{center} Backward recursive evaluation \end{frame} \begin{frame}{Time To Idle} \begin{equation*} \mbox{TTI}(k,\omega) := \left\{ \begin{array}{ll} \max\{\mbox{TTD}(k,\omega),\mbox{TTI}(k+1,\omega)\}, & \mbox{ if } \sigma_k\in\{\mbox{\em producing}, \mbox{\em done}\} \medskip\\ 0, & \mbox{ if } \sigma_k=\mbox{\em idling} \end{array} \right. 
\label{eq:tti} \end{equation*} \begin{itemize} \item $\mbox{TTI}(k,\omega)=\max\{ \mbox{TTD}(k,\omega), \ldots, \mbox{TTD}(k+n,\omega) \}$ \begin{itemize} \item $\pl{WS}_j$ producing/done $\forall j \in [k,k+n]$ \item either $\pl{WS}_{k+n}$ last workstation or $\pl{WS}_{k+n+1}$ idling \end{itemize} \item $\pl{WS}_k$ becomes idle when the bottleneck finishes its production \end{itemize} \begin{center}\hspace{-1.5cm}\scalebox{0.8}{\input{img/assembly_line_TTI}}\end{center} Forward recursive evaluation \end{frame} \begin{frame}{Time To Start Next} \begin{equation*} \mbox{TTSN}(k,\omega) := \max\{ \mbox{TTI}(k,\omega), \mbox{TTD}(k-1,\omega) \} \label{eq:ttn} \end{equation*} \vspace{1em} \begin{center}\hspace{-1.5cm}\scalebox{0.8}{\input{img/assembly_line_TTSN}}\end{center} Forward and backward recursive evaluation \end{frame} \begin{frame}{Disambiguation of observed phases} Resolve observed (producing) phases' ambiguity \begin{itemize} \item steady-state probability that $\pl{WS}_k$ is in phase $\gamma$ according to $\omega$ \end{itemize} Given observation $\phi_k$ for workstation $\pl{WS}_k$ \begin{itemize} \item we compute probability $P_{k,\gamma,\omega}$ \item that it was actually $\gamma$ that produced $\phi_k$ \end{itemize} \begin{equation*} P_{k,\gamma,\omega} = \frac{\tilde{\pi}(\gamma)}{\displaystyle \sum_{\gamma' \in \phi_k} \tilde{\pi}(\gamma')} \label{eq:probabilityObservation} \end{equation*} \begin{itemize} \item $\tilde{\pi}(\gamma)$ steady-state probability of phase $\gamma$ in an isolated model of $\pl{WS}_k$ \end{itemize} \end{frame} \begin{frame}{Isolated workstation model} The isolated workstation model represents a workstation\\ repeatedly processing a product \begin{itemize} \item one product being processed \item after its production, it's moved back to the entry point of the workstation \end{itemize} \begin{center}\scalebox{0.6}{\input{img/isolated_workstation}}\end{center} It can be used for two reasons \begin{itemize} \item steady-state probabilities of producing phases are independent \item the inspection is at steady-state \begin{itemize} \item arrivals and productions can be considerer in equilibrium \end{itemize} \end{itemize} \end{frame} \begin{frame}{Remaining time} Evaluation of $F_{R(k,\gamma)}(t) = $ CDF of $R(k,\gamma)$ \begin{itemize} \item $R(k,\gamma)$ \textit{remaining time} in phase $\gamma$ of $\pl{WS}_k$ \end{itemize} \vspace{1em} Problem! 
\begin{itemize} \item remaining times of enabled GEN transitions are \textit{dependent} \item joint probabilities don't allow for a compositional approach \end{itemize} \vspace{1em} $\sfrac{1}{3}$ \textit{Immediate} approximation \begin{itemize} \item assume that phase $\gamma$ is inspected at its ending \begin{itemize} \item $\tilde{F}_{R(k, \gamma)}(t) = 1 \quad \forall t$ \end{itemize} \item represents an \textit{upper bound} \end{itemize} \vspace{1em} $\sfrac{2}{3}$ \textit{Newly enabled} approximation \begin{itemize} \item assume that phase $\gamma$ is inspected at its beginning \begin{itemize} \item $\tilde{F}_{R(k, \gamma)}(t) = F_{\gamma}(t)$ \item $F_{\gamma}(t)$ original CDF of the duration of $\gamma$ \end{itemize} \item represents a \textit{lower bound} \end{itemize} \end{frame} \begin{frame}{Remaining time} $\sfrac{3}{3}$ \textit{Independent remaining times} approximation \begin{itemize} \item consider the remaining times of ongoing phases as \textit{independent} \item represents a (better) \textit{lower bound} \end{itemize} \begin{block}{Theorem: positive correlation \& stochastic order} If $\hat{R}$ is an independent version of vector $R$ of positively correlated remaining times of ongoing phases, then $\hat{R} \geq_{st} R$ \end{block} \vspace{1em} Steady-state distribution of $\hat{R}(k,\gamma)$\\ computed according to the Key Renewal Theorem\footnote{\scriptsize Serfozo, R., 2009. Basics of applied stochastic processes. Springer Science \& Business Media.} \begin{equation*} \tilde{F}_{R(k,\gamma)}(t) = \frac{1}{\mu} \int_{0}^{t} [1 - F_{\gamma}(s)]ds \end{equation*} \begin{itemize} \item $\mu$ expected value of $F_{\gamma}(t)$ \end{itemize} \end{frame} \begin{frame}{Execution and production time} Evaluation of $F_{Z(k,\gamma)}(t)$ and $F_{V(k)}$ \begin{itemize} \item $Z(k,\gamma)$ \textit{execution time} of phases of $\pl{WS}_k$ that follow $\gamma$ \item $V(k)$ \textit{production time} of $\pl{WS}_k$ \end{itemize} \vspace{1em} CDFs of $Z(k,\gamma)$ and $V(k)$ are computed as transient probabilities \begin{itemize} \item $F_{Z(k,\gamma)}$ transient probability from phase after $\gamma$ to final phase of $\pl{WS}_k$ \item $F_{V(k)}$ transient probability from first phase to final phase of $\pl{WS}_k$ \end{itemize} \vspace{2em} Upper/lower bounds for \textit{TTD}, \textit{TTI} and \textit{TTSN} can be evaluated \begin{itemize} \item \textit{convolution} and \textit{max} operations maintain stochastic order \end{itemize} \end{frame} \subsection{Experimentation} \begin{frame}{Case study assembly lines} Sequential, alternative and cyclic workstations \vspace{0.5em} \begin{minipage}{0.3\textwidth} \begin{center}\scalebox{0.5}{\input{img/serial_workstation}}\end{center} \end{minipage} \begin{minipage}{0.325\textwidth} \begin{center}\scalebox{0.5}{\input{img/alternative_workstation}}\end{center} \end{minipage} \begin{minipage}{0.325\textwidth} \begin{center}\scalebox{0.5}{\input{img/cyclic_workstation}}\end{center} \end{minipage} \vspace{1em} \begin{minipage}{0.5\textwidth} \textit{Simple} assembly line \begin{itemize} \item two sequential workstations \item both in phase $\pl{p}_1$ at inspection \end{itemize} \end{minipage} \begin{minipage}{0.45\textwidth} \begin{center}\scalebox{0.7}{\input{img/simple_assembly_line}}\end{center} \end{minipage} \vspace{2em} \begin{minipage}{0.5\textwidth} \textit{Complex} assembly line \begin{itemize} \item three repetitions \item of sequential/alternative/cyclic ws \item all observed in \textit{producing} \end{itemize} \end{minipage} 
\begin{minipage}{0.45\textwidth} \begin{center}\scalebox{0.7}{\input{img/complex_assembly_line}}\end{center} \end{minipage} \end{frame} \begin{frame}{Simple assembly line}{TTDone, TTIdle, TTStartNext} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTD(1,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{simple_ttd_cdf}} \end{center} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTI(1,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{simple_tti_cdf}} \end{center} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTSN(2,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{simple_ttsn_cdf}} \end{center} \end{minipage} \begin{minipage}{0.5\textwidth} \textit{TTD}, \textit{TTI} and \textit{TTSN} computed in \begin{itemize} \item 41/45/42 min for simulation \item 0.15/0.18/0.10 s for bounds \end{itemize} \end{minipage} \begin{minipage}{0.45\textwidth} \vspace{2em} Very good approximation results \begin{itemize} \item especially for \textit{independent remaining times} \end{itemize} Feasible approach \begin{itemize} \item very fast bounds evaluation \item compared to simulation \end{itemize} \end{minipage} \end{frame} \begin{frame}{Complex assembly line}{TTDone, TTIdle, TTStartNext} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTD(5,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{complex_ttd_cdf}} \end{center} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTI(5,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{complex_tti_cdf}} \end{center} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center} {\tiny $F_{TTSN(5,\omega)}$}\\ \colorbox{white}{\includegraphics[scale=0.25]{complex_ttsn_cdf}} \end{center} \end{minipage} \vspace{4em} \begin{minipage}{0.5\textwidth} \textit{TTD}, \textit{TTI} and \textit{TTSN} computed in \begin{itemize} \item 0.126/0.123/0.75 s for bounds \end{itemize} \end{minipage} \begin{minipage}{0.45\textwidth} Scalable solution \begin{itemize} \item in a complex scenario \item simulation would be infeasible \end{itemize} \end{minipage} \end{frame}
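\begin{frame}[fragile]{Backup: recursive evaluation sketch}
	A minimal illustrative sketch of the \textit{TTD}/\textit{TTI}/\textit{TTSN} recursions,
	written for scalar expected values (i.e.\ assuming deterministic phase durations);
	\texttt{P}, \texttt{R}, \texttt{Z}, \texttt{V} and the number of workstations \texttt{N}
	are assumed helper functions/constants, not part of the actual tool:
	\begin{footnotesize}
\begin{verbatim}
def ttd(k, obs):                       # backward recursion
    if k == 0: return 0.0              # line entry: new product assumed ready
    sigma, phases = obs[k]
    if sigma == "done":   return 0.0
    if sigma == "idling": return ttd(k - 1, obs) + V(k)
    return sum(P(k, g, obs) * (R(k, g) + Z(k, g)) for g in phases)

def tti(k, obs):                       # forward recursion
    if k > N or obs[k][0] == "idling": return 0.0
    return max(ttd(k, obs), tti(k + 1, obs))

def ttsn(k, obs):                      # forward and backward
    return max(tti(k, obs), ttd(k - 1, obs))
\end{verbatim}
	\end{footnotesize}
\end{frame}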
{ "alphanum_fraction": 0.6030224627, "avg_line_length": 37.7073170732, "ext": "tex", "hexsha": "73eac5e10144354b2ea9d29e1117c5c16cbb29fa", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "oddlord/uni", "max_forks_repo_path": "phd/committee/first-year/presentation/body/analysis_assembly_lines.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "oddlord/uni", "max_issues_repo_path": "phd/committee/first-year/presentation/body/analysis_assembly_lines.tex", "max_line_length": 168, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "oddlord/uni", "max_stars_repo_path": "phd/committee/first-year/presentation/body/analysis_assembly_lines.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5147, "size": 17006 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% tWZ %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection[\tWZ]{\tWZ}
\label{subsec:tWZ}

This section describes the MC samples used for the modelling of \tWZ\ production.
Section~\ref{subsubsec:tWZ_aMCP8} describes the \MGNLOPY[8] samples.

\subsubsection[MadGraph5\_aMC@NLO+Pythia8]{\MGNLOPY[8]}
\label{subsubsec:tWZ_aMCP8}

\paragraph{Samples}
%\label{par:tWZ_aMCP8_samples}

The descriptions below correspond to the samples in Table~\ref{tab:tWZ_aMCP8}.

\begin{table}[htbp]
\begin{center}
\caption{Nominal \tWZ\ samples produced with \MGNLOPY[8].}
\label{tab:tWZ_aMCP8}
\begin{tabular}{ l | l }
\hline
DSID range & Description \\
\hline
410408 & \tWZ\, DR1 \\
410409 & \tWZ\, DR2 \\
\hline
\end{tabular}
\end{center}
\end{table}

\paragraph{Short description:}

The production of \tWZ\ events was modelled using the \MGNLO[2.3.3]~\cite{Alwall:2014hca}
generator at NLO with the \NNPDF[3.0nlo]~\cite{Ball:2014uwa} parton distribution function~(PDF).
The events were interfaced with \PYTHIA[8.212]~\cite{Sjostrand:2014zea} using the A14 tune~\cite{ATL-PHYS-PUB-2014-021} and the
\NNPDF[2.3lo]~\cite{Ball:2014uwa} PDF set.
The decays of bottom and charm hadrons were simulated using the \EVTGEN[1.2.0] program~\cite{Lange:2001uf}.

\paragraph{Long description:}

The production of \tWZ\ events was modelled using the \MGNLO[2.3.3]~\cite{Alwall:2014hca}
generator at NLO with the \NNPDF[3.0nlo]~\cite{Ball:2014uwa} parton distribution function~(PDF).
The events were interfaced with \PYTHIA[8.212]~\cite{Sjostrand:2014zea} using the A14 tune~\cite{ATL-PHYS-PUB-2014-021} and the
\NNPDF[2.3lo]~\cite{Ball:2014uwa} PDF set.
The top quark and the $Z$ boson were decayed at LO using
\MADSPIN~\cite{Frixione:2007zp,Artoisenet:2012st} to preserve spin correlations.
While the top quark was allowed to decay inclusively, the $Z$ boson decay was restricted to a pair of charged leptons.
The five-flavour scheme was used, in which all quark masses are set to zero except that of the top quark.
The renormalisation and factorisation scales were set to the top-quark mass.
The diagram removal scheme described in Ref.~\cite{Frixione:2008yi} was employed to handle the interference between \tWZ\ and \ttZ, and was applied to the \tWZ\ sample.
A sample with the alternative scheme described in Ref.~\cite{Demartin:2016axk} was produced to assess the associated systematic uncertainty.
The decays of bottom and charm hadrons were simulated using the \EVTGEN[1.2.0] program~\cite{Lange:2001uf}.
{ "alphanum_fraction": 0.7279812938, "avg_line_length": 45.8214285714, "ext": "tex", "hexsha": "b03e47309cb7c89634416333b6b81f3e7d26693a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "diegobaronm/QTNote", "max_forks_repo_path": "template/MC_snippets/tWZ.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "diegobaronm/QTNote", "max_issues_repo_path": "template/MC_snippets/tWZ.tex", "max_line_length": 140, "max_stars_count": null, "max_stars_repo_head_hexsha": "e2640e985974cea2f4276551f6204c9fa50f4a17", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "diegobaronm/QTNote", "max_stars_repo_path": "template/MC_snippets/tWZ.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 865, "size": 2566 }
% Released under the MIT license. \documentclass{ronr-bylaws} \usepackage{lipsum} \author{Brodi Elwood} \partyname{Party Name} \date{\today} \begin{document} % % \maketitlepage % % \tableofcontents \newpage % \maketitle \article{Lorem ipsum} % \lipsum[66] % \section{Lorem ipsum} \lipsum[75] \section{Lorem ipsum} \lipsum[75] % % \article{Lorem ipsum} \lipsum[1] % \article{Lorem ipsum} \lipsum[2] % \section{Lorem ipsum} \lipsum[3] % % \end{document}
{ "alphanum_fraction": 0.6585365854, "avg_line_length": 10.6956521739, "ext": "tex", "hexsha": "a98696898a27919243c97c4c9914e9541c9cd33e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "586f0fdff85b38a4d15d2590298d664bc2500f10", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bdelwood/ronr-bylaws-template", "max_forks_repo_path": "example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "586f0fdff85b38a4d15d2590298d664bc2500f10", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bdelwood/ronr-bylaws-template", "max_issues_repo_path": "example.tex", "max_line_length": 33, "max_stars_count": null, "max_stars_repo_head_hexsha": "586f0fdff85b38a4d15d2590298d664bc2500f10", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bdelwood/ronr-bylaws-template", "max_stars_repo_path": "example.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 163, "size": 492 }
\documentclass[12pt,letterpaper]{article} \def\myauthor{Matteo Rizzuto} \def\mytitle{mr-vita} \def\myemail{[email protected]} \def\myweb{matteorizzuto} \def\mytwitter{@MatteoRiz} \def\myorcid{0000-0003-3065-9140} \def\myphone{+39 (320) 196 2339} \def\mykeywords{ matteo rizzuto, resume, curriculum, vita, curriculum vita, cv, matteo, rizzuto, mr } \setlength\parindent{0pt} \newenvironment{itemize*}% {\begin{itemize}% \setlength{\itemsep}{2.5pt}}% {\end{itemize}} \usepackage{url} \usepackage{fancyhdr} \usepackage{lastpage} %\usepackage{ocgtools} \usepackage{enumitem} \setlist{nolistsep,leftmargin=0.15in} \usepackage[log-declarations=false]{xparse} \usepackage[garamond]{mathdesign} \usepackage[no-math]{fontspec} % \usepackage{fontawesome} \usepackage{fontawesome5} \usepackage{microtype} \usepackage[margin=1.125in,top=1.3in,right=1in,left=2in]{geometry} \usepackage[strict]{changepage} \usepackage{ocgx2} %\usepackage[T1]{fontenc} \usepackage[ ocgcolorlinks, urlcolor={[rgb]{0,0,0.4}}, unicode, plainpages=false, pdfpagelabels, pdftitle={\mytitle}, pdfauthor={\myauthor}, pdfkeywords={\mykeywords} ]{hyperref} % fix ocgcolor link breaking; thanks due to Benjamin Lerner % (http://goo.gl/VZKR7M) \setmainfont{Adobe Garamond Pro} \makeatletter \AtBeginDocument{% \newlength{\temp@x}% \newlength{\temp@y}% \newlength{\temp@w}% \newlength{\temp@h}% \def\my@coords#1#2#3#4{% \setlength{\temp@x}{#1}% \setlength{\temp@y}{#2}% \setlength{\temp@w}{#3}% \setlength{\temp@h}{#4}% \adjustlengths{}% \my@pdfliteral{\strip@pt\temp@x\space\strip@pt\temp@y\space\strip@pt\temp@w\space\strip@pt\temp@h\space re}}% \ifpdf \typeout{In PDF mode}% \def\my@pdfliteral#1{\pdfliteral page{#1}}% I don't know why % this command... \def\adjustlengths{}% \fi \ifxetex \def\my@pdfliteral #1{\special{pdf: literal direct #1}}% isn't equivalent to this one \def\adjustlengths{\setlength{\temp@h}{-\temp@h}\addtolength{\temp@y}{1in}\addtolength{\temp@x}{-1in}}% \fi% \def\Hy@colorlink#1{% \begingroup \ifHy@ocgcolorlinks \def\Hy@ocgcolor{#1}% \my@pdfliteral{q}% \my@pdfliteral{7 Tr}% Set text mode to clipping-only \else \HyColor@UseColor#1% \fi }% \def\Hy@endcolorlink{% \ifHy@ocgcolorlinks% \my@pdfliteral{/OC/OCPrint BDC}% \my@coords{0pt}{0pt}{\pdfpagewidth}{\pdfpageheight}% \my@pdfliteral{F}% Fill clipping path (the url's text) with current color \my@pdfliteral{EMC/OC/OCView BDC}% \begingroup% \expandafter\HyColor@UseColor\Hy@ocgcolor% \my@coords{0pt}{0pt}{\pdfpagewidth}{\pdfpageheight}% \my@pdfliteral{F}% Fill clipping path (the url's text) with \Hy@ocgcolor \endgroup% \my@pdfliteral{EMC}% \my@pdfliteral{0 Tr}% Reset text to normal mode \my@pdfliteral{Q}% \fi \endgroup }% } \makeatother % end fixes % \newcommand{\ML}{\textsc{Matlab}} % \newcommand{\Simu}{Simulink} % \newcommand{\MLS}{\ML{}/\Simu{}} % \newcommand{\apdl}{\textsc{APDL}} % \newcommand{\ansys}{\textsc{Ansys}} % \newcommand{\fluent}{\textsc{Fluent}} % \newcommand\CPP{C/C\ensuremath{+}\ensuremath{+}} % \newcommand{\Star}{\textsc{Star-CCM\ensuremath{+}}} \newcommand{\mhead}[1]{\leavevmode\marginpar{\sffamily\footnotesize #1}} \newcommand{\rdate}[1]{{\addfontfeature{Numbers=OldStyle} \hfill #1}} \renewcommand{\date}[1]{{\addfontfeature{Numbers=OldStyle} #1}} \renewcommand{\labelitemi}{-} \setmainfont[ Ligatures={TeX,Common}, BoldFont={AGaramondPro-Semibold}, ]{Adobe Garamond Pro} \setsansfont[ Ligatures={TeX,Common}, Letters=SmallCaps, Color=005100, ]{Adobe Garamond Pro} \setmonofont[Scale=0.85]{FontAwesome} \makeatletter % fix for \hrulefill w/ mathdesign package 
\def\hrulefill{\leavevmode\leaders\hrule\hfill\kern\z@} \makeatletter \begin{document} \flushbottom \pagestyle{fancy} \setlength\headwidth{6.5in} \lhead{\textsc{matteo~rizzuto}} \rhead{\textsc{\thepage{}~of~\pageref*{LastPage}}} \cfoot{} \thispagestyle{empty} \begin{adjustwidth}{-1in}{} {\Huge {\textsc{% {\addfontfeature{Style=TitlingCaps}M}atteo~ {\addfontfeature{Style=TitlingCaps}R}izzuto} } } \hfill\hfill\hfill { \begin{minipage}[b]{2in} \flushleft \footnotesize Ecosystem Ecology Lab\\ Department of Biology \\ Memorial University of Newfoundland and Labrador\\ St.\ John's,\ NL, Canada \end{minipage} \hfill \begin{minipage}[b]{1.5in} \flushright \footnotesize \href{tel:\myphone}{\myphone} \\ %\texttt{}~ \href{mailto:\myemail}{\myemail} \\ \href{https://www.linkedin.com/in/\myweb}{\texttt{\faLinkedin}}~\href{https://\myweb.github.io}{\texttt{\faGithub}~\myweb}\\ \href{https://twitter.com/\mytwitter}{\texttt{\faTwitter}~\mytwitter}\\ \href{https://orcid.org/0000-0003-3065-9140}{\texttt{\faOrcid}~\myorcid} \end{minipage} }\par \hrulefill \end{adjustwidth} \reversemarginpar \setlength\marginparwidth{0.85in} \smallskip % \section{Education} \mhead{Education}% \textbf{Ph.D. Candidate}, \emph{\href{http://www.mun.ca/biology}{Biology}} \rdate{2016--\textsc{present}}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \begin{itemize*} \item Thesis: From elements to landscapes: the role of terrestrial consumers in ecosystem functioning \item Supervisor: Dr.~Shawn J.~Leroux \end{itemize*} \medskip \textbf{Master of Research (Distinction)}, \emph{\href{https://www.imperial.ac.uk/study/pg/life-sciences/ecology-evolution-conservation-research/}{Ecology, Evolution, and Conservation}} \rdate{2013--2014}\\ Imperial College London-Silwood Park, Ascot, UK \begin{itemize*} \item Research Projects: \begin{itemize*} \item The scaling of activity budgets in carnivores; Supervisors: Dr.~Samraat Pawar and Dr.~Chris Carbone \item Comparison of two commonly used methods to estimate species diversity: dung counts and camera trapping; Supervisors: Prof.~Mick Crawley and Prof.~Joris Cromsigt \end{itemize*} \end{itemize*} \medskip \textbf{Master of Science}, \emph{\href{https://goo.gl/rCzbq7}{Evolution of Animal and Human Behaviour}} \rdate{2009--2012}\\ University of Turin, Turin, Italy \begin{itemize*} \item Thesis: Predator-prey interactions: feeding ecology of the Wolf (\textit{Canis lupus}) and anti-predator behaviour of the Chamois (\textit{Rupicapra rupicapra}) in the Western Alps \item Supervisor: Dr.~Francesca Marucco \end{itemize*} \medskip \textbf{Bachelor of Science (Honours)}, \emph{\href{http://biologia.campusnet.unito.it/do/home.pl}{Biology}} \rdate{2004--2009}\\ University of Turin, Turin, Italy \begin{itemize*} \item Thesis: Individual characteristics of vocalisations emitted during the song of \textit{Indri indri} \item Supervisors: Prof.~Cristina Giacoma and Dr.~Marco Gamba \end{itemize*} \bigskip % \section{Publications} \mhead{Publications}% \begingroup \small Note: for papers where I am first author, I led study design, data collection, analysis, and writing. For papers where I am fourth author or later, I contributed to ideas, data collection, and writing. An asterisk (*) stands for equal contribution. \endgroup \medskip \emph{Peer Reviewed} \begin{itemize*} \item Heckford, T. R., Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Richmond, I. C., Wiersma, Y. 
F.~(\date{2021}) Spatially explicit correlates of plant functional traits inform landscape patterns of resource quality.~\emph{Landscape Ecology}.\\ DOI: \href{https://doi.org/10.1007/s10980-021-01334-3}{10.1007/s10980-021-01334-3} \item \textbf{Rizzuto, M.}, Leroux, S.J., Vander Wal, E., Richmond, I. C., Heckford, T. R., Balluffi-Fry, J., Wiersma, Y. F.~(\date{2021}) Forage stoichiometry predicts the home range size of a small terrestrial herbivore.~\emph{Oecologia} \textbf{197}(2), 327--338.\\ DOI:~\href{https://rdcu.be/cmB38}{10.1101/2020.08.13.248831} \rdate{\textbf{\textit{Highlighted Student Paper}}} \item Ellis-Soto, D.\textsuperscript{*}, Ferraro, K. M.\textsuperscript{*}, \textbf{Rizzuto, M.}, Briggs, E., Monk, J. D., and Schmitz, O. J.~(\date{2021}) A methodological roadmap to quantify animal-vectored spatial ecosystem subsidies.~\emph{Journal of Animal Ecology} \textbf{90}(7), 1605--1622.~DOI:~\href{https://doi.org/10.1111/1365-2656.13538}{10.1111/1365-2656.13538} \item Richmond, I. C., Leroux, S. H., Vander Wal, E., Heckford, T. R., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Kennah, J., Wiersma, Y. F.~(\date{2020}) Temporal variation and its drivers in the elemental traits of four boreal plant species.~\emph{Journal of Plant Ecology} \textbf{14}(3), 398--413.~DOI:~\href{https://doi.org/10.1093/jpe/rtaa103}{10.1093/jpe/rtaa103} \item Balluffi-Fry, J., Leroux, S. J., Wiersma, Y. F., Heckford, T. R., \textbf{Rizzuto, M.}, Richmond, I. C., Vander Wal, E.~(\date{2020}) Quantity-quality trade-offs revealed using a multiscale test of herbivore resource selection on elemental landscapes.~\emph{Ecology and Evolution} \textbf{10}(24), 13847--13859.~DOI:~\href{https://doi.org/10.1002/ece3.6975}{10.1002/ece3.6975} \item \textbf{Rizzuto, M.}, Leroux, S. J., Vander Wal, E., Wiersma, Y. F., Heckford, T. R., Balluffi-Fry, J.~(\date{2019}) Patterns and potential drivers of intraspecific variability in the body C, N, and P composition of a terrestrial vertebrate, the snowshoe hare (\textit{Lepus americanus}).~\textit{Ecology and Evolution} \textbf{9}(24), 14453--14464.~DOI:~\href{https://doi.org/10.1002/ece3.5880}{10.1002/ece3.5880} \item \textbf{Rizzuto, M.}, Carbone, C., and Pawar, S.~(\date{2018}) Foraging constraints reverse the scaling of activity time in carnivores.~\emph{Nature Ecology and Evolution} \textbf{2}(2), 247--253.~DOI: \href{https://doi.org/10.1038/s41559-017-0386-1}{10.1038/s41559-017-0386-1} \rdate{\textbf{\textit{Cover story}}} \end{itemize*} % \bigskip \medskip % \newpage \emph{In Progress} \begin{itemize*} \item McLeod, A.M., Leroux, S.J., \textbf{Rizzuto, M.}, Leibold, M.A., Schiesari, L. Integrating ecosystem and contaminant models to study the spatial dynamics of contaminants. Submitted. \emph{The American Naturalist}, manuscript id: AMNAT-S-21-00583. \item Balluffi-Fry, J., Leroux, S.J., Wiersma, Y.F., Richmond, I.C., Heckford, T.R., \textbf{Rizzuto, M.}, Kennah, J.L., Vander Wal, E. Integrating plant stoichiometry and feeding experiments: state-dependent forage choice and its implications on body mass. In review. \emph{Oecologia}, manuscript id: OECO-D-21-00125.\\ Preprint: \href{https://doi.org/10.1101/2021.02.16.431523}{10.1101/2021.02.16.431523} \item Heckford, T. R., Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Richmond, I. C., Wiersma, Y. F. Foliar elemental niche responses of balsam fir (\textit{Abies balsamea}) and white birch (\textit{Betula papyrifera}) to differing community types across geographic scales. 
In revision, \emph{Functional Ecology}, manuscript id: FE-2020-00432. \item Little, C. J.\textsuperscript{*}, \textbf{Rizzuto, M.}\textsuperscript{*}, Luhring, T. M., Monk, J. D., Nowicki, R. J., Paseka, R. E., Stegen, J. C., Symons, C. C., Taub, F. B., Yan, J. D. L. Filling the Information Gap in Meta-Ecosystem Ecology. Preprint: \href{https://doi.org/10.32942/osf.io/hc83u}{10.32942/osf.io/hc83u} \item Richmond, I. C., Balluffi-Fry, J., Vander Wal, E., Leroux, S. J., \textbf{Rizzuto, M.}, Heckford, T. R., Kennah, J. L., Riefesel, G. R., Wiersma, Y. F. Individual snowshoe hares manage risk differently: Integrating stoichiometric distribution models and foraging ecology. In review.~\emph{Journal of Mammalogy}, manuscript id: JMAMM-2021-026. \end{itemize*} \medskip \emph{Scientific Outreach} \begin{itemize*} \item Cagnacci, F., Rocca, M., Nicoloso, S., Ossi, F., Peters, W., Mancinelli, S., Valent, M., \textbf{Rizzuto, M.}, Hebblewhite, M.~(\date{2013}).~\emph{Il progetto 2C2T.}~Il Cacciatore Trentino, 93, 4--15. \end{itemize*} \bigskip % \section{Conference Presentations} % \newpage \mhead{Conference\\ Presentations}% \emph{Talks} \begin{itemize*} \setlength\itemsep{0.25em} \item \textbf{Rizzuto, M.}, Leroux, S. J., Vander Wal, E., Wiersma, Y., Heckford, T. R., Balluffi-Fry, J. \emph{Ontogeny and Ecological Stoichiometry of Snowshoe hares (Lepus americanus) in the Boreal Forests of Newfoundland.} Canadian Society for Ecology and Evolution Annual General Meeting, Guelph, ON, Canada. \date{18--21 \textsc{jul.}~2018} \item \textbf{Rizzuto, M.}, Carbone, C., and Pawar, S. \emph{Bio-mechanical constraints on foraging reverse the scaling of activity rate among carnivores.} Canadian Society for Ecology and Evolution Annual General Meeting, St.\ John's, NL, Canada. \date{7--11 \textsc{jul.}~2016} \end{itemize*} % \medskip % \emph{Co-authored} % \vspace{0.25em} % \begin{itemize*} % \setlength\itemsep{0.25em} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Richmond, I., Wiersma, Y. \textit{Investigating the effect of competition on the intraspecific variability of foliar elemental traits.} Atlantic Society of Fish and Wildlife Biologists Annual General Meeting, Corner Brook, NL, Canada. \date{21--23 \textsc{oct.}~2018} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Wiersma, Y. \textit{Spatial Variability in the Elemental Composition of Plants: Implications for Herbivores on the Resource Stoichiometric Landscape.} Canadian Society of Zoologists Annual General Meeting, St.\ John’s, NL, Canada. \date{7--11 \textsc{may}~2018} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Wiersma, Y. \textit{From Elements to Communities to Ecosystems.} Nature Newfoundland and Labrador, St.\ John’s, NL, Canada. \date{15 \textsc{feb.}~2018} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Wiersma, Y. \textit{The stoichiometric landscape: mapping the biogeochemical matrix with wildlife in mind.} Geomatics Atlantic Conference, St.\ John’s, NL, Canada. \date{23--24 \textsc{nov.}~2017} % \end{itemize*} \medskip \emph{Posters}% \begin{itemize*} \setlength\itemsep{0.25em} \item \textbf{Rizzuto, M.}, Leroux, S. J., Vander Wal, E., Wiersma, Y., Heckford, T. R., Balluffi-Fry, J. 
\emph{Beyond Diffusion: Animal-Mediated Nutrient Transport at Different Spatial Scales.} ``Unifying Ecology Across Scales'' Gordon Research Seminar and Conference, Biddeford, ME, USA.\ \date{21--27 \textsc{jul.}~2018} \end{itemize*} % \medskip % \emph{Co-authored} % \vspace{0.25em} % \begin{itemize*} % \setlength\itemsep{0.25em} % \item Balluffi-Fry J.\(^*\), Vander Wal E., Leroux S. J., Wiersma Y., Heckford T. R., \textbf{Rizzuto M.} \emph{Using forage elemental % composition to test for scale-dependent quality and quantity trade-offs in moose.} Canadian Society for Ecology and Evolution Annual General Meeting, Guelph, ON, Canada. \date{18--21 \textsc{jul.}~2018} % \item Balluffi-Fry J.\(^*\), Vander Wal E., Leroux S. J., Wiersma Y., Heckford T. R., \textbf{Rizzuto M.} \emph{Multi-scale test of moose foraging % strategy using quantitative elemental measures for forage quality and % quantity.} Memorial University Biology Graduate Student Symposium, St.\ John's, % NL, Canada. \date{11--13 \textsc{apr.}~2018} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, M.}, Balluffi-Fry, J., Wiersma, Y. \emph{Constructing the % Stoichiometric Landscape: Spatial Correlates of Plant Elemental Composition.} % Memorial University Biology Graduate Student Symposium, St.\ John's, NL, Canada. \date{11--13 \textsc{apr.}~2018} % \item Heckford, T. R.\(^*\), Leroux, S. J., Vander Wal, E., \textbf{Rizzuto, % M.}, Wiersma, Y. \emph{A tale of two trees: stoichiometry of balsam fir and % white birch in the boreal forest.} Canadian Society for Ecology and Evolution % Annual General Meeting, Victoria, BC, Canada. \date{7--11 \textsc{may}~2017} % \end{itemize*} \bigskip % \section{Honors and Awards} \mhead{Honors \& \newline Awards}% \par\vspace{-\baselineskip}\begin{itemize*} \item Mitacs Research Training Award, Mitacs Canada \rdate{2020} \item First Place, H5P Maker Session, Centre for Innovation in Teaching and Learning, Memorial University of Newfoundland and Labrador \rdate{2019} % \begin{itemize*} % \item[] \textit{Awarded to the best HTML5 interactive learning object created during the session} % \end{itemize*} \item Mitacs Globalink Research Award, Mitacs Canada \rdate{2018} % \begin{itemize*} % \item[] \textit{Awarded to senior graduate students to conduct research abroad} % \end{itemize*} \item Dean's Doctoral Award, \rdate{2016--2018}\\ Memorial University of Newfoundland and Labrador % \begin{itemize*} % \item[] \textit{Awarded to Ph.D. 
students based on academic excellence and research potential} % \end{itemize*} \item Graduated with Distinction, Imperial College London \rdate{2014} % \begin{itemize*} % \item[] \textit{Awarded to the top 5\% of all Master's students in the \date{2013-2014} cohort} % \end{itemize*} \item Erasmus-LLP Scholarship, University of Turin \rdate{2010} % \begin{itemize*} % \item[] \textit{Awarded to participants in the European Union's Erasmus student exchange program} % \end{itemize*} \end{itemize*} \bigskip % \section{Research Experience} \mhead{Research \newline Experience}% \textbf{Visiting Assistant in Research}, \emph{Schmitz Lab} \rdate{\textsc{feb.}-\textsc{apr.}~2019}\\ School of the Environment, Yale University, New Haven, CT \begin{itemize*} \item Collaborated on developing a mathematical metaecosystem model describing animal-mediated nutrient transport across ecosystem boundaries \item Reviewed metaecosystem, metapopulation, and animal movement literature \item Performed simulations of ecosystem persistence over time under different animal movement and nutrient transport scenarios \end{itemize*} \medskip \textbf{Research Assistant}, \emph{Pawar Lab} \rdate{\textsc{jan.}-\textsc{mar.}~2016}\\ Imperial College London-Silwood Park, Ascot, UK \begin{itemize*} \item Digitized terrestrial plant thermal response data from published studies \item Assisted with database management and cleanup, and with training of new team members % \item Part of a larger project aimed at analyzing the effects of temperature on the physiology of different plant phyla \end{itemize*} \medskip \textbf{Research Assistant}, \emph{Tsaobis Baboon Project} \rdate{\textsc{jun.}-\textsc{oct.}~2015}\\ Zoological Society of London, London, UK \begin{itemize*} % \item Three months placement, part of a long-term research project on the ecology of Chacma baboons (\textit{P.
ursinus}), in Tsaobis Leopard Nature Park (Namibia) \item Assisted with collection of behavioural and population data from two troops of wild, habituated Chacma baboons, and with plant phenology surveys % in the Tsaobis riverbed and nearby hills % \item Improved my knowledge of behavioural sampling techniques, primate and desert ecology, as well as of project management and of GPS equipment \end{itemize*} \medskip \textbf{Research Assistant}, \emph{Roe \& Red deer in Trentino \& Technology Project} \rdate{\textsc{jun.}-\textsc{sept.}~2013}\\ Trento, Italy \begin{itemize*} \item Studied the migratory behaviour of roe deer using radio-tracking techniques (GPS, VHF), plant phenology surveys, and pellet decay rate analyses \item Intensive fieldwork in a physically demanding environment, under harsh weather conditions % \item Worked with researchers from the Ungulate Ecology Lab of the University of Montana, as part of an international team coalescing a variety of different backgrounds and expertise % \item Greatly enhanced my radio-tracking, plant and pellet identification and organizational skills \end{itemize*} \medskip \textbf{Graduate Intern}, \emph{Piedmont Wolf Project} \rdate{\textsc{jun.}-\textsc{dec.}~2011}\\ Maritime Alps Nature Park, Entraque (CN), Italy \begin{itemize*} \item Designed and performed a study of the predator-prey interactions of wolves and chamois in the Maritime Alps Natural Park \item Managed all aspects of the project, from behavioural sampling to prey identification by hair analysis to scat specimen preparation for DNA analyses % \item Vastly enhanced my fieldwork, laboratory and time-management skills \end{itemize*} \medskip \textbf{Undergraduate Intern}, \emph{Ethology Lab} \rdate{\textsc{jun.}~2008-\textsc{mar.}~2009}\\ University of Turin, Turin, Italy \begin{itemize*} \item Bioacoustics study of the Indri lemur, testing for potential identification of lemurs from individuals' contributions to the family group song \end{itemize*} \bigskip % \section{Teaching Experience} \mhead{Teaching \newline Experience}% \textbf{Guest Lecturer}, \emph{Department of Biology}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada\\ \textit{Models in Biology} \rdate{\textsc{winter} 2020} % external \begin{itemize*} \item Lecture: Classical Models in Ecology \item Lecture: Models of Species Interactions \item Lecture: Meta-ecology Models \end{itemize*} \vspace{0.25em} \textbf{Teaching Assistant}, \emph{Department of Biology}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada\\ % BIOL~2600: Principles of Ecology \rdate{\textsc{fall} 2018} % internal \textit{Principles of Ecology} \rdate{\textsc{fall} 2018} % external \begin{itemize*} % \item Organized and run supporting activities for the course \item Managed the online learning section of the course, developing weekly new content introducing students to preeminent ecologists \item Provided administrative and academic support to students % \item Marked midterms and final exams \end{itemize*} \vspace{0.25em} % BIOL~1002: Principles of Biology \rdate{\textsc{winter} 2017} % internal \textit{Principles of Biology} \rdate{\textsc{winter} 2017} % external \begin{itemize*} % \item Demonstrator during practical laboratories \item Supported students during practical laboratories % and in preparing exams \item Invigilated midterm and final exams, marked weekly lab reports \end{itemize*} \smallskip \textbf{Undergraduate Teaching Assistant}, \emph{Department of Life Sciences}\\ Imperial College 
London, London, UK \\ \textit{Ecology} \rdate{\textsc{spring} 2015} % , 2\(^{nd}\) year course \begin{itemize*} % \item Demonstrator for the course's field trip to Silwood Park Campus \item Helped students collect data and plan analyses for final projects \end{itemize*} \vspace{0.25em} \textit{Behavioral Ecology} \rdate{\textsc{winter} 2015} % , 2\(^{nd}\) year course \begin{itemize*} % \item Demonstrator for the course's practical laboratories \item Assisted in laboratory setup for weekly practicals \item Helped students design analyses of data collected during practicals \end{itemize*} \vspace{0.25em} \textit{Introduction to Biological Statistics} \rdate{\textsc{fall} 2014} % , 2\(^{nd}\) year course \begin{itemize*} % \item Demonstrator during the course's lectures \item Helped students learn basic and advanced R % \item Facilitated Q\&A sessions ahead of final exams \end{itemize*} \smallskip \textbf{Graduate Teaching Assistant}, \emph{Department of Life Sciences}\\ Imperial College London-Silwood Park, Ascot, UK \\ \textit{Statistics} \rdate{\textsc{fall} 2014} \begin{itemize*} % \item Demonstrator during laboratory practicals \item Assisted students with R coding and in developing statistical analyses \end{itemize*} % \vspace{0.25em} % \newpage \textit{Macroecology and Climate Change} \rdate{\textsc{fall} 2014} \begin{itemize*} % \item Demonstrator for practical lectures \item Assisted students using the species distribution software Maxent \end{itemize*} \bigskip % \section{Service} \mhead{Service} Manuscript reviewer for \emph{Ecology and Evolution} and \emph{Science of the Total Environment}. \bigskip % \section{Professional Development} \mhead{Professional\\ Development} % \par\vspace{-\baselineskip} \emph{Courses} \begin{itemize*} \item \textbf{Becoming a More Equitable Educator: Mindsets and Practices}, \rdate{\textsc{mar.-jun.}~2020}\\ \emph{MIT Teaching Systems Lab}\\ Delivered online through \href{https://www.edx.org}{edX.org} \item \textbf{Teaching Skills Enhancement Program}, \rdate{\textsc{sept.-aug.}~2020}\\ \emph{Centre for Innovation in Teaching and Learning}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \end{itemize*} \medskip % \newpage \emph{Workshops} \begin{itemize*} \item \textbf{Reproducible Research through Open Science}, \rdate{11~\textsc{june}~2020}\\ \emph{Canadian Institute for Ecology and Evolution \& NSERC-CREATE ``Living Data Project''}\\ Attended online through \href{https://osf.io/p7r5d/}{osf.io} \item \textbf{Teaching Inclusively \& Equitably Online}, \rdate{21~\textsc{may}~2020}\\ \emph{American Society for Engineering Education \& NSF INCLUDES Aspire Alliance}\\ Attended online through Blackboard Collaborate Ultra \item \textbf{H5P Maker Session}, \rdate{24~\textsc{oct.}~2019}\\ \emph{Centre for Innovation in Teaching and Learning}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \item \textbf{Community of Inquiry Coffee Break: Open Access}, \rdate{23~\textsc{oct.}~2019}\\ \emph{Centre for Innovation in Teaching and Learning}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \item \textbf{Open Access and Scholarly Publishing}, \rdate{22~\textsc{oct.}~2019}\\ \emph{Centre for Innovation in Teaching and Learning}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \item \textbf{Four Things to Consider for Graduate Student Teaching}, \rdate{8~\textsc{nov.}~2018}\\ \emph{Enhanced Development of the Graduate Experience}\\ Memorial University of
Newfoundland and Labrador, St.\ John's, NL, Canada \item \textbf{AARMS CRG Software Carpentry Workshop}, \rdate{27~\textsc{may}~2017}\\ \emph{The Carpentries}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \end{itemize*} \bigskip % \section{Volunteer Experience} \mhead{Volunteer\\ Experience}% \textbf{Communications Officer}, \emph{Biology Graduate Student Association} \rdate{2019--2020}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \begin{itemize*} \item Managed the Association's weekly newsletter, website, and all of its social media presence % \item Coordinated on- and off-campus advertisement efforts for the Association's activities \end{itemize*} \smallskip \textbf{Chair}, \emph{Biology Graduate Student Association} \rdate{2018--2019}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \begin{itemize*} % \item Collaborated with the Head of the Department of Biology's Office to involve graduate students in administrative tasks \item Designed, funded, and ran ``The Balanced Student: stress and work management for graduate students'', a workshop series on graduate students' mental health and wellness \item Hosted a Student-Supervisor Relationship workshop during the Graduate Core Seminar course for new graduate students \item Coordinated the activities of the Executive Committee, focusing on fundraising and community outreach \end{itemize*} \smallskip \textbf{Seminar Coordinator}, \emph{Biology Graduate Student Association} \rdate{2017--2018}\\ Memorial University of Newfoundland and Labrador, St.\ John's, NL, Canada \begin{itemize*} \item Organized and ran the weekly Seminar series for the Department of Biology, in a team of three coordinators % \item Approached, scheduled and managed potential speakers, from both within and outside the Department and the Province \item Helped in organizing the annual Symposium showcasing the research produced by the Department's graduate students \item Contributed to the Association's fundraising and outreach activities \end{itemize*} \smallskip \textbf{Youth Educator}, \emph{OASI-Operazione Mato Grosso} \rdate{2000--2013}\\ Turin, Italy \begin{itemize*} \item Volunteer work with children and teenagers in primary, secondary and high school, in Turin, Italy, and at the Hospital S{\~a}o Juli{\~a}o in Campo Grande, MS, Brazil % \item Taught me the value of hard work and developed my public relations, oratory and teamwork skills \end{itemize*} \smallskip \textbf{Team Leader}, \emph{XX Winter Olympic and IX Paralympic Games} \rdate{2005--2006}\\ Turin, Italy \begin{itemize*} \item Organized and supervised daily activities of a team of 10 volunteers in the Protocol Crew, International Relations and Services % \item Provided initial containment of emergencies, customer assistance, and ensured my team’s welfare % \item Greatly developed my leadership and problem solving skills \end{itemize*} % \bigskip % \mhead{Professional\\ Experience}% % \textbf{Assistant Surveyor}, \emph{Catherine Bickmore Associates Ltd.} \rdate{\textsc{may}-\textsc{jun.}~2015}\\ % London, UK % \begin{itemize*} % \item Involved in reptile and amphibian censuses in Kent, Wiltshire and Cambridgeshire % % \item Expanded my knowledge of amphibian and reptile ecology, of ecological survey techniques and of the mechanisms of environmental consultancy % \end{itemize*} \bigskip % \section{Professional Affiliations} \mhead{Professional\\ Affiliations}% \par\vspace{-\baselineskip}\begin{itemize} \item[]
Ecological Society of America \rdate{2021--2022} \item[] Canadian Society of Ecology and Evolution \rdate{2016--2021} \end{itemize} \bigskip % \section{Conferences Attended} \mhead{Conferences\\ Attended} \par\vspace{-\baselineskip}\begin{itemize*} \item \emph{``Unifying Ecology Across Scales''} \rdate{21--27 \textsc{jul.} 2018}\\ \emph{Gordon Research Seminar and Conference}\\ Biddeford, ME, USA \item \emph{Canadian Society for Ecology and Evolution Annual General Meeting} \rdate{18--21 \textsc{jul.} 2018}\\Guelph, ON, Canada \item \emph{Canadian Society for Ecology and Evolution Annual General Meeting} \rdate{7--11 \textsc{jul.} 2016} \\St.\ John's, NL, Canada \item \emph{From Energetics to Macro Ecology:} \rdate{14--15 \textsc{nov.} 2013}\\ \textit{Carnivore Responses to Environmental Change}\\Zoological Society of London, London, UK \item \emph{Euroscience Open Forum} \rdate{2--7 \textsc{jul.} 2010}\\Turin, Italy \item \emph{XIX Congress of the Italian Primatological Society} \rdate{1--3 \textsc{apr.} 2009}\\ Asti, Italy \end{itemize*} \bigskip % \section{Software Proficiency} \mhead{Software\\ Proficiency}% % \emph{Working knowledge}\newline%—comfortable using in a professional setting R, RStudio, Mathematica, \LaTeX{}, Markdown, Git, Unix shell, Atom, Inkscape, Quantum GIS, ArcGIS, Python (basic knowledge) % Adobe Creative Suite, % MacOS~X, % Microsoft Office, % Microsoft Windows % % \smallskip % \emph{Basic knowledge}\newline%—would require some training/use to refresh my memory % File Maker Pro Advanced, % Gimp, % Pixelmator \bigskip % \section{Certificates} \mhead{Certificates} \par\vspace{-\baselineskip}\begin{itemize*} \item Wilderness and Remote First Aid \rdate{2016--2019}\\ Canadian Red Cross, St.\ John's, NL, Canada \item WHMIS and Lab Safety \rdate{2017}\\ Memorial University of Newfoundland, St.\ John's, NL, Canada \item Basic Outdoor First Aid \rdate{2014}\\ Marlin Training Ltd., London, UK \item International English Language Testing System, grade 8.5 \rdate{2013}\\ British Council,Turin, Italy \item International Computer Driving Licence \rdate{2007}\\ University of Turin,Turin, Italy \end{itemize*} \bigskip % \section{Interests} \mhead{Interests}% Backpacking, Hiking, Photography, Yoga % , % T'ai Chi Quan, % Shaolin Kung Fu % % \newpage % \bigskip % % \section{References} % \mhead{References}% % \begin{minipage}[t]{.33\textwidth} % \raggedright % \textbf{Dr. Shawn J. Leroux}\\ % Associate Professor,\\ % Department of Biology\\ % Memorial University of Newfoundland\\ % St. John's, NL Canada\\ % % \noindent \href{mailto:[email protected]}{[email protected]}\\ % tel: +1 (709) 864 3042\\ % \end{minipage}% % % \begin{minipage}[t]{.33\textwidth} % % \raggedright % % \textbf{Dr. Yolanda F. Wiersma}\\ % % Professor,\\ % % Department of Biology\\ % % Memorial University of Newfoundland\\ % % St. John's, NL Canada\\ % % % % \noindent \href{mailto:[email protected]}{[email protected]}\\ % % tel: +1 (709) 864 7499 % % \end{minipage}% % \begin{minipage}[t]{.33\textwidth} % \raggedright % \textbf{Dr. Chelsea Little}\\ % Assistant Professor,\\ % School of Environmental Science\\ % Simon Fraser University\\ % Burnacy, BC Canada\\ % % \noindent \href{mailto:[email protected]}{chelsea\[email protected]}\\ % tel: +1 (236) 978 0266 % \end{minipage}% % \begin{minipage}[t]{.33\textwidth} % \raggedright % \textbf{Dr. 
Samraat Pawar}\\ % Reader,\\ % Department of Life Sciences\\ % Imperial College London\\ % London, UK % % \noindent \href{mailto:[email protected]}{[email protected]}\\ % tel: +44 (0) 2075 942 213 % \end{minipage} \end{document}
{ "alphanum_fraction": 0.7312739693, "avg_line_length": 49.4459259259, "ext": "tex", "hexsha": "2ccd55b6f8d5ff9265de38bd7d78cc439e9db23a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f9f98efc8b51e817e5c62a68730249269cc56363", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "matteorizzuto/mr-vita", "max_forks_repo_path": "mr-vita.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f9f98efc8b51e817e5c62a68730249269cc56363", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "matteorizzuto/mr-vita", "max_issues_repo_path": "mr-vita.tex", "max_line_length": 422, "max_stars_count": null, "max_stars_repo_head_hexsha": "f9f98efc8b51e817e5c62a68730249269cc56363", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "matteorizzuto/mr-vita", "max_stars_repo_path": "mr-vita.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10644, "size": 33376 }
\section{Field Saver Field Names for Common Particle Types and Interaction Groups}\label{table:field_names} \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \cline{3-4} \multicolumn{2}{c|}{} & \textbf{Scalar Field Name} & \textbf{Vector Field Name}\\ \hline % Start Particle Types & & id & \\ & & tag & \\ & & radius & \\ & & v\_abs & \raisebox{1.25 ex}[0 pt]{displacement}\\ & \raisebox{1.25 ex}[0 pt]{NRotSphere} & sigma\_xx\_2d & \raisebox{1.25 ex}[0 pt]{position}\\ \textbf{Particles} & \raisebox{1.25 ex}[0 pt]{RotSphere} & sigma\_xy\_2d & \raisebox{1.25 ex}[0 pt]{force}\\ & & sigma\_yy\_2d & \raisebox{1.25 ex}[0 pt]{velocity}\\ & & sigma\_d & \\ & & e\_kin & \\ \cline{2-4} & & e\_kin\_linear & \\ & \raisebox{1.25 ex}[0 pt]{RotSphere} & e\_kin\_rot & \raisebox{1.25 ex}[0 pt]{ang\_velocity}\\ \hline\hline % End Particle Types % Start Wall Types & & & Position\\ \raisebox{1.25 ex}[0 pt]{\textbf{Walls}} & \raisebox{1.25 ex}[0 pt]{NRotElasticWall} & & Force\\ \hline\hline % End Wall Types % Start Interaction Types & Damping & & \\ & RotDamping & \raisebox{1.25 ex}[0 pt]{dissipated\_energy} & \raisebox{1.25 ex}[0 pt]{force}\\ \cline{2-4} & NRotElastic & & \\ & RotElastic & \raisebox{1.25 ex}[0 pt]{count} & force\\ & RotThermalElastic & \raisebox{1.25 ex}[0 pt]{potential\_energy} & \\ \cline{2-4} & & count & \\ & & sticking & \\ & \raisebox{1.25 ex}[0 pt]{NRotFriction} & slipping & force\\ & \raisebox{1.25 ex}[0 pt]{RotFriction} & force\_deficit & normal\_force\\ \textbf{Interactions} & \raisebox{1.25 ex}[0 pt]{RotThermalFriction} & dissipated\_energy & \\ & & potential\_energy & \\ \cline{2-4} & & & frictional\_force\\ & \raisebox{1.25 ex}[0 pt]{RotFriction} & & tangential\_force\\ \cline{2-4} & NRotBond & strain & \\ \cline{2-4} & NRotBond & count & \\ & RotBond & breaking\_criterion & force\\ & RotThermalBond & potential\_energy & \\ \cline{2-4} & & e\_pot\_normal & \\ & RotBond & e\_pot\_shear & \\ & RotThermalBond & e\_pot\_twist & \\ & & e\_pot\_bend & \\ \cline{2-4} & & & normal\_force\\ & \raisebox{1.25 ex}[0 pt]{RotBond} & & tangential\_force\\ \hline \end{tabular} \end{center} \end{table}
{ "alphanum_fraction": 0.6035164835, "avg_line_length": 36.1111111111, "ext": "tex", "hexsha": "78c69a88fe5d3289de46fe6af7e42ddcdd36c14a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "danielfrascarelli/esys-particle", "max_forks_repo_path": "Doc/Tutorial/tables/field_names.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "danielfrascarelli/esys-particle", "max_issues_repo_path": "Doc/Tutorial/tables/field_names.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "danielfrascarelli/esys-particle", "max_stars_repo_path": "Doc/Tutorial/tables/field_names.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 897, "size": 2275 }
% Copyright (C) 2001-2006 Python Software Foundation % Author: [email protected] (Barry Warsaw) \section{\module{email} --- An email and MIME handling package} \declaremodule{standard}{email} \modulesynopsis{Package supporting the parsing, manipulating, and generating email messages, including MIME documents.} \moduleauthor{Barry A. Warsaw}{[email protected]} \sectionauthor{Barry A. Warsaw}{[email protected]} \versionadded{2.2} The \module{email} package is a library for managing email messages, including MIME and other \rfc{2822}-based message documents. It subsumes most of the functionality in several older standard modules such as \refmodule{rfc822}, \refmodule{mimetools}, \refmodule{multifile}, and other non-standard packages such as \module{mimecntl}. It is specifically \emph{not} designed to do any sending of email messages to SMTP (\rfc{2821}), NNTP, or other servers; those are functions of modules such as \refmodule{smtplib} and \refmodule{nntplib}. The \module{email} package attempts to be as RFC-compliant as possible, supporting in addition to \rfc{2822}, such MIME-related RFCs as \rfc{2045}, \rfc{2046}, \rfc{2047}, and \rfc{2231}. The primary distinguishing feature of the \module{email} package is that it splits the parsing and generating of email messages from the internal \emph{object model} representation of email. Applications using the \module{email} package deal primarily with objects; you can add sub-objects to messages, remove sub-objects from messages, completely re-arrange the contents, etc. There is a separate parser and a separate generator which handles the transformation from flat text to the object model, and then back to flat text again. There are also handy subclasses for some common MIME object types, and a few miscellaneous utilities that help with such common tasks as extracting and parsing message field values, creating RFC-compliant dates, etc. The following sections describe the functionality of the \module{email} package. The ordering follows a progression that should be common in applications: an email message is read as flat text from a file or other source, the text is parsed to produce the object structure of the email message, this structure is manipulated, and finally, the object tree is rendered back into flat text. It is perfectly feasible to create the object structure out of whole cloth --- i.e. completely from scratch. From there, a similar progression can be taken as above. Also included are detailed specifications of all the classes and modules that the \module{email} package provides, the exception classes you might encounter while using the \module{email} package, some auxiliary utilities, and a few examples. For users of the older \module{mimelib} package, or previous versions of the \module{email} package, a section on differences and porting is provided. 
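As a minimal illustration of this flat text $\to$ object model $\to$ flat text progression, the following sketch (the message text here is made up for the example) parses a string into a \class{Message} object, inspects it, and renders it back out:

\begin{verbatim}
import email

text = """\
From: [email protected]
To: [email protected]
Subject: Hello

This is the body of the message.
"""

# flat text -> object model
msg = email.message_from_string(text)
print msg['Subject']        # 'Hello'
print msg.is_multipart()    # False
print msg.get_payload()     # the body text

# object model -> flat text again
print msg.as_string()
\end{verbatim}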
\begin{seealso} \seemodule{smtplib}{SMTP protocol client} \seemodule{nntplib}{NNTP protocol client} \end{seealso} \subsection{Representing an email message} \input{emailmessage} \subsection{Parsing email messages} \input{emailparser} \subsection{Generating MIME documents} \input{emailgenerator} \subsection{Creating email and MIME objects from scratch} \input{emailmimebase} \subsection{Internationalized headers} \input{emailheaders} \subsection{Representing character sets} \input{emailcharsets} \subsection{Encoders} \input{emailencoders} \subsection{Exception and Defect classes} \input{emailexc} \subsection{Miscellaneous utilities} \input{emailutil} \subsection{Iterators} \input{emailiter} \subsection{Package History\label{email-pkg-history}} This table describes the release history of the email package, corresponding to the version of Python that the package was released with. For purposes of this document, when you see a note about change or added versions, these refer to the Python version the change was made in, \emph{not} the email package version. This table also describes the Python compatibility of each version of the package. \begin{tableiii}{l|l|l}{constant}{email version}{distributed with}{compatible with} \lineiii{1.x}{Python 2.2.0 to Python 2.2.1}{\emph{no longer supported}} \lineiii{2.5}{Python 2.2.2+ and Python 2.3}{Python 2.1 to 2.5} \lineiii{3.0}{Python 2.4}{Python 2.3 to 2.5} \lineiii{4.0}{Python 2.5}{Python 2.3 to 2.5} \end{tableiii} Here are the major differences between \module{email} version 4 and version 3: \begin{itemize} \item All modules have been renamed according to \pep{8} standards. For example, the version 3 module \module{email.Message} was renamed to \module{email.message} in version 4. \item A new subpackage \module{email.mime} was added and all the version 3 \module{email.MIME*} modules were renamed and situated into the \module{email.mime} subpackage. For example, the version 3 module \module{email.MIMEText} was renamed to \module{email.mime.text}. \emph{Note that the version 3 names will continue to work until Python 2.6}. \item The \module{email.mime.application} module was added, which contains the \class{MIMEApplication} class. \item Methods that were deprecated in version 3 have been removed. These include \method{Generator.__call__()}, \method{Message.get_type()}, \method{Message.get_main_type()}, \method{Message.get_subtype()}. \item Fixes have been added for \rfc{2231} support which can change some of the return types for \function{Message.get_param()} and friends. Under some circumstances, values which used to return a 3-tuple now return simple strings (specifically, if all extended parameter segments were unencoded, there is no language and charset designation expected, so the return type is now a simple string). Also, \%-decoding used to be done for both encoded and unencoded segments; this decoding is now done only for encoded segments. \end{itemize} Here are the major differences between \module{email} version 3 and version 2: \begin{itemize} \item The \class{FeedParser} class was introduced, and the \class{Parser} class was implemented in terms of the \class{FeedParser}. All parsing therefore is non-strict, and parsing will make a best effort never to raise an exception. Problems found while parsing messages are stored in the message's \var{defect} attribute. \item All aspects of the API which raised \exception{DeprecationWarning}s in version 2 have been removed. 
These include the \var{_encoder} argument to the \class{MIMEText} constructor, the \method{Message.add_payload()} method, the \function{Utils.dump_address_pair()} function, and the functions \function{Utils.decode()} and \function{Utils.encode()}. \item New \exception{DeprecationWarning}s have been added to: \method{Generator.__call__()}, \method{Message.get_type()}, \method{Message.get_main_type()}, \method{Message.get_subtype()}, and the \var{strict} argument to the \class{Parser} class. These are expected to be removed in future versions. \item Support for Pythons earlier than 2.3 has been removed. \end{itemize} Here are the differences between \module{email} version 2 and version 1: \begin{itemize} \item The \module{email.Header} and \module{email.Charset} modules have been added. \item The pickle format for \class{Message} instances has changed. Since this was never (and still isn't) formally defined, this isn't considered a backward incompatibility. However if your application pickles and unpickles \class{Message} instances, be aware that in \module{email} version 2, \class{Message} instances now have private variables \var{_charset} and \var{_default_type}. \item Several methods in the \class{Message} class have been deprecated, or their signatures changed. Also, many new methods have been added. See the documentation for the \class{Message} class for details. The changes should be completely backward compatible. \item The object structure has changed in the face of \mimetype{message/rfc822} content types. In \module{email} version 1, such a type would be represented by a scalar payload, i.e. the container message's \method{is_multipart()} returned false, \method{get_payload()} was not a list object, but a single \class{Message} instance. This structure was inconsistent with the rest of the package, so the object representation for \mimetype{message/rfc822} content types was changed. In \module{email} version 2, the container \emph{does} return \code{True} from \method{is_multipart()}, and \method{get_payload()} returns a list containing a single \class{Message} item. Note that this is one place that backward compatibility could not be completely maintained. However, if you're already testing the return type of \method{get_payload()}, you should be fine. You just need to make sure your code doesn't do a \method{set_payload()} with a \class{Message} instance on a container with a content type of \mimetype{message/rfc822}. \item The \class{Parser} constructor's \var{strict} argument was added, and its \method{parse()} and \method{parsestr()} methods grew a \var{headersonly} argument. The \var{strict} flag was also added to functions \function{email.message_from_file()} and \function{email.message_from_string()}. \item \method{Generator.__call__()} is deprecated; use \method{Generator.flatten()} instead. The \class{Generator} class has also grown the \method{clone()} method. \item The \class{DecodedGenerator} class in the \module{email.Generator} module was added. \item The intermediate base classes \class{MIMENonMultipart} and \class{MIMEMultipart} have been added, and interposed in the class hierarchy for most of the other MIME-related derived classes. \item The \var{_encoder} argument to the \class{MIMEText} constructor has been deprecated. Encoding now happens implicitly based on the \var{_charset} argument. \item The following functions in the \module{email.Utils} module have been deprecated: \function{dump_address_pairs()}, \function{decode()}, and \function{encode()}. 
The following functions have been added to the module: \function{make_msgid()}, \function{decode_rfc2231()}, \function{encode_rfc2231()}, and \function{decode_params()}. \item The non-public function \function{email.Iterators._structure()} was added. \end{itemize} \subsection{Differences from \module{mimelib}} The \module{email} package was originally prototyped as a separate library called \ulink{\module{mimelib}}{http://mimelib.sf.net/}. Changes have been made so that method names are more consistent, and some methods or modules have either been added or removed. The semantics of some of the methods have also changed. For the most part, any functionality available in \module{mimelib} is still available in the \refmodule{email} package, albeit often in a different way. Backward compatibility between the \module{mimelib} package and the \module{email} package was not a priority. Here is a brief description of the differences between the \module{mimelib} and the \refmodule{email} packages, along with hints on how to port your applications. Of course, the most visible difference between the two packages is that the package name has been changed to \refmodule{email}. In addition, the top-level package has the following differences: \begin{itemize} \item \function{messageFromString()} has been renamed to \function{message_from_string()}. \item \function{messageFromFile()} has been renamed to \function{message_from_file()}. \end{itemize} The \class{Message} class has the following differences: \begin{itemize} \item The method \method{asString()} was renamed to \method{as_string()}. \item The method \method{ismultipart()} was renamed to \method{is_multipart()}. \item The \method{get_payload()} method has grown a \var{decode} optional argument. \item The method \method{getall()} was renamed to \method{get_all()}. \item The method \method{addheader()} was renamed to \method{add_header()}. \item The method \method{gettype()} was renamed to \method{get_type()}. \item The method \method{getmaintype()} was renamed to \method{get_main_type()}. \item The method \method{getsubtype()} was renamed to \method{get_subtype()}. \item The method \method{getparams()} was renamed to \method{get_params()}. Also, whereas \method{getparams()} returned a list of strings, \method{get_params()} returns a list of 2-tuples, effectively the key/value pairs of the parameters, split on the \character{=} sign. \item The method \method{getparam()} was renamed to \method{get_param()}. \item The method \method{getcharsets()} was renamed to \method{get_charsets()}. \item The method \method{getfilename()} was renamed to \method{get_filename()}. \item The method \method{getboundary()} was renamed to \method{get_boundary()}. \item The method \method{setboundary()} was renamed to \method{set_boundary()}. \item The method \method{getdecodedpayload()} was removed. To get similar functionality, pass the value 1 to the \var{decode} flag of the {get_payload()} method. \item The method \method{getpayloadastext()} was removed. Similar functionality is supported by the \class{DecodedGenerator} class in the \refmodule{email.generator} module. \item The method \method{getbodyastext()} was removed. You can get similar functionality by creating an iterator with \function{typed_subpart_iterator()} in the \refmodule{email.iterators} module. \end{itemize} The \class{Parser} class has no differences in its public interface. 
It does have some additional smarts to recognize \mimetype{message/delivery-status} type messages, which it represents as a \class{Message} instance containing separate \class{Message} subparts for each header block in the delivery status notification\footnote{Delivery Status Notifications (DSN) are defined in \rfc{1894}.}. The \class{Generator} class has no differences in its public interface. There is a new class in the \refmodule{email.generator} module though, called \class{DecodedGenerator} which provides most of the functionality previously available in the \method{Message.getpayloadastext()} method. The following modules and classes have been changed: \begin{itemize} \item The \class{MIMEBase} class constructor arguments \var{_major} and \var{_minor} have changed to \var{_maintype} and \var{_subtype} respectively. \item The \code{Image} class/module has been renamed to \code{MIMEImage}. The \var{_minor} argument has been renamed to \var{_subtype}. \item The \code{Text} class/module has been renamed to \code{MIMEText}. The \var{_minor} argument has been renamed to \var{_subtype}. \item The \code{MessageRFC822} class/module has been renamed to \code{MIMEMessage}. Note that an earlier version of \module{mimelib} called this class/module \code{RFC822}, but that clashed with the Python standard library module \refmodule{rfc822} on some case-insensitive file systems. Also, the \class{MIMEMessage} class now represents any kind of MIME message with main type \mimetype{message}. It takes an optional argument \var{_subtype} which is used to set the MIME subtype. \var{_subtype} defaults to \mimetype{rfc822}. \end{itemize} \module{mimelib} provided some utility functions in its \module{address} and \module{date} modules. All of these functions have been moved to the \refmodule{email.utils} module. The \code{MsgReader} class/module has been removed. Its functionality is most closely supported in the \function{body_line_iterator()} function in the \refmodule{email.iterators} module. \subsection{Examples} Here are a few examples of how to use the \module{email} package to read, write, and send simple email messages, as well as more complex MIME messages. First, let's see how to create and send a simple text message: \verbatiminput{email-simple.py} Here's an example of how to send a MIME message containing a bunch of family pictures that may be residing in a directory: \verbatiminput{email-mime.py} Here's an example of how to send the entire contents of a directory as an email message: \footnote{Thanks to Matthew Dixon Cowles for the original inspiration and examples.} \verbatiminput{email-dir.py} And finally, here's an example of how to unpack a MIME message like the one above, into a directory of files: \verbatiminput{email-unpack.py}
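Along the same lines, here is a minimal inline sketch (not one of the example scripts distributed with this documentation; the file name is made up) that parses a message from a file and walks its parts, printing the content type and decoded payload size of each non-container part:

\begin{verbatim}
import email

fp = open('message.eml')          # any RFC 2822 message on disk
try:
    msg = email.message_from_file(fp)
finally:
    fp.close()

for part in msg.walk():
    # multipart/* containers carry no payload of their own
    if part.is_multipart():
        continue
    payload = part.get_payload(decode=True) or ''
    print part.get_content_type(), len(payload)
\end{verbatim}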
{ "alphanum_fraction": 0.7491132655, "avg_line_length": 41.9751861042, "ext": "tex", "hexsha": "82f0c15c73d6cde19e5f69bb7830aff9f2fc8543", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-07-18T21:33:17.000Z", "max_forks_repo_forks_event_min_datetime": "2017-01-30T21:52:13.000Z", "max_forks_repo_head_hexsha": "d5dbcd8556f1e45094bd383b50727e248d9de1bf", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "deadsnakes/python2.5", "max_forks_repo_path": "Doc/lib/email.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5dbcd8556f1e45094bd383b50727e248d9de1bf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "deadsnakes/python2.5", "max_issues_repo_path": "Doc/lib/email.tex", "max_line_length": 83, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d5dbcd8556f1e45094bd383b50727e248d9de1bf", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "deadsnakes/python2.5", "max_stars_repo_path": "Doc/lib/email.tex", "max_stars_repo_stars_event_max_datetime": "2015-10-23T02:57:29.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-23T02:57:29.000Z", "num_tokens": 4201, "size": 16916 }
\section{The M\"obius Group} We want to study $f:\mathbb C\to\mathbb C$ in the form $$f(x)=\frac{ax+b}{cx+d},a,b,c,d\in\mathbb C$$ This function has a pole at $x=-d/c$, so we need an element at infinity. We can take $\mathbb C_\infty:=\mathbb C\cup\{\infty\}$ by the stereographic projection $\mathbb C_{\infty}=\mathbb C\cup\{\infty\}\cong S^2$. Now we define the Mobius map properly \begin{definition} The Mobius map $f:\mathbb C_\infty\to\mathbb C_\infty$ is defined by $$ f(z)= \begin{cases} \frac{az+b}{cz+d}\text{, if $z\neq\infty$ and $z\neq -d/c$}\\ \infty\text{, if $z=-d/c$}\\ \frac{a}{c}\text{, if $z=\infty$} \end{cases} $$ where $ad-bc\neq 0$. \end{definition} The reason why we impose the last condiciton is that we want the Mobius map to be a bijection from $\mathbb C_\infty$ to $\mathbb C_\infty$. \begin{proposition} Let $\mathcal M=\{f:\mathbb C_\infty\to\mathbb C_\infty:f\text{ is a Mobius function.}\}$. Then $(\mathcal M,\circ,\mathrm{id})$ is a group. \end{proposition} \begin{proof} Obviously $\mathrm{id}\in\mathcal M$. Note also that if $g(z)=(az+b)/(cz+d), g'(z)=(a'z+b')/(c'z+d')$, then $g'(g(z))=(a''z+b'')/(c''z+d'')$ where $$ \begin{pmatrix} a''&b''\\ c''&d'' \end{pmatrix} = \begin{pmatrix} a'&b'\\ c'&d' \end{pmatrix} \begin{pmatrix} a&b\\ c&d \end{pmatrix} $$ Then it immediately tells us that $\mathcal M$ is closed under $\circ$ since the determinant function is multiplicative. Note as well that it also gives us the inverse by just finding some $a',b',c',d'$ (which exists due to our criterion on determinant) such that $$ \begin{pmatrix} a'&b'\\ c'&d' \end{pmatrix} = \begin{pmatrix} a&b\\ c&d \end{pmatrix}^{-1} $$ by noticing that the identity function corresponds to $cI,c\neq 0$. \end{proof} We can have $\mathcal M$ to act on $\mathbb C^\infty$ faithfully (with trivial kernel), so $\mathcal M\le\operatorname{Sym}\mathbb C_\infty$. Now consider the Mobius transformation $f(z)=1/(z-a)$, which sends $a$ to $\infty$ and its inverse that sends $\infty$ to $a$, so there is nothing special with $\infty$ in $\mathbb C^\infty$, as one will expect as there is no special point on $S^2$. \begin{proposition}\label{decomp_mobius} Every Mobius tranformation is a composition of $z\mapsto az,a\neq 0$, $z\mapsto z+b$ and $z\mapsto 1/z$. \end{proposition} \begin{proof} Let $z\mapsto (az+b)/(cz+d)$ be a mobius transformation, then if $c=0$ the proposition is trivial. Otherwise $c\neq 0$, then we have $$\frac{az+b}{cz+d}=\frac{a}{c}-\frac{ad-bc}{c(cz+d)}$$ which can obviously be obtained from the said functions. \end{proof} Now, how about fixed point of a Mobius transformation? We know that a Mobius transformation fixes at least $1$ point, but how about more? \begin{proposition} A Mobius transformation fixes $3$ points is the identity. \end{proposition} \begin{proof} Suppose $$f:z\mapsto\frac{az+b}{cz+d}$$ If $\infty$ is a fixed point, then $c=0$, so $f$ is a linear function. But then a linear function that fixes $2$ (non-infinity) points is the identity (since $f(x)-x$ is linear and a linear function has exactly $1$ root unless it is constantly $0$), $f$ is the identity.\\ Now if $\infty$ is not a fixed point, then $$f(z)=z\iff \frac{az+b}{cz+d}-z=0\iff az+b-z(cz+d)=0$$ which has at most $2$ roots (hence fixed point) since it is quadratic, unless it is the zero function, which essentially means that $c=b=0,d=a\neq 0\implies f=\mathrm{id}$. 
\end{proof} \begin{proposition}\label{mobius_3pts} Given distinct $z_1,z_2,z_3\in\mathbb C_\infty$ and distinct $w_1,w_2,w_3\in\mathbb C_\infty$, there is a unique Mobius transformation $f$ such that $f(z_i)=w_i$ for $i\in\{1,2,3\}$. \end{proposition} Note that the distinctness of the $w_i$'s is necessary, since every Mobius transformation is a bijection (hence injective). \begin{proof} For existence, it suffices to deal with the case where $w_1,w_2,w_3$ are $0,1,\infty$, since once we've found maps $f,g$ such that $f:z_1,z_2,z_3\mapsto 0,1,\infty$ and $g:w_1,w_2,w_3\mapsto 0,1,\infty$, then $g^{-1}\circ f$ will send $z_1,z_2,z_3$ to $w_1,w_2,w_3$.\\ Now if none of the $z_i$'s is $\infty$, we can take $$f(z)=\frac{(z-z_1)(z_2-z_3)}{(z-z_3)(z_2-z_1)}$$ which sends $z_1,z_2,z_3$ to $0,1,\infty$. Otherwise, suppose $z_i=\infty$; then the map $f_i$ suffices, where $$f_1(z)=\frac{z_2-z_3}{z-z_3},\ f_2(z)=\frac{z-z_1}{z-z_3},\ f_3(z)=\frac{z-z_1}{z_2-z_1}$$ For uniqueness, suppose $f,f'$ send $z_1,z_2,z_3$ to $w_1,w_2,w_3$ respectively, then $f^{-1}\circ f'$ fixes $z_1,z_2,z_3$, hence $f^{-1}\circ f'=\mathrm{id}\implies f=f'$. \end{proof} If $f,g\in\mathcal M$ and $f$ fixes $z_0$, then $gfg^{-1}$ fixes $g(z_0)$, which gives rise to the following observation. \begin{theorem} %There are three conjugacy classes of $\mathcal M$, namely the identity alone, the functions with exactly one fixed point, and the functions with exactly $2$ fixed points. All members of a conjugacy class of $\mathcal M$ have the same number of fixed points. \end{theorem} \begin{proof} Obviously the identity forms a conjugacy class by itself. Now any nonidentity $f$ has either $1$ or $2$ fixed points.\\ If $f$ has exactly $1$ fixed point $z_0$, take $g(z)=1/(z-z_0)$ if $z_0\neq\infty$ and $g=\mathrm{id}$ if $z_0=\infty$; then $gfg^{-1}$ fixes $\infty$, and it cannot fix any other point, because applying $g^{-1}$ to such a point would produce another fixed point of $f$. So it has to be a map $z\mapsto z+b$ with $b\neq 0$.\\ If $f$ has $2$ fixed points, then we consider a Mobius transformation $g$ which sends the fixed points to $0,\infty$; then $gfg^{-1}$ fixes $0$ and $\infty$, so it is of the form $z\mapsto az$ with $a\neq 0$, and $a\neq 1$ since it is not the identity. \end{proof} Note that $(g^{-1}fg)^n=g^{-1}f^ng$. This allows us to compute arbitrary (integral) powers of a Mobius transformation. \begin{definition} A circle in the extended complex numbers is the solution set of an equation $Az\bar z+\bar Bz+B\bar z+C=0$ with $A,C\in\mathbb R,B\in\mathbb C$. We consider $\infty$ to be a point on this circle if and only if $A=0$. \end{definition} Note that circles in $\mathbb C$ are also circles in $\mathbb C_\infty$, and all other $\mathbb C_\infty$ circles are lines in $\mathbb C$ (together with the point $\infty$). \begin{proposition} Circles in $\mathbb C_\infty$ are mapped to circles in $\mathbb C_\infty$ under any Mobius transformation. \end{proposition} \begin{proof} It is sufficient to verify this for $z\mapsto az,z\mapsto z+b,z\mapsto z^{-1}$ due to Proposition \ref{decomp_mobius}. It is then trivial. \end{proof} Note that every circle gets mapped to any other circle by some Mobius transformation, since three points determine a circle and we have Proposition \ref{mobius_3pts}. \begin{definition} For extended complex numbers $z_1,z_2,z_3,z_4$, the cross ratio is defined by $$[z_1,z_2,z_3,z_4]=\frac{(z_4-z_1)(z_2-z_3)}{(z_2-z_1)(z_4-z_3)}$$ \end{definition} We need to examine carefully when one of these numbers is infinity. For example, if we have $z_1=\infty$, then the cross ratio is $(z_2-z_3)/(z_4-z_3)$.
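As a quick worked check of these conventions, take $(z_1,z_2,z_3)=(0,1,\infty)$: the two terms involving $z_3$ cancel and $$[0,1,\infty,z_4]=\frac{z_4-0}{1-0}=z_4,$$ while taking $z_1=\infty$ with $(z_2,z_3)=(0,1)$ gives $$[\infty,0,1,z_4]=\frac{0-1}{z_4-1}=\frac{1}{1-z_4}.$$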
\begin{corollary}
    For extended complex numbers $z_1,z_2,z_3,z_4$ with $z_1,z_2,z_3$ distinct, the cross ratio is equal to $f(z_4)$, where $f$ is the unique M\"obius transformation sending $z_1,z_2,z_3$ to $0,1,\infty$ respectively.
\end{corollary}
\begin{theorem}
    M\"obius transformations preserve the cross ratio.
\end{theorem}
\begin{proof}
    Suppose $z_1,z_2,z_3,z_4\in\mathbb C_\infty$ and $g\in\mathcal M$.
    Let $f$ be the M\"obius transformation sending $z_1,z_2,z_3$ to $0,1,\infty$, so that the cross ratio is $f(z_4)$.
    Then $f\circ g^{-1}$ sends $g(z_1),g(z_2),g(z_3)$ to $0,1,\infty$, hence the cross ratio of the $g(z_i)$'s is $(f\circ g^{-1})(g(z_4))=f(z_4)$.
\end{proof}
The converse is also true (and is proved on the example sheet): if a map preserves the cross ratio, then it is a M\"obius transformation.
\begin{corollary}
    Four distinct points $z_1,z_2,z_3,z_4\in\mathbb C_\infty$ are on a circle (in the sense of $\mathbb C_\infty$) if and only if $[z_1,z_2,z_3,z_4]$ is real.
\end{corollary}
\begin{proof}
    Let $f$ be the unique M\"obius transformation sending $z_1,z_2,z_3$ to $0,1,\infty$, so $[z_1,z_2,z_3,z_4]=f(z_4)$.
    Let $c$ be the unique circle passing through $z_1,z_2,z_3$; then $z_4\in c\iff f(z_4)\in f(c)$, but $f(c)=\mathbb R\cup\{\infty\}$, so this happens if and only if $f(z_4)$ is real.
\end{proof}
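As a quick sanity check, the four points $1,i,-1,-i$ all lie on the unit circle, and indeed
$$[1,i,-1,-i]=\frac{(-i-1)(i+1)}{(i-1)(-i+1)}=\frac{-2i}{2i}=-1\in\mathbb R$$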
{ "alphanum_fraction": 0.658463316, "avg_line_length": 60.1041666667, "ext": "tex", "hexsha": "1bd28828ddef7f118bfd7555e87df9a01baf9c4b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "david-bai-notes/IA-Groups", "max_forks_repo_path": "6/mobius.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "david-bai-notes/IA-Groups", "max_issues_repo_path": "6/mobius.tex", "max_line_length": 276, "max_stars_count": null, "max_stars_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "david-bai-notes/IA-Groups", "max_stars_repo_path": "6/mobius.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3028, "size": 8655 }
\chapter{Algorithms}
\label{chap:Algorithm}
This chapter will cover the various algorithms, including interpolation, that are required either for preparing data or for evaluating the equations outlined in \Cref{chap:2ndOrdCalc}.
The reason for the interpolation is that the number of frequencies given in the WAMIT output files (on the order of tens of frequencies) is unlikely to correspond to the number of wave frequencies actually used by HydroDyn (on the order of hundreds to thousands), nor does the WAMIT output necessarily have to be equally discretized or even complete.
The WAMIT output files may be very sparsely populated, so it is necessary to interpolate in order to fill in the missing values.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{FFT and IFFT}
\label{sec:Algorithm:FFT}
The FFT (or discrete Fourier transform -- DFT) and inverse FFT used in \HD are found in the FFTPACK version 4.1 from UCAR/NCAR.
For a given discretized function, for example the complex wave form $Z[k]$ in the frequency domain, the inverse Fourier transform to the time domain can be written as:
\begin{equation}
   z(t_n) = \frac{1}{N} \sum\limits_{k=-\frac{N}{2}+1}^{\frac{N}{2}} Z[k] e^{i\omega_k t_n} = \Re\left\{\sum\limits_{k=1}^{N'}a_k e^{i\omega_k t_n}\right\},
   \label{eq:IFFTofZ}
\end{equation}
where $N$ is given in \Cref{eq:N} as
\begin{equation}
   N=\frac{2 \pi}{\Delta t \Delta \omega} = \frac{t_\text{max}}{\Delta t} = \frac{2 \omega_\text{max}}{\Delta \omega} = 2(N'+1).
   \label{eq:IFFT_N}
\end{equation}
In \Cref{eq:IFFTofZ}, the expression for the first summation is what is used within \HD and the second summation expression is used in some of Tiago's writings.
The relationship between $Z[k]$ and $a_k$ can be written as
\begin{equation}
   Z[k] =
   \begin{dcases*}
      \frac{N a_k}{2}         &  $k=1 ~\ldots~ N/2-1$\\
      0                       &  $k=0$ and $k=N/2$\\
      \frac{N a^*_{|k|}}{2}   &  $k=-N/2+1 ~\ldots~ -1$\\
   \end{dcases*}
   \label{eq:IFFT_Zk_ak}
\end{equation}
where $a^*$ is the complex conjugate of $a$.
\subsection{Numerical Evaluation of IFFT}
In the evaluation of \Cref{eq:IFFTofZ} in \HD to yield the wave height as a function of time, the IFFT is evaluated as
\begin{equation}
   z(t_n) = \frac{1}{N} \sum\limits_{k=-\frac{N}{2}+1}^{\frac{N}{2}} Z[k] e^{i\omega_k t_n} = \operatorname{IFFT}\left(Z[k]\right)
   \label{eq:IFFTofZ:eval}
\end{equation}
where the IFFT is only evaluated over $k = 0 \ldots N/2$ (the negative frequencies are handled internally following the relationships in \Cref{eq:IFFT_Zk_ak}).
The normalization constant of $1/N$ is also handled by the IFFT subroutines and is set by the initialization of the IFFT.
There are some constraints imposed on what $N$ can be because of the IFFT solver used.
$N$ must be even, and preferably a product of small prime numbers for speed.
Additionally, $Z[k=0] = 0$ and $Z[k=N/2] = 0$ must be specified.
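These conventions can be checked with a quick prototype outside of \HD. The following Python sketch is illustrative only: it uses NumPy's \texttt{irfft} in place of FFTPACK, and the values of $N$, $\Delta t$, and $Z[k]$ are placeholders rather than data from an actual WAMIT run. It confirms that passing only the $k = 0 \ldots N/2$ terms to the library routine, with $Z[0] = Z[N/2] = 0$, reproduces the direct evaluation of \Cref{eq:IFFTofZ:eval}, including the $1/N$ normalization.
\begin{verbatim}
import numpy as np

# Placeholder parameters; N must be even (and ideally a product of small primes).
N, dt = 64, 0.25
domega = 2.0 * np.pi / (N * dt)

# Positive-frequency amplitudes Z[k], k = 0..N/2, with the required zero entries.
rng = np.random.default_rng(0)
Z = rng.normal(size=N//2 + 1) + 1j * rng.normal(size=N//2 + 1)
Z[0] = 0.0
Z[N//2] = 0.0

# Library IFFT: the negative frequencies are filled in as complex conjugates
# and the 1/N normalization is applied internally.
z_fast = np.fft.irfft(Z, n=N)

# Direct evaluation of the sum over k = -N/2+1 .. N/2 for comparison.
t = dt * np.arange(N)
k = np.arange(-N//2 + 1, N//2 + 1)
Z_full = np.where(k >= 0, Z[np.abs(k)], np.conj(Z[np.abs(k)]))
z_direct = np.real(Z_full * np.exp(1j * np.outer(t, k * domega))).sum(axis=1) / N

assert np.allclose(z_fast, z_direct)
\end{verbatim}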
\subsection{$Z[k]$ in \HD}
In \HD the complex wave form in frequency space, $Z[k]$, is given as
\begin{equation}
   Z[k] = W[k]\sqrt{\frac{2\pi}{\Delta t} S^\text{2-sided}_\zeta (\omega_k)} = W[k]\sqrt{N \Delta \omega S^\text{2-sided}_\zeta (\omega_k)}
   \label{eq:Zk}
\end{equation}
where
\begin{equation}
   W[k] = \sqrt{\frac{N}{2}}\sqrt{-2 \ln \left( U_1[k]\right)}e^{i 2\pi U_2[k]},
   \label{eq:Wk}
\end{equation}
the DFT of Gaussian white noise using the Box-Muller method, and $S^\text{2-sided}_\zeta(\omega_k)$ is the two-sided power spectral density (PSD) of the wave elevation per unit time.\footnote{Note that Tiago uses $a_k = \sqrt{2\Delta \omega S^\text{1-sided}_\zeta(\omega_k)}e^{i 2\pi U_k}$ which is a simplification where $U_k = U_2[k]$ and $\sqrt{-2 \ln\left(U_1[k]\right)} = \sqrt{2}$.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Interpolation}
\label{sec:Algorithm:Interp}
Four interpolation algorithms are required for the \modname{WAMIT2} module: two three-dimensional and two four-dimensional interpolations.
For each set of 3D and 4D interpolation algorithms, a linear interpolation for full arrays and a linear interpolation for sparse arrays are needed.
Due to the complexity of implementing an interpolation over sparse arrays and time constraints in the development schedule, a placeholder will be created for it with an error message stating that an interpolation scheme for sparse arrays is not implemented at this time.
The WAMIT output file reading algorithm is developed in such a way that an unordered sparse array can be read in and stored (see \Cref{sec:WamitOuput:Read}).
The first order wave forces calculated within the \fname{WAMIT} module are interpolated with a linear method.
In light of this and considering time constraints on the development, we will use linear interpolation algorithms for now.
If time permits, we might investigate other interpolation algorithms that produce smooth surfaces and smoothly continuous derivatives (such as cubic interpolation) or that better allow for sparse data.
%The issue with this interpolation method is that it tends to produce overshoot which may be undesirable. Algorithms which could minimize the effects of overshoot such as cosine fitting, radial basis function weighting, and hermite polynomial among others. In the ideal world, one would choose an algorithm that will best fit the data. In this case, this is not obvious considering that the general shape of the data will depend on what floating platform is used.
\subsection{3D Interpolation}
\label{sec:interp:3d}
\subsubsection{Full array interpolation}
A three-dimensional linear interpolation routine is available in the \modname{InflowWind} module.
This routine was written specifically for the full field wind files, so it will require some modification to generalize it.
\subsubsection{Sparse array interpolation}
\label{sec:interp:3d:sparse}
Due to time constraints in the development schedule, a placeholder subroutine will be created that tells the user that this is a currently unavailable feature.
The user can then use an external data manipulation program to do the interpolation on their WAMIT output to create a full array (that can be unordered) that can be read in.
The WAMIT output file reading algorithm will accommodate reading either a full array (both the upper and lower triangle of the QTF) or a partial array (upper half only, or a mix of upper and lower).
This routine will expect a mask array (boolean?)
of identical size to the sparse array that indicates which elements of the data array are missing.
By creating this placeholder subroutine, we give ourselves the option of creating this interpolation scheme as time permits, with the ability to handle a limited sparseness of the QTF array (\emph{i.e.,} no more than a two-step gap in any dimension).
\subsection{4D Interpolation}
\label{sec:interp:4d}
\subsubsection{Full array interpolation}
At present we do not have any four-dimensional interpolation routines.
A four-dimensional linear interpolation should be a fairly straightforward extension of the three-dimensional one; a brief illustrative sketch is included at the end of this chapter.
\subsubsection{Sparse array interpolation}
\label{sec:interp:4d:sparse}
See \Cref{sec:interp:3d:sparse}.
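As referenced above, the full-array four-dimensional linear interpolation can be prototyped with an off-the-shelf routine before a Fortran version is written. The sketch below is illustrative only: it uses SciPy's regular-grid interpolator, and the axis names, grid sizes, and QTF values are placeholders rather than data from a WAMIT output file.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical full (non-sparse) QTF tabulated on a regular 4-D grid over two
# wave frequencies and two wave directions; sizes and values are made up.
w1 = np.linspace(0.1, 3.0, 30)        # first frequency  [rad/s]
w2 = np.linspace(0.1, 3.0, 30)        # second frequency [rad/s]
b1 = np.linspace(-180.0, 180.0, 9)    # first direction  [deg]
b2 = np.linspace(-180.0, 180.0, 9)    # second direction [deg]
W1, W2, B1, B2 = np.meshgrid(w1, w2, b1, b2, indexing="ij")
qtf = np.cos(W1 - W2) * np.cos(np.radians(B1 - B2))   # placeholder values

# 4-D linear interpolation on the full array.
interp = RegularGridInterpolator((w1, w2, b1, b2), qtf, method="linear")
value = interp([[0.73, 1.21, 15.0, -30.0]])   # query at one (w1, w2, b1, b2)
\end{verbatim}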
{ "alphanum_fraction": 0.7430873975, "avg_line_length": 77.3870967742, "ext": "tex", "hexsha": "19c56148055a5b265199d186bf8b880a726dad99", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-03-20T02:57:07.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-20T02:57:07.000Z", "max_forks_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "NWTC/HydroDyn", "max_forks_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_issues_repo_issues_event_max_datetime": "2020-09-22T08:33:26.000Z", "max_issues_repo_issues_event_min_datetime": "2020-09-18T11:49:28.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "NWTC/HydroDyn", "max_issues_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_line_length": 684, "max_stars_count": 1, "max_stars_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "NWTC/HydroDyn", "max_stars_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-28T21:32:25.000Z", "max_stars_repo_stars_event_min_datetime": "2021-09-28T21:32:25.000Z", "num_tokens": 1945, "size": 7197 }
\section{Network Connection Similarity}
\label{sec:connectionsimilarity}
In this section, I describe how the multiclass spectral clustering algorithm can be applied to find anomalies in network connection data.
%Equation~\ref{eq:sim} contains cosine similarity function and nearest neighbourhood measurement that allow a set of data points to construct the normalized Laplacian matrix.
%It works well on low-dimensional data otherwise its pairwise cosine similarity and nearest neighbourhood can be arbitrary.
%In this section, I describe how EM algorithm can reduce network data points' dimensionality and how we can detect point anomalies within a cluster which is classified as a normal case.
%(p4: Add a sentence in the beginning of Section 5 that leads over from Section 4, saying that you now will explain how the multiclass spectral clustering algorithm can be applied to finding anomaly classes in network connection data.)
A network connection is a connection between computers on the Internet to transmit and receive data.
Connection data created by host computers is important for monitoring network status.
The problem is that network connection data is mostly high-dimensional, so its pairwise cosine similarity and nearest neighbourhood can be arbitrary.
Network connection data also contain a number of point anomalies.
A major drawback of clustering is that point anomalies can be assigned to a larger cluster which might be classified as normal behaviour.
%Network connection similarity helps to measure the similarity between network connections, and is essential to construct clusters.
%, and requires comparison methods that helps to measure the similarity between network connections.
%Network connection similairy methods provide how the data points are similar each other and is essential to construct clusters.
After reviewing the families of proposed schemes, I identified similarity and density approaches as promising approaches to solve these problems.
%In this report, we are gonig to measure similarity between network connections.
%Since there is not only one normal state, I generated mixture models for normal connections for each protocols and attributes.
%so the problem of comparing similarity has been an important problem.
%\cite{} \newline
%In Section~\ref{subsec:problemformulation}, I describe the problem.\newline
In Section~\ref{subsec:normalabnormalsimilarity}, I propose new approaches to train a similarity score function.\newline
In Section~\ref{subsec:densitysimilarity}, I describe a density similarity measurement which relies on representatives of the clusters and a threshold.\newline
%In Section~\ref{subsec:learningsimilarity}, I describe how the algorithm learn mixture models with training set.\newline
%\subsection{Problem Formulation}
%\label{subsec:problemformulation}
%Given network connection data, we can learn normal and abnormal mixture models in order to measure similarity score and estimate a density of normal connections.
%Anomalies can be detected by cluster algorithm based on affinity matrix which is computed by similarity score, or by comparing density of clusters against threshold.
%The aim of the algorithm is to detect a set of anomalies and is not to classify them into types of anomalies.
%
\subsection{Normal and Abnormal Network Connection Similarity}
\label{subsec:normalabnormalsimilarity}
I now describe how to train a similarity score function for both normal and abnormal network connection data points in order to reduce dimensionality.
Let $\hat{x_i} = (score_{\text{normal}}, score_{\text{abnormal}})$ be a two-dimensional node, where $score_{\text{normal}}$ is the similarity score against the normal mixture models and $score_{\text{abnormal}}$ is the similarity score against the abnormal mixture models.
Two nodes are then considered similar if their cosine similarity is high.
%that is in spectral embedded space $\mathbb{R}^{n \times k}$ tranported from network connection data points $\mathbb{R}^{n}$ where $n$ is the number of data points and $k$ is the desirable number of clustering from eigengap algorithm.
%% They compute the similarity score, log probability of each connections, under the model.
%I found that cosine similarity shows better performance when to measure the similarity between two nodes $\hat{x_i}$ and $\hat{x_j}$.
%A node $\hat{x_i} = (s_{\text{normal}}, s_{\text{abnormal}})$ is two-dimensional which is transported from 39-dimensional data point $x_i$.
A similarity score is a real value and is defined as follows:
\begin{equation}
score_j = \sum_{i=1}^{39} w_i \theta_i (x_j)
\end{equation}
where $j \in \{1,\cdots,n\}$ is an index over the data points, $x_j$ is the $j$th data point, $\theta_i$ is the log-likelihood function of the mixture model for the $i$th attribute, and $w_i$ is the weight of the $i$th attribute based on its importance \cite{kayacik05}.
%The similarity for given data point $x$ that is a normal or abnormal score $s = \sum_{i=1}^{39} w_i \theta_i$
To do this, we need mixture models for two classes (normal and abnormal), three protocols (TCP/IP, ICMP, UDP), and 39 attributes as in Table~\ref{fig:preprocessing}.
As a result, the total number of mixture models required is 234 ($= 2 \cdot 3 \cdot 39$).
Each mixture model can be learned from the training set by the EM algorithm, and it is guaranteed to converge to a local optimum on a given input.
The log-likelihood function $\theta_i$ of a GMM in this case is:
\begin{equation}
\theta_i(x_j) = \ln p(z_{i,j} \mid \pi, \mu, \Sigma) = \ln (\sum_{k=1}^K \pi_k N(z_{i,j} \mid \mu_k, \Sigma_k))
\end{equation}
where $k$ is an index over the components of the GMM, $\pi_k$ is the weight of the $k$th component, $\mu_k$ is the mean of the $k$th component, $\Sigma_k$ is the variance of the $k$th component,
%$K$ is the number of components of GMM,
and $z_{i,j}$ is the value of the $j$th data point $x_j$'s $i$th attribute.
For example, the 38th column of the NSL-KDD dataset is ``dst-host-rerror-rate'', and if the first data point's ``dst-host-rerror-rate'' is $0.05$, we write $z_{38,1} = 0.05$.
%where $\theta_i$ is a log-likelihood probability calculated from a mixture model $m_i$ of all 39 attributes.
%The log-likelihoods $\theta_1, \cdots, \theta_{39}$ are going to be weighted by $w_1, \cdots, w_{39}$ based on attribute's importance \cite{kayacik05}.
%Each attribute has different correlation to the result \cite{olusola10}\cite{kayacik05}, so I give a different weight $w_i$ on them.
%Also $(\theta_1, \cdots, \theta_{39})$ is called a score vector $v$.
%As a result, we convert every 39-dimensional data point $x_i$ to 2-dimensional data point $\hat{x_i} = (s_{\text{normal}}, s_{\text{abnormal}})$ in the dataset.
%So we are going to have nodes $X = (\hat{x_1}, \cdots, \hat{x_n})$.
In short, a similarity score is simply a weighted sum of all the elements of the score vector $v = (\theta_1(x_j), \cdots, \theta_{39}(x_j))$, and by using the cosine similarity function, we can measure the pairwise similarity between two transported data points.
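To make the construction concrete, the sketch below shows one way the per-attribute scores could be computed with scikit-learn's \texttt{GaussianMixture}. It is an illustration of the idea only: the number of components, the attribute weights, and the arrays are placeholders, and the per-protocol split of the models is omitted for brevity.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_attribute_gmms(X, n_components=3):
    # One univariate GMM per attribute (column) of the training data X.
    return [GaussianMixture(n_components).fit(X[:, [i]]) for i in range(X.shape[1])]

def similarity_score(x, gmms, w):
    # Weighted sum of the per-attribute log-likelihoods theta_i(x_j).
    return sum(w[i] * gmms[i].score_samples(np.array([[x[i]]]))[0]
               for i in range(len(gmms)))

# Placeholder data: rows are connections, columns are the 39 attributes.
X_normal = np.random.rand(1000, 39)
X_abnormal = np.random.rand(1000, 39)
w = np.ones(39) / 39                      # hypothetical attribute weights

gmms_normal = fit_attribute_gmms(X_normal)
gmms_abnormal = fit_attribute_gmms(X_abnormal)

x = X_normal[0]
x_hat = (similarity_score(x, gmms_normal, w),
         similarity_score(x, gmms_abnormal, w))  # 2-D node (score_normal, score_abnormal)
\end{verbatim}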
%we have a set of nodes $\hat{X} = (\hat{x_1}, \cdots, \hat{x_n})$ and
%is going to be used to measure pairwise similarity
%and it is going to be used to measure cosine similarity.
%Learned $234(=2 \times 3 \times 39)$ Gaussian mixture models in total.
%\begin{itemize}
%\item 2 : one for normal, one for abnormal.
%\item 3 : each per each protocol e.g.) udp, icmp, tcp.
%\item 39 : for all attributes.
%Since the data fit the gaussian and a sufficient number of data points are available to learn the parameters of the model, the model can be learned.
%\begin{equation}
% sim(V, V') = \frac{A \cdot B}{|A| |B|}
%\end{equation}
%
\subsection{Connection Density Similarity}
\label{subsec:densitysimilarity}
Another point of view on anomaly detection is a comparison of densities.
A data point is abnormal if its density differs from the density of known normal connections, even if its cluster is classified as normal behaviour.
%For anomaly detection, only density of the data point is compared with the density of known normal connections.
If the density of a data point is higher than the supposed density in that region, it is classified as an anomaly.
This allows detecting unknown anomalies which are similar to known normal connections.
In contrast to the known normal and abnormal connection similarity, it only requires the normal network connections that appear in the training set.
%The
We can adjust the threshold if the false-positive or false-negative rate is high.
%This is illustrated in (Figure) where one cluster over the distribution and threshold.
The density function $\Phi(\hat{x})$ is computed by kernel density estimation \cite{ester96} as follows:
\begin{equation}
\Phi(\hat{x_j}) = \dfrac{1}{n} \sum \limits_{i=1}^{n} K(\hat{x_j} - \hat{x_i})
\end{equation}
where $\hat{x_j}$ is the $j$th node, $n$ is the number of data points, $i$ is an index over the data points, and $K$ is a kernel function.
In this report, I use the standard normal density as the kernel function.
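As an illustration of how this density could be evaluated in practice (not necessarily the exact implementation used here), SciPy's Gaussian kernel density estimator can be fit on the two-dimensional nodes of the normal training connections and compared against a threshold. Note that \texttt{gaussian\_kde} applies an automatic bandwidth, whereas the equation above uses the unit-bandwidth standard normal kernel directly, and the threshold value below is purely hypothetical.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

# Placeholder 2-D nodes (score_normal, score_abnormal) of normal training data;
# gaussian_kde expects an array of shape (n_dimensions, n_points).
nodes_normal = np.random.randn(2, 500)

kde = gaussian_kde(nodes_normal)      # Gaussian (standard normal) kernel

# Evaluate the density at query nodes and apply the threshold rule.
queries = np.random.randn(2, 10)
density = kde.evaluate(queries)
threshold = 0.5                        # hypothetical; tuned on validation data
anomalous = density > threshold        # flag points whose density is too high
\end{verbatim}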
{ "alphanum_fraction": 0.7666038131, "avg_line_length": 76.9838709677, "ext": "tex", "hexsha": "cb89f75fbd380e66994535ab54faf9565d961fbc", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-03-16T21:50:52.000Z", "max_forks_repo_forks_event_min_datetime": "2020-03-16T21:50:52.000Z", "max_forks_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wsgan001/AnomalyDetection", "max_forks_repo_path": "report/sections/method.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wsgan001/AnomalyDetection", "max_issues_repo_path": "report/sections/method.tex", "max_line_length": 273, "max_stars_count": null, "max_stars_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wsgan001/AnomalyDetection", "max_stars_repo_path": "report/sections/method.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2374, "size": 9546 }
\documentclass{article}
\usepackage{fullpage}
\usepackage{nopageno}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[normalem]{ulem}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
Jon Allen
February 12, 2014
\section*{14}
Generate the 6-tuples of 0s and 1s by using the base 2 arithmetic generating scheme and identify them with subsets of the set $\{x_5,x_4,x_3,x_2,x_1,x_0\}$.
000000$\to\emptyset$
000001$\to\{x_0\}$
000010$\to\{x_1\}$
000011$\to\{x_1,x_0\}$
000100$\to\{x_2\}$
000101$\to\{x_2,x_0\}$
000110$\to\{x_2,x_1\}$
000111$\to\{x_2,x_1,x_0\}$
001000$\to\{x_3\}$
001001$\to\{x_3,x_0\}$
001010$\to\{x_3,x_1\}$
001011$\to\{x_3,x_1,x_0\}$
001100$\to\{x_3,x_2\}$
001101$\to\{x_3,x_2,x_0\}$
001110$\to\{x_3,x_2,x_1\}$
001111$\to\{x_3,x_2,x_1,x_0\}$
010000$\to\{x_4\}$
010001$\to\{x_4,x_0\}$
010010$\to\{x_4,x_1\}$
010011$\to\{x_4,x_1,x_0\}$
010100$\to\{x_4,x_2\}$
010101$\to\{x_4,x_2,x_0\}$
010110$\to\{x_4,x_2,x_1\}$
010111$\to\{x_4,x_2,x_1,x_0\}$
011000$\to\{x_4,x_3\}$
011001$\to\{x_4,x_3,x_0\}$
011010$\to\{x_4,x_3,x_1\}$
011011$\to\{x_4,x_3,x_1,x_0\}$
011100$\to\{x_4,x_3,x_2\}$
011101$\to\{x_4,x_3,x_2,x_0\}$
011110$\to\{x_4,x_3,x_2,x_1\}$
011111$\to\{x_4,x_3,x_2,x_1,x_0\}$
100000$\to\{x_5\}$
100001$\to\{x_5,x_0\}$
100010$\to\{x_5,x_1\}$
100011$\to\{x_5,x_1,x_0\}$
100100$\to\{x_5,x_2\}$
100101$\to\{x_5,x_2,x_0\}$
100110$\to\{x_5,x_2,x_1\}$
100111$\to\{x_5,x_2,x_1,x_0\}$
101000$\to\{x_5,x_3\}$
101001$\to\{x_5,x_3,x_0\}$
101010$\to\{x_5,x_3,x_1\}$
101011$\to\{x_5,x_3,x_1,x_0\}$
101100$\to\{x_5,x_3,x_2\}$
101101$\to\{x_5,x_3,x_2,x_0\}$
101110$\to\{x_5,x_3,x_2,x_1\}$
101111$\to\{x_5,x_3,x_2,x_1,x_0\}$
110000$\to\{x_5,x_4\}$
110001$\to\{x_5,x_4,x_0\}$
110010$\to\{x_5,x_4,x_1\}$
110011$\to\{x_5,x_4,x_1,x_0\}$
110100$\to\{x_5,x_4,x_2\}$
110101$\to\{x_5,x_4,x_2,x_0\}$
110110$\to\{x_5,x_4,x_2,x_1\}$
110111$\to\{x_5,x_4,x_2,x_1,x_0\}$
111000$\to\{x_5,x_4,x_3\}$
111001$\to\{x_5,x_4,x_3,x_0\}$
111010$\to\{x_5,x_4,x_3,x_1\}$
111011$\to\{x_5,x_4,x_3,x_1,x_0\}$
111100$\to\{x_5,x_4,x_3,x_2\}$
111101$\to\{x_5,x_4,x_3,x_2,x_0\}$
111110$\to\{x_5,x_4,x_3,x_2,x_1\}$
111111$\to\{x_5,x_4,x_3,x_2,x_1,x_0\}$
\section*{16}
For each of the subsets (a), (b), (c), and (d) in the preceding exercise, determine the subset that immediately \emph{precedes} it in the base 2 arithmetic generating scheme.
\subsection*{(a)}
$\{x_4,x_1,x_0\}=00010011\gets00010010$, or $\{x_4,x_1\}$.
\subsection*{(b)}
$\{x_7,x_5,x_3\}=10101000\gets10100111$, or $\{x_7,x_5,x_2,x_1,x_0\}$
\subsection*{(c)}
$\{x_7,x_5,x_4,x_3,x_2,x_1,x_0\}=10111111\gets10111110$, or $\{x_7,x_5,x_4,x_3,x_2,x_1\}$
\subsection*{(d)}
$\{x_0\}=00000001\gets00000000$, or $\emptyset$
\section*{17}
Which subset of $\{x_7,x_6,\dots,x_1,x_0\}$ is 150th on the list of subsets of $S$ when the base 2 arithmetic generating scheme is used? 200th? 250th? (As in Section 4.3, the places on the list are numbered beginning with 0.)
\begin{align*}
2^0&=1&
2^1&=2&
2^2&=4&
2^3&=8&
2^4&=16&
2^5&=32&
2^6&=64&
2^7&=128
\end{align*}
\begin{align*}
150&=128+16+4+2\to\{x_7,x_4,x_2,x_1\}&
200&=128+64+8\to\{x_7,x_6,x_3\}\\
250&=128+64+32+16+8+2\to\{x_7,x_6,x_5,x_4,x_3,x_1\}
\end{align*}
\section*{22}
Determine the reflected Gray code of order 6.
000000 000001 000011 000010 000110 000111 000101 000100
001100 001101 001111 001110 001010 001011 001001 001000
011000 011001 011011 011010 011110 011111 011101 011100
010100 010101 010111 010110 010010 010011 010001 010000
110000 110001 110011 110010 110110 110111 110101 110100
111100 111101 111111 111110 111010 111011 111001 111000
101000 101001 101011 101010 101110 101111 101101 101100
100100 100101 100111 100110 100010 100011 100001 100000
\section*{24}
Determine the predecessors of each of the 9-tuples in Exercise 23 in the reflected Gray code of order 9.
\subsection*{(a)}
$010100110\gets010100010$
\subsection*{(b)}
$110001100\gets110000100$
\subsection*{(c)}
$111111111\gets111111110$
\section*{27}
Generate the 2-subsets of $\{1,2,3,4,5,6\}$ in lexicographic order by using the algorithm described in Section 4.4.
\[\begin{array}{ccc}
12&23&35\\
13&24&36\\
14&25&45\\
15&26&46\\
16&34&56\\
\end{array}\]
\section*{29}
Determine the 7-subset of $\{1,2,\dots,15\}$ that immediately follows 1,2,4,6,8,14,15 in the lexicographic order. Then determine the 7-subset that immediately precedes 1,2,4,6,8,14,15.
Since 14 and 15 are as high as we can go, we increment the 8 and start counting from there.
\[1,2,4,6,8,14,15\text{ is followed by }1,2,4,6,9,10,11\]
Since we can't decrement the 15 to 14 (we already have a 14), we decrement the 14 to 13 and leave the 15, since it is the max and we want it to roll over on the next count up.
\[1,2,4,6,8,14,15\text{ is preceded by }1,2,4,6,8,13,15\]
\section*{31}
Generate the 3-permutations of $\{1,2,3,4,5\}$
\begin{align*}
\begin{array}{cccccccccc}
123&124&125&134&135&145&234&235&245&345\\
132&142&152&143&153&154&243&253&254&354\\
312&412&512&413&513&514&423&523&524&534\\
321&421&521&431&531&541&432&532&542&543\\
231&241&251&341&351&451&342&352&452&453\\
213&214&215&314&315&415&324&325&425&435\\
\end{array}
\end{align*}
\section*{33}
In which position does the subset 2489 occur in the lexicographic order of the 4-subsets of $\{1,2,3,4,5,6,7,8,9\}$?
Using theorem 4.4.2 we have:
\[\binom{9}{4}-\binom{7}{4}-\binom{5}{3}-\binom{1}{2}-\binom{0}{1}=\frac{9!}{4!(9-4)!}-\frac{7!}{4!(7-4)!}-\frac{5!}{3!(5-3)!}-0-0=81\]
\section*{34}
Consider the r-subsets of $\{1,2,\dots,n\}$ in lexicographic order.
\subsection*{(a)}
What are the first $(n-r+1)$ $r$-subsets?
\begin{align*}
\{1,2,\dots, (r-1),&r\}\\
\{1,2,\dots, (r-1),&(r+1)\}\\
&\vdots\\
\{1,2,\dots, (r-1),&n\}
\end{align*}
The first $(n-r+1)$ subsets all contain $1,2,\dots,(r-1)$, and then one last element that ranges over the numbers $r,\dots,n$.
\subsection*{(b)}
What are the last $(r+1)$ $r$-subsets?
\begin{align*}
\{(n-r+1)(n-r+2)\dots(n-1)(n)\}\\
\{(n-r)(n-r+2)\dots(n-1)(n)\}\\
\{(n-r)(n-r+1)(n-r+3)\dots(n-1)(n)\}\\
\vdots\\
\{(n-r)(n-r+1)\dots(n-1)\}
\end{align*}
Basically the last subset is the last $r$ elements. Each of the preceding $r$ subsets decreases by one the element that corresponds with how far from the last subset we are.
\section*{35}
The \emph{complement} $\bar{A}$ of an $r$-subset $A$ of $\{1,2,\dots,n\}$ is the $(n-r)$-subset of $\{1,2,\dots,n\}$, consisting of all those elements that do not belong to $A$. Let $M=\binom{n}{r}$, the number of $r$-subsets and, at the same time, the number of $(n-r)$-subsets of $\{1,2,\dots,n\}$. Prove that, if
\[A_1,A_2,A_3,\dots,A_M\]
are the $r$-subsets in lexicographic order, then
\[\overline{A_M},\dots,\overline{A_3},\;\overline{A_2},\;\overline{A_1}\]
are the $(n-r)$-subsets in lexicographic order.
\subsection*{Lemma}
Reversing Theorem 4.4.1 from the text, we get the following lemma.
Let $a_1a_2\dots a_r$ be an $r$-subset of $\{1,2,\dots,n\}$. The last $r$-subset in the lexicographic ordering is $(n-r+1)(n-r+2)\dots n$. Assume that $a_1a_2\dots a_r\neq 12\dots r$. Let $k$ be the largest integer such that $a_k>1$ and $a_k-1$ is different from each of $a_1,\dots,a_{k-1}$; then the $r$-subset that is the immediate predecessor of $a_1a_2\dots a_r$ in the lexicographic ordering is $a_1\dots a_{k-1}(a_k-1)(n-r+k+1)\dots n$.
\subsection*{Proof}
First let's take care of the trivial cases.
Let $n<r$. Then $A_1=A_M=\emptyset$ and $\overline{A_1}=\overline{A_M}=\{1,2,\dots,n\}$. Since there is only one subset we are done.
Let $n=r$. Then $A_1=A_M=\{1,2,\dots,n\}$ and $\overline{A_1}=\overline{A_M}=\emptyset$. Since there is only one subset we are done.
Let $n>r$. Let's look at $A_1$. It is $\{1,2,\dots,r\}$. We also know that $A_M$ is $\{(n-r+1),(n-r+2),\dots,n\}$, so $\overline{A_M}$ is $\{1,2,\dots,(n-r)\}$, which is by definition the first $(n-r)$-subset in lexicographic order.
Now we find that $A_{M-1}$ is $\{n-r,n-r+2,\dots,n\}$. The complement of this is $\{1,2,\dots,n-r-1,n-r+1\}$. And the successor to $\overline{A_M}$ is $\{1,2,\dots,n-r-1,n-r+1\}$. So we see that the hypothesis holds for $A_M$ and $A_{M-1}$.
Now let's take an arbitrary subset $A_i=a_1a_2\dots a_r$. Determine $k$ as in the lemma above. Then
\[a_1a_2\dots a_r=a_1\dots a_{k-1}a_k(n-r+k+1)\dots n\]
where
\[a_k-1>a_{k-1}\]
Thus the immediate predecessor of $a_1a_2\dots a_r$ is $A_{i-1}=a_1\dots a_{k-1}(a_k-1)(n-r+k+1)\dots n$.
Let us take $\overline{A_{i}}$. We know that $a_k(a_k+1)\dots (a_k+r-k-1)(a_k+r-k)\not\in\overline{A_i}$ and $a_1\dots a_{k-1}\not\in\overline{A_i}$, but that $a_k-1\in\overline{A_i}$. Further, we know that $(a_k+r-k+1)\dots n\in\overline{A_i}$. Notice that if we ``shift'' this $r-k$ places we have $a_k+1\dots n-r+k$, and $|a_k+1\dots n-r+k|=|(a_k+r-k+1)\dots n|$. We don't really care about the part of $\overline{A_i}$ before $a_k$, so we'll just call it $b_1\dots b_p$. So the successor to $\overline{A_i}$ is $b_1\dots b_pa_k(a_k+1)(a_k+2)\dots (n-r+k)$.
And if we examine $\overline{A_{i-1}}$, we will have the same $b_1\dots b_p$ that $\overline{A_i}$ has, since we didn't change anything before $a_k$. And we have $(a_k-1)(n-r+k+1)\dots n\not\in\overline{A_{i-1}}$. Anything left over must be in $\overline{A_{i-1}}$, so we have $\overline{A_{i-1}}=b_1\dots b_pa_k(a_k+1)\dots(n-r+k)$. This is precisely the successor of $\overline{A_i}$, and so we have proven our result by induction.
\end{document}
{ "alphanum_fraction": 0.669883351, "avg_line_length": 50.6989247312, "ext": "tex", "hexsha": "2875f1a86b8c354ddfca13dfbc4acbe6a27201d3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ylixir/school", "max_forks_repo_path": "combinatorics/combinatorics-hw-2014-02-12.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ylixir/school", "max_issues_repo_path": "combinatorics/combinatorics-hw-2014-02-12.tex", "max_line_length": 565, "max_stars_count": null, "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ylixir/school", "max_stars_repo_path": "combinatorics/combinatorics-hw-2014-02-12.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4338, "size": 9430 }
\def\beginanswers{\iffalse} %%\def\beginanswers{\iftrue} \documentclass[10pt]{article} \usepackage{amsmath,amsfonts,amsthm,amssymb} \usepackage{graphicx} \usepackage{enumerate} \input{cs1.tex} \newcommand{\vect}[1]{{\bf #1}} %for bold chars \newcommand{\vecg}[1]{\mbox{\boldmath $ #1 $}} %for bold greek chars \newcommand{\matx}[1]{{\bf #1}} \setlength{\parindent}{0in} \setlength{\parskip}{1em} \setlength{\textheight}{9.5in} \setlength{\textwidth}{7in} \setlength{\headsep}{0in} % distance from top of page to address \setlength{\topmargin}{-0.5in} \setlength{\oddsidemargin}{-0.5in} \setlength{\evensidemargin}{-0.5in} \begin{document} \thispagestyle{empty} \vspace*{0.5in} \begin{center} \Large \textbf{Open Source Software --- CSCI-4961 --- Summer 2019} \\ \textbf{Test 1} \\ \textbf{June 28, 2019} \end{center} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \beginanswers \begin{center} \Large \textbf{SOLUTIONS} \end{center} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \else %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{center} \textbf{\Large Name:} \underline {\hspace{2.0in}} \\ \bigskip \bigskip \centerline{ \includegraphics[height=0.5in]{boxes} } %% \begin{tabular}{|p{0.1in}|p{0.1in}|p{0.1in}|p{0.1in}|p{0.1in}|p{0.1in}|p{0.1in}|p{0.1in}|l|} %% \hline \\ %% & & & & & & & & \textbf{\large @rpi.edu} \\ %% \hline %% \end{tabular} %% %% \end{tabular} \bigskip \textbf{\Large RIN\#:} \underline {\hspace{1.5in}} \vspace*{0.4in} {\large\bf Honor pledge: On my honor I have neither given nor received aid on this exam.} \vspace*{0.1in} {\large\bf Please sign here to indicate that you agree with the honor pledge: \underline {\hspace{1.5in}}} \end{center} \vspace*{.45in} {\large\bf Instructions:} \begin{itemize} %%\item You have 90 minutes to complete this test. \item Clearly print your name, RCS ID (in all caps.) and your RIN at the top of your exam. \item This test is open book, open notes and open computer. You {\textbf may not} use the internet. Please turn off your wifi. \item There are \textbf{7 questions} on this test worth a total of \textbf{100 points}. \end{itemize} \centering{\begin{tabular}{|c|c|r|} \hline Question & Score & Possible \\ \hline 1 & & 10 \\ \hline 2 & & 14 \\ \hline 3 & & 15 \\ \hline 4 & & 9 \\ \hline 5 & & 12 \\ \hline 6 & & 15 \\ \hline 7 & & 25 \\ \hline Total & & 100 \\ \hline \end{tabular}} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \fi %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{enumerate} \item Answer the following short questions (10 pts) \begin{enumerate} \item Eric Raymond wrote, ``Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following:'' and then listed 7 actions. List any 4 of them (4/10 pts) \begin{enumerate}[1] \beginanswers \item Try to find an answer by searching the archives of the forum or mailing list you plan to post to. \item Try to find an answer by searching the Web. \item Try to find an answer by reading the manual. \item Try to find an answer by reading a FAQ. \item Try to find an answer by inspection or experimentation. \item Try to find an answer by asking a skilled friend. \item If you're a programmer, try to find an answer by reading the source code. 
\else \item - \bigskip \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \bigskip \fi \end{enumerate} \item Name the RPI student who was sued by the RIAA for copyright infringement (1/10 pts) \beginanswers \bigskip Jesse Jordan (Aaron Sherman works as well, although Aaron was not mentioned in our reading.) \bigskip \else \begin{itemize} \bigskip \bigskip \item - \bigskip \bigskip \end{itemize} \fi \item Given the following regex expression \verb+^[a-zA-Z](.*)\.(html|jpg)$+, indicate \textbf{Match} if the string would be a match, or \textbf{No Match} if it would not. (5/10 pts) \beginanswers \begin{enumerate}[1] \bigskip \item test.html \textbf{Match} \bigskip \bigskip \bigskip \item 2a38588hfquh.jpg \textbf{No Match} \bigskip \bigskip \bigskip \item jpg.r \textbf{No Match} \bigskip \bigskip \bigskip \item [email protected] \textbf{Match} \bigskip \bigskip \bigskip \item htmljpg \textbf{No Match} \bigskip \bigskip \bigskip \end{enumerate} \else \begin{enumerate}[1] \bigskip \item test.html \bigskip \bigskip \bigskip \item 2a38588hfquh.jpg \bigskip \bigskip \bigskip \item jpg.r \bigskip \bigskip \bigskip \item [email protected] \bigskip \bigskip \bigskip \item htmljpg \bigskip \bigskip \bigskip \end{enumerate} \fi \end{enumerate} \newpage \item The diagram below represents a normal (non-open source) software procurement cycle. Fill in the diagram with the terms needed to complete it. (We know that 7 appears 2 times.)(14 pts) \begin{figure}[h] \centering \includegraphics[width=.9\linewidth]{images/diag1.png} \label{fig:licensing} \end{figure} \beginanswers \begin{enumerate}[1] \item Is software for sale? \item Does standard license meet needs? \item Will vendor negotiate? \item Meet scope? \item Meet term? \item Renewal? \item Condition? \end{enumerate} \newpage \else \begin{enumerate}[1] \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \item - \bigskip \bigskip \bigskip \end{enumerate} \fi \item The Open Source Initiative defines 10 characteristics of open source software. For each of the following actions, indicate if the action is allowed or prohibited and the characteristic which allows or prohibits it. (15 pts) For example, for: \textit{Require a recipient of a derived work to contact the original author to get a license} the answer would be: \textbf{Prohibited, Distribution of License clause} Each correct answer is worth 3 points with the \textbf{Allowed} or \textbf{Prohibited} worth 1 point and identifying the appropriate characteristic worth 2 points. \beginanswers \begin{enumerate} \item The author of a source code distribution allows the original code to be freely distributed requiring only attribution for the original authors and inclusion of the copyright notice, but requires that any derived works be distributed under an AGPL license. (Original license is not AGPL) \textbf{Prohibited, Derived Works} \bigskip \item A user (recipient) gets a USB distribution of an open source family of products, all of the code on the USB is related and licensed simlarly. The user extracts one program from the USB and redistributes it as an independent program. 
\textbf{Allowed, License Must Not Be Specific to a Product}
\bigskip
\item An author requires that her source be maintained unchanged in any derivative works, but allows recipients to use \textit{git patch} files to implement and distribute modifications to the code.
\textbf{Allowed, Integrity of the Author's Source Code}
\bigskip
\item An author distributes source under a BSD license, but adds a clause that prohibits the software from being used by the military or by nuclear power plants.
\textbf{Prohibited, No Discrimination Against Persons or Groups}
\bigskip
\item A user downloads the source code of several open source projects from repositories on github onto a \$4.00 USB and then offers the USB for sale on EBAY for \$10000 without paying or offering remuneration to the original author.
\textbf{Allowed, Free Redistribution}
\bigskip
\end{enumerate}
\else
\begin{enumerate}
\item The author of a source code distribution allows the original code to be freely distributed, but requires that any derived works be distributed under an AGPL license. (Original license is not AGPL)
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item A user (recipient) gets a USB distribution of an open source family of products, all of the code on the USB is related and licensed similarly. The user extracts one program from the USB and redistributes it as an independent program.
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item An author requires that her source be maintained unchanged in any derivative works, but allows recipients to use \textit{git patch} files to implement and distribute modifications to the code.
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item An author distributes source under a BSD license, but adds a clause that prohibits the software from being used by the military or by nuclear power plants.
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item A user downloads the source code of several open source projects from repositories on github onto a \$4.00 USB and then offers the USB for sale on EBAY for \$10000 without paying or offering remuneration to the original author.
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\end{enumerate}
\fi
\newpage
\item Define the following types of intellectual property rights (9 pts):
\begin{enumerate}
\beginanswers
\item Copyright
\bigskip
Form of intellectual property, applicable to any expressed representation of a creative work, that grants the creator of an original work exclusive rights to its use and distribution, usually for a limited time.
\bigskip
\item Patent
\bigskip
A set of exclusive rights granted by a sovereign state to an inventor or assignee for a limited period of time in exchange for detailed public disclosure of an invention.
\bigskip
\item Trademark
\bigskip
A recognizable sign, design, or expression which identifies products or services of a particular source from those of others.
\bigskip
\else
\item Copyright
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item Patent
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\item Trademark
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\bigskip
\fi
\end{enumerate}
\item Write LaTeX code to duplicate the document below. You can assume the photo is ``images/pelican.jpg''. Your text should go on the next page.
(12 pts): \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{images/test1.png} \end{figure} \newpage \beginanswers \begin{lstlisting}[language=tex] \documentclass{article} \usepackage[pdftex]{graphicx} \begin{document} \section{First Section} We have some text with an an inline equation $x = \sum_0^{100} x^2$. It extends on until it wraps around. We have a new paragraph. Extend it to wrap as well. Below is a picture of a pelican. %scaling doesn't matter \includegraphics[scale=0.10]{images/pelican.jpg} And of course we need a table: \begin{tabular}{ccc} 1& middle & left \\ 2& middle & left \\ 3 & middle & left \\ \end{tabular} \end{document} \end{lstlisting} \else Write your LaTeX in the box below (we gave you a start with some declarations): \begin{lstlisting}[language=tex] \documentclass{article} \usepackage[pdftex]{graphicx} \end{lstlisting} \hspace*{-0.4in}\framebox(540,600){} \fi \newpage \item Consider the following scenario. You are working in a ``blessed'' repository configuration. On your laptop you have a clone of your fork with two remotes set up. The remote \textbf{origin} points to your forked repository and \textbf{upstream} points to the blessed repositiory. All three repositories are out of synch. You can assume the blessed repository \textit{(upstream)} has: \begin{itemize} \item a new file ``d.cxx'' \end{itemize} and your fork \textit{(origin)} has: \begin{itemize} \item a new file ``e.cxx''. \end{itemize} Your local repository on your laptop has \begin{itemize} \item a new file named ``a.cxx'' and \item two modified files ``b.cxx'' and ``c.cxx''. \end{itemize} Your goal is to get all of these changes in the master branch in your fork \textit{(origin)} so you can make a pull request to the blessed repository. Give the sequence of git commands required to reach the state where all changes are present in your fork and you are ready for the pull request. (15 pts) Write your git commands below \beginanswers \begin{lstlisting} git add a.cxx git add -u git commit -m "Committing the local changes" git pull origin master git pull upstream master git push origin master \end{lstlisting}\else \hspace*{-0.4in}\framebox(540,460){} \fi \newpage \item Consider the ``Makefile'' shown below. \begin{lstlisting}[language=make] m1: m1.o liba.a libb.so cc m1.o liba.a libb.so -o m1 -Wl,-rpath . m2: m2.o libb.so cc m2.o libb.so -o m2 -Wl,-rpath . m1.o: m1.c cc -c m1.c -o m1.o m2.o: m2.c cc -c m2.c -o m2.o libb.so: b.o cc -shared -o libb.so b.o liba.a: a.o ar qc liba.a a.o b.o: b.c cc -fPIC -c b.c -o b.o a.o: a.c cc -c a.c -o a.o m2.o: i.h a.o: i.h \end{lstlisting} \begin{enumerate} \item Draw the dependency graph that depicts the Makefile below. Use dotted lines for implicit dependencies. (10 pts) \beginanswers \begin{figure}[h] \centering \includegraphics[width=.9\linewidth]{images/dependency.png} \label{fig:dependency} \end{figure} \else \hspace*{-0.4in}\framebox(540,400){} \fi \newpage \item Makefiles should always have ``all'' and ``clean'' targets. Write the dependencies and commands for all and clean below (5pts): \beginanswers \begin{lstlisting}[language=make] all: m1 m2 clean: rm *.o *.so hi1 hi2 \end{lstlisting} \else \hspace*{-0.4in}\framebox(540,200){} \fi \item Now turn the Makefile into a CMakeLists.txt file. 
Add the commands to generate an install target to install ``m1'' and ``m2'' into ``bin'' and the shared library into ``lib'' (10 pts): \beginanswers \begin{lstlisting}[language=make] cmake_minimum_required(VERSION 3.0) project(Hello C) add_library(b SHARED b.c) add_library(a STATIC a.c i.h) add_executable(m1 m1.c) target_link_libraries(m1 a b) add_executable(m2 m2.c i.h) target_link_libraries(m2 b) install(TARGETS m1 DESTINATION bin) install(TARGETS m2 DESTINATION bin) install(TARGETS b DESTINATION lib) \end{lstlisting} \else \textbf{CMakeLists.txt}: \hspace*{-0.4in}\framebox(540,400){} \fi \end{enumerate} \end{enumerate} \end{document}
{ "alphanum_fraction": 0.7019466376, "avg_line_length": 27.5131086142, "ext": "tex", "hexsha": "ac565a7c1403f30226e97896bf08dbe6229a5eff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3d258da477a363b770d036e0c12c5b762e31a5ad", "max_forks_repo_licenses": [ "BSD-2-Clause", "CC-BY-4.0" ], "max_forks_repo_name": "rcos/CSCI-4961-01-Summer-2018", "max_forks_repo_path": "Resources/SampleQuizzes/Summer2019/Test1/test1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3d258da477a363b770d036e0c12c5b762e31a5ad", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause", "CC-BY-4.0" ], "max_issues_repo_name": "rcos/CSCI-4961-01-Summer-2018", "max_issues_repo_path": "Resources/SampleQuizzes/Summer2019/Test1/test1.tex", "max_line_length": 304, "max_stars_count": 1, "max_stars_repo_head_hexsha": "3d258da477a363b770d036e0c12c5b762e31a5ad", "max_stars_repo_licenses": [ "BSD-2-Clause", "CC-BY-4.0" ], "max_stars_repo_name": "rcos/CSCI-4961-01-Summer-2018", "max_stars_repo_path": "Resources/SampleQuizzes/Summer2019/Test1/test1.tex", "max_stars_repo_stars_event_max_datetime": "2018-10-18T02:36:34.000Z", "max_stars_repo_stars_event_min_datetime": "2018-10-18T02:36:34.000Z", "num_tokens": 4602, "size": 14692 }
\documentclass[landscape,twocolumn]{article} \usepackage{amsmath, amssymb, biblatex, graphicx, hyperref, pgfplots, pgfplotstable} \graphicspath{{../figures/}} \addbibresource{references.bib} \pgfplotsset{compat = 1.17} %\pgfplotstableread[col sep=comma]{../results/results.csv}\results \title{COMP5329 Assignment 2} \author{John Hu (zehu4485, 500395897) \and Nicholas Grasevski (ngra5777, 500710654)} \begin{document} \maketitle \begin{abstract} In this report we implement a convolutional neural network for a multi-label image classification problem. \end{abstract} \section{Introduction} \begin{itemize} \item What's the aim of the study? \item Why is the study important? \item The general introduction of your used method in the assignment and your motivation for such a solution. \end{itemize} \section{Related work} Existing related methods in the literature. \section{Techniques} \begin{itemize} \item The principle of your method used in this assignment. \item Justify the method. \item Any advantage or novelty of the proposed method. \end{itemize} \section{Experiments} \subsection{Performance} \subsubsection{Accuracy} \subsubsection{Efficiency} \subsection{Analysis} \subsubsection{Hyperparameters} \subsubsection{Ablation studies} \subsubsection{Comparison methods} \section{Results} \section{Discussion} \section{Conclusion} \printbibliography\appendix \section{Running the code} \begin{itemize} \item Dependencies: \texttt{pip install torch} \item Usage: \texttt{./assignment2.py} \end{itemize} The code assumes the data is stored in a directory called \texttt{input}. \end{document}
{ "alphanum_fraction": 0.7935323383, "avg_line_length": 29.2363636364, "ext": "tex", "hexsha": "7ad8d5d36f99d7308edf077a85f0da941eda2445", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a8ad45403889194b198affc652c1ec8dbf6c2573", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "grasevski/COMP5329-assignment2", "max_forks_repo_path": "report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a8ad45403889194b198affc652c1ec8dbf6c2573", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "grasevski/COMP5329-assignment2", "max_issues_repo_path": "report/report.tex", "max_line_length": 109, "max_stars_count": null, "max_stars_repo_head_hexsha": "a8ad45403889194b198affc652c1ec8dbf6c2573", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "grasevski/COMP5329-assignment2", "max_stars_repo_path": "report/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 433, "size": 1608 }
\input{../../utils/header.tex}
\begin{document}
\title{Machine Learning (41204-01)\\HW \#7}
\author{Will Clark and Matthew DeLio \\ \textsf{\{will.clark,mdelio\}@chicagobooth.edu} \\ University of Chicago Booth School of Business}
\date{\today}
\maketitle
\section{Data Summary}
The user that has rated the most video games is \textsf{U584295664}; he has rated 53 games. This user is an extreme outlier, as the median user rated only two games and only one other user rated more than 21 games. This behavior makes him approximately 22 standard deviations above the mean. We can see in \cref{fig:histo_users} just how extreme this outlier is (the maximum value is marked with a red line).
The game that has been rated most frequently is \textsf{I760611623}; it has been rated 200 times. This game is also an extreme outlier, as the median game has been rated three times and the next most-rated game was rated by only 102 users. The rating profile for this game puts it at approximately 15 standard deviations above the mean. We can see in \cref{fig:histo_games} how far from the center of the distribution this game is (the maximum value is again marked in red).
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\caption{by User}
\includegraphics[width=\textwidth]{histo_users.pdf}
\label{fig:histo_users}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\caption{by Game}
\includegraphics[width=\textwidth]{histo_games.pdf}
\label{fig:histo_games}
\end{subfigure}
\caption{Distribution of Rating Behavior}
\end{figure}
\section{User Similarity}
To find the user most similar to \textsf{U141954350}, we use RecommenderLab's similarity function. To compute cosine similarity, we can simply feed the raw ratings data into the similarity function with our user in question, use the argument \textsf{method="cosine"}, and get a list of similarity scores between them and the others.
To compute Jaccard similarity, however, we had to ``binarize'' the ratings. To do this, we first normalize the data, which attempts to reduce any user bias by row-centering the data (subtracting the user's mean score from each rating). With this normalized data, we then choose to use the mean (now 0) as the cutoff between a binary 0/1. This means that any score above the user's mean is a favorable score (\textsf{=1}), and any score below it is an unfavorable one (\textsf{=0}). This binary data is then fed into the similarity function with the \textsf{method="jaccard"} argument, which produces similarity scores between the user in question and the others.
%We attempted to use the \textsf{pearson} method, but ran into issues when calculating the similarity matrix (it returned only NAs).
The top 10 scores are listed in \cref{tab:top10}. These show some good agreement between the two methods, with the top 5 matching in exact order. Furthermore, of the top 10 similar users, 9 appear in both lists. The closest user to our user in question is \textsf{U887577623}.
\input{top10.tex}
\section{Recommendations}
To recommend a particular item to our user \textsf{U141954350} we utilize a ``popular'' recommender (see \vref{lst:code}). We employ two methods for determining which item to recommend.
\begin{enumerate}
\item \textbf{Top-N List}\\Uses the recommender to produce a top 10 list based on the most popular items.
\item \textbf{Top Predicted Ratings}\\Predicts all ratings for the user, which can then be sorted to produce a top-10 list.
\end{enumerate} Before beginning this analysis, we would have been surprised if these two lists were drastically different from one another --- in theory, they are supposed to produce the items that are ``best'' for the given user. We find, however, that they produce completely different lists with drastically different predicted ratings (see \cref{tab:topn,tab:highest}). Likely the algorithms work differently, one using user-similarity to predict a top list (Top-N List) and the other just producing top ratings with potentially low support (Top Predicted Ratings). If this is the case, one would expect the Top-N List to represent a more robust list. Therefore the item that we would recommend to the user is \textsf{I840620023}. \input{topn.tex} \input{highest.tex} \begin{appendices} \clearpage \section{Code Listings} \lstinputlisting[label=lst:code, caption=Code for Homework, language=R]{../hw6.R} \end{appendices} \end{document} % \input{.tex} % \begin{figure} % \centering % \begin{subfigure}[b]{0.49\textwidth} % \caption{} % \includegraphics[width=\textwidth]{.pdf} % \label{fig:} % \end{subfigure} % \hfill % \begin{subfigure}[b]{0.49\textwidth} % \caption{} % \includegraphics[width=\textwidth]{.pdf} % \label{fig:} % \end{subfigure} % \caption{} % \end{figure} % \begin{figure}[!htb] % \centering % \caption{} % \includegraphics[scale=.5]{.pdf} % \label{fig:} % \end{figure}
{ "alphanum_fraction": 0.7564154786, "avg_line_length": 52.7956989247, "ext": "tex", "hexsha": "d0e48bc4224e7f608a2dd30a499e3a41187456da", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-08-18T13:16:58.000Z", "max_forks_repo_forks_event_min_datetime": "2017-02-23T00:53:27.000Z", "max_forks_repo_head_hexsha": "f4f09d6d1efa022d9c34647883e49ae8e2f1fe6c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wclark3/machine-learning", "max_forks_repo_path": "hw6/writeup/hw6.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f4f09d6d1efa022d9c34647883e49ae8e2f1fe6c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wclark3/machine-learning", "max_issues_repo_path": "hw6/writeup/hw6.tex", "max_line_length": 723, "max_stars_count": null, "max_stars_repo_head_hexsha": "f4f09d6d1efa022d9c34647883e49ae8e2f1fe6c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wclark3/machine-learning", "max_stars_repo_path": "hw6/writeup/hw6.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1302, "size": 4910 }
\section{Methods}
This work can be split into four stages: data collection, data cleaning, topic modeling, and analysis.
\subsection{Data collection}
Large volumes of human language text must be acquired prior to beginning the analysis. This text falls into two categories: career data and curricular data. The Internet was used as the primary source for all data acquired.
\subsubsection{Career data}
Career data is any human language text describing the expected skills and qualifications of an ideal candidate. This kind of data is present in many formats. A contract signed between an employer and employee may contain this data in a description of day-to-day roles and responsibilities. Corporate publications (blog posts, articles) may also contain descriptions of employee behavior and roles. However, perhaps the most straightforward source of data for this corpus is online job postings.
Job postings generally follow a predictable format: logistics (role, seniority, location), a brief company profile, and then a list of responsibilities and expected prerequisite skills. Logistical information is an ample source of noise, but company descriptions, and even more so responsibilities and prerequisites, are exactly the kind of human language career data necessary for this analysis. Open data sources were identified on the Internet and aggregated into a corpus of over 92,000 job description documents~\cite{data.world:promptcloud}.
\subsubsection{Curricular data}
Curricular data is human language text that describes expected outcomes of postsecondary courses. This data set must contain the necessary data to identify academic core skills addressed at the postsecondary level. This is a much more subtle and difficult proposition to standardize than career data. A course curriculum contains exactly this data, but the formatting and availability of course curriculum documents vary widely and thus they do not lend themselves toward a broadly scoped automated data ingestion process.
Perhaps the most common source of curricular data is the course syllabus. Syllabi contain similar information in a much more concise format. Additionally, they tend to communicate expectations of students, course logistics, and the end goals of the course curriculum. Several projects to collect corpora of syllabi exist, perhaps the most notable being the Open Syllabus Project (OSP), but in general the raw data is not easily accessed.
Another ubiquitous curricular data modality is the course description. These documents are even more concise than syllabi, but are collected in ``catalogs'' for the express purpose of public consumption. This makes course descriptions an ideal target for data collection. Additionally, previous work has reported success building web scrapers to extract course data from publicly available websites~\cite{rouly2015}. The present work involved the creation of web scrapers to ingest a dataset of almost 20,000 course descriptions from various American universities. A detailed breakdown of sources is available online\footnote{\href{https://employability.rouly.net}{employability.rouly.net}}.
\subsection{Data cleaning}
Human language text in general contains a large amount of noise. Free form text retrieved from myriad Internet sources even more so. To improve the signal-to-noise ratio, data was streamed through a cleaning process prior to storage in a central database. Statistical methods were utilized to determine the language of each document, and non-English documents were filtered out.
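As a sketch of what this first cleaning step can look like in practice (the exact tooling is not prescribed here; the \texttt{langdetect} package and the field name below are illustrative assumptions), each incoming document can be screened for language before it is stored:
\begin{verbatim}
from langdetect import detect

def keep_english(documents):
    # Yield only the documents whose detected language is English.
    for doc in documents:
        try:
            if detect(doc["text"]) == "en":
                yield doc
        except Exception:
            # Skip documents too short or too noisy to classify.
            continue
\end{verbatim}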
The remaining documents were normalized via lowercasing and character sequence reduction, tokenized using a statistical model trained on English documents, and stemmed to word roots with the Porter stemmer. Known English ``stop words'' (glue words with little distinct semantic value) were filtered out of the documents. Excessively short documents and tokens were dropped as well. By normalizing and stemming the documents, we give the automated topic modeling methods a fighting chance to identify recurrent terms.

\subsection{Topic modeling}

Using the Spark library implementations, documents were vectorized from the raw ``bag of terms'' representation using a TF-IDF measure. These vectorized representations were fed into an LDA processor under a variety of constraint settings. Results are reported in the next section.

\subsection{Analysis}

After modeling the corpora of documents, a measure of overlap between the two was computed. The guiding notion in this analysis is that the proportional frequency with which topics are expressed in a corpus represents the prevalence, or importance, of that topic to the corpus. In other words, the more frequently a topic comes up, the more relevant it is to the corpus as a whole. Elasticsearch was used to perform an aggregation over all the modeled documents. Corpus topic frequency was converted into a percentage for analysis. Results are discussed in the following sections.

\subsection{Metric}

For a given topic $T_i$, the number $N(T_i, D_j, \rho)$ of documents in a dataset $D_j$ expressing topic $T_i$ with weight above a certain relevance threshold $\rho > 0$ is divided by the total size of the dataset $|D_j|$ to get the expression ratio $\epsilon(T_i, D_j, \rho)$, see Equation~\ref{eq:expr}. This is the ratio with which document set $D_j$ expresses topic $T_i$ in the context of the minimum relevance parameter $\rho$.
\begin{equation}
\epsilon(T_i, D_j, \rho) = \frac{N(T_i, D_j, \rho)}{|D_j|}
\label{eq:expr}
\end{equation}
A topic $T_i$ is said to be ``strictly overlapping'' ($T_i \in \Omega(\rho, \theta)$) if for all datasets $D_j$, $\epsilon(T_i, D_j, \rho) > \theta > 0$ holds true. The intersection parameter $\theta$ is a measure of topic ubiquity within a dataset and will limit which relevant topics (relevant under $\rho$) will be considered overlapping. Using the set of strictly overlapping topics $\Omega(\rho, \theta)$ as a substitute for intersection, we can compute the Jaccard index for the similarity between the set of topics expressed mainly by course descriptions and the set of topics expressed mainly by job descriptions (see Equation~\ref{eq:jaccard}). This formulation is useful for getting around a strict set membership definition of either group of topics on its own. Lowering the weight threshold $\theta$ will have the effect of increasing the Jaccard similarity index. Increasing the number of topics $k$ will also increase the index up to a certain optimal point, after which the index should begin to decrease in value again. Further experimentation is required to identify the optimal value of $k$.
\begin{equation}
J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|\Omega(\rho, \theta)|}{|T|}
\label{eq:jaccard}
\end{equation}
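To make the metric concrete, the following short Python sketch (illustrative only; the function names and the per-document data layout are our assumptions, not taken from the project code) computes the expression ratios of Equation~\ref{eq:expr} and the overlap of Equation~\ref{eq:jaccard} from per-document topic weights.

\begin{verbatim}
# Illustrative sketch of the metric; names and data layout are assumed.
# Each dataset is a list of dicts mapping topic id -> LDA weight per document.

def expression_ratio(topic, dataset, rho):
    """epsilon(T_i, D_j, rho): share of documents in `dataset` expressing
    `topic` with weight above the relevance threshold rho."""
    expressing = sum(1 for doc in dataset if doc.get(topic, 0.0) > rho)
    return expressing / len(dataset)

def jaccard_overlap(topics, datasets, rho, theta):
    """|Omega(rho, theta)| / |T|: share of topics whose expression ratio
    exceeds theta in every dataset."""
    overlapping = [t for t in topics
                   if all(expression_ratio(t, d, rho) > theta
                          for d in datasets)]
    return len(overlapping) / len(topics)

# Example call with hypothetical thresholds:
# jaccard_overlap(range(k), [course_docs, job_docs], rho=0.1, theta=0.05)
\end{verbatim}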
{ "alphanum_fraction": 0.8016395842, "avg_line_length": 75.0659340659, "ext": "tex", "hexsha": "8aeda50d67d49174f34dc050aa1f3b8bdaa133fb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a860caf10582ad152071c15d7bdc2cdd8042e10f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jrouly/employability", "max_forks_repo_path": "paper/sections/methods.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a860caf10582ad152071c15d7bdc2cdd8042e10f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jrouly/employability", "max_issues_repo_path": "paper/sections/methods.tex", "max_line_length": 309, "max_stars_count": 2, "max_stars_repo_head_hexsha": "a860caf10582ad152071c15d7bdc2cdd8042e10f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jrouly/employability", "max_stars_repo_path": "paper/sections/methods.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-23T17:31:31.000Z", "max_stars_repo_stars_event_min_datetime": "2018-12-08T19:10:03.000Z", "num_tokens": 1409, "size": 6831 }
\documentclass{article}
\usepackage[a4paper]{geometry}
\usepackage{microtype}
\usepackage{acronym}
\usepackage[standard]{ntheorem}
\usepackage{unicode-math}
\usepackage{haskell}
\usepackage{hyperref}
\usepackage{cleveref}
\author{Justin Bedő}
\title{Bioshake: a Haskell EDSL for bioinformatics pipelines}
\acrodef{edsl}[EDSL]{Embedded Domain Specific Language}
\begin{document}
\maketitle
\section{Introduction}
Bioinformatics pipelines are typically composed of a large number of programs and stages coupled together loosely through the use of intermediate files. These pipelines tend to be quite complex and require a significant amount of computational time; hence, a good pipeline must be able to manage intermediate files, guarantee re-entrancy -- the ability to re-enter a partially run pipeline and continue from the latest point -- and also provide methods to easily describe pipelines.
\paragraph{Strong type-checking}
We present a novel bioinformatics pipeline framework that is composed as an \ac{edsl} in Haskell. Our framework leverages the strong type-checking capabilities of Haskell to prevent many errors that can arise in the specification of a pipeline. Specifically, file formats are statically checked by the type system to prevent the specification of pipelines with incompatible intermediate file formats. Furthermore, tags are implemented through Haskell type-classes to allow metadata tagging, allowing various properties of files -- such as whether a bed file is sorted -- to be statically checked. Thus, a mis-specified pipeline will simply fail to compile, catching these bugs well before the lengthy execution.
\paragraph{Intrinsic and extrinsic building}
Our framework builds upon the shake \ac{edsl}, which is a make-like build tool. Similarly to make, dependencies in shake are specified in an \textit{extrinsic} manner; that is, a build rule defines its input dependencies based on the output file path. Our \ac{edsl} compiles down to shake rules, but allows the specification of pipelines in an \textit{intrinsic} fashion, whereby the processing chain is explicitly stated and hence no filename-based dependency graph needs to be specified. However, as bioshake compiles to shake, both extrinsic and intrinsic rules can be mixed, allowing a choice to be made to maximise pipeline specification clarity. Furthermore, the use of explicit sequencing for defining pipelines allows abstraction away from the filename level: intermediate files can be automatically named and managed by bioshake, removing the burden of naming the (many) intermediate files.
\begin{example}
The following is an example of a pipeline expressed in the bioshake \ac{edsl}:
\begin{haskell*}
align &↦& sort &↦& dedup &↦& call &↦& out [''output.vcf'']
\end{haskell*}
From this example it is clear what the \textit{stages} are, and the names of the files flowing between stages are implicit and managed by bioshake. The exception is the explicitly named output, which is the output of the whole pipeline.
\end{example}
\section{Core data types}
Bioshake is built around the following datatype:
\begin{haskell*}
\hskwd{data} a ↦ b \hswhere{(↦) :: a → b → a ↦ b}\\
\hskwd{infixl} 1 ↦
\end{haskell*}
This datatype represents the conjunction of two stages $a$ and $b$.
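To make the fixity declaration concrete: because $↦$ is left-associative with low precedence, the pipeline from the introductory example is simply parsed as a left-nested chain of constructors,
\begin{haskell*}
(((align ↦ sort) ↦ dedup) ↦ call) ↦ out [''output.vcf'']
\end{haskell*}
so the type of a pipeline records every stage applied so far. This is why the instances defined below on types of the form \<a ↦ b\> can, through the upstream value \<a\>, access the entire preceding pipeline when computing output paths.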
As we are compiling to shake rules, the $Buildable$ class represents a way to build things of type $a$ by producing shake actions:
\begin{haskell*}
\hskwd{class} Buildable a \hswhere{build :: a → Action ()}
\end{haskell*}
Finally, as we are abstracting away from filenames, we use a typeclass to represent types that can be mapped to filenames:
\begin{haskell*}
\hskwd{class} Pathable a \hswhere{paths :: a → [FilePath]}
\end{haskell*}
\section{Defining stages}
A stage -- for example $align$ing and $sort$ing -- is a type in this representation. Such a type is an instance of $Pathable$, as the outputs from the stage are files, and also of $Buildable$, as the stage is associated with some shake actions required to build the outputs. We give a simple example of declaring a stage that sorts BAM files.
\begin{example}
\label{ex:sort}
Consider the stage of sorting a BAM file using samtools. We first define a datatype to represent the sorting stage and to carry all configuration options needed to perform the sort:
\begin{haskell*}
data Sort = Sort
\end{haskell*}
This datatype must be an instance of $Pathable$ to define the filenames output from the stage. Naming can take place according to a number of schemes, but here we will opt to use hashes to name output files. This ensures the filename is unique and relatively short.
\begin{haskell*}
\hskwd{instance} Pathable a ⇒ Pathable (a ↦ Sort) \hswhere{ paths (a ↦ \_) = \hslet{inputs = paths a}{[hash inputs +\!\!\!+ ".sort.bam"]} }
\end{haskell*}
In the above, \<hash :: Binary a ⇒ a → String\> could be a cryptographic hash function such as sha1 with base32 encoding. Many choices are appropriate here. Finally, we describe how to sort files by making $Sort$ an instance of $Buildable$:
\begin{haskell*}
\hskwd{instance} (Pathable a, IsBam a, Pathable (a ↦ Sort)) ⇒ Buildable (a ↦ Sort) \hswhere{ build p@(a ↦ \_) = \hslet{[input] = paths a\\[out] = paths p}{ cmd "samtools sort" [input] ["-o", out] } }
\end{haskell*}
Note here that $IsBam$ is a precondition for the instance: the sort stage is only applicable to BAM files. Likewise, the output of the sort is also a BAM file, so we must declare that too:
\begin{haskell*}
\hskwd{instance} IsBam (a ↦ Sort)
\end{haskell*}
The tag $IsBam$ itself can be declared as the empty typeclass \<class IsBam a\>. See \cref{sec:tags} for a discussion of tags and their utility.
\end{example}
\section{Compiling to shake rules}
The pipelines as specified by the core data types are compiled to shake rules, with shake executing the build process. The distinction between $Buildable$ and $Compilable$ types is that the former generate shake $Action$s and the latter shake $Rules$. The $Compiler$ therefore extends the $Rules$ monad, augmenting it with some additional state:
\begin{haskell*}
\hskwd{type} Compiler &=& StateT (S.Set [FilePath]) Rules\\
\end{haskell*}
The state here captures the rules we have already compiled. As the same stages may be applied in several concurrent pipelines (i.e., the same preprocessing may feed different subsequent variant callers), the set of rules already compiled must be maintained. When compiling a rule, the state is checked to ensure the rule is new, and the rule is skipped otherwise.
The rule compiler evaluates the state transformer, initialising the state to the empty set:
\begin{haskell*}
compileRules &::& Compiler () → Rules ()\\
compileRules p &=& evalStateT p mempty
\end{haskell*}
The $Compilable$ typeclass abstracts over types that can be compiled:
\begin{haskell*}
\hskwd{class} Compilable a \hswhere{compile :: a → Compiler ()}
\end{haskell*}
Finally, \<a ↦ b\> is $Compilable$ if the input and output paths are defined, the preceding pipeline \<a\> is $Compilable$, and \<a ↦ b\> is $Buildable$. Compilation in this case defines a rule to build the output paths, with dependencies established on the input paths, using the $build$ function. These rules are only compiled if they do not already exist:
\begin{haskell*}
instance (Pathable a, Pathable (a ↦ b), Compilable a, Buildable (a ↦ b)) ⇒ Compilable (a ↦ b) \hswhere{ compile pipe@(a ↦ b) = do \hsbody{ let outs = paths pipe\\ set ← get\\ when (outs `S.notMember` set) \$ do \hsbody{ lift \$ outs \textrm{\&\%>} \_ → do \hsbody{ need (paths a)\\ build pipe} put (outs `S.insert` set)} compile a}}
\end{haskell*}
\section{Tags}
\label{sec:tags}
Bioshake uses a number of tags to ensure type errors will be raised if stages are not connectable. Numerous tags are useful for capturing metadata that is not otherwise easily expressed. We have already seen in \cref{ex:sort} the use of \textit{IsBam} to ensure the input file format of \textit{Sort} is compatible. By convention, Bioshake uses the file extension prefixed by \textit{Is} as the tag for a filetype, e.g., \textit{IsBam}, \textit{IsSam}, \textit{IsBcf}, \textit{IsBed}, \textit{IsCSV}, \textit{IsFastQ}, \textit{IsGff}, \textit{IsMPileup}, \textit{IsTSV}, \textit{IsVCF}. Other types of metadata, such as whether a file is sorted (\textit{Sorted}) or whether duplicate reads have been removed (\textit{DeDuped}), are also captured as tags. These tags allow input requirements of sorting or deduplication to be expressed when defining stages. These properties can also automatically propagate down the pipeline; for example, once a file is \textit{DeDuped} all subsequent outputs carry the \textit{DeDuped} tag:
\begin{haskell*}
\hskwd{instance} DeDuped a ⇒ DeDuped (a ↦ b)
\end{haskell*}
Finally, the tags discussed so far have been empty type classes; however, tags can easily carry more information. For example, bioshake uses a \textit{Referenced} tag to represent the association of a reference genome. This tag is defined as
\begin{haskell*}
\hskwd{class} Referenced a \hswhere{ getRef :: a → FilePath }\\\\
\hskwd{instance} Referenced a ⇒ Referenced (a ↦ b)
\end{haskell*}
This tag allows stages to extract the path to the reference genome, and it automatically propagates down the pipeline.
\section{Abstracting execution platform}
In reality, it can be quite desirable to abstract the execution platform from the definition of the build. In \cref{ex:sort}, the shake function $cmd$ is directly used to execute samtools and perform the build. However, it is useful to abstract away from $cmd$ directly to allow the command to be executed instead on (say) a cluster, cloud service, or remote machine. Bioshake achieves this flexibility by using free monad transformers to provide a function $run$ -- the equivalent of $cmd$ -- but where the actual execution may take place via submitting a script to a cluster queue, for example. In order to achieve this, the datatype for stages in bioshake is augmented by a free parameter to carry implementation-specific default configuration -- e.g., cluster job submission resources.
In the running example of sorting a BAM file, the new datatype is \< \hskwd{data} Sort c = Sort c\>.
\section{Reducing boilerplate}
Much of the code necessary for defining a new stage can be written automatically through the use of Template Haskell. This allows very succinct definitions of stages, increasing the clarity of the code and reducing boilerplate. Bioshake has Template Haskell functions for generating instances of \textit{Pathable} and \textit{Buildable}, and for managing the tags.
\begin{example}
\Cref{ex:sort} can be simplified considerably by using Template Haskell. First we have the augmented type definition:
\begin{haskell*}
\hskwd{data} Sort c = Sort c
\end{haskell*}
The instances for \textit{Pathable} and the various tags can be generated with the Template Haskell splice
\begin{haskell*}
\$(makeTypes ''\!Sort [''\!IsBam, ''\!Sorted] [])
\end{haskell*}
This splice generates a \textit{Pathable} instance using the hashed path names, and also declares the output to be an instance of \textit{IsBam} and \textit{Sorted}. The first tag in the list of output tags is used to determine the file extension. The second, empty list allows the definition of \textit{transient} tags, that is, tags that, if present on the input paths, will also hold for the output files after the stage. Finally, given a generic definition of the build
\begin{haskell*}
buildSort t \_ (paths → [input]) [out] = \hsbody{run "samtools sort" [input] ["-@", show t] ["-o", out]}
\end{haskell*}
the \textit{Buildable} instances can be generated with the splice
\begin{haskell*}
\$(makeThreaded ''\!Sort [''\!IsBam] '\!buildSort)
\end{haskell*}
This splice takes the type, a list of required tags for the input, and the build function. Here, the build function is passed the number of threads to use, the \textit{Sort} object, the input object, and a list of output paths.
\end{example}
\section{Discussion}
\end{document}
% Local Variables:
% TeX-engine: luatex
% End:
{ "alphanum_fraction": 0.7497963838, "avg_line_length": 44.0071684588, "ext": "tex", "hexsha": "3742ea72f3f34d392ff3c324605b3b5263a68fb3", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2019-08-13T16:02:57.000Z", "max_forks_repo_forks_event_min_datetime": "2017-05-11T06:22:42.000Z", "max_forks_repo_head_hexsha": "f46d41b98c1bd700f5b69472fcd1f4f524e9af45", "max_forks_repo_licenses": [ "ISC" ], "max_forks_repo_name": "averagehat/bioshake-1", "max_forks_repo_path": "doc/paper.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "f46d41b98c1bd700f5b69472fcd1f4f524e9af45", "max_issues_repo_issues_event_max_datetime": "2019-08-17T20:31:37.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-18T11:34:45.000Z", "max_issues_repo_licenses": [ "ISC" ], "max_issues_repo_name": "averagehat/bioshake-1", "max_issues_repo_path": "doc/paper.tex", "max_line_length": 108, "max_stars_count": 49, "max_stars_repo_head_hexsha": "f46d41b98c1bd700f5b69472fcd1f4f524e9af45", "max_stars_repo_licenses": [ "ISC" ], "max_stars_repo_name": "averagehat/bioshake-1", "max_stars_repo_path": "doc/paper.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-18T02:30:05.000Z", "max_stars_repo_stars_event_min_datetime": "2017-02-28T07:32:55.000Z", "num_tokens": 3289, "size": 12278 }
\chapter{Meromorphic functions} \section{The second nicest functions on earth} If holomorphic functions are like polynomials, then \emph{meromorphic} functions are like rational functions. Basically, a meromorphic function is a function of the form $ \frac{A(z)}{B(z)} $ where $A , B: U \to \CC$ are holomorphic and $B$ is not zero. The most important example of a meromorphic function is $\frac 1z$. We are going to see that meromorphic functions behave like ``almost-holomorphic'' functions. Specifically, a meromorphic function $A/B$ will be holomorphic at all points except the zeros of $B$ (called \emph{poles}). By the identity theorem, there cannot be too many zeros of $B$! So meromorphic functions can be thought of as ``almost holomorphic'' (like $\frac 1z$, which is holomorphic everywhere but the origin). We saw that \[ \frac{1}{2\pi i} \oint_{\gamma} \frac 1z \; dz = 1 \] for $\gamma(t) = e^{it}$ the unit circle. We will extend our results on contours to such situations. It turns out that, instead of just getting $\oint_\gamma f(z) \; dz = 0$ like we did in the holomorphic case, the contour integrals will actually be used to \emph{count the number of poles} inside the loop $\gamma$. It's ridiculous, I know. \section{Meromorphic functions} \prototype{$\frac 1z$, with a pole of order $1$ and residue $1$ at $z=0$.} Let $U$ be an open subset of $\CC$ again. \begin{definition} A function $f : U \to \CC$ is \vocab{meromorphic} if there exists holomorphic functions $A, B \colon U \to \CC$ with $B$ not identically zero in any open neighborhood, and $f(z) = A(z)/B(z)$ whenever $B(z) \ne 0$. \end{definition} Let's see how this function $f$ behaves. If $z \in U$ has $B(z) \neq 0$, then in some small open neighborhood the function $B$ isn't zero at all, and thus $A/B$ is in fact \emph{holomorphic}; thus $f$ is holomorphic at $z$. (Concrete example: $\frac 1z$ is holomorphic in any disk not containing $0$.) On the other hand, suppose $p \in U$ has $B(p) = 0$: without loss of generality, $p=0$ to ease notation. By using the Taylor series at $p=0$ we can put \[ B(z) = c_k z^k + c_{k+1} z^{k+1} + \dots \] with $c_k \neq 0$ (certainly some coefficient is nonzero since $B$ is not identically zero!). Then we can write \[ \frac{1}{B(z)} = \frac{1}{z^k} \cdot \frac{1}{c_k + c_{k+1}z + \dots}. \] But the fraction on the right is a holomorphic function in this open neighborhood! So all that's happened is that we have an extra $z^{-k}$ kicking around. %We want to consider functions $f$ defined on all points in $U$ %except for a set of ``isolated'' singularities; %for example, something like \[ \frac{1}{z(z+1)(z^2+1)} \] which is defined %for all $z$ other than $z=0$, $z=-1$ and $z=i$. %Or even \[ \frac{1}{\sin(2\pi z)}, \] which is defined at every $z$ which is \emph{not} an integer. %% Even though there's infinitely many points, they are not really that close together. This gives us an equivalent way of viewing meromorphic functions: \begin{definition} Let $f : U \to \CC$ as usual. A \vocab{meromorphic} function is a function which is holomorphic on $U$ except at an isolated set $S$ of points (meaning it is holomorphic as a function $U \setminus S \to \CC$). For each $p \in S$, called a \vocab{pole} of $f$, the function $f$ must admit a \vocab{Laurent series}, meaning that \[ f(z) = \frac{c_{-m}}{(z-p)^m} + \frac{c_{-m+1}}{(z-p)^{m-1}} + \dots + \frac{c_{-1}}{z-p} + c_0 + c_1 (z-p) + \dots \] for all $z$ in some open neighborhood of $p$, other than $z = p$. Here $m$ is a positive integer (and $c_{-m} \neq 0$). 
\end{definition} Note that the trailing end \emph{must} terminate. By ``isolated set'', I mean that we can draw open neighborhoods around each pole in $S$, in such a way that no two open neighborhoods intersect. \begin{example} [Example of a meromorphic function] Consider the function \[ \frac{z+1}{\sin z}. \] It is meromorphic, because it is holomorphic everywhere except at the zeros of $\sin z$. At each of these points we can put a Laurent series: for example at $z=0$ we have \begin{align*} \frac{z+1}{\sin z} &= (z+1) \cdot \frac{1}{z - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots} \\ &= \frac 1z \cdot \frac{z+1}{1 - \left(% \frac{z^2}{3!} - \frac{z^4}{5!} + \frac{z^6}{7!} - \dots \right)} \\ &= \frac 1z \cdot (z+1) \sum_{k \ge 0} \left( % \frac{z^2}{3!}-\frac{z^4}{5!}+\frac{z^6}{7!}-\dots \right)^k. \end{align*} If we expand out the horrible sum (which I won't do), then you get $\frac 1z$ times a perfectly fine Taylor series, i.e.\ a Laurent series. \end{example} \begin{abuse} We'll often say something like ``consider the function $f : \CC \to \CC$ by $z \mapsto \frac 1z$''. Of course this isn't completely correct, because $f$ doesn't have a value at $z=0$. If I was going to be completely rigorous I would just set $f(0) = 2015$ or something and move on with life, but for all intents let's just think of it as ``undefined at $z=0$''. Why don't I just write $g : \CC \setminus \{0\} \to \CC$? The reason I have to do this is that it's still important for $f$ to remember it's ``trying'' to be holomorphic on $\CC$, even if isn't assigned a value at $z=0$. As a function $\CC \setminus \{0\} \to \CC$ the function $\frac 1z$ is actually holomorphic. \end{abuse} \begin{remark} I have shown that any function $A(z)/B(z)$ has this characterization with poles, but an important result is that the converse is true too: if $f : U \setminus S \to \CC$ is holomorphic for some isolated set $S$, and moreover $f$ admits a Laurent series at each point in $S$, then $f$ can be written as a rational quotient of holomorphic functions. I won't prove this here, but it is good to be aware of. \end{remark} \begin{definition} Let $p$ be a pole of a meromorphic function $f$, with Laurent series \[ f(z) = \frac{c_{-m}}{(z-p)^m} + \frac{c_{-m+1}}{(z-p)^{m-1}} + \dots + \frac{c_{-1}}{z-p} + c_0 + c_1 (z-p) + \dots. \] The integer $m$ is called the \vocab{order} of the pole. A pole of order $1$ is called a \vocab{simple pole}. We also give the coefficient $c_{-1}$ a name, the \vocab{residue} of $f$ at $p$, which we write $\Res(f;p)$. \end{definition} The order of a pole tells you how ``bad'' the pole is. The order of a pole is the ``opposite'' concept of the \vocab{multiplicity} of a \vocab{zero}. If $f$ has a pole at zero, then its Taylor series near $z=0$ might look something like \[ f(z) = \frac{1}{z^5} + \frac{8}{z^3} - \frac{2}{z^2} + \frac{4}{z} + 9 - 3z + 8z^2 + \dots \] and so $f$ has a pole of order five. By analogy, if $g$ has a zero at $z=0$, it might look something like \[ g(z) = 3z^3 + 2z^4 + 9z^5 + \dots \] and so $g$ has a zero of multiplicity three. These orders are additive: $f(z) g(z)$ still has a pole of order $5-3=2$, but $f(z)g(z)^2$ is completely patched now, and in fact has a \vocab{simple zero} now (that is, a zero of degree $1$). \begin{exercise} Convince yourself that orders are additive as described above. (This is obvious once you understand that you are multiplying Taylor/Laurent series.) \end{exercise} Metaphorically, poles can be thought of as ``negative zeros''. We can now give many more examples. 
\begin{example}[Examples of meromorphic functions]
\listhack
\begin{enumerate}[(a)]
\ii Any holomorphic function is a meromorphic function which happens to have no poles. Stupid, yes.
\ii The function $\CC \to \CC$ by $z \mapsto 100z\inv$ for $z \neq 0$ but undefined at zero is a meromorphic function. Its only pole is at zero, which has order $1$ and residue $100$.
\ii The function $\CC \to \CC$ by $z \mapsto z^{-3} + z^2 + z^9$ is also a meromorphic function. Its only pole is at zero, which has order $3$ and residue $0$.
\ii The function $\CC \to \CC$ by $z \mapsto \frac{e^z}{z^2}$ is meromorphic, with the Laurent series at $z=0$ given by
\[ \frac{e^z}{z^2} = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2} + \frac{z}{6} + \frac{z^2}{24} + \frac{z^3}{120} + \dots. \]
Hence the pole $z=0$ has order $2$ and residue $1$.
\end{enumerate}
\end{example}
\begin{example}[A rational meromorphic function]
Consider the function $\CC \to \CC$ given by
\begin{align*}
z &\mapsto \frac{z^4+1}{z^2-1} = z^2 + 1 + \frac{2}{(z-1)(z+1)} \\
&= z^2 + 1 + \frac1{z-1} \cdot \frac{1}{1+\frac{z-1}{2}} \\
&= \frac{1}{z-1} + \frac32 + \frac94(z-1) + \frac{7}{8}(z-1)^2 - \dots
\end{align*}
It has a pole of order $1$ and residue $1$ at $z=1$.
(It also has a pole of order $1$ at $z=-1$; you are invited to compute the residue.)
\end{example}
\begin{example}[Function with infinitely many poles]
The function $\CC \to \CC$ by
\[ z \mapsto \frac{1}{\sin(z)} \]
has infinitely many poles: the numbers $z = \pi k$, where $k$ is an integer.
Let's compute the Laurent series at just $z=0$:
\begin{align*}
\frac{1}{\sin(z)} &= \frac{1}{\frac{z}{1!} - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots} \\
&= \frac 1z \cdot \frac{1}{1 - \left( \frac{z^2}{3!} - \frac{z^4}{5!} + \dots \right)} \\
&= \frac 1z \sum_{k \ge 0} \left( \frac{z^2}{3!} - \frac{z^4}{5!} + \dots \right)^k.
\end{align*}
which is a Laurent series, though I have no clue what the coefficients are.
You can at least see the residue; the constant term of that huge sum is $1$, so the residue is $1$. Also, the pole has order $1$.
\end{example}
The Laurent series, if it exists, is unique (as you might have guessed), and by our result on holomorphic functions it is actually valid for \emph{any} disk centered at $p$ (minus the point $p$).
The part $\frac{c_{-1}}{z-p} + \dots + \frac{c_{-m}}{(z-p)^m}$ is called the \vocab{principal part}, and the rest of the series $c_0 + c_1(z-p) + \dots$ is called the \vocab{analytic part}.
\section{Winding numbers and the residue theorem}
Recall that for a counterclockwise circle $\gamma$ and a point $p$ inside it, we had
\[ \oint_{\gamma} (z-p)^m \; dz = \begin{cases} 0 & m \neq -1 \\ 2\pi i & m = -1 \end{cases} \]
where $m$ is an integer. One can in fact extend this result to show that $\oint_\gamma (z-p)^m \; dz = 0$ for \emph{any} loop $\gamma$, where $m \neq -1$. So we give a special name to the nonzero value at $m=-1$.
\begin{definition}
For a point $p \in \CC$ and a loop $\gamma$ not passing through it, we define the \vocab{winding number}, denoted $\Wind(\gamma, p)$, by
\[ \Wind(\gamma, p) = \frac{1}{2\pi i} \oint_{\gamma} \frac{1}{z-p} \; dz \]
\end{definition}
For example, by our previous results we see that if $\gamma$ is a circle, we have
\[ \Wind(\text{circle}, p) = \begin{cases} 1 & \text{$p$ inside the circle} \\ 0 & \text{$p$ outside the circle}.
\end{cases} \]
If you've read the chapter on fundamental groups, then this is just the class of $\gamma$ in the fundamental group associated to $\CC \setminus \{p\}$. In particular, the winding number is always an integer (the proof of this requires the complex logarithm, so we omit it here). In the simplest case the winding numbers are either $0$ or $1$.
\begin{definition}
We say a loop $\gamma$ is \vocab{regular} if $\Wind(\gamma, p) = 1$ for all points $p$ in the interior of $\gamma$ (for example, if $\gamma$ is a counterclockwise circle).
\end{definition}
With all these ingredients we get a stunning generalization of the Cauchy-Goursat theorem:
\begin{theorem}[Cauchy's residue theorem]
Let $f : \Omega \to \CC$ be meromorphic, where $\Omega$ is simply connected. Then for any loop $\gamma$ not passing through any of its poles, we have
\[ \frac{1}{2\pi i} \oint_{\gamma} f(z) \; dz = \sum_{\text{pole $p$}} \Wind(\gamma, p) \Res(f; p). \]
In particular, if $\gamma$ is regular then the contour integral is the sum of all the residues, in the form
\[ \frac{1}{2\pi i} \oint_{\gamma} f(z) \; dz = \sum_{\substack{\text{pole $p$} \\ \text{inside $\gamma$}}} \Res(f; p). \]
\end{theorem}
\begin{ques}
Verify that this result coincides with what you expect when you integrate $\oint_\gamma cz\inv \; dz$ for $\gamma$ a counter-clockwise circle.
\end{ques}
The proof from here is not really too impressive -- the ``work'' was already done in our statements about the winding number.
\begin{proof}
Let the poles with nonzero winding number be $p_1, \dots, p_k$ (the others do not affect the sum).\footnote{ To show that there must be finitely many such poles: recall that all our contours $\gamma : [a,b] \to \CC$ are in fact bounded, so there is some big closed disk $D$ which contains all of $\gamma$. The poles outside $D$ thus have winding number zero. Now we cannot have infinitely many poles inside the disk $D$, for $D$ is compact and the set of poles is a closed and isolated set!}
Then we can write $f$ in the form
\[ f(z) = g(z) + \sum_{i=1}^k P_i\left( \frac{1}{z-p_i} \right) \]
where $P_i\left( \frac{1}{z-p_i} \right)$ is the principal part of the pole $p_i$.
(For example, if $f(z) = \frac{z^3-z+1}{z(z+1)}$ we would write $f(z) = (z-1) + \frac1z - \frac1{1+z}$.)
The point of doing so is that the function $g$ is holomorphic (we've removed all the ``bad'' parts), so
\[ \oint_{\gamma} g(z) \; dz = 0 \]
by Cauchy-Goursat. On the other hand, if $P_i(x) = c_1x + c_2x^2 + \dots + c_d x^d$ then
\begin{align*}
\oint_{\gamma} P_i\left( \frac{1}{z-p_i} \right) \; dz
&= \oint_{\gamma} c_1 \cdot \left( \frac{1}{z-p_i} \right) \; dz + \oint_{\gamma} c_2 \cdot \left( \frac{1}{z-p_i} \right)^2 \; dz + \dots \\
&= 2\pi i \cdot c_1 \cdot \Wind(\gamma, p_i) + 0 + 0 + \dots \\
&= 2\pi i \cdot \Wind(\gamma, p_i) \Res(f; p_i).
\end{align*}
Summing over all the poles and dividing by $2\pi i$ gives the conclusion.
\end{proof}
\section{Argument principle}
One tricky application is as follows. Given a polynomial $P(x) = (x-a_1)^{e_1}(x-a_2)^{e_2}\dots(x-a_n)^{e_n}$, you might know that we have
\[ \frac{P'(x)}{P(x)} = \frac{e_1}{x-a_1} + \frac{e_2}{x-a_2} + \dots + \frac{e_n}{x-a_n}. \]
The quantity $P'/P$ is called the \vocab{logarithmic derivative}, as it is the derivative of $\log P$. This trick allows us to convert zeros of $P$ into poles of $P'/P$ with order $1$; moreover, the residues of these poles are the multiplicities of the roots. In an analogous fashion, we can obtain a similar result for any meromorphic function $f$.
\begin{proposition}[The logarithmic derivative]
Let $f : U \to \CC$ be a meromorphic function.
Then the logarithmic derivative $f'/f$ is meromorphic as a function from $U$ to $\CC$; its zeros and poles are: \begin{enumerate}[(i)] \ii A pole at each zero of $f$ whose residue is the multiplicity, and \ii A pole at each pole of $f$ whose residue is the negative of the pole's order. \end{enumerate} \end{proposition} Again, you can almost think of a pole as a zero of negative multiplicity. This spirit is exemplified below. \begin{proof} Dead easy with Taylor series. Let $a$ be a zero/pole of $f$, and WLOG set $a=0$ for convenience. We take the Taylor series at zero to get \[ f(z) = c_k z^k + c_{k+1} z^{k+1} + \dots \] % chktex 25 where $k < 0$ if $0$ is a pole and $k > 0$ if $0$ is a zero. Taking the derivative gives \[ f'(z) = kc_k z^{k-1} + (k+1)c_{k+1}z^{k} + \dots. \] Now look at $f'/f$; with some computation, it equals \[ \frac{f'(z)}{f(z)} = \frac 1z \frac{kc_k + (k+1)c_{k+1}z + \dots}{c_k + c_{k+1}z + \dots}. \] So we get a simple pole at $z=0$, with residue $k$. \end{proof} Using this trick you can determine the number of zeros and poles inside a regular closed curve, using the so-called Argument Principle. \begin{theorem} [Argument principle] \label{thm:arg_principle} Let $\gamma$ be a regular curve. Suppose $f : U \to \CC$ is meromorphic inside and on $\gamma$, and none of its zeros or poles lie on $\gamma$. Then \[ \frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz = Z - P \] where $Z$ is the number of zeros inside $\gamma$ (counted with multiplicity) and $P$ is the number of poles inside $\gamma$ (again with multiplicity). \end{theorem} \begin{proof} Immediate by applying Cauchy's residue theorem alongside the preceding proposition. In fact you can generalize to any curve $\gamma$ via the winding number: the integral is \[ \frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz = \sum_{\text{zero $z$}} \Wind(\gamma,z) - \sum_{\text{pole $p$}} \Wind(\gamma,p) \] where the sums are with multiplicity. \end{proof} Thus the Argument Principle allows one to count zeros and poles inside any region of choice. Computers can use this to get information on functions whose values can be computed but whose behavior as a whole is hard to understand. Suppose you have a holomorphic function $f$, and you want to understand where its zeros are. Then just start picking various circles $\gamma$. Even with machine rounding error, the integral will be close enough to the true integer value that we can decide how many zeros are in any given circle. Numerical evidence for the Riemann Hypothesis (concerning the zeros of the Riemann zeta function) can be obtained in this way. \section{Philosophy: why are holomorphic functions so nice?} All the fun we've had with holomorphic and meromorphic functions comes down to the fact that complex differentiability is such a strong requirement. It's a small miracle that $\CC$, which \emph{a priori} looks only like $\RR^2$, is in fact a field. Moreover, $\RR^2$ has the nice property that one can draw nontrivial loops (it's also true for real functions that $\int_a^a f \; dx = 0$, but this is not so interesting!), and this makes the theory much more interesting. As another piece of intuition from Siu\footnote{Harvard professor.}: If you try to get (left) differentiable functions over \emph{quaternions}, you find yourself with just linear functions. \section\problemhead % Looman-Menchoff theorem? \begin{problem} [Fundamental theorem of algebra] Prove that if $f$ is a nonzero polynomial of degree $n$ then it has $n$ roots. 
\end{problem} \begin{dproblem} [Rouch\'e's theorem] Let $f, g \colon U \to \CC$ be holomorphic functions, where $U$ contains the unit disk. Suppose that $\left\lvert f(z) \right\rvert > \left\lvert g(z) \right\rvert$ for all $z$ on the unit circle. Prove that $f$ and $f+g$ have the same number of zeros which lie strictly inside the unit circle (counting multiplicities). \end{dproblem} \begin{problem} [Wedge contour] \gim For each odd integer $n$, evaluate the improper integral \[ \int_0^\infty \frac{1}{1+x^{n}} \; dx. \] \begin{hint} This is called a ``wedge contour''. Try to integrate over a wedge shape consisting of a sector of a circle of radius $r$, with central angle $\frac{2\pi}{n}$. Take the limit as $r \to \infty$ then. \end{hint} \begin{sol} See \url{https://math.stackexchange.com/q/242514/229197}, which does it with $2019$ replaced by $3$. \end{sol} \end{problem} \begin{problem} [Another contour] \yod Prove that the integral \[ \int_{-\infty}^{\infty} \frac{\cos x}{x^2+1} \; dx \] converges and determine its value. \begin{hint} It's $\lim_{a \to \infty} \int_{-a}^{a} \frac{\cos x}{x^2+1} \; dx$. For each $a$, construct a semicircle. \end{hint} % semicircle integral % \quad \text{ and } \quad % \int_{-\infty}^{\infty} \frac{\sin x}{x^2+1} \; dx \end{problem} \begin{sproblem} \gim Let $f \colon U \to \CC$ be a nonconstant holomorphic function. \begin{enumerate}[(a)] \ii (Open mapping theorem) Prove that $f\im(U)$ is open in $\CC$.\footnote{Thus the image of \emph{any} open set $V \subseteq U$ is open in $\CC$ (by repeating the proof for $f \restrict{V}$).} \ii (Maximum modulus principle) Show that $\left\lvert f \right\rvert$ cannot have a maximum over $U$. That is, show that for any $z \in U$, there is some $z' \in U$ such that $\left\lvert f(z) \right\rvert < \left\lvert f(z') \right\rvert$. \end{enumerate} % http://en.wikipedia.org/wiki/Open_mapping_theorem_(complex_analysis) \end{sproblem}
{ "alphanum_fraction": 0.6719266607, "avg_line_length": 40.7387755102, "ext": "tex", "hexsha": "874e188c81c469a8ae768840b05b9ddfd6563690", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": [ "Apache-2.0", "MIT" ], "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_path": "corpus/napkin/tex/complex-ana/meromorphic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0", "MIT" ], "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_path": "corpus/napkin/tex/complex-ana/meromorphic.tex", "max_line_length": 113, "max_stars_count": null, "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": [ "Apache-2.0", "MIT" ], "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_path": "corpus/napkin/tex/complex-ana/meromorphic.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6796, "size": 19962 }
\section{text}
\subsection{Encoding, Decoding}
\textit{ocaml-text} is a library for manipulating Unicode text. It can replace the \textit{String} module of the standard library when you need to access strings as sequences of \textit{UTF-8} encoded Unicode characters. It uses \textit{libiconv} to transcode between various character encodings; this functionality lives in the \textit{Encoding} module.

Decoding means extracting a Unicode character from a sequence of bytes. To decode text, you need to create a decoder. The system encoding, \textit{Encoding.system}, is determined by environment variables. If you print non-ASCII text on a terminal, it is a good idea to encode it in the \textit{system encoding}.

\subsection{text manipulation}
Note that in both cases the regular expression is compiled only once, and in the second example it is compiled the first time it is used (by lazy evaluation). But the more interesting use of this quotation is in pattern matching: it is possible to put a regular expression in an arbitrary pattern and capture variables. Here is a simple example of what you can do:
\inputminted[fontsize=\scriptsize]{ocaml}{code/ocaml-text/patt.ml}
\captionof{listing}{ocaml-text regexp support}
{ "alphanum_fraction": 0.7909018356, "avg_line_length": 40.4193548387, "ext": "tex", "hexsha": "3415a8afc6f9ce209936b1b1d8831c30e8e0a199", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-06-21T06:57:32.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-10T18:12:15.000Z", "max_forks_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mgttlinger/ocaml-book", "max_forks_repo_path": "library/text.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_issues_repo_issues_event_max_datetime": "2018-12-03T04:15:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-09T13:53:43.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mgttlinger/ocaml-book", "max_issues_repo_path": "library/text.tex", "max_line_length": 77, "max_stars_count": 142, "max_stars_repo_head_hexsha": "09a575b0d1fedfce565ecb9a0ae9cf0df37fdc75", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mgttlinger/ocaml-book", "max_stars_repo_path": "library/text.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-15T00:47:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-12T16:45:40.000Z", "num_tokens": 301, "size": 1253 }
% !TEX TS-program = XeLaTeX % use the following command: % all document files must be coded in UTF-8 \documentclass{textolivre} % for anonymous submission %\documentclass[anonymous]{textolivre} % to create HTML use %\documentclass{textolivre-html} % HTML compile using make4ht % $ make4ht -c textolivre-html.cfg -u -x article "fn-in,svg,pic-align" % % See more information on the repository: https://github.com/leolca/textolivre % Metadata \begin{filecontents*}[overwrite]{article.xmpdata} \Title{What are the factors that influence the use of social networks as a means to serve pedagogy?} \Author{Melchor Gómez-García \sep Moussa Boumadan \sep Roberto Soto-Varela \sep Ángeles Gutiérrez-García} \Language{en} \Keywords{Social networks \sep Teacher education \sep Multimedia instruction \sep Information technology} \Journaltitle{Texto Livre} \Journalnumber{1983-3652} \Volume{14} \Issue{1} \Firstpage{1} \Lastpage{16} \Doi{10.35699/1983-3652.2021.25420} \setRGBcolorprofile{sRGB_IEC61966-2-1_black_scaled.icc} {sRGB_IEC61966-2-1_black_scaled} {sRGB IEC61966 v2.1 with black scaling} {http://www.color.org} \end{filecontents*} \journalname{Texto Livre: Linguagem e Tecnologia} \thevolume{14} \thenumber{1} \theyear{2021} \receiveddate{\DTMdisplaydate{2020}{09}{23}{-1}} % YYYY MM DD \accepteddate{\DTMdisplaydate{2020}{12}{3}{-1}} \publisheddate{\DTMdisplaydate{2020}{12}{22}{-1}} % Corresponding author \corrauthor{Melchor Gómez-García} % DOI \articledoi{10.35699/1983-3652.2021.25420} % Abbreviated author list for the running footer \runningauthor{Gómez-García et al.} \editorname{Daniervelin Pereira} \title{What are the factors that influence the use of social networks as a means to serve pedagogy?} \othertitle{Quais são os fatores que influenciam a utilização das redes sociais como meio para servir a pedagogia?} % if there is a third language title, add here: %\othertitle{Artikelvorlage zur Einreichung beim Texto Livre Journal} \author[1]{Melchor Gómez-García \orcid{0000-0003-3453-218X} \thanks{Email: \url{[email protected]}}} \author[1]{Moussa Boumadan \orcid{0000-0003-3334-1007} \thanks{Email: \url{[email protected]}}} \author[2]{Roberto Soto-Varela \orcid{0000-0003-2105-5580} \thanks{Email: \url{[email protected]}}} \author[1]{Ángeles Gutiérrez-García \orcid{0000-0001-7376-6064} \thanks{Email: \url{[email protected]}}} \affil[1]{Universidad Autónoma de Madrid, Espanha.} \affil[2]{Nebrija University, Espanha.} \addbibresource{article.bib} % use biber instead of bibtex % $ biber tl-article-template % set language of the article \setdefaultlanguage[variant=british]{english} \setotherlanguage[variant=brazilian]{portuguese} % for spanish, use: %\setdefaultlanguage{spanish} %\gappto\captionsspanish{\renewcommand{\tablename}{Tabla}} % use 'Tabla' instead of 'Cuadro' % for languages that use special fonts, you must provide the typeface that will be used % \setotherlanguage{arabic} % \newfontfamily\arabicfont[Script=Arabic]{Amiri} % \newfontfamily\arabicfontsf[Script=Arabic]{Amiri} % \newfontfamily\arabicfonttt[Script=Arabic]{Amiri} % % in the article, to add arabic text use: \textlang{arabic}{ ... } %\newcolumntype{b}{X} %\newcolumntype{m}{>{\hsize=.6\hsize}X} %\newcolumntype{s}{>{\hsize=.35\hsize}X} \newcolumntype{b}{>{\hsize=2.3\hsize}X} \newcolumntype{m}{>{\hsize=1.1\hsize}X} \newcolumntype{s}{>{\hsize=.5\hsize}X} \begin{document} \maketitle \begin{polyabstract} \begin{english} \begin{abstract} Social networks are a key element in young people's and teenagers' leisure time. 
But their influence goes beyond the field of entertainment, reaching educational and training environments as well. These tools can support and improve student learning, but it is clear that they can also be an inconvenience or a potential danger for young people. The purpose of this research is to describe the profiles of teachers who use social networks as resources for their teaching, both inside and outside the classroom. It also seeks to identify the characteristics that influence their incorporation as learning elements. For this reason, a secondary analysis of the PISA 2018 database is carried out. It contains the data of 21,621 secondary education teachers from the 19 regions of Spain. Multilevel analysis is used as the method of data analysis. The results point to differences in use across the Autonomous Communities and show the importance of the teacher's age and number of years of experience, both for use in the classroom and for covering training needs. Quality teaching requires a commitment to technological resources; it is therefore decisive to know these elements in order to inform the planning carried out by teachers and educational institutions.
\keywords{Social networks \sep Teacher education \sep Multimedia instruction \sep Information technology}
\end{abstract}
\end{english}

\begin{portuguese}
\begin{abstract}
As redes sociais são um elemento-chave nos tempos livres dos jovens e dos adolescentes. Mas a sua influência vai para além do campo do entretenimento, atingindo o ambiente educativo e de formação. Essas ferramentas podem apoiar e melhorar a aprendizagem dos estudantes, mas é evidente que também podem ser um incômodo ou um perigo potencial para os jovens. O objetivo desta investigação é descrever os perfis dos professores que utilizam as redes sociais como recursos para o seu ensino, tanto dentro como fora da sala de aula. Procura também conhecer as características que influenciam a sua incorporação como elementos de aprendizagem. Por esta razão, é realizada uma exploração secundária da base de dados PISA 2018. Ela contém os dados de 21.621 professores do ensino secundário das 19 regiões de Espanha. A análise multinível é utilizada como um método de análise de dados. Os resultados apontam para diferenças de utilização de acordo com as Comunidades Autônomas, mostrando a importância da idade do professor e o número de anos de experiência, tanto para a sua utilização na sala de aula como para cobrir as suas necessidades de formação. Um ensino de qualidade requer um compromisso com os recursos tecnológicos; por conseguinte, é decisivo conhecer esses elementos para planear professores e instituições de ensino.
\keywords{Redes sociais \sep Formação de professores \sep Instrução multimédia \sep Tecnologia da informação}
\end{abstract}
\end{portuguese}
% if there is another abstract, insert it here using the same scheme
\end{polyabstract}

\section{Introduction}\label{sec-intro}

The Annual Study of Social Networks \cite{spain2019} points out that in Spain, 85\% of Internet users between the ages of 16 and 65 use social networks. This represents more than 25.5 million users in the country. To put this number in perspective, the total Spanish population in this age bracket (16-65 years) is 30.9 million people \cite{instituto2018}. Only 5.4 million do not use these digital socialization environments, which represents 17.5\% of all citizens in this age bracket.
Users are generating more quick, ephemeral content and less content that remains documented over time. Under this paradigm, new types of social networks arise in which this form of sharing is favoured; their main characteristic is to strengthen the bond that keeps a person constantly aware of new updates, or that creates a recurrent need to update one's own profile with new content. All of this is happening before the new approaches to connectivity based on 5G technology, which will allow and facilitate new ways of being connected, have even become widespread. That technology will bring a level of data transfer and ease of access which will clearly translate into more time spent connected to these socialization spaces. We are thus facing an area for which the forecasts point to exponential growth within a short time, in which video has clearly become the reference format for these new forms of socializing, and in which frequent use of social networks already extends to the vast majority of society.

Taking these data and the new ways of interacting in digital socialization spaces into account, it is clear that such spaces can influence many aspects of the lives of the individuals who use them. \textcite{coll2013} highlights that the skills and competences we develop, and the learning we experience, are the result of the different scenarios in which we take part and develop. Considering this situation from the educational point of view, we can identify schools as one of these scenarios. But they are not the only one: online digital spaces have fostered the emergence of a variety of such scenarios, thanks to the progress of Information and Communication Technologies (ICT). What we mean is that many young people spend part of their time following role models or ``influencers'' who can influence them in some way. Therefore, in the new learning ecology described by the author, the most decisive factor is the ability to acquire new knowledge, searching for and generating the conditions to learn in diverse contexts and situations, rather than possessing a dense body of knowledge. This knowledge is updated at a dizzying rate in digital niches that receive high numbers of visits from young users, the profiles we have been calling Millennials or Generation Z, among other categorizations.

In this context, informal training opportunities have an impact on an individual's process of constructing meaning. \textcite{siemens2005} considers this type of approach a general characteristic of the whole learning path. Recognizing informal learning opportunities within formal education gains strength from the number of educational niches to which individuals are exposed through their own mobile devices. The great challenge is to be able to ensure, with sufficient evidence, that these networked environments can generate real learning. The interconnection between users in these scenarios is an absolutely necessary condition in itself; nevertheless, it is not enough \cite{selwyn2010}. The work of teachers in this new learning ecology is a key element in ensuring that learning actually takes place. Drawing on specific environments such as social networks and their use as a means of developing a learning experience, \textcite{timothy2016} stresses that these spaces can foster communication between teachers and learners, breaking down geographical barriers and enabling the sharing of diverse interests.
On the one hand, such use boosts the level of feedback and interactivity in the development of a training opportunity, making access to knowledge more fluid and leading to substantial time savings. Social networks as a tool at the service of pedagogy favour the development of technical skills and abilities that are fundamental for acquiring a digital competence that is essential in today's and tomorrow's society. Moreover, they facilitate the growth of fundamental social skills and the development of creative capacity, critical thinking and collaborative work. They are also venues that make it possible to reduce the distance between students and teachers, establishing monitoring and accompaniment by teachers with a higher level of immediacy and fluidity \cite{garcia2012,tejedor2012,timothy2016}.

The emergence of web 2.0 technologies \cite{oreilly2005}, and in particular social networks, has substantially modified learning ecologies, leading to a prevailing need for new learning pedagogies \cite{casteneda2013}: pedagogies that take into account that individuals who participate in these spaces must be aware of the learning to which they are being exposed, and be clear about the skills they will be able to acquire. To delimit the evaluative approach, it is necessary to know the proposal through which it will be possible to demonstrate what a student has learned \cite{williams2011}.

\begin{sloppypar}
The importance of the didactic planning of technology-mediated learning experiences has been discussed in the literature. Research results in this field \cite{gusman2017,hughes2015} show that teachers who decide to incorporate ICTs in the planning of their didactic experiences are scarce and, moreover, that most of these experiences do not end up being innovations that can be considered to have an educational impact. This is in accordance with the findings of \textcite{cortina2019}. They state that even if students sometimes have access to technology, they cannot make pedagogically meaningful use of it.
\end{sloppypar}

In this sense, \textcite{gomezgarcia2020} have identified that the aspects of online training most valuable to teachers are the contents and activities, rather than the format of the proposal or the technologies used. In the last decade, many relevant studies have focused on clarifying the impact of incorporating professional digital competence in teacher training \cite{instefjord2017}, and on discovering and analysing basic criteria for the optimal design of digital competence training proposals in educational institutions and in teacher training \cite{engen2015}. These factors emphasize the importance of technical and pedagogical proficiency in training environments in which we use technology, including social networks, as a means of learning. In line with this, the present study draws on the PISA 2018 database in order to describe the profiles of teachers who use social networks as resources for their teaching, both inside and outside the classroom, and to determine the characteristics that influence their incorporation as elements of learning.

\section{Methodology}\label{sec-metodologia}

The main objective of this research is to quantify how the digital tools used by teachers are conditioned by the teacher profile, observed through variables such as age, teaching experience, gender or nationality. This was done using the PISA 2018 database for Spain, whose results were published in December 2019, to establish, by means of a secondary analysis, which of these variables had a statistically significant influence.
The aim of the PISA report is to provide comparable data that will enable countries to improve their education policies and results. It involves an analysis that does not evaluate the student, but rather the system in which he or she is being educated. The sample analysed covers the 19 Spanish regions (the Autonomous Communities of Andalusia, Aragon, the Balearic Islands, Catalonia, the Canary Islands, Cantabria, Castile-La Mancha, Castile and Leon, the Community of Madrid, the Foral Community of Navarre, the Community of Valencia, Extremadura, Galicia, the Basque Country, the Principality of Asturias, the Region of Murcia and La Rioja, together with the Autonomous Cities of Ceuta and Melilla), with 21,621 secondary school teachers participating, specifically those teaching at the level of 15-year-old students. Of the total sample, 36.5\% are male and 63.5\% are female. Ages range from 20 to 70 years, with a mean of 45.5 and a mode of 50, coinciding with the social phenomenon known as the baby boom of the 1970s. \Cref{fig01} shows the age distribution of the teachers who participated in the study.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{fig01}
\caption{Frequency of teachers' ages.}
\label{fig01}
\source{own elaboration.}
\end{figure}

All participants take a computer-based test, one part of which is multiple-choice while another part consists of open-ended questions. Each participant also spends about an hour answering a questionnaire that focuses on their background, including learning habits, motivation and family. In our work, we used the macro variable of the PISA study [TC169] (``How often did you use the following tools in your teaching this school year?''), corresponding to the main ``teacher questionnaire'' (use of information and communication technology at school), together with the country variable (CNTYD) and the following identification variables (TC): gender (TC001), age (TC002) and years of teaching experience (TC007).

To develop the methodology, a univariate linear model was calculated for the variable ``H\_Social'', which compiles the frequencies of use of social networks in education in the PISA database. It was compared with the variables ``teacher's age'', ``teacher's sex'', ``years of teaching experience'' and ``teacher's country'', which allowed us to establish the profile of the teacher who uses social media as an educational resource in his or her classes. The variable TC169Q09HA was recoded for handling: ``Never'' became ``0'', ``In some lessons'' became ``1'', ``In most lessons'' became ``2'', and ``In every or almost every lesson'' became ``3''. This approach allowed us to carry out factorial analyses of variance and to build contingency tables. Having found that some factors explained a significant part of the variability, we focused on those factors in order to define the teacher profile through hypothesis testing and mean comparisons.

\section{Results}\label{sec-resultados}

\subsection{The age of the teacher and the use of social networks in education}\label{sec-age}

\cref{tbl-tabela-01} shows the different intensities of use of social networks in education across age brackets. Each group shows a different mean age, so we suspect that there may be a relationship between the two variables.

\begin{table}[htpb]
\caption{The age of the teacher and the use of social networks in education.}
\label{tbl-tabela-01}
\begin{tabularx}{\textwidth}{msss}
\toprule
Never & N & Valid & 9532 \\
 & Mean & & 45.68 \\
Deviation & & 9.131 \\ \midrule In some lessons & N & Valid & 2955 \\ & Mean & & 44.69 \\ & Std. Deviation & & 9.069 \\ \midrule In most lessons & N & Valid & 423 \\ & & Perdidos & 1 \\ & Mean & & 43.51 \\ & Std. Deviation & & 9.144\\ \midrule In every or almost every lesson & N & Valid & 136 \\ & Mean & & 43.85 \\ & Std. Deviation & & 9.839 \\ \midrule No Response & N & Valid & 294 \\ & Mean & & 48.29 \\ & Std. Deviation & & 8.748 \\ \bottomrule \end{tabularx} \source{own elaboration.} \end{table} Normal distribution in each of the groups according to the Kolmogorov-Smirnov test, but when analyzing the homoscedasticity, cannot be said for sure that there is an equality of variance between the groups. Therefore, we will apply non-parametric tests, specifically the Kruskall-Wallis test, which confirms the existence of significant age differences between the different groups of social network use. \Cref{tbl-tabela-02} shows significant elements emerge in the peer-to-peer testing. This allows us to make a difference in two subgroups. A first sub-group formed by teachers who use social networks, and which includes those who use them “In every lesson” and “In most lessons”, and whose mean age is close to 43. And a second sub-group that includes those who “Never” use social networks, which reflects a significant difference in age, and a mean age above 45 years old. The mean age is higher among those teachers with less use of social networks for education. \begin{table}[htpb] \caption{Average range of social network usage in education.} \label{tbl-tabela-02} \begin{tabularx}{\textwidth}{lXXXXX} \toprule Sample 1-Sample 2 & Contrast test & Error & Control test Dev. & Sig. & Adjusted Sig.\\ \midrule In most lessons-In every or almost every lesson & 93.524 & 371.070 & -.252 & .801 & 1.000 \\ %\midrule In most lessons-In some lessons & 468.714 & 195.691 & 2.395 & .017 & .100 \\ %\midrule In most lessons-Never & 894.194 & 187.046 & 4.781 & .000 & .000 \\ %\midrule In every or almost every lesson-In some lessons & 375.190 & 330.135 & 1.136 & .256 & 1.000 \\ %\midrule In every or almost every lesson-Never & 800.670 & 325.085 & 2.463 & .000 & .000 \\ %\midrule In some lessons-Never & 425.480 & 79.259 & 5.368 & .000 & .000 \\ \bottomrule \end{tabularx} \vspace{1ex} {\raggedright \footnotesize Asymptotic significances are shown. Significance level is .05. Significance values have been adjusted by Bonferroni correction for several tests. \par} \source{own elaboration.} \end{table} % % essa tabela tem uma nota. Como acrescentar? \subsection{The use of social networks in the different Autonomous Communities of Spain}\label{sec-use} \Cref{tbl-tabela-03} shows the different percentages in the use of social network by Autonomous Comunities. \begin{table}[htpb] \caption{Social network usage percentage in the different Autonomous Communities.} \label{tbl-tabela-03} \centering \begin{tabular}{ll} \toprule Never & \\ \midrule Andalusia & 68.1 \\ Aragon & 74.4 \\ Asturias & 77.3 \\ Balearic Islands & 74.2 \\ Canary Islands & 71.9 \\ Cantabria & 75.0 \\ Castile and Leon & 70.3 \\ Castile-La Mancha & 70.3 \\ Catalonia & 76.2 \\ Extremadura & 68.3 \\ Galicia & 77.0 \\ La Rioja & 65.8 \\ Madrid & 74.2 \\ Murcia & 66.6 \\ Navarre & 76.4 \\ Basque Country & 76.9 \\ Comunidad Valenciana & 74.8 \\ Ceuta & 69.8 \\ Melilla & 65.2 \\ \bottomrule \end{tabular} \source{own elaboration.} \end{table} %é isto? 
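The “Never” percentages in \Cref{tbl-tabela-03} are all that is needed to retrace the regional segmentation discussed next (\Cref{fig02} and \Cref{tbl-tabela-04}). The following sketch is purely illustrative: it assumes Python with scikit-learn available and a three-group k-means segmentation, which is one plausible reading of the clustering process reported below.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# "Never" percentages by Autonomous Community (Table 3)
never_pct = np.array([68.1, 74.4, 77.3, 74.2, 71.9, 75.0, 70.3, 70.3, 76.2,
                      68.3, 77.0, 65.8, 74.2, 66.6, 76.4, 76.9, 74.8, 69.8,
                      65.2]).reshape(-1, 1)

# Segment the regions into three groups with similar levels of non-use
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(never_pct)

# Cluster centres should be close to the final cluster table
# (66.80, 70.58, 75.64)
print(np.sort(km.cluster_centers_.ravel()).round(2))
\end{verbatim}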
\Cref{fig02} shows the process of cluster classification, to explore the social network use by Autonomous Communities and let us make the best decision in terms of the number of subgroups we are going to do. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{fig02} \caption{Dendogram of social network use by Autonomous Communities.} \label{fig02} \source{own elaboration.} \end{figure} \Cref{tbl-tabela-04} reveals by applying a ranking conglomerate process, we were able to segment the regions into three distinct subgroups with a high degree of accuracy. \begin{table}[htpb] \caption{Final Cluster.} \label{tbl-tabela-04} \centering \begin{tabularx}{0.5\linewidth}{XXXX} \toprule & \multicolumn{3}{c}{Clúster} \\ & 1 & 2 & 3 \\ \midrule Never Use & 66.80 & 70.58 & 75.64 \\ \bottomrule \end{tabularx} \source{own elaboration.} \end{table} Social networks are often used with different principles of technology and social communication, and have an integration approach that differs from one classroom to another, depending on the region. Despite the fact that it is not a specific parameter in this work, an initial observation of the graph suggests a relationship between less use of social networks and more rural and unpopulated regions; and more use of social networks in more urban and populated regions. This data is not statistically proven. \subsection{Gender and use of social networks}\label{sec-gender} \Cref{fig03} analyze the use of social networks from a gender perspective, we obtain results that seem to indicate that male teachers use social networks more than female teachers in education. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{fig03} \caption{Gender and use of social networks.} \label{fig03} \source{own elaboration.} \end{figure} \Cref{tbl-tabela-05} reflects, in line with \Cref{fig03}, the use of social networks by gender from absolute values and percentage values. \begin{table}[htpb] \caption{Use of social network by gender.} \label{tbl-tabela-05} \begin{tabularx}{\linewidth}{XXXXXX} \toprule & \multicolumn{5}{>{\hsize=\dimexpr5\hsize+5\tabcolsep+\arrayrulewidth\relax}X}{This school year, how often used for teaching: Social media (e.g., Facebook, Twitter)} \\ Are you female or male? & Never & In some lessons & In most lessons & In every or almost every lesson & Total \\ \midrule Female & 21638 & 10422 & 2450 & 798 & 35308 \\ & (61.3\%) & (29.5\%) & (6.9\%) & (2.3\%) & (100.0\%) \\ Male & 14857 & 8126 & 2243 & 640 & 25866 \\ & (57.4\%) & (31.4\%) & (8.7\%) & (2.5\%) & (100.0\%) \\ Total & 36495 & 18548 & 4693 & 1438 & 61174 \\ & (59.7\%) & (30.3\%) & (7.7\%) & (2.4\%) & (100.0\%) \\ \bottomrule \end{tabularx} \source{own elaboration.} \end{table} % não sei fazer esta In \cref{tbl-tabela-06} and \cref{tbl-tabela-07} by means of Chi-square test and Symetric measures, we check if this tendency in the data is statistically consistent, and we get a high significance. These statistician's values indicate a significant relationship between the use of social networks in education and the gender of the teacher. 
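The statistics in \Cref{tbl-tabela-06} and \Cref{tbl-tabela-07} can be recomputed directly from the counts published in \Cref{tbl-tabela-05}. The sketch below is only an illustration (Python with SciPy assumed): it computes Pearson's chi-square and the contingency coefficient $C = \sqrt{\chi^2 / (\chi^2 + N)}$, which should reproduce the reported values of approximately 116.08 and .044.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

# Gender x social-network-use counts from Table 5
# rows: Female, Male; columns: Never, In some lessons,
#                              In most lessons, In every or almost every lesson
obs = np.array([[21638, 10422, 2450, 798],
                [14857,  8126, 2243, 640]])

chi2, p, dof, _ = chi2_contingency(obs)
C = np.sqrt(chi2 / (chi2 + obs.sum()))   # contingency coefficient

print(f"Pearson chi-square = {chi2:.3f} (df = {dof}), p = {p:.3g}")
print(f"Contingency coefficient C = {C:.3f}")
\end{verbatim}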
\begin{table}[htpb] \caption{Use of social network by gender.} \label{tbl-tabela-06} \centering \begin{tabular}{llll} \toprule & Value & df & Sig \\ \midrule Pearson Chi-Square & 116.082 & 3 & .000 \\ Likelihood Ratio & 115.595 & 3 & .000 \\ Linear-by-Linear Association & 99.049 & 1 & .000 \\ N of Valid Cases & 61174 & & \\ \bottomrule \end{tabular} \source{own elaboration.} \end{table} \begin{table}[htpb] \caption{Symmetric Measures.} \label{tbl-tabela-07} \centering \begin{tabular}{lll} \toprule & Value & Aprox. Sig. \\ \midrule Nominal by Contingency Coefficient & .044 & .000 \\ Nominal & & \\ N of Valid Cases & 61174 & \\ \bottomrule \end{tabular} \source{own elaboration.} \end{table} \subsection{Experience in teaching}\label{sec-experiencia} Data on the use of social networks in education according to the number of years of experience of the teacher, are shown below. The first fact we observe is the use of social networks seems to be related to the number of years of teaching experience. In \cref{tbl-tabela-08}, to check if this relationship is situational or statistically significant, we do an Anova test, which gives us the following results. \begin{table}[htpb] \caption{Anova about how many years of work experience do you have? Year(s) working as a teacher in total.} \label{tbl-tabela-08} \centering \begin{tabular}{llllll} \toprule & Sum of Squares & df & Mean Square & F & Sig. \\ \midrule Between Groups & 7892.648 & 3 & 2630.883 & 26.970 & .000 \\ Within Groups & 5884462.679 & 60323 & 97.549 & &\\ Total & 5892355.327 & 60326 & & & \\ \bottomrule \end{tabular} \source{own elaboration.} \end{table} \Cref{tbl-tabela-09} indicates the group that “uses a lot” and “always uses” do not have different means, but the rest do. The difference in means is significant, and the more social networks are used, the fewer years the teacher has been teaching. \begin{table}[htpb] \caption{How many years of work experience do you have? Year(s) working as a teacher in total.} \label{tbl-tabela-09} \begin{tabularx}{\textwidth}{mmsssss} \toprule & & & & & \multicolumn{2}{>{\hsize=\dimexpr2\hsize+2\tabcolsep+\arrayrulewidth\relax}s}{95\% Confidence Interval for Difference}\\\cline{6-7} (I) Social media (e.g., Facebook, Twitter) & (J) Social media (e.g., Facebook, Twitter) & Mean Difference (I-J) & Std. Error & Sig. & Lower Bound & Upper Bound \\ \midrule Never & In some lessons & $.518^{*}$ & $.090$ & $.000$ & $-.75$ & $-.29$ \\ & In most lessons & $1.010^{*}$ & $.154$ & $.000$ & $.611$ & $.41$ \\ & In every or almost every lesson & $1.338^{*}$ & $.268$ & $.000$ & $.652$ & $.03$ \\ In some lessons & Never & $-.518^{*}$ & $.090$ & $.000$ & $-.75$ & $-.29$ \\ & In most lessons & $.492^{*}$ & $.162$ & $.013$ & $.07$ & $.91$ \\ & In every or almost every lesson & $.820^{*}$ & $.273$ & $.014$ & $.121$ & $.52$ \\ In most lessons & Never & $-1.010^{*}$ & $.154$ & $.000$ & $-1.41$ & $-.61$ \\ & In some lessons & $-.492^{*}$ & $.162$ & $.013$ & $-.91$ & $-.07$ \\ & In every or almost every lesson & $.328$ & $.300$ & $.693$ & $-.441$ & $.10$ \\ In every or almost every lesson & Never & $-1.338^{*}$ & $.268$ & $.000$ & $-2.03$ & $-.65$ \\ & In some lessons & $-.820^{*}$ & $.273$ & $.014$ & $-1.52$ & $-.12$ \\ & In most lessons & $-.328$ & $.300$ & $.693$ & $-1.10$ & $.44$ \\ \bottomrule \end{tabularx} \vspace{1ex} {\raggedright \footnotesize $^{*}$ The mean difference is significant at the $.05$ level. 
\par} \source{own elaboration.} \end{table} % não sei fazer essa In \cref{tbl-tabela-10}, we make the multiple comparisons and form 3 subgroups, which are not as defined as in the case of the age of the teachers. \begin{table}[htpb] \caption{How many years of work experience do you have? Year(s) working as a teacher in total.} \label{tbl-tabela-10} \begin{tabularx}{\textwidth}{XXXXX} \toprule & & \multicolumn{3}{>{\hsize=\dimexpr3\hsize+3\tabcolsep+\arrayrulewidth\relax}X}{Subgroups for alpha = $0.05$} \\ Social media (e.g., Facebook, Twitter) & N & 1 & 2 & 3 \\ \midrule In every or almost every lesson & 1412 & 15.05 & & \\ In most lessons & $4634$ & $15.38$ & $15.38$ & \\ In some lessons & $18323$ & & $15.87$ & $15.87$ \\ Never & $35958$ & & & $16.39$ \\ Sig. & & $.448$ & $.118$ & $.090$ \\ \bottomrule \end{tabularx} \vspace{1ex} {\raggedright \footnotesize The means for the groups are displayed in the homogeneous subsets.\\a. Use the sample size of the harmonic mean = $3974.571$.\\b. Group sizes are not equal. The harmonic mean of the group sizes is used. Type I error levels are notguaranteed. \par} \source{own elaboration.} \end{table} % não sei fazer essa Looking at the number of years in each of the subgroups, we identified a group that uses social media less as a classroom resource, with an average number of years of experience greater than the other groups. It is highly likely that we find an age-related variable, since the teachers who have more years of teaching experience are usually also the teachers who are older. \section{Discussion and conclusions}\label{sec-discussao} The results of this study show three elements of the teacher's profile having an influence on the use of social networks in education. The first relevant fact is the age of teachers who do not use social networks in teaching, which is significantly different from those who do. If we also look at their mean ages, teachers who use social networks in education are younger than those who do not use them, who are above age 45. This seems to suggest that the older the teacher, the less use is made of this tool in education. This is in line with the research of \textcite{perez2016}, which, although they indicate an absence of difference in digital competence linked to the use of social networks when they look at it from the gender perspective, they do present this difference from the age factor. They conclude that the youngest teachers are those who reach the highest level of digital competence and handling of these tools in pedagogical practice. Nevertheless, \textcite{suarez2013} point out that there is no significant evidence of competence in the area of technology. Usually, it is not the age itself that causes a lower level of use of digital resources based on social interactions on the Internet. According to \textcite{correa2015} found that age, gender and educational level influence a better management of the tools that allow social interactions on the network. Demographic variables such as age are influenced by variations over time and context, an issue that requires constant updates \cite{sanchez2010}. In the same vein, \cite{lopez2020} they note that variables such as age, sex and social class affect the assessments made of social networks. The results show that young men are the ones who express positive opinions regarding the use of these media spaces, although they add that these appreciations should be taken in a cautious or careful way. 
In addition, \textcite{beemt2020} highlight that many studies are not explicit in the demographic description of respondents, leading to the possibility of inferring conclusions that are not very specific, but rather general. Therefore, it is not age itself that causes the phenomenon, but it is the compendium of multiple variables that are influenced by age \cite{suarez2013,law2008}. We have been able to verify a similar fact when we have analyzed the years of teaching experience, which shows a relationship with the use of social networks similar to that of age. It is clear that these are related factors, but it is difficult to indicate whether the cause is age or the number of years worked. But if we look at the age factor from a general perspective of integration of technologies in learning processes, the older teaching group (between 45 and 55 years) is the one that makes the most varied and frequent use of these tools \cite{area2016}. While \textcite{gomez2012} show that the use of networks for educational purposes usually arises from the initiative of the students, and rarely from the initiative of the teacher. Gender is the second influential feature. The present study reflects that social networks are a resource that is used more frequently by the male gender, and to a lesser extent by the female gender. This coincides with the findings of \textcite{suarez2013,almerich2011}, which show that both age and gender have a direct impact on the management of social networks in training environments. In turn, these two studies indicate that male teachers have a higher level of competence in the use of ICT, while female teachers are more likely to integrate ICT into their teaching practice. Nevertheless, when moving from an intermediate to a high level of digital skill in the use of social networks, \textcite{mayor2019} found that gender is the most influential variable. In the present study, the general use of social networks has its highest percentage in the male gender. Contrasting this idea, \textcite{cruz2016} found that women spend more time on social networks, in a general perspective of the use. The third influential factor is the geographical one. Social networks are integrated to a very different extent in different geographical areas of Spain. Social learning as a methodological approach incorporates many principles of technology and social communication. Although it is not a parameter that is the object of this work, a first observation of the data seems to show a trend towards less use of social networks in both rural and unpopulated regions, and greater use of social networks in urban and populated regions. In this regard, \textcite{alcivar2017} showed that there are significant differences between rural areas and those coming from urban areas, pointing out that this is due to the limitations in terms of access to technology and connectivity in rural areas. Even so, if we combine the factor of geographical area and age, \textcite{peral2015} show that socio-demographic variables do not serve to differentiate between elderly people in terms of their use of social networks. There is no doubt that educational planning cannot ignore the active and social use of social networks \cite{duart2009}. They are one of the main tools for interaction and therefore an excellent opportunity to facilitate communication between educational actors. 
We must reflect on the enormous educational potential of social networks in education, and the challenge remains to awaken the interest of institutions, teachers and students alike to integrate them as basic teaching tools \cite{casteneda2010}. The study shows significant data regarding the age, sex and years of experience of the teachers. To make this evidence more comprehensible, it would have been interesting to conduct some focus groups with the study participants, which is not an option since this is a secondary use of the database. However, it is a factor to be taken into account for future research. For all these reasons, it is important to have elements to predict the use of these social spaces from a segmented perspective, and to know which teaching profiles most frequently opt for these tools. \begin{english} \printbibliography\label{sec-bib} \end{english} \end{document}
%!TEX root=ClassNotes.tex \section{Computing Indefinite Integrals} In this section, we're interested in computing {\it indefinite integrals} or {\it antiderivatives}. Indefinite integrals cannot be computed for all functions, for example, there is no closed form for the {\bf Gaussian Integral} \begin{align*} \int \exp\left(-\dfrac{x^2}{2} \right) \: dx \end{align*} which is widely used in statistics, and the only way to compute it is using numerical methods. There is no general method of computing indefinite integrals. Instead, in this section we'll learn a few common tricks which you can combine in various ways to solve complicated problems. There are two basic tools for computing indefinite integrals \begin{enumerate} \item u-substitution and \item integration by parts. \end{enumerate} These are often combined with various algebraic and trig identities. One thing that makes the computation of indefinite integrals far more complicated than that of derivatives, in practice, is that similar looking functions can have wildly different integrals. For example, \begin{align*} \int 1/x \: dx & = \ln x + c \\ \int 1/x^2 \: dx & = -1/x + c \\ \int 1/{(1+x^2)} \: dx & = \tan^{-1} x + c \\ \int 1/(1 - x^2) \: dx & = 1/2 \cdot \ln ((1 + x)/(1 - x)) + c \\ \int 1/\sqrt{1 - x^2} \: dx & = \sin^{-1} x + c \end{align*} So just by looking at a problem you cannot easily {\it guess} what it's integral is going to look like. \subsection{u-substitution} In terms of indefinite integrals, u-substitution takes the form \begin{align} \label{eq:u-sub} \int f\left(g(x)\right) g'(x)\: dx = \int f(u) \: du + \mathrm{constant} \end{align} Ideally we want the integral on the right-hand side to be simpler than the one on the left-hand side. We interpret this identity as having obtained the right-hand from the left-hand side by making the ``substitution'' \begin{align*} u & = g(x) \\ du & = g'(x) \: dx \end{align*} This is just a {\it mnemonic} and doesn't have a concrete mathematical meaning (at least not a simple one). \begin{remark} After finding the integral $\int f(u) \: du$ you should ALWAYS plug back $u = g(x)$ to get your final answer in terms of the original variable. \end{remark} \begin{exercise} Find the following indefinite integrals using u-substitution(s). \begin{multicols}{2} \begin{enumerate} \item $\int x e^{x^2} \: dx $ \item $\int \dfrac{1}{x \ln x} \: dx$ \item $\int \dfrac{e^{\sqrt{x}}}{\sqrt{x}} \: dx$ \item $\int \dfrac{x}{4x^2+5} \: dx$ % \item $\int \dfrac{x}{2x^2 - 3} \: dx$ \item $\int \dfrac{1}{x^2 + 2} \: dx$ \item $\int \sec^2 x \tan^5 x \: dx$ \item $\int \tan x \: dx$ \item $\int \dfrac{e^x}{e^{2x} + 2e^x + 1} \: dx$ \end{enumerate} \end{multicols} \end{exercise} % \begin{exercise} % Sometimes there's no obvious reason to use u-substitution. Instead, we use u-substitution in the hope that integral simplifies. % \begin{enumerate} % \item $\dfrac{e^x}{1 - e^{2x}}$ % \item $\dfrac{1 - e^x}{1 + e^{x}}$ % \item $\dfrac{e^{2x}}{\sqrt{1 + e^x}}$ % \end{enumerate} % \end{exercise} \subsection{Integration by Parts} Integration by parts is {\it sometimes} used to integrate products of functions (you should always try u-substitution first, unless the use of integration by parts is extremely obvious). There are various ways of memorizing integration by parts, you should pick one that you find easy to remember. One possible way to express it is as follows. \begin{align} \int fg = fG - \int f' G + \mathrm{constant} \end{align} where $G$ is any antiderivative of $g$. 
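To see where this identity comes from, note that the product rule gives $(fG)' = f'G + fG' = f'G + fg$; integrating both sides yields
\begin{align*}
fG = \int f'G + \int fg
\quad \Longrightarrow \quad
\int fg = fG - \int f' G + \mathrm{constant}.
\end{align*}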
In most problems, integration by parts is useful only if the function $f'G$ is simpler that the function $fg$. \begin{example} For computing $\int x e^x \: dx$ by parts, we have (at least) two possible choices: \begin{enumerate} \item $f = x \mbox{ and } g = e^x$. In this case, \begin{align*} f' G = e^x \end{align*} \item $f = e^x \mbox{ and } g = x$. In this case, \begin{align*} f' G = e^x \cdot x^2/2 \end{align*} In the first case, the function $f' G$ is simpler than $fg$ and hence this is the decomposition we should use for applying integration by parts. \end{enumerate} \end{example} \begin{exercise} Find the following indefinite integrals using integration by parts. \begin{multicols}{2} \begin{enumerate} \item $\int x e^x\: dx$ \item $\int x^2 e^{x}\: dx$ \item $\int x \sin x\: dx$ \item $\int \ln x \: dx$ \hint{ Use $f = \ln x$ and $g = 1$.} \item $\int x \ln x\: dx$ \item $\int x (\ln x)^2\: dx$ \end{enumerate} \end{multicols} \end{exercise} \subsection{Trigonometric Integrals} There are a LOT of tricks for computing integrals of functions involving trigonometric functions. We'll only compute integrals of two kinds of functions: \begin{align*} \sin^ m x \cos^n x \mbox{ \qquad and \qquad } e^{ax} \sin bx \end{align*} Let $m$ and $n$ be non-negative integers. For integrating $\sin^ m x \cos^n x$ if {\bf either $m$ or $n$ is odd} then we get the answer by a simple u-substitution. For example, \begin{align*} \sin^ {2k+1} x \cdot \cos^n x & = \sin^ {2k} x \cdot \sin x \cdot \cos^n x \\ & = \left( \sin^2 x \right)^k \cdot \sin x \cdot \cos^n x \\ & = \left( 1 - \cos^2 x \right)^k \cdot \sin x \cdot \cos^n x \end{align*} after which we can integrate using the u-substitution $ u = \cos x$. \begin{exercise} Compute the following indefinite integrals. \begin{multicols}{2} \begin{enumerate} \item $\int \sin^3 x \cos^2 x \: dx$ \item $\int \sin^2 x \cos^5 x \: dx$ \end{enumerate} \end{multicols} \end{exercise} Let $m$ and $n$ be non-negative integers. For integrating $\sin^ m x \cos^n x$ if {\bf both $m$ and $n$ are even} then we use the double angle formulae, \begin{align*} \sin^ {2k} x & = \left(\sin^{2} x \right)^k & & & \cos^ {2l} x & = \left(\cos^{2} x \right)^l \\ \\ & = \left(\dfrac{1 - \cos(2x)}{2} \right)^k & & & & = \left(\dfrac{1 + \cos(2x)}{2} \right)^l \end{align*} \begin{exercise} Compute the following indefinite integrals. \begin{multicols}{2} \begin{enumerate} \item $\int \sin^2 x \: dx $ \item $\int \sin^2 x \cos^2 x \: dx$ \end{enumerate} \end{multicols} \end{exercise} Let $a$, $b$ be real numbers. Using the Euler's identity \begin{align*} e^{ibx} = \cos bx + i \sin bx \end{align*} we get \begin{align*} \int e^{ax} \cos bx \: dx & = \mbox{real part of } \int e^{ax} e^{ibx} \: dx \\ \int e^{ax} \sin bx \: dx & = \mbox{imaginary part of } \int e^{ax} e^{ibx} \: dx\end{align*} \begin{exercise} Compute the following indefinite integrals. 
\begin{multicols}{2} \begin{enumerate} \item $\int e^{ax} \cos bx \: dx$ \item $\int e^{ax} \sin bx \: dx $ \end{enumerate} \end{multicols} \end{exercise} \subsection{Trigonometric Substitutions} Trigonometric substitutions are extremely useful when {\it eliminating radicals} (among other things) because of the identities \begin{align} \begin{split} \label{eq:trig_identities} \sin^2 x + \cos^2 x &= 1 \\ \sec^2 x &= 1 + \tan^2 x \end{split} \end{align} \begin{example} If there is a term \begin{align*} \sqrt{1 + x^2} \end{align*} in our integral, then we can substitute $ x = \tan u$ so that \begin{align*} \sqrt{1 + x^2} & = \sqrt{1 + \tan^2 u} \\ & = \sec u & \mbox{ by \eqref{eq:trig_identities}} \end{align*} \end{example} We'll need a few preliminary computations. \begin{exercise} \begin{enumerate} \item Compute $(\sec x)'$. \item Show using the fundamental theorem of calculus that \begin{align*} \int \sec x \: dx & = \ln (\sec x + \tan x) + c \\ \int \csc x \: dx & = -\ln (\csc x + \cot x) + c \end{align*} \item {\bf Optional: } Compute the integrals $\int \sec x \: dx$ and $\int \csc x \: dx$ directly (without using the fundamental theorem). \end{enumerate} \end{exercise} \begin{exercise} Compute the following integrals using trig substitutions. \begin{multicols}{2} \begin{enumerate} \item $\int \dfrac{1}{\sqrt{1-x^2}}\: dx$ \item $\int \dfrac{1}{\sqrt{1+x^2}} \: dx$ \item $\int \dfrac{1}{\sqrt{x^2-1}} \: dx$ \item $\int \dfrac{1}{x\sqrt{x^2-1}} \: dx$ \item $\int \dfrac{1}{x\sqrt{1-x^2}} \: dx$ \item $\int \dfrac{1}{x\sqrt{1+x^2}} \: dx$ \item $\int \sqrt{1-x^2}\: dx$ \item $\int x^3 \sqrt{1-x^2}\: dx$ \end{enumerate} \end{multicols} \end{exercise} \begin{exercise}{{\bf (Optional)}} \begin{enumerate} \item Compute the integral \begin{align*} \int \sec^3 x \: dx \end{align*} \item Compute the integral \begin{align*} \int \sqrt{1 + x^2} \: dx \end{align*} \end{enumerate} \end{exercise} \subsection{Partial Fractions} Partial fractions is a technique used to compute integrals of the form \begin{align*} \int \dfrac{P(x)}{Q(x)} \: dx \end{align*} where $P(x)$ and $Q(x)$ are polynomials. The higher the degree of the denominator $Q(x)$ the harder it is to compute the integral.\\ \subsubsection*{Linear Polynomials} When the denominator $Q(x)$ is linear the integral can be computed easily using u-substitution. \begin{exercise} Compute the following integrals \begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{x^2+1}{x} \: dx$ \item $ \int \dfrac{x^2 + 1}{x+1} \: dx$ \item $ \int \dfrac{x}{2x-3} \: dx$ \item $ \int \dfrac{x^2}{2x+2} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \subsubsection*{Quadratic with Repeated Roots} When the denominator is a quadratic $Q(x) = ax^2 + bx + c$ there are 3 different methods for finding the integral, depending on what the roots of $Q(x)$ are. \begin{align*} x^2 + ax + b \mbox{ has } \begin{cases} \mbox{ repeated roots } \\ \mbox{ complex roots } \\ \mbox{ real non-repeated roots } \end{cases} \end{align*} The first case of {\bf repeated roots} is the easiest. In this case, our goal is to find a simple u-substitution to reduce the problem to \begin{align*} \int \dfrac{R(u)}{u^2} \: du \end{align*} where $R(u)$ is some polynomial. \begin{exercise} For each of the following problems, verify that the denominator has repeated roots. Then compute the integrals. 
\begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{x}{4x^2 - 4x + 1} \: dx$ \item $ \int \dfrac{x^2}{x^2 + 2x + 1} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \subsubsection*{Quadratic with Complex Roots} When the denominator $Q(x) = ax^2 + bx + c$ has complex roots our goal is to find a u-substitution to reduce the problem to the integrals \begin{align*} \int \dfrac{1}{u^2 + 1} \: du & & \mbox{ and } & & \int \dfrac{u}{u^2 + 1} \: du \end{align*} \begin{exercise} For each of the following problems, verify that the denominator has complex roots. Then compute the integrals. \begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{1}{x^2 + 4} \: dx$ \item $ \int \dfrac{3}{x^2 + 2x + 2} \: dx$ \item $ \int \dfrac{3x}{x^2 + 2x + 2} \: dx$ \item $ \int \dfrac{x}{4x^2 - 4x + 3} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \begin{exercise}If the degree of the numerator is $ \ge 2$ then we first have to do a long division to simplify the numerator. Compute the following integrals. \begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{x^2}{x^2 + 4} \: dx$ \item $ \int \dfrac{3x^3}{x^2 + 2x + 2} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \subsubsection*{Quadratic with Real Non-repeated Roots} Finally, when the denominator $Q(x)$ has real non-repeated roots we need to use the method of partial fractions.\\ In this method we first need to find a factorization of $Q(x)$ as a product of linear terms, say $Q(x) = Q_1(x) \cdot Q_2(x)$ where both $Q_1(x)$ and $Q_2(x)$ are linear. If the degree of the numerator $P(x)$ is $\le 1$ then we can always write \begin{align*} \dfrac{P(x)}{Q(x)} & = \dfrac{A}{Q_1(x)} + \dfrac{B}{Q_2(x)} \end{align*} for some constants $A$ and $B$. This is called the {\bf partial fraction decomposition} of $\frac{P(x)}{Q(x)}$. We find $A$ and $B$ by multiplying both sides by $Q(x)$ and comparing the coefficients on the left and right hand sides. \begin{exercise} For each of the following problems, verify that the denominator has real non-repeated roots. Then compute the integrals. \begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{1}{x^2 - 1} \: dx$ \item $ \int \dfrac{4x}{x^2 - 4} \: dx$ \item $ \int \dfrac{5}{x^2 - 2x} \: dx$ \item $ \int \dfrac{3 x}{x^2 + x - 2} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \begin{exercise}As before if the degree of the numerator is $ \ge 2$ then we first need to do long division to simplify the numerator. Compute the following integrals. \begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{3 x^2}{x^2 + x - 2} \: dx$ \item $ \int \dfrac{x^2 - 1}{x^2 - 4} \: dx$ \item $ \int \dfrac{4x^3 - 3x + 5}{x^2 - 2x} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \subsubsection*{Degree $\ge 3$} When degree of the denominator $Q(x)$ is $\ge 3$, one can show the existence of a partial fraction decomposition using {\it abstract algebra}. But the details are much more complicated and hard to do by hand. Instead, feel free to use the internet to compute partial fraction decompositions. For example, go to \url{http://www.wolframalpha.com} and input \begin{verbatim} partial fractions (x^2 - 2)/(x+1)^3 \end{verbatim} for computing the partial fraction decomposition of $\dfrac{x^2 - 2}{(x+1)^3}$. Once you have the partial fraction decomposition you can use u-substitution to compute the integral. \begin{exercise} Compute the following integrals using partial fraction decompositions. 
\begin{multicols}{2} \begin{enumerate} \item $ \int \dfrac{x^2 - 2}{(x+1)^3} \: dx$ \item $ \int \dfrac{8}{3x^3 + 7x^2 + 4x} \: dx$ \item $ \int \dfrac{x^3 + 8}{(x^2 - 1)(x - 2)} \: dx$ \end{enumerate} \end{multicols} \end{exercise} \newpage \subsection{Practice Problems} We've learned several techniques for computing indefinite integrals. Of these \begin{enumerate} \item Basic u-substitution \item Trigonometric substitutions (+ trig identities) \end{enumerate} are the tricky ones as there are a lot of possible substitutions to choose from. The other three \begin{enumerate}[resume] \item Integration by Parts \item Trigonometric integrals \item Partial fractions \end{enumerate} are much easier to use.\\ The following problems will require you to use all of the above techniques. You should not expect to {\it see} the solution right away, instead, systemically try different things until you reduce the problem to something that looks familiar. \begin{exercise} Compute the following integrals. \begin{multicols}{2} \begin{enumerate} \item $\int \dfrac{e^x}{(e^x - 1)(e^x - 3)}\: dx$ \item $\int \dfrac{1}{1 + e^x} \: dx$ \item $\int e^{\sqrt{x}} \: dx$ \item $\int \sin \sqrt{x}\: dx$ \item $\int \dfrac{\sin^3 x}{\cos^2 x} \: dx$ \item $\int \dfrac{1 - \sin x}{\cos^2 x}\: dx$ \item $\int \dfrac{1}{1 + \sin x}\: dx$ \item $\int \sqrt{1 + \cos 2x}\: dx$ \item $\int \sec^3 x \tan x\: dx$ \item $\int x \tan^{-1} x\: dx$ \item $\int x^2 \tan^{-1} x\: dx$ \item $\int \tan^{-1} {\sqrt{x}}\: dx$ \item $\int \dfrac{x}{\sqrt{2 + 2x + x^2}} \: dx$ \item $\int \dfrac{1}{\sqrt{2x - x^2}}\: dx$ \item $\int \ln(1 + x^2)\: dx$ \item $\int \tan^{2} x\: dx$ \end{enumerate} \end{multicols} \end{exercise}
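As a final worked illustration (this integral is not in the exercise lists above), here is one chain that combines u-substitution with integration by parts:
\begin{align*}
\int x^3 e^{x^2} \: dx
&= \frac{1}{2}\int u\, e^{u} \: du
 && \text{with } u = x^2,\ du = 2x\,dx\\
&= \frac{1}{2}\left(u\, e^{u} - \int e^{u} \: du\right)
 && \text{by parts with } f = u,\ g = e^{u}\\
&= \frac{1}{2}\,(x^2 - 1)\, e^{x^2} + c
\end{align*}
Differentiating the result is a quick way to check the computation.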
\documentclass{article} \usepackage{eecstex} \usepackage{pgfplots} \title{EE 123 HW 04} \author{Bryan Ngo} \date{2022-02-11} \begin{document} \maketitle \setcounter{section}{2} \section{} \subsection{} Each circular convolution in the overlap-save method will result in a length \(2^v - P + 1\) signal. Then, the cost of the FFT and IFFT is \(\frac{2^v}{2} \log_2(2^v) = v 2^{v - 1}\). Then, there is a \(2^v\)-pointwise multiplication. The total number of multiplications is \(2^v (v + 1)\). Thus, the FFT for each sample will require \begin{equation} \frac{2^v (v + 1)}{2^v - P + 1}. \end{equation} complex multiplications per output sample. \subsection{} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=\(v\), ylabel={Cost}, title={Complex Multiplications}, axis lines=middle, width=0.4\textwidth ] \addplot[ycomb, mark=*, color=blue] table[ col sep=comma, x=v, y=cost ]{q3.csv}; \end{axis} \end{tikzpicture} \end{center} with a minimum cost of \(v = 12\). The direct evaluation would cost \(500\) complex multiplications per output sample, since that is the length of a given sample. \subsection{} \begin{equation} \lim_{v \to \infty} \frac{2^v (v + 1)}{2^v - P + 1} = \lim_{v \to \infty} \frac{v + 1}{1 - \left(\frac{P - 1}{2^v}\right)} = v \end{equation} Thus, for \(P = 500\), the direct method will be more efficient for \(v > 500\). \newpage \section{Fun with FFT} \begin{enumerate} \item For \(0 \leqslant k < N\), \(H[k] = X_r[k] + W_{2N}^k X_i[k]\). \item For \(0 \leqslant k < N\), \(H[k + N] = X_r[k] - W_{2N}^k X_i[k]\). \item \(X[k] = \frac{1}{2} (H[k] + H[k + N]) + \frac{j}{2} W_{2N}^{-k} (H[k] - H[k + N])\), which takes 3 multiplications and 3 additions. \end{enumerate} \newpage \section{Hadamard Transform} \subsection{} \begin{equation} H_3 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{bmatrix} \end{equation} The order that represents increasing frequency content is the sequency ordering. 
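As a numerical sanity check (not part of the graded solution), the $8 \times 8$ matrix above matches the naturally ordered (Sylvester) Hadamard matrix, and the sequency claim can be verified by counting sign changes per row. The sketch below assumes NumPy and the recursion $H_n = H_1 \otimes H_{n-1}$.

\begin{verbatim}
import numpy as np

H1 = np.array([[1,  1],
               [1, -1]])

def hadamard(n):
    """Naturally ordered 2^n x 2^n Hadamard matrix via Sylvester recursion."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(H1, H)
    return H

H3 = hadamard(3)  # matches the 8x8 matrix written out above

# Sequency = number of sign changes along each row; sorting rows by this
# count gives the sequency (increasing frequency content) ordering.
sign_changes = np.count_nonzero(np.diff(H3, axis=1) != 0, axis=1)
sequency_order = np.argsort(sign_changes)

print(sign_changes)    # [0 7 3 4 1 6 2 5] for the natural ordering
print(sequency_order)  # row permutation that sorts by sequency
\end{verbatim}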
\subsection{} \begin{center} \includegraphics[width=0.8\textwidth]{q5b.png} \end{center} \newpage \section{} \subsection{} \begin{align} X[3k] &= \sum_{n = 0}^{N - 1} x[n] W_N^{3kn} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{3kn} + \sum_{n = \frac{N}{3}}^{\frac{2N}{3} - 1} x[n] W_N^{3kn} + \sum_{n = \frac{2N}{3}}^{N - 1} x[n] W_N^{3kn} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{3kn} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{N}{3}\right] W_N^{3kn} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{2N}{3}\right] W_N^{3kn} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \left(x[n] + x\left[n + \frac{N}{3}\right] + x\left[n + \frac{2N}{3}\right]\right) W_N^{3kn} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \underbrace{\left(x[n] + x\left[n + \frac{N}{3}\right] + x\left[n + \frac{2N}{3}\right]\right)}_{x_1[n]} W_{\frac{N}{3}}^{kn} \end{align} \subsection{} \begin{align} X[3k + 1] &= \sum_{n = 0}^{N - 1} x[n] W_N^{n(3k + 1)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{n(3k + 1)} + \sum_{n = \frac{N}{3}}^{\frac{2N}{3} - 1} x[n] W_N^{n(3k + 1)} + \sum_{n = \frac{2N}{3}}^{N - 1} x[n] W_N^{n(3k + 1)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{n(3k + 1)} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{N}{3}\right] W_N^{\frac{N}{3}} W_N^{n(3k + 1)} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{2N}{3}\right] W_N^{\frac{2N}{3}} W_N^{n(3k + 1)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \left(x[n] + x\left[n + \frac{N}{3}\right] W_N^{\frac{N}{3}} + x\left[n + \frac{2N}{3}\right] W_N^{\frac{2N}{3}}\right) W_N^{n(3k + 1)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \underbrace{\left(x[n] + x\left[n + \frac{N}{3}\right] W_N^{\frac{N}{3}} + x\left[n + \frac{2N}{3}\right] W_N^{\frac{2N}{3}}\right) W_N^n}_{x_2[n]} W_{\frac{N}{3}}^{kn} \\ X[3k + 2] &= \sum_{n = 0}^{N - 1} x[n] W_N^{n(3k + 2)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{n(3k + 2)} + \sum_{n = \frac{N}{3}}^{\frac{2N}{3} - 1} x[n] W_N^{n(3k + 2)} + \sum_{n = \frac{2N}{3}}^{N - 1} x[n] W_N^{n(3k + 2)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} x[n] W_N^{n(3k + 2)} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{N}{3}\right] W_N^{\frac{2N}{3}} W_N^{n(3k + 2)} + \sum_{n = 0}^{\frac{N}{3} - 1} x\left[n + \frac{2N}{3}\right] W_N^{\frac{4N}{3}} W_N^{n(3k + 2)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \left(x[n] + x\left[n + \frac{N}{3}\right] W_N^{\frac{2N}{3}} + x\left[n + \frac{2N}{3}\right] W_N^{\frac{4N}{3}}\right) W_N^{n(3k + 2)} \\ &= \sum_{n = 0}^{\frac{N}{3} - 1} \underbrace{\left(x[n] + x\left[n + \frac{N}{3}\right] W_N^{\frac{N}{3}} + x\left[n + \frac{2N}{3}\right] W_N^{\frac{2N}{3}}\right) W_N^{2n}}_{x_3[n]} W_{\frac{N}{3}}^{kn} \end{align} \subsection{} \begin{center} \includegraphics[width=0.7\textwidth]{q6c.png} \end{center} \subsection{} \begin{center} \includegraphics[width=0.8\textwidth]{q6d.png} \end{center} \newpage \section{} \begin{enumerate} \item \(|X[k]| \leqslant N\) for \(k = 0\). \item We want \(x[n]\) to be a constant under the DFT, so we can cancel out the complex exponential terms to obtain \(x[n] = e^{j \theta} W_N^{-kn}\) for all \(\theta \in \R\), and \(k, n \in \Z\). 
\end{enumerate} \newpage \section{} \subsection{} \begin{equation} H_4[k] = \sum_{k = 0}^3 h[n] W_4^{kn} = 1 - W_4^k = 1 - (-j)^k = \{0, 1 + j, 2, 1 - j\} \end{equation} \begin{center} \begin{tikzpicture} \begin{axis}[ xlabel=\(k\), ylabel={\(|H_4[k]|\)}, title={Magnitude of 4-point DFT}, axis lines=middle ] \addplot[ ycomb, color=blue, mark=* ] coordinates { (0, 0) (1, 2^0.5) (2, 2) (3, 2^0.5) }; \end{axis} \end{tikzpicture} \end{center} The DFT is not even, not odd, is conjugate symmetric. The DFT is a high-pass filter since it lets in \(\omega = \pi\), which is the highest frequency, and blocks out \(\omega = 0\), the DC frequency. \subsection{} It is not possible to uniquely identify \(x[n]\) since the expression \begin{equation} X[k] = \frac{Y[k]}{H[k]} \end{equation} involves \(H[0] = 0\), so the expression is undefined at \(k = 0\), meaning that \begin{equation} X[k] = \begin{cases} C & k \equiv 0 \pmod{4} \\ \frac{Y[k]}{1 - (-j)^k} & k \in [1, 3] \pmod{4} \end{cases} \end{equation} for some \(C \in \C\). \subsection{} Using Parseval's theorem for the DFT, \begin{align} \sum_{n = 0}^3 |x[n]|^2 &= \frac{1}{4} \sum_{k = 0}^3 |X[k]|^2 = D \\ &= \frac{1}{4} \left(|X[0]|^2 + |X[1]|^2 + |X[2]|^2 + |X[3]|^2\right) \\ \implies X[k] &= \begin{cases} \pm \sqrt{4D - \frac{1}{\sqrt{2}} |Y[1]|^2 - \frac{1}{2} |Y[2]|^2 - \frac{1}{\sqrt{2}} |Y[3]|^2} & k \equiv 0 \pmod{4} \\ \frac{Y[k]}{1 - (-j)^k} & k \in [1, 3] \pmod{4} \end{cases} \end{align} \subsection{} Using the frequency shift property of the DFT, \begin{equation} \tilde{Y}[k] = X[k] \tilde{H}[(k + 1)_N] \end{equation} Assuming nothing else about \(x[n]\), \begin{equation} X[k] = \begin{cases} \frac{Y[k]}{1 - (-j)^{k + 1}} & k \in [0, 2] \pmod{4} \\ C & k \equiv 3 \pmod{4} \end{cases} \end{equation} for \(C \in \C\). Assuming the sum holds from the previous part, \begin{equation} X[k] = \begin{cases} \frac{Y[k]}{1 - (-j)^{k + 1}} & k \in [0, 2] \pmod{4} \\ \pm \sqrt{4D - \frac{1}{\sqrt{2}} |Y[0]|^2 - \frac{1}{2} |Y[1]|^2 - \frac{1}{\sqrt{2}} |Y[2]|^2} & k \equiv 3 \pmod{4} \\ \end{cases} \end{equation} \end{document}
\section{Background and Preliminaries} \label{sec:bg} This section introduces our running example, necessary background of ML system internals, as well as common types of redundancy. \subsection{Running Example} Example~\ref{ex:1} shows a user-level example ML pipeline---written in SystemDS' DML scripting language with R-like syntax \cite{BoehmADGIKLPR20}---which we use as a running example throughout this paper. \begin{example} [GridSearch LM] \label{ex:1} We read a feature matrix \mat{X} and labels \mat{y}, and extract 10 random subsets of 15 features. For each feature set, we tune the linear regression (lm) hyper-parameters regularization, intercept, and tolerance via grid search and print the loss. \begin{lstlisting} 1: X = read('data/X.csv'); # 1M x 100 2: y = read('data/y.csv'); # 1M x 1 3: for( i in 1:10) { 4: s = sample(15, ncol(X)); 5: [loss, B] = gridSearch('lm', 'l2norm', list(X[,s],y), list('reg','icpt','tol'),...); 6: print("Feature set ["+toString(s)+"]: "+loss); 7: } \end{lstlisting} High-level primitives like \texttt{gridSearch} and \texttt{lm} are themselves script-based built-in functions and imported accordingly. Below functions show their key characteristics in simplified form: \begin{lstlisting} 01: gridSearch = function(...) return(...) { 02: HP = ... # materialize hyper-parameter tuples 03: parfor( i in 1:nrow(HP) ) { # parallel for 04: largs = ... # setup list hyper-parameters 05: rB[i,] = t(eval(train, largs)); 06: rL[i,] = eval(score, list(X,y,t(rB[i,]))); 07: } } 08: lm = function(...) return(...) { 09: if (ncol(X) <= 1024) # select closed-form 10: B = lmDS(X, y, icpt, reg, verbose); 11: else # select iterative 12: B = lmCG(X, y, icpt, reg, tol, maxi, verbose); 13: } 14: lmDS = function(...) return(...) { 15: if (icpt > 0) { 16: X = cbind(X, matrix(1,nrow(X),1)); 17: if (icpt == 2) 18: X = scaleAndShift(X); # mu=0,sd=1 19: } ... 20: A = t(X) %*% X + diag(matrix(reg,ncol(X),1); 21: b = t(X) %*% y; 22: beta = solve(A, b); 23: } 24: lmCG = function(...) return(...) { 25: if (icpt > 0) { 26: X = cbind(X, matrix(1,nrow(X),1)); 27: if (icpt == 2) 28: X = scaleAndShift(X); # mu=0,sd=1 29: } ... 30: while (i<maxi & norm_r2>norm_r2_tgt) { 31: q = t(X) %*% (X %*% ssX_p); ... 32: p = -r + (norm_r2 / old_norm_r2) * p; 33: } } \end{lstlisting} The \texttt{gridSearch} function enumerates and materializes all hyper-parameter combinations $\mat{HP}$ of the passed parameters and value ranges, and invokes training (\texttt{lm}) and scoring (\texttt{l2norm}) functions to find the best model and loss. The \texttt{lm} function in turn dispatches---based on the number of features---either to a closed-form method with $\mathcal{O}(m\cdot n^2 + n^3)$ complexity (\texttt{lmDS}); or an iterative conjugate-gradient method with $\mathcal{O}(m \cdot n)$ per iteration (\texttt{lmCG}), which performs better for many features as it requires $\leq n$ iterations until convergence. \vspace{-0.1cm} \end{example} \subsection{ML Systems Background} \label{sec:mlsys} There is a variety of existing ML systems. Relevant for understanding this paper, are especially the underlying techniques for program and DAG compilation, and operator scheduling~\cite{2019Boehm}. Here, we focus primarily on lazy evaluation and program compilation. \textbf{Program/DAG Compilation:} We distinguish three types of compilation in contemporary ML systems: (1) interpretation or eager execution, (2) lazy expression or DAG compilation, and (3) program compilation. 
First, interpretation as used in R, PyTorch \cite{PaszkeGMLBCKLGA19}, or Python libraries like NumPy \cite{WaltCV11} or Scikit-learn \cite{PedregosaVGMTGBPWDVPCBPD11} execute operations as-is and the host language (e.g., Python) handles the scoping of variables. Second, systems like TensorFlow \cite{AbadiBCCDDDGIIK16}, OptiML \cite{SujeethLBRCWAOO11}, and Mahout Samsara \cite{MahoutSamsara} performing lazy expression evaluation that lazily collects a DAG of operations, which is optimized and executed on demand. Some of these systems---like TensorFlow or OptiML---additionally provide control flow primitives, integrated in the data flow graph. Here, the host language still interprets the control flow, and thus, unrolls operations into a larger DAG. However, recent work like AutoGraph \cite{abs-1810-08061} automatically compiles TensorFlow control flow primitives. Only bound output variables leave the scope of expression evaluation. Third, program compilation in systems like Julia \cite{BezansonEKS17}, SystemML \cite{BoehmDEEMPRRSST16}, SystemDS~\cite{BoehmADGIKLPR20}, and Cumulon \cite{HuangB013} compiles a script into a hierarchy of program blocks, where every last-level block contains DAGs of operations. Accordingly, control flow and variable scoping is handled by the ML system itself. Despite the large optimization scope of lazy expression evaluation and program compilation, unnecessary redundancy cannot be fully eliminated via code motion and common subexpression elimination (CSE) because the conditional control flow is often unknown. \begin{figure}[!t] \centering \includegraphics[scale=0.32]{figures/background} \vspace{-0.25cm} \caption{\label{fig:background}Operator Scheduling and Runtime Plans.} \end{figure} \textbf{Operator Scheduling:} Given a DAG of operations of an expression or program block, operator scheduling then determines an execution order of the individual operations, subject to the explicit data dependencies (i.e., edges) of the data flow graph. The two predominant approaches are sequential and parallel instruction streams. First, a sequential instruction stream linearizes the DAG---in depth- or breadth-first order---into a sequence of instructions that is executed one-at-a-time. For example, Figure~\ref{fig:background} shows a plan of runtime instructions in SystemDS for lines 21-23 of Example 1. A symbol table holds references to live variables and their metadata. Instructions are executed sequentially, read their inputs from a variable map (a.k.a. symbol table), and put their outputs back. Such a serial execution model---as used in PyTorch \cite{PaszkeGMLBCKLGA19} and SystemML \cite{BoehmDEEMPRRSST16,BoehmBERRSTT14}---is simple and allows bounding the memory requirements. Second, parallel instruction streams---as used in TensorFlow \cite{AbadiBCCDDDGIIK16}---leverage inter-operator parallelism: when all inputs of an operation are available, this operation is enqueued for parallel execution. This execution model offers a high degree of parallelism (for many small operations) but makes memory requirements less predictable. \subsection{Sources of Redundancy} \label{sec:redundancy} We can now return to our running example and discuss common sources of fine-grained redundancy. \begin{example} [GridSearch LM Redundancy] The user script from Example~\ref{ex:1} with a $1\text{M} \times 100$ feature matrix $\mat{X}$ and three hyper-parameters (\texttt{reg}, \texttt{icpt}, \texttt{tol} with 6, 3, and 5 values) exhibits multiple sources of redundancy. 
% First, since $\mat{X}$ has 100 features, all calls to \texttt{lm} are dispatched to \texttt{lmDS} and thus, one of the hyper-parameters (\texttt{tol}) is irrelevant and we train five times more models than necessary. % Second, evaluating different $\lambda$ parameters (\texttt{reg}) for \texttt{lmDS} exhibits fine-grained operational redundancy. The core operations $\mat{X}^{\top}\mat{X}$ and $\mat{X}^{\top}\mat{y}$ are independent of \texttt{reg} and thus, should be executed only once for different $\lambda$. % Third, both \texttt{lmDS} and \texttt{lmCG} have the same pre-processing block, and for 2/3 of \texttt{icpt} values, we perform the same \texttt{cbind} operation, which is expensive because it creates an intermediate larger than $\mat{X}$. % Fourth, appending a column of ones does not require re-executing $\mat{X}^{\top}\mat{X}$ and $\mat{X}^{\top}\mat{y}$. Instead we can reuse these intermediates and augment them with $\text{colSums}(\mat{X})$, $\text{sum}(\mat{y})$ and $\text{nrow}(\mat{X})$. Similarly, the random feature sets will exhibit overlapping features whose results can be reused. % %Overall, eliminating all redundancy allows us to reduce the number of floating point operations from XXX to XXX. \end{example} \textbf{Types of Redundancy:} Existing work performs reuse for coarse-grained sub-tasks in ML pipelines \cite{SparksVKFR17, ZhangKR14, VartakTMZ18, XinMMLSP18, ShangZBKECBUK19, DerakhshanMARM20}. Generalizing upon the previous example, we further extend this to common types of \emph{fine-grained} redundancy: \begin{itemize} \item \emph{Full Function or Block Redundancy:} At all levels of the program hierarchy, there is potential for full reuse of the outputs of program blocks. This reuse is a form of function memoization \cite{CrankshawWZFGS17}, which requires deterministic operations. %on the taken control path \item \emph{Full Operation Redundancy:} Last-level operations can be reused for equivalent inputs, given that all non-determinism (e.g., a system-generated seed) is exposed from these operations and cast to a basic input as well. \item \emph{Partial Operation Redundancy:} Operation inputs with overlapping rows or columns further allow reuse by extraction from---or augmentation of---previously computed results. \end{itemize} Together, these different types of redundancy motivate a design with (1) fine-grained lineage tracing, (2) multi-level, lineage-based reuse, and (3) exploitation of both full and partial reuse. \textbf{Applicability in ML Systems:} Fine-grained lineage tracing and reuse is applicable in ML systems with eager execution, lazy evaluation, and program compilation. In contrast, multi-level tracing, deduplication, and reuse require access to control structures and thus, are limited to systems with program compilation scope.
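For instance, the partial operation redundancy above can be seen directly in \texttt{lmDS}: appending an intercept column of ones to $\mat{X}$ changes $\mat{X}^{\top}\mat{X}$ and $\mat{X}^{\top}\mat{y}$ only by summaries that are cheap to compute. The following identity is an illustration of this reuse opportunity, not necessarily the exact rewrite applied by the system:
\begin{equation*}
\begin{bmatrix}\mat{X} & \mathbf{1}\end{bmatrix}^{\top}
\begin{bmatrix}\mat{X} & \mathbf{1}\end{bmatrix}
= \begin{bmatrix}
\mat{X}^{\top}\mat{X} & \mat{X}^{\top}\mathbf{1}\\
\mathbf{1}^{\top}\mat{X} & m
\end{bmatrix},
\qquad
\begin{bmatrix}\mat{X} & \mathbf{1}\end{bmatrix}^{\top} \mat{y}
= \begin{bmatrix}\mat{X}^{\top}\mat{y}\\ \mathbf{1}^{\top}\mat{y}\end{bmatrix},
\end{equation*}
where $\mathbf{1}$ is an $m \times 1$ vector of ones, $\mat{X}^{\top}\mathbf{1}$ is the vector of column sums of $\mat{X}$, and $\mathbf{1}^{\top}\mat{y}$ is the sum of $\mat{y}$. Thus, a cached $\mat{X}^{\top}\mat{X}$ and $\mat{X}^{\top}\mat{y}$ can be augmented instead of recomputed.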
% Template GRASS newsletter - Article
% Language: Latex
%
% Head
\graphicspath{{./images/}}

\title{Manual for QGIS/GRASS}
\subtitle{}
\author{Yann Chemin}

\maketitle

\section{INTRODUCTION TO QGIS}

This manual is valid for QGIS version 0.8 and above (http://www.qgis.org).

\textbf{Launch QGIS}; the first time, it should look like Fig.~\ref{fig:qgis000}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.18]{qgis000.png}
\caption{}
\label{fig:qgis000}
\end{figure}

Open a few vector layers from the sample data provided with QGIS Fig.~\ref{fig:qgis001}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis001.png}
\caption{}
\label{fig:qgis001}
\end{figure}

Select all the layers (Ctrl+a) Fig.~\ref{fig:qgis002}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.28]{qgis002.png}
\caption{}
\label{fig:qgis002}
\end{figure}

The displayed layers should look like this Fig.~\ref{fig:qgis003}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.22]{qgis003.png}
\caption{}
\label{fig:qgis003}
\end{figure}

Zoom to the extent of all the layers together... Fig.~\ref{fig:qgis004}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{qgis004.png}
\caption{}
\label{fig:qgis004}
\end{figure}

Result after the zoom Fig.~\ref{fig:qgis005}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis005.png}
\caption{}
\label{fig:qgis005}
\end{figure}

Add the first layer to the overview panel Fig.~\ref{fig:qgis006}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis006.png}
\caption{}
\label{fig:qgis006}
\end{figure}

Result...
Fig.~\ref{fig:qgis007}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis007.png}
\caption{}
\label{fig:qgis007}
\end{figure}

Open the plugins menu, Fig.~\ref{fig:qgis008}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{qgis008.png}
\caption{}
\label{fig:qgis008}
\end{figure}

It should look like Fig.~\ref{fig:qgis009}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{qgis009.png}
\caption{}
\label{fig:qgis009}
\end{figure}

Select these plugins, Fig.~\ref{fig:qgis010}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{qgis010.png}
\caption{}
\label{fig:qgis010}
\end{figure}

New menus have appeared! Fig.~\ref{fig:qgis011}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{qgis011.png}
\caption{}
\label{fig:qgis011}
\end{figure}

In the Plugins menu, open GRASS, Fig.~\ref{fig:qgis012}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis012.png}
\caption{}
\label{fig:qgis012}
\end{figure}

Select Open Mapset, Fig.~\ref{fig:qgis013}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis013.png}
\caption{}
\label{fig:qgis013}
\end{figure}

\section{THE GRASS PLUGIN IN QUANTUM GIS}

In the View menu, select Browser Panel, Fig.~\ref{fig:qgis014}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.26]{qgis014.png}
\caption{}
\label{fig:qgis014}
\end{figure}

This is the menu that opens; select the map name ``elevation.10m'', Fig.~\ref{fig:qgis015}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.28]{qgis015.png}
\caption{}
\label{fig:qgis015}
\end{figure}

This is the result of loading the GRASS raster layer
Fig.~\ref{fig:qgis016}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.083]{qgis016.png}
\caption{}
\label{fig:qgis016}
\end{figure}

In the same way as with other data types, add this layer to the overview panel, Fig.~\ref{fig:qgis017}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{qgis017.png}
\caption{}
\label{fig:qgis017}
\end{figure}

Result, Fig.~\ref{fig:qgis018}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.11]{qgis018.png}
\caption{}
\label{fig:qgis018}
\end{figure}

Add a GRASS vector layer by selecting the first icon from the left, Fig.~\ref{fig:qgis019}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{qgis019.png}
\caption{}
\label{fig:qgis019}
\end{figure}

This is the menu that opens; select the map named ``streams'' and its data layer ``1\_Line'', Fig.~\ref{fig:qgis020}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{qgis020.png}
\caption{}
\label{fig:qgis020}
\end{figure}

This is the ``streams'' vector layer; open its properties by right-clicking on its name, Fig.~\ref{fig:qgis021}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{qgis021.png}
\caption{}
\label{fig:qgis021}
\end{figure}

The properties dialog looks like this; select the Color button to open a color selection tool.
Change the color to a common blue and confirm, Fig.~\ref{fig:qgis022}, Fig.~\ref{fig:qgis023}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{qgis022.png}
\caption{}
\label{fig:qgis022}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{qgis023.png}
\caption{}
\label{fig:qgis023}
\end{figure}

Select the second button from the left to start the GRASS vector editing module, Fig.~\ref{fig:qgis024}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{qgis024.png}
\caption{}
\label{fig:qgis024}
\end{figure}

The GRASS vector editor dialog can only be opened if a vector layer is selected in the main QGIS window, Fig.~\ref{fig:qgis025}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis025.png}
\caption{}
\label{fig:qgis025}
\end{figure}

Select the Node Tool button (10th from the left) and move the red cross on the map, Fig.~\ref{fig:qgis026}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis026.png}
\caption{}
\label{fig:qgis026}
\end{figure}

The result should look like this (Fig.~\ref{fig:qgis026}). The second button of the toolbar saves the changes made to the vector layer and rebuilds it, Fig.~\ref{fig:qgis027}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{qgis027.png}
\caption{}
\label{fig:qgis027}
\end{figure}

In the launch terminal, the recorded changes appear, Fig.~\ref{fig:qgis028}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.20]{qgis028.png}
\caption{}
\label{fig:qgis028}
\end{figure}

Set up the GRASS plugin environment for data processing...
Fig.~\ref{fig:qgis029}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis029.png}
\caption{}
\label{fig:qgis029}
\end{figure}

This GRASS toolbox is only a thin representation of GRASS's capabilities, but it will be sufficient for the needs of this introduction. It comes with a GRASS mapset browser. It also acts as a data management interface. Fig.~\ref{fig:qgis030}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{qgis030.png}
\caption{}
\label{fig:qgis030}
\end{figure}

The browser is able to open the header and metadata information contained in the selected layers, Fig.~\ref{fig:qgis031}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.22]{qgis031.png}
\caption{}
\label{fig:qgis031}
\end{figure}

The available GRASS modules are listed on the next two pages. More modules are integrated every day; the current number of GRASS modules exceeds 400, so you can see that there is still work to do, and that the community of volunteers is working on it. Fig.~\ref{fig:qgis032}, Fig.~\ref{fig:qgis033}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis032.png}
\caption{}
\label{fig:qgis032}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis033.png}
\caption{}
\label{fig:qgis033}
\end{figure}

\section{DATA PROCESSING WITH THE GRASS PLUGIN}

Let us create some buffers (buffer zones)... Select ``buffering of vectors'' in the module list. It should look like this. Choose a buffer size of 500 meters, Fig.~\ref{fig:qgis034}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{qgis034.png}
\caption{}
\label{fig:qgis034}
\end{figure}

Data processing in progress...
Fig.~\ref{fig:qgis035}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis035.png}
\caption{}
\label{fig:qgis035}
\end{figure}

End of the data processing, Fig.~\ref{fig:qgis036}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis036.png}
\caption{}
\label{fig:qgis036}
\end{figure}

The result should look like this (you will have to load the map yourself!), Fig.~\ref{fig:qgis037}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis037.png}
\caption{}
\label{fig:qgis037}
\end{figure}

Now create another buffer from ``streams'', but this time of 100 meters... Like this, Fig.~\ref{fig:qgis038}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis038.png}
\caption{}
\label{fig:qgis038}
\end{figure}

Now we are going to subtract the 100m buffer from the 500m buffer, because we want to exclude the stream and its proximity zone from our selection area. Find this module! Fig.~\ref{fig:qgis039}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis039.png}
\caption{}
\label{fig:qgis039}
\end{figure}

Processing the overlay of the layers with the Boolean ``NOT'' operator, Fig.~\ref{fig:qgis040}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis040.png}
\caption{}
\label{fig:qgis040}
\end{figure}

The result is that everything within 100m of the streams, and everything beyond 500m of the streams, is removed. Fig.~\ref{fig:qgis041}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis041.png}
\caption{}
\label{fig:qgis041}
\end{figure}

Computing an aspect map from the elevation map, Fig.~\ref{fig:qgis042}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis042.png}
\caption{}
\label{fig:qgis042}
\end{figure}

Processing in progress...
Fig.~\ref{fig:qgis043}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{qgis043.png}
\caption{}
\label{fig:qgis043}
\end{figure}

Result, Fig.~\ref{fig:qgis044}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.2]{qgis044.png}
\caption{}
\label{fig:qgis044}
\end{figure}

\address{GRASS Development Team\\
\url{http://grass.osgeo.org}\\
\email{[email protected]}}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: main_document.tex
%%% End:
{ "alphanum_fraction": 0.7097205636, "avg_line_length": 34.0406504065, "ext": "tex", "hexsha": "d32e36aeae3e6528b7301a425ebc6eacd5d19bba", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d5afa9f0e4c7c2d80674e622cef5795ac59accd5", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "YannChemin/Lectures", "max_forks_repo_path": "QGIS_Tuto/FR_article_QGIS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5afa9f0e4c7c2d80674e622cef5795ac59accd5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "YannChemin/Lectures", "max_issues_repo_path": "QGIS_Tuto/FR_article_QGIS.tex", "max_line_length": 337, "max_stars_count": null, "max_stars_repo_head_hexsha": "d5afa9f0e4c7c2d80674e622cef5795ac59accd5", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "YannChemin/Lectures", "max_stars_repo_path": "QGIS_Tuto/FR_article_QGIS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6857, "size": 20935 }
%-------------------------------------------
% Resume in LaTeX
% Original Author : Sourabh Bajaj
% License : MIT
% Modified By: Kandarp Khandwala
%-------------------------------------------

\documentclass[a4paper,10pt]{article}

% Set the text height and width and the margins such that the page is almost full.
\usepackage[empty]{fullpage}

% Set the default font
\usepackage{tgpagella}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}

% Pretty bulleting
\usepackage{verbatim}
\usepackage{enumitem}

% Some standard packages
\usepackage[hidelinks,pdfauthor={Kandarp Khandwala},pdftitle={Kandarp_Khandwala_Resume}]{hyperref}

% Adjust margins
\addtolength{\oddsidemargin}{-0.50in}
\addtolength{\evensidemargin}{-0.50in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.75in}
\addtolength{\textheight}{1.8in}

\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}

% Sections formatting
\titleformat{\section}{
  \vspace{-13pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-3pt}]

%-------------------------
% Custom commands
%-------------------------
\newcommand{\projectHeading}[2]{
  \begin{tabular*}{1\textwidth}{l@{\extracolsep{\fill}}r}
    \large{\textbf{#1}} & \small{#2}
  \end{tabular*}
}

\newcommand{\resumeItemListStart}{\begin{itemize}[topsep=2pt, parsep=2pt, listparindent=0pt, itemindent=0pt, itemsep=1pt, leftmargin=*]}
\newcommand{\resumeItemListEnd}{\end{itemize}}
\renewcommand\labelitemi{$\cdot$}

%-------------------------------------------
\begin{document}

\centerline{\LARGE{Kandarp Khandwala}}
\centerline{\normalsize{Baltimore, MD \textbar{} \href{mailto:[email protected]}{[email protected]} \textbar{} +1 (443) 763-9251 \textbar{} \href{https://github.com/kandarpck}{Github: kandarpck} }}
\centerline{\normalsize{\href{https://kandarpck.appspot.com/}{Website: kandarpck.appspot.com} \textbar{} \href{https://www.linkedin.com/in/kandarpkhandwala}{LinkedIn: linkedin.com/in/kandarpkhandwala}}}

%\centerline{\Large{Jane Doe}}
%\centerline{\normalsize{\href{mailto:[email protected]}{[email protected]} \textbar{} +91122454747 \textbar{} \href{http://jane.appspot.com/}{jane.appspot.com} \textbar{} \href{https://github.com/jane}{Github: jane} \textbar{} \href{https://www.linkedin.com/in/jane}{LinkedIn: jane}}}

%-----------EDUCATION-----------------
\section{Education}

\projectHeading{Masters in Computer Science \& Security \textbar{} Johns Hopkins University}{Aug. 17 - Dec. 18}
\begin{itemize}[nosep, leftmargin=*]
\item{\small Network Security, Advanced Cryptographic Systems, Blockchains and Cryptocurrencies. \hfill{} GPA: 3.81}
\end{itemize}
\vspace{3pt}
\projectHeading {Bachelors in Computer Engineering \textbar{} Mumbai University}{Jul. 11 - May. 15}
\begin{itemize}[nosep, leftmargin=*]
\item{\small Algorithms, Operating Systems, Data Mining, Advanced Computer Networks, Artificial Intelligence. \hfill{} GPA: 3.61}
\end{itemize}

%-----------EXPERIENCE-----------------
\section{Experience}

\projectHeading{Teaching Assistant \textbar{} Practical Cryptographic System} {Aug. 18 - Dec. 18}
\resumeItemListStart
\item\small{ \textbf{Head TA: } {Assisting Prof. Matthew Green by conducting bi-weekly office hours for 70 students in addition to holding review sessions, assisting with projects, creating rubrics and grading assignments.}}
\resumeItemListEnd

\projectHeading{Software Engineer Intern \textbar{} Verizon} {May. 18 - Aug.
18}
\resumeItemListStart
\item\small{ \textbf{Clean and Safe Driving: }
{Increased user engagement by 5\% by developing the API Backend Microservice to calculate a Safety and Green score based on driving habits and car conditions using metrics like Acceleration, Braking, Speed and Location data.}}
\item\small{ \textbf{Google Assistant Integration: }
{Built a Microservice to enable the user to interact with the Google Assistant ecosystem and ask questions about the status, metrics and current details of their connected car.}}
\resumeItemListEnd

\projectHeading
{Research Assistant \textbar{} Institute of Data Intensive Engineering and Science at JHU %\textbar{} \textit{\small \href{http://idies.jhu.edu}{idies.jhu.edu}}}\
}{Aug. 17 - May. 18}
\resumeItemListStart
\item\small{\textbf{Parallel Data Access: }
{Improved data querying and operations by 20x through parallel access to terabytes of galaxy data, using dynamic task scheduling optimized for computation.}}
\resumeItemListEnd

\projectHeading
{Software Engineer \textbar{} JPMorgan Chase \& Co.}
{Jul. 15 - Jun. 17}
\resumeItemListStart
% \item\small{\textbf{Batch Split}
% {Improved resiliency and performance of the End of Day(EOD) Fixed Income risk batch which reduced Priority 1 issues by 33\% and improved overall timing by one hour.}
\item\small{\textbf{DevOps: }
{Led the DevOps initiative in the team by bringing the support team up to speed using knowledge transfer sessions. \\
Identified pain points and built a dashboard tool unifying applications, which increased productivity by 20\%}}
\vspace{2pt}
\item\small{\textbf{Secured Lending: }
{Interacted with traders and orchestrated the development of a comprehensive suite of five applications for high-value Lending and Collateral transactions, shortening the turnaround time from days to minutes.}}
\resumeItemListEnd

% \projectHeading{Summer Intern \textbar{} Avocation Education Services Pvt. Ltd.}
% {May. 15 - Jul. 15}
% \resumeItemListStart
% \item\small{\textbf{Security and Optimization: }
% {Reduced the web application downtime by 50\% by optimizing the code and streamlining the infrastructure leading to a sustained 3x increase in web traffic over the year.}
% \item\small{\textbf{Server Hardening}
% {Enhanced Web Security by adding API authentication, HTTPS, closing down on unrestricted ports, adding measures to protect against DDoS attacks and implementing Identity \& Access Management.}}
% \resumeItemListEnd

% %-----------Extra Experience-----------------
% \projectHeading{Summer Intern \textbar{} Gray Routes}
% {Jun.
% 14 - Aug 14}
% \resumeItemListStart
% \item\small{\textbf{Intelligent Systems}
% {Conceptualized innovative solutions for last-mile delivery and logistics by combining Location Services and Machine
% Learning for efficient inventory management and optimized route prediction.}
% \resumeItemListEnd

% \resumeSubheading
% {Stupidsid}
% {Developer Intern}{Summer 13 and 15}
% \resumeItemListStart
% \item\small{\textbf{Application Scaling}
% {Revamped the back-end application architecture and API services, leading to a sustained 3x increase in web traffic over the year while also slashing costs to half.}
% \item\small{\textbf{Mobile First}
% {Pioneered the design and development of Android applications which empowered the startup to adopt a distinguished strategy and triumph over the competition.}
% \resumeItemListEnd
%------------------------------------------------

%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
\small{Python, Pandas, Django, \hfill{} Java, Spring, Android, Go, Unix, \hfill{} Microservices, \hfill{} TensorFlow, Snort, \hfill{} AWS, GCP, \hfill{} Solr, \hfill{} Git, SQL, \LaTeX}

%-----------PROJECTS-----------------
\section{Projects and Publications}

\projectHeading{Publicly Verifiable Proof of Data Deletion}{Aug. 18 - Dec. 18}
\resumeItemListStart
\item\small{Building a mechanism to provide a public proof of secure data deletion by encrypting data and storing the keys in a trusted platform module (TPM) like Intel SGX.}
\resumeItemListEnd

\projectHeading{Decentralized Public Key Infrastructure using Ethereum Smart Contracts}{Jan. 18 - May. 18}
\resumeItemListStart
\item\small{Developed a PKI with built-in certificate transparency to make it easy to detect rogue certificates for fine-grained identity management and trust.}
\resumeItemListEnd

\projectHeading{Mini Projects}{Sep. 17 - May. 18}
\resumeItemListStart
\item\small{Simulated an attack on Bitcoin by developing a Miner with 51\% hash power to maximize profits.}
\item\small{Cryptographic Protocol Analysis of the Telegram encryption protocols.}
\item\small{Thorough analysis and reporting of the top 10 OWASP vulnerabilities, demonstrating attacks and defenses against them.}
\resumeItemListEnd

\projectHeading{Personalized Search Engine and Mobile Assistant}{Aug. 14 - Mar. 15}
\resumeItemListStart
\item\small{Programmed a small-scale search engine by extending Apache Solr, delivering high-quality, efficient search results and context-aware, customized information based on the user's current likes and likelihood of future preferences.}
\resumeItemListEnd

\projectHeading{Dynamic Context Aware Traffic System Optimization}{Jan. 14 - May. 14}
\resumeItemListStart
\item\small{Proposed algorithmic changes after statistical analyses on vehicular traffic data leading to 15\% traffic efficiency improvement.}
\item\small{Nominated as the best departmental project. Picked among top ten ideas from 600 national entries at 'All India Science and Engineering' Contest organized by KPIT.}
\item\small{The findings for this research were published in IJERT. arXiv:1407.5212 (2014). doi: 10.5120/17586-8253 Aug. 14}
\resumeItemListEnd

\projectHeading{Real Time Road Bump Data Analysis using Big Query}{Jun. 14 - Jul.
14}
\resumeItemListStart
\item\small{Created a continuous data pipeline processor that managed the entire ETL process on streaming Road Bump data collected using mobile sensors, delivering an optimized route for cheaper transport.}
\resumeItemListEnd

%-----HONORS AND AWARDS-------
\section{Honors and Awards}
\resumeItemListStart
\item\small{\textbf{Top Thinker: } {Award for the 6 most impactful employees among 150 analysts joining JPMorgan.}\hfill{} {Jul. 15}}
\item\small{\textbf{Hackathon - 1st Prize, JPMorgan: } {Among 120 competitors selected from 20 colleges.}\hfill{} {Jul. 14}}
\item\small{\textbf{Hackathon - 2nd Prize, Google Developers Group: } {World Hackathon among 30 participating teams.}\hfill{} {Oct. 14}}
\item\small{\textbf{State Govt. Academic Scholarship: } {Funding 1 year of education for excellence in the field of Computer Sci.}\hfill{} {Jun. 11}}
\resumeItemListEnd

%--------PUBLICATIONS------------
%\section{Publications}
% \resumeItemListStart
% \resumeSubItem{Context Aware Dynamic Traffic Signal Optimization.}
% {Khandwala K, Sharma R, and Rao S. arXiv:1407.5212 (2014). doi: 10.5120/17586-8253}\hfill \textit{Aug. 14}
% \resumeItemListEnd

%--------LEADERSHIP------------
\section{Leadership and Involvement}
\projectHeading{Campus Ambassador, Technical Judge and SME \textbar{} JPMorgan}{Oct. 15 - Nov. 16}
\resumeItemListStart
\item\small{Provided subject matter expertise and conducted training sessions on Git, Android, and AWS for 500 students.}
\resumeItemListEnd

\projectHeading{Event Planner and Technical Lead \textbar{} Indian Society for Technical Education }{Jul. 12 - Jul. 14}
\resumeItemListStart
\item\small{In charge of the Mobile and Web strategy. Built games like Virtual Stock Market Trading which garnered 20000 hits.}
\resumeItemListEnd

%-------------------------------------------
\end{document}
{ "alphanum_fraction": 0.7113560625, "avg_line_length": 49.3797468354, "ext": "tex", "hexsha": "b4a3c3624ba2f6ace8734dd6e8be88c70ed6e113", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2255dd9f015b11816246642e5cab8401cef6d800", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kandarpck/texsume", "max_forks_repo_path": "Kandarp_Khandwala_Resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2255dd9f015b11816246642e5cab8401cef6d800", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kandarpck/texsume", "max_issues_repo_path": "Kandarp_Khandwala_Resume.tex", "max_line_length": 279, "max_stars_count": null, "max_stars_repo_head_hexsha": "2255dd9f015b11816246642e5cab8401cef6d800", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kandarpck/texsume", "max_stars_repo_path": "Kandarp_Khandwala_Resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3070, "size": 11703 }
\section{Update Governance}

%\mnote{I think that we could merge the following part with the previous part of the paper. We should use Section 4 to dig into the delegation mechanism and the activation mechanism. Let's discuss that in our next meeting.}

%\mnote{For the formal description of a transaction please refer to Sec.~\ref{se:bcabstraction}}
{ "alphanum_fraction": 0.7869318182, "avg_line_length": 70.4, "ext": "tex", "hexsha": "6e9adbfe570e7e830cd162dbfb020898504a9f12", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-05-16T10:39:00.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-18T13:38:25.000Z", "max_forks_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_forks_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_forks_repo_name": "MitchellTesla/decentralized-software-updates", "max_forks_repo_path": "papers/FC20/paper/sections/governance.tex", "max_issues_count": 120, "max_issues_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_issues_repo_issues_event_max_datetime": "2021-06-24T10:20:09.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-06T18:29:25.000Z", "max_issues_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_issues_repo_name": "MitchellTesla/decentralized-software-updates", "max_issues_repo_path": "papers/FC20/paper/sections/governance.tex", "max_line_length": 224, "max_stars_count": 10, "max_stars_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da", "max_stars_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_stars_repo_name": "MitchellTesla/decentralized-software-updates", "max_stars_repo_path": "papers/FC20/paper/sections/governance.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-06T02:08:38.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-25T19:38:49.000Z", "num_tokens": 82, "size": 352 }
% =========================================
% COMMAND: _PROC_WAIT
% =========================================
\newpage
\section{\_PROC\_WAIT}
\label{cmd:_PROC_WAIT}

\paragraph{Syntax:}
\subparagraph{}
\texttt{\_PROC\_WAIT <name>*}

\paragraph{Purpose:}
\subparagraph{}
Waits for processes $<$name$>$*.
{ "alphanum_fraction": 0.5115511551, "avg_line_length": 20.2, "ext": "tex", "hexsha": "9561db082c56ba2f0ce0a7cb19dcbecc759562bd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5bbe912ffde2e74b382405f580ef5963bf792288", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "ia97lies/httest", "max_forks_repo_path": "doc/users-guide/local-commands/cmd_proc_wait.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5bbe912ffde2e74b382405f580ef5963bf792288", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "ia97lies/httest", "max_issues_repo_path": "doc/users-guide/local-commands/cmd_proc_wait.tex", "max_line_length": 43, "max_stars_count": 4, "max_stars_repo_head_hexsha": "5bbe912ffde2e74b382405f580ef5963bf792288", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ia97lies/httest", "max_stars_repo_path": "doc/users-guide/local-commands/cmd_proc_wait.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-01T13:22:10.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-16T07:47:43.000Z", "num_tokens": 76, "size": 303 }
% \documentclass[11pt,a4paper,twocolumn,titlepage]{article}
\documentclass[11pt,a4paper,titlepage]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage[pdftex]{hyperref}
\setlength{\parskip}{1em}

\title{CS 6230 Project Report\\\textbf{Parallel Computation of Betweenness Centrality}}
\author{Rui Dai, Sam Olds}
\date{\today}

\begin{document}
\maketitle
\newpage

% Outline:
% 1.) Introduction
%   * What is centrality
%   * Different types
%     * Degree
%     * Closeness
%     * Betweenness
%     * Page Rank!
%     * Many More: Eigenvector centrality, Katz centrality, Percolation
%       centrality, Cross-clique centrality, Freeman Centralization
%   * Why is it important
%   * What problems does it solve
%   * How is it used
%   * The type of centrality we focused on was betweenness centrality - it's
%     a lot of shortest path calculations.
% 2.) Related work
%   * How have other people already approached this problem
%   * The best known algorithms for different types of centrality
%   * Algorithms we looked at
%     * Floyd-Warshall $O(V^3)$
%     * Brandes $O(VE)$
% 3.) Implementation
%   * Challenges of implementing this in parallel
%   * Challenges of finding a validly large social graph
%   * Started with an adjacency matrix and matrix multiplication
%   * Moved onto brandes algorithm using an adjacency list
%     \cite{brandes2001faster}
%   * Finally, implemented a parallelized version of brandes found here
%     \cite{bader2006parallel}
% 4.) Experimentation
%   * Began with synthetic graphs. generated graphs this way
%   * Moved to facebook graph with 4000 vertices \cite{leskovec2012learning}
%   * Ended with a gplus graph with 107000 vertices
%     \cite{leskovec2012learning}
% 5.) Results
%   * Challenges of rendering large graphs
%   * Challenges of validating centrality
%   * Graphs of strong and weak scalability.
%   * Graphs of centrality found
% 6.) Conclusion
%   * MPI and OpenMP really sped up the processing time
% 7.) Future work
%   * We could continue working to improve the running time and finding more
%     graphs to run it on
%   * We used a shared memory model, it would be interesting to try and break
%     up the graph into a distributed memory model.


%% ============================= Introduction ============================= %%
\section{Introduction} % 1. introduction and motivation
\label{sec:intro}
% * What is centrality
% * Different types
%   * Degree
%   * Closeness
%   * Betweenness
%   * Page Rank!
%   * Many More: Eigenvector centrality, Katz centrality, Percolation
%     centrality, Cross-clique centrality, Freeman Centralization
% * Why is it important
% * What problems does it solve
% * How is it used
% * The type of centrality we focused on was betweenness centrality - it's
%   a lot of shortest path calculations.
In graph theory, centrality indicates the importance of a vertex in a network.
This concept is naturally applied in social network analysis. Imagine you are
launching a new product and want to find beta users: it is natural to target
users with high centrality, who can spread the news through their reachable
networks. There are many different definitions of centrality, e.g.\ degree
centrality, closeness centrality and betweenness centrality. In our project, we
use vertex betweenness centrality, which quantifies the number of times a node
acts as a bridge along the shortest paths between two other nodes. It is not
hard to imagine that computing centrality is very expensive, since it involves
many shortest-path computations.
The goal of this project is to parallelize this computation, leveraging the
MPI/OpenMP technology we learned in class. In this report, we first introduce
the data preparation and graph representation, and then explain the algorithms
we use together with performance experiments.


%% ============================= Related Work ============================= %%
\section{Related Work} % 2. related work
\label{sec:related-work}
% * How have other people already approached this problem
% * The best known algorithms for different types of centrality
% * Algorithms we looked at
%   * Floyd-Warshall $O(V^3)$
%   * Brandes $O(VE)$
Centrality was first introduced by Linton Freeman~\cite{burt2009structural} as
a measure for quantifying the control of a human over the communication between
other humans in a social network. Ever since, the concept has drawn much
interest across disciplines such as network analysis and social science.
Computing betweenness centralities in a graph involves calculating the shortest
paths between all pairs of vertices, which requires $\Theta(V^3)$ time with the
Floyd–Warshall~\cite{Cormen:2001:IA:580470} algorithm. On sparse graphs,
Johnson's~\cite{johnson1977efficient} algorithm may be more efficient, taking
$O(V^2 \log V + V E)$ time. In the case of unweighted graphs, the calculations
can be done with Brandes' algorithm~\cite{brandes2001faster}, which takes
$\Theta(VE)$ time. Normally, these algorithms assume that graphs are undirected
and connected, with loops and multiple edges allowed. When specifically dealing
with network graphs, the graphs often have no loops or multiple edges, in order
to maintain simple relationships (where edges represent connections between two
people, i.e.\ vertices). In this case, Brandes' algorithm divides the final
centrality scores by 2 to account for each shortest path being counted twice.

Breadth-first search is also a major component of the centrality computation,
since it underlies the shortest-path calculations. Parallel BFS algorithms have
attracted many researchers; among them, \cite{beamer2013direction} proposed a
direction-optimizing approach to BFS.


%% ============================= Data ============================= %%
\section{Implementation} % 3. methods/algorithms
\label{sec:data}
% * Challenges of implementing this in parallel
% * Challenges of finding a validly large social graph
% * Started with an adjacency matrix and matrix multiplication
% * Moved onto brandes algorithm using an adjacency list
%   \cite{brandes2001faster}
% * Finally, implemented a parallelized version of brandes found here
%   \cite{bader2006parallel}
We implemented our algorithms in C++ using OpenMP and MPI. Our final submission
employs an adjacency list to represent the graph and uses a parallel version of
Brandes' algorithm for calculating betweenness centrality. We will examine the
results more closely later, but it appears that this implementation does not
scale well, most likely because the graph is kept in shared memory.
Distributing the graph was a large hurdle we tried to avoid.

\subsection{Challenges}
The largest challenge was handling a graph distributed across processors.
Trying to partition the graph evenly among the workers could be a project on
its own. We encountered this problem when breaking up the adjacency matrix
among processors: each processor might not have all of the edges attached to
each vertex in memory, which would complicate things. This was solved simply by
putting the whole graph in shared memory.
However, it is believed that this was the cause of the poor scaling
performance.

The next challenge that arose was getting the graph data. During initial
development, a synthetic graph was used, built by looping through every vertex
for each vertex and creating an edge between them with a 25\% probability. Once
we were comfortable with our implementation, we found a graph of Facebook
friend circles used in a Stanford paper \cite{leskovec2012learning} with 4039
vertices and 88234 edges. We then found an even larger dataset of Google+ users
used in the same paper. This graph has 107614 vertices and 13673453 edges.

Validating the results was another challenge. We spent a fair number of hours
finding a way to visually render the graphs. The Graphviz open source project
was used, but was somewhat buggy. Ultimately, our validation of the metrics
became a visual inspection of the rendered graphs, which is less than ideal.
However, the nodes that were marked with higher centrality seemed to be the
correct ones.

\subsection{Graph Representation}
At first, the underlying representation we used was a sparse adjacency matrix.
We thought that this would allow for easier parallelization when performing the
matrix multiplication. However, it presented a few challenges during
implementation. First, an $n \times n$ matrix becomes memory-demanding, even
though most of the values are simply $0$. Second, we used the vertex ids as the
indices into the $n^2$ array, which did not work for the Google+ dataset. This
dataset uses $21$-digit identifiers, which means a separate data structure
would have been needed to map the vertex ids back to the matrix indices.

Instead, we switched to using an adjacency list to decrease the memory
footprint and simplify the vertex id handling. This had a few additional
advantages. With an adjacency list, we could easily figure out the total number
of vertices in the graph by just getting the size of the list. This also made
it easy to split work among processors by making each processor handle some
chunk of the list. This made implementing the Brandes algorithm more
straightforward, as this was the underlying graph representation used in that
paper.

\subsection{Algorithms}
Effectively, we implemented two different parallel algorithms for calculating
the betweenness centrality of a graph. As we mentioned in the introduction
section, vertex betweenness centrality is formally defined as:
\[ g(v) = \sum_{s \neq v \neq t}{\frac{\sigma_{st}(v)}{\sigma_{st}}} \]
where $\sigma_{st}$ is the number of shortest paths from $s$ to $t$ and
$\sigma_{st}(v)$ is the number of those paths that pass through $v$.

\subsubsection{Matrix Multiplication}
Calculating betweenness centrality can be naively accomplished by calculating
the shortest path from every vertex to every other vertex. This was our initial
approach, as we demonstrated during the presentation. We used a breadth first
search technique based on matrix multiplication because it was believed that
this would make it easy to parallelize.

Using matrix multiplication to find the shortest paths works as follows. For
our $n \times n$ adjacency matrix, we initialize a vector of size $n$ to all
$0$s and set the entry of an arbitrary root vertex to $1$. Then, we simply
perform a matrix multiplication with this vector. The resulting vector can be
interpreted to mean that every entry $n_i$ that is non-zero corresponds to a
vertex directly reachable from the root node. We keep track of this metadata at
the end of each iteration so we know the predecessors. We then take the
resulting vector and use it as the vector we multiply the matrix with in the
next iteration.
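As a rough illustration of this frontier expansion (this is not our C++/MPI
code, only a small Python sketch under the assumption of a dense $0/1$
adjacency matrix; the function and variable names are purely illustrative):

\begin{verbatim}
# Illustrative sketch: BFS levels via repeated matrix-vector products.
# Assumes A is a dense 0/1 numpy adjacency matrix of an undirected graph.
import numpy as np

def bfs_levels(A, root):
    n = A.shape[0]
    level = -np.ones(n, dtype=int)         # -1 means "not reached yet"
    frontier = np.zeros(n)
    frontier[root] = 1.0
    level[root] = 0
    depth = 0
    while frontier.any():
        depth += 1
        reached = A @ frontier              # non-zero: one more hop away
        new = (reached > 0) & (level < 0)   # first time we see these vertices
        level[new] = depth
        frontier = new.astype(float)        # feeds the next multiplication
    return level
\end{verbatim}

Each multiplication in this sketch corresponds to one level of the search
described above.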
Repeating the multiplication, the next resulting vector gives all of the
vertices reachable from the root node in two steps. We repeat this process
until all vertices have been reached. This simple technique seemed easy to
parallelize because we could just send chunks of the matrix to each processor.
However, once we discovered that not all of the edges connected to each vertex
would be in memory for each processor, we decided to look into different
methods.

\subsubsection{Brandes' Algorithm}
We came across a paper by Brandes \cite{brandes2001faster}, which described an
algorithm for calculating betweenness centrality efficiently. Instead of
running in $\Theta(V^3)$ time, like the Floyd-Warshall algorithm, this
algorithm costs $\Theta(VE)$. We implemented this algorithm and tried to find a
good way to parallelize it. Ultimately, we found a published parallel algorithm
that was based on Brandes' algorithm. We implemented the parallel version
(presented as \textit{Algorithm 1} in a paper by Bader and Madduri
\cite{bader2006parallel}). Fortunately, this algorithm requires very few
modifications from the original. In addition to some very minor tweaks, the
only additions were a number of directives, such as \texttt{\#pragma omp
parallel for}, and the associated declaration of critical sections and shared
memory management. All of our experimentation and results are from this
implementation.


%% ============================= Methods ============================= %%
\section{Experimentation} % 3. methods/algorithms
\label{sec:methods}
% * Began with synthetic graphs. generated graphs this way
% * Moved to facebook graph with 4000 vertices \cite{leskovec2012learning}
% * Ended with a gplus graph with 107000 vertices \cite{leskovec2012learning}
We began testing our implementation of the parallel Brandes algorithm using
synthetic graphs. We randomly generated this data by doubly-looping through
every vertex and adding an edge with a probability of about 25\%. This
obviously did not create accurate ``social'' graphs, because there would be no
clusters. However, we were able to try and visually validate our implementation
using the Graphviz renderings:

\begin{center}
  \includegraphics[width=0.9\textwidth]{figures/synthetic64_2}
  %\caption{Synthetic with ~64 edges}
\end{center}
\begin{center}
  \textit{Figure 1: Synthetic with ~64 edges}
\end{center}

\begin{center}
  \includegraphics[width=0.9\textwidth]{figures/synthetic512_1}
  %\caption{Synthetic with ~512 edges}
\end{center}
\begin{center}
  \textit{Figure 2: Synthetic with ~512 edges}
\end{center}

The vertices with deeper reds have larger betweenness centrality metric values.
Once we were comfortable with our implementation, we found datasets used in a
Stanford paper by Leskovec and McAuley \cite{leskovec2012learning}. We
downloaded this dataset and read it into memory. We ran this through our system
and generated the following Graphviz renderings:

\begin{center}
  \includegraphics[width=0.9\textwidth]{figures/facebook}
  %\caption{Facebook betweenness centrality calculation}
\end{center}
\begin{center}
  \textit{Figure 3: Facebook betweenness centrality calculation}
\end{center}


%% ============================= Results ============================= %%
\section{Results} % 4. experiments and results
\label{sec:results}
% * Challenges of rendering large graphs
% * Challenges of validating centrality
% * Graphs of strong and weak scalability.
% * Graphs of centrality found
While the graphs we were able to generate were quite exciting, unfortunately
our implementation appears to have poor scaling performance. This is most
likely due to memory contention from having the whole graph in shared memory.

\begin{center}
  \includegraphics[width=0.8\textwidth]{figures/strong}
  %\caption{Strong scaling for betweenness centrality on a synthetic graph with
  % 10000 edges}
\end{center}
\begin{center}
  \textit{Figure 4: Strong scaling for betweenness centrality on a synthetic
  graph with 10000 edges}
\end{center}

\begin{center}
  \includegraphics[width=0.8\textwidth]{figures/weak}
  % \caption{Weak scaling for betweenness centrality on a synthetic graph with
  % 100 edges per process}
\end{center}
\begin{center}
  \textit{Figure 5: Weak scaling for betweenness centrality on a synthetic
  graph with 100 edges per process}
\end{center}

This figure indicates that our implementation has very poor weak scaling. We
hypothesize that this stems from the graph being kept in shared memory.


%% ============================= Conclusion ============================= %%
\section{Conclusion} % 5. conclusions and future directions
\label{sec:conclusion}
Our implementation of Brandes' algorithm could be improved by distributing the
memory across the processors instead of using shared memory. With that said, we
achieved some interesting results in finding the vertices in a graph that have
high betweenness centrality.


%% ============================= Future Work ============================= %%
\section{Future Work}
\label{sec:future-work}
% * We could continue working to improve the running time and finding more
%   graphs to run it on
% * We used a shared memory model, it would be interesting to try and break
%   up the graph into a distributed memory model.
Looking forward, there are a number of improvements we could make. Our
scalability results showed poor performance, which is likely due to all of the
contention over the shared memory. If we had more time, we would like to
explore different options for distributing the graph across processors. In
addition, we would like to find existing datasets with published betweenness
centrality results so that we can validate our metrics. It would also be
helpful to try different graph densities and see how our algorithm scales in
those situations.


%% ============================= Bibliography ============================= %%
\newpage
\bibliographystyle{acm}
\bibliography{references}

\end{document}
{ "alphanum_fraction": 0.7542097489, "avg_line_length": 43.5089974293, "ext": "tex", "hexsha": "3d2daef31b691f81719d2976cdabdf7e84947c73", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bd934ecfbf762f27d443d05025dfed348404e353", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "drstarry/Parallel_BFS", "max_forks_repo_path": "report/project.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bd934ecfbf762f27d443d05025dfed348404e353", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "drstarry/Parallel_BFS", "max_issues_repo_path": "report/project.tex", "max_line_length": 87, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bd934ecfbf762f27d443d05025dfed348404e353", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "drstarry/Parallel_BFS", "max_stars_repo_path": "report/project.tex", "max_stars_repo_stars_event_max_datetime": "2017-08-04T00:22:38.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-04T00:22:38.000Z", "num_tokens": 3866, "size": 16925 }
\section{Outlook}\label{outlook}

Aurora, as part of the Borealis project that added support for parallel and distributed execution, was commercialized as StreamBase Inc. in 2003 and was the first commercial real-time stream processing engine~\cite{AuroraBorealis2016}. The company provides an enterprise data stream management system and was bought by TIBCO in 2013~\cite{TIBCObuysStreamBase}, which continues to offer enterprise solutions for streaming applications.~\footnote{\url{https://www.tibco.com}. Last accessed on 09.07.2019}

Members of the Stratosphere project continued to publish until 2016. With the transition to Apache Flink, the project now supports stream and batch processing. The team followed their research goals, which we described in Chapter~\ref{stratosphereImprovement}, and mainly focused on parallelization and optimization topics~\cite{StratospherePublications}. In parallel to the academic Stratosphere project, some of the authors founded ``data artisans'', the first company to offer an enterprise solution based on Apache Flink, which continues to contribute to Flink's open source development. Data artisans was renamed to Ververica~\cite{VervericaAbout} and sold to Alibaba in January 2019~\cite{VervericaAlibaba}. These developments and the large industrial user base show that the Stratosphere project was hugely successful.

In the process of our work on stream and batch processing, we noticed an interesting dichotomy between processing data as a stream and as a batch. Aurora, a data stream management system, generates small batches of data, called tuple trains, before processing them. The developers call this principle \textit{train scheduling}. Further, the system tries to pipeline the execution of each tuple train over as many operators as possible, which is called \textit{superbox scheduling}. This means that the self-declared streaming platform relies on small batches of data for efficient processing. Stratosphere, on the other hand, was developed as a batch processing system. Interestingly, Stratosphere's execution engine aims to process each data tuple as soon as it arrives, which can lead to some tuples being several operations ahead of other tuples from the same data set. Therefore Stratosphere decomposes a batch and, at least on each Task Manager, creates a pipelined stream out of it.

There is a third processing principle for big data, called micro-batching, which treats arriving data as small batches and then processes those small batches together. This technique is employed by the popular Apache Spark. In addition, Apache Flink allows stream as well as batch processing with the same engine. Following these observations, it seems that for big data applications a combination of situationally treating data as a stream and as a batch is the key to a high-performing system. Incoming batches may be partly formed into a stream and vice versa.
{ "alphanum_fraction": 0.8200136612, "avg_line_length": 244, "ext": "tex", "hexsha": "eefd36fd78703e5708012db7fab20636d1a03c10", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fc5dd559cb4098c8abdd305a56a98f3c9d1170a6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jo-jstrm/report-stream-batch-processing", "max_forks_repo_path": "tex/Outlook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fc5dd559cb4098c8abdd305a56a98f3c9d1170a6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jo-jstrm/report-stream-batch-processing", "max_issues_repo_path": "tex/Outlook.tex", "max_line_length": 844, "max_stars_count": null, "max_stars_repo_head_hexsha": "fc5dd559cb4098c8abdd305a56a98f3c9d1170a6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jo-jstrm/report-stream-batch-processing", "max_stars_repo_path": "tex/Outlook.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 609, "size": 2928 }
\documentclass[12pt,twoside]{article}
\usepackage{mathtools} % for DeclarePairedDelimiter
\usepackage{listings}
\usepackage{graphicx} % Required for including images

\newcommand{\reporttitle}{Foundations of Machine Learning}
\newcommand{\reportauthor}{Lionel Ngoupeyou Tondji}
\newcommand{\reporttype}{Assignment: Statistics and Probability}
\newcommand\myeq{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily iid}}}{=}}}

\begin{document}
% front page
% Last modification: 2016-09-29 (Marc Deisenroth)
\begin{titlepage}

\newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here

%----------------------------------------------------------------------------------------
% LOGO SECTION
%----------------------------------------------------------------------------------------
\begin{center}
\includegraphics[height = 2cm]{qla}\hspace{4mm}
\includegraphics[height=2cm]{aims-rwanda}
\end{center}

\begin{center} % Center remainder of the page
%----------------------------------------------------------------------------------------
% HEADING SECTIONS
%----------------------------------------------------------------------------------------
\textsc{\LARGE \reporttype}\\[1.5cm]
\textsc{\Large African Institute for Mathematical Sciences}\\[0.5cm]
\textsc{\large Quantum Leap Africa}\\[0.5cm]

%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\HRule \\[0.4cm]
{ \huge \bfseries \reporttitle}\\ % Title of your document
\HRule \\[1.5cm]
\end{center}

%----------------------------------------------------------------------------------------
% AUTHOR SECTION
%----------------------------------------------------------------------------------------
\begin{minipage}{0.4\hsize}
\begin{flushleft} \large
\textit{Author: Lionel Ngoupeyou Tondji}\\
\end{flushleft}
\vspace{2cm}
\makeatletter
Date: \@date
\end{minipage}

\vfill % Fill the rest of the page with whitespace
\makeatother
\end{titlepage}

\section{Discrete Models}

\subsection*{c)}
The following graph shows the evidence for each model.
\begin{center}
\includegraphics{../scatter}
\end{center}

\subsection*{d)}
\subsubsection*{1)}
The table below gives the probabilities of the 11 different possibilities under each model.\\ \\
\begin{tabular}{|c|c|c|}
\hline
Bethany-model & Charlotte-model & Davina-model \\
\hline
0 & 0.00000000e+00 & 0.00000000e+00 \\
\hline
0 & 0.00000000e+00 & 7.78633801e-09\\
\hline
0 & 5.66695902e-05 & 2.79676219e-05 \\
\hline
0 & 0.00000000e+00 & 2.13746139e-03 \\
\hline
0 & 6.19681968e-02 & 3.05825945e-02 \\
\hline
1 & 0.00000000e+00 & 1.55251489e-01 \\
\hline
0 & 7.05856492e-01 & 3.48354866e-01 \\
\hline
0 & 0.00000000e+00 & 3.44952256e-01 \\
\hline
0 & 2.32118641e-01 & 1.14555379e-01 \\
\hline
0 & 0.00000000e+00 & 4.13797926e-03 \\
\hline
0 & 0.00000000e+00 & 0.00000000e+00 \\
\hline
\end{tabular}

\subsubsection*{2)}
Predictive distribution under each model:\\\\
\begin{tabular}{|c|c|c|}
\hline
Bethany-model & Charlotte-model & Davina-model \\
\hline
0.5 & 0.63400742105992502 & 0.63635359808933689 \\
\hline
\end{tabular}

\subsection*{e)}
\subsubsection*{1)}
According to the graph in question c), we can say that all three girls are
right, because the graph is in accord with what they said at the beginning.
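As a quick sanity check of the predictive probabilities in part d) above, the
following small Python sketch recomputes Davina's entry. It assumes that the 11
possibilities correspond to the evenly spaced proportions $0, 0.1, \dots, 1$ of
white balls and that the predictive probability of drawing a white ball is the
weighted average $\sum_{\theta} \theta \, p(\theta \mid \text{data})$ under
each model; the variable names are illustrative only.

\begin{verbatim}
# Recompute the predictive probability for Davina's model from the table above.
davina = [0.0, 7.78633801e-09, 2.79676219e-05, 2.13746139e-03, 3.05825945e-02,
          1.55251489e-01, 3.48354866e-01, 3.44952256e-01, 1.14555379e-01,
          4.13797926e-03, 0.0]
thetas = [i / 10 for i in range(11)]           # 0.0, 0.1, ..., 1.0
predictive = sum(t * p for t, p in zip(thetas, davina))
print(predictive)                              # ~0.6364, matching the table
\end{verbatim}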
\subsubsection*{2)}
What would happen if Andrew had drawn 130 white balls out of 200?\\
\begin{center}
\includegraphics{../index1}
\end{center}
As we can see on the graph, when $nTotal$ is very big, Charlotte's model and Bethany's model tend to be the same.

\section{Continuous Models}
\subsection*{a)}
Derive Fred's maximum likelihood solution for the parameters of his model.

The parameters of Fred's model are $\theta = (\mu, \sigma^2)$.\\
Likelihood: the probability of the observed data $x$ given the parameters $\theta$.\\
The likelihood is given by:
\begin{equation*}
p(x| \theta) = p(x_1,x_2,\dots, x_N| \theta) = \prod_{n=1}^{N} p(x_n| \theta)
\end{equation*}
Our goal is to find the parameters that maximize the likelihood $p(x| \theta)$.\\
The steps for finding the maximum are:
\begin{itemize}
\item Compute the gradient with respect to $\theta$
\item Set the gradient to 0
\item Solve for $\theta$
\end{itemize}
So we have:
\begin{equation*}
p(x| \theta) = \prod_{n=1}^{N} p(x_n| \theta) = \prod_{n=1}^{N} \mathcal{N}(x_n| \mu, \sigma^2)
\end{equation*}
But instead of maximizing the likelihood, we will maximize the log-likelihood, because log is a strictly monotonically increasing function. Why are we allowed to make this transformation? Because:
\begin{itemize}
\item We obtain the same maximum
\item The gradients are easier
\item There are fewer numerical problems
\end{itemize}
Thus, the log likelihood is given by:
\begin{align*}
log \,p(x| \theta) &= log \, \prod_{n=1}^{N} p(x_n| \theta) \\
&= \sum_{n=1}^{N} log \, \left( \frac{1}{\sqrt{2 \pi \sigma^2}} exp\big(-\frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2\big) \right) \\
&= -\frac{N}{2} log\big(2 \pi \sigma^2\big) + \sum_{n=1}^{N} \, \left( -\frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) \\
\end{align*}
The gradient of the log likelihood with respect to $\mu$ is given by:
\begin{align*}
\frac{\partial}{\partial \mu}log \,p(x| \theta) &= \frac{\partial}{\partial \mu} \left( -\frac{N}{2} log\big(2 \pi \sigma^2\big) + \sum_{n=1}^{N} \, \left( -\frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) \right) \\
&= \sum_{n=1}^{N} \, \frac{\partial}{\partial \mu} \left(-\frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) \\
&= \sum_{n=1}^{N} \, \left(\frac{1}{\sigma^2}\big(x_n - \mu\big) \right) \\
\end{align*}
Setting the gradient that we computed before to zero, we obtain:
\begin{gather*}
\sum_{n=1}^{N} \, \left(\frac{1}{\sigma^2}\big(x_n - \mu\big) \right) = 0 \\
\Longrightarrow \sum_{n=1}^{N} \,x_n - N \mu = 0
\end{gather*}
So we obtain:
\begin{equation}
\boxed{ \mu _{ML} = \frac{1}{N} \sum_{n=1}^{N} x_n }
\end{equation}
Let us pose $\alpha = \sigma^2$.\\
The gradient of the log likelihood with respect to $\sigma ^2$ is given by:
\begin{align*}
\frac{\partial}{\partial \alpha}log \,p(x| \theta) &= \frac{\partial}{\partial \alpha} \left( -\frac{N}{2} log\big(2 \pi \alpha \big) + \sum_{n=1}^{N} \, \left( -\frac{1}{2 \alpha}\big(x_n - \mu\big)^2 \right) \right) \\
&= \frac{\partial}{\partial \alpha} \left( -\frac{N}{2} log\big(2 \pi \alpha \big)\right) + \frac{\partial}{\partial \alpha} \left( \sum_{n=1}^{N} \, \left( -\frac{1}{2 \alpha}\big(x_n - \mu\big)^2 \right) \right) \\
&= -\frac{N}{2 \alpha} + \sum_{n=1}^{N} \, \frac{1}{2 \alpha^2}\big(x_n - \mu\big)^2 \\
\end{align*}
Setting the gradient that we computed before to zero, we obtain:
\begin{gather*}
-\frac{N}{2 \alpha} + \sum_{n=1}^{N} \, \frac{1}{2 \alpha^2}\big(x_n - \mu\big)^2 = 0 \\
\Longrightarrow -N \alpha + \sum_{n=1}^{N} \, \big(x_n - \mu\big)^2 = 0
\end{gather*}
Replacing $\alpha$ by $\sigma^2$:
\begin{equation*}
-N \sigma^2 + \sum_{n=1}^{N} \, \big(x_n - \mu\big)^2 = 0
\end{equation*}
So we obtain:
\begin{equation}
\boxed{ \sigma^2 _{ML} = \frac{1}{N} \sum_{n=1}^{N} \, \big(x_n - \mu\big)^2}
\end{equation}

\subsection*{b)}
Using only the properties of expectation and variance of linear combinations of iid random variables, show that the maximum likelihood estimator $\sigma^2 _{ML}$ is biased (that is, taking the expectation of the maximum likelihood estimator with respect to $\mathcal{N}(\mu, \sigma^2)$ does not return $\sigma^2$).\\
\\
From the previous question, we already have the value of $\sigma^2 _{ML}$:
\begin{align*}
\sigma^2 _{ML} &= \frac{1}{N} \sum_{n=1}^{N} \, \big(x_n - \mu\big)^2 \\
&= \frac{1}{N} \sum_{n=1}^{N} \, \big(x_n - \mu_{ML}\big)^2 \\
&= \frac{1}{N} \sum_{n=1}^{N} \, x_n^2 - \frac{2}{N} \sum_{n=1}^{N} \, x_n \mu_{ML} + \mu_{ML}^2 \\
&= \frac{1}{N} \sum_{n=1}^{N} \, x_n^2 - 2 \mu_{ML} \left( \frac{1}{N} \sum_{n=1}^{N} \, x_n \right) + \mu_{ML}^2 \\
&= \frac{1}{N} \sum_{n=1}^{N} \, x_n^2 - 2 \mu_{ML}^2 + \mu_{ML}^2 \\
&= \frac{1}{N} \sum_{n=1}^{N} \, x_n^2 - \mu_{ML}^2 \\
\end{align*}
Now we can compute the expectation of $\sigma^2 _{ML}$:
\begin{align*}
E(\sigma^2 _{ML}) &= E\left(\frac{1}{N} \sum_{n=1}^{N} \, x_n^2 - \mu_{ML}^2 \right) \\
&= E \left(\frac{1}{N} \sum_{n=1}^{N} \, x_n^2 \right) - E(\mu_{ML}^2) \\
&= \frac{1}{N} \sum_{n=1}^{N} E(x_n^2) - E(\mu_{ML}^2) \\
& \myeq E(x_n^2) - E(\mu_{ML}^2) \\
\end{align*}
According to the alternative definition of variance, $\sigma_x^2 = E(x^2) - E(x)^2$ and similarly, $\sigma_{\mu_{ML}}^2 = E(\mu_{ML}^2) - E(\mu_{ML})^2$, where the random variable is $\mu_{ML}$. We can notice that $E(x) = E(\mu_{ML}) = \mu$. Plugging these two equations into the derivation:
\begin{align*}
E(\sigma^2 _{ML}) &= (\sigma_x^2 + \mu^2) - (\sigma_{\mu_{ML}}^2 + \mu^2) \\
&= \sigma_x^2 - \sigma_{\mu_{ML}}^2 \\
\end{align*}
We have:
\begin{align*}
\sigma_{\mu_{ML}}^2 &= Var(\frac{1}{N} \sum_{n=1}^{N} x_n) \\
&= \frac{1}{N^2}Var( \sum_{n=1}^{N} x_n) \\
& \myeq \frac{1}{N^2} \sum_{n=1}^{N} Var( x_n) \\
&= \frac{1}{N^2} \sum_{n=1}^{N} \sigma_x^2 \\
&= \frac{1}{N} \sigma_x^2 \\
\end{align*}
Plugging this back into the $E(\sigma^2 _{ML})$ derivation, we have:
\begin{align*}
E(\sigma^2 _{ML}) &= \sigma_x^2 - \sigma_{\mu_{ML}}^2 \\
&= \sigma_x^2 - \frac{1}{N} \sigma_x^2\\
&= \frac{N-1}{N} \sigma_x^2\\
\end{align*}
Since $\sigma_x^2 = \sigma^2$ is the true variance, we get:
\begin{equation}
E(\sigma^2 _{ML}) = \frac{N-1}{N} \sigma^2 \neq \sigma^2
\end{equation}
Then we conclude that the estimator $\sigma^2 _{ML}$ is biased.\\
\\
\subsection*{c)}
Derive the MAP solution for $\mu$ in George's model. Do your analysis replacing 10 with $\mu_0$ and 25 with $\sigma_0^2$ (this makes it easier to read).
Write your answer in terms of Fred's maximum likelihood solution $\mu_{ML}$.\\
\\
The posterior is given by:
\begin{equation*}
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}
\end{equation*}
where $x = (x_1,x_2,\dots,x_N)$, $x \sim \mathcal{N}(\mu, \sigma^2)$ and $\mu \sim \mathcal{N}(\mu_0, \sigma^2_0)$. Our goal is to find the parameters that maximize the log-posterior $p(\theta|x)$.\\
The steps for finding the maximum are:
\begin{itemize}
\item Compute the gradient with respect to $\theta$
\item Set the gradient to 0
\item Solve for $\theta$
\end{itemize}
Thus, the log posterior is given by:
\begin{align*}
log \, p(\theta|x) &= log \, p(x|\theta) + log \, p(\theta) + \text{const} \\
&= -\frac{N}{2} log\big(2 \pi \sigma^2\big) - \sum_{n=1}^{N} \, \left( \frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) + log \, \mathcal{N}(\mu| \mu_0, \sigma_0^2)\\
&= -\frac{N}{2} log\big(2 \pi \sigma^2\big) - \sum_{n=1}^{N} \, \left( \frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) -\frac{1}{2} log\big(2 \pi \sigma_0^2\big) - \left( \frac{1}{2 \sigma_0^2}\big(\mu - \mu_0\big)^2 \right) \\
\end{align*}
The gradient of the log posterior with respect to $\mu$ is given by:
\begin{align*}
\frac{\partial}{\partial \mu}log \,p(\theta|x) &= \frac{\partial}{\partial \mu} \left( -\frac{N}{2} log\big(2 \pi \sigma^2\big) - \sum_{n=1}^{N} \, \left( \frac{1}{2 \sigma^2}\big(x_n - \mu\big)^2 \right) -\frac{1}{2} log\big(2 \pi \sigma_0^2\big) - \left( \frac{1}{2 \sigma_0^2}\big(\mu - \mu_0\big)^2 \right) \right) \\
&= \sum_{n=1}^{N} \, \left( \frac{1}{\sigma^2}\big(x_n - \mu\big) \right) -\frac{1}{\sigma_0^2}\big(\mu - \mu_0\big) \\
\end{align*}
Setting this gradient to zero, we obtain:
\begin{equation*}
\frac{\partial}{\partial \mu}log \,p(\theta|x) = 0 \Longrightarrow \sum_{n=1}^{N} \, \left( \frac{1}{\sigma^2}\big(x_n - \mu\big) \right) -\frac{1}{\sigma_0^2}\big(\mu - \mu_0\big) = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \sum_{n=1}^{N} \, \frac{1}{\sigma^2} x_n - \frac{N}{\sigma^2} \mu -\frac{1}{\sigma_0^2}\mu + \frac{1}{\sigma_0^2} \mu_0 = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \frac{N}{\sigma^2} \mu_{ML} - \frac{N}{\sigma^2} \mu -\frac{1}{\sigma_0^2}\mu + \frac{1}{\sigma_0^2} \mu_0 = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \mu \left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2} \right) = \frac{N}{\sigma^2} \mu_{ML} + \frac{1}{\sigma_0^2} \mu_0
\end{equation*}
So we obtain:
\begin{equation}
\boxed{ \mu_{MAP} = \frac{1}{\left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2} \right)} \left( \frac{N}{\sigma^2} \mu_{ML} + \frac{1}{\sigma_0^2} \mu_0 \right) }
\end{equation}

\subsection*{d)}
Explaining your reasoning, calculate the posterior for George's model. Show that the MAP point you calculated in the previous exercise is also the mean, and give a reason why this is true in this example but not true in general. Again, use $\mu_0$ and $\sigma_0^2$ rather than the actual numbers.

The posterior is given by:
\begin{equation*}
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}
\end{equation*}
where $x = (x_1,x_2,\dots,x_N)$, $x \sim \mathcal{N}(\theta, \sigma^2)$ and $\theta \sim \mathcal{N}(\mu_0, \sigma^2_0)$.
Given the formula for the posterior, we can say that:
\begin{equation*}
p(\theta|x) \propto p(x|\theta)p(\theta)
\end{equation*}
We get:
\begin{equation*}
p(x|\theta)p(\theta) \propto e^{-\frac{1}{2}M}
\end{equation*}
where:
\begin{align*}
M &= \sum_{n=1}^{N} \frac{1}{\sigma^2}(x_n - \theta)^2 + \frac{1}{\sigma^2_0}(\theta - \mu_0)^2 \\
&= \frac{1}{\sigma^2} \sum_{n=1}^{N}x_n^2 + \frac{N}{\sigma^2} \theta^2 - \frac{2\theta}{\sigma^2}\sum_{n=1}^{N}x_n + \frac{\theta^2}{\sigma_0^2} + \frac{\mu_0^2}{\sigma_0^2} - \frac{2\theta\mu_0}{\sigma_0^2}\\
&= \underbrace{ \left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2}\right)}_a\theta^2 - 2\underbrace{ \left(\frac{1}{\sigma^2}\sum_{n=1}^{N}x_n + \frac{\mu_0}{\sigma_0^2} \right)}_b\theta + \underbrace{\left(\frac{1}{\sigma^2} \sum_{n=1}^{N}x_n^2 + \frac{\mu_0^2}{\sigma_0^2} \right)}_c
\end{align*}
So:
\begin{align*}
M &= a \theta^2 - 2b \theta + c \\
&= a \left(\theta^2 - \frac{2b}{a}\theta + \frac{c}{a}\right) \\
&= a \left[\left( \theta - \frac{b}{a} \right)^2 - \frac{b^2}{a^2} + \frac{c}{a}\right] \\
&= a \left( \theta - \frac{b}{a} \right)^2 - \frac{b^2}{a} + c
\end{align*}
Then:
\begin{align*}
p(x|\theta)p(\theta) &\propto exp \left[-\frac{1}{2}\left(a ( \theta - \frac{b}{a} )^2 - \frac{b^2}{a} + c\right)\right] \\
&= \text{const} \times exp \left(-\frac{1}{2}a( \theta - \frac{b}{a} )^2\right)
\end{align*}
We can conclude that $p(\theta|x) \sim \mathcal{N}(\frac{b}{a}, \frac{1}{a})$, where:
\begin{equation*}
a = \left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2}\right)
\end{equation*}
\begin{equation*}
b = \left(\frac{1}{\sigma^2}\sum_{n=1}^{N}x_n + \frac{\mu_0}{\sigma_0^2} \right)
\end{equation*}
\begin{equation*}
c = \left(\frac{1}{\sigma^2} \sum_{n=1}^{N}x_n^2 + \frac{\mu_0^2}{\sigma_0^2} \right)
\end{equation*}

\subsubsection*{1)}
We can see that the mean of the posterior is given by:
\begin{align*}
\frac{b}{a} &= \frac{1}{\left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2}\right)} \left(\frac{1}{\sigma^2}\sum_{n=1}^{N}x_n + \frac{\mu_0}{\sigma_0^2} \right) \\
&= \frac{1}{\left(\frac{N}{\sigma^2} + \frac{1}{\sigma_0^2}\right)} \left(\frac{N}{\sigma^2}\mu_{ML} + \frac{\mu_0}{\sigma_0^2} \right) \\
&= \mu_{MAP}
\end{align*}
So finally we end up with:
\begin{equation}
\boxed{\frac{b}{a} = \mu_{MAP}}
\end{equation}

\subsubsection*{2)}
Reason why this is true in this example but not true in general.\\
The MAP point that we calculated previously is also the mean because the distribution used (a Gaussian) is symmetric. But in general, if the distribution is not symmetric, we will not have this equality.

\subsection*{e)}
Derive the MAP estimate for Harry's model. Use $a$ and $b$ for the shape and scale respectively instead of the numbers. Write your answer for $\sigma_{MAP}^2$ in terms of Fred's maximum likelihood result $\sigma_{ML}^2$.
NB: you might find it easier to work with (and differentiate with respect to) $\sigma^2$ rather than $\sigma$.\\
The posterior is given by:
\begin{equation*}
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}
\end{equation*}
where $x = (x_1,x_2,\dots,x_N)$, $x \sim \mathcal{N}(\mu, \theta)$ and $\theta \sim \mathcal{IG}(a, b)$, where $\mathcal{IG}$ stands for the inverse Gamma distribution.\\\\
Given the formula for the posterior, we can say that:
\begin{equation*}
p(\theta|x) \propto p(x|\theta)p(\theta)
\end{equation*}
The log prior is given by:
\begin{align*}
log \, p(\theta) &= log \, \left(\frac{b^a}{\Gamma(a)} \big(\frac{1}{\theta}\big)^{a+1}exp(-\frac{b}{\theta})\right) \\
&= \text{const} -(a+1)log \, \theta - \frac{b}{\theta}
\end{align*}
Thus, the log posterior is given by:
\begin{align*}
log \, p(\theta|x) &= log \, p(x|\theta) + log \, p(\theta) + \text{const} \\
&= -\frac{N}{2} log\big(2 \pi \theta\big) - \sum_{n=1}^{N} \, \frac{1}{2 \theta}\big(x_n - \mu\big)^2 + log \, \mathcal{IG}(\theta| a, b)\\
&= -\frac{N}{2} log\big(2 \pi \theta \big) - \sum_{n=1}^{N} \, \frac{1}{2 \theta}\big(x_n - \mu\big)^2 -(a+1)log \, \theta - \frac{b}{\theta} \\
\end{align*}
The gradient of the log posterior with respect to $\theta$ is given by:
\begin{align*}
\frac{\partial}{\partial \theta}log \,p(\theta|x) &= \frac{\partial}{\partial \theta} \left( -\frac{N}{2} log\big(2 \pi \theta\big) - \sum_{n=1}^{N} \, \left( \frac{1}{2 \theta}\big(x_n - \mu\big)^2 \right) -(a+1)log \, \theta - \frac{b}{\theta}\right) \\
&= -\frac{N}{2 \theta} + \sum_{n=1}^{N} \, \frac{1}{2 \theta^2}\big(x_n - \mu\big)^2 - \frac{(a+1)}{\theta} + \frac{b}{\theta^2}\\
\end{align*}
Setting the previous gradient to zero, we get:
\begin{equation*}
-\frac{N}{2 \theta} + \sum_{n=1}^{N} \, \frac{1}{2 \theta^2}\big(x_n - \mu\big)^2 - \frac{(a+1)}{\theta} + \frac{b}{\theta^2} = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \frac{\theta^2}{N}\left( -\frac{N}{2 \theta} + \sum_{n=1}^{N} \, \frac{1}{2 \theta^2}\big(x_n - \mu\big)^2 - \frac{(a+1)}{\theta} + \frac{b}{\theta^2}\right) = 0
\end{equation*}
\begin{equation*}
\Longrightarrow -\frac{\theta}{2} + \frac{1}{2N} \,\sum_{n=1}^{N} \big(x_n - \mu\big)^2 - \frac{\theta(a+1)}{N} + \frac{b}{N} = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \theta \left(-\frac{1}{2} - \frac{(a+1)}{N} \right) + \frac{1}{2N} \,\sum_{n=1}^{N} \big(x_n - \mu\big)^2 + \frac{b}{N} = 0
\end{equation*}
\begin{equation*}
\Longrightarrow \theta \left(\frac{1}{2} + \frac{(a+1)}{N} \right) = \frac{1}{2N} \,\sum_{n=1}^{N} \big(x_n - \mu\big)^2 + \frac{b}{N}
\end{equation*}
\begin{equation*}
\Longrightarrow \theta \left(\frac{1}{2} + \frac{(a+1)}{N} \right) = \frac{1}{2}\sigma_{ML}^2 + \frac{b}{N}
\end{equation*}
Then we have:
\begin{equation}
\boxed{\theta_{MAP} = \frac{1}{\left(\frac{1}{2} + \frac{(a+1)}{N} \right)} \left(\frac{1}{2}\sigma_{ML}^2 + \frac{b}{N} \right)}
\end{equation}

\subsection*{f)}
Derive Harry's posterior distribution. You may reuse some of your working from your previous answer. State also the posterior mean and explain why it is not equal to the MAP estimate you found in the previous part.
You may use standard results for the mean of the Inverse Gamma distribution.\\
\\
The posterior is given by:
\begin{equation*}
p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}
\end{equation*}
where $x = (x_1,x_2,\dots,x_N)$, $x \sim \mathcal{N}(\mu, \theta)$ and $\theta \sim \mathcal{IG}(a, b)$, where $\mathcal{IG}$ stands for the inverse Gamma distribution.\\
Given the formula for the posterior, we can say that:
\begin{equation*}
p(\theta|x) \propto p(x|\theta)p(\theta)
\end{equation*}
So:
\begin{align*}
p(\theta|x) &\propto \left(\theta^{-\frac{N}{2}} exp\big(- \sum_{n=1}^{N} \frac{\big(x_n - \mu\big)^2}{2\theta}\big)\right) \left(\frac{b^a}{\Gamma(a)} \big(\frac{1}{\theta}\big)^{a+1}exp(-\frac{b}{\theta}) \right) \\
&= \text{const} \times \theta^{- \big(a + \frac{N}{2} + 1\big)} exp \left( \frac{-\big(\frac{1}{2} \sum_{n=1}^{N} (x_n - \mu)^2 + b\big)}{\theta} \right)
\end{align*}
We can conclude that:
\begin{equation}
\boxed{p(\theta|x) \sim \mathcal{IG}\left(a+\frac{N}{2}, \,b + \frac{1}{2}\sum_{n=1}^{N} (x_n - \mu)^2\right)}
\end{equation}
Recall: let a random variable $X \sim \mathcal{IG}(a, b)$; then its probability density function is $f_X(x) = \frac{b^a}{\Gamma(a)} \big(\frac{1}{x}\big)^{a+1}exp(-\frac{b}{x})$, where $\Gamma(\alpha) = \int_{0}^{+\infty} t^{\alpha - 1} e^{-t} dt$.\\
\\
The mean is given by the expectation of $X$:
\begin{align*}
E(X) &= \int_{-\infty}^{+\infty} x f_X(x) dx \\ \\
&= \int_{0}^{+\infty} \frac{b^a}{\Gamma(a)} \big(\frac{1}{x}\big)^{a}exp(-\frac{b}{x}) dx \\\\
&= \frac{b^a}{\Gamma(a)} \int_{0}^{+\infty} \big(\frac{1}{x}\big)^{a}exp(-\frac{b}{x}) dx \\\\
&= \frac{b^a}{\Gamma(a)} \left( \underbrace{\left[\frac{1}{1 - a}e^{-\frac{b}{x}}x^{1- a}\right]^{+\infty}_{0}}_0 - \int_{0}^{+\infty} \frac{b}{1-a} x^{-a-1}e^{-\frac{b}{x}} dx\right) \\ \\
&= -\frac{b^{a+1}}{(1-a)\Gamma(a)} \left(\int_{0}^{+\infty} x^{-a-1}e^{-\frac{b}{x}} dx\right)
\end{align*}
Let us pose $u = \frac{b}{x} \Longrightarrow x = \frac{b}{u} \Longrightarrow dx = -\frac{b}{u^2} du$.\\
Plugging this into the previous integral, we have:
\begin{align*}
E(X) &= -\frac{b^{a+1}}{(1-a)\Gamma(a)} \left( - \int_{0}^{+\infty} \frac{b^{-a-1}}{u^{-a-1}} e^{-u} \big(-\frac{b}{u^2}\big)du\right) \\ \\
&= -\frac{b}{(1-a)\Gamma(a)} \int_{0}^{+\infty} u^{a-1} e^{-u}du\\\\
&= \frac{b}{(a-1)\Gamma(a)}\Gamma(a) \\
\end{align*}
We have shown that:
\begin{equation*}
\boxed{E(X) = \frac{b}{a-1}}
\end{equation*}
By the previous result, we can now compute the mean of the posterior, which follows an inverse Gamma distribution $\mathcal{IG}\left(a+\frac{N}{2}, \,b + \frac{1}{2}\sum_{n=1}^{N} (x_n - \mu)^2 \right)$:
\begin{align*}
posterior\, mean &= \frac{b + \frac{1}{2}\sum_{n=1}^{N} (x_n - \mu)^2}{a+\frac{N}{2} - 1} \\
&= \frac{1}{\left(\frac{1}{2} + \frac{(a-1)}{N}\right)} \left( \frac{1}{2}\sigma_{ML}^2 + \frac{b}{N}\right)
\end{align*}
We can notice that the mean of the posterior is not equal to the estimate obtained by the MAP method.\\\\
\subsubsection*{1)}
Justification for why the posterior mean is not equal to the MAP estimate that we found in the previous part.\\
The MAP point that we calculated previously is not the mean because the distribution used (an Inverse Gamma) is not symmetric.
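
As a quick consistency check, we can use the standard result that the mode of $X \sim \mathcal{IG}(a, b)$ is $\frac{b}{a+1}$ (while, as shown above, its mean is $\frac{b}{a-1}$). Applied to the posterior $\mathcal{IG}\left(a+\frac{N}{2}, \,b + \frac{1}{2}\sum_{n=1}^{N} (x_n - \mu)^2\right)$, the mode recovers exactly the MAP estimate from part e):
\begin{align*}
posterior\, mode &= \frac{b + \frac{1}{2}\sum_{n=1}^{N} (x_n - \mu)^2}{a+\frac{N}{2} + 1} = \frac{1}{\left(\frac{1}{2} + \frac{(a+1)}{N}\right)} \left( \frac{1}{2}\sigma_{ML}^2 + \frac{b}{N}\right) = \theta_{MAP},
\end{align*}
whereas the posterior mean divides by $a + \frac{N}{2} - 1$ instead of $a + \frac{N}{2} + 1$, so the two estimates only coincide in the limit $N \rightarrow +\infty$.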
\subsection*{g)}
Explain what would happen to the result of inference in the three models if Elizabeth were to take a very large sample from the random number generator.\\
For the three models, when $N \rightarrow +\infty$,
\begin{equation*}
\mu_{MAP} \rightarrow \mu_{ML}
\end{equation*}
\begin{equation*}
\sigma_{MAP}^2 \rightarrow \sigma_{ML}^2
\end{equation*}
That is, with a very large sample the influence of the prior vanishes and the MAP estimates converge to the maximum likelihood estimates.
\end{document}
{ "alphanum_fraction": 0.5924717145, "avg_line_length": 44.1074856046, "ext": "tex", "hexsha": "0b1622801d289c8ff71f1d35cfefd0e4df2e0050", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-04T20:08:25.000Z", "max_forks_repo_forks_event_min_datetime": "2018-12-21T12:52:21.000Z", "max_forks_repo_head_hexsha": "3e03e9477f562b6ffbfe28d5f4b1a4553124edba", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tondji/tondji.github.io", "max_forks_repo_path": "Machine-Learning-Codes/second_assignment-Statistics_and_Probability/report_template/report_template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3e03e9477f562b6ffbfe28d5f4b1a4553124edba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tondji/tondji.github.io", "max_issues_repo_path": "Machine-Learning-Codes/second_assignment-Statistics_and_Probability/report_template/report_template.tex", "max_line_length": 351, "max_stars_count": 2, "max_stars_repo_head_hexsha": "3e03e9477f562b6ffbfe28d5f4b1a4553124edba", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tondji/tondji.github.io", "max_stars_repo_path": "Machine-Learning-Codes/second_assignment-Statistics_and_Probability/report_template/report_template.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-15T15:07:31.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-19T07:51:26.000Z", "num_tokens": 8976, "size": 22980 }
\documentclass{beamer}
% INPUT ENCODING, FONT ENCODING, LANGUAGE
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
% \usepackage[frenchb]{babel}
% \usepackage[ngerman]{babel}
% DOCUMENT INFORMATION: TITLE, AUTHOR, DATE, INSTITUTE
\title{Title of the presentation}
\subtitle{Subtitle of the presentation}
\author[Eugene]{Eugene Kudryavtsev}
\date{\today}
\institute[BFH]{Berner Fachhochschule}
\titlegraphic{\includegraphics[scale=0.5]{bfh-logo.png}} % logo on titlepage only
% PACKAGES
\usetheme{BFH} % use BFH theme
\usepackage{color}
\usepackage{listings}
\usepackage{hyperref} % links
\begin{document}
\begin{frame}[plain]
\titlepage
\end{frame}
\begin{frame}[noframenumbering]{Overview}
\tableofcontents
\end{frame}
\AtBeginSection[]
{
\begin{frame}[noframenumbering]{Overview}
\tableofcontents[currentsection]
\end{frame}
}
\section{First Section}
\begin{frame}{\insertsection}
\begin{itemize}
\item First Item
\item Second Item
\item Third Item
\end{itemize}
\end{frame}
\subsection{Subsection}
\begin{frame}{\insertsection}{\insertsubsection}
\begin{itemize}
\item Fourth Item
\item Fifth Item
\item Sixth Item
\end{itemize}
\end{frame}
\section{Second Section}
\begin{frame}{\insertsection}
\begin{itemize}
\item First Item
\item Second Item
\item Third Item
\end{itemize}
\end{frame}
\section*{References}
\begin{frame}{References}
\begin{thebibliography}{10}
\beamertemplateonlinebibitems
\bibitem{http://www.zinktypografie.nl/latex.php?lang=en}
Zink Typography Agency
\newblock {\small{What is \rmfamily\LaTeX}?}
\newblock http://www.zinktypografie.nl
\beamertemplateonlinebibitems
\bibitem{http://liantze.penguinattack.org/training_consultancy.html}
Lim Lian Tze's blog
\newblock {{\rmfamily\LaTeX}: Beautiful Typesetting}
\newblock http://liantze.penguinattack.org/latextypesetting.html
\beamertemplateonlinebibitems
\bibitem{https://www.sharelatex.com/learn/}
Share{\rmfamily\LaTeX{}} Online Resource
\newblock {Guides \& References}
\newblock https://www.sharelatex.com
\end{thebibliography}
\end{frame}
\end{document}
{ "alphanum_fraction": 0.7337868481, "avg_line_length": 22.7319587629, "ext": "tex", "hexsha": "839deefe7c19a137910786314c8f3907482b6f9d", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z", "max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_forks_repo_path": "200+ beamer 模板合集/latex-beamer-bfh-master(伯尔尼高等专业学院)/bfh-template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_issues_repo_path": "200+ beamer 模板合集/latex-beamer-bfh-master(伯尔尼高等专业学院)/bfh-template.tex", "max_line_length": 81, "max_stars_count": 13, "max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_stars_repo_path": "200+ beamer 模板合集/latex-beamer-bfh-master(伯尔尼高等专业学院)/bfh-template.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z", "num_tokens": 692, "size": 2205 }
\documentclass{article}
\usepackage{blindtext}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{url}
\usepackage{hyperref}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{setspace}

\begin{document}
\onehalfspacing
% \doublespacing

\begin{center}
%\LARGE{\textbf{Research Proposal}} \\
%\vspace{1em}
\Large{15-418 Final Project Report} \\
\vspace{1em}
\normalsize\textbf{Manish Nagireddy (mnagired) and Ziad Khattab (zkhattab)} \\
\vspace{1em}
\end{center}

\section*{Title}
An Exploration of Parallelism in Neural Networks

\section{Summary}
For our 15-418 final project, we looked into potential axes of parallelism that exist within neural networks. We implemented image-classifying neural networks in \texttt{Python} (via \texttt{PyTorch} and \texttt{mpi4py}, a message passing package for Python) as well as in \texttt{C++} via \texttt{OpenMP}, and measured their performance (e.g. test set accuracy and training time). We found that data parallelism through varying batch size is far more robust (as well as more effective) than attempting to map neural network training to either a shared memory or message passing setting.

\section{Background}
In recent years, we have seen the rapid rise of neural networks within the landscape of artificial intelligence, and more broadly within algorithmic decision making. However, because of the large size of these models (as defined by the number of parameters) as well as the large size of the data used to train them, performance of these so-called deep learning paradigms might be suboptimal without specific attention to parallelism.

Broadly, from the perspective of a neural network, there are two dimensions to parallelize: the data and the model.

\paragraph{Data Parallelism} given $X$ machines/cores, we split the data into $X$ partitions and use the \textit{same} model to train each partition of the data on a different device in parallel. Then, we combine the resultant model weights from each partition. Note that this is \textit{model agnostic} because it relies only on the data and not on the underlying model architecture.

\paragraph{Model Parallelism} splitting the model into various partitions and assigning them to different machines. Note that there are dependencies which are specific to particular model architectures, and so model parallelism is not really ``parallel'' because we are actually just assigning consecutive layers of a model to different devices. Some have referred to this as \textit{model serialization}\footnote{\href{https://leimao.github.io/blog/Data-Parallelism-vs-Model-Paralelism/}{Data Parallelism VS Model Parallelism in Distributed Deep Learning Training - Lei Mao's Log Book}}. Model parallelism is commonly used when the model is too large to fit on a single device. \\
\noindent Refer to the figure\footnote{\href{https://xiandong79.github.io/Intro-Distributed-Deep-Learning}{Link to Figure Citation}} below for an illustration of data parallelism and model parallelism.

\begin{figure}[h]
\centering
\includegraphics[scale = 0.5]{background}
\end{figure}

\subsection{Breaking it Down}
The main challenge stems from the fact that there exist a lot of dependencies when working with a neural network. At the lowest, implementation-detail level, we know that different architectures have different structures and therefore different dependencies between layers (e.g. between input and hidden layers).
But we also know that the choice of hyperparameters (e.g. learning rate) and optimizers (e.g. Stochastic Gradient Descent or SGD, Adam, AdaGrad, etc.) heavily influences model convergence time as well as model performance. In addition, the choice of dataset is also significant for (and related to) the class of models we use. Finally, we are limited by the available compute power, in the sense that we can only measure performance on machines with a fixed number of CPU cores and a fixed number of GPUs.

Given that we wanted to focus on parallelism, we did \textit{not} want to get caught up in trying out different models on different datasets with different parameters. Thus, we fixed our dataset to be the widely known benchmark MNIST\footnote{\href{http://yann.lecun.com/exdb/mnist/}{Link to MNIST Data}} dataset. This also limited us to the problem of image classification, which further restricted the class of neural networks that we looked at (e.g. we focused on simple feed-forward networks and basic convolutional neural networks, or CNNs). From background research, this dataset is known to be relatively cheap to train on while still giving nontrivial results without excessive preprocessing of the input data.

Finally, we decided to hold several commonly used hyperparameters constant. For example, our choice of loss criterion was cross entropy loss, and our choice of optimizer was stochastic gradient descent (SGD). So, we fixed these values and varied other hyperparameters, such as the learning rate of the optimizer, the number of training iterations (i.e. epochs), and the batch size. Of course, we also varied the number of processors (in the context of message passing) as well as the number of threads (for shared memory). Then, we were able to focus the ``unknown'' aspects of our project on the axes of parallelism and their corresponding performance and speedups. Specifically, once we had a baseline working model in \texttt{PyTorch}, we were able to incorporate message passing, and then we were even able to implement a shared memory version in \texttt{C++} via \texttt{OpenMP}.

Given that the majority of this class is focused on parallelism, we wanted to incorporate one of the main kinds of parallel settings that we talked about. Specifically, in the context of neural network training, it makes sense to use message passing, where we can distribute the training across processors. We have each processor train a copy of the network and then send the updated weights back to a master process. Then, we average the weights together and continue with validating our model via evaluation on a held-out testing set of images. \\
\noindent At a high level, our learning objectives were to
\begin{enumerate}
\item Think about the inherent dependencies within a deep learning pipeline when attempting to implement a neural network via message passing (and possibly also via shared memory)
\item Comment on the effectiveness (i.e. speedup and overall performance) of data parallelism in the example of a reasonably complex data and model setting
\end{enumerate}

\subsection{Changes Along the Way}
\paragraph{Dropping \texttt{C++} (for now)} We note that our project changed immediately following our proposal feedback. For the time being, we decided to drop the \texttt{C++} and \texttt{OpenMP} implementation entirely due to potential time constraints.
Then, we changed our project so that the core of the parallelism implementation had to do with \texttt{mpi4py}, since we were able to more easily justify how message passing can be leveraged in neural network training (see above).

\paragraph{Dropping Model Parallelism in favor of Message Passing + More Complex Architectures} Then, after a week or so, when we finished the \texttt{MPI} based implementations of a feedforward multi-layer perceptron (MLP) as well as a basic convolutional neural network (CNN), we decided that the implementation of model parallelism was not rigorous enough to justify the time that would be spent gathering the resources to test model-parallel workflows. Specifically, given \texttt{PyTorch} and its capabilities, the way to implement model parallelism would be to send different layers of a network architecture to different machines (e.g. different GPUs). Then, in order to test this model, we would have to secure multi-GPU machines, which would take more work than the actual implementation of model parallelism itself.

Therefore, to account for this change, we decided to translate our message-passing based parallelism to a more sophisticated neural network architecture, such as \texttt{ResNet}. More concretely, given that the internal architecture of \texttt{ResNet} is inherently more complicated than a 2-layer CNN (which we had already implemented), we anticipated that attempting to integrate message passing (again via \texttt{mpi4py}) would be more complicated with respect to implementation. Additionally, we would be able to compare the effects of message passing on baseline models against more complicated architectures. This added dimension of comparison, as well as the increase in complexity of implementation, allowed us to justify the change in plans. As we describe later in the results section, the addition of message passing to a more sophisticated model architecture yielded some surprising performance metrics.

\paragraph{The Return of \texttt{C++}} Finally, with about a week to go before the project deadline, we decided to go for our ``Hope to Achieve'' goals, where we wanted to experiment with moving to \texttt{C++} and potentially also incorporating a shared memory setting via \texttt{OpenMP}. Although it was certainly nontrivial to mesh our idea of data parallelism with the shared memory paradigm, we were able to achieve a successful implementation with decent speedup after thinking carefully about synchronization. We elaborate on our approach below and on our results further down.

\section*{Approach}
Our approach, in a broad sense, was data-side parallelization of the neural network training process. We did this by partitioning the data across threads, so that different processors train the neural network architecture from their own randomized starting points, and then either using a reductive combination process across all the weights trained by each processor to unify them (in the message passing case) or using synchronization to update a shared copy of these weights (shared memory). In both the message passing and shared memory settings, we aimed to map a portion of the training data in a given epoch to a processor core. This was the primary basis of parallelism in our codebase. Given that we wanted to explore the differences in axes of parallelism, we tried not to spend too much time implementing sequential neural networks from scratch.
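
To make the shape of this data-side parallelization concrete, here is a minimal sketch of what one training round looks like in the message passing case, assuming \texttt{mpi4py} and \texttt{PyTorch}. The tiny model and the dummy batch are stand-ins (our real code trains the CNN / \texttt{ResNet} on a shard of MNIST), so this illustrates the scheme rather than our actual implementation.

\begin{verbatim}
# Sketch only: data-parallel training with weight averaging (mpi4py + PyTorch).
from mpi4py import MPI
import torch
import torch.nn as nn
import torch.nn.functional as F

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Same (toy) architecture on every rank; our real models are the CNN / ResNet.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def average_state_dicts(states):
    # Element-wise mean of the workers' weight tensors.
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

for epoch in range(5):
    # Master broadcasts the current weights so every rank starts the epoch in sync.
    state = comm.bcast(model.state_dict() if rank == 0 else None, root=0)
    model.load_state_dict(state)

    # Stand-in for one epoch of SGD over this rank's shard of the training data.
    images = torch.randn(64, 1, 28, 28)
    labels = torch.randint(0, 10, (64,))
    loss = F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Master gathers every rank's weights and reduces them by averaging.
    states = comm.gather(model.state_dict(), root=0)
    if rank == 0:
        model.load_state_dict(average_state_dicts(states))
\end{verbatim}

\noindent The important property of this scheme is that the messages are entire model state dictionaries, which is why the communication cost grows with the size of the model (something we return to in the results).
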
For \texttt{Python}, we relied upon the \texttt{PyTorch} documentation for commonly used neural networks (e.g. a standard CNN or basic \texttt{ResNet}). The main coding portion (in \texttt{Python + mpi4py}) came from incorporating message passing into a \texttt{PyTorch} based model.

\paragraph{Message Passing} The \texttt{Python} portion of the code included two networks: a simple Convolutional Neural Network (CNN), and a more complicated CNN with an underlying \texttt{ResNet}\footnote{\href{https://pytorch.org/hub/pytorch_vision_resnet/}{Link to PyTorch ResNet}} architecture. In these two cases, the parallelization was done in a nearly identical way: we had a master process that would send a copy of the model to each worker and then use \texttt{MPI} to broadcast a message to the workers to begin training. The workers would run their training loops and send their state dictionaries (which contained the updated weight parameters for each corresponding model) back to the master to be combined with one another for the next epoch (i.e. the next iteration of the training loop).

Since we were concerned with parallelization rather than machine learning, we first coded up a simple neural network without any message passing in \texttt{PyTorch} (referring to their documentation for both CNNs and \texttt{ResNet}) to serve as a sequential basis for our code. There were slight changes to the serial algorithm, in that we had to employ a reductive combination on the weights after the distributed training. This is slightly different from a purely sequential neural network, where each input updates the weights cumulatively. We observed some tradeoffs because of this, which we discuss below in our results section.

Optimization mainly occurred in two passes: the initial parallelization, and then the later efforts to make it more efficient. Initially, we used a roundtrip system in \texttt{MPI} to pass the weights on, hoping to emulate the structure of a sequential neural network. Then, the main optimization involved using broadcasting to make the master-worker architecture more efficient, instead of using the aforementioned roundtrip system.

\paragraph{Shared Memory} The \texttt{C++} portion only used a simple feedforward multi-layer perceptron, but instead used \texttt{OpenMP} to parallelize the training process. The principle was similar, except that there was no need to separate things into master and worker processes because \texttt{OpenMP} allows for automatic work distribution across threads. We note that from the \texttt{C++} standpoint, there exists an abundance of starter code for basic neural networks (e.g. \href{https://github.com/Whiax/NeuralNetworkCpp/tree/master/src/neural}{[1]}, \href{https://github.com/huangzehao/SimpleNeuralNetwork/blob/master/src/neural-net.cpp}{[2]}, \href{https://github.com/arnobastenhof/mnist/tree/master/src}{[3]}). So, once we had a working sequential neural network, we spent an overwhelming majority of the time trying to adapt this implementation to a shared memory setting. In \texttt{OpenMP}, we used a structure similar to what we have now, but initially with very conservative lock-based synchronization to ensure correctness, since we did not have the message-passing abstraction to keep things clean and were instead updating weights in shared memory. We made optimizations by limiting synchronization and minimizing critical sections as much as possible (so much so that we were able to reduce them to two lines in the training loop).
Another major breakthrough for us was to make the ``memory windows'' used for intermediate weight calculations \textit{private} to the threads instead of shared, and this allowed us to reduce synchronization significantly. This did seem to come with overall memory tradeoffs, which we discuss later in our results section.

\section{Results}
At a high level, we were successful in achieving our main goal: to have a working implementation of our models in a message passing setting so that we would be able to comment on performance and speedup. Additionally, we even achieved one of our extra credit options, where we implemented a simple feedforward convolutional neural network in \texttt{C++} and combined this with our knowledge of shared memory via \texttt{OpenMP}.

\subsection{Related Work}
\includegraphics[scale = 0.75]{google}

Before we get into our specific results, we'd like to note that there is a very interesting article from folks at Google Research\footnote{\href{https://ai.googleblog.com/2019/03/measuring-limits-of-data-parallel.html}{Measuring the Limits of Data Parallel Training for Neural Networks}} which describes the various scaling patterns that they observed during data parallelism. Note that their measurement of data parallelism is directly related to the batch size, which is the number of training data examples utilized in one iteration of training the model. Specifically, they discovered three distinct but related so-called ``scaling regimes'':
\begin{enumerate}
\item Perfect Scaling: doubling the batch size halves the number of training steps to reach some target error
\item Diminishing Returns: smaller decreases in training time with increasing batch size
\item Maximal Data Parallelism: further increasing batch size beyond this point doesn't reduce training time
\end{enumerate}

\noindent See the figure above for a visual depiction of these regimes. Specifically, the Google researchers found a universal relationship between batch size and training speed with three distinct regimes: perfect scaling (following the dashed line), diminishing returns (diverging from the dashed line), and maximal data parallelism (where the trend plateaus). The transition points between the regimes vary dramatically between different workloads\footnote{\href{https://ai.googleblog.com/2019/03/measuring-limits-of-data-parallel.html}{Link to Original Article: ``Measuring the Limits of Data Parallel Training for Neural Networks''}}. We attempted to see if this pattern could also be observed with our model and comment on any differences below in our results section. Note that the Google researchers use training steps, whereas we utilize training time (given by the number of seconds needed to fully complete one epoch of training). This is a simplification for clarity, but one which does not impact correctness, since training time and the number of training steps are very closely related.

\subsection{Summary of Findings}
\noindent For a quick summary of our findings, we were able to observe similar results to the Google researchers, even in the context of message passing. Specifically, we observed that increasing the batch size resulted in decreasing the training time, but not in a linear fashion. We did, in fact, observe the ``diminishing returns'' noted above. Furthermore, we found that increasing the batch size also resulted in lower testing accuracy, which means that our model became poorer at generalizing.
Somewhat surprisingly, this pattern was consistent across all processor counts (1, 4, 16, 64, and 128). Then, for our shared memory model, instead of modifying batch size, we decided to hold batch size constant (at 1, so the model processed training images one at a time) and instead modify the learning rate of the optimizer. We did this to see if changing the workload (as the Google folks pointed out) results in different speedups and performance. Again, somewhat surprisingly, we observed near perfect speedup when going from 1 thread to 4 threads, but then saw degradation and eventual deterioration for higher thread counts (e.g. 64 and 128).

\subsection{Visualizations and Further Analysis}
\subsubsection{Varying Batch Size}
Our primary set of experiments involved attempting to see if the Google Research ``scaling regimes'' (i.e. perfect scaling, diminishing returns, and maximal data parallelism) held true even after we incorporated message passing.

\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{base_time_bs}
\includegraphics[scale=0.5]{base_acc_bs}
\caption{Plots of training time for 1 epoch and accuracy (after 5 epochs of training) with a learning rate of 0.01 for both the baseline MLP and baseline CNN models. Note that there is \textit{no} message passing in either of these models, as they are our reference points.}
\label{fig:base_bs}
\end{figure}

\paragraph{Baseline Models} First, we implemented two sequential models which served as a baseline for comparisons. Our most basic model is a simple 2-layer feed-forward multi-layer perceptron (referred to as our baseline MLP in the figures). Our second baseline model is a convolutional neural network (CNN) with 2 convolutional layers (referred to as our baseline CNN in the figures).

Now, look to Figure \ref{fig:base_bs} for the plots of training time and accuracy across varying batch sizes. Specifically, we point to the fact that as we double the batch size from 32 to 64, the training time is \textit{almost halved}. This corresponds to the ``perfect'' scaling regime from the Google Research folks. However, going from a batch size of 64 to 128 reduces the training time by less than half. So, this corresponds to the ``diminishing returns'' regime. However, note that there is still a decrease in training time, so we have not yet reached the point of ``maximal data parallelism'' where increasing batch size no longer decreases the training time.

Then, in the accuracy plots, we see that increasing the batch size decreases the accuracy at an almost (but not quite) linear rate. Note that this phenomenon is studied in some depth in the literature, where we found that ``Practitioners often want to use a larger batch size to train their model as it allows computational speedups from the parallelism [of GPUs]. However, it is well known that too large of a batch size will lead to \textit{poor generalization}''\footnote{\href{https://medium.com/mini-distill/effect-of-batch-size-on-training-dynamics-21c14f7a716e}{Link to Article: ``Effect of batch size on training dynamics''}}.
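
As a concrete illustration of how this kind of batch-size measurement can be taken, the sketch below times one epoch of a baseline-style training loop for each batch size. It assumes \texttt{torchvision}'s MNIST loader and uses a small stand-in model rather than our exact baseline CNN.

\begin{verbatim}
# Sketch only: timing one training epoch for several batch sizes.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())

for batch_size in [32, 64, 128]:
    # Small stand-in model; our actual baseline is a 2-layer CNN.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                          nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    start = time.time()
    for images, labels in loader:        # one full epoch over MNIST
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(batch_size, "->", round(time.time() - start, 1), "s per epoch")
\end{verbatim}

\noindent Doubling \texttt{batch\_size} halves the number of iterations of the inner loop, which is where the (near) halving of training time in the perfect scaling regime comes from.
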
\paragraph{Message Passing Model}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{cnn_mpi_bs_time}
\includegraphics[scale=0.5]{cnn_mpi_bs_acc}
\caption{Plots of training time and speedup for 1 epoch and accuracy (after 5 epochs of training) with a learning rate of 0.01 for the CNN model \textit{with message passing}.}
\label{fig:cnn_bs}
\end{figure}

We modified our baseline CNN model to also integrate message passing (via the \texttt{mpi4py} package). As described above in our approach section, this means that different processors train the neural network architecture from their own randomized starting points, and then the master process uses a reductive combination across all the weights trained by each worker processor to unify them into one updated set of weights.

Now, look to Figure \ref{fig:cnn_bs} for the plots of training time, speedup, and accuracy across varying batch sizes. Specifically, we see that the training time follows a similar trend to the baseline models, where going from a batch size of 32 to 64 results in a larger decrease than going from 64 to 128. However, one thing to note is that we no longer experience any perfect scaling, due to the inclusion of message passing. We are immediately thrust into the ``diminishing returns'' regime. We believe this is because the size of the messages being passed is too large and therefore dominates the computation time. As stated above, our message passing neural network works by sending copies of the weights (via the model's internal state dictionary) to all of the worker processes. This is an extremely large (and dense) set of matrices, which results in a lot of communication.

This is evident from the actual training times needed by the message passing model. Observe that even for the simplest message passing model (which only uses 1 worker process), the need to send a single message results in over 18 seconds to train 1 epoch with a batch size of 64 (visually, this is the second blue dot on the left plot in Figure \ref{fig:cnn_bs}). For reference, the baseline CNN model was able to train in almost 5 fewer seconds (given by the second blue dot on the left plot in Figure \ref{fig:base_bs}). So, what we observe is that the overhead of message passing is large enough to diminish the reduction in time from increasing batch size, but not enough to completely eradicate the effect of doubling the batch size.

What we found more surprising was that this pattern, of smaller but still visible reductions in training time due to batch size increases, was apparent across all processor counts. This showed us that our message passing implementation was decent enough to allow the batch size factor to still appear, despite the number of messages needing to be sent. As far as accuracy is concerned, we found similar results to the baseline models, where increasing batch size resulted in lower testing accuracy. Again, this is largely due to the same factors stated above.

\subsubsection{Varying Learning Rate}
\begin{figure}[!h]
\centering
\includegraphics[scale = 0.35]{lr}
\caption{Explanation of the effect of learning rate size on neural network convergence.}
\label{fig:lr}
\end{figure}

As an additional axis of comparison, we decided to hold the batch size and number of training epochs constant (512 and 5 respectively) and vary the learning rate of the stochastic gradient descent (SGD) optimizer. We varied the learning rate over the set $\{0.0001, 0.001, 0.01, 0.1, 1, 10\}$.
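
The learning-rate sweep itself is just a small outer loop around the same training and evaluation code. In sketch form (with a stand-in model and the actual training/evaluation steps elided, since they are identical to the earlier sketches):

\begin{verbatim}
# Sketch only: batch size (512) and epochs (5) stay fixed while the SGD
# learning rate varies; the training/evaluation body is elided here.
import torch
import torch.nn as nn

results = {}
for lr in [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # ... train for 5 epochs with batch size 512, then evaluate on the test set ...
    results[lr] = None  # record the test accuracy and per-epoch time here
\end{verbatim}
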
Additionally, we wanted to see whether varying the number of processors (for message passing) or the number of threads (for shared memory) resulted in different accuracies or training times depending on the size of the learning rate. For reference on what the learning rate means in the context of the deep learning pipeline, refer to Figure \ref{fig:lr}.

\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{base_acc}
\caption{Plot of accuracy (after 5 epochs of training) with varying learning rates for both the baseline MLP and baseline CNN models. Note that there is \textit{no} message passing in either of these models, as they are our reference points.}
\label{fig:base_lr}
\end{figure}

\paragraph{Baseline Models} Since the baseline models don't have any explicit parallelization, it doesn't make sense to collect their training times for varying learning rates. Instead, we look at their testing accuracies (Figure \ref{fig:base_lr}) and find that a learning rate of 0.01 appears to be optimal, because it resulted in the best testing accuracy, which means the model with the highest predictive power.

\paragraph{Message Passing Models}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{cnn_mpi_lr_time}
\includegraphics[scale=0.5]{cnn_mpi_lr_speed}
\caption{Plots of training time and speedup for 1 epoch with varying learning rates for the CNN model \textit{with message passing}.}
\label{fig:cnn_lr}
\end{figure}

First, we display the results of experiments on the CNN model with message passing in Figure \ref{fig:cnn_lr}. From these plots, we can observe that the training time increases almost exponentially as we increase the number of processors. As stated before, this is largely due to the size of the messages that need to be sent. Again, somewhat surprisingly, we see that the training time increases in a similar fashion for all learning rates. What this means is that even though an extremely suboptimal learning rate might be bad for convergence, the time it takes to train the model is not dependent upon the learning rate, across differing numbers of processors.

\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{res_lr_time}
\includegraphics[scale=0.5]{res_lr_speed}
\caption{Plots of training time and speedup for 1 epoch with varying learning rates for the \texttt{ResNet} model \textit{with message passing}.}
\label{fig:res_lr}
\end{figure}

\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{res18}
\caption{\texttt{ResNet18} model architecture}
\label{fig:res18}
\end{figure}

Next, we display the results of experiments on the more complicated \texttt{ResNet} model with message passing in Figure \ref{fig:res_lr}. From these plots, we can observe similar patterns to the simpler CNN above. However, paying close attention to the actual training time, we see that it takes over 5 \textit{minutes} to train a single epoch of the \texttt{ResNet} model with message passing. This is because the \texttt{ResNet} architecture is significantly more complicated, with many more convolutional and fully connected layers, than our original CNN model. Refer to Figure \ref{fig:res18} for a tabular view of the model architecture\footnote{\href{https://www.researchgate.net/figure/ResNet-18-Architecture_tbl1_322476121}{Link to Image}}. Hence, as can be expected, we observe much higher training times because the updated weights of the models (which are the messages that are being sent) are significantly larger in size.
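
One quick way to see why these messages get so much heavier for \texttt{ResNet} is to count the parameters that have to be serialized and sent every epoch. The sketch below uses \texttt{torchvision}'s \texttt{resnet18} as a stand-in for our modified model:

\begin{verbatim}
# Sketch only: rough size of one state-dict message.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10)     # stand-in for our ResNet-based model
n_params = sum(p.numel() for p in model.parameters())
approx_mb = n_params * 4 / 1e6       # float32 weights, ignoring pickle overhead
print(n_params, "parameters ->", approx_mb, "MB per message")
\end{verbatim}

\noindent Every worker ships a message of roughly this size back to the master each epoch (and receives one in return), which is consistent with the much longer training times we observed for the \texttt{ResNet} model.
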
Additionally, there is also the overhead of training a larger and deeper neural network, which adds extra computation time.

\begin{figure}[!h]
\centering
\includegraphics[scale=0.35]{kill_res}
\caption{An example of PSC killing the job of training our \texttt{ResNet} model with MPI.}
\label{fig:kill_res}
\end{figure}

As you may observe, we do not provide any metrics beyond 32 processors, because the PSC machines would automatically kill our jobs given the magnitude of time needed to complete the training of the network for higher processor counts. Refer to Figure \ref{fig:kill_res} for an example of the output we obtained. Nevertheless, we were able to observe that message passing does not work well, in terms of training time, for more complicated neural network architectures.

\paragraph{Shared Memory}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{mp_lr_speed}
\includegraphics[scale=0.5]{mp_lr_acc}
\caption{Plots of speedup for 1 epoch as well as accuracy (after 5 epochs) with varying learning rates for the \texttt{OpenMP} CNN in \texttt{C++}.}
\label{fig:mp_lr}
\end{figure}

There were a few patterns observed in the results gathered from the \texttt{OpenMP} experiments on the PSC machines. Refer to Figure \ref{fig:mp_lr} for the plots. Note that we managed to obtain near perfect speedup going from 1 processor to 4 processors, which we delve into below.

The first set of results pertains to the trend of decreasing test accuracy as the processor count increased, across all learning rates (with the possible exception of 10.0, which did very little training at all). Primarily, this is caused by the nature of neural network parallelization across data. Traditionally, a neural network trains on the data sequentially. It starts with randomized weights, then obtains feedback from the first input, then the second, and so on. In the parallel model, $N$ threads are started with their individually randomized weights, and each of the $N$ threads operates on a portion of the data, which is then merged at the end. This improves the speed at which the model is able to take in the data, but if Thread 1 handles inputs 1-10, and Thread 2 handles inputs 11-20, they are not able to compound their learning upon one another, since they are operating simultaneously. This means that in an equivalent number of epochs, we expect to see higher parallelism correlate with decreases in test accuracy. The benefit of parallelism is in a long-term trade-off, where better utilization of hardware over a correspondingly increased number of epochs is able to achieve the same test accuracy, but in an overall shorter amount of time.

On the other hand, we saw that speedup was decreasing as the number of processors increased, with a U-shaped curve for the amount of time taken to train. Essentially, speedups began at high values moving from 1 to 4 processors, but 1 to 16 saw a smaller speedup, and 1 to 64 and above yielded slowdowns. It is difficult to pin down exactly one reason for this, but there are a number of contributing factors. The first is competition over the resources of the machines being tested, from other processes running on them. This results in poor CPU utilization and wasted thread startup overhead, since additional threads are not being allocated to their own cores. Another reason is a bottleneck in the startup phase of each thread, involving randomization of the initial weights. The random number generator library provided by \texttt{C++} is not designed for multithreaded use.
While functional, it relies on global memory access, is not efficient in a parallel context, and likely acts as a bottleneck; it is called tens of times during each thread startup. Furthermore, there are potential random access memory (RAM) concerns, because each thread allocates its own large private blocks of memory to store the inter-layer weights used in intermediate matrix multiplication calculations. Given that the data being analyzed is images, the sizes can get pretty large, which may lead to severe competition for memory resources on the system. Overall, this would explain why profound speedups are seen at 4 and 16 processors, but strongly diminishing and even negative returns are seen past that point.

An interesting fact is that lower speedup on shared memory based neural networks has been found in prior research literature as well. Folks at the University of Ulm in Germany found that ``the reason for the lower speedup of OpenMP-based implementation seems to be the \textit{unavoidable false cache sharing} of data arrays that are frequently modified by several threads.''\footnote{\href{https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.595.8618&rep=rep1&type=pdf}{Link to Paper: ``A comparison of OpenMP and MPI for neural network simulations on a SunFire 6800''}} In the context of our results, we note that false sharing occurs when data from multiple threads, which was not meant to be shared, gets mapped to the same cache line. This can lead to a situation where it appears that multiple threads are competing for the same data. Hence, this correlates with what we saw, since the shared weights array is shared between an increasing number of processors. Since they all need to update the summed weights, we observe a situation where each processor invalidates the caches of the others when it does its own update, leading to poor performance.

\subsection{Closing Thoughts}
Although we were able to verify that the Google Research team's findings regarding batch size and data parallelism hold even after integrating message passing, we found that neural network training is not really amenable to message passing at all. Although we could manage for our smaller and shallower models, it became too costly to send a copy of the weight matrices for larger and more complex architectures. Additionally, we note that we only tested on CPUs, whereas we might have benefitted from the internal hardware components of GPUs, which are broadly used for deep learning. The reason we stuck to CPUs was the ease of access to higher processor and thread counts via PSC. A possible future direction of our project would be to see if similar results hold when we run our more complex models on a sophisticated GPU.

Nevertheless, we are thankful for the experience of walking through a step-by-step process of integrating both message passing and shared memory ideas into a deep learning pipeline. We hope our results are, at the very least, interesting (as they certainly were to us when we were collecting them!).

\newpage
\section{References}
Each of these links is clickable!
\begin{enumerate} \item \href{https://ai.googleblog.com/2019/03/measuring-limits-of-data-parallel.html}{Google Research: Measuring the Limits of Data Parallel Training for Neural Networks} \item \href{https://www.pyohio.org/2019/presentations/123}{Distributed Deep Neural Network Training using MPI on Python} \item \href{https://leimao.github.io/blog/Data-Parallelism-vs-Model-Paralelism/}{Data Parallelism vs Model Parallelism} \item \href{https://web.stanford.edu/~rezab/classes/cme323/S16/projects_reports/hedge_usmani.pdf}{Parallel and Distributed Deep Learning} \item \href{https://cse.buffalo.edu/faculty/miller/Courses/CSE633/Ma-Spring-2014-CSE633.pdf}{Parallel Implementation of Deep Learning Using MPI} \item \href{https://analyticsindiamag.com/a-guide-to-parallel-and-distributed-deep-learning-for-beginners/}{A Guide to Parallel and Distributed Deep Learning for Beginners} \item \href{https://arxiv.org/pdf/1404.5997.pdf}{Variable Batch Size} \item \href{https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.595.8618&rep=rep1&type=pdf}{OpenMP + NN research paper} \item \href{https://www.cs.swarthmore.edu/~newhall/papers/pdcn08.pdf}{Parallelizing Neural Network Training for Cluster Systems} \item \href{https://www.jeremyjordan.me/nn-learning-rate/}{Learning Rate Overview} \item \href{https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035}{ResNet Architectures} \item \href{https://medium.com/mini-distill/effect-of-batch-size-on-training-dynamics-21c14f7a716e}{Batch Size vs. Training Dynamics} \item \href{https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html}{Single-Machine Model Parallel Best Practices - PyTorch} \end{enumerate} \section{List of Work} \begin{itemize} \item Manish Nagireddy (mnagired) \begin{enumerate} \item Initial Project Ideation \item Background Research \item Implementing baseline models in \texttt{Python} \item Adding MPI functionality to CNN model in \texttt{Python} \item Adding MPI functionality to \texttt{ResNet} model in \texttt{Python} \item Running Experiments on PSC \item Final Report: summary, background, approach, and results \end{enumerate} \item Ziad Khattab (zkhattab) \begin{enumerate} \item Background Research \item Implementing baseline model in \texttt{C++} \item Adding MPI functionality to \texttt{OpenMP} model in \texttt{C++} \item Running Experiments on PSC \item Final Report: Approach and results for \texttt{C++} models \end{enumerate} \end{itemize} We stress that equal contributions were made by both students! \end{document}
{ "alphanum_fraction": 0.7965117849, "avg_line_length": 117.2120253165, "ext": "tex", "hexsha": "f77017428c9a597b5613b19ea72506ad3cbc473f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "15372b2f29c795b4af9596caae7ee1e069c8985e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mnagired/418_final_project", "max_forks_repo_path": "report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "15372b2f29c795b4af9596caae7ee1e069c8985e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mnagired/418_final_project", "max_issues_repo_path": "report/report.tex", "max_line_length": 1636, "max_stars_count": null, "max_stars_repo_head_hexsha": "15372b2f29c795b4af9596caae7ee1e069c8985e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mnagired/418_final_project", "max_stars_repo_path": "report/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8201, "size": 37039 }
% Generated by GrindEQ Word-to-LaTeX 2008
% LaTeX/AMS-LaTeX

\documentclass[a4paper]{article}
\usepackage{anysize}
\marginsize{1cm}{1cm}{1cm}{1cm}

\usepackage{amssymb}
\usepackage{amsmath}
\usepackage[dvips]{graphicx}

\usepackage{listings}
\lstset{language=haskell}
\lstset{commentstyle=\textit}
\lstset{mathescape=true}
%\lstset{labelstep=1}
%\lstset{backgroundcolor=,framerulecolor=}
\lstset{backgroundcolor=,rulecolor=}

\linespread{2.0}

\begin{document}

\noindent
\section{Subtyping Heaps and References}

\noindent Let us consider the types of the references of the example above, given $Heap\ h_0\ ref$ for some initial heap $h_0$ and some reference type $ref$:
\begin{lstlisting}
i : ref (New $h_0$ Int) Int
s : ref (New (New $h_0$ Int) String) String
\end{lstlisting}
The heap $h_0$ is the starting heap at the beginning of the program. If the program is used as a ``main'', that is, it is just launched, then $h_0$ will be an empty heap; in other cases, for example when our computation is used inside another, larger computation, $h_0$ will be the current heap that is available where the example is launched.

\noindent Now let us consider the two statements:
\begin{lstlisting}
v $\leftarrow$ s
x $\leftarrow$ i
\end{lstlisting}
These two are incompatible, since the first statement forces our monad to be of type $ref\ \left(New\ \left(New\ h_0\ Int\right)\ String\right)$ and thus this type will have to be in all subsequent statements of the same monadic program, but the second statement expects our monad to be of type $ref\ \left(New\ h_0\ Int\right)$. This is absolutely reasonable on the part of the type system, but it is quite unacceptable given our circumstances: why can we not use a reference that refers to a smaller heap when we have a larger heap available? Indeed, all the values that reference $i$ requires are available (plus some more that are irrelevant to $i$) in the heap that we have when the reference $s$ is in scope. It would be interesting to make it possible to use a reference that requires a smaller (less defined) heap when a larger (more specified) heap is available. This is clearly a notion of subtyping between heaps and references.

\noindent To define a subtyping relationship between heaps, let us start by giving a subtyping predicate:
\begin{lstlisting}
Subtype $\alpha$ $\beta$
\end{lstlisting}
which, for brevity, we will also write as
\begin{lstlisting}
$\alpha$ $\le$ $\beta$
\end{lstlisting}
The fact that $\alpha$ is a subtype of $\beta$ implies that $\alpha$ is more specified than $\beta$, and as such it may be used in any context where a $\beta$ is expected. For this reason we also give a casting (or coercion) function that converts from a value of type $\alpha$ to a value of its supertype $\beta$:
\begin{lstlisting}
downcast : $\alpha$ $\to$ $\beta$
\end{lstlisting}
We also know that the subtyping relation is reflexive and transitive, so we will add these rules as instances of the subtyping predicate:
\begin{lstlisting}
Subtype $\alpha$ $\alpha$
downcast=$\lambda$ x.x
\end{lstlisting}

\begin{lstlisting}
Subtype $\alpha$ $\beta$ $\wedge$ Subtype $\beta$ $\gamma$ $\Rightarrow$ Subtype $\alpha$ $\gamma$
downcast=downcast $\circ$ downcast
\end{lstlisting}
Since we are mostly interested in subtyping between references, when is it safe to downcast them?
As already discussed, a first criterion for downcasting a reference is that the heap $h$ that the reference manipulates is actually smaller than the current heap; this means that:
\begin{lstlisting}
Heap h ref $\wedge$ Heap $h'$ ref $\wedge$ $h'$ $\le$ h $\Rightarrow$ ref h $\alpha$ $\le$ ref $h'$ $\alpha$
\end{lstlisting}
This means that references are contravariant with respect to their heap. Also, the second argument of references allows for the intuitive kind of conversion:
\begin{lstlisting}
$\alpha$ $\le$ $\alpha'$ $\Rightarrow$ ref h $\alpha$ $\le$ ref h $\alpha'$
\end{lstlisting}
This second rule shows that a reference to a value $\alpha$ behaves exactly like that value, thereby allowing conversions that mirror the behavior of the value that our reference stands for.

These rules are quite hard to instantiate concretely. This is why we will not be able to instantiate these rules once and for all in a highly parametric fashion; instead we will just give some specific instances that are of particular use to us when we are dealing with concrete heaps and references. Also, we will mostly deal with subtyping between values and not subtyping between functions.

At this point we can rewrite the offending example above so that it compiles:
\begin{lstlisting}[frame=tb,mathescape]{somecode}
$ex_2'$ = do 10 $>>=$ ($\lambda$i. do
            "hello " $>>=$ ($\lambda$s. do
              s *= ($\lambda$x.x++"world")
              let $i'$ = downcast i
              v $\leftarrow$ eval s
              x $\leftarrow$ eval $i'$
              return v ++ show x))
\end{lstlisting}
Now the type of $ex_2'$ will not only reflect the type of the result of the computation, but also the requirement of subtyping between heaps built with the $New$ type operator:
\begin{lstlisting}
Heap $h_0$ ref $\wedge$ ref (New $h_0$ Int) Int $\le$ ref (New (New $h_0$ Int) String) Int
  $\Rightarrow$ $ex_2'$ : ref $h_0$ String
\end{lstlisting}

\end{document}
{ "alphanum_fraction": 0.7416481069, "avg_line_length": 49.4311926606, "ext": "tex", "hexsha": "832f33dcde264a5437d49f8799f94662cd20331a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_path": "Before Giuseppe's PhD/Monads/ObjectiveMonad/MonadicObjects/trunk/tex/2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_path": "Before Giuseppe's PhD/Monads/ObjectiveMonad/MonadicObjects/trunk/tex/2.tex", "max_line_length": 954, "max_stars_count": 3, "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_path": "Before Giuseppe's PhD/Monads/ObjectiveMonad/MonadicObjects/trunk/tex/2.tex", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "num_tokens": 1442, "size": 5388 }
% !TEX root = main.tex % !TEX spellcheck = en-US \section{Introduction} Zero-knowledge proof systems, which allow a prover to convince a verifier of an NP statement $\REL(\inp,\wit)$ without revealing anything else about the witness $\wit$ have broad application in cryptography and theory of computation~\cite{FOCS:GolMicWig86,STOC:Fortnow87,C:BGGHKMR88}. When restricted to computationally sound proof systems, also called \emph{argument systems}\footnote{We use both terms interchangeably.}, proof size can be shorter than the size of the witness~\cite{brassard1988minimum}. Zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs) are zero-knowledge argument systems that additionally have two succinctness properties: small proof sizes and fast verification. Since their introduction in~\cite{FOCS:Micali94}, zk-SNARKs have been a versatile design tool for secure cryptographic protocols. They became particularly relevant for blockchain applications that demand short proofs and fast verification for on-chain storage and processing. Starting with their deployment by Zcash~\cite{SP:BCGGMT14}, they have seen broad adoption, e.g., for privacy-preserving cryptocurrencies and scalable and private smart contracts in Ethereum.%\footnote{\url{https://z.cash/} and \url{https://ethereum.org} respectively}. %The work of~\cite{EC:GGPR13} proposed a preprocessing zk-SNARK for general NP statements phrased in the language of Quadratic Span Programs (QSP) and Quadratic Arithmetic Programs (QAP) for Boolean and arithmetic circuits respectively. This built on previous works of~\cite{IKO07,AC:Groth10a,TCC:Lipmaa12} and led to several works~\cite{TCC:BCIOP13,SP:PHGR13,C:BCGTV13,AC:Lipmaa13,USENIX:BCTV14,EC:Groth16} with very short proof sizes and fast verification. While research on zkSNARKs has seen rapid progress~\cite{EC:GGPR13,AC:Groth10a,TCC:Lipmaa12,TCC:BCIOP13,SP:PHGR13,C:BCGTV13,AC:Lipmaa13,USENIX:BCTV14,EC:Groth16} with many works proposing significant improvements in proof size, verifier and prover efficiency, and complexity of the public setup, less attention has been paid to non-malleable zkSNARKs and succinct signatures of knowledge~\cite{C:CamSta97,C:ChaLys06} (sometimes abbreviated SoK or referred to as SNARKY signatures~\cite{C:GroMal17,EPRINT:BKSV20}). \paragraph{Relevance of simulation extractability.} Most zkSNARKs are shown only to satisfy a standard knowledge soundness property. Intuitively, this guarantees that a prover that creates a valid proof in isolation knows a valid witness. However, deployments of zkSNARKs in real-world applications, unless they are carefully designed to have application-specific malleability protection, e.g.~\cite{SP:BCGGMT14}, require a stronger property -- \emph{simulation-extractability} (SE) -- that corresponds much more closely to existential unforgeability of signatures. This correspondence is made precise by SoK, which use an NP-language instance as the public verification key. Instead of signing with the secret key, SoK signing requires knowledge of the NP-witness. Intuitively, an SoK is thus a proof of knowledge (PoK) of a witness that is tied to a message. In fact, many signatures schemes, e.g., Schnorr, can be read as SoK for a specific hard relation, e.g., DL~\cite{AC:DHLW10}. To model strong existential unforgeability of SoK signatures, even when given an oracle for obtaining signatures on different instances, an attacker must not be able to produce new signatures. 
Chase and Lysyanskaya~\cite{C:ChaLys06} model this via the notion of simulation extractability which guarantees extraction of a witness even in the presence of simulated signatures. In practice, an adversary against a zkSNARK system also has access to proofs computed by honest parties that should be modeled as simulated proofs. The definition of knowledge soundness (KS) ignores the ability of an adversary to see other valid proofs that may occur in real-world applications. For instance, in applications of zkSNARKs in privacy-preserving blockchains, proofs are posted on-chain for all blockchain participants to see. We thus argue that SE is a much more suitable notion for robust protocol design. We also claim that SE has primarily an intellectual cost, as it is harder to prove SE than KS---another analogy here is IND-CCA vs IND-CPA security for encryption. However, we will show that the proof systems we consider are SE out-of-the-box. \markulfdone{01.05}{Old: \paragraph{Signatures of knowledge} A signature of knowledge~\cite{C:CamSta97,C:ChaLys06} uses an instance of an NP-language as the public verification key. Instead of signing using a secret key, which typically would be related to the public key via a discrete logarithm or some other hard relation~\cite{AC:DHLW10}, SoK signing requires knowledge of the NP-witness. Chase and Lysyanskaya~\cite{C:ChaLys06} require signatures of knowledge to be simulatable to assure protection against signing key/witness extraction. Signatures can be simulated without a witness, given a trapdoor associated with the public setup. Furthermore, to model strong existential unforgeability of signatures, even when given an oracle for obtaining signatures on different instances, an attacker must not be able to produce new signatures. Chase and Lysyanskaya model this via the notion of simulation extractability (SE) which guarantees extraction of a witness even in the presence of simulated signatures. Moreover, Groth and Maller \cite{C:GroMal17} showed how to construct SoK from zkSNARK schemes that are simulation-extractable. Therefore, our focus can be moved from SoK to the main building block, zkSNARK schemes, for which we have many new efficient constructions in recent literature. \paragraph{Relevance of simulation extractability.} Most zkSNARKs are shown only to satisfy a standard knowledge soundness property. Intuitively, this guarantees that a prover that creates valid proof knows a valid witness. However, deployments of zkSNARKs in real-world applications, unless they are carefully designed to have application-specific malleability protection, e.g.~\cite{SP:BCGGMT14}, require a stronger property -- \textit{simulation-extractability} -- that, as discussed above, corresponds more closely to existential unforgeability of signatures. In practice, an adversary against the zkSNARK has access to proofs provided by other parties using the same zkSNARK. The definition of knowledge soundness ignores the ability of an adversary to see other valid proofs that may occur in real-world applications. For instance, in applications of zkSNARKs in privacy-preserving blockchains, proofs are posted on the chain for all blockchain-participants to see. } % Therefore, it is necessary for a zero-knowledge proof system to be % \emph{non-malleable}, that is, resilient against adversaries that additionally get % to see proofs generated by different parties before trying to forge. 
Therefore, it % is necessary for a zero-knowledge proof system to be \emph{simulation-extractable}, % that is, resilient against adversaries that additionally get to see proofs % generated by different parties before trying to forge. This captures the more % general scenario where an adversary against the zkSNARK has access to proofs % provided by other parties as it is in applications of zkSNARKs in % privacy-preserving blockchains, where proofs are posted on the chain for all % participants in the network to verify. \paragraph{Fiat--Shamir-based zkSNARKs.} Most modern zkSNARK constructions follow a modular blueprint that involves the design of an information-theoretic interactive protocol, e.g. an Interactive Oracle Proof (IOP)~\cite{TCC:BenChiSpo16}, that is then compiled via cryptographic tools to obtain an interactive argument system. This is then turned into a zkSNARK using the Fiat-Shamir transform. By additionally hashing the message, the Fiat-Shamir transform is also a popular technique for constructing signatures. While well-understood for 3-message sigma protocols and justifiable in the ROM~\cite{CCS:BelRog93}, Fiat--Shamir should be used with care because there are both counterexamples in theory~\cite{FOCS:GolKal03} and real-world attacks in practice when implemented incorrectly~\cite{Blog:FrozenHeart20}. %The Fiat--Shamir (FS) transform takes a public-coin interactive protocol and makes it interactive by hashing the current protocol transcript to compute the verifier's public coins. % %The FS transform is a popular design tool for constructing %zkSNARKs. In the updatable universal SRS setting, works like \sonic{}~\cite{CCS:MBKM19} %\plonk{}~\cite{EPRINT:GabWilCio19}, and \marlin~\cite{EC:CHMMVW20} are designed %and proven secure as multi-round interactive protocols. Security is then only %\emph{conjectured} for their non-interactive variants by employing the FS %transform. In particular, several schemes such as Sonic~\cite{CCS:MBKM19}, Plonk~\cite{EPRINT:GabWilCio19}, Marlin~\cite{EC:CHMMVW20} follow this approach where the information-theoretic object is a multi-message algebraic variant of IOP, and the cryptographic primitive in the compiler is a polynomial commitment scheme (PC) that requires a trusted setup. To date, this blueprint lacks an analysis in the ROM in terms of simulation extractability. \paragraph{Updatable SRS zkSNARKs.} One of the downsides of many efficient zkSNARKs~\cite{AC:Groth10a,TCC:Lipmaa12,EC:GGPR13,SP:PHGR13,AC:Lipmaa13,AC:DFGK14,EC:Groth16} is that they rely on a \textit{trusted setup}, where there is a structured reference string (SRS) that is assumed to be generated by a trusted party. In practice, however, this assumption is not well-founded; if the party that generates the SRS is not honest, they can produce proofs for false statements. If the trusted setup assumption does not hold, knowledge soundness breaks down. Groth et al.~\cite{C:GKMMM18} propose a setting to tackle this challenge which allows parties -- provers and verifiers -- to \emph{update} the SRS.\footnote{This can be seen as an efficient player-replaceable~\cite{EPRINT:GHMVZ17} multi-party computation.} The update protocol takes an existing SRS and contributes to its randomness in a verifiable way to obtain a new SRS. The guarantee in this \textit{updatable setting} is that knowledge soundness holds as long as one of the parties updating the SRS is honest. 
The SRS is also \emph{universal}, in that it does not depend on the relation to be proved but only on an upper bound on the size of the statement's circuit. Although inefficient, as the SRS size is quadratic in the size of the circuit,~\cite{C:GKMMM18} set a new paradigm for designing zkSNARKs. The first universal zkSNARK with updatable and linear size SRS was Sonic proposed by Maller et al.~in \cite{CCS:MBKM19}. Subsequently, Gabizon, Williamson, and Ciobotaru designed Plonk~\cite{EPRINT:GabWilCio19} which currently is the most efficient updatable universal zkSNARK. Independently, Chiesa et al.~\cite{EC:CHMMVW20} proposed Marlin with comparable efficiency to Plonk. %\chaya{02.05}{we spelled out all author names for Plonk, but not for Sonic and Marlin?}\markulf{1.5}{I did it because it was only three and plonk is our result in the body.} \paragraph{The challenge of SE in the updatable setting.} The notion of simulation-extractability for zkSNARKs which is well motivated in practice, has not been studied in the updatable setting. Consider the following scenario: We assume a ``rushing'' adversary that starts off with a sequence of updates by malicious parties resulting in a subverted reference string $\srs$. By combining their trapdoor contributions and employing the simulation algorithm, these parties can easily compute a proof to obtain a triple $(\srs,\inp,\pi)$ that convinces the verifier of a statement $\inp$ without knowing a witness. Now, assume that at a later stage, a party produces a triple $(\srs',\inp,\pi')$ for the same statement with respect to an updated $\srs'$ that has an honest update contribution. We want the guarantee that this party must know a witness corresponding to $\inp$. The ability to ``maul" the proof $\pi$ from the old SRS to a proof $\pi'$ for the new SRS without knowing a witness would clearly violate security. The natural idea is to require that honestly \emph{updated} reference strings are indistinguishable from honestly \emph{generated} reference strings even for parties that previously contributed updates. However, this is not sufficient as the adversary can also rush toward the end of the SRS generation ceremony to perform the last update. %That is, an adversary who does not know the trapdoor for the update from $\srs$ to $\srs'$ should not be able to break SE. % as long as there was at least one honest update to $\srs$.\markulf{30/09/2021}{We currently don't achieve this strong USE notion.} A definition of SE in the updatable setting should take these additional powers of the adversary, which are not captured by existing definitions of SE, into consideration. While generic compilers~\cite{EPRINT:KZMQCP15,CCS:AbdRamSla20short} can be applied to updatable SRS SNARKs to obtain SE, not only do they inevitably incur overheads and lead to efficiency loss, we contend that the standard definition of SE does not suffice in the updatable setting. \subsection{Our Contributions} We investigate the non-malleability properties of zkSNARK protocols obtained by FS-compiling multi-message protocols in the updatable SRS setting and give a modular approach to analyze their simulation-extractability. We make the following contributions: \begin{itemize} \item \emph{Updatable simulation extractability (USE)}. We propose a definition of simulation extractability in the updatable SRS setting called USE, that captures the additional power the adversary gets by being able to update the SRS. 
\item \emph{Theorem for USE of FS-compiled proof systems.} We define three notions in the updatable SRS and ROM model, \emph{trapdoor-less zero-knowledge}, a \emph{unique response} property, and \emph{rewinding-based knowledge soundness}. Our main theorem shows that multi-message FS-compiled proof systems that satisfy these notions \emph{are USE out-of-the box}. %Informally, our notion of rewinding-based knowledge soundness is a variant of special soundness where %the transcripts provided to the extractor are obtained through interaction with an honest verifier, and %the extraction guarantee is computational %instead of unconditional. %Our extractor only needs oracle access to the %adversary, it does not depend on the adversary’s code, nor does it rely on %knowledge assumptions. \item \emph{USE for concrete zkSNARKs.} We prove that the most efficient updatable SRS SNARKS -- Plonk/Sonic/Marlin -- satisfy the premises of our theorem. We thus show that these zkSNARKs are updatable simulation extractable. %In instantiating our general theorem for these concrete zkSNARK candidates, we rely on the algebraic group model (AGM). \item \emph{SNARKY signatures in the updatable setting.} Our results validate the folklore that the Fiat--Shamir transform is a natural means for constructing signatures of knowledge. This gives rise to the first SoK in the updatable setting and confirms that a much larger class of zkSNARKs, besides \cite{C:GroMal17}, can be lifted to SoK. \item \emph{Broad applicability.} The updatable SRS plus ROM model includes both the trusted SRS and the ROM model as special cases. This implies the relevance of our theorem for transparent zkSNARKs such as Halo2 and Plonky2 that replace the polynomial commitments of~Kate et al.~\cite{AC:KatZavGol10} with commitments from Bulletproof~\cite{SP:BBBPWM18} and STARKs~\cite{EPRINT:BBHR18}, respectively. \end{itemize} \subsection{Technical Overview} %unique response, forking special soundness. General theorem without additional assumptions. to apply the theorem to concrete schemes like plonk, we show it satisfies unique response, forking soundness, in AGM. At a high level, the proof of our main theorem for updatable simulation extractability is along the lines of the simulation extractability proof for FS-compiled sigma protocol from~\cite{INDOCRYPT:FKMV12}. However, our theorem introduces new notions that are more general to allow us to consider proof systems that are richer than sigma protocols and support an updatable setup. We discuss some of the technical challenges below. \plonk{}, \sonic{}, and \marlin{} were originally presented as interactive proofs of knowledge that are made non-interactive via the Fiat--Shamir transform. In the following, we denote the underlying interactive protocols by $\plonkprot$ (for $\plonk$), $\sonicprot$ (for $\sonic$), and $\marlinprot$ (for \marlin) and the resulting non-interactive proof systems by $\plonkprotfs$, $\sonicprotfs$, $\marlinprotfs$ respectively. \oursubsub{Rewinding-Based Knowledge Soundness (RBKS).} Following~\cite{INDOCRYPT:FKMV12}, one would have to show that for the protocols we consider, a witness can be extracted from sufficiently many valid transcripts with a common prefix. The standard definition of special soundness for sigma protocols requires the extraction of a witness from any two transcripts with the same first message. However, most zkSNARK protocols do not satisfy this notion. 
We put forth a notion analogous to special soundness that is more general and applicable to a wider class of protocols, namely protocols compiled using multi-round FS that rely on an (updatable) SRS. $\plonkprot$, $\sonicprot$, and $\marlinprot$ have more than three messages, and the number of transcripts required for extraction is more than two. Concretely, $(3 \noofc + 6)$ for Plonk, $(\noofc + 1)$ for Sonic %, where $\multconstr$ and $\linconstr$ are the numbers of multiplicative and linear constraints
, and $(2 \noofc + 3)$ for Marlin, where $\noofc$ is the number of constraints in the proven circuit. Hence, we do not have a pair of transcripts but a \emph{tree of transcripts}.

Furthermore, the protocols we consider are arguments and rely on an SRS that comes with a trapdoor. An adversary in possession of the trapdoor can produce multiple valid proof transcripts, potentially for false statements, without knowing any witness. This is true even in the updatable setting, where a trapdoor still exists for any updated SRS. Recall that the standard special soundness definition requires witness extraction from \emph{any} suitably structured tree of accepting transcripts. This means that there are no such trees for false statements. Instead, we give a rewinding-based knowledge soundness definition with an extractor that proceeds in two steps. It first uses a tree building algorithm $\tdv$ to obtain a tree of transcripts. In the second step, it uses a tree extraction algorithm $\extcss$ to compute a witness from this tree. Rewinding-based knowledge soundness guarantees that it is possible to extract a witness from all (but negligibly many) trees of accepting transcripts produced by probabilistic polynomial time (PPT) adversaries. That is, if extraction from such a tree fails, then we break an underlying computational assumption. Moreover, this should hold even against adversaries that contribute to the SRS generation.

\oursubsub{Unique Response Protocols (UR).}
Another property required to show simulation extractability is the unique response property, which says that for $3$-message sigma protocols, the response of the prover ($3$-rd message) is determined by the first message and the challenge~\cite{C:Fischlin05} (intuitively, the prover can only employ fresh randomness in the first message of the protocol). We cannot use this definition since the protocols we consider have multiple rounds of randomized prover messages. In Plonk, both the first and the third messages are randomized. Although the Sonic prover is deterministic after it picks its first message, the protocol has more than $3$ messages. The same holds for Marlin. We propose a generalization of the unique response property called $\ur{k}$. It requires that the behavior of the prover be determined by the first $k$ of its messages. For our proof, it is sufficient that Plonk is $\ur{3}$, and Sonic and Marlin are $\ur{2}$.

\oursubsub{Trapdoor-Less Zero-Knowledge (TLZK).}
The premises of our main theorem include two computational properties that do not mention a simulator, RBKS and UR. The theorem states that, together with a suitable property of the zero-knowledge simulator, they imply USE.
% Our key technique is to simulate simulation queries when reducing to RBKS and UR.
For this it is convenient that the zero-knowledge simulator be trapdoor-less, that is, it can produce proofs without relying on knowledge of the trapdoor. Simulation is based purely on the simulator's early control over the challenge.
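Informally, and eliding the SRS and the exact encoding of random-oracle queries (the symbols $a_i$, $c_i$, and $\mathcal{H}$ are used only for this sketch), suppose the $i$-th challenge of the Fiat--Shamir-compiled protocol is derived as $c_i = \mathcal{H}(\inp, a_1, c_1, \ldots, a_i)$ from the instance $\inp$ and the prover messages $a_1, \ldots, a_i$. A simulator with early control over the $k$-th challenge may first sample $c_k$ itself and then program
\[
\mathcal{H}(\inp, a_1, c_1, \ldots, c_{k-1}, a_k) := c_k,
\]
answering all other random-oracle queries honestly.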
% In the ROM this corresponds to a simulator that programs the random oracle and can be understood as a generalization of honest-verifier zero-knowledge for multi-message Fiat--Shamir transformed proof systems with an SRS. We say that such a proof system is $k$-TLZK, if the simulator only programs the $k$-th challenge and we construct such simulators for $\plonkprotfs$, $\sonicprotfs$, and $\marlinprotfs$. %To invoke our main theorem %on (Fiat--Shamir transformed) Plonk, Sonic and Marlin to conclude USE, we also need %to show that simulators in these protocols produce proofs without relying on the %knowledge of trapdoor. More precisely, for our reduction, we need simulators that rely %only on reordering the messages and picking suitable verifier challenges without %knowing the SRS trapdoor. That is, any PPT party should be able to produce a %simulated proof on its own in a trapdoor-less way. Note that this property does not %necessarily break the soundness of the protocol as the simulator is required only to %produce a transcript and is not involved in a real conversation with a real %verifier. Technically we will make use of the $k$-UR property together with the $k$-TLZK property to bound the probability that the tree produced by the tree builder $\tdv$ of RBKS contains any programmed random oracle queries. \subsection{Related Work} There are many results on simulation extractability for non-interactive zero-knowledge proofs (NIZKs). First, Groth \cite{AC:Groth07} noticed that a (black-box) SE NIZK is universally-composable (UC)~\cite{EPRINT:Canetti00}. Then Dodis et al.~\cite{AC:DHLW10} introduced a notion of (black-box) \emph{true simulation extractability} (i.e., SE with simulation of true statements only) and showed that no NIZK can be UC-secure if it does not have this property. In the context of zkSNARKs, the first SE zkSNARK was proposed by Groth and Maller~\cite{C:GroMal17} and a SE zkSNARK for QAP was designed by Lipmaa~\cite{EPRINT:Lipmaa19a}. Kosba et al.~\cite{EPRINT:KZMQCP15} give a general transformation from a NIZK to a black-box SE NIZK. Although their transformation works for zkSNARKs as well, the succinctness of the proof system is not preserved by this transformation. Abdolmaleki et al.~\cite{CCS:AbdRamSla20short} showed another transformation that obtains non-black-box simulation extractability but also preserves the succinctness of the argument. The zkSNARK of~\cite{EC:Groth16} has been shown to be SE by introducing minor modifications to the construction and making stronger assumptions \cite{EPRINT:BowGab18,EPRINT:AtaBag19}. Recently,~\cite{EPRINT:BKSV20} showed that the Groth's original proof system from~\cite{EC:Groth16} is weakly SE and randomizable. None of these results are for zkSNARKs in the updatable SRS setting or for zkSNARKs obtained via the Fiat--Shamir transformation. The recent work of~\cite{cryptoeprint:GOPTT22} shows that Fiat--Shamir transformed Bulletproofs are simulation extractable. While they show a general theorem for multi-round protocols, they do not consider a setting with an SRS, and are therefore inapplicable to zkSNARKs in the updatable SRS setting. %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End:
{ "alphanum_fraction": 0.7998925709, "avg_line_length": 88.652014652, "ext": "tex", "hexsha": "472535c9f0b4df2c64f1d583549a52e18dcc1cb4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "clearmatics/research-plonkext", "max_forks_repo_path": "SCN2022/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "clearmatics/research-plonkext", "max_issues_repo_path": "SCN2022/introduction.tex", "max_line_length": 1168, "max_stars_count": null, "max_stars_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "clearmatics/research-plonkext", "max_stars_repo_path": "SCN2022/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5860, "size": 24202 }
\subsection*{Unit 1, Exercise 4}

Read this article about the history of the mobile phone. Decide if the verbs need to be active or passive and put them in the right form.

\begin{answer}
\begin{tabular}{lll}
1 & make & was made \\
2 & walk & walked \\
3 & look & looked \\
4 & use & used \\
5 & introduce & was introduced \\
6 & cost & cost \\
7 & be & are \\
8 & weigh & weigh \\
9 & can buy & can be bought \\
10 & use & are used \\
11 & use & will be used \\
12 & not seem & do not seem \\
\end{tabular}
\end{answer}
{ "alphanum_fraction": 0.6208178439, "avg_line_length": 26.9, "ext": "tex", "hexsha": "b7c1817e1db43606c7ff3e9ac7e7f0b458d177ae", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-21T11:48:44.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-21T11:48:44.000Z", "max_forks_repo_head_hexsha": "fda508243fa6950340a802a1ef0e5b00bc83e2c2", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "AChep/khpi-latex", "max_forks_repo_path": "public/english/section_1_4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fda508243fa6950340a802a1ef0e5b00bc83e2c2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "AChep/khpi-latex", "max_issues_repo_path": "public/english/section_1_4.tex", "max_line_length": 137, "max_stars_count": null, "max_stars_repo_head_hexsha": "fda508243fa6950340a802a1ef0e5b00bc83e2c2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "AChep/khpi-latex", "max_stars_repo_path": "public/english/section_1_4.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 166, "size": 538 }
\documentclass[]{article} \usepackage[left=1in,top=1in,right=1in,bottom=1in]{geometry} %%%% more monte %%%% % thispagestyle{empty} % https://stackoverflow.com/questions/2166557/how-to-hide-the-page-number-in-latex-on-first-page-of-a-chapter \usepackage{color} % \usepackage[table]{xcolor} % are they using color? % \definecolor{WSU.crimson}{HTML}{981e32} % \definecolor{WSU.gray}{HTML}{5e6a71} % \definecolor{shadecolor}{RGB}{248,248,248} \definecolor{WSU.crimson}{RGB}{152,30,50} % use http://colors.mshaffer.com to convert from 981e32 \definecolor{WSU.gray}{RGB}{94,106,113} %%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand*{\authorfont}{\fontfamily{phv}\selectfont} \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{abstract} \renewcommand{\abstractname}{} % clear the title \renewcommand{\absnamepos}{empty} % originally center \renewenvironment{abstract} {{% \setlength{\leftmargin}{0mm} \setlength{\rightmargin}{\leftmargin}% }% \relax} {\endlist} \makeatletter \def\@maketitle{% \pagestyle{empty} \newpage % \null % \vskip 2em% % \begin{center}% \let \footnote \thanks {\fontsize{18}{20}\selectfont\raggedright \setlength{\parindent}{0pt} \@title \par}% } %\fi \makeatother \title{\textbf{\textcolor{WSU.crimson}{Will Smith Versus Denzel Washington}} \newline \textbf{\textcolor{WSU.gray}{Which is the better actor?}} } % % \author{ \Large true \hfill \normalsize \emph{} } \author{\Large Jessica Smith\vspace{0.05in} \newline\normalsize\emph{Washington State University} } \date{December 08, 2020} \setcounter{secnumdepth}{3} \usepackage{titlesec} % See the link above: KOMA classes are not compatible with titlesec any more. Sorry. % https://github.com/jbezos/titlesec/issues/11 \titleformat*{\section}{\bfseries} \titleformat*{\subsection}{\bfseries\itshape} \titleformat*{\subsubsection}{\itshape} \titleformat*{\paragraph}{\itshape} \titleformat*{\subparagraph}{\itshape} % https://code.usgs.gov/usgs/norock/irvine_k/ip-092225/ %\titleformat*{\section}{\normalsize\bfseries} %\titleformat*{\subsection}{\normalsize\itshape} %\titleformat*{\subsubsection}{\normalsize\itshape} %\titleformat*{\paragraph}{\normalsize\itshape} %\titleformat*{\subparagraph}{\normalsize\itshape} % https://tex.stackexchange.com/questions/233866/one-column-multicol-environment#233904 \usepackage{environ} \NewEnviron{auxmulticols}[1]{% \ifnum#1<2\relax% Fewer than 2 columns %\vspace{-\baselineskip}% Possible vertical correction \BODY \else% More than 1 column \begin{multicols}{#1} \BODY \end{multicols}% \fi } \usepackage{natbib} \setcitestyle{aysep={}} %% no year, comma just year % \usepackage[numbers]{natbib} \bibliographystyle{plainnat} \usepackage[strings]{underscore} % protect underscores in most circumstances \newtheorem{hypothesis}{Hypothesis} \usepackage{setspace} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% MONTE ADDS %%% \usepackage{fancyhdr} % fancy header \usepackage{lastpage} % last page \usepackage{multicol} \usepackage{etoolbox} \AtBeginEnvironment{quote}{\singlespacing\small} % https://tex.stackexchange.com/questions/325695/how-to-style-blockquote \usepackage{soul} %% allows strike-through \usepackage{url} %% fixes underscores in urls \usepackage{csquotes} %% allows \textquote in references \usepackage{rotating} %% allows table and box rotation \usepackage{caption} %% customize caption information \usepackage{booktabs} %% enhance table/tabular environment \usepackage{tabularx} %% width attributes updates tabular \usepackage{enumerate} %% special item environment \usepackage{enumitem} 
%% special item environment \usepackage{lineno} %% allows linenumbers for editing using \linenumbers \usepackage{hanging} \usepackage{mathtools} %% also loads amsmath \usepackage{bm} %% bold-math \usepackage{scalerel} %% scale one element (make one beta bigger font) \newcommand{\gFrac}[2]{ \genfrac{}{}{0pt}{1}{{#1}}{#2} } \newcommand{\betaSH}[3]{ \gFrac{\text{\tiny #1}}{{\text{\tiny #2}}}\hat{\beta}_{\text{#3}} } \newcommand{\betaSB}[3]{ ^{\text{#1}} _{\text{#2}} \bm{\beta} _{\text{#3}} } %% bold \newcommand{\bigEQ}{ \scaleobj{1.5}{{\ }= } } \newcommand{\bigP}[1]{ \scaleobj{1.5}{#1 } } \usepackage{endnotes} % he already does this ... \renewcommand{\enotesize}{\normalsize} % https://tex.stackexchange.com/questions/99984/endnotes-do-not-be-superscript-and-add-a-space \renewcommand\makeenmark{\textsuperscript{[\theenmark]}} % in brackets % % https://tex.stackexchange.com/questions/31574/how-to-control-the-indent-in-endnotes \patchcmd{\enoteformat}{1.8em}{0pt}{}{} \patchcmd{\theendnotes} {\makeatletter} {\makeatletter\renewcommand\makeenmark{\textbf{[\theenmark]} }} {}{} % https://tex.stackexchange.com/questions/141906/configuring-footnote-position-and-spacing \addtolength{\footnotesep}{5mm} % change to 1mm \renewcommand{\thefootnote}{\textbf{\arabic{footnote}}} \let\footnote=\endnote %\renewcommand*{\theendnote}{\alph{endnote}} %\renewcommand{\theendnote}{\textbf{\arabic{endnote}}} \renewcommand*{\notesname}{} \makeatletter \def\enoteheading{\section*{\notesname \@mkboth{\MakeUppercase{\notesname}}{\MakeUppercase{\notesname}}}% \mbox{}\par\vskip-2.3\baselineskip\noindent\rule{.5\textwidth}{0.4pt}\par\vskip\baselineskip} \makeatother \renewcommand*{\contentsname}{TABLE OF CONTENTS} \renewcommand*{\refname}{REFERENCES} %\usepackage{subfigure} \usepackage{subcaption} \captionsetup{labelfont=bf} % Make Table / Figure bold %%% you could add elements here ... monte says .... %%% %\usepackage{mypackageForCapitalH} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother % move the hyperref stuff down here, after header-includes, to allow for - \usepackage{hyperref} \makeatletter \@ifpackageloaded{hyperref}{}{% \ifxetex \PassOptionsToPackage{hyphens}{url}\usepackage[setpagesize=false, % page size defined by xetex unicode=false, % unicode breaks when used with xetex xetex]{hyperref} \else \PassOptionsToPackage{hyphens}{url}\usepackage[draft,unicode=true]{hyperref} \fi } \@ifpackageloaded{color}{ \PassOptionsToPackage{usenames,dvipsnames}{color} }{% \usepackage[usenames,dvipsnames]{color} } \makeatother \hypersetup{breaklinks=true, bookmarks=true, pdfauthor={Jessica Smith (Washington State University)}, pdfkeywords = {Will Smith, Denzel Washington, IMDB, Multivariate Analysis}, pdftitle={Will Smith Versus Denzel Washington: Which is the better actor?}, colorlinks=true, citecolor=blue, urlcolor=blue, linkcolor=magenta, pdfborder={0 0 0}} \urlstyle{same} % don't use monospace font for urls % Add an option for endnotes. 
----- % % add tightlist ---------- \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} % add some other packages ---------- % \usepackage{multicol} % This should regulate where figures float % See: https://tex.stackexchange.com/questions/2275/keeping-tables-figures-close-to-where-they-are-mentioned \usepackage[section]{placeins} \pagestyle{fancy} \lhead{\textcolor{WSU.crimson}{\textbf{ Will Smith Versus Denzel Washington }}} \chead{} \rhead{\textcolor{WSU.gray}{\textbf{ Page\ \thepage\ of\ \protect\pageref{LastPage} }}} \lfoot{} \cfoot{} \rfoot{} \begin{document} % \pagenumbering{arabic}% resets `page` counter to 1 % % \maketitle {% \usefont{T1}{pnc}{m}{n} \setlength{\parindent}{0pt} \thispagestyle{plain} {\fontsize{18}{20}\selectfont\raggedright \maketitle % title \par } { \vskip 13.5pt\relax \normalsize\fontsize{11}{12} \textbf{\authorfont Jessica Smith} \hskip 15pt \emph{\small Washington State University} } } \begin{abstract} \hbox{\vrule height .2pt width 39.14pc} \vskip 8.5pt % \small \noindent This project explored multiple variables from the IMDB database in order to determine which actor is better - Denzel Washington or Will Smith. A mathematical analysis was performed to quantitatively evaluate which actor is better using appropriate data features selected by the author. An adjacency matrix was constructed from data on 50 contemporary actors, and the eigen ratings of each actor were calculated. The results of the multivariate analysis showed that Denzel Washington can be considered the better actor. \vskip 8.5pt \noindent \textbf{\underline{Keywords}:} Will Smith, Denzel Washington, IMDB, Multivariate Analysis \par \hbox{\vrule height .2pt width 39.14pc} \vskip 5pt \hfill \textbf{\textcolor{WSU.gray}{ December 08, 2020 } } \vskip 5pt \end{abstract} \vskip -8.5pt % removetitleabstract \noindent \section{Introduction} \label{sec:intro} The IMDB database is a public repository containing information related to movies and actors, including cast, crew, ratings, and earnings data. This project attempted to investigate, compare, and draw conclusions regarding two well-known actors by performing multivariate analysis on the data gleaned from the IMDB website. Previous work into the subject evaluated the actors by considering box office sales and IMDB movie rankings. It was shown that the median box office sales, when adjusted for inflation, were about the same for the two stars. The median Metacritic ratings for each actors films were also similar and did not enable a meaningful distinction. As the results were inconclusive, further investigation is needed to provide deterministic insights into which actor is ``better''. This project will look at multiple variables and perform a mathematical analysis to attempt to quantitatively evaluate which actor is better using the available data features. This an important demonstration of how data analysis can be used to empirically inform a response to an otherwise hard to answer and somewhat subjective question. The author has no predisposition toward either actor, and the information presented is intended to provide an unbiased and data driven perspective. \section{Overview} \label{sec:Overview} Denzel Washington and Will Smith are two well-known movie stars with comparable ratings and extensive filmographies. Denzel is 66 years old and made his first movie in 1981, while Will Smith is 52 and made his first movies in 1992. 
Will Smith has 111 films listed on IMDB with a mean rating of 6.2, while Denzel has 61 films listed with a mean of 6.8. Both actors are listed in the top 500 on IMDB, and differentiating an advantage between the two on the basis of published statistics has proven challenging.\newline
The IMDB data contains metrics on movies in which the actors have appeared, including ratings, box office sales, and Metacritic scores, as well as info on the cast and crew. For actors, their rating within each movie is available, allowing for the determination of whether their role was a lead or whether they played a less prominent character. Demographic data and star meter rank were also available for each actor. \newline
The question of which actor is ``better'' is inherently subjective, and the choice of which factors to consider in the analysis was given careful consideration. Calculating a diversity score from an analysis of the gender breakdown of the cast and crew for each movie was suggested. However, different genres can have different casting requirements, and the outcome of any analysis may be more indicative of which genre an actor prefers than anything else. A preliminary assessment of the diversity index of Will Smith movies, for example, showed a wide variation in score across genres. A strong correlation between gender diversity and acting acumen appears unlikely, and gender/diversity score was not selected as a component of the analysis. \newline
Another available factor was the ``star meter'' ranking, a measure of popularity created by IMDB that is a function of the number of credits a person has, the popularity of the work a star appears in, and traffic to the celebrity's profile. While this initially seemed promising, after closer inspection, many of the actors used to build the data set had identical ratings. This rating also appears as a static snapshot that reflects the score today, rather than at the time a particular movie was released, and it's unclear how these ratings might change over time. This metric was deemed unreliable and non-deterministic, and was not included in the final analysis. \newline
The analysis required building a dataframe for each actor with the desired covariates. For each film an actor was associated with, the movie rating, Metacritic score, box office sales, and actor rank were selected. The decision was made to only consider movies where the star had been a headliner, defined as having an actor rank of 1, 2, or 3 for a given movie. This was done in an attempt to make sure the covariates were mainly a function of the actor being considered, not someone else who may have been in the movie and had a larger role. The assumption was made that a ``better'' actor would appear as the lead more often than in a supporting role, so the ratio of leading roles to total movies was also considered. Box office sales were adjusted to 2020 dollars, and the means of the ratings, scores, and sales were calculated for each actor. \newline
To obtain a more robust analysis, Will and Denzel should be compared to more actors than just each other. A pool of 48 contemporaries was drawn from the actorRank2000.rds provided by the instructor, ensuring only candidates from the modern era were considered. Actor IDs were randomly selected and screened for quality. The criteria were at least 10 movies where the actor ranked 3 or lower, and complete data for ratings, Metacritic scores, and box office sales.
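For concreteness, the snippet below sketches how this screening and aggregation step, and the eigen-ranking described in the next paragraphs, could be carried out. It is illustrative only: the column names (\texttt{actor\_rank}, \texttt{rating}, \texttt{metacritic}, \texttt{box\_office\_2020}) are assumptions rather than the actual field names in the IMDB extract, the inner-product construction of the adjacency matrix is one plausible reading of the procedure rather than a specification of it, and the analysis itself was performed with the course tooling rather than this code.

\begin{verbatim}
import numpy as np
import pandas as pd

def actor_covariates(films: pd.DataFrame) -> pd.Series:
    """films: one row per movie for a single actor (assumed column names)."""
    lead = films[films["actor_rank"] <= 3]            # headliner roles only
    return pd.Series({
        "mean_rating":     lead["rating"].mean(),
        "mean_metacritic": lead["metacritic"].mean(),
        "mean_sales":      lead["box_office_2020"].mean(),
        "lead_ratio":      len(lead) / len(films),    # leading roles vs. all films
    })

def eigen_ratings(covariates: pd.DataFrame) -> pd.Series:
    """covariates: one row per actor. Scale each column by its maximum,
    form an actor-by-actor adjacency matrix, and take the leading
    eigenvector entries as each actor's rating."""
    scaled = covariates / covariates.max()
    adjacency = scaled.values @ scaled.values.T       # non-negative, actor x actor
    vals, vecs = np.linalg.eig(adjacency)
    top = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pd.Series(top / np.linalg.norm(top), index=covariates.index)
\end{verbatim}

In this sketch, applying \texttt{actor\_covariates} to each actor's filmography and then \texttt{eigen\_ratings} to the stacked results yields one rating per actor in the pool.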
Table 1 shows an example of the first ten actors in the pool and the final selection of covariates that were used in the analysis. \newline

\input{tables/table1.tex}

Using the entire pool of 50 actors, the data was standardized by dividing each value for a covariate by the maximum value for that feature found in the pool. The result was a $50 \times 4$ matrix where each value was a positive number between 0 and 1. From this standardized matrix, an actor-to-actor adjacency matrix was calculated, and each actor's entry in its leading eigenvector was taken as that actor's rating relative to the other actors in the pool. The eigenvector ranking quantified the approximate importance of each actor in the pool. From these eigen-rankings, an empirical evaluation of which actor is ``better'', Will or Denzel, could be obtained. \newline

\section{Key Findings} \label{sec:findings}

\input{tables/results.tex}

The results of the analysis show that Denzel Washington has a higher eigen-rank than Will Smith, indicating that, given the metrics evaluated, Denzel can be considered the better actor (.20 to .16). These results are in agreement with the rankings returned from the instructor's Actor-Actor matrix, which showed Denzel Washington scoring slightly higher than Will Smith (57.11 to 50.79). Given the author's lack of preference at the outset, the relative `closeness' of the results, and the alignment with existing research outcomes, the results can be concluded to be reasonable and acceptable.

\section{Conclusion} \label{sec:conclusion}

Denzel has a higher eigen-rank than Will Smith when considering the average ratings of the movies in which he starred, the average box office sales of those movies, and the ratio of leading to non-leading roles. The results of the multivariate analysis have shown that Denzel Washington is the better actor.

%% appendices go here!

\newpage

\theendnotes

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% biblio %%%%%%%%

\newpage

\begin{auxmulticols}{1}
\singlespacing

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% biblio %%%%%%%%

\end{auxmulticols}

\newpage

{
\hypersetup{linkcolor=black}
\setcounter{tocdepth}{3}
\tableofcontents
}

\end{document}
{ "alphanum_fraction": 0.7457218756, "avg_line_length": 31.5536062378, "ext": "tex", "hexsha": "d9beec6b41e9207a0d003b4ab4a6dbaf52d4609b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "20bf9168ee130548f6a619f1fb307717647f4f77", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jsmith0434/WSU_STATS419_FALL2020", "max_forks_repo_path": "final/Will_vs_Denzel/will-v-denzel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "20bf9168ee130548f6a619f1fb307717647f4f77", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jsmith0434/WSU_STATS419_FALL2020", "max_issues_repo_path": "final/Will_vs_Denzel/will-v-denzel.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "20bf9168ee130548f6a619f1fb307717647f4f77", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jsmith0434/WSU_STATS419_FALL2020", "max_stars_repo_path": "final/Will_vs_Denzel/will-v-denzel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4365, "size": 16187 }
\PassOptionsToPackage{svgnames}{xcolor} \documentclass[10pt,letterpaper]{article} \usepackage[top=.5in, bottom=.75in, left=.5in, right=.5in]{geometry} \usepackage{tcolorbox} \usepackage{lipsum} \tcbuselibrary{skins,breakable} \usetikzlibrary{shadings,shadows} \usepackage{graphicx} % Allows to include images \usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables \usepackage{multicol} \usepackage{float} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \title{\vspace{-.5in}Practice 5: Focus on the Game} \author{\vspace{-.5in}} \date{\vspace{-.5in}} \newenvironment{agendablock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightGray,colframe=Black,% colbacklower=Gray!75!LightGray,% title=#1]}% {\endtcolorbox} \newenvironment{evenBlock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightGreen,colframe=DarkGreen,% colbacklower=LimeGreen!75!LightGreen,% title=#1]}% {\endtcolorbox} \newenvironment{oddBlock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightBlue,colframe=DarkBlue,% colbacklower=DarkBlue!75!LightBlue,% title=#1]}% {\endtcolorbox} \newenvironment{myexampleblock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightGreen,colframe=DarkGreen,% colbacklower=LimeGreen!75!LightGreen,% title=#1]}% {\endtcolorbox} \newenvironment{myalertblock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightCoral,colframe=DarkRed,% colbacklower=Tomato!75!LightCoral,% title=#1]}% {\endtcolorbox} \newenvironment{myblock}[1]{% \tcolorbox[beamer,% noparskip,breakable, colback=LightBlue,colframe=DarkBlue,% colbacklower=DarkBlue!75!LightBlue,% title=#1]}% {\endtcolorbox} \usepackage{lmodern} \begin{document} \fontfamily{lmss}\selectfont \maketitle \begin{agendablock}{Practice Activities} \begin{enumerate} \item Warm ups / Coerver Touches [ 15 min ] \item Drills [ 35 min ] \item Small Sided Activity [ 15 min ] \item Small Sided Game [ 20 min ] \item Sprints [ 5 min ] \end{enumerate} \end{agendablock} \section{Warm Ups} Run the \textbf{CLOCKS} drill until everyone arrives and for a few minutes after. \textbf{Time: 2 minutes} \input{../Drills/Drill_Clocks} \textbf{Time: 3 minutes} \begin{myalertblock}{Theme of the Practice} \textbf{Elements of the Game!} \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item throw-ins \item corner kicks - offense \item corner kicks - defending \item direct kicks - defending \item direct kicks - taking \item keeper distribution: punt, throw, goal kick. \item kick off. \end{itemize} \end{myalertblock} \textbf{Time: 10 minutes} \input{../Warmups} \clearpage \section{Throw Ins} \textbf{Total Time: 15 minutes} \textbf{Time: 4 minutes} \begin{agendablock}{HOWTO: Basic Throw In:} \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item Both feet must be planted during the throw. \item Both hands must be on the ball. \item Ball must travel over and behind the throwers head. \end{itemize} \end{agendablock} \begin{agendablock}{HOWTO: Deep Throw In:} \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item For a more deeper throw in the player should advance to the line, \item Plant one foot and drag the other keeping it in contact with the ground. \item Both hands must be on the ball. \item Ball must travel over and behind the throwers head. \end{itemize} \end{agendablock} \textbf{Time: 5 minutes} \begin{evenBlock}{Throw-In to Feet} Have the boys partner up. 
Have one player stand out of bounds and throw in to the other player about 3 to 5 yards away. The throw should be directed to the feet of the field player. The field player should trap the ball at his feet. \end{evenBlock} \textbf{Time: 5 minutes} \begin{oddBlock}{Throw-In leading field player} Have the boys partner up. Have one player stand out of bounds and throw in to the other player about 3 to 5 yards away. The field player should be standing with his body open down field. The throw should be directed in front of the field player. The field player should make a first touch toward goal. \end{oddBlock} \section{Corner Kicks - Defense} Explain the meanings of `far post' and `near post'. \textbf{Time: 15 minutes} \begin{evenBlock}{Positions} Mid-Field Wings: At goal posts. Near post player positioned out of the goal to cut off that near post ball. Far post marker inside goal marking that edge. Center backs marking goal side. Far post Fullback and center midfielder mark goal side. Near post fullback marks a man. Center Forward near the 18 corner to cut off any passes out and ready to counter attack. \end{evenBlock} Once set up, the Coach will kick a corner for the team to handle. Switch sides of the field. \section{Corner Kicks - Offense} \textbf{Time: 15 minutes} \begin{evenBlock}{Positions} Forward and wing Mid-Fielders line up at the corner of the 18 yard box. Near post full back takes the kick. Center mid-fielder and Far post full back line up between the near post 18 yard line and the side line. Center mid runs to the PK spot, full back hovers playing the ball. Center backs should be at half field, keeper at his 18 yard line or further out. \end{evenBlock} \clearpage \section{Direct Kicks - Defense} \textbf{Time: 15 minutes} \begin{evenBlock}{Positions} Form a Wall as close to the kicker as possible - center forward, mid field wing(s). Make the referee move you away from the kick. Center: turn and face the keeper and adjust left or right as directed. Defenders mark goal side within the funnel. \end{evenBlock} \section{Direct Kicks - Offense} \textbf{Time: 15 minutes} \begin{evenBlock}{Positions} If the ball is in the \end{evenBlock} \section{Small Sided Activity} \textbf{Time: 15 minutes, Started at 1:15 PM to 1:20 PM} \input{../Drills/Drill_Block_Passing} \section{Game} \textbf{Start Time: 1:35 PM} \begin{oddBlock}{Small Sided} \textbf{Time:} 10 minute halves. \textbf{Size:} 4v4 or 5v5. Express: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item remind them about the practice goals and the expectation that there is a lot of movement and passing. \item Funnel Positioning. \item Go outside on our defensive half. \item Pass quickly down sidelines or into open space. \item Avoid passing backward. \item Make a pass early or move early. \end{itemize} \end{oddBlock} \section{Close} \begin{oddBlock}{Sprints (5 min)} Agility runs: run to a cone 5 yards away, stop and step circling the cone, then explode to the next cone, circling it, then explode sprinting to half field; jog back to the end line and repeat 3 times. \end{oddBlock} \end{document}
{ "alphanum_fraction": 0.7084783517, "avg_line_length": 29.8049792531, "ext": "tex", "hexsha": "c8286850cdd2f17ac2f1c2ffc5eb53943ffe511a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8c8903f0902260460d4842ac6a2c30712ecd9e13", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "talbot-j/TravelSoccer", "max_forks_repo_path": "Lessons/P5_Focus_on_the_Game/P5_Focus_on_the_Game.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8c8903f0902260460d4842ac6a2c30712ecd9e13", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "talbot-j/TravelSoccer", "max_issues_repo_path": "Lessons/P5_Focus_on_the_Game/P5_Focus_on_the_Game.tex", "max_line_length": 309, "max_stars_count": null, "max_stars_repo_head_hexsha": "8c8903f0902260460d4842ac6a2c30712ecd9e13", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "talbot-j/TravelSoccer", "max_stars_repo_path": "Lessons/P5_Focus_on_the_Game/P5_Focus_on_the_Game.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2093, "size": 7183 }
\section{Updates: 03/16/2020}% \label{sec:updates_2020_03_16} % \subsection{Current \& Future Items} \begin{itemize} \item \textbf{\uline{Symplectic}} \begin{itemize} \item With $\alpha_{S_{i}} = 0$ for \(i = x, v\), (only \(T_{i}, Q_{i}\) terms), this should be trivial. % \item With only \(T_{i}, Q_{i}\) terms, this should be trivial. \item Even with \(S\) terms, the formulas are pretty simple, and should be easy to verify once we can be sure \(S = 0\) is working properly. \item For now it is probably still enough to only consider \(T_{x}\) since we mainly want to understand the source of the bias. \item Once we understand that we can add more terms and work on tuning it better. \end{itemize} \item \textbf{\uline{Reversibility}} \begin{itemize} \item We can check that the trained sampler is reversible using the following procedure: \begin{enumerate} \item Randomly choose the direction \(d\) to create the initial state \(\xi = (x, v, d)\). \item Run the dynamics and flip the direction: \begin{equation} \mathbf{FL} \xi = \mathbf{F}\xi^{\prime} = (x^{\prime}, v^{\prime}, - d) \end{equation} % \item Flip the direction and run the dynamics in the reverse direction: \item Run the dynamics and flip the direction again: \begin{equation} \mathbf{FL} \xi^{\prime} = \mathbf{F} \xi^{\prime\prime} = (x^{\prime\prime}, v^{\prime\prime}, d) % \mathbf{F}\xi^{\prime} = (x^{\prime}, v^{\prime}, -d) \rightarrow % \xi^{\prime\prime} = (x^{\prime\prime}, v^{\prime\prime}, -d) \end{equation} \item Check the difference: \begin{align} \delta x &= x - x^{\prime\prime} \\ \delta v &= v - v^{\prime\prime} \end{align} \end{enumerate} % % \item Run the dynamics according to the following two procedures. % \begin{enumerate} % \item Run the dynamics backwards then forwards to get % \(\xi^{\mathrm{fb}}\). % \begin{equation} % \xi^{\mathrm{fb}} = \mathbf{FL}^{\mathrm{f}}\mathbf{FL}^{\mathrm{b}}\xi % \end{equation} % \item Run the dynamics forwards then backwards to get % \(\xi^{\mathrm{bf}}\). % \begin{equation} % \xi^{\mathrm{bf}} = \mathbf{FL}^{\mathrm{b}}\mathbf{FL}^{\mathrm{f}}\xi % \end{equation} % \end{enumerate} % \textbf{\item \color{red}{(AI1):}} Look for outliers in the reversibility % using the \(\max\) of the differences, \(\xi^{fb} - \xi\) and % \(\xi^{bf} - \xi\). \item \textbf{\color{red}{(AI2):}} The violations should get worse as the volume increases, so it is probably best to formulate the network in terms of the group variables \((\cos\phi_{\mu}(i),\sin\phi_{\mu}(i)) = e^{i\phi_{\mu}(i)}\) instead of the algebra \(\phi_{\mu}(i)\). This would easily apply to higher groups as well. For \(U(1)\), this doubles the size of the inputs, but should work better overall. \item Having the network treat \(0\) and \(2\pi\) differently is a potential source of reversibility violation, though it may be small in practice. Continue looking for evidence of this. % \begin{todolist} \color{red}{\item Using the above criterion it was observed that the sampler does indeed violate reversibility, as shown in Fig~\ref{fig:reverse_diffs}.}\color{black} \item In order to identify the root cause of the reversibility violation, I am currently working on stepping through the sub-updates of the dynamics code and checking reversibility at each step. \begin{figure}[htpb!] 
\centering \includegraphics[width=\textwidth]{updates_2020_03_16/reverse_diffs_traceplot.pdf} \caption{Traceplot of average differences \(\langle \delta x\rangle, \langle \delta v\rangle\) demonstrating the reversibility violation.}% \label{fig:reverse_diffs} \end{figure} % \color{blue}{\item[\done] \textbf{{(AI1, AI2):}}} Having looked through % existing inference data (specifically, those models for which \(\delta \phi_{P} % > 0\)), there don't appear to be any violations in reversibility. % \end{todolist} \end{itemize} \item \textbf{\uline{Ergodicity}} \begin{itemize} \item Technically, this may be an autocorrelation problem, but in practice it is difficult to distinguish them. \begin{itemize} \item This may be the main problem. L2HMC seems to only learn certain types of updates, but fails at general mixing. \end{itemize} \item \textbf{\color{red}{(AI3)}} In principle, there should be some initial conditions that give a negative bias, and some positive. Mapping out the bias distribution for different seeds may help confirm this, though the distribution may not be symmetric, and could have a large tail on one side. \item \textbf{\color{red}{(AI4)}} Alternating HMC with L2HMC is an idea to fix this. \item If L2HMC mixes poorly on its own, then we may need to run mostly HMC.\@ \item Perhaps, running \(N\) updates of HMC and \(1\) L2HMC, for varying \(N\), will avoid the L2HMC bias and show an improvement over either alone. \item We can periodically switch between L2HMC and generic HMC during inference to see if there is any benefit. An example of this procedure is shown in Fig~\ref{fig:mixed_samplers}. \end{itemize} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figures/updates_2020_03_16/mix_samplers.pdf} \caption{Results obtained by periodically mixing between L2HMC and HMC during inference.}% \label{fig:mixed_samplers} \end{figure} \begin{todolist} \color{blue}{\item[\done] \textbf{{(AI3, AI4):}}} No evidence in recent data showing a negative bias, although it may be that the distribution is \emph{very} one sided. Continuing to look for counter-examples. \end{todolist} \item \textbf{\uline{Training}} \begin{itemize} \item \textbf{\color{red}{(AI5)}} Run more tests with current code at \(T_{x} = 1\) to further map this distribution and see how the average distance, \(\delta x_{\mathrm{out}}\) and acceptance, \(A(\xi^{\prime}|\xi)\) % (\texttt{accept\(_\)prob}) after training correlate with the bias. \item \textbf{\color{red}{(AI6)}} The initial tests of alternate loss functions seemed promising. Continue to explore alternate loss functions. \begin{itemize} \item Maybe \(|\delta x| * A^{2}(\xi^{\prime}|\xi)\) \item Or others that avoid anything we can hopefully determine from \textbf{\color{red}{(AI5)}} (or other tests), that correlate with increased bias. \end{itemize} \item Try running on \(O(2)\) model in 1D compare against Yannicks results. \end{itemize} \item \textbf{\uline{Code/scaling}} \begin{itemize} \item Longer-term, we want to write L2HMC in QEX for Aurora. This would potentially be faster and easier to scale up. \begin{itemize} \item However, the full dense layer won't scale well, and would need to be replaced eventually. \end{itemize} \item \textbf{\color{red}{(AI7)}} One option is to replace the dense weight matrix with a low-rank approximation. This could be investigated by taking the SVD of the weight matrix and looking at how the singular values fall off. 
\begin{itemize} \item Could also replace the weight matrix after training (on the dense matrix), then run inference on a low-rank approximation to see how it compares. \end{itemize} \item \textbf{\color{red}{(AI8)}} We could experiment with a few other variants that would be easy to implement and scale well, like a local connection (stencil) in combination with a low-rank fully connected layer. This would be relatively easy to implement in QEX.\@ \item In the 2D case, the singular value decomposition (SVD) of a weight matrix \(W\) in the network can be written as: \begin{equation} W = USV^{H} \end{equation} where \(S = \mathrm{diag}(s)\) contains the singular values of \(W\) and \(U, V^{H}\) are unitary. The rows of \(V^{H}\) are the eigenvectors of \(W^{H}W\) and the columns of \(U\) are the eigenvectors of \(WW^{H}\). In both cases the corresponding (possibly non-zero) eigenvalues are given by \(s^{2}\). \item The amount of overall variance explained by the \(i^{th}\) pair of SVD vectors is given by \(s_{i}^2 / \sum_{j} s_{j}^{2}\), where \(s_{j}\) are the singular values (diagonal of \(S\)). \end{itemize} % \begin{todolist} % \color{blue}{\item[\done] \textbf{{(AI7, AI8):}}} We can look at the % \end{todolist} \begin{figure}[htpb!] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{updates_2020_03_16/xnet_x_layer_svd.pdf} \end{subfigure}% ~ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{updates_2020_03_16/vnet_v_layer_svd.pdf} \end{subfigure} \caption{Plots showing the percent of the total variance explained by the \(i^{th}\) singular value for two layers in our neural network.} \end{figure} \end{itemize} \subsection{Simplifying the \texorpdfstring{$x$}{x}-update}% {{{ \label{subsec:simplify_x_update}% % To determine which part of the algorithm is responsible for biasing the average % plaquette we have been trying to simplify the network/algorithm as much as % possible by removing individual items and One technique for determining the source of the bias in the plaquette is to remove/simplify individual components in the network, and see if any of these changes fixes the issue. One possible simplification that we have begun to explore is to combine the two \(x\) sub-updates into a single update by explicitly setting the ``site'' masks in Eqs.~\ref{eq:forward_update} and~\ref{eq:backward_update} to be % \begin{align} m^{t} &= [1, 1, \ldots, 1]\\ \bar{m}^{t} &= [0, 0, \ldots, 0] \end{align} % for \(t = 1, \ldots, N_{\mathrm{LF}}\). 
In this case, the forward update (\(d = 1\)) becomes: % \begin{align} x^{\prime} &= x\odot\exp{\left(\varepsilon S_{x}(\zeta_2)\right)} + \varepsilon\left(v^{\prime}\odot\exp{\left( \varepsilon Q_{x}(\zeta_{2})\right)} + T_{x}(\zeta_{2}) \right)\\ x^{\prime\prime} &= x^{\prime}\\ &= x\odot\exp{\left(\varepsilon S_{x}(\zeta_2)\right)} + \varepsilon\left(v^{\prime}\odot\exp{\left( \varepsilon Q_{x}(\zeta_{2})\right)} + T_{x}(\zeta_{2}) \right) \end{align} % And similarly for the backward (\(d = -1\)) update, which is the exact inverse of the forward one: % \begin{align} x^{\prime} &= x\\ x^{\prime\prime} &= {\left[x^{\prime} - \varepsilon\left(v^{\prime}\odot\exp(\varepsilon Q_{x}(\zeta_{3})) + T_{x}(\zeta_{3})\right)\right]\odot\exp\left({ -\varepsilon S_{x}(\zeta_{3}) }\right)} \end{align} % }}} \begin{figure}[htpb] \centering \includegraphics[width=\textwidth]{zero_masks/original_masks} \caption{Using the original (randomly) assigned site masks, we see the bias is present.} \end{figure} % \begin{figure}[htpb] \centering \includegraphics[width=\textwidth]{zero_masks/zero_masks2} \caption{Using the combined \(x\) sub-updates with \(m^{t} = [1, 1, \ldots, 1]\) and \(\bar{m}^{t} = [0, 0, \ldots, 0]\). We see that the bias has slightly improved, although both the acceptance rate and tunneling rate appear to suffer dramatically.} \end{figure} % % \clearpage % \section{TODO:} % \begin{todolist} % \item[\done] Export saved weights from tensorflow as arrays to make the inference run % portable and library independent. % \item Try with $O(2)$ model in $D = 1$. % \item Calculate the relative probabilities for the topological sectors using % Eq.~\ref{eq:rel_prob}. % % \item Get a reasonable estimate of the \textbf{integrated autocorrelation} % % time. % % \item Write unit tests for dynamics engine. % \end{todolist} % From Yannick's calculation, the relative probabilities for the topological % sectors is given by % \begin{equation} % P = \exp\left[-\frac{\beta}{2}{\frac{{(2\pi)}^2 n^2}{N_{s} N_{t}}}\right]
{ "alphanum_fraction": 0.6333437305, "avg_line_length": 47.4962962963, "ext": "tex", "hexsha": "1bccdc4833787158e8b1c77192ec58c525121fe0", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z", "max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "saforem2/l2hmc-qcd", "max_forks_repo_path": "doc/updates/updates_2020_02_25.tex", "max_issues_count": 21, "max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z", "max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "saforem2/l2hmc-qcd", "max_issues_repo_path": "doc/updates/updates_2020_02_25.tex", "max_line_length": 98, "max_stars_count": 32, "max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "saforem2/l2hmc-qcd", "max_stars_repo_path": "doc/updates/updates_2020_02_25.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z", "num_tokens": 3720, "size": 12824 }
\section{Banking}
{ "alphanum_fraction": 0.7, "avg_line_length": 5, "ext": "tex", "hexsha": "0503d0b054616c2edf1f10f390ea51308fe22432", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/economics/banking/01-00-Banking.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/economics/banking/01-00-Banking.tex", "max_line_length": 17, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/economics/banking/01-00-Banking.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 20 }
\documentclass[12pt]{article} % Language setting % Replace `english' with e.g. `pathspanish' to change the document language \usepackage[english]{babel} % Set page size and margins % Replace `letterpaper' with`a4paper' for UK/EU standard size \usepackage[a4paper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} % Useful packages \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorlinks=true, allcolors=black]{hyperref} \begin{document} %title \title{\underline{Book Of Specification - AutoPylot}} \date{January 2022 - May 2022} \author{% \\ Alexandre Girold\\ Mickael Bobovitch \\ Maxime Ellerbach \\ Maxime Gay \\ \\ Group: Autonomobile } \maketitle \centerline{\includegraphics[height=7cm]{../logos/logo-transparent-white.png}} \newpage \tableofcontents \newpage \section{Introduction} \subsection{Project presentation} Autonomous vehicles, and more specifically self-driving cars, have grasped the attention of many people, for good or ill. In this spirit, we, the Autonomobile team, have decided to create our first ever project, AutoPylot. The name of our team is of course full of meaning in that regard. Autonomobile is a two-word name, the first one a French word for autonomous: "Autonome", the second one a French word for car: "Automobile". These two words combined literally mean autonomous car.\\ What is AutoPylot's goal? Drive itself on a track and win races. It may, at first glance, seem very simple, but not everything is as it seems. Yet we will try to make it as easy to understand as possible, without omitting crucial information. To achieve our goal, we need to solve many other problems. Those problems can be separated into two distinct groups. \\ The first one would be the software part. Indeed, in this project we will need to learn and acquire certain skills, from teamwork to coding in different languages. With those newly acquired skills we will be able to bring machine learning to our car to make it drive itself. This leads us directly to our second part, the more tangible one: hardware. Indeed, as we progress in our work, we will need to see the results of our work in real-life conditions. This means implementing our code on a functioning car which will be able to race on a track. \\ This project will be led by a team of four young developers: Maxime Ellerbach, Mickael Bobovitch, Maxime Gay and Alexandre Girold. In this project, work will be divided equally amongst all of us; sometimes we will have to work together to meet our very tight time frame. \subsection{Team members} % Write a small paragraph about yourself, what you like, what you did in the past. Everything is valuable ! \subsubsection{Maxime Ellerbach} I am a curious and learning-hungry person, always happy to learn and collaborate with new people! Programming, robotics and tinkering have always attracted me. Writing code and then seeing the results in real life is something that I find amazing! I have had multiple projects in this field: Lego Mindstorms, a robotic arm, more recently an autonomous car and even a simulator in Unity to train even without a circuit at home! Even if I know the domain of autonomous cars quite well, there is always something new to learn. I look forward to working with this team full of hard-working people on such a fun project! \subsubsection{Mickael Bobovitch} Roses are red. Violets are blue. Unexpected “Mickael BOBOVITCH” on line 32. Hello, I am a French student with Russian parents. I lived half of my life in Moscow. I am passionate about web dev, servers, and business. 
Started programming at 13 years old. Created many projects. I like to learn everything, from AI to UI, from hardware to software. Actually, I am like OCaml: you need to know me well to appreciate me. \subsubsection{Maxime Gay} I am 18 years old, and I am crazy about investment, finance and especially cryptocurrencies and blockchain. I have already worked with a team on different investment projects and during summer jobs, but this is the first time that I am working on such a project. Furthermore, I am a beginner in computer science and autonomous cars. However, I am eager to learn new skills with this incredible team. \subsubsection{Alexandre Girold} I am already getting old. I am 19 years of age, yet I am full of resources. I am delighted to be able to learn something new. There are many things which I enjoy, from programming to geopolitics. I know this project will push me toward a better me and let me make great friends along the way. \subsection{State of the art} In this section, we will try to see what was previously made in this sector of industry. It would not be realistic to compare our 1:10 project to full-sized cars such as Tesla's, simply because in a racing environment we don't need to deal with the same amount of safety features: pedestrian detection, emergency braking, speed limit detection and so on. So we will only look at miniature autonomous racing frameworks that we would likely race against.\\ The best known is called "DonkeyCar", created by Will Roscoe and Adam Conway in early 2017. Most of the models trained with DonkeyCar are behavior cloning models, meaning models that try to replicate the behavior of a driver. This method uses a large number of images (input) associated with steering angles and throttle (output); it requires the user to drive the car (collect data) prior to training the model: no examples means no training. The lack of training data often leads to the car leaving the track.\\ Another framework worth looking at is one created by Nvidia called "JetRacer", released in 2019. It uses a different approach from DonkeyCar, where the user annotates the images by hand by clicking on where the car should go. The model used is similar to what DonkeyCar uses: a Convolutional Neural Network with one input (the image) and two outputs, one for the steering angle and one for the throttle to apply. \\ Both of those frameworks are written in Python and use packages such as TensorFlow and OpenCV; we will also use them in our project. \newpage \section{Objectives} \subsection{Final objectives} Our main objective is to make our car race against other cars and win the race! This will require multiple intermediate milestones: \begin{itemize} \item Being able to send scripted controls to the motor and servo (a minimal sketch of such a control script is shown at the end of this subsection). \item Being able to drive the car manually using a controller. \item Develop a way to gather images and annotations and store them in a structured way, for example sorted by date. \item Process those data before feeding them to the neural network. \item Being able to train a convolutional neural network using those data. \item Build a telemetry web application. \item Tweak the architecture and the parameters of the chosen model to achieve the best results. \item Test the model in real life. \item Race against others! \end{itemize} Once all of that is done, we will start our optional objectives, which will enable better racing and a better understanding of the car's environment. 
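To give a concrete idea of what the first milestone involves, here is a minimal Python sketch of a control script sending commands from the RaspberryPi to the Arduino. It is only an illustration: the serial device path, the baud rate and the message format shown here are assumptions made for the sketch, not the final protocol.

\begin{verbatim}
# Hypothetical sketch of the first milestone: sending steering and
# throttle values from the RaspberryPi to the Arduino over serial.
# The device path, baud rate and "steering;throttle" message format
# are placeholders, not the final protocol.
import time
import serial

class CarController:
    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        # Serial link between the RaspberryPi and the Arduino.
        self.link = serial.Serial(port, baudrate, timeout=0.1)

    def drive(self, steering, throttle):
        # steering and throttle are floats in [-1, 1]; the Arduino is
        # assumed to parse the line and output the matching PWM signals.
        message = "{:.3f};{:.3f}\n".format(steering, throttle)
        self.link.write(message.encode("ascii"))

if __name__ == "__main__":
    car = CarController()
    car.drive(0.0, 0.2)   # straight, 20% throttle
    time.sleep(1.0)
    car.drive(0.0, 0.0)   # stop
\end{verbatim}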
\subsection{Optional objectives} To be able to go faster and increase the reliability of our car's driving, we will need to add some features to our project. \\ One good feature would be to have a model that takes into account the speed of the car. As we all know, on a real car we don't steer the same way when going at 10 km/h and when going at 90 km/h. This input extension could bring more stability to our model. To go faster, it would also be great to be able to differentiate turns from straight lines and even braking zones before the turns. \\ One of the challenges racing implies is overtaking. Overtaking is a very complex maneuver. First, we obviously need to be faster than the car ahead; we could ensure that by detecting the successive positions of the car ahead on the image. If the car is getting closer, its Y coordinate will come closer to the bottom of our image. Also, we cannot overtake if we have no room for it, which means we have to detect a gap on the left or the right of the car before initiating the overtake. Once we have decided on what part of the track we will overtake, we still need to get ahead of the opponent's car. We could achieve that by forcing the model to drive on the left or right part of the track for a given amount of time. \newpage \subsection{Motivations} Why this project? \\ This project is something that we deeply care about. Being able to work on a self-driving car does sound like a dream to all of us. Being able to work on this project means we will be able to understand how tomorrow's cars will work. Moreover, we will learn valuable skills in Python and in neural networks. Being able to work on this project is also a way to prove ourselves and to show that with enough work anything is achievable. Our goal is not to make the new Tesla Model W, but to be a part of this constant progress in autonomous vehicles. We also want to see how far we can take this project and what we will have achieved by the end of the school year. With this idea in mind, we have set ourselves some goals, for example winning a race. This project is not a common one, but that is exactly what pushes us to make it work, no matter the cost. We are also proud to be able to represent Epita at the races. It is a great opportunity for us because we will have the chance to meet many people during the races, like some people from Renault Digital and other passionate people. \section{Technical specifications} \subsection{Hardware} On the hardware side, we already have a working car containing: \begin{itemize} \item A RaspberryPi 4. It is used for heavy computations like image processing, model inference, and more. Most of our programs will run on this device. \item A USB camera. Connected to the RaspberryPi, this camera will be our main source of data. \item An Arduino Mini. It is used for the low level: it handles commands sent by the RaspberryPi on the serial port, processes them and sends Pulse Width Modulation (PWM) signals to both the Electronic Speed Controller (ESC) and the servo motor. \item An Electronic Speed Controller (or ESC). It is used to drive the motor; it receives a PWM signal from the Arduino. \item A Servo motor. Just like the ESC, it receives a PWM signal. \item A Speed sensor. This sensor will be useful for our optional objectives that will require speed feedback. This sensor is read by the Arduino, then the data is sent to the RaspberryPi over the serial port. \end{itemize} As the car is already working on the hardware side, it will save us some precious time. 
But still, it is really important to understand how the whole car is working and how the different components are interacting. Here is a schema of the current hardware setup we will be using: \newpage \rotatebox[origin=c]{-90}{\includegraphics[width=27cm]{../docs/schema.png}} \subsection{Software} As you understood, our work will be focused more on the software side. Our project will be written mostly in Python, with a bit of JS and Arduino code. We will divide the project into 3 big parts: \begin{itemize} \item The backend. This is where most of our efforts will be focused; this part includes every key program, all the way from data gathering to model training and testing. The first part we will develop is the code responsible for the communication between the RaspberryPi and the Arduino; this code will enable us to drive the motor and servo motor by calling some simple functions. This program will also later be used to fetch the speed of the car transmitted by the Arduino to the RaspberryPi. Then, we will need to find a way to control the car manually, for example using a controller. The values of every axis and button on the controller will need to be fetched and then transmitted to the Arduino using the code previously created. We will then create functions to capture, save and load images and the annotations corresponding to each image. The data will be stored in an ordered manner using the date of capture of the images. For this part we will mostly use OpenCV and NumPy. Then we will need to create models using TensorFlow; we will first create a basic convolutional model. In order to train it, we will need to write a program to load existing data and feed the right inputs and outputs to the model. The modularity of this part is crucial as our models will have more and more inputs and outputs. To increase the accuracy and generalization of our model, we will need to provide some augmented data to our model. We will create multiple functions to add some random noise, random shadows, lights, and blur to our images. This process, known as data augmentation, is really important and will bring more robustness to our models. We will then be able to start our optional objectives such as speed control and overtaking! \item The Telemetry website. Developing an autonomous car is difficult. The more you know about what’s happening inside, the better the results will get. We will use telemetry. It will help us analyze what is happening inside the car at any moment. To collect data, a server will run outside the car on a computer. The car will communicate with the server through a Wi-Fi router. The server is divided into two parts. The first part is a web-app. It will use JavaScript (Node.js) as the main language. The UI will be created using the React.js framework. The server will handle image streaming with UDP from multiple clients (autonomous cars). The web-app will display the camera’s views in a canvas. Furthermore, clients will be sending debugging information and logs to an Elasticsearch database because we will work with JSON data. All the logs will be accessible from the web-app, and will be displayed in various forms such as time series graphs and scatterplots. The server and the database will each run inside its own Docker container. We will use docker-compose to run all containers with one command. \item The presentation website. The presentation website will be a Static Generated Site built with Next.js (a React Framework). 
The website will introduce a cool design, a presentation of the members and the progress of the project. It will be hosted on GitHub Pages as it is free, reliable and can be managed on the same organization account. \end{itemize} \newpage \subsection{Constraints} Throughout the project, we will have to keep in mind some constraints. Our biggest constraint will be the compute power. We need all of the inference of the model to be executed on the RaspberryPi, which means that we will have to be really careful about the performance of our code. How fast our main control loop is will determine the reactivity of our car. Our USB camera can capture up to 30 images per second, so our main control loop should ideally match 30 iterations per second. For example, if the car is going at 1 meter per second and our control loop is running at 10 Hz, the distance the car travels between each iteration is 10 cm. Now if you are going 5 times faster with the same control loop, you have 50 cm between each decision, which is huge for such a small car! The part that will likely take the most time will be the inference, so we will need to keep track of the ratio between performance and accuracy of our models. Regarding the accuracy of our models, we will need sufficient accuracy and generalization to be able to complete multiple laps. If the generalization is not high enough, small changes in the lighting or changes in the background will affect the predictions of our model. On the safety side, we have to keep in mind that we are driving a powerful car: a wrong motor command can send us right into a wall and break the car. Prior to deploying our code on a real track, we will need to test it carefully to avoid such a disaster. % sout : 7 / 03 & 25 / 04 & 06 / 06 \section {Planning} \subsection {Races} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Tasks & Race 1 & Race 2 & Race 3 & Race 4 & Race 5 & Race 6 \\ \hline Code controlled motors and servo & 75\% & 100\% & & & & \\ \hline Drive the car with a controller & 25\% & 100\% & & & & \\ \hline Data collection & & 50\% & 100\% & & & \\ \hline Telemetry website & & 25\% & 100\% & & & \\ \hline Data processing and augmentation & & & 50\% & 75\% & 100\% & \\ \hline Basic Convolutional neural network & & & 25\% & 50\% & 100\% & \\ \hline \begin{tabular}[c]{@{}l@{}}Advanced \\models and optional objectives\end{tabular} & & & & & & 50\% \\ \hline \end{tabular} \subsection {Presentations} \begin{tabular}{|l|c|c|c|} \hline Tasks & 1st presentation & 2nd presentation & Final presentation \\ \hline Code controlled motors and servo & 100\% & & \\ \hline Drive the car with a controller & 100\% & & \\ \hline Data collection & 75\% & 100\% & \\ \hline Telemetry website & 25\% & 100\% & \\ \hline Presentation website & 100\% & Update & Update \\ \hline Data processing and augmentation & & 75\% & 100\% \\ \hline Basic Convolutional neural network & & 50\% & 100\% \\ \hline \begin{tabular}[c]{@{}l@{}}Advanced \\models and optional objectives\end{tabular} & & & 50\% \\ \hline \end{tabular} \section {Task allocation} \begin{tabular}{|l|c|c|c|c|} \hline Tasks & Mickael B. & Maxime G. & Alexandre G. & Maxime E. 
\\ \hline Low level car control & & x & x & \\ \hline Driving with a controller & & x & x & \\ \hline Data storage and handling & x & x & x & x \\ \hline Telemetry website & x & & & x \\ \hline Presentation website & x & & & x \\ \hline Convolutional neural network & x & x & x & x \\ \hline Main control loop & & x & x & x \\ \hline \end{tabular} \section {Conclusion} All in all, our project is not an easy one: our goal is to make an autonomous car that races against other cars. To make this dream come true, we have to deal with two distinct parts: on one hand the hardware side, that is to say building a real functional car, and on the other hand the software side. \\ We will spend most of our time on the software part, which covers everything from data gathering to model training and testing, as well as developing the code responsible for driving the car; the software part also includes a networking part for the telemetry website and a presentation website. Moreover, we will have to face many constraints, such as the compute power and the code efficiency. Furthermore, the regular races provide us with immediate feedback, which is important for the project's development. \\ As you might have realized, it will take a lot of work and learning to make it all work, but we are extremely motivated to work on this project and to learn new skills. \\ To make a long story short, our goal is not to revolutionise the automobile industry, but to make something that we enjoy and that will give us a first glance into what AI and self-driving cars are all about. \end{document}
{ "alphanum_fraction": 0.7151416925, "avg_line_length": 82.7682926829, "ext": "tex", "hexsha": "fbef138bcf0f424ccaaa5a9f64d4681acd4e39f2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-07T16:54:30.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-07T16:54:30.000Z", "max_forks_repo_head_hexsha": "01daafa380c90a0e5525111021a287cce9342cf5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Autonomobile/AutoPylot", "max_forks_repo_path": "ressources/project-specifications/project-specifications.tex", "max_issues_count": 33, "max_issues_repo_head_hexsha": "01daafa380c90a0e5525111021a287cce9342cf5", "max_issues_repo_issues_event_max_datetime": "2022-03-28T22:34:47.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-14T14:15:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Autonomobile/AutoPylot", "max_issues_repo_path": "ressources/project-specifications/project-specifications.tex", "max_line_length": 1055, "max_stars_count": 4, "max_stars_repo_head_hexsha": "01daafa380c90a0e5525111021a287cce9342cf5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Autonomobile/AutoPylot", "max_stars_repo_path": "ressources/project-specifications/project-specifications.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-26T18:15:03.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-18T16:16:22.000Z", "num_tokens": 4564, "size": 20361 }
\documentclass[8pt,oneside]{extarticle} %\usepackage{subfigure} \usepackage{subcaption} \usepackage{tabularx} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{hyperref} \usepackage{adjustbox} \usepackage{listings} \usepackage{optidef} \usepackage{cleveref} \usepackage{threeparttable} \usepackage{xcolor} \usepackage{titlesec} \usepackage{enumitem} \usepackage{mathrsfs} \usepackage[driver=pdftex]{geometry} \usepackage{import} %\usepackage{titleformat{\section % {\normalfont\normalzie\bfseries}{Helo.}{1em}{} \definecolor{codegreen}{rgb}{0,0.6,0} \definecolor{codegray}{rgb}{0.5,0.5,0.5} \definecolor{codepurple}{rgb}{0.58,0,0.82} \definecolor{backcolour}{rgb}{0.95,0.95,0.92} \crefname{table}{table}{table} \setlength{\parindent}{0em} \setlength{\parskip}{0.7em} \counterwithin{table}{section} \lstdefinestyle{mystyle}{ backgroundcolor=\color{backcolour}, commentstyle=\color{codegreen}, keywordstyle=\color{magenta}, numberstyle=\tiny\color{codegray}, stringstyle=\color{codepurple}, basicstyle=\ttfamily\footnotesize, breakatwhitespace=false, breaklines=true, captionpos=b, keepspaces=true, numbers=left, numbersep=5pt, showspaces=false, showstringspaces=false, showtabs=false, tabsize=2 } \newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition} \newtheorem{proof}{Proof} \lstset{style=mystyle} %\usepackage[margin=0.5in]{geometry} \usepackage{inputenc} \newcommand{\Real}{\mathbb{R}} \newcommand{\Int}{\mathbb{Z}} \newcommand{\Nat}{\mathbb{N}} \newcommand{\Complex}{\mathbb{C}} \newcommand{\vect}[1]{\boldsymbol{#1}} %\renewcommand{\TPTminimum}{\textwidth} \renewcommand{\Re}[1]{\mathfrak{Re}\left\lbrace{#1}\right\rbrace} \renewcommand{\Im}[1]{\mathfrak{Im}\left\lbrace{#1}\right\rbrace} %\DeclareMathOperator*{\minimize}{minimize} %\DeclareMathOperator*{\maximize}{maximize} \title{{\bf MATH 3172 3.0\\ Combinatorial Optimization}\\\vspace{10pt} \large Workshop 5 \author{Jacques Nel} } \begin{document} \maketitle \thispagestyle{empty} \newpage \pagenumbering{arabic} \section*{A quick note about revision:} This is the $3^\mathrm{rd}$ revision of this assignment. \newpage \section{A-4 Cane sugar production} This problem is taken from\footnote{C. Guéret, C. Prins, M. Sevaux, \textit{Applications of optimization with Xpress-MP}. % Paris: Dash Optimization Ltd., 2007. Page 74.}. \subsection{Parameters} Let $W = \left\lbrace 1,\ldots, m\right\rbrace$ enumerate $m=11$ wagons or lots, and $S=\left\lbrace 1, \ldots, n\right\rbrace$ enumerate $n$ time slots. The refinery has $k=3$ equivalent processing lines. $n$ timeslots are required to process $m$ wagons where $n$ is given by $n=\texttt{ceil}(m/k)=4$. Each lot $w\in W$ has an associated hourly loss $\Delta_w$ and remaining lifespan $l_w$ until total loss. Furthermore, a single lot takes $D=2$ hours to process on any given production line. \subsection{Decision variable} Let $\vect{x} = \left[ x_{ws}\right] \in \left\lbrace 0, 1\right\rbrace^{m\times n}$ where for $w\in W$ and $s\in S$, $$x_{ws} = \begin{cases} 1 & \text{lot }w\text{ is processed in slot }s\\ 0 & \text{otherwise} \end{cases}. $$ \subsection{Model} We seek to minimize the loss in raw material resulting from fermentation due to delayed processing of a lot. 
The model is \begin{mini!} {\vect{x}}{f\left(\vect{x}\right)=\sum_{w\in W}\sum_{s\in S} sD\Delta_{w}x_{ws} \protect\label{eq:a4-obj}}{\label{eq:a4}}{} \addConstraint{\sum_{s\in S}x_{ws}}{=1, \forall w\in W \protect\label{eq:a4-cstr1}} \addConstraint{\sum_{w\in W}x_{ws}}{\leq k, \forall s\in S \protect\label{eq:a4-cstr2}} \addConstraint{\sum_{s\in S}sx_{ws}}{\leq l_w / D, \forall w\in W \protect\label{eq:a4-cstr3}} \end{mini!} The objective function \cref{eq:a4-obj} is the total loss in raw material resulting from delayed processing, summed over all lots and slots. All lots must be assigned to exactly one slot, as enforced by \cref{eq:a4-cstr1}. Next, \cref{eq:a4-cstr2} guarantees that at most $k=3$ lots can be processed in any one time slot. Finally, \cref{eq:a4-cstr3} ensures that a lot is processed before its total loss occurs. Observe that total loss of a lot occurs after $l_w / D$ slots. \newpage \subsection{Results} The optimal solution results in a loss of $f\left(\vect{x}^*\right) = 1620$ kg with the following time slot assignments: \begin{table}[h] \centering \caption{Optimal time slot allocations for each lot}\label{table:a4-results} \begin{tabular}{cccc} \hline \textbf{Slot 1} & \textbf{Slot 2} & \textbf{Slot 3} & \textbf{Slot 4} \\ \hline lot 3 & lot 1 & lot 10 & lot 2 \\ lot 6 & lot 5 & lot 8 & lot 4 \\ lot 7 & lot 8 & &\\ \hline \end{tabular} \medskip \emph{Note:} Column $j$ is generated with $\left\lbrace w : x_{wj} = 1, \forall w \in \mathrm{W}\right\rbrace$. \end{table} \section{A-6 Production of electricity} This problem is taken from\footnote{C. Guéret, C. Prins, M. Sevaux, \textit{Applications of optimization with Xpress-MP}. % Paris: Dash Optimization Ltd., 2007. Page 78.}. \subsection{Parameters} Let $T=\lbrace 1,\ldots, n\rbrace$ enumerate $n=7$ time periods (of varying length) and $P=\lbrace 1,\ldots, m\rbrace$ enumerate $m=4$ generator types. For a time period $t\in T$, let $l_t$ denote the length of the time period in hours and let $d_t$ denote the forecasted power demand; both are given in \cref{table:a6-periods}. \begin{table}[h] \centering \caption{Length and forecasted demand of time periods}\label{table:a6-periods} \begin{tabular}{c c c} \hline \textbf{Period} $t$ & \textbf{Length} $l_t$ & \textbf{Demand} $d_t$ \\ \hline 1 & 6 & $1.2 \times 10^4$ \\ 2 & 3 & $3.2 \times 10^4$ \\ 3 & 3 & $2.5 \times 10^4$ \\ 4 & 2 & $3.6 \times 10^4$ \\ 5 & 4 & $2.5 \times 10^4$ \\ 6 & 4 & $3.0 \times 10^4$ \\ 7 & 2 & $1.8 \times 10^4$ \\ \hline \end{tabular} \end{table} For each generator type $p\in P$, there are $a_p$ units available. Each unit has a minimum base power output $\theta_p$ (if it is running) and can scale up to a maximum output denoted $\psi_p$, but this incurs additional operating cost. \begin{table}[h] \centering \caption{Number of available units and power output capacity}\label{table:a6-generators} \begin{tabular}{cccc} \hline \textbf{Type} & \textbf{Num. available} $a_p$ & \textbf{Min. output} $\theta_p$ & \textbf{Max output} $\psi_p$ \\ \hline 1 & 10 & $7.5\times 10^2$ & $1.75\times 10^3$ \\ 2 & 4 & $1.0\times 10^3$ & $1.5\times 10^3$ \\ 3 & 8 & $1.2\times 10^3$ & $2.0\times 10^3$ \\ 4 & 3 & $1.8\times 10^3$ & $3.5\times 10^3$ \\ \hline \end{tabular} \end{table} \newpage Starting a generator unit of type $p\in P$ incurs a startup cost $\lambda_p$. Running the generator incurs a fixed cost per hour $\mu_p$. Additional scalable output, on top of the base output, incurs an hourly cost $\nu_p$ that is proportional to the additional output. These costs are given in \cref{table:a6-costs}. 
\begin{table}[h] \centering \caption{Various costs associated with generator types}\label{table:a6-costs} \begin{tabular}{cccc} \hline \textbf{Type} & \textbf{Start cost} $\lambda_p$ & \textbf{Run cost} $\mu_p$ & \textbf{Add. cost} $\nu_p$ \\ \hline 1 & 5000 & 2250 & 2.7 \\ 2 & 1600 & 1800 & 2.2 \\ 3 & 2400 & 3750 & 1.8 \\ 4 & 1200 & 4800 & 3.8 \\ \hline \end{tabular} \end{table} \subsection{Decision variables} Suppose $p\in P$ and $t\in T$. Let $0 \leq x_{pt} \in\Int$ denote the number of generators of type $p$ started in period $t$, and $0 \leq y_{pt} \in\Int$ be the number of generators of type $p$ running in period $t$. Finally, let $0 \leq z_{pt}\in\Real$ denote the additional power generated by units of type $p$ during period $t$. To simplify notation, let $\vect{x} = \left[x_{pt}\right] \in \Int^{m\times n}, \vect{y} = \left[y_{pt}\right] \in \Int^{m\times n}$, and $\vect{z} = \left[z_{pt}\right] \in \Real^{m\times n}$. \subsection{Model} \begin{mini!} {\vect{x}, \vect{y}, \vect{z}}{\sum_{t\in T}\sum_{p\in P} \lambda_{p}x_{pt} + l_t\left(\mu_p y_{pt} + \nu_p z_{pt}\right) \protect\label{eq:a6-obj}}{\label{eq:a6}}{} \addConstraint{x_{p1}}{\geq y_{p1} - y_{pn}, \forall p\in P \protect\label{eq:a6-cstr1}} \addConstraint{x_{pt}}{\geq y_{pt} - y_{p(t-1)}, \forall p\in P, 1 < t\in T \protect\label{eq:a6-cstr2}} \addConstraint{z_{pt}}{\leq \left(\psi_p - \theta_p\right)y_{pt}, \forall (p, t)\in P\times T \protect\label{eq:a6-cstr3}} \addConstraint{\sum_{p\in P} \left(\theta_p y_{pt} + z_{pt}\right)}{\geq d_t, \forall t\in T \protect\label{eq:a6-cstr4}} \addConstraint{\sum_{p\in P} \psi_p y_{pt}}{\geq 1.2 d_t, \forall t\in T \protect\label{eq:a6-cstr5}} \addConstraint{y_{pt}}{\leq a_p, \forall (p,t)\in P\times T \protect\label{eq:a6-cstr6}} \addConstraint{x_{pt}}{\geq 0, \forall (p,t)\in P\times T \protect\label{eq:a6-cstr7}} \addConstraint{y_{pt}}{\geq 0, \forall (p,t)\in P\times T \protect\label{eq:a6-cstr8}} \addConstraint{z_{pt}}{\geq 0, \forall (p,t)\in P\times T \protect\label{eq:a6-cstr9}} \end{mini!} The cost function \cref{eq:a6-obj} is simply the startup cost, running cost, and additional power cost summed over all generator types and time periods. The number of generators started in period $t=1$ is related to the number of generators running by \cref{eq:a6-cstr1}. This relationship depends on the number of generators running in the final period $t=n$. The next family of constraints, \cref{eq:a6-cstr2}, is similar to the above, but deals with the relationship for $1< t \leq n$. Additional power output $z_{pt}$ is bounded by the difference between the maximum and the base output, i.e. $\psi_p - \theta_p$, as expressed by \cref{eq:a6-cstr3}. Next, \cref{eq:a6-cstr4} ensures that the total output of all generator units meets the forecasted demand $d_t$ for all $t\in T$. Furthermore, a $20\%$ safety buffer is required at all times. \Cref{eq:a6-cstr5} ensures that, if demand were to suddenly spike, a minimum of $20\%$ of $d_t$ of additional capacity can instantly be made available by increasing the additional output $z_{pt}$ up to its maximum $\psi_p-\theta_p$. The family of constraints \cref{eq:a6-cstr6} simply places an upper bound on the number of units running in a given period equal to the given available number of units $a_p$ of each type. Finally, \cref{eq:a6-cstr7}, \cref{eq:a6-cstr8}, and \cref{eq:a6-cstr9} simply enforce the canonical non-negativity of $x_{pt}$, $y_{pt}$, and $z_{pt}$ respectively.
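For reference, the model above can be written almost line for line in code. The following sketch uses Python with the \texttt{PuLP} modelling library; this choice of library, as well as the variable names, is my own for illustration and is not prescribed by the problem.

\begin{lstlisting}[language=Python]
import pulp

T = range(7)   # time periods
P = range(4)   # generator types
length  = [6, 3, 3, 2, 4, 4, 2]
demand  = [12000, 32000, 25000, 36000, 25000, 30000, 18000]
avail   = [10, 4, 8, 3]
out_min = [750, 1000, 1200, 1800]
out_max = [1750, 1500, 2000, 3500]
c_start = [5000, 1600, 2400, 1200]
c_run   = [2250, 1800, 3750, 4800]
c_add   = [2.7, 2.2, 1.8, 3.8]

prob = pulp.LpProblem("a6_electricity", pulp.LpMinimize)
x = pulp.LpVariable.dicts("started", (P, T), lowBound=0, cat="Integer")
y = pulp.LpVariable.dicts("running", (P, T), lowBound=0, cat="Integer")
z = pulp.LpVariable.dicts("extra",   (P, T), lowBound=0)

# Objective: startup costs plus hourly running and additional-output costs.
prob += pulp.lpSum(c_start[p] * x[p][t]
                   + length[t] * (c_run[p] * y[p][t] + c_add[p] * z[p][t])
                   for p in P for t in T)

for p in P:
    for t in T:
        # Started units cover the increase in running units; period 0
        # is linked to the last period, as in the first constraint above.
        prob += x[p][t] >= y[p][t] - y[p][(t - 1) % len(T)]
        # Additional output is limited to (max - base) per running unit.
        prob += z[p][t] <= (out_max[p] - out_min[p]) * y[p][t]
        # No more units than are available.
        prob += y[p][t] <= avail[p]

for t in T:
    # Meet demand and keep a 20% spare-capacity margin.
    prob += pulp.lpSum(out_min[p] * y[p][t] + z[p][t] for p in P) >= demand[t]
    prob += pulp.lpSum(out_max[p] * y[p][t] for p in P) >= 1.2 * demand[t]

prob.solve()
\end{lstlisting}

Solving the resulting mixed-integer program with any MILP solver should produce a schedule comparable to the one reported in the next subsection.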
\subsection{Results} The optimal solution was found with a total operating cost of $f\left(\vect{x}^*, \vect{y}^*, \vect{z}^*\right) = \$1,456,810$. \begin{table}[ht] \centering \caption{Optimal power generation schedule for 4 generator types over 7 planning periods} \begin{tabular}{clrrrrrrr} \hline \textbf{Type} & & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} \\ \hline \textbf{1} & No. used & 3 & 4 & 4 & 7 & 3 & 3 & 3\\ & Tot. output & 2250 & 4600 & 3000 & 8600 & 2250 & 2600 & 2250\\ & Add. output & 0 & 1600 & 0 & 3350 & 0 & 350 & 0\\ \hline \textbf{2} & No. used & 4 & 4 & 4 & 4 & 4 & 4 & 4\\ & Tot. output & 5750 & 6000 & 4200 & 6000 & 4950 & 6000 & 5950\\ & Add. output & 1750 & 2000 & 200 & 2000 & 950 & 2000 & 1950\\ \hline \textbf{3} & No. used & 2 & 8 & 8 & 8 & 8 & 8 & 4\\ & Tot. output & 4000 & 16000 & 16000 & 16000 & 16000 & 16000 & 8000\\ & Add. output & 1600 & 6400 & 6400 & 6400 & 6400 & 6400 & 3200\\ \hline \textbf{4} & No. used & 0 & 3 & 1 & 3 & 1 & 3 & 1\\ & Tot. output & 0 & 5400 & 1800 & 5400 & 1800 & 5400 & 1800\\ & Add. output & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline \end{tabular} \medskip \emph{Note:} The above table was generated from the solution values $x_{pt}^*, y_{pt}^*$, and $z_{pt}^*$ with \texttt{a-6\_report.py}. \end{table} \section{C-2 Production of drinking glasses} This problem is taken from\footnote{C. Guéret, C. Prins, M. Sevaux, \textit{Applications of optimization with Xpress-MP}. % Paris: Dash Optimization Ltd., 2007. Page 106.}. \subsection{Parameters} Let $W =\left\lbrace 1,\ldots, n\right\rbrace$ enumerate $n=12$ week-long planning periods, and let $P=\left\lbrace 1,\ldots, m\right\rbrace$ enumerate the $m=6$ product variants. Each product $p\in P$ has a predicted demand $d_{pt}$ during week $t\in W$, as given in \cref{table:c2-demands}. \begin{table}[h] \centering \caption{Predicted weekly demand for each product variant}\label{table:c2-demands} \begin{tabular}{c cccccccccccc} \hline \textbf{Week} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} & \textbf{12} \\ \hline \textbf{V1} & 20 & 22 & 18 & 35 & 17 & 19 & 23 & 20 & 29 & 30 & 28 & 32 \\ \textbf{V2} & 17 & 19 & 23 & 20 & 11 & 10 & 12 & 34 & 21 & 23 & 30 & 12 \\ \textbf{V3} & 18 & 35 & 17 & 10 & 9 & 21 & 23 & 15 & 10 & 0 & 13 & 17 \\ \textbf{V4} & 31 & 45 & 24 & 38 & 41 & 20 & 19 & 37 & 28 & 12 & 30 & 37 \\ \textbf{V5} & 23 & 20 & 23 & 15 & 10 & 22 & 18 & 30 & 28 & 7 & 15 & 10 \\ \textbf{V6} & 22 & 18 & 20 & 19 & 18 & 35 & 0 & 28 & 12 & 30 & 21 & 23 \\ \hline \end{tabular} \end{table} Each product variant $p\in P$ has an associated basic production cost $\lambda_p$ and an inventory storage cost $\mu_p$ incurred on product held in inventory over a given period. Production requires a known amount of worker labour time $\delta_p$, machine time $\pi_p$, and production area $\gamma_p$. In every period there is a limited amount of worker time $\Delta$, available machine time $\Pi$, and production area $\Gamma$. Lastly, at the start of the planning period there exists a volume $I_p$ of item $p$ in the inventory, and it is required that there is $F_p$ of item $p$ in the inventory at the end of the planning period. All parameters are given in \cref{table:c2-params}. \begin{table}[h] \centering \caption{Given costs, production resources, and inventory of product variants} \label{table:c2-params} \begin{tabular}{cccccccc} \hline & \textbf{prod. cost} & \textbf{inv. cost} & \textbf{init. stock} & \textbf{fin. 
stock} & \textbf{labour} & \textbf{mach. time} & \textbf{area} \\ \hline \textbf{V1} & 100 & 25 & 50 & 10 & 3 & 2 & 4 \\ \textbf{V2} & 80 & 28 & 20 & 10 & 3 & 1 & 5 \\ \textbf{V3} & 110 & 25 & 0 & 10 & 3 & 4 & 5 \\ \textbf{V4} & 90 & 27 & 15 & 10 & 2 & 8 & 6 \\ \textbf{V5} & 200 & 10 & 0 & 10 & 4 & 11 & 4 \\ \textbf{V6} & 150 & 20 & 10 & 10 & 4 & 9 & 9 \\ \hline \end{tabular} \end{table} \newpage \subsection{Decision variables} For a given product $p\in P$ and week $t\in W$, let $0 \leq x_{pt} \in\Int$ denote the production volume. Also let $0\leq y_{pt}\in\Int$ denote the amount of product stored in the inventory at the end of period $t$. To simplify notation let $\vect{x} = \left[ x_{pt} \right] \in \Int^{m\times n}$ and $\vect{y} = \left[ y_{pt} \right] \in \Int^{m\times n}$. \subsection{Model} \begin{mini!} {\vect{x}, \vect{y}}{f\left(\vect{x}, \vect{y}\right)=\sum_{p\in P}\sum_{t\in W} \left(\lambda_p x_{pt} + \mu_p y_{pt}\right) \protect\label{eq:c2-obj}}{\label{eq:c2}}{} \addConstraint{y_{p1}}{= I_p + x_{p1} - d_{p1}, \forall p\in P \protect\label{eq:c2-cstr1}} \addConstraint{y_{pt}}{= y_{p(t-1)} + x_{pt} - d_{pt}, \forall p\in P, 1 < t\in W \protect\label{eq:c2-cstr2}} \addConstraint{y_{pn}}{= F_p, \forall p\in P, \protect\label{eq:c2-cstr3}} \addConstraint{\sum_{p\in P} \delta_p x_{pt}}{\leq \Delta, \forall t\in W, \protect\label{eq:c2-cstr4}} \addConstraint{\sum_{p\in P} \pi_p x_{pt}}{\leq \Pi, \forall t\in W, \protect\label{eq:c2-cstr5}} \addConstraint{\sum_{p\in P} \gamma_p x_{pt}}{\leq \Gamma, \forall t\in W, \protect\label{eq:c2-cstr6}} \addConstraint{x_{pt}}{\geq 0, \forall p\in P, t\in W, \protect\label{eq:c2-cstr7}} \addConstraint{y_{pt}}{\geq 0, \forall p\in P, t\in W \protect\label{eq:c2-cstr8}} \end{mini!} We seek to minimize total production cost. \Cref{eq:c2-obj} is a cost function which simply sums the total production and storage costs over all decision variables. \Cref{eq:c2-cstr1} and \cref{eq:c2-cstr2} state that the inventory at the end of week $t$ is equal to the previous inventory plus the production minus the demand. \Cref{eq:c2-cstr1} makes special consideration for the initial inventory $I_p$. \Cref{eq:c2-cstr3} ensures that the final inventory for product $p$ is equal to $F_p$ at the end of the planning period. \Cref{eq:c2-cstr4}, \cref{eq:c2-cstr5} and \cref{eq:c2-cstr6} ensure that the limited production factors are respected: worker time capacity $\Delta$, machine time capacity $\Pi$, and production area $\Gamma$. For example, production of product $p$ during period $t$ requires $\pi_p x_{pt}$ machine hours, the total of which shall not exceed $\Pi$ for the given period. Lastly, \cref{eq:c2-cstr7} and \cref{eq:c2-cstr8} are simply the canonical non-negativity constraints on both decision variables $x_{pt}$ and $y_{pt}$. \newpage \subsection{Results} An optimal solution $\left(\vect{x}^*, \vect{y}^*\right)$ is found with a total production cost of $f\left(\vect{x}^*, \vect{y}^*\right) = \$186,076$. \begin{table}[h] \centering \caption{Production and storage quantities for each product type} \begin{tabular}{cccccccccccccc} \hline & \textbf{Week} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} & \textbf{12} \\ \hline \textbf{1} & Prod. & 0 & 0 & 11 & 34 & 29 & 7 & 23 & 21 & 29 & 29 & 29 & 41\\ & Store & 30 & 8 & 1 & 0 & 12 & 0 & 0 & 1 & 1 & 0 & 1 & 10\\ \textbf{2} & Prod. 
& 7 & 21 & 14 & 17 & 11 & 10 & 12 & 34 & 21 & 23 & 30 & 22\\ & Store & 10 & 12 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 10\\ \textbf{3} & Prod. & 18 & 35 & 17 & 11 & 8 & 21 & 23 & 15 & 10 & 0 & 13 & 27\\ & Store & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 10\\ \textbf{4} & Prod. & 16 & 45 & 24 & 38 & 41 & 20 & 20 & 36 & 29 & 11 & 31 & 46\\ & Store & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 10\\ \textbf{5} & Prod. & 47 & 16 & 34 & 14 & 23 & 24 & 43 & 0 & 26 & 4 & 0 & 0\\ & Store & 24 & 20 & 31 & 30 & 43 & 45 & 70 & 40 & 38 & 35 & 20 & 10\\ \textbf{6} & Prod. & 14 & 17 & 20 & 18 & 18 & 35 & 1 & 27 & 12 & 49 & 28 & 7\\ & Store & 2 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 19 & 26 & 10\\ \hline \end{tabular} \medskip \emph{Note:} I chose to solve this as an integer programming model. This is the reason why my results differ slightly from the book. The above table simply states the optimal solution values for $x_{pt}^*$ and $y_{pt}^*$ for all $p\in P$ and $t\in W$. \section{D-5 Cutting sheet metal} This problem is taken from\footnote{C. Guéret, C. Prins, M. Sevaux, \textit{Applications of optimization with Xpress-MP}. % Paris: Dash Optimization Ltd., 2007. Page 134.}. \subsection{Parameters} Let $S =\left\lbrace 1, \ldots, n\right\rbrace$ enumerate $n=4$ different sizes, i.e. $\left\lbrace \texttt{36x50}, \texttt{24x36}, \texttt{20x60}, \texttt{18x30}\right\rbrace$. Also, let $P=\left\lbrace 1,\ldots, m\right\rbrace$ enumerate the $m=16$ different cutting patterns. For $s\in S$ and $p\in P$, let $c_{sp}$ denote the number of pieces of size $s$ yielded by pattern $p$. The values of $c_{sp}$ are given in \cref{table:d5-yield}. \begin{table}[h] \centering \caption{Yields of various cutting patterns}\label{table:d5-yield} \begin{tabular}{ccccccccccccccccc} \hline \textbf{Pattern} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} \\ \hline \texttt{36x50} & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \texttt{24x36} & 2 & 1 & 0 & 2 & 1 & 0 & 3 & 2 & 1 & 0 & 5 & 4 & 3 & 2 & 1 & 0 \\ \texttt{20x60} & 0 & 0 & 0 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \texttt{18x30} & 0 & 1 & 3 & 0 & 1 & 3 & 0 & 2 & 3 & 5 & 0 & 1 & 3 & 5 & 6 & 8 \\ \hline \end{tabular} \end{table} Each cut size $s\in S$ has a given demand $\vect{d} =\left( d_s : s \in S\right)^T = \left( 8, 13, 5, 15\right)^T$. Finally, each pattern has an equivalent cost $\kappa = 1$, or simply the cost of each sheet of raw material. \subsection{Decision variable} Let $0 \leq x_{p} \in \Int$ denote the number of times pattern $p$ is used. To simplify notation let $\vect{x} = \left( x_p : p \in P\right)$. \subsection{Model} \begin{mini!} {\vect{x}}{f(\vect{x}) = \sum_{p\in P} x_p\kappa \protect\label{eq:d5-obj}}{\label{eq:d5}}{} \addConstraint{\sum_{p\in P} c_{sp} x_p}{\geq d_s, \forall s\in S \protect\label{eq:d5-cstr1}} \addConstraint{x_p}{\geq 0, \forall p\in P \protect\label{eq:d5-cstr2}} \end{mini!} We seek to minimize the total cost, which is given by \cref{eq:d5-obj}. This is simply the total number of sheets of raw material used. \Cref{eq:d5-cstr1} is the family of demand constraints, which guarantee that demand is met for each size $s\in S$. Finally, \cref{eq:d5-cstr2} is simply the canonical non-negativity constraint on the decision variable $x_p$. 
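As with the previous model, an implementation only takes a few lines. The sketch below again assumes Python with the \texttt{PuLP} library (my own choice for illustration, not part of the problem statement) and takes the yield matrix and demands directly from \cref{table:d5-yield}.

\begin{lstlisting}[language=Python]
import pulp

# Rows of the yield table: pieces of each size produced by patterns 1..16.
yields = [
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 36x50
    [2, 1, 0, 2, 1, 0, 3, 2, 1, 0, 5, 4, 3, 2, 1, 0],  # 24x36
    [0, 0, 0, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # 20x60
    [0, 1, 3, 0, 1, 3, 0, 2, 3, 5, 0, 1, 3, 5, 6, 8],  # 18x30
]
demand = [8, 13, 5, 15]

prob = pulp.LpProblem("d5_cutting", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{p}", lowBound=0, cat="Integer") for p in range(16)]
prob += pulp.lpSum(x)  # every pattern used consumes one sheet
for s in range(4):
    prob += pulp.lpSum(yields[s][p] * x[p] for p in range(16)) >= demand[s]
prob.solve()
print({p + 1: int(x[p].value()) for p in range(16) if x[p].value() > 0})
\end{lstlisting}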
\subsection{Results} An optimal solution is found which uses 11 sheets of raw material to satisfy demand, with a cost function value of $f\left(\vect{x}^*\right)= 11$. The following quantities of each pattern are used: \texttt{pattern 1} = 3, \texttt{pattern 3} = 5, \texttt{pattern 4} = 2, \texttt{pattern 7} = 1, and all others are unused. These values are simply the optimal non-zero values of the decision variable $x_p^*, \forall p\in\mathrm{P}$. \section{F-1 Flight connections at a hub} This problem is taken from\footnote{C. Guéret, C. Prins, M. Sevaux, \textit{Applications of optimization with Xpress-MP}.
% Paris: Dash Optimization Ltd., 2007.
Page 157.}. \subsection{Parameters} Let $P = \left\lbrace 1, \ldots, n\right\rbrace$ enumerate both the $n=6$ incoming flights and the $n$ outgoing flights. For $i\in P$ and $j\in P$, let $\mu_{ij}$ denote the number of passengers arriving on flight $i$ seeking to continue on to destination $j$, given by \cref{table:f1-passengers}. \begin{table}[h] \center \caption{Arriving passengers and destinations}\label{table:f1-passengers} \begin{tabular}{c|cccccc} \hline \textbf{City} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\ \textbf{1} & 35 & 12 & 16 & 38 & 5 & 2 \\ \textbf{2} & 25 & 8 & 9 & 24 & 6 & 8 \\ \textbf{3} & 12 & 8 & 11 & 27 & 3 & 2 \\ \textbf{4} & 38 & 15 & 14 & 30 & 2 & 9 \\ \textbf{5} & - & 9 & 8 & 25 & 10 & 5 \\ \textbf{6} & - & - & - & 14 & 6 & 7 \\ \hline \end{tabular} \end{table} \subsection{Decision variable} Let $x_{ij} \in\lbrace 0, 1\rbrace$ indicate that the aircraft from origin $i\in P$ travels to destination $j\in P$ for its next flight when $x_{ij} = 1$. To simplify notation let $\vect{x} = \left[x_{ij}\right] \in \left\lbrace 0, 1\right\rbrace^{n\times n}$. \subsection{Model} We seek to minimize the number of passengers required to disembark and transfer to another plane for their next flight. In other words, we wish to maximize the number of passengers staying on their arriving aircraft, which is why the objective below is a maximization. \begin{maxi!} {\vect{x}}{f(\vect{x}) = \sum_{i\in P} \sum_{j\in P} \mu_{ij}x_{ij} \protect\label{eq:f1-obj}}{\label{eq:f1}}{} \addConstraint{\sum_{j\in P} x_{ij}}{= 1, \forall i\in P \protect\label{eq:f1-cstr1}} \addConstraint{\sum_{i\in P} x_{ij}}{= 1, \forall j\in P. \protect\label{eq:f1-cstr2}} \end{maxi!} \Cref{eq:f1-cstr1} and \cref{eq:f1-cstr2} simply state that each arriving aircraft is assigned exactly one destination and that each destination is served by exactly one aircraft. \subsection{Results} The optimal solution has a total of $f\left(\vect{x}^*\right)= 112$ passengers remaining on their arrival flights for the remainder of their journeys. The following assignment of aircraft to destinations minimizes passenger inconvenience: \medskip

Bordeaux $\rightarrow$ London 38 \\
Clermont-Ferrand $\rightarrow$ Bern 8 \\
Marseille $\rightarrow$ Brussels 11 \\
Nantes $\rightarrow$ Berlin 38 \\
Nice $\rightarrow$ Rome 10 \\
Toulouse $\rightarrow$ Vienna 7

\medskip \emph{Note:} The above mapping is simply constructed by the permutation matrix given by $\vect{x}^*$. Refer to \texttt{f-1\_report.py} for implementation details. \end{document}
{ "alphanum_fraction": 0.6417568274, "avg_line_length": 41.716442953, "ext": "tex", "hexsha": "2cbd735adcfe813eb197be875d8a8b332e3f7ed9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jmnel/combinatorial-optimization", "max_forks_repo_path": "workshop5/latex-rev3/root.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jmnel/combinatorial-optimization", "max_issues_repo_path": "workshop5/latex-rev3/root.tex", "max_line_length": 217, "max_stars_count": null, "max_stars_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jmnel/combinatorial-optimization", "max_stars_repo_path": "workshop5/latex-rev3/root.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9203, "size": 24863 }
%to force start on odd page
\newpage \thispagestyle{empty} \mbox{} \section{Quantum Chemistry} \lettrine[lines=4]{\color{BrickRed}B}efore the reader goes further in reading this chapter of the book, we want to recall that the site deals mainly with Applied Mathematics and theoretical physics. Thus, we will address in this section only theoretical chemistry (theoretical quantum chemistry, theoretical thermochemistry, theoretical kinetic chemistry, etc.). This choice follows the changes of chemistry since the 1980s: from a largely descriptive science, it tends to become deductive. That is to say that, in addition to experiment, calculation methods are constantly growing, particularly since the development of modern computing, which greatly helps chemists with numerical modeling. Theoretical chemistry, also named "\NewTerm{physical chemistry}\index{physical chemistry}" (the application of methods from physics to chemistry), is too often seen as a discipline in itself. In fact, under this term any modern chemistry field is included. Thus, the investigation of any problem in advanced chemistry requires the assistance of theoretical chemistry (and this is lucky...) and the chemist must have a thorough knowledge of it. At the level of chemistry teaching as a secondary branch, the role of physical chemistry is already evident: the result is an increase in the level of the students, an increase in abstraction and therefore a risk of alienating the average student. Finally, the purpose is not to burden the knowledge by incorporating more new elements, but to convert the mode of approach of this discipline by substituting for the most often encyclopedic knowledge statements rational developments based on only a few assumptions and hypotheses, which permit us to deduce, thanks to mathematics, many properties as corollaries. A good understanding of physical chemistry requires, in our point of view, being familiar with quantum physics (\SeeChapter{see chapter Atomistic}) in order to have at least one approach to what an atom is and its different electron orbitals before talking about bonds, the different filling methods of electron orbitals, redox, the filling of layers, and others... In this sense, we will begin by studying the particular case of the hydrogen atom, which is crucial for the whole that will follow (study of polyelectronic atoms). It is therefore necessary for the reader to browse the next lines with all possible attention and to understand as best as possible the subtleties! \subsection{Infinite three-dimensional rectangular potential} We studied in detail in the section of Corpuscular Quantum Physics the Bohr-Sommerfeld hydrogen atom using the results proved in the section of Special Relativity. This model resulted in a simplistic quantification (but not too wrong, as will be discussed later below) of certain properties of matter. In the section of Wave Quantum Physics, we also studied in detail the rectilinear infinite potential wall and the harmonic oscillator without giving many more examples. Now we will move towards resolving problems closer to those useful in chemistry, with the objective of studying the hydrogen-like atom.
We will now consider a particle moving freely in the three-dimensional box below: \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/box_quantum_chemistry.jpg} \end{center} \caption[]{Three dimensional imaginary box in which the particle moves} \end{figure} The potential energy of the system is given by: As in the one-dimensional case (\SeeChapter{see section Wave Quantum Physics}), the walls of infinite potential prevent the particle from leaving the box, and the wave function is nonzero only for a position vector $\vec{r}$ lying inside the box. It necessarily vanishes when one of the walls is touched. The Schrödinger equation we have to solve is (\SeeChapter{see section Wave Quantum Physics}): and the boundary conditions are: Note that the Hamiltonian can be written as the sum of the Hamiltonians along each axis (we speak of the Hamiltonian operators of course!). So we have: where these are relations whose origin we have proved in detail in the section of Wave Quantum Physics of this book. Such a form is named a "\NewTerm{separable form}\index{chemical separable form}": the Hamiltonian is the sum of individual operators $H_i$ each depending only on one variable or degree of freedom $q_i$. This form reflects the independent nature of the movements described by the variables $q_i$. Remember that the joint probability of two independent events is the product of the individual probabilities of the two events taken separately (\SeeChapter{see section Probabilities}). We therefore expect that the presence probability density in space (\SeeChapter{see section Wave Quantum Physics}) for a multidimensional configuration is, if the Hamiltonian is of separable form, a simple product of individual probability densities. In fact, the separable form of the Hamiltonian permits the separation of variables on the wave function itself. Let us write the solutions of the Schrödinger equation in the form: of a product of three factors, each depending on only one coordinate. Substituting this notation in the Schrödinger equation, we get without technical developments (elementary algebra): or, by dividing both sides of this equation by $\xi(x)\vartheta(y)\zeta(z)$: which is much more aesthetic and easier to remember. This equation requires that the sum of the three terms in the left-hand side be equal to a constant in the context of a conservative system (which is what most often interests chemists)! Each of these three terms depends on one and only one variable; for their sum to be equal to a constant, it is necessary that each term be itself constant! In fact, by taking the derivative of both sides of the above equation with respect to $x$, for example, we have: meaning that the term depending on $x$ must be a constant, which we will denote $E_x$ (as this term expresses an energy). We then have (surprise...): Similarly, we get: Note that each of the separate equations that we have just obtained, for the movement of the particle in the three spatial directions, is a Schrödinger equation in a one-dimensional box. Thus, the three relations previously obtained independently describe each movement in the respective $x, y, z$ directions, limited to the respective ranges: and must be respectively solved with the boundary conditions: The results obtained in the section of Wave Quantum Physics when solving the Schrödinger equation in the case of rectilinear wells give us directly: In summary, the stationary states of the particle in the three-dimensional box are specified by three positive integer quantum numbers $\lambda, \mu, \nu$.
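For a cubic box of edge $L$ the energies reduce to $E_{\lambda\mu\nu}=\frac{\hbar^2\pi^2}{2mL^2}\left(\lambda^2+\mu^2+\nu^2\right)$ (the standard textbook result). The short Python sketch below, which is only an illustration of mine and not part of the original development, enumerates the lowest levels in these reduced units and shows how several triplets $(\lambda,\mu,\nu)$ can share the same energy, i.e. be degenerate:

\begin{verbatim}
# Lowest levels of a particle in a cubic box, in units of
# hbar^2*pi^2/(2*m*L^2); purely illustrative.
from itertools import product

levels = {}
for lam, mu, nu in product(range(1, 6), repeat=3):
    levels.setdefault(lam**2 + mu**2 + nu**2, []).append((lam, mu, nu))

for e in sorted(levels)[:6]:
    print(e, levels[e])   # reduced energy and its degenerate triplets
\end{verbatim}

The triple degeneracy of the second level, for instance, is a direct consequence of the symmetry of the cubic box.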
The wave function is finally: and its respective energies (eigenvalues): The variable separation technique detailed above, is applicable only because the Hamiltonian is in separable form. It comes automatically therefore the three-dimensional probability density $\vert \Psi(x,y,z) \vert^2$ is the product of probability density $\vert \xi_\lambda(x)\vert^2,\vert \vartheta_\mu(y)\vert^2,\vert \zeta_\nu(z)\vert^2$, as we had anticipated it. We also note that the energy of movement in three dimensional space is the sum of energy movements in all three spatial directions: the independence of these three directions or degrees of freedom, implies the additivity of their energy. \subsection{Molecular Vibrations} We studied in the Wave Quantum Physics section the harmonic oscillator. Now it in is chemistry that we will use all the power of the results obtained during the study of this system. The harmonic oscillator is a model of molecular vibrations, and is represented by a type of parabolic potential as: for a diatomic molecule. But we have proved in the section of Nuclear Physics that $c^{te}=m\omega_0^2$ so that we finally have for a diatomic molecule: For a polyatomic molecule, we have verbatim (by the additivity of energy): Quantities $\omega_0$ and $\omega_i$ are the vibration frequencies (or rather more correctly: the pulsation) of a molecule, diatomic in the first case and polyatomic in the second case. In the first equation, the variable $x$ represents the elongation of the bond between the two atoms $A$ and $B$ (as with a spring) in a diatomic molecule, that is to say $x=R-R_{eq}$, where $R$ is the instantaneous length of this bond, and $R_{eq}$ is its equilibrium value. In the case of a polyatomic molecule, the potential describing molecular vibrations takes a separable form in terms of summation above only if one considers special variables $q_i$ denoting collective motions of nuclei, and which are named "\NewTerm{normal vibration modes}\index{normal vibration modes}". We also saw in the section of Wave Quantum Physics that the Hamiltonian of a diatomic molecule (problem of the harmonic oscillator) can be written as For a polyatomic molecule that relationship becomes logically: The Hamiltonian above is clearly a type of separable form: it is a sum of one-dimensional Hamiltonians, each depending only on a single mode $q_i$ as variable, describing this mode as a unique spring or harmonic unit mass ($m=1$) oscillator and of pulse oscillation $\omega_i$. Therefore, a separation of variables $q_i$ is possible, reducing the Schrödinger time independant into a number of equations of the same type as that of a one-dimensional harmonic oscillator. So we need just o know the expression of the wave function for a one-dimensional harmonic oscillator, what we already have done in the section of Wave Quantum Physics where we got: and: The figure below shows the graph of the first wave functions of the above relation as well as that of their respective presence probability densities. We can see the same modal structures as those specific to functions of a particle in a one-dimensional box: \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/one_dimensionnal_oscillator.jpg} \end{center} \caption{Wave functions and probability density of a one-dimensional harmonic oscillator} \end{figure} Above the first energy levels of a one-dimensional oscillator with \texttt{\textbf{(a)}} their associated eigenfunction, \texttt{\textbf{(b)}} the associated probability distribution of presence. 
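As a small numerical illustration of these eigenfunctions (a sketch of mine, written in natural units $\hbar=m=\omega_0=1$ so that $E_n=n+\tfrac{1}{2}$), one can evaluate them from the Hermite polynomials and check their normalization:

\begin{verbatim}
# 1D harmonic-oscillator eigenfunctions from Hermite polynomials
# (illustrative, natural units hbar = m = omega = 1).
import numpy as np
from scipy.special import hermite
from math import factorial, pi

def psi(n, x):
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * hermite(n)(x) * np.exp(-x**2 / 2.0)

x = np.linspace(-10.0, 10.0, 20001)
for n in range(4):
    print(n, np.trapz(psi(n, x)**2, x))   # ~1.0: each state is normalized
\end{verbatim}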
In the limit of very large values of $n$, the probability distribution approximates more and more of that predicted by classical mechanics, the oscillator lies for the most of the time in the vicinity of the turning points defined by the intersection of potential $E_{\text{pot}}$ with the level of $n$. This trend is illustrated below: \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/one_dimensionnal_oscillator_limit.jpg} \end{center} \caption{Probability density function of a one-dimensional harmonic oscillator for large $n$} \end{figure} For a polyatomic molecule the expression of quantified energy therefore becomes: and eigenfunctions/eigenstates become: with: The last two relations are very important because they allow among others to: \begin{itemize} \item Predict the spectrum of the molecule (spectroscopy) \item To study the energy bands (where does the bands of valence and conduction comes from) \item To locate the bonds between atoms and thus the chemical properties \end{itemize} \subsection{Hydrogenoid Atom} We consider here the quantification of a generic system made of two bodies (particles) interacting with each other and moving in a three-dimensional space. We will be prove at first that, even if the separation of dynamic variables describing individually each of the two bodies is impossible, for cons, the overall movement system (the center of mass) and internal movement, also said "relative motion", are separable. In addition, if the potential is centrosymmetric, the internal movement may also be decomposed into a rotational movement and radial movement. The quantification of the rotational movement is intimately connected to that of angular momentum. Here we focus on the mechanics of an atomic system having only one electron. This is a two-particle system: a nucleus of mass $M$ and charge $+Zq_e$, and an electron of mass $m_e$ and of charge $-q_e$. The atomic system is described by the following Hamiltonian Remember that in the section of Wave Quantum Physics we had proved during our study of functional operators: and remember also that $\vec{r}_e$ and $\vec{R}_n$ are respectively the position vectors of the electron and nucleus in the prior-previous relation. The potential energy being given by (\SeeChapter{see section Electrostatic}): The movements of the two particles are correlated because the two charges interact through their mutual electrical field. We can not make a separation between variables $\vec{r}_e$ and $\vec{R}_N$. By cons, a separation of variables is possible with the coordinate of the center of mass (see the definition of the center of mass in the section of Classical Mechanics): and the relative coordinate of the electron relative to the nucleus: We obtain therefore: and: The Hamiltonian in the center of mass repository will therefore be written: where $M_{tot}=m_e+M$ is the total mass of the system and: is its reduced mass. 
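Numerically, this reduced mass $\mu = m_e M/(m_e+M)$ stays extremely close to the electron mass, since the nucleus is so much heavier; a two-line check for the electron-proton case (illustrative only, with the constants taken from SciPy):

\begin{verbatim}
# Reduced mass of the electron-proton system (illustrative check).
from scipy.constants import m_e, m_p

mu = m_e * m_p / (m_e + m_p)
print(mu, mu / m_e)   # ~9.10e-31 kg, ratio ~0.9995
\end{verbatim}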
We clearly see that the Hamiltonian $H$ is this time set in a separable form and we can write it as follows: with: In terms of the coordinates $\vec{R}_{\text{CM}}$ and $\vec{r}_{\text{rel}}$, the function describing a stationary state of the two-body system is a product of individual wave functions (recall that the joint probability of two events is the product of probabilities), one for the movement of the center of mass and the other for the relative movement: and the energy of this state is the sum of the respective energies of movements: with: \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] This approach of separating the wave function into the composition of a wave function of the center of mass and the relative movement is also used in the context of the study of poly-electronic atoms, but with one difference: as the nucleus is much more massive than the processing electrons (in approximation ...), the center of mass is assimilated to the nucleus of the atom and the relative motion to the entire electron cloud. This approximate approach is well known under the designation "\NewTerm{Born-Oppenheimer approximation}\index{Born-Oppenheimer approximation}". \end{tcolorbox} Where the Hamiltonian appearing in the first of these relations has been defined above as being: This movement is that of a particle of mass $M_{tot}$ in a three-dimensional box of infinite volume. Eigenvalues and eigenfunctions for this movement has already been obtained in our previous study, we will restrict ourselves to the study of separate equation for the relative movement, or internal movement. As no confusion will be possible between different Hamiltonians, we let down, to simplify the notations, the "rel" word in subscript. With $H_{\text{rel}}$ given by the relation that we have proved previously: and the relation (as proved above): then we obtain the Schrödinger equation for the relative motion: or written differently: Note that in the case where the potential energy $E_{\text{pot}}$ is a centrosymmetric source, that is to say it depends only on the length of the position vector $\vec{r}$, and not its orientation, the previous equation, written in Cartesian coordinates, is inseparable. Indeed, in Cartesian coordinates, the length $\vec{r}$ is given by: and the potential energy can not be separated into three components, each depending only one of the three variables $x, y, z$. The Hamiltonian is therefore still not a separable form and so we did not meet our target. However, the above equation is separable at the moment we make a change of coordinates to spherical coordinates. Indeed, in this coordinate system, the potential depends on only on one of the three spherical variables: the radius $r$. It is independent of the two angles $\theta$ and $\phi$. Referring to the result obtained in the study of the Laplace expressions in different coordinate systems, in the section of Vector Calculus, we got for the Laplacian of a scalar field in spherical coordinates the following expression: The hamiltonien: then becomes (simple distribution and new way to note): where: is the kinetic energy operator for the radial movement of the electron relative to the nucleus, and $L^2$ is the squared "associated" operator of the angular momentum vector: The term: is therefore an energy associated with the angular momentum $J$ (\SeeChapter{see section Classical Mechanics}). To understand the nature of this operator $L^2$ a detour by the notion of rigid rotor will help. 
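For reference, the spherical-coordinate expressions used at this step are the standard ones (quoted here only as a reminder, in the convention where $\theta$ is the polar angle and $\phi$ the azimuthal angle):
\begin{gather*}
\nabla^2=\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\\
L^2=-\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right]
\end{gather*}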
\subsection{Rigid Rotator} If we now consider the case of a system named "\NewTerm{rigid rotor}\index{rigid rotor}" where we neglect ("restrict" would be a more appropriate term ...) the degrees of freedom of oscillation (this is the model system for linear diatomic or polyatomic molecules), the only coordinates in play are the angles $\theta$ and $\phi$ which fix the orientation of the rotator. Thus, in this case $r$ is fixed and we have: and in view of the constraints on the potential, it is normally quite easy to understand why the rotator is said to be "rigid". In the above case, the Hamiltonian is reduced to: For the rest, we associate the operator $L^2$ with an angular momentum, for the simple reason that it has the units... Indeed, let us recall that we have proved in the section of Wave Quantum Physics that when the spin is zero (so, as part of our study of the hydrogenoid atom here, the spin will not be taken into account in the first instance) and we are dealing with a single particle, then the angular momentum (which we will denote by $L$ instead of $b$) is given by: where the components of the vector $\vec{l}$ are also natural numbers. By this analogy, we can then write the Schrödinger equation in the form: Let us recall that we got in the section of Wave Quantum Physics that: by the vector product. We now go from rectangular coordinates $x, y, z$ to spherical coordinates $r,\theta,\phi$. Remember for this (\SeeChapter{see section Vector Calculus}) that: and that: Now let us express the total differentials: These relations can be written as an orthogonal transformation of the total differentials $\mathrm{d}r,r\mathrm{d}\theta,r\sin(\theta)\mathrm{d}\phi$ by: or by the inverse transformation (if required ... it is enough to check that the two transformation matrices multiplied together give the identity matrix): It follows from this, for example: and finally (the method for the second and third lines is the same as for the first!): Thus, taking into account these relationships, we obtain for example, in the case of the operator: the following developments: which gives the following result: By doing the same with: by doing the same developments: we have the following result: And finally, with: by doing the same developments: we get the following result: Finally, we have only little freedom for the movement of our rigid rotor (as it is very rigid ...) and we can write for the Schrödinger equation: where $H_\text{rot}$ is, for recall, seen as a linear functional operator, and the total energy $E$ as its corresponding eigenvalue. Therefore, we can write that the angular momentum operator is given by (we change the notation so as not to confuse, subsequently, the operator and its eigenvalue, according to the comments we made during the statement of the postulates of Wave Quantum Physics in the corresponding section): Thus, the eigenfunctions $\Phi(\phi)$ of $\hat{L}_z$ are solutions of the eigenvalue and eigenfunction equation: that is to say the differential equation: where $L_z$ is obviously the eigenvalue of $\hat{L}_z$. A simple solution to this differential equation would be: with, as uniformity condition, based on the properties of complex numbers (\SeeChapter{see section Numbers}): This mathematical condition imposes the following obvious and remarkable quantification: where (recall) $m_l$ is the magnetic quantum number.
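Putting these last steps together, the normalized azimuthal eigenfunctions and the corresponding quantization read (a standard result, quoted here for completeness):
\begin{gather*}
\Phi_{m_l}(\phi)=\frac{1}{\sqrt{2\pi}}\,e^{im_l\phi},\qquad m_l\in\mathbb{Z},\\
\hat{L}_z\,\Phi_{m_l}=m_l\hbar\,\Phi_{m_l}
\end{gather*}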
Knowing that (\SeeChapter{see section Corpuscular Quantum Physics}): We can write: Therefore, we fall back on the result(s) that we got in the section of Corpuscular Quantum Physics and Wave Quantum Physics: Which is quite satisfactory, even remarkable and enjoyable (not to say more ...). Thus, the measurement of a component of the angular momentum gives an integer multiple of $\hbar$, which appears as a natural unit of angular momentum. The common eigenfunctions (!!!) of the operators $\hat{L}^2$ and $\hat{L}_z$ are, in a more general framework, necessarily of the form (method of separation of variables): As the rotator is rigid, we have $R(r)=c^{te}$. This factor will eliminate itself in the eigenvalue and eigenfunction equation that we will determine further below. So we do not need to take it into account if we want. Finally, we can write thanks to the previous developments: Which brings us to the eigenvalue and eigenfunction equation: That is to say: Therefore: By putting: and therefore: we get a "Fuchs"-like differential equation given by: Therefore finally: whose coefficients have poles (singularities) at $\xi=\pm 1$. But, let us recall that we have: So that we often find the previous differential equation in the following form in the books, after elementary algebraic factorization of some terms: A nontrivial solution, known from the theory of Fuchs-type differential equations, is what it is customary to name the "\NewTerm{associated Legendre polynomials}\index{associated Legendre polynomials}" (although these are not strictly speaking polynomials ....) because they partly contain the Legendre polynomials (\SeeChapter{see section Calculus}): which you can check by injecting this solution in the prior-previous differential equation. ...Following the request of a reader, here is an example of verification before continuing: The case $m_l=l=0$ is immediate. Then let us consider the case where $m_l=l=1$: Thus: And we inject in it the associated Legendre polynomial: Therefore: Which gives after a small simplification: By differentiating: Let us focus on the left part to see what it is equal to by putting everything over a common denominator: By simplifying the numerator, it should be zero. Let us see this by simplifying a first time: by distributing: Which is indeed equal to zero!!! So finally, we have common eigenfunctions (because remember that the Legendre polynomials are orthogonal to each other) that will be: \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] It is not necessary to make complicated calculations to obtain the normalization factor of the exponential since, in the context of an integration over all space, the three factors of $Y_{m_l,l}(\theta,\phi)$ are independent of each other. Thus the integral is the product of the integrals (\SeeChapter{see section of Differential and Integral Calculus}). \end{tcolorbox} Finally, we must find $N_{m_l,l}$ such that: and we will see (what we will prove just below) that: In summary, we write (we should rather write "we will write"...) where we have omitted the factor $(-1)^l$ since, in any case, in the modulus of this function this term multiplies itself and then gives $(-1)^{2l}=1$. Let us now check the previous boxed relation (warning: this is a bit long and it is advisable to read it several times): We consider the functions defined by: where: with: The aim will therefore be to prove first that these functions are orthogonal and then to find the constants $N_{m_l,l}$ such that $||Y_{m_l,l}||=1$. In short we will have to roll up the sleeves... of our brain...
First, let us prove for future needs that: \begin{dem} If and only if $l=m_l=0$ the equality is obvious. Let us suppose that $l\geq 1$ (thus the general case outside the obvious previous case) and given $P$ a real polynomial of degree $\leq l-1$. Let us put: Let us prove (functional dot product): in $\mathcal{C}(x\in[-1,1],\mathbb{R})$. Indeed, let us recall that we made the change of variable: Integrating by parts, we get: let us notice that for any $0\leq j\leq m_l-1$, $\dfrac{\mathrm{d}^j}{\mathrm{d}x^j}Z(x)$ is equal to zero in $x=\pm 1$ that is to say: Therefore (by extension), the above relation simplifies to: After $m_l$ integration by parts equation, we get: If $\deg(P)<m_l$ then the above expression shows that trivially: If $\deg(P)\geq m_l$ then by putting: We get: let us notice once again that $\dfrac{\mathrm{d}^j}{\mathrm{d}x^j}h(x)$ vanishes in $x=\pm 1$ for any $j\leq m_l-1$, that is say: By integrating by parts $m_l$the previous expression, we find: but $h$ is an polynomial of degree $m_l+\text{deg}(P)$. Indeed, the first factor is of degree $2m$ and the $m_l$th derivative of $P(x)$" is of degree $P-m_l$, therefore: So $\dfrac{\mathrm{d}^{m_l}}{\mathrm{d}x^{m_l}}h(x)$ is a polynomial of degree $\deg(P)\leq l-1$ and knowing that $\dfrac{\mathrm{d}^l}{\mathrm{d}x^l}(1-x^2)^l$ is to a given constant equal to the $l$-th Legendre polynomial (\SeeChapter{see section Calculus}) we then have: So we have just proved that $\dfrac{\mathrm{d}m_l}{\mathrm{d}x^{m}_l}$ is orthogonal to any polynomial of degree $\leq l-1$. \begin{flushright} $\square$ Q.E.D. \end{flushright} \end{dem} $\dfrac{\mathrm{d}^{m_l}}{\mathrm{d}x^{m_l}}Z$ is a polynomial of degree $l$ (its just enough to check for some values) so therefore let us search if there is a constant $c^{te}\in\mathbb{R}$ such that: with for recall: We can determine the constant $c^{te}$ by comparing the dominant coefficients of the polynomials: The dominant coefficient of $\dfrac{\mathrm{d}^{m_l}}{\mathrm{d}x^{m_l}}Z$ is: and the dominant coefficient of $\dfrac{\mathrm{d}^l}{\mathrm{d}x^l}(1-x^2)^l$ is: Therefore: That is to say: So we would have for $l\geq m_l\geq 0$ (we integrate parts we integrate as many times as necessary to the left and right - necessarily - to achieve this result): Now let us establish a remarkable relation that should perhaps exist between $P_{-m_l,l}$ and $P_{m_l,l}$ (and which will be useful to us later). Let us assume for this $0\leq m_l\leq l$ and remember that at the base: So that brings us to write (nothing special): By the previous results ($(-1)^{m_l}=(-1)^{-m_l}$): this leads us to write: Therefore we get: First, let us prove that: where $P_l$ is the $n$-th Legendre polynomial (hence the origin of the name of "associated Legendre polynomial" ...). \begin{dem} First, we have proved that the Legendre polynomials satisfy the following recurrence relation (\SeeChapter{see section Calculus}): for $n\geq 1$. Multiplying the above equation by $x^{n-1}$ and integrating, we get: But: Let us recall that the $P_{n+1}$ polynomials form an orthogonal basis of which the polynomials that generate it are of increasing degree from $0$ to $n$, so a lower order polynomial - expressed in a sub vector space - will always be perpendicular to the vectors (polynomials) generating the higher dimensions. 
So if we take the example of $\mathbb{R}^3$ generated by the basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$, then a vector $\vec{v}$ expressed by the linear combination of $(\vec{e}_1,\vec{e}_2)$ will always be perpendicular to $\vec{e}_3$ and therefore a zero scalar product with it. And therefore it follows: Let us put: The previous expression becomes (remember $P_0(x)=1$): Thus by induction: Furthermore as: We then for for the prior-previous relation the denominator which can obviously be rewritten: Then we have: So in the end we can simplify the denominator as follows: and: So we have well proved that (just in case ... you would not follow anymore the initial target ...) that: \begin{flushright} $\square$ Q.E.D. \end{flushright} \end{dem} Let us attack us finally to what interests us. That is to say, prove that: \begin{dem} If $m_l\neq j$ then: where: \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] Let us recall that the Jacobian in spherical coordinates is $r^2\sin(\theta)$ (\SeeChapter{see section of Differential and Integral Calculus}) and as the integrated function above is not dependent on $r$, we have take out the term $r^2\mathrm{d}r$ of this integral (by cons we will meet again the same term in the function $R(r)$ present in the Schrödinger equation). \end{tcolorbox} And with: If $l>k$ and $m_l=j$ then the dot product of: is simplified to: By doing the change of variable $x=\cos(\theta)$ we get: Let us suppose that $m_l\geq 0$: where $P_l(x)$ is the $n$-th Legendre polynomial. Thus the expression of the dot product becomes: If we put: then the relation becomes: Integrating by parts $m$ times the expression above we get: But $\dfrac{\mathrm{d}^{m_l}}{\mathrm{d}x^{m_l}}h(x)$ is a polynomial of degree $k$. Knowing that $l>k$, the latter integral is zero for the same reasons as those mentioned above. Therefore: If $m_l<$ then we have proved that: and therefore: as $-m_l\geq 0$. It only remains to us to treat the case $m_l=j,l=k$. Let us suppose again that $m_l\geq 0$. So as before we have: and: Let us put: The relation then becomes: By integrating $m$ times by parts, we find: $\dfrac{\mathrm{d} ^{m_l}}{\mathrm{d} x^{m_l}}h(x)$ is a polynomial of degree $l$ which dominant coefficient is equal to: $P_l$ being orthogonal to any polynomial of degree strictly less that $l$, the expression can be written: But, we have proved that: therefore: If $m_l\leq 0$ we know that we get the result. \begin{flushright} $\square$ Q.E.D. \end{flushright} \end{dem} Finally this result gives us also the normalization condition: And so finally: is indeed an orthonormal family. Either explicitly (we reintroduce the factor $(-1)^l$): Finally, after this highly mathematical interlude (but instructive for the methodology of approach), we see (which is logical) that ot each value of $l$ correspond therefore $2l + 1$ eigenfunctions $Y_{m_l,l}(\theta,\phi)$. 
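This orthonormality is also easy to verify numerically. The sketch below is my own illustration and uses SciPy's built-in spherical harmonics; note that \texttt{scipy.special.sph\_harm} takes the azimuthal angle before the polar one:

\begin{verbatim}
# Numerical check that the spherical harmonics are orthonormal (illustrative).
import numpy as np
from scipy.special import sph_harm

az = np.linspace(0.0, 2.0 * np.pi, 400)   # azimuthal angle
po = np.linspace(0.0, np.pi, 400)         # polar angle
AZ, PO = np.meshgrid(az, po)

def inner(l1, m1, l2, m2):
    f = sph_harm(m1, l1, AZ, PO) * np.conj(sph_harm(m2, l2, AZ, PO)) * np.sin(PO)
    return np.trapz(np.trapz(f, az, axis=1), po)

print(abs(inner(2, 1, 2, 1)))    # ~1: normalization
print(abs(inner(2, 1, 2, -1)))   # ~0: orthogonality in m_l
print(abs(inner(2, 1, 3, 1)))    # ~0: orthogonality in l
\end{verbatim}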
We also say that the value $\hbar^2 l(l+1)$ is $2l + 1$ times degenerate since: Here are some values of the function $Y_{m_l,l}(\theta,\phi)$ that generate what we commonly name "\NewTerm{spherical harmonics}\index{spherical harmonics}": Let's see some plots of these beautiful spherical harmonics that can be obtained with Maple 4.00b by using the following command (this is the $6$th spherical harmonic function above):\\ \texttt{>plot3d(Re(sqrt(15/(8*Pi))*(sin(theta)*cos(theta)*exp(I*phi)))\string^2,phi=0..2*Pi,\\theta=0..Pi, coords=spherical,scaling=constrained,numpoints=5000,axes=frame);} \begin{figure}[H] \centering \includegraphics{img/chemistry/orbit_rigid_rotator_hydrogen_y12_maple.jpg} \caption{Plot of the spherical harmonic $Y_{1,2}$} \end{figure} \begin{itemize} \item $Y_{0,0}$ (corresponding to $n=1$!) gives a sphere (constant value regardless of $\theta,\phi$) whose probability density can be represented by the "\NewTerm{photographic card}\index{photographic card (chemistry)}" or "\NewTerm{density map}\index{density map (chemistry)}" (the density in a given state is represented by the density of light spots on a dark background): \begin{figure}[H] \centering \includegraphics{img/chemistry/density_map1s.jpg} \caption{$1s$ density map} \end{figure} Representing the possible $1s$ orbitals. \item $Y_{0,1},Y_{1,1},Y_{-1,1}$ give (for $n=2$ at least!): \begin{figure}[H] \centering \includegraphics{img/chemistry/harmonic_functions_2p.jpg} \caption{$2p$ orbitals (spherical harmonics)} \end{figure} Which represent the possible $2p$ orbitals, whose probability density function can be represented by its density and isodensity maps: \begin{figure}[H] \centering \includegraphics{img/chemistry/density_map2p.jpg} \caption{$2p$ density map} \end{figure} \item $Y_{-2,2},Y_{-1,2},Y_{1,2},Y_{2,2},Y_{0,2}$ give (for $n=3$ at least!): \begin{figure}[H] \centering \includegraphics{img/chemistry/harmonic_functions_3d.jpg} \caption{$3d$ orbitals (spherical harmonics)} \end{figure} Representing the $5$ possible $3d$ centrosymmetric orbitals, whose probability density can be represented by the following density maps (the last two maps represent $Y_{0,2}$): \begin{figure}[H] \centering \includegraphics{img/chemistry/density_map3d.jpg} \caption{$3d$ density map} \end{figure} \item $Y_{-3,3},Y_{-2,3},Y_{-1,3},Y_{1,3},Y_{2,3},Y_{3,3},Y_{0,3}$ give (for $n=4$ at least!): \begin{figure}[H] \centering \includegraphics{img/chemistry/harmonic_functions_4f.jpg} \caption{$4f$ orbitals (spherical harmonics)} \end{figure} Representing the $7$ possible $4f$ anti-centrosymmetric orbitals, whose probability density can be represented by the following density maps (in the order: $Y_{0,3},Y_{\pm 1,3},Y_{\pm 2,3},Y_{\pm 3,3}$): \begin{figure}[H] \centering \includegraphics{img/chemistry/density_map4f.jpg} \caption{$4f$ density map} \end{figure} \end{itemize} The above results thus lead us to write: Substituting this in the Schrödinger equation: We get ($T_r=0$ in the rigid rotor but $\neq 0$ in the case of the hydrogen atom): As there is in this relation no operator which acts on $Y_{m_l,l}(\theta,\phi)$, we can simplify to obtain: from which we see that, in the general case of the isolated atom, the energy levels do not depend on $m_l$ (due to the spherical symmetry of the potential). Then we say that the levels corresponding to the same values of $n$ and of $l$ are all merged whatever the values of $m_l$.
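Explicitly, the radial equation referred to here takes the standard form (with $\mu$ the reduced mass and the conventions used above):
\begin{equation*}
\left[-\frac{\hbar^2}{2\mu}\,\frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2\frac{\mathrm{d}}{\mathrm{d}r}\right)+\frac{\hbar^2\,l(l+1)}{2\mu r^2}+E_{\text{pot}}(r)\right]R_{n,l}(r)=E_{n,l}\,R_{n,l}(r)
\end{equation*}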
In the case where $E_\text{pot}$ derives from the $1/r$ Coulomb potential, this radial equation leads us to a normalizable solution $R(r)$ (different from zero, that is...) only for values of the energy corresponding to the following quantization law (well ... what a coincidence, we fall back on the expression proved in the old models of Corpuscular Quantum Physics!): where $R_H$ is the Rydberg constant as we determined in the section of Corpuscular Quantum Physics. Thus, in this case the energy levels corresponding to the same values of $n$ are all merged regardless of the value of $l$. For a given value of the principal quantum number $n$ (recall that we saw in the section of Corpuscular Quantum Physics that $l\leq n-1$), it is possible to verify that there are several solutions for the function $R(r)$ according to the value of the azimuthal quantum number $l$. Hence the identification of the solutions by the pair $(n, l)$. We denote them $R_{n,l}(r)$. These are real functions of the variable $r$ (it is just enough to check ... because if they work then they satisfy the Schrödinger equation; we will make an example a little further below): where (beware, some books give this value in natural units!): is the equivalent of the Bohr radius (for the reduced mass) that we have determined in the section of Corpuscular Quantum Physics, with the difference that here we have a reduced mass instead of a single mass. However let us see if our Schrödinger equation is satisfied (taking $n=1,l=0$ for example): Which corresponds well to the expected result. Which graphically gives us the radial part $R_{n,l}(r)$: \begin{figure}[H] \centering \includegraphics{img/chemistry/radial_functions.jpg} \caption{Plot of a few radial functions $R_{n,l}(r)$} \end{figure} Let us study in a little more detail the radial function in the case of the hydrogen atom!: In the case of the atomic orbital $1s$ (a special case, but we could do the same calculations as the following with all the other orbitals!) we have for the hydrogen atom: So it is indeed a decreasing exponential function, as shown in the graphic above. Before continuing let us recall that (\SeeChapter{see section Wave Quantum Physics}): But, in spherical coordinates (see the beginning of this section): It then comes, as we have seen earlier above: Then it follows that: With this result we can calculate the radial probability of finding the electron on each atomic orbital! So, it comes immediately with the previous result: So in the case of our $1s$ atomic orbital: It is now very interesting to calculate the radius $r$ where the probability of finding the electron on the $1s$ orbital is maximum! For this, we notice that $P_r$ reaches a maximum when $\dfrac{\mathrm{d}P_r}{\mathrm{d}r}=0$, that is, trivially, when: Therefore: Therefore: Which is remarkable, because we find the result of the Bohr model (\SeeChapter{see section Corpuscular Quantum Physics})!!! To summarize all this a little, the stationary states of the hydrogen atom are specified by three quantum numbers $n\in\mathbb{N}^{*},l\leq n-1,|m_l|\leq l$ and the Schrödinger wave function is finally given by: We then have the following traditional nomenclature in the case of the hydrogen atom: We can include the spin of the electron in the description of the electronic structure of the atom.
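Before doing so, here is a quick symbolic confirmation (an illustrative SymPy sketch of mine) of the result just obtained, namely that the $1s$ radial probability density peaks exactly at the Bohr radius:

\begin{verbatim}
# Symbolic check that r^2 |R_{1,0}(r)|^2 is maximal at r = a_0 (illustrative).
import sympy as sp

r, a0 = sp.symbols('r a_0', positive=True)
R_1s = 2 / sp.sqrt(a0**3) * sp.exp(-r / a0)   # standard 1s radial function
P = r**2 * R_1s**2                            # radial probability density
print(sp.solve(sp.diff(P, r), r))             # the maximum is at r = a_0
\end{verbatim}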
If we treat the spin as an additional degree of freedom then the lack of interaction term between conventional degrees of freedom (positions in real space) and the spin interaction named "\NewTerm{spin-orbit coupling}\index{spin-orbit coupling}" in the previous Hamiltonian implies that we can write the total wave function, spin included, in the form of a product: So taking into account everything seen so far we have the following density plots: \begin{figure}[H] \centering \includegraphics[scale=0.8]{img/chemistry/hydrogen_full_wave_function.jpg} \end{figure} where we added the spin quantum number $m_s=\pm 1/2$ (\SeeChapter{see section Corpuscular Quantum Physics}). The same remark we made in the section of Corpuscular Quantum Physics then applies: the levels remain $2n^2$ times degenerated. Let us do example. So we have: Thus for $1$ proton: Now let us apply the 5th postulate of wave quantum physics (see section of the same name) for unlike earlier, not calculate the modal radius (most likely one), but the average radius! Then, as the operator position is the position itself (\SeeChapter{see section of Wave Quantum Physics}), the average value of the radius will be given by (do not forget that we are in spherical coordinates!): Using the Fubini theorem proved in the section of Differential and Integral Calculus we can write (well in this case it's even trivial that we have the right to write this... we should not even have to mention Fubini theorem normally...): For the last integral, we will use integration by parts: Thus finally: Or more explicitly: Therefore the average distance of the electron to the core is equal to $3/2$ times that of the Bohr radius so further that the most likely radius we have calculated earlier above (and which corresponds to Bohr radius)! \subsubsection{Potential Profile} Let us come back on an important point that is often used in physics book but never proved (as far as we know): the quantum potential profile of the hydrogen-like atom. Many books sometimes speak of "\NewTerm{harmonic model of the atomic bonding}\index{harmonic model of the atomic bonding}" but it seems that this is a priori rather a misnomer. So we saw much earlier in this section that: In view of the interpretation of the three terms of the Hamiltonian, it is customary to say that the two terms: constitute the "\NewTerm{effective potential energy}\index{effective potential energy}", thus explicitly: So the first term is (logically) repulsive while the second is attractive. A plot in Maple 4.00b of the effective potential energy gives with real experimental values for the radius with the real values of the constants:\\ \texttt{>plot([-2.31E-28/r+6.11E-39*0*(0+1)/r\string^2,-2.31E-28/r+6.11E-39*1*(1+1)/r\string^2,\\-2.31E-28/r+6.11E-39*2*(2+1)/r\string^2,-2.31E-28/r+6.11E-39*3*(3+1)/r\string^2,\\-2.31E-28/r+6.11E-39*10*(10+1)/r\string^2],r=5E-11..10E-10,\\y=-0.5E-17..0.5E-17,thickness=2);} \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/effective_potential_energy.jpg} \end{center} \caption{Plot of the effective potential energy with Maple 4.00b for various $l$ and $Z$} \end{figure} where the legends were added afterwards with a text processor software. The reader will notice especially the case where $l=1$ that matches to the case of the figure indicated by the majority of graduate books of physics. 
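The same family of curves is easy to reproduce outside Maple; the short matplotlib sketch below (my own, reusing the same numerical prefactors as in the command above, namely $Ze^2/4\pi\varepsilon_0\approx 2.31\times10^{-28}\,\mathrm{J\,m}$ for $Z=1$ and $\hbar^2/2m_e\approx 6.11\times10^{-39}\,\mathrm{J\,m^2}$) draws the $Z=1$ curves for a few values of $l$:

\begin{verbatim}
# Effective potential energy -A/r + B*l*(l+1)/r^2 for Z = 1 (illustrative).
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(5e-11, 10e-10, 1000)
A, B = 2.31e-28, 6.11e-39          # Coulomb and centrifugal prefactors
for l in (0, 1, 2, 3, 10):
    plt.plot(r, -A / r + B * l * (l + 1) / r**2, label=f"l = {l}")
plt.ylim(-0.5e-17, 0.5e-17)
plt.xlabel("r [m]")
plt.ylabel("effective potential energy [J]")
plt.legend()
plt.show()
\end{verbatim}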
Or with a zoom:\\ \texttt{>plot(-2.31E-28/r+6.11E-39*1*(1+1)/r\string^2,r=5E-11..10E-10,thickness=2, color=green);} \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/effective_potential_energy_l_equal_1.jpg} \end{center} \caption{Plot of the famous effective potential energy with Maple 4.00b for $l=1$ and $Z=1$} \end{figure} The first graph also tells us quite clearly that for $l=0$ the electron has a negative potential energy that holds it firmly in orbit around the proton. By contrast, already at $l=1$ we guess that the point of stability of the electron is where the derivative is zero. Beyond $l=1$, in the case of a nucleus with a single proton, the electron is no longer naturally bound since its potential energy tends to be positive. The reader can also have fun with Maple by varying $Z$ and $l$. They will see that the effective potential energy is very sensitive to these parameters. For example, the plot below shows the effective potential energy with $l=4$ and $Z = 1$ (thus an unstable atom) and then with $l = 4$ and $Z = 6$ (which corresponds rather to an excited state):\\ \texttt{>plot([-2.31E-28*1/r+6.11E-39*4*(4+1)/r\string^2,-2.31E-28*6/r+6.11E-39*4*(4+1)/r\string^2],\\r=5E-11..10E-10,thickness=2);} \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/effective_potential_energy_l_equal_1_varous_z.jpg} \end{center} \caption{Plot of the effective potential energy for $l=4$ and various $Z$} \end{figure} It is customary in practice to consider that: is, up to a given factor (the electric charge), an "\NewTerm{effective electrical potential}\index{effective electrical potential}" or "\NewTerm{electric screened potential}\index{electric screened potential}". Indeed, by the definition of the electric potential (\SeeChapter{see section Electrostatics}), there is only a factor of the electric charge between the electric potential energy and the electric potential. So we have: \begin{flushright} \begin{tabular}{l c} \circled{90} & \pbox{20cm}{\score{3}{5} \\ {\tiny 49 votes, 66.12\%}} \end{tabular} \end{flushright}
%to make section start on odd page
\newpage \thispagestyle{empty} \mbox{} \section{Molecular Chemistry} \lettrine[lines=4]{\color{BrickRed}M}olecular chemistry is the central area that, through the study of molecules, interconnects many promising advanced technologies of the early 21st century, of which the best known are: molecular biology, molecular materials, molecular electronics, polymers, etc. Since it has been found experimentally that a single molecule can have several very different functions, its theoretical study allows us to use it better (sometimes with better performance in terms of R\&D) in its areas of application. The reader will therefore understand that, as usual in this book, we will focus here only on the theoretical (mathematical) aspect of molecular chemistry, even if we limit ourselves only to theoretical developments made between the years 1910 and about 1935 (beyond that, the complexity of the theories would require too many pages for a general book such as ours). At the beginning of the 21st century we are in the infancy of the discovery of what nature has done with plenty of time and chance (probabilities): that is to say, complex molecules working as nanomachines capable locally (at the active site) of filtering, oxidizing, performing catalysis ... and many other manipulations (just observe your own body!).
A molecule is often treated in school classes with the Schrödinger equation (so no relativistic case and no consideration of the spins) in the usual form (\SeeChapter{see section Wave Quantum Physics}): or also in a stationary from (time-independent) where as a reminder $\Psi$ is a eigenfunctions and $E$ an eigenvalue of the application $H$. In reality, the wave functions are impossible to calculate normally with contemporary mathematical tools and the only thing we can do are numerical calculations (perturbation method). This is why some chemistry centers are transformed over time into data centers where the predictive character (and inexpensive) of quantum chemistry is becoming more and more important. It remains of course essential, as always, to understand how the theoretical models are built and their underlying assumptions. But we can still thanks to calculations predict the form of reasonable size of molecules, the energy of their internal connections, their energy capacity under stress deformation, the shape of the molecular orbitals (M.O.), energy state transitions (when parts of the molecule move therein), their reactivity vis-a-vis of a reaction medium... We commonly distinguish two cases of study of the molecular chemistry: \begin{enumerate} \item Quantum mechanics: all interactions between particles are taken into account under the assumption of some acceptable simplifications. \item Molecular mechanics: For large molecules, we are note concerned anymore over the electronic problem, but the interaction of certain parameters on which we want to focus. \end{enumerate} For example, hemoglobin (protein carrying oxygen carrying in the muscles) is a huge molecular structure which we will study only active site with the tools of quantum mechanics. The overall behavior of the molecule itself is treated with the molecular mechanics tools. It follows that excepts for hydrogen-like atoms, we can not analytically describe a molecule from a purely quantum point of view! All current quantum methods rely on one or more approximations. The wave functions are therefore approximated and the level of calculation is adjusted according to what we want to show and the precision that we seek (seeking to minimize the computation time for cost problems...). The good understanding of approximations permits to express simple models requiring only a minimum of calculations (often trivial). We propose here to show two common models (and the most simplest): \subsubsection{Orbital Approximations} A molecule is obviously an extremely complex problem: $N$ nuclei, $n$ electrons and everything is moving! \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/vibrating_molecule.jpg} \end{center} \caption{Example of molecule where a almost everything is moving} \end{figure} The Hamiltonian (\SeeChapter{see section Wave Quantum Physics}): is then a nightmare but in the intuitive form (the subscript $G$ of the Hamiltonian means "General") below: where: \begin{enumerate} \item $\displaystyle-\sum_{k=1}^{N}\frac{\hbar^2}{2M_k}\vec{\nabla}_k^2$ is the kinetic energy of the $k$ nuclei of mass $M_k$ in the molecule. \item $\displaystyle-\sum_{i=1}^{n}\frac{\hbar^2}{2m_e}\vec{\nabla}_i^2$ is the kinetic energy of the $n$ electrons n mass $m_e$. \item $\displaystyle-\sum_{k=1}^{N}\sum_{i=1}^{n}\frac{Ze^2}{4\pi\varepsilon_0 r_{ik}}$ is the potential energy due to the attraction electron(-)/nucleus(+). 
\item $\displaystyle\mathop{\sum_{i=1}}_{j>i}^{n-1}\frac{e^2}{4\pi\varepsilon_0 r_{ij}}$ is the potential energy of the repulsion electron(-)/electron(-). \item $\displaystyle\mathop{\sum_{k=1}}_{l>k}^{N-1}\frac{Z_kZ_le^2}{4\pi\varepsilon_0 r_{kl}}$ is the potential energy of the repulsion nucleus(+)/nucleus(+). \end{enumerate} Often we find these terms in the following form of the Schrödinger equation in the literature: A first approximation we might try is to decouple the movement of the nuclei from that of the electrons. Indeed, as the nuclei are much more massive (about 2,000 times) than the electrons, the center of mass is assimilated to the nucleus of the atom and all the relative motion to the entire electron cloud. This approximate approach is well known under the name "\NewTerm{Born-Oppenheimer approximation}\index{Born-Oppenheimer approximation}": which then allows us to study the molecular orbitals. But unfortunately this approximation is not sufficient because of the interelectronic repulsion term (the double sum) that prevents using the separation of variables technique as we did in the section of Quantum Chemistry with the hydrogenoid atom. Moreover, this latter equation is also written as the first line of the pair of equations below (Schrödinger equations of the electrons and of the nuclei): \begin{subequations} \begin{align} &\underbrace{(T_e+V_{ee}+V_{en})}_{H_{\text{el.}}}\Psi_{el}=E\Psi_{el}\\ &\underbrace{(T_n+V_{nn})}_{H_{\text{nuclei}}}\Psi_n\Psi_{el}=E\Psi_n\Psi_{el} \end{align} \end{subequations} This system of equations is what some name the "\NewTerm{adiabatic approximation}\index{adiabatic approximation}". The idea that then comes to mind is to use the following property: Given two operators $A$ and $B$, and $f(u)$ and $g(v)$ their respective eigenfunctions associated with the eigenvalues $a$ and $b$, then $f(u) g(v)$ is an eigenfunction of the operator $A + B$ with associated eigenvalue $a + b$. Which is written: \begin{dem} We have: \begin{flushright} $\square$ Q.E.D. \end{flushright} \end{dem} And that is what we will use to break the $n$-electron Hamiltonian $H_{el}$ into a sum of independent one-electron Hamiltonians, knowing from the above that if we find the eigenfunction of each (which is relatively easier), it will be sufficient to simply multiply them to get the overall eigenfunction. Thus, we write: and therefore we have to find for each $i$: To then have: with therefore: This one-electron Hamiltonian approach will lead us to replace: by a sum of one-electron Hamiltonians named "\NewTerm{effective Hamiltonian}\index{effective Hamiltonian}": This approximation method is sometimes named in theoretical chemistry the "\NewTerm{independent electron approximation}\index{independent electron approximation}" or "\NewTerm{orbital approximation}\index{orbital approximation}". It therefore consists in including the electron-electron interactions by writing that each electron moves in an average potential resulting from the presence of all the other electrons. The "\NewTerm{Slater method}\index{Slater method}" consists by definition in writing the latter relation in the form: where $\sigma$ is named the "\NewTerm{screen constant}\index{screen constant}". The Slater method basically means replacing the purely electronic terms by a constant. It can be regarded as a parametric method since the constants were determined purely experimentally.
The principle of empirical calculation of the screening constant is relatively simple: In a poly-electronic atom, the core electrons are on much contracted orbits while the valence electrons that will be responsible for the chemical properties of the atom in question are on orbits much more "relaxed". The attraction of the nucleus on the latter electrons is much lower than that exerted on the core electrons and these electrons only receive a portion of the atomic charge. Slater then proposed that the effective charge, which is usually denoted by $Z^*$ could be calculated by taking into account the screening constant. This constant represents then the average effect of the other electrons on the considered electron of the effective Hamiltonian $i$: For a peripheral electron, we will need to consider its screen constant is due to all electrons placed on orbits equal or below its own. The tradition (or rather the "trick") is that the calculation is done by combining atomic orbitals in several groups $1s/2s, 2p/3s, 3p/3d/4s, 4p/4d/4f/5s, 5p/$ etc. Then the calculation is simple because it is based on an array of predefined values and we simply have to add the screening contributions of all the electrons following the table below: This table deserves some explanation of course !: The index indicates the number of the group that contributes to the screening constant while $n$ is the number of the group of electron that we consider. \pagebreak \begin{tcolorbox}[colframe=black,colback=white,sharp corners] \textbf{{\Large \ding{45}}Example:}\\\\ In the case of the Carbon of configuration $1s^2 2s^2 2p^2$, the nuclear charge is $Z=6$. One electron $1s$ is shielded by onlye the another $1s$ electron, the effective charge it sees is therefore: A $2s$ or $2p$ electron is shielded by the two $1s$ electrons and by the other $3$ electrons $2s$ and $2p$. The effective charge by which it is attracted is then: So we see that the effective charge experienced decreases rather quickly! \end{tcolorbox} \subsection{LCAO Method} A linear combination of atomic orbitals or LCAO is a quantum superposition of atomic orbitals and a technique for calculating molecular orbitals in quantum chemistry. In quantum mechanics, electron configurations of atoms are described as wave functions. In mathematical sense, these wave functions are the basis set of functions, the basis functions, which describe the electrons of a given atom. In chemical reactions, orbital wave functions are modified, i.e. the electron cloud shape is changed, according to the type of atoms participating in the chemical bond. So as already mention, this method, rather qualitative, considers that the molecular wave function is a "\NewTerm{Linear Combination of Atomic Orbitals LCAO}\index{Linear Combination of Atomic Orbitals}" unlike the previous method where we multiply the effective Hamiltonian. This method is important because it is the basis of much of the current vocabulary of chemists when the chemistry done is cutting edge one! Let us take the example of the dihydrogen molecule $H_2$. The idea is then following: If we have the function of the atomic orbital $1s_A$ of $H_A$ and respectively the function $1s_B$ of $H_B$, then we assume that the dicentric molecular orbital (linked to two atoms) thereof is given by: which defines a quantum system with two eigenstates. But as we well know, in reality, only the square of the wave function has a physical sense (probability of presence). 
Thus, if we assume that the wave function has no value in $\mathbb{C}$, we have for the single electron of interest ($1s$): where we assume that: \begin{itemize} \item $a^2\Psi_A^2$ represents the probability of presence to be near $A$. \item $b^2\Psi_B^2$ represents the probability of presence to be near $B$. \item $2ab\Psi_A\Psi_B$ represents the probability of presence of the electron that do the link $A-B$. \end{itemize} In the particular case of the symmetric diatomic molecule we have chosen as an example, the atoms $A$ and $B$ perform the same function and there is no reason that the electron is closer to $A$ than to $B$ or vice versa. Thus, the probability of finding the electron near $A$ is equal to the probability of finding it near $B$. Moreover, in this case the orbitals $\Psi_A$ and $\Psi_B$ are completely identical ($1s$ orbitals, both of the same atom) and there is therefore no need to distinguish them. So we have: We have two solutions for $\Psi_{AB}$ that are (these two solutions can be found in very different notations in the literature): and: \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] Caution! We can not put for the last two relations that $\Psi_A=\Psi_B$. The latter equality occurs at any point only if the distance between the two nucleus is zero (which is unlikely) or, if they are spaced a distant of a certain value $D$ in the middle thereof. \end{tcolorbox} These two expressions are simultaneously solutions of the Schrödinger equation. So we get two molecular orbitals from the two atomic orbitals in the case of symmetrical diatomic molecule. The function: is named "\NewTerm{bonding function}\index{bonding function}" because it corresponds to a reinforcement of the probability of presence of the electron between atoms $A$ and $B$ which corresponds to the creation of the bond! \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/bonding_link.jpg} \end{center} \end{figure} Conversely, the function: is named "\NewTerm{anti-bonding function}\index{anti-bonding function}" because it corresponds to a reduction of the probability of presence of the electron between atoms $A$ and $B$ which corresponds to the destruction of the bond! \begin{figure}[H] \begin{center} \includegraphics{img/chemistry/bonding_unlink.jpg} \end{center} \end{figure} Ultimately, by overlapping, the two atomic orbitals with the same energy give birth to two molecular orbitals of different energy, a stabilized binding and the other antibonding destabilized. We have obviously from what we see just above that, in more complex cases, the energy level of the bonding molecular orbital is smaller than the antibonding (we will prove this rigorously in details below). Thus, it takes more energy to ionize respectively the electron of the binding orbital $\sigma$ than to ionize the electron of the antibonding orbital $\sigma^{*}$. It is commonly accepted that the energy of the bond function is stronger than the antibonding one (but we will make the proof further below). Let us also indicate that in chemistry, a chemical bond wherein each of the bonded atoms is sharing an electron from one of its outer layers to form a pair of electrons linking two atoms is commonly known as "\NewTerm{covalent bond}\index{covalent bond}". The chemists then say the covalent bond involves the equitable sharing of only one pair of electrons, named "\NewTerm{bonding pair}\index{bonding pair}" (but in fact where only one electron is really shared). 
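For reference, a hedged recap of the two combinations discussed above in their (still unnormalized) form, the normalization being treated just below:
\begin{gather*}
\Psi_{AB}^{1}\propto\Psi_A+\Psi_B \quad(\text{bonding}),\qquad
\Psi_{AB}^{2}\propto\Psi_A-\Psi_B \quad(\text{anti-bonding})
\end{gather*}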
Each atom provides an electron, and the electron pair is then delocalized between the two atoms as we have shown. These are the reasons why we commonly say that the $\sigma$ bond is a covalent chemical bond between two atoms created by axial orbital overlap.
Let us now go deeper into this approach! The molecular orbitals have to be normalized, as we know. Which means that:
This gives for $\Psi_1$, since the atomic orbitals are normalized and are real functions:
Since $a$ (a real number in our case) is imposed as a constant, it comes immediately:
Therefore for $\Psi_{AB}^1$:
Identically, we have for $\Psi_{AB}^2$:
If we have $S_{12}\ll 1$, we get the following form that we find in many books:
Let us make a small example using as orbital the lowest atomic orbital (1$s$) of the hydrogen atom, in the case of the dihydrogen bond $H_2$, for which we have proved at the end of the section of Quantum Chemistry that:
Therefore it comes:
with for recall:
It comes then for the molecular bonding orbital of level $s$:
and for the antibonding orbital, also of level $s$:
We then see immediately that $\sigma^*_s$ vanishes in the middle of the two protons, because at this place $r_1=r_2$. The molecular antibonding orbital therefore has a nodal plane and the electrons are mainly located on the protons. By contrast, for the molecular orbital $\sigma_s$ the density does not vanish. Then we understand easily that an electron of $\sigma_s$ ensures the stability of the molecule and is therefore responsible for the chemical bond.
We therefore conclude that the electronic stabilization due to the interaction of the two identical orbitals is proportional to their overlap (recovery). The bigger the overlap, the more important the stabilization.
There is a more technical approach using the Dirac notation (\SeeChapter{see section Wave Quantum Physics}) that has the advantage of allowing the determination of the energy eigenvalues. First we write the general expression of the time-independent Schrödinger equation with the bra-ket notation for one molecular orbital, a superposition of two atomic orbitals:
Or in explicit form:
If we multiply on the left by the bra $\langle \Psi_A|$, taking into account that $a$, $b$ and the energy eigenvalues are constants, we get the following equation:
Similarly, multiplying by the bra $\langle \Psi_B|$:
Let us simplify the notations even more:
By symmetry of the problem in the case of dihydrogen, we put:
which are named "\NewTerm{resonance integrals}\index{resonance integrals}" because it is a term relating to the combination (resonance) of the two atomic orbitals of the two atoms that make up the molecular structure.
We also have:
which are named "\NewTerm{Coulomb integrals}\index{Coulomb integrals}" because they correspond, according to the fifth postulate of Wave Quantum Physics (see section of the same name), to the average value of the total energy of the electron.
We have obviously:
which are named "\NewTerm{recovery integrals}\index{recovery integrals}" because the two atomic orbitals of the same type of each atom overlap.
And finally, we always have, by symmetry of our particular case:
We can then write, since the recovery integrals of each orbital with itself are equal to unity:
These two equations are named "\NewTerm{secular equations}\index{secular equations}". The trivial solution is a priori not physical because it would mean that the electron has a zero probability density at any point in space, corresponding to $a=b=0$.
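For reference, a hedged sketch of these secular equations in the notation just introduced ($\alpha$ the Coulomb integral, $\beta$ the resonance integral, $S$ the recovery/overlap integral between the two atomic orbitals):
\begin{align*}
(\alpha-E)\,a+(\beta-ES)\,b&=0\\
(\beta-ES)\,a+(\alpha-E)\,b&=0
\end{align*}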
There is a unique nontrivial solution if and only if the following determinant (\SeeChapter{see section Linear Algebra}), known in molecular chemistry under the name "\NewTerm{secular determinant}\index{secular determinant}", is equal to zero:
As we have by symmetry in our particular case:
Therefore it comes:
Hence:
This gives us two solutions, one with the plus sign ($+$):
and one with the minus sign ($-$):
Therefore we have:
But to be able to calculate the energy levels in detail, we must still have the explicit form of the Hamiltonian... and doing that with both electrons of the dihydrogen molecule is quite difficult... To simplify the study, we reduce ourselves to the case of the cation (positive ion) $H_2^{+}$ consisting of two protons and one electron:
\begin{figure}[H]
\begin{center}
\includegraphics{img/chemistry/dihydrogen_cation.jpg}
\end{center}
\caption{Simplified study of the dihydrogen cation $H_2^+$}
\end{figure}
Based on the relation we have obtained at the beginning of this section, we then have the following relation:
where the first two terms in the brackets are, for recall, associated with the potential energy of the electron and the last with the repulsion potential energy of the protons (the first term on the right of the equality is the kinetic energy of the electron).
Now let us try to order the energies of these two molecular orbitals. For this, we write:
Let us recall that for a system to be stable, the energies $E_n$ must be negative, corresponding to the stable states (we need a supply of energy to take them out), which requires, because of the shape of $E_2$:
Knowing this, it comes:
We then see that the notations are not consistent with the usage in quantum physics, because normally the index $1$ is reserved for the lowest energy. So we will write in the future:
with the associated eigenfunctions $\Psi_1$ and $\Psi_2$ and therefore:
We can also notice an important thing! If we consider the atoms as isolated, the interaction terms cancel and we have:
Therefore we have the qualitative difference between a single atom and a simple diatomic (ionized) system:
This means that the energy of the lowest level of a diatomic ionized molecule is less than the energy of a single atom, which is near $\alpha$. This observation confirms that the system is stabilized in energy compared to two isolated atoms, which seems consistent with the experimental determination of the existence of such molecules.
The tradition is that chemists represent the energy differences in the following form for our particular case:
\begin{figure}[H]
\begin{center}
\includegraphics{img/chemistry/dihydrogen_cation_energy_levels.jpg}
\end{center}
\caption{Energy levels of the dihydrogen cation $H_2^+$}
\end{figure}
We therefore conclude - by generalizing a little bit... - that when two atoms (each contributing an electron) combine, their atomic orbitals will combine to generate two molecular orbitals, one associated with $\Psi_1$ at a lower energy level and a second associated with $\Psi_2$ at an energy level higher than that of the isolated atoms. Thus, splitting the molecule up so that each electron leaves with one of the atoms costs energy; equivalently, the formation of the molecule is exothermic in comparison to the single atoms.
Up to now we have discussed the electronic states of rigid molecules, where the nuclei are clamped to a fixed position. In what follows we will improve our model of molecules and include the rotation and vibration of diatomic molecules.
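To close this subsection, here is a hedged recap of the two energy levels and eigenfunctions in the $\alpha$, $\beta$, $S$ notation used above (a sketch; the ordering assumes $\beta<0$ and $0<S<1$, which is the usual situation):
\begin{gather*}
E_1=\frac{\alpha+\beta}{1+S}\leqslant E_2=\frac{\alpha-\beta}{1-S},\qquad
\Psi_{1,2}=\frac{\Psi_A\pm\Psi_B}{\sqrt{2(1\pm S)}}
\end{gather*}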
\pagebreak
\subsection{Molecular Rotational Energy Levels}
As we have seen in the section of Quantum Chemistry, for analytical reasons we consider molecules as rigid rotators. The rigid rotators are commonly classified into four types:
\begin{itemize}
\item Spherical rotors: have equal moments of inertia (e.g., $\mathrm{CH}_4$).
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/molecule_ch4.jpg}
\end{figure}
\item Symmetric rotors: have two equal moments of inertia (e.g., $\mathrm{NH}_3$).
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/molecule_nh3.jpg}
\end{figure}
\item Linear rotors: have one moment of inertia equal to zero (e.g., $\mathrm{CO_2}$, $\mathrm{HCl}$).
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/molecule_co2.jpg}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/molecule_hcl.jpg}
\end{figure}
\item Asymmetric rotors: have three different moments of inertia (e.g., $\mathrm{H}_2\mathrm{O}$).
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/molecule_h2o.jpg}
\end{figure}
\end{itemize}
Let us now recall that we have proved in the section of Quantum Chemistry that for the rigid rotator the part of the Hamiltonian dedicated to the rotational energy is:
where $L^2$ is an operator whose eigenvalues we know from our study of Wave Quantum Physics to be:
and where $r$ is the distance between the two corpuscles (nucleus and electron in the context of our study of the hydrogen-like atom in the section of Quantum Chemistry). In the context of diatomic molecules $A$ and $B$ it is more common to write $r_{AB}$ and:
In the section of Wave Quantum Physics we have seen that if we must consider the spin, we have to write the more general form:
Therefore:
In the old-style spectroscopic literature, the rotational term values $F(J) = E(J)/hc$ are used instead of the energies. The previous relation is then written:
with the "\NewTerm{rotational constant}\index{rotational constant}":
We also know from the section of Classical Mechanics that:
Therefore:
That simplifies to:
Therefore:
The energy separation between the rotational levels $J$ and $J+1$ is given obviously by:
and increases linearly with $J$.
Let us now calculate the moment of inertia of a diatomic molecule, which we will denote $I$ to avoid confusion with the angular momentum quantum number $J$ used above. Let us imagine the diatomic molecule as a system of two tiny spheres at either end of a thin weightless rod.
\begin{figure}[H]
\centering
\includegraphics{img/chemistry/diatomic_molecule_moment_inertia.jpg}
\caption{Construction for the study of inertia momentum of a diatomic molecule}
\end{figure}
Let $C$ be the center of mass of the molecule. Let $r_1$ and $r_2$ be the distances of the two atoms of respective masses $m_1$, $m_2$ from the center of mass $C$ of the molecule:
We see that:
and we have (\SeeChapter{see section Classical Mechanics}):
Therefore:
Hence:
After rearranging we get:
or:
Similarly:
Let $I$ be the moment of inertia of the diatomic molecule about an axis passing through the center of mass of the molecule and perpendicular to the bond length.
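Before completing this calculation, here is a hedged recap of the standard rigid-rotor expressions referred to above (a sketch using the usual conventions: $J$ the rotational quantum number, $I$ the moment of inertia, $c$ the speed of light):
\begin{gather*}
E_J=\frac{\hbar^2}{2I}\,J(J+1),\qquad
F(J)=\frac{E_J}{hc}=B\,J(J+1),\qquad
B=\frac{\hbar}{4\pi c I},\\
\Delta E=E_{J+1}-E_J=\frac{\hbar^2}{I}\,(J+1)
\end{gather*}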
Then we have seen in the section of Classical Mechanics that:
or:
thus:
after simplification:
Hence:
So finally for a diatomic molecule (or any pair of objects turning around a common center) we get the following moment of inertia\index{moment of inertia of a diatomic molecule}:
Hence the fact that we often find in the literature the previous main relations in the form:
and:
Therefore, as:
the frequencies at which transitions can occur are given by:
Notice that for $J_z=0$ we have a non-null zero point energy and frequency:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The molecule $\mathrm{NaH}$ is found to undergo a rotational transition from $J=0$ to $J=1$ when it absorbs a photon of frequency $2.94 \times 10^{11}$ [Hz]. We want to know the equilibrium bond length of the molecule.\\
For this purpose we use $J_z=0$ in the formula for the transition frequency. Solving for $r_0$ gives:
The reduced mass is given by:
which is in atomic mass units or relative units. In order to convert to kilograms, we need the conversion factor $1\;[\text{au}]= 1.66\cdot 10^{-27}$ [kg]. Multiplying this by $0.9655$ gives a reduced mass of $1.603\cdot 10^{-27}$ [kg]. Substituting in for $r_0$ gives:
\end{tcolorbox}
\pagebreak
\subsection{Molecular Vibrational Energy Levels}
Let us consider the simple case of a vibrating diatomic molecule, where the restoring force is proportional to the displacement such that (\SeeChapter{see section Mechanical Engineering}):
The potential energy is, as we proved in the previously mentioned section, but with the notation of Quantum Physics:
Now remember that we have proved in the section of Wave Quantum Physics that the Schrödinger equation was:
After rearrangement:
And using the conventional notations in chemistry and quantum physics:
As we consider a linear vibration mode, we know that we can use the reduced mass to analyze the system (\SeeChapter{see section Classical Mechanics}). Therefore we have:
Hence:
And as we have proved in the section of Wave Quantum Physics we have:
with for recall $n\in \mathbb{N}$.
\begin{flushright}
\begin{tabular}{l c}
\circled{90} & \pbox{20cm}{\score{3}{5} \\ {\tiny 23 votes, 64.35\%}}
\end{tabular}
\end{flushright}
%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Analytical Chemistry}
\lettrine[lines=4]{\color{BrickRed}C}hemistry is a very complex $n$-body science that mathematics cannot explain without the input of numerical computer simulations or approximations regarding the use of quantum theory (\SeeChapter{see Atomistic section}). Until these tools are powerful enough and accessible to everyone, chemistry remains a primarily experimental science based on the observation of different properties of matter, and we would like here to give some very important definitions (which we also find elsewhere, in other fields than chemistry).
Analytical chemistry is concerned with the chemical characterization of matter and the answer to two important questions: what is it (qualitative analysis) and how much is it (quantitative analysis). Chemicals make up everything we use or consume, and knowledge of the chemical composition of many substances is important in our daily lives.
Analytical chemistry plays an important role in nearly all aspects of chemistry, for example, agricultural, clinical, environmental, forensic, manufacturing, metallurgical, and pharmaceutical chemistry. The nitrogen content of a fertilizer determines its value.
Foods must be analyzed for contaminants (e.g., pesticide residues) and for essential nutrients (e.g., vitamin content). The air we breathe must be analyzed for toxic gases (e.g., carbon monoxide). Blood glucose must be monitored in diabetics (and, in fact, most diseases are diagnosed by chemical analysis). The presence of trace elements from gun powder on a perpetrator’s hand will prove a gun was fired by that hand.
The quality of manufactured products often depends on proper chemical proportions, and measurement of the constituents is a necessary part of quality assurance. The carbon content of steel will influence its quality. The purity of drugs will influence their efficacy.
In this section, we will focus only on the mathematical tools and techniques for performing these different types of analyses.
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] A "\NewTerm{subjective property}\index{subjective property}" is a property based on personal / individual impression, for example: beauty, sympathy, color, utility, etc.
\item[D2.] An "\NewTerm{objective property}\index{objective property}" is a property established by experiment (which cannot be contradicted), for example: mass, volume, shape, etc.
\item[D3.] A "\NewTerm{qualitative property}\index{qualitative property}" is a descriptive property given using words. For example: oval, magnetic, conductive, etc.
\item[D4.] A "\NewTerm{quantitative property}\index{quantitative property}" is a property that can be measured. For example: mass, volume, density, etc.
\item[D5.] A "\NewTerm{characteristic property}\index{characteristic property}" is an exclusive property that identifies a pure substance. It does not change even if the material is physically transformed, for example: its density, its boiling point, its melting point, etc.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We know about $2,000,000$ different pure substances in the early 21st century (that is to say ... there is work behind it).
\end{tcolorbox}
\item[D6.] We name "\NewTerm{compound bodies}\index{compound bodies}" the bodies that, subjected to chemical processes, restore their components in the form of pure substances.
\item[D7.] If we carry out the separation of mixtures and the decomposition of compounds, we finally get bodies that are non-decomposable by conventional chemical methods; we name them "\NewTerm{elements}\index{elements}" or "\NewTerm{simple bodies}\index{simple bodies}".
\item[D8.] The smallest part of a chemical combination yet having all of the properties thereof is the "\NewTerm{molecule}\index{molecule}" of this combination. The smallest part of an element or simple body is the "\NewTerm{atom}\index{atom}" of that element.
\end{enumerate}
Some general reminders first:
\begin{enumerate}
\item A mixture is named "\NewTerm{heterogeneous}\index{heterogeneous}" in chemistry if the components are immediately discernible to the naked eye or through the microscope.
\item A mixture is said to be "\NewTerm{homogeneous}\index{homogeneous}" in chemistry if the components are not discernible to the naked eye or through the microscope.
\item A system or body is said to be "\NewTerm{isotropic}\index{isotropic}" if it has identical values of a property in all directions, otherwise it is said to be "\NewTerm{anisotropic}\index{anisotropic}".
\end{enumerate}
\subsection{Simple Mixtures}
Before going into more or less complicated equations, the simplest case of application of mathematics to chemistry by which we can start is the management of mixtures for analysis and control operations with two mixtures. Let us consider two typical and particular examples as a theoretical introduction:
\begin{enumerate}
\item Given $10$ milliliters of a solution (yellow) with an acid concentration of $30\%$. How many milliliters of pure acid (blue) should we add to increase the concentration (green) to $50\%$?
\begin{figure}[H]
\begin{center}
\includegraphics{img/chemistry/chemistry_simple_mixture.jpg}
\end{center}
\caption{The joy of mixtures...}
\end{figure}
Since the unknown is the amount of pure acid to be added, we will denote it by $x$. Then we have:
That gives:
It comes then obviously:
Therefore $4$ milliliters of acid should be added to the original solution.
\item A canister contains $8$ liters of a gasoline and oil mixture to run a generator. If $40\%$ of the initial mixture is gasoline, how much of the mixture (pink) should we remove and replace with pure gasoline (green) so that the final mixture (light green) contains $60\%$ gasoline?
\begin{figure}[H]
\begin{center}
\includegraphics{img/chemistry/chemistry_simple_mixture_gazoline.jpg}
\end{center}
\caption{The joy of mixtures, for DIYers and the military...}
\end{figure}
We denote by $x$ the unknown, which is the number of liters of the initial mixture to remove and to replace by pure gasoline, also in the amount $x$. Then we have:
That gives:
We then have obviously:
So approximately $2.6$ liters should be removed from the original mixture and replaced by approximately $2.6$ liters of pure gasoline.
\end{enumerate}
In short, this is all for mixtures in this book for now. We could go much further and do much more complicated things with more unknowns, but we will stop there for now.
\subsection{Reactions}
Since the main study in chemistry is to observe the results of mixtures of pure substances and/or of compounds, it is first necessary to deal with the basic rules governing these mixtures under normal conditions of pressure and temperature (N.C.P.T.). We should first clarify that we are not going to study in this section what creates the connections between the elements, as this is the role of quantum and molecular chemistry (see previous sections). Furthermore, we insist on the fact that every theoretical element will be illustrated with a practical example, which can sometimes be useful to better understand.
Let us now consider a closed chemical system (without mass transfer therefore!).
We translate the change in the composition (if there is one) of the chemical system with a reaction equation of the form (the system does not always go both ways!):
named "\NewTerm{balance equation}\index{balance equation}", where the coefficients $v_i \in \mathbb{N}^*$ are named "\NewTerm{stoichiometric coefficients}\index{stoichiometric coefficients}" in the sense that they indicate the "golden proportions", strictly named "\NewTerm{stoichiometric ratio}\index{stoichiometric ratio}", necessary so that under normal conditions the reaction can take place, and where the $A_i$ are the reactants (pure or compound) and the ${A'}_i$ the formed products.
Caution! In the writing of the above equation, we require that all the $A_i$ without exception take part in the chemical reaction and that therefore all the $v_i$ are dependent.
If the "golden proportions" are respected (such that the coefficients are indeed stoichiometric) and exist when writing the reaction equation, then for any $\alpha \in \mathbb{N}$ we have:
this proposition holds only if the stoichiometric coefficients on one side or the other of the reaction vary proportionally. Experience shows that under normal conditions of temperature and pressure (N.C.T.P.) this is the case!
Therefore, the stoichiometry of the reaction requires that if $x_1$ moles of $A_1$, $x_2$ moles of $A_2$, ... disappear, respectively with a variation of matter $\mathrm{d}n_1,\mathrm{d}n_2,\ldots$, then accordingly ${x'}_1$ moles of ${A'}_1$, ${x'}_2$ moles of ${A'}_2$, ... will appear, respectively with a variation of matter $\mathrm{d}{n'}_1,\mathrm{d}{n'}_2,\ldots$, respecting the proportionalities of the stoichiometric coefficients, such that we can write the "\NewTerm{material balance equation}\index{material balance equation}":
where $\mathrm{d}\xi$ is named the "\NewTerm{elementary reaction progress}\index{elementary reaction progress}" (frequently we will take the absolute values of the ratios to not have to think about the sign of the variations). The division of the variations $\mathrm{d}n_1,\mathrm{d}n_2$ and $\mathrm{d}{n'}_1,\mathrm{d}{n'}_2$ by their stoichiometric coefficients is justified only for normalization reasons, whose purpose is to bring $\mathrm{d}\xi$ to a value between $0$ and $1$ (between $0\%$ and $100\%$...).
These last equalities simply indicate that if one of the reactants disappears in a given quantity, the quantities of the other reactants decrease in relation to their stoichiometric coefficients so as to maintain the golden proportions of the reaction.
The writing of the material balance can be simplified by the introduction of algebraic stoichiometric coefficients $v_i$ such that: $v_i>0$ for a formed product, $v_i<0$ for a reactant. Finally we can write:
a form we also often find in the literature with the absolute value at the numerator!
Therefore, with this algebraic convention, the reaction equation as it exists can be written:
which means that the algebraic sum over the pure compounds of the reactants and formed products is always zero.
It is clear that at the initial time of the reaction we choose for the progress the value $\xi=0$ (its maximum value being equal to unity), the time at which the quantities of matter are equal to $n_{i,0}$.
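A hedged sketch of the material balance written with the algebraic sign convention above ($v_i<0$ for reactants, $v_i>0$ for formed products):
\begin{gather*}
\mathrm{d}\xi=\frac{\mathrm{d}n_1}{v_1}=\frac{\mathrm{d}n_2}{v_2}=\cdots=\frac{\mathrm{d}n_i}{v_i}
\end{gather*}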
The integration of the differential expression of material balance obviously gives:
Therefore:
a relation that we find in chemical progress tables (see further below), without forgetting that $v_i>0$ for a formed product and $v_i<0$ for a reactant.
This brings us to the question: what is the maximum value $\xi_{\max}$ of the progress of a reaction? Well, the answer is in fact quite simple: the maximum progress value of a reaction occurs when the reactants have all disappeared, and therefore it is necessarily given by:
for what we name the "\NewTerm{limiting reactant}\index{limiting reactant}", that is to say, the reactant that disappears first (it always has the smallest value of $n_{i,0}/|v_i|$) and stops the expected reaction!
If there is no limiting reactant, that is, if at the end of the reaction all reactants have been transformed, then we say that all reactants were in stoichiometric proportion.
It may be helpful to define the "\NewTerm{percentage of completion}\index{percentage of completion (chemistry)}" of $A_i$, given by the intensive quantity:
which gives, with a more formal notation:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us consider, to illustrate these concepts, the reaction (dinitrogen and hydrogen giving ammonia):
where the Latin letters represent the pure substances (atoms) whose name does not matter to us in this book (notation proposed by Jöns Jacob Berzelius in 1813). The indices simply represent the number of atoms combined to obtain a molecule. \\
We then have in this reaction:
The reader will have noticed that we have indeed followed our convention for the mass balance:
If we consider that there is one mole of each compound body, it gives us for the stoichiometric proportions (to a given factor $x\in\mathbb{R}^{*}$ for all values):
If at a given time $t\neq t_0$ we get by measurement:
What is the progress of that reaction? The answer is:
or in other words, we are at $10\%$ of progress (logical!).\\
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
The conversion rate of $\mathrm{NH}_3$ is then:
And what is the maximum progress value $\xi_{\max}$ of the limiting reactant?\\
So in the context of the above example where we have $n_{1,0}=1\;[\text{mol}]$ for the $\mathrm{N}_2$, then:
\end{tcolorbox}
Chemists also often use what they name a "\NewTerm{reaction progress table}\index{reaction progress table}". Let us take our previous example to introduce this table. We have:
Let us seek $\xi_{\max}$ from this table. The limiting reactant is either $\mathrm{N}_2$ or $\mathrm{H}_2$. So for $\mathrm{N}_2$:
and for $\mathrm{H}_2$:
Each reactant having the same $\xi_{\max}$ progress, it is thus also the minimum $\xi_{\max}$. Consequently, according to the definition of limiting reactant, as the proportions are stoichiometric in the given example, no reactant is limiting.
\begin{flushright}
\begin{tabular}{l c}
\circled{10} & \pbox{20cm}{\score{3}{5} \\ {\tiny 25 votes, 55.20\%}}
\end{tabular}
\end{flushright}
%to force start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Thermochemistry}
\lettrine[lines=4]{\color{BrickRed}T}hermochemistry is the branch that historically focuses on the thermal phenomena and equilibria accompanying chemical reactions. It has its foundations mainly in thermodynamics.
More technically, thermochemistry is the study of the energy and heat associated with chemical reactions and/or physical transformations. A reaction may release or absorb energy, and a phase change may do the same, such as in melting and boiling. Thermochemistry focuses on these energy changes, particularly on the system's energy exchange with its surroundings. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. We can only strongly recommend the readers to have read or to read the section on Thermodynamics in the Mechanics chapter because many concepts that have been seen there will be assumed to be known in this section. Moreover, it is strongly recommended to read this chapter in parallel to that of Analytical Chemistry (this can be a boring but you must do with...). \subsubsection{Chemical transformations} Given the closed system closed of the chemical reaction (\SeeChapter{see section Analytical Chemistry}): We will consider for simplicity that the chemical reaction is complete and that the reactants are used in stoichiometric amounts (state 1: $\Sigma_1$) to give the products formed, also in stoichiometric quantities (state 2: $\Sigma_2$). If the transformation is done in (quasi-)steady volume steady, work on the surrounding atmosphere is zero because (\SeeChapter{see section Thermodynamics}): The application of the first law of thermodynamics is reduced and allows then us to write: where $Q_v$ is within the thermal chemistry framework named "\NewTerm{heat of reaction at constant volume}\index{heat of reaction at constant volume}", of course exchanged between the system and the external environment (we do not write the delta $\Delta$ in front of $Q_V$ to indicate that it is a variation... by tradition...). Let us recall that: \begin{enumerate} \item If $Q_V>0$ the reaction is said to be "\NewTerm{endothermic}\index{endothermic}" (the system receives heat from the external environment). \item If $Q_V<0$ the reaction is said to be "\NewTerm{exothermic}\index{exothermic}" (the system gives heat to the external environment). \item If $Q_V=0$ the reaction is said to be "\NewTerm{athermic}\index{athermic}" (the system do not exchange any heat with the environment). \end{enumerate} \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] Let us also recall that a closed system is not an isolated system! For a review of different definitions, the reader is referred once again to the section of Thermodynamics. \end{tcolorbox} If the reaction is carried out at constant pressure (the most usual case in practice), that is to say isobaric, then we have: \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] The choice of integration indices are different to previously to differentiate the fact that a reaction a pressure or constant volume are not necessarily identical. \end{tcolorbox} The application of the first law of thermodynamics, between the two states, gives: where $Q_p$ is the amount of heat, named "\NewTerm{constant-pressure reaction heat}\index{constant-pressure reaction heat}", exchanged between the system and the external environment ($Q_P$ is a variation... even if the traditional unfortunate notation of thermodynamician does not put that in evidence...). 
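A hedged summary of the two heats of reaction just defined, using generic initial/final labels $i$, $f$ (the text distinguishes the final states of the two paths by different indices; this is only a sketch under the stated assumptions of constant volume, respectively constant pressure, with no work other than pressure work):
\begin{gather*}
Q_V=U_f-U_i,\qquad
Q_P=(U_f-U_i)+P\,(V_f-V_i)
\end{gather*}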
Using the definition of enthalpy, we can write the last relation in the form: If we work with the molar volumes, those of condensed phases (therefore solid and liquid) is negligible compared to the gas molar volume, only the gas components have a very different enthalpy of their internal energy (see the example in the section of Thermodynamics) . We would therefore have under the ideal gas approximation (\SeeChapter{see section Thermodynamics}): In the context of the ideal gas, the prior-previous relation can be written: But, as (\SeeChapter{see section Continuum Mechanics}) $U_2$ and $U_3$ are both the same final states of a single complete reaction and that for a monatomic gas: therefore the internal energy $U_2$ and $U_3$ only depends on the number of components but ... they are equal since they are the same final state of the same reaction! Therefore we have: By putting $\Delta n=n_2-n_1$ (the difference between the number of moles of gas of formed products and those of reacting products), we can write for a chemical reaction: that gives the possibility to differentiates the energy involved between isobaric and isochoric reaction and look for the best choice in terms of industrial objectives. It is interesting to note that if the $\Delta$ of moles is zero. Isobaric or isochoric heat variations are equal and there is no a priori reason to prefer one or the other transformations. Obviously in practice the problem is to know the values of the different variables of the latter relation. These values can be found on huge databases that chemists have access to... This relationship is only very rarely used in practice and in any case it is based on too simplifying and restrictive assumptions to be of real practical interest. \subsection{Molar Quantities} \textbf{Definitions (\#\mydef):} \begin{enumerate} \item[D1.] By convention, the "\NewTerm{mole}\index{mole}" is the quantity of substance of a system which contains as many chemical species as there are Carbon atoms in $12$ [g] of carbon $12$ (\SeeChapter{see section Nuclear Physics}). The number of carbon atoms contained in $12$ [g] is equal to the Avogadro's \underline{number} given approximately by: This means verbatim and by construction that a mole of water, of iron, of electron, respectively always contains a number of atoms equal to the Avogadro's number. Most of time the mole is simply denoted $n$ and has its value in $\mathbb{R}^{+}$. Note that with a mixed system it is a mathematical nonsense to do the sum of the molar masses of the constituents for the total molar mass. The molar mass is an intensive quantity! \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] \textbf{R1.} Hydrogen-1 was once used as a standard but given the inaccuracy that can occur because of its low mass, it was later disregarded. Once mass spectrometry was made available, physicists were using Carbon-12 for it's stability and abundance, and basically to stop everybody from fighting. Carbon-12 also more accurately defines a mass for hydrogen, and it is unbound in it's ground state and also is the most common and readily available isotope to have exactly the same number of protons and neutrons, 6 of each, and thus provides a perfect average when divided by the total number of protons and neutrons (electron is so small as to be considered negligible). 
\\
\textbf{R2.} The "Avogadro project" aims to redefine Avogadro's constant (currently defined by the kilogram: the number of atoms in 12 g of Carbon-12) and reverse the relationship so that the kilogram is precisely specified by Avogadro's constant. This method required creating the most perfect sphere on Earth. It is made out of a single crystal of silicon-28 atoms. By carefully measuring the diameter, the volume can be precisely specified. Since the atom spacing of silicon is well known, the number of atoms in a sphere can be accurately calculated. This allows for a very precise determination of Avogadro's constant.
\end{tcolorbox}
\item[D2.] The "\NewTerm{molar mass (MM)}\index{molar mass}" is the mass of one mole of atoms of the chemical elements involved. Therefore by definition the molar mass of $\mathrm{C}_{12}$ is equal to $12$ grams (yes, historically we use the gram to express molar masses because for application purposes it is obviously more convenient...).
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We find these atomic molar masses in the periodic classification. But above all it must be known that those that are indicated take into account the natural isotopes (which is normal since they are chemically indistinguishable, except for the nuclear chemist or nuclear physicist). So the value indicated in the tables is calculated as the sum of the respective proportions of the molar masses of the different corresponding isotopes (the validity of this method of calculation is obviously relative...).
\end{tcolorbox}
\item[D3.] The "\NewTerm{atomic molar mass}\index{atomic molar mass}" is the molar mass of a given element divided by the Avogadro number. Thus:
Therefore the atomic (molar) mass is the mass of $1$ atom of a particular element, while the molar mass is the mass of $1$ mole of atoms or molecules. We therefore have the following graph:
\begin{figure}[H]
\begin{center}
\includegraphics{img/chemistry/mole_mass_avogadro.jpg}
\end{center}
\end{figure}
\item[D4.] The "\NewTerm{Molecular molar mass (MMM)}\index{molecular molar mass}" is equal to the sum of the atomic molar masses of the chemical elements that constitute it.
The following observation therefore comes immediately: the mass $m$ of a sample consisting of an amount of $n$ moles of identical chemical species of molar mass $M_m$ is given by the relation:
In a somewhat more formal way and from a thermodynamic point of view, here is also how we can define molar quantities:
Let $X$ be an extensive quantity on a single-phase system (see the section of Thermodynamics for precisions about the vocabulary used) and consider a volume element $\mathrm{d}V$ of this system around a point $M$, containing the amount of matter $\mathrm{d}n$. We associate with it the extensive quantity $\mathrm{d}X$ proportional to $\mathrm{d}n$ such that:
so that $X_m$ is an intensive quantity (a ratio of two extensive quantities, according to what was seen in the section of Thermodynamics) which we will name by definition the "\NewTerm{associated molar size}\index{associated molar size}" of $X$. We conclude that:
the integral applying to the whole monophasic system.
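A hedged restatement of the definition just given ($X_m$ being the associated molar quantity):
\begin{gather*}
\mathrm{d}X=X_m\,\mathrm{d}n
\qquad\Rightarrow\qquad
X=\int_{\text{phase}}X_m\,\mathrm{d}n
\end{gather*}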
In the case of a uniform phase, $X_m$ being constant at any point, we can simply write the latter as:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Basically the idea is to say that the mass of a single-phase chemical system is proportional to its molar mass, up to a factor representing the number of its constituents (or the number of moles, to be more exact).
\end{tcolorbox}
\item[D5.] When the system is heterogeneous, we use the concept of "\NewTerm{mole fraction}\index{mole fraction}", defined by:
$x_i$ being the mole fraction of a species $A_i$ whose quantity of matter (the number of moles for example) is $n_i$, with $n=\sum_i n_i$ being the total quantity of matter of the studied phase. As a result, for all chemical species of the studied phase, $\sum_i x_i=1$, which means that if there are $n$ chemical species, it is enough to know $n-1$ mole fractions to know them all.
If the studied phase is a gas and assuming a perfect gas obeying the ideal gas law (an approximation of the Van der Waals equation proved in the section of Statistical Mechanics), we have:
we therefore have the possibility in the case of gaseous phases to express the mole fraction as:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We can obviously do the same for the volume $V$.
\end{tcolorbox}
\item[D6.] We define the "\NewTerm{mass content associated with the species $A_i$}\index{mass content associated a species}" by the ratio:
with $m=\sum_i m_i$ being the total mass of the studied phase. We also have of course $\sum_i w_i=1$.
\item[D7.] We define the "\NewTerm{volumic molar concentration}\index{volumic molar concentration}" or "\NewTerm{molarity}\index{molarity}" as the ratio (do not confuse the notation with the specific heat):
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
There are other composition variables much less used than $x_i$ or $c_i$. We can cite the "\NewTerm{mass concentration density}\index{mass concentration density}" $m_i/V$, the "\NewTerm{molality}\index{molality}" (ratio of the amount of matter of the species $A_i$ to the total mass of solvent), etc.
\end{tcolorbox}
\item[D8.] We say that a (perfect) gas is in the "\NewTerm{standard state}\index{standard state}" if its pressure is equal to the standard pressure:
\item[D9.] We call "\NewTerm{standard molar quantity}\index{standard molar quantity}" of a constituent $X_m^\circ$ the value of the molar quantity of this same component taken in the standard state, that is to say under the pressure $P^\circ$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} Any standard molar quantity is obviously intensive: the pressure being set by the standard state, it depends only on the temperature.\\
\textbf{R2.} Any standard quantity is denoted with the superscript "${}^\circ$". $V_m^\circ$ is then the standard molar volume. On the other hand, the standard molar quantity is not always specified with the small index $m$; we must sometimes be careful with what is handled in the equations (as always anyway!).
\end{tcolorbox}
In the case of the ideal gas, the molar volume is calculated using the ideal gas equation of state. Then we get:
We see of course that the standard molar volume of an ideal gas depends on the temperature. If we do that calculation at the "\NewTerm{standard conditions of temperature and pressure}\index{standard conditions of temperature and pressure}" (abbreviated STP), that is to say at a temperature of $273.15$ [K] (i.e. $0$ [$^\circ$C]) and a pressure of $1$ [atm] (i.e.
$101.325$ [kPa]), then we find a volume of $22.4$ [L$\cdot$mol$^{-1}$], which is a value well known by chemists.
\end{enumerate}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} In a wide range of temperatures and pressures, the molar volume of real gases is generally not very different from that of an ideal gas.\\
\textbf{R2.} In the case of a condensed state, we do not have in general a state equation, but we can measure the molar volume.
\end{tcolorbox}
We can then define by extension other standard quantities resulting from those we defined in the section of Thermodynamics:
\begin{enumerate}
\item The "\NewTerm{standard molar internal energy}\index{standard molar internal energy}" (an intensive quantity, as it is expressed per mole), denoted by $U_m^\circ$.
\item The "\NewTerm{standard molar enthalpy}\index{standard molar enthalpy}" (an intensive quantity, as it is expressed per mole) with:
It is important that the reader notices that this enthalpy depends only on the temperature (and the internal energy).
\end{enumerate}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For condensed states the standard molar volume is very low in S.I. units, so that $H_m^\circ\cong U_m^\circ$. However, it is very difficult to speak of pressure for such condensed states, so this approximation has to be used with caution.
\end{tcolorbox}
Let us now consider an extensive function $X$ (as for example the volume!) defined on an evolving gaseous chemical system. We can a priori express $X$ in terms of two intensive variables $T$, $P$ (because an extensive function is always a product or ratio of two intensive quantities, or a sum of extensive quantities) and of the different quantities of matter $n_i,{n'}_i$ of $A_i,{A'}_i$ such that:
If all products (reactants and resulting ones) are in their standard state, the extensive function, therefore denoted $X^\circ$, takes the form:
where the pressure is no longer involved, being fixed at its standard value. The gas is then described by its temperature and the quantities of its constituents! However, if we consider an infinitesimal evolution of the system at constant temperature and pressure (because we assume a very slow transformation), the different quantities of matter vary following the exact total differential (\SeeChapter{see section of Differential Calculus and Integral}):
where obviously, as $T$ and $P$ are supposed constant, only the quantities of matter that can vary are taken into account (yes, don't forget we are doing chemistry!!!). We can then define artificially (nothing prevents us from doing so, it is not false!) the intensive standard molar quantity that depends only on the temperature:
Therefore:
but we have also defined in the section Analytical Chemistry the relation:
expressing, for recall, the variation in the quantity of matter of one of the compounds of a chemical reaction relatively to its stoichiometric coefficient (a constant) and the progress of the reaction. We therefore have:
and also that:
By definition, we name this algebraic sum the "standard reaction quantity associated with the extensive function $X$" and denote it by (a notation badly chosen by chemists in our point of view...):
which is an intensive quantity that depends only on the temperature and represents a relative change (hence the subscript $r$!).
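A hedged sketch of this standard reaction quantity, with the sign convention recalled above ($X_{m,i}^{\circ}$ being the standard molar quantities of the constituents):
\begin{gather*}
\Delta_r X^{\circ}
=\left(\frac{\partial X^{\circ}}{\partial\xi}\right)_{T,P^{\circ}}
=\sum_i v_i\,X_{m,i}^{\circ}
\end{gather*}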
This relation can also be written:
In general, chemists name "\NewTerm{Lewis operator}\index{Lewis operator}", denoted $\Delta_r$, the derivative of a quantity $X$ (standardized or not) with respect to the progress of the reaction $\xi$, at constant temperature and pressure.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The symbol $\Delta_r$ appears with the letter $r$ in subscript to show that this is a relative reaction quantity. In other words, it is the standard variation of the molar quantity during the concerned reaction for a reaction progress of one mole, at a pressure of $1$ bar for a perfect gas.
\end{tcolorbox}
We must also not forget that, with the algebraic convention of the section of Analytical Chemistry, the stoichiometric coefficients of the formed products are positive and those of the reactants are negative (\SeeChapter{see section of Analytical Chemistry}).
There are two reaction quantities that play important roles in chemistry:
\begin{enumerate}
\item The molar internal energy of reaction, often named "\NewTerm{internal energy of standard reaction}\index{internal energy of standard reaction}" of a chemical system:
\item The molar enthalpy of reaction, more often named "\NewTerm{standard enthalpy of reaction}\index{standard enthalpy of reaction}" of a chemical system:
\end{enumerate}
\subsubsection{Standard enthalpy of reaction}
Therefore we can consider the following two cases after knowing the relation (\SeeChapter{see section Thermodynamics}):
\begin{enumerate}
\item If the $A_i$ are in a condensed state, since the pressure-volume term is negligible we have:
which still remains to be taken with precaution depending on the scenario!
\item If the $A_i$ are in the gaseous state (assumed perfect gas):
\end{enumerate}
We conclude that only the gases will intervene in this relation:
which we write conventionally:
It follows that in the special case where:
(which is in fact an unfortunate notation... for the algebraic sum of the stoichiometric coefficients being equal to zero) for a given temperature, we then have:
where it must be remembered that the stoichiometric coefficients of the products are counted as positive, while those of the reactants are counted as negative (\SeeChapter{see section Analytical Chemistry}). Thus, the variation of the enthalpy function corresponds to the variation of the quantity of heat absorbed or emitted in an isobaric transformation at a given temperature $T$. This is why it is sometimes denoted $\Delta_r H_{T,P^\circ}$.
A chemical reaction whose reaction enthalpy (which is, for recall, the instantaneous change in enthalpy during the reaction) is negative is said to be "\NewTerm{exothermic}\index{exothermic}", since it releases heat into the environment (constant pressure being obliged by the definition of the reaction enthalpy!), while a chemical reaction whose reaction enthalpy is positive is said to be "\NewTerm{endothermic}\index{endothermic}", since it then requires a supply of heat to occur (so the vocabulary is the same as in the section of Thermodynamics).
Thus, according to the preceding developments, if we denote with an index $p$ the products and with an index $i$ the reactants, we often find the standard enthalpy of reaction as follows if the stoichiometric coefficients are counted as positive:
In this form, we see well that the standard reaction enthalpy corresponds to the difference of partial molar enthalpies between the products and reactants of the transformation. This is nothing more than "\NewTerm{Hess's law}\index{Hess's law}", set out in the 19th century by the Swiss-born Russian chemist Germain Henri Hess.
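A hedged sketch of the form just described, with the index $p$ for products and $i$ for reactants and all coefficients counted as positive:
\begin{gather*}
\Delta_r H^{\circ}
=\sum_{p}v_p\,H_{m,p}^{\circ}-\sum_{i}v_i\,H_{m,i}^{\circ}
\end{gather*}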
The law the states that the total enthalpy change during the complete course of a chemical reaction is the same whether the reaction is made in one step or in several steps and can be understood as an expression of the principle of conservation of energy, also expressed in the first law of thermodynamics, and the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function). Because in a system at equilibrium, the initial energy is always greater than or equal to the final energy (all systems tend to move towards to a more stable state with minimum energy as we have study it in the section of Thermdynamics), then the standard enthalpy of reaction $\Delta_r H^\circ$ may be only negative or zero. If a chemical reaction at constant pressure and at a specific temperature gives only a single chemical compound (product) then the standard enthalpy of reaction is named "\NewTerm{standard enthalpy of formation}\index{standard enthalpy of formation}" and is denoted by $\Delta_f H^\circ$. In fact, the interest of prior-previous relation is that the chemist can simply, without having to know the quantities of material involved, determine just by knowing the stochiometric coefficients of an isobaric gas or condensed chemical reaction (if he agrees that it will then be an approximation for the latter case) that the instantaneous variation of the molar internal energy during the progress of the reaction at a given temperature is equal to the instantaneous variation of the molar enthalpy . Two different situations arise then: \begin{enumerate} \item The difference between the instantaneous variation of the molar internal energy and the molar enthalpy is zero: therefore, the chemical reaction (at a given temperature) does not instantaneously occupies a larger volume and thus don't loss energy to push ("unnecessarily") the pressure of the gas surrounding the studied reaction (this can be seen as a money saving in terms of energy in the chemical industry). In this case, the standard enthalpy of reaction is simply equal to the heat of reaction at constant pressure $Q_p$. \item The difference between the instantaneous variation of the molar internal energy and the molar enthalpy is positive: Therefore, the chemical reaction (at the given temperature) instantly occupies a greater volume and thus loses some energy to push ("unnecessarily") the pressure of the gas surrounding the studied reaction (this can be seen as a waste of money in terms of energy efficiency in the chemical industry). \end{enumerate} \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt] Obviously, it is possible to imagine a company that takes advantage of the change in volume of a reaction (case 2 above) that pushes the surrounding gas with a piston system to then produce mechanical energy... so it would be possible in certain situations to lose much less money (verbatim energy ...). \end{tcolorbox} However a small difficulty arise, ... the standard enthalpy of a simple pure body (body formed of a single type of atom) can not be calculated in absolute terms because it depends on the internal energy which is very difficult to calculate (you must use the tools of quantum physics which raise insurmountable problems even in the beginning of the 21st century). This means we must define an arbitrary scale of molar enthalpies by setting an arbitrary zero enthalpy and adopted internationally (which is unfortunately not the case as far as we know!). 
Thus, in order to set up tables of standard molar enthalpies, it was chosen to define the enthalpy scale as follows: the standard molar enthalpy of a simple stable pure body in the standard state is equal to $0$ at $298$ [K]. It follows that the enthalpy of formation of a simple standard pure body is always equal to zero.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Given the reaction:
That is to say, the dissociation of phosphorus pentachloride into phosphorus trichloride and chlorine. The tables give us at the temperature of $T=1000$ [K] the following value of the standard molar enthalpy of this reaction:
The variation of the value of the molar enthalpy of reaction being positive, it follows that the reaction is endothermic (it requires a heat input, hence the dissociation temperature of $1000$ [K]) and therefore the product is more volatile than the initial reactant.
We have the following algebraic sum of the stoichiometric coefficients of the reaction:
That is (which is dimensionless since the enthalpy is in molar value!):
therefore the reaction increases the pressure by creating an additional mole per mole of reactant. Since:
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
then it comes:
This is then the part of the internal energy absorbed by the system out of the $156 \;[\text{kJ}\cdot \text{mol}^{-1}]$. The remainder (the difference) has been used to push back the surrounding atmosphere of the chemical reactor.
\end{tcolorbox}
\paragraph{Kirchhoff's Enthalpy Law}\mbox{}\\\\
Kirchhoff's enthalpy law describes the variation of a reaction's enthalpy with temperature. In general, the enthalpy of any substance increases with temperature, which means that both the products' and the reactants' enthalpies increase. The overall enthalpy of the reaction will change if the increase in the enthalpy of the products differs from that of the reactants.
In other words, in a more practical way, the latent heat - the energy required to evaporate a liquid - is not the same at every temperature! The difference between the gas and liquid energy levels increases at higher temperatures. Thus, Kirchhoff's enthalpy law enables the calculation of a new latent heat from an existing one with a known temperature change.
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{img/chemistry/eirchooff_enthalpy_law.jpg}
\caption{Kirchhoff's enthalpy law illustration}
\end{figure}
So the idea of Kirchhoff's enthalpy law is to express the variations of the enthalpy of reaction (molar or not) as a function of the temperature, from the knowledge of the heat capacities at constant pressure of the gaseous reactants. We have built in previous developments the following relation:
which is the standard enthalpy of reaction at a given temperature in a system at the standard pressure. We had also mentioned earlier above that $\Delta_r$, for recall, is somewhat an unfortunate notation for the differential (Lewis) operator of progress of the reaction $\mathrm{d}/\mathrm{d}\xi$.
If we focus on the influence of the temperature $T$ on $\Delta_r H^\circ$, we just have to write the exact differential:
since the algebraic variation of the standard enthalpy by definition depends only on the temperature. The stoichiometric coefficients $v_i$ do not depend on the temperature, at least as long as the latter does not change the very nature of the studied transformation.
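In anticipation of the result derived just below, here is a hedged statement of Kirchhoff's law in its usual differential and integrated forms (a sketch, with $C_{p,m,i}^{\circ}$ the standard molar heat capacities at constant pressure of the constituents):
\begin{gather*}
\frac{\mathrm{d}\,\Delta_r H^{\circ}}{\mathrm{d}T}
=\sum_i v_i\,C_{p,m,i}^{\circ}=\Delta_r C_p^{\circ},\qquad
\Delta_r H^{\circ}(T)=\Delta_r H^{\circ}(T_0)+\int_{T_0}^{T}\Delta_r C_p^{\circ}\,\mathrm{d}T
\end{gather*}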
We then have, under this approximation (assumption):

Now, we have defined in the section on Thermodynamics the heat capacity at constant pressure, which is written:

So if the conditions are standard (the enthalpy therefore depends only on the temperature), we get the exact differential:

Then we have:

We can of course integrate the latter relations to get the common form used in practice and available in many books:

Then we have:

where $T_0$ is a particular temperature for which $\Delta H^\circ (T_0)$ is known. In a temperature range very close to $T_0$, chemists sometimes approximate the variation as being linear. That is equivalent to putting:

It then follows immediately from the previous relation:
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Quite often, the variation of the enthalpy of reaction with temperature is negligible!
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
For the reaction (graphite + oxygen yielding carbon dioxide) we would like to know $\Delta_r H^\circ$ at $1000$ [K]:

For this, it is given in the tables for this reaction at $298$ [K]:

and:

We write the heat capacities above in lowercase because there are already enough subscripts without adding a third one ($m$) to indicate that these are molar heat capacities.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
When the enthalpy of reaction is given at the reference temperature (nowadays...) of $298$ [K], chemists then speak, as we have already mentioned earlier, of the "standard enthalpy of formation".
\end{tcolorbox}
The value of the molar enthalpy of reaction being negative, it follows that the reaction is exothermic (it is the tendency of nature to favor exothermic reactions in order to stabilize systems in their minimal energy states).
\end{tcolorbox}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
We then have immediately:

Thus the variation is $-560\;[\text{kJ}\cdot \text{mol}^{-1}]$, that is, a variation of about $+0.1\%$. It follows that the more the temperature increases, the more exothermic the reaction becomes. In fact, the choice of this particular temperature of $1000$ [K] is not innocent, because it is from this temperature onwards that experiments show that the reaction also produces carbon monoxide.
\end{tcolorbox}
We can also conclude that some exothermic reactions whose enthalpy of reaction decreases rapidly with temperature can blow up!

Finally, let us indicate that in practice we often use the term "\NewTerm{calorific power}\index{calorific power}" or "\NewTerm{Heat of combustion}\index{Heat of combustion}", which is simply the enthalpy of reaction per unit mass of fuel, that is, the energy obtained by combusting a kilogram of fuel. Thus, for gasoline, we have, following what the tables give (under the assumption that this number is correct):

And we can have fun calculating the amount of gasoline needed to accelerate a car of $1000$ [kg] from $0$ to $100\;[\text{km}\cdot \text{h}^{-1}]$ with a yield of $\eta=35\%$ at a temperature of $293$ [K]. Thus we have:

and, to get the amount of fuel in liters, the tables give us for gasoline a density of about $700\;[\text{kg}\cdot \text{m}^{-3}]$, which finally gives a volume in liters of:
\begin{flushright}
\begin{tabular}{l c}
\circled{20} & \pbox{20cm}{\score{3}{5} \\ {\tiny 23 votes, 58.26\%}}
\end{tabular}
\end{flushright}
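\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The numerical values of the gasoline example above did not survive in this copy, so, purely as an illustration and assuming a typical tabulated calorific power for gasoline of roughly $46\;[\text{MJ}\cdot\text{kg}^{-1}]$ (an assumed value, not one taken from the original tables), the order of magnitude works out as follows:
\[
E_{\text{kin}}=\frac{1}{2}mv^2=\frac{1}{2}\cdot 1000\cdot\left(\frac{100}{3.6}\right)^2\cong 3.9\cdot 10^{5}\;[\text{J}],\qquad
E_{\text{fuel}}=\frac{E_{\text{kin}}}{\eta}\cong 1.1\cdot 10^{6}\;[\text{J}]
\]
\[
m_{\text{fuel}}\cong\frac{1.1\cdot 10^{6}}{46\cdot 10^{6}}\cong 0.024\;[\text{kg}],\qquad
V_{\text{fuel}}\cong\frac{0.024}{700}\cong 3.4\cdot 10^{-5}\;[\text{m}^3]\cong 0.03\;[\text{l}]
\]
\end{tcolorbox}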
{ "alphanum_fraction": 0.767565356, "avg_line_length": 68.2087209302, "ext": "tex", "hexsha": "57fcacf05fc6eec4f2443546dbf3002bffdf7592", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "lefevred/Opera_Magistris_Francais_v3", "max_forks_repo_path": "Chapter_Chemistry.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "lefevred/Opera_Magistris_Francais_v3", "max_issues_repo_path": "Chapter_Chemistry.tex", "max_line_length": 1048, "max_stars_count": null, "max_stars_repo_head_hexsha": "71a881b8dfdf0ac566c59442244e6ed5f9a2c413", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "lefevred/Opera_Magistris_Francais_v3", "max_stars_repo_path": "Chapter_Chemistry.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 28798, "size": 117319 }
\begin{chapterquote} ``(..) I realized that actually doing physics is much more enjoyable than just learning it. Maybe 'doing it' is the right way of learning, at least as far as I am concerned. '' \quoteauthor{Gerd Binnig, Nobel Prize Laureate in Physics, 1986} \end{chapterquote} \chapter{Conclusions} \section{Numerical considerations} A multi-species, multi-dimensional, steady-state, and time-accurate numerical method is developed to solve the Favre-averaged Navier-Stokes equations closed by the Wilcox $k\omega$ model. The flux discretization is based on the Yee-Roe scheme and the pseudotime stepping on block-implicit approximate factorization. The Wilcox $k\omega$ model along with the Wilcox dilatational dissipation are deemed adequate in capturing the essentials of the flowfield physics and good agreement is shown with the experimental data of a ramp injector by Waitz {\it et al.} \cite{aiaa:1993:waitz} and a swept ramp injector by Donohue {\it et al.} \cite{aiaa:1994:donohue}. For a typical cantilevered ramp injector flowfield over a flat plate, a grid refinement study is performed and a relative error of approximately 10--15\% on the mixing efficiency is estimated for the baseline mesh (using a grid dimensions factor $r=1$). The grid-induced error for the three-dimensional cantilevered ramp injector flowfields is here estimated by comparing the effect of the grid to similar two-dimensional mixing problems where the grid can be refined sufficiently. \emph{The Richardson extrapolation is here observed to be inadequate in determining the grid-induced error of a mixing layer problem, due to the discontinuous turbulent/non-turbulent interface.} The influence of the turbulent Schmidt number on the mixing efficiency is seen to be minimal for a hydrogen-air mixing problem, as the mixing efficiency increases by only 8\% for a turbulent Schmidt number variation from 1.0 to 0.25. The use of the Yee entropy correction factor in conjunction with the Roe scheme is found to be unnecessary for the inlet flowfields and its use should be avoided as it increases significantly the numerical error in the mixing layer. \section{Marching window acceleration technique} Dubbed the marching window, a novel acceleration technique is presented which is aimed at accelerating the convergence of the Favre-averaged Navier-Stokes equations in the supersonic/hypersonic regime for flowfields with large streamwise separated flow regions. Similarly to the active domain method \cite{aiaa:1997:nakahashi}, the marching window iterates in pseudotime a band-like computational domain of minimal width which adjusts to the size of the streamwise elliptic regions when encountered. However, in contrast to the active domain method, it is shown that \emph{the marching window guarantees the residual on all nodes to be below a user-defined convergence threshold when convergence is reached}, and hence results in the same converged solution (within the tolerance of the convergence criterion) as the one obtained by standard pseudotime marching methods. Further, a streamwise ellipticity sensor based on the Vigneron splitting \cite{aiaaconf:1978:vigneron} is developed which ensures the downstream boundary of the marching window to advance sufficiently such that regions of significant streamwise ellipticity are contained within the marching window subdomain. 
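For the reader's convenience, the cycle described in the preceding paragraph can be summarized by the following schematic pseudocode. This is a conceptual sketch only, written for this summary and not taken from the actual solver: the names (\texttt{one\_pseudotime\_iteration}, \texttt{is\_elliptic}, \texttt{residual}, \texttt{min\_width}) are illustrative stand-ins for the block-implicit pseudotime step, the Vigneron-based ellipticity sensor, the nodal residual norm, and the user-specified minimal window width.
\begin{verbatim}
def marching_window(grid, one_pseudotime_iteration, residual,
                    is_elliptic, tol, min_width):
    # March a band-like subdomain from the inlet to the outlet, iterating
    # each band in pseudotime until the nodes left behind are converged.
    n = grid.n_streamwise
    i_up, i_down = 0, min(min_width, n)    # upstream/downstream window boundaries
    while i_up < n:
        one_pseudotime_iteration(grid, i_up, i_down)  # iterate the active band only
        # grow the window so that significantly streamwise-elliptic regions
        # (as flagged by the Vigneron-based sensor) stay inside it
        while i_down < n and is_elliptic(grid, i_down):
            i_down += 1
        # advance the upstream boundary past converged stations only, which is
        # what guarantees that every node is below the threshold at the end
        while i_up < i_down and residual(grid, i_up) < tol:
            i_up += 1
        i_down = min(max(i_down, i_up + min_width), n)
    return grid
\end{verbatim}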
It is noted that while the Vigneron splitting sensor does not capture all possible streamwise elliptic phenomena, this does not affect the accuracy of the final solution and only affects the performance of the marching window as an acceleration technique. Also, a multizone decomposition is implemented inside the marching window to restrict the computing to the zones where the residual is above the user-defined threshold. This is shown to further decrease the work needed for convergence by close to 2 times for the test cases shown in Chapter \ref{chapter:numerical_method}. The use of the marching window with multizone decomposition on a backward facing step and a shock boundary layer interaction flowfield (where one or several large streamwise separated region is present) reveals a 4 to 6 times decrease in storage and a 4 to 8 times decrease in work compared to the standard cycle. The proposed algorithm is also shown to work well at a low CFL number in regions of quasihyperbolicity/parabolicity and is recommended for stiff problems with high non-linear stability restrictions on the time step size. For the inlet cases presented in Chapter ..., the marching window with multizone decomposition results in convergence attained in typically less than 200 effective iterations using a CFL number of 0.6. A variant of the marching window designed for time-accurate simulations is observed to result in a fivefold reduction in storage and 25\% reduction in work for the time-accurate exploding cavity case investigated in Chapter \ref{chapter:numerical_method}. The reduction in computational work through the use of the marching window is made possible by focusing the high number of iterations needed to converge the streamwise separated regions to the region in question, and not to the entire computational domain. The amount of storage needed is also significantly reduced if no memory is allocated to the nodes outside of the marching window subdomain. \emph{The marching window does not impose any restriction on the discretization stencils part of the residual or on the pseudotime stepping method.} While not implemented here, the numerous acceleration techniques available for pseudotime stepping (such as multigrid, block relaxation \cite{aiaa:2000:denicola}, preconditioning, Newton-Krylov, \etc) can be used in conjunction with the marching window. Furthermore, the marching window is not limited to the Favre-averaged Navier-Stokes equations and its application to other fields of physics where a quasihyperbolicity of the system is present would only require a redefinition of the ellipticity sensor shown in Eq.~(...). The ellipticity threshold constant $\varphiverge$ and the marching window minimal width $\phi_3$ are seen to affect the performance of the algorithm significantly, and it is unclear at this stage by how much these parameters would need to be altered for very different flow properties and physical domain size. The dependency on the problem setup seems not too severe as the same values for the user-specified constants are used for all inlet cases, with a resulting similar convergence rate. \section{Thrust potential} The losses and the gains in the flowfields presented herein are assessed mostly by the air-based mixing efficiency and by the thrust potential. It is recalled that the thrust potential at a certain station corresponds to the thrust of the vehicle obtained if the flow is reversibly expanded from the station in consideration to either the engine exit area or the engine back pressure. 
In Chapter ..., it is shown that the thrust potential can be expressed as a function of the stagnation pressure, stagnation temperature, and backpressure of the engine. While limited to a perfect gas, Eq.~(...) shows that losses or gains in an engine cannot be determined solely by monitoring the stagnation pressure. This is confirmed by the inlet cases presented herein: at the station where fuel is injected, the thrust potential increases significantly due to the high momentum of the injected fuel while the mass flux averaged stagnation pressure decreases. Another conclusion drawn from Eq.~(...) is that the thrust potential can be expressed solely as a function of the stagnation temperature and the ratio between the stagnation pressure and the backpressure. Hence, it follows that \emph{variations in the backpressure impact the thrust potential to the same degree as variations in stagnation pressure}. In a scramjet or a shcramjet engine, the flow is typically underexpanded, and the backpressure of the engine is known to be significantly higher than the surroundings. Expanding the flow to a fixed backpressure equal to the surroundings would hence lead to significant errors in the thrust potential. Therefore, the thrust potential is here determined by reversibly expanding the flow to an iteratively determined backpressure, which is such that the cross-sectional area of the expanded flow corresponds to the engine exit area. It is noted that the same backpressure is shared by all streamtubes at one particular station. In this way, we avoid errors originating from a fixed backpressure and errors related to unrealistic backpressure variations that would occur when expanding the flow to a constant area. For a typical inlet flowfield, the backpressure is seen to increase by approximately 2.5 times while the mass-flux averaged stagnation pressure decreases by 4 times. \emph{The similar observed changes in the backpressure and the stagnation pressure show the importance of accurately determining the backpressure when assessing the losses in a shcramjet inlet}.

\section{Fundamental mixing studies}

%mention the conclusions from the parametric study of mixing enhancement by compression
To predict the mixing efficiency increase through a compression wave, an expression is derived for the special case of hypersonic flow at a high convective Mach number. In such conditions, it is seen that two assumptions can be made: (i) the speed of the fuel and air streams remains constant through the compression and (ii) the density increase of the air corresponds to the density increase of the fuel through the compression. From these two assumptions, it is then shown in the mixing efficiency growth equation [\ie\ Eq.~(...)] that \emph{the mixing efficiency growth is proportional to the product of the flow density, the interface length, and the Papamoschou-Roshko correction term.} Noting that the Papamoschou-Roshko correction term decreases for decreasing temperature, this shows one of the major challenges of mixing in the inlet. \emph{The very low flow density and flow temperature lead to a very low mixing efficiency growth in a shcramjet inlet, as compared to the mixing efficiency growth that could be obtained in the combustor, where the temperature and the density are high.} With the help of the derived mixing efficiency growth, the separate effect of interface stretching due to vorticity is assessed for a mixing layer traversing an oblique shock and a mixing layer traversing a Prandtl-Meyer compression fan.
It is observed that \emph{the higher density induced by the compression fan leads to a greater increase in the mixing efficiency growth, despite the more vigorous interface stretching by the axial vortices induced by the oblique shock.}

% fundamental study of importance of ramp-induced axial vortices
The increase in mixing efficiency generally associated with ramp-like injector configurations compared to planar mixing configurations is also analyzed. A comparison is performed between the mixing efficiency obtained from a planar mixing configuration, a free jet configuration and a cantilevered ramp injector configuration. It is seen that, at a convective Mach number of 1.5 and at a global equivalence ratio of 1, the mixing efficiency of a cantilevered ramp injector is as much as 4 times that of a planar configuration. This fourfold increase is attributed to, in order of importance: (i) the increased fuel/air interface length present in 3D, (ii) the higher pressure and temperature present at injection, and (iii) the stretching of the fuel/air interface by the axial vortices.

\section{Mixing over a flat plate} %chapter 6

A parametric study of the effect of the fuel inflow conditions on the mixing performance of a cantilevered ramp injector is performed. It is found that, while keeping the fuel speed and fuel pressure constant, increasing the global equivalence ratio from 1 to 3 translates into a 30\% increase in the mixing efficiency, but results in a large portion of the mixture at the domain exit being outside the hydrogen/air flammability limits. On the other hand, reducing the global equivalence ratio from 1 to 0.33 reduces the mixing efficiency by 27\% and induces a high mixture temperature, which is beneficial if burning is desired close to the injection point, but undesirable otherwise. If burning is not desired near the point of injection, injecting the fuel in stoichiometric proportions with the incoming air is the recommended approach. Secondly, keeping the global equivalence ratio and the fuel pressure constant, the convective Mach number is varied from -0.5 to 1.5, with a negative value indicating a fuel speed smaller than the air speed. This increase in the convective Mach number is seen to result in a mixing efficiency increase of 31\% while the mass averaged stagnation pressure decreases by only 10\%. In the near field mixing region, the convective Mach number has a negligible impact on the mixing efficiency if the Wilcox dilatational dissipation is used. This is shown to be related to a limitation of the turbulent mixing layer growth due to compressibility effects occurring at a high turbulent Mach number. Even when the convective Mach number is 0, a high turbulent Mach number is present in the near field due to the high local shear stresses induced by the axial vortices. Due to the presence of a high turbulent Mach number, the dilatational dissipation considerably reduces the growth of the mixing layer. The use of the Wilcox dilatational dissipation correction results in a decrease of the near field mixing efficiency of 25\% and 43\% for a convective Mach number of 0 and 1.5, respectively.
Thus, one major finding of this thesis is that \emph{the dilatational dissipation correction affects the mixing efficiency considerably for a cantilevered ramp injector flowfield, even at a vanishing convective Mach number.} A parametric study of the variation of the injector array spacing shows that the mixing efficiency at the domain exit is maximal for an array spacing equal to the height of the injector. Reducing the spacing considerably diminishes the strength of the axial vortices but increases the interface length at the point of injection. The rate of growth of the mixing efficiency in the near field is observed to be directly related to the interface length at injection. The decrease in the axial vortex strength at a small array spacing prevents enough air from being entrained under the fuel jet, resulting in a combustible mixture being present in the boundary layer. Due to the higher shock strength present above the injector array, a high angle of injection translates into significantly more losses: injecting the fuel at 16$^\circ$ results in a twofold increase in the thrust potential losses compared to injection at 4$^\circ$, with an associated increase in mixing efficiency of 9\%. A change in the fuel inflow conditions is observed not to affect the capability of the cantilevered ramp injector of preventing the injectant from entering the hot boundary layer. However, a change in the injector geometry is sometimes found to result in hydrogen entering the boundary layer. This is attributed to the amount of air separating the fuel from the wall being strongly dependent on the injector geometry. The reduction in the air mass flow rate separating the fuel jet from the wall is seen to be partly due to weakened axial vortices and/or to a lower air mass flux flowing under the fuel at the point of injection. It is observed that the air cushion between the wall and the hydrogen is sufficiently thick to prevent fuel from entering the boundary layer when (i) the fuel is injected at an angle of approximately $10^\circ$ or more, (ii) an array spacing of at least the height of the injector is used, and (iii) a swept ramp configuration is avoided. Another interesting finding of this thesis is that \emph{the use of a negative sweeping angle in conjunction with the cantilevered ramp injector configuration is observed to result in better fuel penetration and better mixing, and is highly recommended for mixing in a shcramjet inlet where premature ignition should be avoided.} Lastly, it is observed that an incoming boundary layer with a thickness of 15\% of the injector height does not significantly diminish the air cushion between the fuel and the wall.

\section{Mixing in the inlet}

%conclusions from inlet problems
Based on the results obtained from the above-mentioned parametric studies, a baseline injector configuration is considered as a means to deliver fuel in the inlet of an external compression shcramjet. The baseline injector geometry consists of a sweeping angle set to the minimal value possible, an array spacing equal to the height of the injector, and the injection angle set to 10 degrees. Two inlet geometries are considered: one in which the flow is compressed by two equal-strength shocks (\ie\ the shock-shock configuration), and one in which the second shock is replaced by a Prandtl-Meyer compression fan (\ie\ the shock-fan configuration).
The fuel inflow conditions in the inlet are such that the global equivalence ratio approaches 1, and the convective Mach number is 1.2 with the speed of the hydrogen jet higher than the speed of the air. \emph{Due to the fuel being injected at a very high speed, fuel injection in the inlet is found to considerably increase the thrust potential, with a gain exceeding the loss by 40--120\%.} Another beneficial effect of fuel injection on the inlet performance is the observed decrease of approximately 10\% in the skin friction force. The decrease in skin friction is attributed to fuel being present in the boundary layer after the second inlet compression process: the low density of hydrogen decreases the density of the boundary layer, hence resulting in a reduced wall shear stress. However, the presence of fuel in the inlet is not all beneficial: due to the Mach number of the hydrogen stream being significantly less than the Mach number of the air stream, the performance of the compression fan is reduced, with an associated increase in the thrust potential losses estimated to be 8\% for the shock-fan configurations. \emph{Typically, it is estimated that 50--70\% of the thrust potential losses in the inlet are due to skin friction.} The large importance of the skin friction in the shcramjet inlet is seen to be partly due to the axial vortices generated by the second inlet compression process continuously entraining upwards the upper part of the boundary layer, which results in a substantial thinning of the boundary layer, hence increasing the wall shear stress. The use of a turbulence model that can accurately predict the wall shear stress, such as the Wilcox $k\omega$ model used herein, is hence seen to be crucial in assessing the losses accurately in a shcramjet inlet. Interestingly, no recirculation region is observed near the second inlet shock for any of the inlet cases studied. This is not completely surprising, as turbulent boundary layers are known to offer a strong resistance to separation as the flow Mach number increases \cite{jfm:1972:coleman}. Furthermore, both the ramp-generated axial vortices and the axial vortices generated through the second inlet shock help in reducing the size of the boundary layer, which reduces the chance of shock-induced flow separation in the inlet. The relatively low mixing efficiency of 0.30 obtained for the baseline inlet cases is attributed to the lack of adequate fuel penetration, partly due to the absence of a sufficient amount of air separating the fuel jets upon entering the second inlet compression process. The lack of air in-between the fuel jets prevents the axial vortices created by the interaction of the mixing layer with the compression fan (or shock) from enhancing fuel penetration, as they normally do by entraining the air in-between the fuel jets to underneath the fuel. \emph{A major difficulty encountered while mixing in an external compression inlet is that the height of the inlet, and hence the amount of air entering the inlet, is strongly dependent on the fuel injection process.} For this reason, contrary to mixing over a flat plate, an increase of the injection angle does not translate into a better air-based mixing efficiency since the increase in the amount of air entering the inlet is more pronounced than the increase in the amount of air mixed with the fuel.
\emph{One novel approach that is shown herein to be successful at increasing the fuel penetration is to alternate the injection angle from one injector to another.} In this way, the fuel jets emanating from the high-angle injectors penetrate deeply into the incoming air, while the fuel jets emanating from the low-angle injectors remain close to the body. Upon entering the compression process, due to the fuel jets belonging to two distinct levels, there is a much increased amount of air separating the fuel jets on each level and the compression process induces better penetration. Furthermore, the strength of the shock forming on the injector array does not increase appreciably due to one in every two injectors being at a low angle. The reduced array shock strength helps in keeping the inlet height to a low value. The use of alternating injection angles of 9 and 16 degrees is seen to result in a 32\% increase in the mixing efficiency and a 14\% increase in the thrust potential losses when compared to injecting the fuel at a single injection angle of 10 degrees. A second strategy that is shown to increase the fuel penetration and the mixing efficiency is the use of longer and thinner cantilevered ramp injectors. By doubling the length of the injector, while doubling the height and halving the depth of the fuel jet, a mixing efficiency increase of 15\% is observed at the expense of an increase in thrust potential losses of 12\%. The combination of the alternating angle configuration with the longer and thinner injectors results in a mixing efficiency of 0.47 and 0.44 for the shock-shock inlet configuration and the shock-fan inlet configuration, respectively, which is an increase of more than 50\% over the baseline injector configuration.

% risk of premature ignition
Premature ignition in the inlet is a strong possibility for the baseline inlet configurations, as a fuel/air mixture is seen to penetrate the hot boundary layer after the second inlet compression wave. One strategy that is here shown to largely prevent the fuel from entering the boundary layer is the use of an increased injector array spacing. A higher array spacing increases the amount of air that is entrained by the axial vortices below the fuel jets, hence retarding the complete erosion of the air buffer separating the fuel from the wall. Unfortunately, while a higher array spacing prevents the fuel from entering the boundary layer, it results in stronger axial vortices which entrain upwards the hot flow from the boundary layer to the mixing layer. This is particularly worrisome as the mixing layer is then exposed to a temperature significantly above the ignition point for a considerable amount of time, resulting in a high risk of premature ignition. Nonetheless, the risk of premature ignition in the mixing layer can be reduced through the use of a shock-fan configuration, which results in a reduction in the flow temperature of as much as 80~K compared to the shock-shock configuration. Besides helping to prevent the risk of premature ignition, a reduction of the temperature of the flow entering the combustor augments the performance of the shcramjet due to a more efficient heat release in the shock-induced combustion \cite{jpp:2001:sislian}.
\emph{The use of a Prandtl-Meyer compression surface in a shcramjet inlet is hence strongly recommended, as it decreases the thrust potential losses and reduces the risk of premature ignition, while resulting in only a small 6\% diminution of the mixing efficiency for the optimal injector configuration considered.}
{ "alphanum_fraction": 0.8159297194, "avg_line_length": 60.4987405542, "ext": "tex", "hexsha": "ff2c67773d2f95e414b79d5b67c9832eb8dcd009", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-01-19T01:06:13.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-10T07:17:34.000Z", "max_forks_repo_head_hexsha": "a38615797ff779d8779ccc4cd17f441abd50c463", "max_forks_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_forks_repo_name": "bernardparent/TEXSTYLE", "max_forks_repo_path": "waflthesis/chapter6.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a38615797ff779d8779ccc4cd17f441abd50c463", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_issues_repo_name": "bernardparent/TEXSTYLE", "max_issues_repo_path": "waflthesis/chapter6.tex", "max_line_length": 96, "max_stars_count": 4, "max_stars_repo_head_hexsha": "a38615797ff779d8779ccc4cd17f441abd50c463", "max_stars_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_stars_repo_name": "bernardparent/TEXSTYLE", "max_stars_repo_path": "waflthesis/chapter6.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-10T07:17:32.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-24T04:30:25.000Z", "num_tokens": 5178, "size": 24018 }
\documentclass[11 pt]{scrartcl}
\usepackage[header, margin, koma]{tyler}
%\usetikzlibrary{automata,arrows,positioning,calc}
\usepackage{csquotes}

\pagestyle{fancy}
\fancyhf{}
\fancyhead[l]{CS 294-165 Notes}
\fancyhead[r]{Tyler Zhu}
\cfoot{\thepage}

\begin{document}
\title{\Large CS 294-165: Sketching Algorithms}
\author{\large Tyler Zhu}
\date{\large\today}
\maketitle

\begin{center}
\begin{displayquote}
\emph{"A good stock of examples, as large as possible, is indispensable for a thorough understanding of any concept, and when I want to learn something new, I make it my first job to build one."} \\
\begin{flushright}
\emph{– Paul Halmos}.
\end{flushright}
\end{displayquote}
\end{center}

These are course notes for the Fall 2020 rendition of CS 294-165, Sketching Algorithms, taught by Professor Jelani Nelson.

\tableofcontents
\newpage

\section{Wednesday, August 26}

\end{document}
{ "alphanum_fraction": 0.751920966, "avg_line_length": 24.6216216216, "ext": "tex", "hexsha": "50a4975e22da5aacf6f6ac447adeb010db88671e", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-09-07T17:21:17.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-13T08:41:17.000Z", "max_forks_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cbugwadia32/course-notes", "max_forks_repo_path": "templates/notes-template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cbugwadia32/course-notes", "max_issues_repo_path": "templates/notes-template.tex", "max_line_length": 261, "max_stars_count": 8, "max_stars_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cbugwadia32/course-notes", "max_stars_repo_path": "templates/notes-template.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-07T01:19:16.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-20T19:22:41.000Z", "num_tokens": 272, "size": 911 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{Kreisel} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}167}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{from} \PY{n+nn}{numpy} \PY{k}{import} \PY{n}{pi} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{optimize} \PY{k}{import} \PY{n}{curve\PYZus{}fit} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}168}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{rc}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{lines}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linewidth} \PY{o}{=} \PY{l+m+mf}{0.4}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{markersize} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{markeredgewidth} \PY{o}{=} \PY{l+m+mf}{0.4}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{rc}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{errorbar}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{capsize} \PY{o}{=} \PY{l+m+mi}{2}\PY{p}{,} \PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}169}]:} \PY{n}{g} \PY{o}{=} \PY{l+m+mf}{9.81} \end{Verbatim} \hypertarget{duxe4mpfung}{% \section{2) Dämpfung}\label{duxe4mpfung}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}170}]:} \PY{n}{t} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mf}{124.6}\PY{p}{,} \PY{l+m+mf}{243.8}\PY{p}{,} \PY{l+m+mf}{368.6}\PY{p}{,} \PY{l+m+mf}{485.4}\PY{p}{,} \PY{l+m+mf}{600.2}\PY{p}{,} \PY{l+m+mf}{719.7}\PY{p}{]}\PY{p}{)} \PY{n}{Dt} \PY{o}{=} \PY{l+m+mf}{0.1} \PY{n}{f} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{676.4}\PY{p}{,} \PY{l+m+mf}{615.1}\PY{p}{,} \PY{l+m+mf}{567.4}\PY{p}{,} \PY{l+m+mf}{522.5}\PY{p}{,} \PY{l+m+mf}{483.3}\PY{p}{,} \PY{l+m+mf}{448.6}\PY{p}{,} \PY{l+m+mf}{416.2}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} 
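# note added for clarity (assumption): the values of f above appear to be rotor
# frequencies read off in revolutions per minute, hence the division by 60 to
# obtain Hz; om = 2*pi*f below is the corresponding angular frequency.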
\PY{n}{Df} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{om} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{f} \PY{n}{Dom} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{Df} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}171}]:} \PY{k}{def} \PY{n+nf}{om\PYZus{}fit\PYZus{}} \PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{A}\PY{p}{,} \PY{n}{delta}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{om\PYZus{}fit}\PY{p}{(}\PY{p}{[}\PY{n}{A}\PY{p}{,} \PY{n}{delta}\PY{p}{]}\PY{p}{,} \PY{n}{t}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}172}]:} \PY{n}{popt}\PY{p}{,} \PY{n}{pcov} \PY{o}{=} \PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}fit\PYZus{}}\PY{p}{,} \PY{n}{t}\PY{p}{,} \PY{n}{om}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{repeat}\PY{p}{(}\PY{n}{Dom}\PY{p}{,} \PY{n}{om}\PY{o}{.}\PY{n}{size}\PY{p}{)}\PY{p}{,} \PY{n}{p0} \PY{o}{=} \PY{n}{initial\PYZus{}param}\PY{p}{)} \PY{n}{A\PYZus{}} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{delta\PYZus{}} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{DA\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n}{Ddelta\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Amplitude: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{A\PYZus{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{DA\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Dämpfung: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{delta\PYZus{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{Ddelta\PYZus{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Amplitude: 70.38819935140532 +- 0.23330687189100963 Dämpfung: 0.0006759773672469798 +- 8.968112509897252e-06 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}173}]:} \PY{n}{chi2\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{om\PYZus{}fit\PYZus{}}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{o}{*}\PY{n}{popt}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{om}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{n}{Dom}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{dof} \PY{o}{=} \PY{n}{om}\PY{o}{.}\PY{n}{size} \PY{o}{\PYZhy{}} \PY{l+m+mi}{2} \PY{n}{chi2\PYZus{}red} \PY{o}{=} \PY{n}{chi2\PYZus{}}\PY{o}{/}\PY{n}{dof} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2 = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2\PYZus{}red = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}red}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] chi2 = 11.119037300304747 chi2\_red = 2.223807460060949 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}174}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{om}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dt}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{Dom}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{om\PYZus{}fit}\PY{p}{(}\PY{p}{[}\PY{n}{A\PYZus{}}\PY{p}{,} 
\PY{n}{delta\PYZus{}}\PY{p}{]}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yscale}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{log}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{t [s]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{omega\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/daempfung.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}175}]:} \PY{c+c1}{\PYZsh{} Halbwertzeit} \PY{n}{T\PYZus{}halb} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{n}{delta} \PY{n}{DT\PYZus{}halb} \PY{o}{=} \PY{n}{T\PYZus{}halb}\PY{o}{*}\PY{p}{(}\PY{n}{Ddelta}\PY{o}{/}\PY{n}{delta}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{T\PYZus{}halb}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{DT\PYZus{}halb}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 1025.4090924088457 +- 13.602216445491958 \end{Verbatim} \hypertarget{pruxe4zession}{% \section{3) Präzession}\label{pruxe4zession}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}243}]:} \PY{n}{f\PYZus{}f} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)} \PY{n}{T\PYZus{}p} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)} \PY{n}{f\PYZus{}f}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{685.3}\PY{p}{,} \PY{l+m+mf}{562.6}\PY{p}{,} \PY{l+m+mf}{459.9}\PY{p}{,} \PY{l+m+mf}{312.6}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{122.0}\PY{p}{,} \PY{l+m+mf}{104.6}\PY{p}{,} \PY{l+m+mf}{85.2}\PY{p}{,} \PY{l+m+mf}{58.8}\PY{p}{]}\PY{p}{)} \PY{n}{f\PYZus{}f}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{658.6}\PY{p}{,} \PY{l+m+mf}{556.8}\PY{p}{,} \PY{l+m+mf}{380.5}\PY{p}{,} \PY{l+m+mf}{271.2}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{92.3}\PY{p}{,} \PY{l+m+mf}{79.3}\PY{p}{,} \PY{l+m+mf}{54.8}\PY{p}{,} \PY{l+m+mf}{38.6}\PY{p}{]}\PY{p}{)} \PY{n}{f\PYZus{}f}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{635.3}\PY{p}{,} \PY{l+m+mf}{539.6}\PY{p}{,} \PY{l+m+mf}{356.1}\PY{p}{,} \PY{l+m+mf}{255.8}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{61.4}\PY{p}{,} \PY{l+m+mf}{51.7}\PY{p}{,} \PY{l+m+mf}{41.2}\PY{p}{,} \PY{l+m+mf}{24.7}\PY{p}{]}\PY{p}{)} \PY{n}{f\PYZus{}f}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{688.9}\PY{p}{,} \PY{l+m+mf}{560.1}\PY{p}{,} 
\PY{l+m+mf}{360.9}\PY{p}{,} \PY{l+m+mf}{275.0}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{49.7}\PY{p}{,} \PY{l+m+mf}{40.6}\PY{p}{,} \PY{l+m+mf}{26.1}\PY{p}{,} \PY{l+m+mf}{20.0}\PY{p}{]}\PY{p}{)} \PY{n}{DT\PYZus{}p} \PY{o}{=} \PY{l+m+mf}{0.1} \PY{n}{om\PYZus{}f\PYZus{}start} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{f\PYZus{}f}\PY{p}{;} \PY{n}{om\PYZus{}f\PYZus{}start} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}243}]:} array([[71.76444818, 58.91533423, 48.16061538, 32.73539545], [68.96843072, 58.30795965, 39.84586682, 28.39999759], [66.52846043, 56.50677986, 37.2907048 , 26.78731336], [72.1414393 , 58.65353484, 37.79335962, 28.79793266]]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}244}]:} \PY{n}{om\PYZus{}f\PYZus{}end} \PY{o}{=} \PY{n}{om\PYZus{}fit\PYZus{}}\PY{p}{(}\PY{n}{T\PYZus{}p}\PY{p}{,} \PY{n}{om\PYZus{}f\PYZus{}start}\PY{p}{,} \PY{n}{delta\PYZus{}}\PY{p}{)} \PY{n}{om\PYZus{}f} \PY{o}{=} \PY{p}{(}\PY{n}{om\PYZus{}f\PYZus{}start} \PY{o}{+} \PY{n}{om\PYZus{}f\PYZus{}end}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{;} \PY{n}{om\PYZus{}f} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}244}]:} array([[68.92400284, 56.90439579, 46.8129329 , 32.09758062], [66.88261354, 56.78630881, 39.12135259, 28.03427303], [65.17608747, 55.53643414, 36.7785911 , 26.56554086], [70.94973507, 57.85961533, 37.46288914, 28.60457515]]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}245}]:} \PY{n}{Dom\PYZus{}f\PYZus{}end} \PY{o}{=} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{+} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{delta}\PY{o}{*}\PY{n}{T\PYZus{}p}\PY{p}{)}\PY{p}{)}\PY{o}{*}\PY{n}{Dom}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{om\PYZus{}f}\PY{o}{*}\PY{n}{T\PYZus{}p}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{delta}\PY{o}{*}\PY{n}{T\PYZus{}p}\PY{p}{)}\PY{o}{*}\PY{n}{Ddelta}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{om\PYZus{}f}\PY{o}{*}\PY{n}{delta}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{delta}\PY{o}{*}\PY{n}{T\PYZus{}p}\PY{p}{)}\PY{o}{*}\PY{n}{DT\PYZus{}p}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{Dom\PYZus{}f} \PY{o}{=} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{Dom}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{n}{Dom\PYZus{}f\PYZus{}end}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{Dom\PYZus{}f} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}245}]:} array([[0.14623268, 0.14612319, 0.14628435, 0.14671774], [0.14645599, 0.14649209, 0.14683031, 0.14716533], [0.14685481, 0.14696434, 0.14712106, 0.14749209], [0.14708133, 0.1471895 , 0.14746708, 0.14760595]]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}246}]:} \PY{n}{m} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{l+m+mf}{9.85e\PYZhy{}3} \PY{n}{l} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{15e\PYZhy{}2}\PY{p}{,} 
\PY{l+m+mf}{20e\PYZhy{}2}\PY{p}{,} \PY{l+m+mf}{15e\PYZhy{}2}\PY{p}{,} \PY{l+m+mf}{20e\PYZhy{}2}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}247}]:} \PY{k}{def} \PY{n+nf}{T\PYZus{}p\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{,} \PY{n}{m}\PY{p}{,} \PY{n}{l}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{I\PYZus{}z}\PY{o}{*}\PY{n}{om\PYZus{}f}\PY{o}{/}\PY{p}{(}\PY{n}{m}\PY{o}{*}\PY{n}{g}\PY{o}{*}\PY{n}{l}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}248}]:} \PY{n}{I\PYZus{}z\PYZus{}fit} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)} \PY{n}{DI\PYZus{}z\PYZus{}fit} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} \PY{n}{param} \PY{o}{=} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{o}{*}\PY{n}{m}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{*}\PY{n}{g}\PY{o}{*}\PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n}{popt}\PY{p}{,} \PY{n}{pcov} \PY{o}{=} \PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{k}{lambda} \PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{:} \PY{n}{T\PYZus{}p\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{,} \PY{n}{m}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{p}{,} \PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{repeat}\PY{p}{(}\PY{n}{DT\PYZus{}p}\PY{p}{,} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{.}\PY{n}{size}\PY{p}{)}\PY{p}{)} \PY{n}{I\PYZus{}z\PYZus{}fit}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{DI\PYZus{}z\PYZus{}fit}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{I\PYZus{}z\PYZus{}fit}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{DI\PYZus{}z\PYZus{}fit}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [0.00416427 0.00426994 0.00444362 0.00430776] [4.09545718e-05 1.64676004e-05 1.72207931e-04 5.56541512e-06] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}249}]:} \PY{n}{I\PYZus{}z} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{average}\PY{p}{(}\PY{n}{I\PYZus{}z\PYZus{}fit}\PY{p}{)} \PY{n}{DI\PYZus{}z} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{I\PYZus{}z\PYZus{}fit}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2} \PY{n+nb}{print}\PY{p}{(}\PY{n}{I\PYZus{}z}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{DI\PYZus{}z}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0.004296395130347154 +- 4.9976848864411255e-05 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}250}]:} \PY{n}{Dl} \PY{o}{=} \PY{l+m+mf}{2e\PYZhy{}3} \PY{n}{DT\PYZus{}p\PYZus{}fit} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{empty}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} 
\PY{n}{DT\PYZus{}p\PYZus{}fit}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{T\PYZus{}p\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{,} \PY{n}{m}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{p}{(}\PY{n}{DI\PYZus{}z}\PY{o}{/}\PY{n}{I\PYZus{}z}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{o}{+} \PY{p}{(}\PY{n}{Dl}\PY{o}{/}\PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{DT\PYZus{}p\PYZus{}fit}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [[2.27138634 1.87528092 1.54271737 1.05777382] [1.43311852 1.21678126 0.83826771 0.60070075] [1.07393701 0.91509991 0.60601812 0.43773289] [0.76013312 0.6198897 0.4013656 0.30646041]] \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}251}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{repeat}\PY{p}{(}\PY{n}{Dom}\PY{p}{,} \PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{.}\PY{n}{size}\PY{p}{)}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{repeat}\PY{p}{(}\PY{n}{DT\PYZus{}p}\PY{p}{,} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{.}\PY{n}{size}\PY{p}{)}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{T\PYZus{}p\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{,} \PY{n}{m}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlim}\PY{p}{(}\PY{n}{left} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylim}\PY{p}{(}\PY{n}{bottom} \PY{o}{=} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{omega\PYZus{}F\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}T\PYZus{}p\PYZdl{} [s]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/praezession.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_19_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}252}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{:} \PY{n}{chi2\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{T\PYZus{}p\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{I\PYZus{}z}\PY{p}{,} \PY{n}{m}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{l}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{T\PYZus{}p}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{n}{DT\PYZus{}p}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{dof} \PY{o}{=} \PY{n}{T\PYZus{}p}\PY{o}{.}\PY{n}{size} 
\PY{o}{\PYZhy{}} \PY{l+m+mi}{1} \PY{n}{chi2\PYZus{}red} \PY{o}{=} \PY{n}{chi2\PYZus{}}\PY{o}{/}\PY{n}{dof} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Messreihe }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{i}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2 = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2\PYZus{}red = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}red}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Messreihe 0 : chi2 = 4737.812976375742 chi2\_red = 315.8541984250495 Messreihe 1 : chi2 = 160.22963559241367 chi2\_red = 10.68197570616091 Messreihe 2 : chi2 = 4881.136194769347 chi2\_red = 325.4090796512898 Messreihe 3 : chi2 = 6.222330005020856 chi2\_red = 0.4148220003347237 \end{Verbatim} \hypertarget{nutation-1}{% \section{4) Nutation 1}\label{nutation-1}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}273}]:} \PY{n}{t} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{20.1}\PY{p}{,} \PY{l+m+mf}{17.2}\PY{p}{,} \PY{l+m+mf}{24.8}\PY{p}{,} \PY{l+m+mf}{18.1}\PY{p}{,} \PY{l+m+mf}{20.4}\PY{p}{,} \PY{l+m+mf}{27.1}\PY{p}{,} \PY{l+m+mf}{30.5}\PY{p}{,} \PY{l+m+mf}{16.8}\PY{p}{,} \PY{l+m+mf}{20.4}\PY{p}{,} \PY{l+m+mf}{33.1}\PY{p}{]}\PY{p}{)} \PY{n}{f} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{513.1}\PY{p}{,} \PY{l+m+mf}{577.7}\PY{p}{,} \PY{l+m+mf}{368.9}\PY{p}{,} \PY{l+m+mf}{548.7}\PY{p}{,} \PY{l+m+mf}{497.3}\PY{p}{,} \PY{l+m+mf}{373.8}\PY{p}{,} \PY{l+m+mf}{330.9}\PY{p}{,} \PY{l+m+mf}{593.2}\PY{p}{,} \PY{l+m+mf}{487.6}\PY{p}{,} \PY{l+m+mf}{304.3}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{60} \PY{n}{om\PYZus{}f} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{n}{f} \PY{n}{Om} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{pi}\PY{o}{*}\PY{l+m+mi}{10}\PY{o}{/}\PY{n}{t} \PY{n}{DOm} \PY{o}{=} \PY{n}{Om}\PY{o}{*}\PY{p}{(}\PY{n}{Dt}\PY{o}{/}\PY{n}{t}\PY{p}{)} \PY{n}{Om} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}273}]:} array([-3.12596284, -3.65301471, -2.53354246, -3.4713731 , -3.0799928 , -2.31851856, -2.06006076, -3.73999125, -3.0799928 , -1.8982433 ]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}274}]:} \PY{k}{def} \PY{n+nf}{Om\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{p}{(}\PY{n}{I\PYZus{}x} \PY{o}{\PYZhy{}} \PY{n}{I\PYZus{}z}\PY{p}{)}\PY{o}{/}\PY{n}{I\PYZus{}x}\PY{o}{*}\PY{n}{om\PYZus{}f} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}275}]:} \PY{n}{param} \PY{o}{=} \PY{p}{[}\PY{l+m+mf}{0.004}\PY{p}{]} \PY{n}{popt}\PY{p}{,} \PY{n}{pcov} \PY{o}{=} \PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{n}{Om\PYZus{}fit}\PY{p}{,} \PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{Om}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{DOm}\PY{p}{)} \PY{n}{I\PYZus{}x} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{DI\PYZus{}x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{I\PYZus{}x}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{DI\PYZus{}x}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0.004052799962835811 +- 2.3587354903747225e-06 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 
{\color{incolor}In [{\color{incolor}276}]:} \PY{n}{chi2\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{Om\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{Om}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{n}{DOm}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{dof} \PY{o}{=} \PY{n}{Om}\PY{o}{.}\PY{n}{size} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1} \PY{n}{chi2\PYZus{}red} \PY{o}{=} \PY{n}{chi2\PYZus{}}\PY{o}{/}\PY{n}{dof} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2 = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2\PYZus{}red = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}red}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] chi2 = 522.7319361884539 chi2\_red = 58.08132624316155 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}277}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{Om}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dom}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{DOm}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{x}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{Om\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{omega\PYZus{}F\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{Omega\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/nutation\PYZus{}Omega.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_26_0.png} \end{center} { \hspace*{\fill} \\} \hypertarget{nutation-2}{% \section{5) Nutation 2}\label{nutation-2}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}278}]:} \PY{n}{om\PYZus{}f} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{29.7}\PY{p}{,} \PY{l+m+mf}{81.1}\PY{p}{,} \PY{l+m+mf}{52.7}\PY{p}{,} \PY{l+m+mf}{51.0}\PY{p}{,} \PY{l+m+mf}{34.6}\PY{p}{,} \PY{l+m+mf}{29.7}\PY{p}{,} \PY{l+m+mf}{50.6}\PY{p}{,} \PY{l+m+mf}{43.8}\PY{p}{,} \PY{l+m+mf}{33.9}\PY{p}{,} \PY{l+m+mf}{56.2}\PY{p}{]}\PY{p}{)} \PY{n}{om\PYZus{}n} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{42.4}\PY{p}{,} \PY{l+m+mf}{39.8}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mf}{50.8}\PY{p}{,} \PY{l+m+mf}{43.5}\PY{p}{,} \PY{l+m+mf}{32.5}\PY{p}{,} \PY{l+m+mf}{38.2}\PY{p}{,} \PY{l+m+mf}{50.3}\PY{p}{,} \PY{l+m+mf}{29.3}\PY{p}{,} \PY{l+m+mf}{33.0}\PY{p}{,} \PY{l+m+mf}{53.4}\PY{p}{]}\PY{p}{)} \PY{n}{Dom\PYZus{}n} \PY{o}{=} \PY{l+m+mf}{0.5} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}279}]:} \PY{k}{def} \PY{n+nf}{om\PYZus{}n\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{I\PYZus{}z}\PY{o}{/}\PY{n}{I\PYZus{}x}\PY{o}{*}\PY{n}{om\PYZus{}f} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}280}]:} \PY{n}{param} \PY{o}{=} 
\PY{p}{[}\PY{n}{I\PYZus{}x}\PY{p}{]} \PY{n}{popt}\PY{p}{,} \PY{n}{pcov} \PY{o}{=} \PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}n\PYZus{}fit}\PY{p}{,} \PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{om\PYZus{}n}\PY{p}{,} \PY{n}{sigma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{repeat}\PY{p}{(}\PY{n}{Dom\PYZus{}n}\PY{p}{,} \PY{n}{om\PYZus{}n}\PY{o}{.}\PY{n}{size}\PY{p}{)}\PY{p}{)} \PY{n}{I\PYZus{}x\PYZus{}} \PY{o}{=} \PY{n}{popt}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{DI\PYZus{}x\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{pcov}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{I\PYZus{}x\PYZus{}}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ +\PYZhy{} }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{DI\PYZus{}x\PYZus{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0.004463117442875881 +- 0.0002212855903480917 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}281}]:} \PY{n}{chi2\PYZus{}} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{(}\PY{n}{om\PYZus{}n\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x\PYZus{}}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{om\PYZus{}n}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{/}\PY{n}{Dom\PYZus{}n}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{dof} \PY{o}{=} \PY{n}{om\PYZus{}n}\PY{o}{.}\PY{n}{size} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1} \PY{n}{chi2\PYZus{}red} \PY{o}{=} \PY{n}{chi2\PYZus{}}\PY{o}{/}\PY{n}{dof} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2 = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{chi2\PYZus{}red = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{chi2\PYZus{}red}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] chi2 = 1943.8940285504664 chi2\_red = 215.98822539449625 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}282}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{om\PYZus{}n}\PY{p}{,} \PY{n}{xerr} \PY{o}{=} \PY{n}{Dom}\PY{p}{,} \PY{n}{yerr} \PY{o}{=} \PY{n}{Dom\PYZus{}n}\PY{p}{,} \PY{n}{linestyle} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{none}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{om\PYZus{}n\PYZus{}fit}\PY{p}{(}\PY{n}{om\PYZus{}f}\PY{p}{,} \PY{n}{I\PYZus{}x\PYZus{}}\PY{p}{)}\PY{p}{,} \PY{n}{marker} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{omega\PYZus{}F\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZdl{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{omega\PYZus{}N\PYZdl{} [Hz]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{figures/nutation\PYZus{}om\PYZus{}n.pdf}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_32_0.png} \end{center} { \hspace*{\fill} \\} % Add a bibliography block to the postdoc \end{document}
{ "alphanum_fraction": 0.5475799749, "avg_line_length": 69.615727003, "ext": "tex", "hexsha": "3a9d22eac9081fa801a70873c365d70f4dca7d8f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "JanJakob1/PAP2", "max_forks_repo_path": "versuch_213/notebook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "JanJakob1/PAP2", "max_issues_repo_path": "versuch_213/notebook.tex", "max_line_length": 884, "max_stars_count": null, "max_stars_repo_head_hexsha": "0aa48867e24a89dd297a70381dcb1180b973e8cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "JanJakob1/PAP2", "max_stars_repo_path": "versuch_213/notebook.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 22332, "size": 46921 }
\section{Strings}

\subsectionRed{Knuth-Morris-Pratt}
Count and find all matches of string $f$ in string $s$ in $O(n)$ time.
\code{strings/kmp.cpp}

\subsection{Trie}
\code{strings/trie.cpp}

\subsubsection{Persistent Trie}
\code{strings/trie_persistent.cpp}

\subsectionRed{Suffix Array}
Construct the sorted array of all suffixes of $s$ in $O(n \log n)$ time using counting sort.
\code{strings/suffix-array.cpp}

\subsectionRed{Longest Common Prefix}
Find the length of the longest common prefix of each pair of adjacent suffixes in the suffix array in $O(n)$.
\code{strings/lcp.cpp}

\subsectionRed{Aho-Corasick Trie}
Find all matches of multiple patterns in $O(n)$ time. This is KMP generalized to multiple strings.
\code{strings/aho-corasick-trie.java}

\subsection{Palindromes}

\subsubsectionRed{Palindromic Tree}
Find lengths and frequencies of all palindromic substrings of a string in $O(n)$ time.
Theorem: a string of length $n$ has at most $n$ distinct non-empty palindromic substrings.
\code{strings/palindromic-tree.cpp}

\subsubsection{Eertree}
\code{strings/eertree.cpp}

\subsectionRed{Z Algorithm}
For each suffix of $s$, find the length of its longest common prefix with $s$ itself, in $O(n)$ time.
\code{strings/z.cpp}

\subsectionRed{Booth's Minimum String Rotation}
Booth's algorithm: find the index of the lexicographically least string rotation in $O(n)$ time.
\code{strings/booth.cpp}

\subsection{Hashing}

\subsubsection{Rolling Hash}
\code{strings/rolling_hash.cpp}
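For quick reference, the sketch below shows the idea behind prefix-based polynomial hashing.
It is an illustrative example added here, \emph{not} the contents of \code{strings/rolling_hash.cpp}, and the base and modulus are arbitrary choices.
\begin{verbatim}
#include <bits/stdc++.h>
using namespace std;

// Polynomial rolling hash: h[i] stores the hash of s[0..i), so the hash of
// any substring s[l..r) can be recovered in O(1) after O(n) preprocessing.
const long long MOD = 1000000007LL; // arbitrary prime modulus
const long long BASE = 131;         // arbitrary base

struct RollingHash {
    vector<long long> h, p;
    RollingHash(const string &s) : h(s.size() + 1, 0), p(s.size() + 1, 1) {
        for (size_t i = 0; i < s.size(); i++) {
            h[i + 1] = (h[i] * BASE + s[i]) % MOD;
            p[i + 1] = (p[i] * BASE) % MOD;
        }
    }
    // hash of s[l..r); equal substrings always produce equal hashes
    long long get(int l, int r) const {
        return ((h[r] - h[l] * p[r - l]) % MOD + MOD) % MOD;
    }
};

int main() {
    RollingHash rh("abracadabra");
    // "abra" occurs at positions 0 and 7
    cout << (rh.get(0, 4) == rh.get(7, 11)) << "\n"; // prints 1
}
\end{verbatim}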
{ "alphanum_fraction": 0.703014753, "avg_line_length": 44.5428571429, "ext": "tex", "hexsha": "95fac0b86ad72f17d5f5d00e1c46d04a6a35ec66", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-03-20T07:08:46.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-11T20:53:41.000Z", "max_forks_repo_head_hexsha": "4d4b351c8a2540c522d00138e1bcf0edc528b540", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bullybutcher/progvar-library", "max_forks_repo_path": "notebook/tex/strings.tex", "max_issues_count": 19, "max_issues_repo_head_hexsha": "4d4b351c8a2540c522d00138e1bcf0edc528b540", "max_issues_repo_issues_event_max_datetime": "2022-03-30T07:14:59.000Z", "max_issues_repo_issues_event_min_datetime": "2021-11-27T14:40:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bullybutcher/progvar-library", "max_issues_repo_path": "notebook/tex/strings.tex", "max_line_length": 99, "max_stars_count": 3, "max_stars_repo_head_hexsha": "4d4b351c8a2540c522d00138e1bcf0edc528b540", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bullybutcher/progvar-library", "max_stars_repo_path": "notebook/tex/strings.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-29T22:03:44.000Z", "max_stars_repo_stars_event_min_datetime": "2021-10-16T13:22:58.000Z", "num_tokens": 423, "size": 1559 }
\section{A Refined IO Monad}\label{sec:state} Next, we illustrate the expressiveness of Bounded Refinements by showing how they enable the specification and verification of stateful computations. We show how to % (1)~implement a refined \emph{state transformer} (\RIO) monad, where the transformer is indexed by refinements corresponding to \emph{pre}- and \emph{post}-conditions on the state~(\S~\ref{subsec:state:definition}), % (2)~extend \RIO with a set of combinators for \emph{imperative} programming, \ie whose types precisely encode Floyd-Hoare style program logics~(\S~\ref{subsec:state:examples}), and % (3)~use the \RIO monad to write \emph{safe scripts} where the type system precisely tracks capabilities and statically ensures that functions only access specific resources~(\S~\ref{subsec:state:files}). \subsection{The RIO Monad} \label{subsec:state:definition} \mypara{The RIO data type} describes stateful computations. Intuitively, a value of type @RIO a@ denotes a computation that, when evaluated in an input @World@ produces a value of type @a@ (or diverges) and a potentially transformed output @World@. We implement @RIO a@ as an abstractly refined type (as described in ~\citep{vazou13}) %%(cf. \S~\ref{sec:overview:data}) %%\nv{TODO: compare with Vac from ART} % % data World % \begin{code} type Pre = World -> Bool type Post a = World -> a -> World -> Bool data RIO a <p :: Pre, q :: Post a> = RIO { runState :: w:World <p> -> (x:a, World <q w x>) } \end{code} % That is, @RIO a@ is a function @World-> (a, World)@, where @World@ is a primitive type that represents the state of the machine \ie the console, file system, \etc % This indexing notion is directly inspired by the method of~\citep{Filliatre98} (also used in \cite{ynot}). \mypara{Our Post-conditions are Two-State Predicates} that relate the input- and output- world (as in~\cite{ynot}). % Classical Floyd-Hoare logic, in contrast, uses assertions which are single-state predicates. % We use two-states to smoothly account for specifications for stateful procedures. This increased expressiveness makes the types slightly more complex than a direct one-state encoding which is, of course also possible with bounded refinements. \mypara{An {RIO} computation is parameterized} by two abstract refinements: % \begin{inparaenum}[(1)] \item @p :: Pre@, which is a predicate over the \emph{input} world, \ie the input world @w@ satisfies the refinement @p w@; and % \item @q :: Post a@, which is a predicate relating the \emph{output} world with the input world and the value returned by the computation, \ie the output world @w'@ satisfies the refinement @q w x w'@ where @x@ is the value returned by the computation. \end{inparaenum} % Next, to use @RIO@ as a monad, we define @bind@ and @return@ functions for it, that satisfy the monad laws. %%like the ones defined %%for Haskell's state monad. \mypara{The return operator} yields a pair of the supplied value @z@ and the input world unchanged: % \begin{code} return :: z:a -> RIO <p, ret z> a return z = RIO $ \w -> (z, w) ret z = \w x w' -> w' == w && x == z \end{code} % $ The type of \return states that for any precondition @p@ and any supplied value @z@ of type @a@, the expression @return z@ is an \RIO computation with precondition @p@ and a post-condition @ret z@. The postcondition states that: % (1)~the output @World@ is the same as the input, and % (2)~the result equals to the supplied value @z@. 
% Note that as a consequence of the equality of the two worlds and congruence, the output world @w'@ trivially satisfies @p w'@. % %% CHECK (3)~the result world satisfies the precondition @p@. \mypara{The bind Operator} is defined in the usual way. However, to type it precisely, we require bounded refinements. % \begin{code} (>>=) :: (Ret q1 r, Seq r q1 p2, Trans q1 q2 q) => m:RIO <p, q1> a -> k:(x:a<r> -> RIO <p2 x, q2 x> b) -> RIO <p, q> b (RIO g) >>= f = RIO $ \x -> case g x of { (y, s) -> runState (f y) s } \end{code} %$ % The bounds capture various sequencing requirements (c.f. the Floyd-Hoare rules of consequence). % First, the output of the first action @m@, satisfies the refinement required by the continuation @k@; % \begin{code} bound Ret q1 r = \w x w' -> q1 w x w' => r x \end{code} % Second, the computations may be sequenced, \ie the postcondition of the first action @m@ implies the precondition of the continuation @k@ (which may be dependent upon the supplied value @x@): % \begin{code} bound Seq q1 p2 = \w x w' -> q1 w x w' => p2 x w' \end{code}% % Third, the transitive composition of the two computations, implies the final postcondition: % \begin{code} bound Trans q1 q2 q = \w x w' y w'' -> q1 w x w' => q2 x w' y w'' => q w y w'' \end{code} %%% \toolname verifies that the implementation of %%% @return@ and @>>=@ satisfy their refined type %%% signatures. %$ Both type signatures would be impossible to use if the programmer had to manually instantiate the abstract refinements (\ie pre- and post-conditions). % Fortunately, Liquid Type inference % automatically generates the instantiations making it practical to use \toolname to verify stateful computations written using @do@-notation. \subsection{Floyd-Hoare Logic in the RIO Monad} \label{subsec:state:examples} Next, we use bounded refinements to derive an encoding of Floyd-Hoare logic, by showing how to read and write (mutable) variables and typing higher order @ifM@ and @whileM@ combinators. \mypara{We Encode Mutable Variables} as fields of the @World@ type. For example, we might encode a global counter as a field: % \begin{code} data World = { ... , ctr :: Int, ... } \end{code} % We encode mutable variables in the refinement logic using McCarthy's @select@ and @update@ operators for finite maps and the associated axiom: % \begin{code} select :: Map k v -> k -> v update :: Map k v -> k -> v -> Map k v forall m, k1, k2, v. select (update m k1 v) k2 == (if k1 == k2 then v else select m k2 v) \end{code} % The quantifier free theory of @select@ and @update@ is decidable and implemented in modern SMT solvers~\cite{SMTLIB2}. \mypara{We Read and Write Mutable Variables} via suitable ``get'' and ``set'' actions. For example, we can read and write @ctr@ via: % \begin{code} getCtr :: RIO <pTrue, rdCtr> Int getCtr = RIO $ \w -> (ctr w, w) setCtr :: Int -> RIO <pTrue, wrCtr n> () setCtr n = RIO $ \w -> ((), w { ctr = n }) \end{code} %$ Here, the refinements are defined as: % \begin{code} pTrue = \w -> True rdCtr = \w x w' -> w' == w && x == select w ctr wrCtr n = \w _ w' -> w' == update w ctr n \end{code} % Hence, the post-condition of @getCtr@ states that it returns the current value of @ctr@, encoded in the refinement logic with McCarthy's @select@ operator while leaving the world unchanged. % The post-condition of @setCtr@ states that @World@ is updated at the address corresponding to @ctr@, encoded via McCarthy's @update@ operator. 
\mypara{The {ifM} combinator} takes as input a @cond@ action that returns a @Bool@ and, depending upon the result, executes either the @then@ or @else@ actions. We type it as: % %% name the return state v to fit it in one line \begin{code} bound Pure g = \w x v -> (g w x v => v == w) bound Then g p1 = \w v -> (g w True v => p1 v) bound Else g p2 = \w v -> (g w False v => p2 v) ifM :: (Pure g, Then g p1, Else g p2) => RIO <p , g> Bool -- cond -> RIO <p1, q> a -- then -> RIO <p2, q> a -- else -> RIO <p , q> a \end{code} % The abstract refinements and bounds correspond exactly to the hypotheses in the Floyd-Hoare rule for the @if@ statement. % The bound @Pure g@ states that the @cond@ action may access but does not \emph{modify} the @World@, \ie the output is the same as the input @World@. (In classical Floyd-Hoare formulations this is done by syntactically separating terms into pure \emph{expressions} and side effecting \emph{statements}). % The bound @Then g p1@ and @Else g p2@ respectively state that the preconditions of the @then@ and @else@ actions are established when the @cond@ returns @True@ and @False@ respectively. \mypara{We can use {ifM}} to implement a stateful computation that performs a division, after checking the divisor is non-zero. % We specify that @div@ should not be called with a zero divisor. Then, \toolname verifies that @div@ is called safely: % \begin{code} div :: Int -> {v:Int | v != 0} -> Int ifTest :: RIO Int ifTest = ifM nonZero divX (return 10) where nonZero = getCtr >>= return . (!= 0) divX = getCtr >>= return . (div 42) \end{code} % Verification succeeds as the post-condition of @nonZero@ is instantiated to {@\_ b w -> b <=>@ @select w ctr != 0@} and the pre-condition of @divX@'s is instantiated to {@\w -> select w ctr != 0@}, which suffices to prove that @div@ is only called with non-zero values. \mypara{The {whileM} combinator} formalizes loops as @RIO@ computations: % \begin{code} whileM :: (OneState q, Inv p g b, Exit p g q) => RIO <p, g> Bool -- cond -> RIO <pTrue, b> () -- body -> RIO <p, q> () \end{code} % As with @ifM@, the hypotheses of the Floyd-Hoare derivation rule become bounds for the signature. % Given a @cond@ition with pre-condition @p@ and post-condition @g@ and @body@ with a true precondition and post-condition @b@, the computation @whileM cond body@ has precondition @p@ and post-condition @q@ as long as the bounds (corresponding to the Hypotheses in the Floyd-Hoare derivation rule) hold. % First, @p@ should be a loop invariant; \ie when the @cond@ition returns @True@ the post-condition of the body @b@ must imply the @p@: % \begin{code} bound Inv p g b = \w w' w'' -> p w => g w True w' => b w' () w'' => p w'' \end{code} % Second, when the @cond@ition returns @False@ the invariant @p@ should imply the loop's post-condition @q@: % \begin{code} bound Exit p g q = \w w' -> p w => g w False w' => q w () w' \end{code} % Third, to avoid having to transitively connect the guard and the body, we require that the loop post-condition be a one-state predicate, independent of the input world (as in Floyd-Hoare logic): % \begin{code} bound OneState q = \w w' w'' -> q w () w'' => q w' () w'' \end{code} \mypara{We can use {whileM}} to implement a loop that repeatedly decrements a counter while it is positive, and to then verify that if it was initially non-negative, then at the end the counter is equal to @0@. % \begin{code} whileTest :: RIO <posCtr, zeroCtr> () whileTest = whileM gtZeroX decr where gtZeroX = getCtr >>= return . 
(> 0) posCtr = \w -> 0 <= select w ctr zeroCtr = \_ _ w' -> 0 == select w ctr \end{code} % Where the decrement is implemented by @decr@ with type: % \begin{code} decr :: RIO <pTrue, decCtr> () decCtr = \w _ w' -> w' == update w ctr ((select ctr w) - 1) \end{code} % $ \toolname verifies that at the end of @whileTest@ the counter is zero (\ie the post-condition @zeroCtr@) by instantiating suitable (\ie inductive) refinements for this particular use of @whileM@.
{ "alphanum_fraction": 0.6752332336, "avg_line_length": 31.4736842105, "ext": "tex", "hexsha": "4116ddb9939c5e2c16a92d5e5553db426fe63dd4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_path": "text/boundedrefinements/state.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_path": "text/boundedrefinements/state.tex", "max_line_length": 70, "max_stars_count": 11, "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_path": "text/boundedrefinements/state.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "num_tokens": 3443, "size": 11362 }
\documentclass[12pt,letterpaper]{article}
\usepackage{fullpage}
\usepackage[top=2cm, bottom=4.5cm, left=2.5cm, right=2.5cm]{geometry}

\usepackage{amsmath,amsthm,amsfonts,amssymb,amscd}
% \usepackage{lastpage}
\usepackage{enumerate}
\usepackage{fancyhdr}
% \usepackage{mathrsfs}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{hyperref}

% define vector
\newcommand{\q}{\underline}
\newcommand{\mt}{\mathrm}

% define in-line code
\definecolor{codegray}{gray}{0.9}
\newcommand{\code}[1]{\colorbox{codegray}{\texttt{#1}}}

\hypersetup{%
colorlinks=true,
linkcolor=blue,
linkbordercolor={0 0 1}
}

\renewcommand\lstlistingname{Snippet}
\renewcommand\lstlistlistingname{Snippet}
\def\lstlistingautorefname{Snippet.}

\lstdefinestyle{python}{
language = python,
frame = lines,
basicstyle = \footnotesize,
keywordstyle = \color{violet},
stringstyle = \color{green},
commentstyle = \color{red}\ttfamily
}

\setlength{\parindent}{0.2in}
\setlength{\parskip}{0.1in}

% Edit these as appropriate
\newcommand\course{STAT203A}
\newcommand\hwnumber{1} % <-- homework number
\newcommand\NetIDa{M.-F. Ho} % <-- NetID of person #1
% \newcommand\NetIDb{netid12038} % <-- NetID of person #2 (Comment this line out for problem sets)

\pagestyle{fancyplain}
\headheight 35pt
\lhead{\NetIDa}
% \lhead{\NetIDa\\\NetIDb} % <-- Comment this line out for problem sets (make sure you are person #1)
\chead{\textbf{\Large Homework 3}}
\rhead{\course \\ \today}
\lfoot{}
\cfoot{}
\rfoot{\small\thepage}
\headsep 1.5em

\newcommand{\Data}{\mathcal{D}}
\newcommand{\xvec}{\boldsymbol{x}}
\newcommand{\Xvec}{\boldsymbol{X}}
\newcommand{\Var}{\textrm{Var}}
\newcommand{\normal}{\textrm{N}}
\newcommand{\xmean}{\langle \xvec \rangle}
\newcommand{\newx}{\tilde{x}}
\newcommand{\integer}{\mathbb{N}}
\newcommand{\thetarv}{\tilde{\theta}}
\newcommand{\phirv}{\tilde{\phi}}
\newcommand{\ml}{m_{\ell}}
\newcommand{\specterms}{^{2S+1}\mathcal{L}^p_\mathcal{J}}

\begin{document}

\section*{3.1: SII ion}

The electronic configuration is $1s^2 2s^2 2p^6 3s^2 3p^3$. Since we have three electrons in the open shell, we have to count how many configurations we can arrange them in. We know that $\ml = -1, 0, 1$ and $m_s = -1/2, 1/2$, so we have to distribute 3 electrons into $3\times 2$ slots:
\begin{equation}
    C^{6}_3 = \frac{6 \times 5 \times 4}{3!} = 20.
\end{equation}
I crudely list all the possibilities here, based on the Pauli exclusion principle:

\begin{figure*}[h]
    \centering
    \includegraphics[width=0.75\textwidth]{images/table.png}
    \caption{electron configuration}
    \label{fig:table}
\end{figure*}

Of course, we need to write these into spectroscopic terms like
\begin{equation*}
    \specterms,
\end{equation*}
where $\mathcal{L} \in \{S, P, D, ...\}$ and $p \in \{\mt{odd}, \mt{blank} \}$, and $\mathcal{J}$ runs from $|L - S|$ to $L + S$.

Now we examine the total spin and total angular momentum in our list, and we get the last 2 columns in Fig~\ref{fig:table}.
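As a bookkeeping check (using the standard result that a term with given $S$ and $L$ contains $(2S+1)(2L+1)$ microstates), the three terms we will identify below together account for all 20 possibilities:
\begin{equation*}
    (2\times\tfrac{1}{2}+1)(2\times2+1) + (2\times\tfrac{1}{2}+1)(2\times1+1) + (2\times\tfrac{3}{2}+1)(2\times0+1) = 10 + 6 + 4 = 20.
\end{equation*}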
We follow the procedure taught in the class, and write these possibilities into a cubic diagram:

\begin{figure}[h]
    \centering
    \includegraphics[width=0.75\textwidth]{images/cubic.png}
\end{figure}

After examining all possibilities, using the symmetry along $\ml$ and $m_s$, we can write down these three:

\begin{itemize}
    \item The $5\times 2$ matrix: $S = 1/2, L = 2, \mathcal{J} = 2 \pm 1/2$, thus $^2{\mt D}^{\mt o}_{3/2,5/2}$
    \item The $3\times2$ matrix: $S = 1/2, L = 1, \mathcal{J}= 1 \pm 1/2$, thus $^2\mt{P}^{\mt o}_{1/2,3/2}$
    \item The $1\times4$ matrix: $S=3/2, L = 0, \mathcal{J} = 0 + 3/2$, thus $^4{\mt S}^{\mt o}_{3/2}$
\end{itemize}

Now we have to construct the energy levels. Basically, there are 5 different levels (including fine-structure splitting), and any pair would be a valid transition. So we would have $C^5_2 = 5!/3!/2! = 10$ transitions.

The question is how we line up these states. According to Hund's first rule, the term with maximum multiplicity ($2S + 1$) has the lowest energy, so apparently $^4{\mt S}^{\mt o}_{3/2}$ is the lowest. Also, among terms of the same multiplicity, the term with the largest $L$ has the lowest energy, so we should line up the order as $P \rightarrow D \rightarrow S$. The third rule says the level with the lowest $\mathcal{J}$ has the lowest energy. Thus, the order in our mind right now should look like:
\begin{equation*}
    {^2}\mt{P}^{\mt o}_{3/2} \rightarrow {^2}\mt{P}^{\mt o}_{1/2} \Rightarrow {^2}{\mt D}^{\mt o}_{5/2} \rightarrow {^2}{\mt D}^{\mt o}_{3/2} \Rightarrow {^4}{\mt S}^{\mt o}_{3/2},
\end{equation*}
where bigger arrows indicate larger energy gaps downward. We should be able to draw these 10 transitions in our mind, but strictly speaking:

\begin{enumerate}
    \item ${^2}\mt{P}^{\mt o}_{3/2} \rightarrow {^2}\mt{P}^{\mt o}_{1/2}$
    \item ${^2}\mt{P}^{\mt o}_{3/2} \rightarrow {^2}{\mt D}^{\mt o}_{5/2}$
    \item ${^2}\mt{P}^{\mt o}_{3/2} \rightarrow {^2}{\mt D}^{\mt o}_{3/2}$
    \item ${^2}\mt{P}^{\mt o}_{3/2} \rightarrow {^4}{\mt S}^{\mt o}_{3/2}$
    \item ${^2}\mt{P}^{\mt o}_{1/2} \rightarrow {^2}{\mt D}^{\mt o}_{5/2}$
    \item ${^2}\mt{P}^{\mt o}_{1/2} \rightarrow {^2}{\mt D}^{\mt o}_{3/2}$
    \item ${^2}\mt{P}^{\mt o}_{1/2} \rightarrow {^4}{\mt S}^{\mt o}_{3/2}$
    \item ${^2}{\mt D}^{\mt o}_{5/2} \rightarrow {^2}{\mt D}^{\mt o}_{3/2}$
    \item ${^2}{\mt D}^{\mt o}_{5/2} \rightarrow {^4}{\mt S}^{\mt o}_{3/2}$
    \item ${^2}{\mt D}^{\mt o}_{3/2} \rightarrow {^4}{\mt S}^{\mt o}_{3/2}$.
\end{enumerate}

\section*{3.2: Draine 4.1:}

The rules we should follow are listed in Section 6.7.1 of Draine. These are the transition selection rules:
\begin{enumerate}
    \item Parity must change.
    \item $\Delta L = 0, \pm 1$
    \item $\Delta J = 0, \pm 1$, but $J = 0 \rightarrow 0$ is forbidden.
    \item Only one single-electron wave function $n\ell$ changes, with $\Delta \ell = \pm 1$.
    \item $\Delta S = 0$: Spin does {\bf not} change.
\end{enumerate}

Notably, a transition violating the 5th rule is called semi-forbidden or intercombination, and one violating any of the 1st through 4th rules is said to be forbidden.

\begin{enumerate}
    \item CIII: $^3{\mt P}^{\mt o}_1 \rightarrow {^1}{\mt S}_0$: it fulfills (1, 2, 3, 4) and violates 5, so semi-forbidden.
    \item OIII: ${^1}{\mt D}_2 \rightarrow {^3}{\mt P}_2$: it violates 1, so forbidden.
    \item OIII: ${^1}{\mt S}_0 \rightarrow {^1}{\mt D}_2$: it violates 1, so forbidden.
    \item OIII: ${^5}{\mt S}^{\mt o}_2 \rightarrow {^3}{\mt P}_1$: it fulfills (1, 2, 3, 4) but violates 5, so semi-forbidden.
    \item CIV: ${^2}{\mt P}^{\mt o}_{3/2} \rightarrow {^2}{\mt S}_{1/2}$: it fulfills (1, 2, 3, 4, 5), so allowed.
\item Ne II: ${^2}{\mt P}^{\mt o}_{1/2} \rightarrow {^2}{\mt P}^{\mt o}_{3/2}$: it violates 1, so forbidden. \item O I: ${^3}{\mt S}^{\mt o}_1 \rightarrow {^3}{\mt P}_2$: it fulfills (1, 2, 3, 4, 5), so allowed. \end{enumerate} \end{document}
{ "alphanum_fraction": 0.643731107, "avg_line_length": 39.6971428571, "ext": "tex", "hexsha": "1fd93c36456ade21a6e5b7b9fcc115e5b442ea43", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1cd1a1d6e24356f0f1a94a2780598fdb9620c020", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jibanCat/ISM_homework", "max_forks_repo_path": "hw3/hw3.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "1cd1a1d6e24356f0f1a94a2780598fdb9620c020", "max_issues_repo_issues_event_max_datetime": "2022-01-22T00:14:24.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-22T00:14:24.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jibanCat/ISM_homework", "max_issues_repo_path": "hw3/hw3.tex", "max_line_length": 126, "max_stars_count": null, "max_stars_repo_head_hexsha": "1cd1a1d6e24356f0f1a94a2780598fdb9620c020", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jibanCat/ISM_homework", "max_stars_repo_path": "hw3/hw3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2595, "size": 6947 }
\title{Getting by with Git}
%\author{}
\date{}

\begin{document}
\tableofcontents

For this document, assume \ilcode{$proj} is defined as something like:
\begin{code}
proj=my_awesome_project
[email protected]
\end{code}
The following instructions show how to set up a git repository \ilcode{${host}:~/repo/${proj}.git} and use a local copy on your own machine \ilcode{~/code/${proj}}.

\section{Set up a Repository}

\subsection{Global Settings}

It is important for people to know who you are when committing changes.
Create a file \ilfile{~/.gitconfig} (by typing \ilcode{gedit ~/.gitconfig} in a terminal) which contains appropriate lines:
\begin{code}
[user]
name = Alex Klinkhamer
email = [email protected]
[push]
default = matching
\end{code}

\subsection{Making a Repository}

Code shouldn't just be stored on one machine because the hard drive may crash or we may have multiple work machines.
Therefore, we should make a repository on a server that all of our machines can access.
\begin{code}
ssh $host
mkdir -p ~/repo/${proj}.git
cd ~/repo/${proj}.git
\end{code}

\quicksec{Private}
If this is a project just for you, then all we need to do is initialize the repository.
\begin{code}
git init --bare
\end{code}

\quicksec{Public}
If this is a project that should be shared with everyone, we must allow people to get to the repository and make it readable and writable by everyone.
\begin{code}
chmod a+x ~/ ~/repo
git init --bare --shared=0666
\end{code}

\quicksec{Shared with Group}
Usually a public repository isn't the greatest idea because anyone on the machine can mess with your code.
If you have the luxury of having a group containing all members, we can restrict access to the group.
Let it be defined as \ilcode{group=my_group}.
\begin{code}
chmod a+x ~/ ~/repo
chgrp $group .
chmod g+rwxs .
git --bare init --shared=group
\end{code}

\section{Basic Usage}

This section describes the minimal amount of knowledge you need to use git.
Note that some commands (\ilcode{git pull} and \ilcode{git push}) assume some default values which were set by \ilcode{git clone}.

\subsection{Check out Code}

Back on your own machine, get a copy of the repository using \ilcode{git clone}.
\begin{code}
mkdir -p ~/code
cd ~/code
git clone ${host}:~/repo/${proj}.git
cd $proj
\end{code}

If you're a member of someone else's project (say, \ilcode{friend=alex}), the \ilcode{clone} command will need to reference their home directory on the server:
\begin{code}
git clone ${host}:~${friend}/repo/${proj}.git
\end{code}

After this initial \ilcode{clone}, it is simple to pull changes other people have made.
\begin{code}
git pull
\end{code}

\subsection{Check in Code}

To add a new file to version control:
\begin{code}
git add $file
\end{code}

To see what files have been added, modified, or are not tracked by git, use the status command.
\begin{code}
git status
\end{code}

To commit all of your changes:
\begin{code}
git commit -a
\end{code}
This will open an editor so you can explain your changes in a \textit{commit message}.
The editor is determined by the \ilcode{$EDITOR} environment variable, which is probably \ilcode{nano} by default... pretty easy to use.
If you only have a short message and don't want to work in an editor, the message may be specified directly.
\begin{code}
git commit -a -m 'Fix size_t comparison warnings'
\end{code}

One can also change the most recent commit or its message (\textbf{ONLY DO THIS IF THE COMMIT HAS NOT BEEN PUSHED}).
\begin{code} git commit -a --amend \end{code} Finally, push your changes to the repository, otherwise nobody will see them! \begin{code} git push origin master \end{code} If you're not pushing to an empty repository, the following shorthand does the same thing. \begin{code} git push \end{code} If you still need arguments, you can set the defaults manually with \ilcode{git-config}. \begin{code} git config branch.master.merge refs/heads/master git config branch.master.remote origin \end{code} To see other options: \begin{code} git config -l \end{code} To edit other options: \begin{code} git config -e \end{code} \subsection{Misc File Operations} Add file, remove file, move file, or discard changes. \begin{code} git add new-file.c git rm unwanted-file.c git mv old-file.c new-file.c git checkout HEAD file-with-changes.c \end{code} If you previously added a file and want to remove it, you must be rather forceful. \begin{code} git rm -f --cached unwanted-file.c \end{code} See previous commit messages. \begin{code} git log \end{code} \section{Working with Others} The above instructions are fine for working by yourself, but what about when others are making changes concurrently? If \ilcode{git push} complains about your copy not being up to date, you'll need to do a \ilcode{git pull}. However, there could be conflicts! (\textbf{TODO}: I don't have enough experience to give advice here.) If you have uncommitted changes and wish to pull new changes from a teammate, first stash your changes, then have them automatically merged. (Merges don't always go cleanly though, so copy your changes elsewhere just in case.) \begin{code} git stash save git pull git stash pop \end{code} \section{A Note on Commit Messages} Most commits do warrant some description. Imagine if your changes broke something, and someone else (or ``future you'') is tasked with fixing it. Without a meaningful commit message to read, that person doesn't know your intent in making those original changes, and their change may break something else! (Side note: Use tests to protect your code from others.) A commit message should be formatted with the first line being a short description (cut off at 50 characters), followed by an empty line, and then more detailed explanation. For example, this is one of mine: \begin{code} Add normal mapping to raytracer 1. In the raytraced version, one can now specify a normal map in object coordinates to give the illusion of a more complex surface. a. material.c/h map_normal_texture() a. wavefront-file.c readin_wavefront() a. raytrace.c fill_pixel() - This function is getting too complicated... 2. When determining the normal of the track's surface, be sure the damn thing is normalized! This only affects tracks without vertex normals. b. motion.c apply_track_gravity() 3. Clean up some parse code with the addition of a new macro. a. util.h AccepTok() c. wavefront-file.c + readin_wavefront() + readin_materials() c. dynamic-setup.c readin_Track() \end{code} This is just my style, but I describe changes in order of priority and hierarchically by: intent (number prefix), file (letter prefix), function or class (`+' prefix, or on same line if there's room), and extra description (`-' prefix). The letter prefixes signify the type of change: \textit{c} means change code, \textit{b} means bug fix, \textit{a} means add code (new functionality), \textit{r} means remove code, and \textit{d} means comment/documentation changes (if the file also contains code). 
This format works pretty well, though the letters may be a bit pedantic (former colleagues used a `*' prefix instead). \end{document} %export GIT_SSL_NO_VERIFY=true
{ "alphanum_fraction": 0.7555463117, "avg_line_length": 34.5071770335, "ext": "tex", "hexsha": "1569ca38750aa35f1b92c753e3e3301314703065", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c7e08c4f4b39bb178ec0eadc83877629ba0c5b3f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "grencez/www", "max_forks_repo_path": "src/tut/git.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c7e08c4f4b39bb178ec0eadc83877629ba0c5b3f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "grencez/www", "max_issues_repo_path": "src/tut/git.tex", "max_line_length": 265, "max_stars_count": null, "max_stars_repo_head_hexsha": "c7e08c4f4b39bb178ec0eadc83877629ba0c5b3f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "grencez/www", "max_stars_repo_path": "src/tut/git.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1837, "size": 7212 }
\documentclass[]{article} \usepackage[margin=1.0in]{geometry} \usepackage{amssymb} %title material \title{Astronomy 400B Final Review} \author{Brant Robertson} \date{May, 2015} %include latex definitions \input{astro400B_definitions.tex} %begin the document \begin{document} %make the title, goes after document begins \maketitle %first section \section{Lecture 1: Stellar Mass, Luminosity, Flux, Radius, and Temperature} Stellar masses are often expressed in terms of a {\it solar mass} ($\Msun$; mass of the Sun) \begin{equation} 1\Msun = 1.9891\times10^{33}\g \end{equation} \noindent in grams ($\g$). The most massive stars are $\approx100\Msun$ while the least massive stars are $\approx0.075\Msun$. Stellar luminosities (ergs of energy emitted per second; $1\erg = 1~\g~\cm^{2}~\s^{-2}$) are also often expressed in terms of the total (bolometric) {\it solar luminosity} ($\Lsun$; luminosity of the Sun) \begin{equation} 1\Lsun = 3.846\times10^{33} \erg~\s^{-1} \end{equation} \noindent Stars range in luminosity from $10^{6}\Lsun$ to less than $10^{-4}\Lsun$. The {\it flux} $F$ is the energy per unit second per unit area ($\erg~\s^{-1}~\cm^{-2}$) that is received from an object a distance $d$ away, and is given by the {\it inverse square law} \begin{equation} \label{eqn:inverse_square} F = \frac{L}{4\pi d^{2}}. \end{equation} \noindent The {\it flux density} ($\Fnu \equiv dF/d\nu$) of an object is best measured in {\it janskys} \begin{equation} 1\Jy = 10^{-23} \erg~\s^{-1}~\cm^{-2}~\Hz^{-1}. \end{equation} \noindent Note that the {\it hertz} ($1\Hz = 1 s^{-1}$) is the unit of frequency $\nu$. Astronomers will also use a flux density ($\Flambda = dF/d\lambda$) defined relative to wavelength. Typically, the units of $\Flambda$ are in $\erg~\cm^{-2}~\s^{-1}~\Ang^{-1}$, where $\Ang$ is the {\it angstrom} ($1\Ang = 10^{-8}\cm$). The wavelength and frequency of light are related by $c = \nu \lambda$, and the two flux densities are therefore related by $\Flambda = (c/\lambda^2)\Fnu$. The total flux is related to the flux density by \begin{equation} F = \int \Fnu d \nu = \int \Flambda d\lambda. \end{equation} \noindent If the distance of an object is known, astronomers will also use the term {\it luminosity density} (e.g., $\Lnu = dL/d\nu$). The luminosity, radius, and temperature of an object are related. For stellar objects, we often use the {\it solar radius} ($\Rsun$) to express sizes \begin{equation} 1\Rsun = 6.995 \times 10^{10} \cm \end{equation} \noindent If the luminosity $L$ and radius $R$ of an object are known, we can define the effective temperature $T$ in {\it Kelvin} ($K$) of an object through the {\it Stefan-Boltzmann Law} \begin{equation} L = 4\pi R^{2}\sigmaSB T^{4} \end{equation} \noindent where the {\it Stefan-Boltzmann constant} is \begin{equation} \sigmaSB = 5.670373\times10^{-5} \erg~\cm^{-2}~\s^{-1}~\K^{-4}. \end{equation} The Stefan-Boltzmann constant can be computed theoretically in terms of the speed of light, the {\it Boltzmann constant} \begin{equation} \kB = 1.380658\times10^{-16} \erg~\K^{-1} \end{equation} \noindent and the {\it Planck constant} \begin{equation} h = 6.6260755\times10^{-27}\erg~\s, \end{equation} \noindent which gives the definition \begin{equation} \sigmaSB \equiv \frac{2\pi^5 \kB^4}{15 h^3 c^2}. \end{equation} The effective temperature is typically a measure of the temperature of the star at its {\it photosphere}, or the radius where the optical depth of the star's atmosphere is $\tau\approx1$ and it becomes opaque. For the Sun, $T\approx5780\K$. 
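As a quick numerical consistency check (using the cgs values quoted above), inverting the Stefan-Boltzmann Law for the Sun gives
\begin{equation}
T = \left(\frac{\Lsun}{4\pi \Rsun^{2}\sigmaSB}\right)^{1/4} \approx \left[\frac{3.846\times10^{33}}{4\pi\,(6.995\times10^{10})^{2}\times 5.67\times10^{-5}}\right]^{1/4}\K \approx 5.8\times10^{3}\K,
\end{equation}
\noindent which agrees with the quoted effective temperature of the Sun.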
The temperature is also related to the wavelength at the peak of the black body curve via {\it Wien's Displacement Law} \begin{equation} \lambda_{\mathrm{max}} = \frac{2.897756\times10^{7} \Ang~\K}{T}, \end{equation} which gives $\lambda_{\mathrm{max}}\approx5000\Ang$ (yellow) for the Sun. \section{Lecture 2: Extinction} Dust absorbs and scatters optical light, and we can describe the rate at which light is absorbed as it travels along the $x$ direction through the differential relation \begin{equation} \frac{d\Flambda}{dx} = - \kappa_{\lambda} \Flambda \end{equation} \noindent where $\kappa_{\lambda}$ is the {\it opacity}. This relation has a simple solution \begin{equation} \Flambda(x) = \Flambda(x=0)\exp\left(-\int_{0}^{x} \kappa_{\lambda} dx'\right) \end{equation} \noindent where we can define the optical depth \begin{equation} \tau = \int_{0}^{x} \kappa_{\lambda} dx' \end{equation} \noindent In the limit of pure absorption, a system is {\it optically thick} when $\tau = 1$ and $\Flambda = \Flambda(x=0)/e$. For optical light, interstellar dust has approximately $\kappa_{\lambda}\propto1/\lambda$. \section{Lecture 3: Galaxy Luminosity Function} We can count the number density of galaxies as a function of their luminosity, and we appropriately call this distribution the {\it luminosity function}. The galaxy luminosity function has been found to have a shape close to a parameterized form called the {\it Schechter} function (after Paul Schechter). The Schechter function provides the number density of galaxies in a differential luminosity bin $dL$ as \begin{equation} \label{eqn:schechter_function_luminosity} \Phi(L)dL = \phi_{\star} \left(\frac{L}{L_{\star}}\right)^{\alpha}\exp\left(-\frac{L}{L_{\star}}\right)\frac{dL}{L_{\star}} \end{equation} \noindent where $L_{\star}$ is a characteristic luminosity of galaxies and $\phi_{\star}$ is a typical abundance. Below $L_\star$, the luminosity function is a power law, and above $L_\star$ the abundance of galaxies drops exponentially. Sometimes, astronomers will use $1+\alpha$ as the power law exponent, so beware! {\bf See Figure 1.16 of Sparke and Gallagher.} It happens to be the case that $L_{\star}\approx2\times10^{10}\Lsun$, which is close to the luminosity of the Milky Way. The typical abundance of galaxies is $\phi_\star\approx7\times10^{-3}\Mpc^{-3}$. The faint-end slope of the 2DF luminosity function is $\alpha=-0.46$. Defined as in Equation \ref{eqn:schechter_function_luminosity}, the number of galaxies diverges as $L\to0$ if $\alpha<-1$. The total luminosity density provided by galaxies can be found by integrating Equation \ref{eqn:schechter_function_luminosity} as \begin{equation} \rho_{L} = \int_{0}^{\infty} \Phi(L) L dL = \phi_\star L_{\star} \Gamma(\alpha + 2) \end{equation} \noindent where $\Gamma$ is the Gamma function, which for an integer $n$ is $\Gamma(n) = (n-1)!$. It turns out that $\Gamma(1.5)\approx0.886227$, so we have that $\rho_{L}\approx1.25\times10^{8}\Lsun\Mpc^{-3}$. For a partial integral, we have \begin{equation} \rho_{L}(L>L_{\mathrm{min}}) = \int_{L_{\mathrm{min}}}^{\infty} \Phi(L) L dL = \phi_\star L_{\star} \Gamma(\alpha + 2, L_{\mathrm{min}}/L_{\star}), \end{equation} \noindent where $\Gamma(a,x)$ is the incomplete gamma function. \section{Lecture 4: Recombination and the Ionization State of the ISM} Ionized hydrogen (proton plus electron) can undergo a recombination to form neutral hydrogen. The rate of this process depends on the properties of the ionized gas. 
We can write the recombination rate $dn_e/dt$ that changes the number density $n_e$ of free electrons as \begin{equation} \frac{d n_{e}}{d t} = n_{e}^2 \alpha(T_e) \end{equation} \noindent where $T_e$ is the temperature of the gas and $\alpha(T_e)$ is the ``recombination coefficient'' \begin{equation} \alpha(T_e) \approx 2 \times 10^{-13} \left( \frac{T_e}{10^4~\K}\right)^{-3/4}~\cm^{3}~\s^{-1}. \end{equation} \noindent There's a lot hidden in the coefficient! We can also define the recombination time $t_{rec}$ that characterizes the timescale over which the ionized gas will significantly increase its neutrality. The recombination time is given by \begin{equation} t_{rec} = \frac{n_e}{|dn_e/dt|} = \frac{1}{n_e \alpha(T_e)} \approx 1500~\yr \times \left(\frac{T_e}{10^{4}K}\right)^{3/4}\left(\frac{100~\cm^{-3}}{n_e}\right) \end{equation} In HII regions, $t_{rec}$ is a few thousand years. In the diffuse ionized ISM, the recombination time is a few million years. \subsection{Cooling Rate and Time} Gas has an internal thermal energy of $E \propto nT$. If the gas radiates this thermal energy with an energy $L$, the cooling timescale for the gas is $t_{cool} \propto nT/L$. For optically thin gas, we can write \begin{equation} L = n^{2} \Lambda(T) \end{equation} \noindent and \begin{equation} t_{cool} \propto T/[n\Lambda(T)]. \end{equation} The quantity $\Lambda(T)$ is called the ``cooling function'' and depends only on temperature. {\bf See Table 2.5 and Figure 2.25 of Sparke and Gallagher.} At high temperatures $T>10^{7}~\K$, free-free cooling dominates such that $\Lambda(T) \propto T^{1/2}$ and $t_{cool} \propto \sqrt{T}/n$. \section{Lecture 5: Poisson's Equation} Take the potential and apply the Laplacian operator \begin{equation} \nabla^{2} \equiv \nabla \cdot \nabla = \left[\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right] \end{equation} \noindent to both sides. Remembering that the operator acts on $\vx$ and not $\vx'$, we have \begin{equation} \label{eqn:pois_init} \nabla^{2} \Phi(\vx) = - \int G \rho(\vx') \nabla^{2} \left(\frac{1}{|\vx-\vx'|}\right)d^{3}\vx'. \end{equation} \noindent We can evaluate this by noting that \begin{equation} \label{eqn:nabla} \nabla\left(\frac{1}{|\vx - \vx'|}\right) = -\frac{\vx-\vx'}{|\vx-\vx'|^3}, \nabla^{2}\left(\frac{1}{|\vx - \vx'|}\right) = 0. \end{equation} \noindent So we conclude that outside of a very small region around $\vx$, $\nabla^{2}\Phi(\vx)=0$. Let's take a spherical region $S(\epsilon)$ of radius $\epsilon$ centered on $\vx$. In proceeding, let's note that \begin{equation} \nabla^{2} f(|\vx-\vx'|) = \nabla_{\vx'}^{2} f(|\vx-\vx'|) \end{equation} \noindent for any function $f(|\vx - \vx'|)$. If we take $\epsilon$ to be small enough such that $\rho(\vx)\approx$ a constant, then we can write \begin{eqnarray} \label{eqn:pois_med} \nabla^{2}\Phi(\vx) &\approx& - G\rho(\vx) \int_{S(\epsilon)} \nabla^2 \left( \frac{1}{|\vx - \vx'|} \right)d^{3}\vx' \nonumber \\ &=& - G \rho(\vx) \int_{S(\epsilon)} \nabla_{\vx'}^2 \left( \frac{1}{|\vx - \vx'|} \right)d V'. 
\end{eqnarray} \noindent Now we get to use the {\it divergence} theorem \begin{equation} \int \nabla^{2} f dV = \oint \nabla f \cdot dS, \end{equation} \noindent which allows us to write Equation \ref{eqn:pois_med} as \begin{equation} - G \rho(\vx) \int_{S(\epsilon)} \nabla_{\vx'}^2 \left( \frac{1}{|\vx - \vx'|} \right)d V' = - G \rho(\vx) \oint_{S(\epsilon)} \nabla_{\vx'}\left(\frac{1}{|\vx-\vx'|}\right) \cdot d\vS' \end{equation} \noindent By applying Equation \ref{eqn:nabla} and the identity $\nabla_{\vx'} f = -\nabla f$, we have \begin{eqnarray} - G \rho(\vx) \oint_{S(\epsilon)} \nabla_{\vx'}\left(\frac{1}{|\vx-\vx'|}\right) \cdot d\vS' &=& -G \rho(\vx) \oint_{S(\epsilon)} \left(\frac{\vx-\vx'}{|\vx-\vx'|^{3}}\right) \cdot d\vS' \nonumber \\ &=& 4 \pi G \rho(\vx) \end{eqnarray} \section{Lecture 6: Distribution Function} The {\it distribution function} $f(\vx,\vv,t)$ gives the probability density in six-dimensional {\it phase space} $(\vx,\vv)$ of having an object in the volume $d\vx d\vv$. The number density $n(\vx,t)$ of objects is the volume integral of the distribution function \begin{equation} n(\vx,t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\vx,\vv,t) dv_{x}dv_y,dv_z. \end{equation} \noindent We can use this expression to define moments of the velocity distribution, such as the average velocity \begin{equation} \ave{\vv(\vx,t)} = \frac{1}{n(\vx,t)} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \vv f(\vx,\vv,t) dv_{x}dv_y,dv_z. \end{equation} For a collisionless system where objects cannot be created or destroyed, the number density of objects in a given volume will follow the continuity equation \begin{equation} \label{eqn:continuity} \frac{\partial n}{\partial t} + \frac{\partial (nv)}{\partial x} = 0. \end{equation} \noindent This equation simply describes the mass conservation of objects, such that the time rate of change of the number density is balanced by the advection of spatial gradients in the number density. The {\it collisionless Boltzmann equation} that describes the probability density of objects in phase space is more complicated because it must describe a time variation in the velocity as well as the spatial coordinates. In one dimension, we have that \begin{equation} \frac{\partial f}{\partial t} + v\frac{\partial f}{\partial x} + \frac{dv}{dt}(x,v,t) \cdot \frac{\partial f}{\partial v} = 0 \end{equation} \noindent The acceleration of an object will only depend on position in a background potential, so we have that $dv/dt = -\partial \Phi/\partial x$, and \begin{equation} \label{eqn:boltzmann} \frac{\partial f}{\partial t} + v \frac{\partial f}{\partial x} - \frac{\partial \Phi}{\partial x}(x,t)\cdot \frac{\partial f}{\partial v} = 0. \end{equation} \noindent In three dimensions, we can write \begin{equation} \label{eqn:bvec} \frac{\partial f(\vx, \vv, t)}{\partial t} + \vv \cdot \nabla f - \nabla \Phi \cdot \frac{\partial f}{\partial \vv} = 0. \end{equation} \noindent We usually don't deal with this equation in this form, but often will take moments with respect to e.g., velocity \begin{equation} \frac{\partial n(x,t)}{\partial t} + \frac{\partial}{\partial x}[n(x,t)\ave{v(x,t)}] - \frac{\partial \Phi}{\partial x}(x,t)[f]_{-\infty}^{\infty} = 0 \end{equation} \noindent The last term will be zero if $f$ is well behaved, and we get back Equation \ref{eqn:continuity}. 
If we integrate Equation \ref{eqn:boltzmann} multiplied by $v$, we instead find
\begin{equation}
\frac{\partial}{\partial t}[n(x,t)\ave{v(x,t)}] + \frac{\partial}{\partial x}[n(x,t)\ave{v^2(x,t)}] = - n(x,t) \frac{\partial \Phi}{\partial x}
\end{equation}
\noindent If we define the velocity dispersion as
\begin{equation}
\ave{v^2(x,t)} = \ave{v(x,t)}^2 + \sigma^2,
\end{equation}
\noindent apply this definition, and then divide by $n(x,t)$, we have
\begin{equation}
\label{eqn:v_ave}
\frac{d\ave{v}}{dt} + \ave{v}\frac{\partial \ave{v}}{\partial x} = -\frac{\partial \Phi}{\partial x} - \frac{1}{n}\frac{\partial}{\partial x}[n\sigma^2(x,t)].
\end{equation}
\section{Lecture 7: A Simple Example of the Tidal Limit}
Consider the satellite to have mass $m$ and the main galaxy to have mass $M$, separated by distance $D$, orbiting the center of mass $C$ with angular speed $\Omega$. Take $x$ as the coordinate along the ray from $m$ toward $M$. The center of mass is then at $x=DM/(M+m)$. The effective potential is then
\begin{equation}
\Phieff(x) = -\frac{GM}{|D-x|} -\frac{Gm}{|x|} - \frac{\Omega^2}{2}\left(x-\frac{DM}{M+m}\right)^2.
\end{equation}
The effective potential has three maxima along this axis, known as the {\it Lagrange points}, [$L_1$, $L_2$, $L_3$]. Let's find them by setting the derivative of the effective potential to zero.
\begin{equation}
\label{eqn:find_phieff_max}
\frac{\partial\Phieff}{\partial x} = 0 = -\frac{GM}{(D-x)^2} \pm \frac{Gm}{x^2} - \Omega^2\left(x-\frac{DM}{M+m}\right)
\end{equation}
\noindent The acceleration $\Omega^2DM/(M+m)$ of $m$ as it circles $C$ owes to the gravitational attraction of $M$. We can then write
\begin{equation}
\Omega^2\frac{DM}{M+m}=\frac{GM}{D^2},~~\mathrm{so}~\Omega^2=\frac{G(M+m)}{D^3}
\end{equation}
\noindent If the satellite is much less massive than the main galaxy, $L_1$ and $L_2$ will lie close to $m$. Substitute $\Omega^2$ into Equation \ref{eqn:find_phieff_max}, and expand in powers of $x/D$ to find
\begin{equation}
0 \approx -\frac{GM}{D^2} - 2\frac{GM}{D^3}x \pm \frac{Gm}{x^2} - \frac{G(M+m)}{D^3}\left(x-\frac{DM}{M+m}\right)
\end{equation}
\noindent At the Lagrange points $L_1$ and $L_2$ we have
\begin{equation}
x = \pm r_J, ~~\mathrm{where}~r_J = D\left(\frac{m}{3M+m}\right)^{1/3}
\end{equation}
\noindent Stars that cannot stray further from the satellite than $r_J$, the {\it Jacobi radius} or {\it Roche limit}, will remain bound to it. The Lagrange point $L_1$ is not where the gravitational forces from the satellite and the main galaxy are equal; it is where the effective potential is stationary, and it lies further from the satellite than the force-balance point. This is also the radius at which an expanding star in a binary begins to lose mass to its companion. When $M\gg m$, then the mean density within $r_J$ is three times the mean density within $D$ of the main galaxy. A star orbiting the satellite near $r_J$ will have an orbital period comparable to the orbit period of the satellite around the main galaxy. When satellites are not on circular orbits, the relevant $r_J$ is determined at the pericenter. If the satellite is orbiting within the dark matter halo of the main galaxy, the relevant radius is
\begin{equation}
r_J = D\left[\frac{m}{2M(<D)}\right]^{1/3}
\end{equation}
where $M(<D)$ is the mass within $D$. What's the Jacobi radius of the LMC-Milky Way system? The LMC is at a distance of $\sim50\kpc$ where the speed of a circular orbit is about the same as it is at the solar circle, or about $\sim200\km~\s^{-1}$. The mass of the Milky Way within the LMC's orbit is about $5\times10^{11}\Msun$.
The LMC mass is about $10^{10}\Msun$, so we have
\begin{equation}
r_J\approx 50\kpc~\times~\left(\frac{10^{10}\Msun}{2\times5\times10^{11}\Msun}\right)^{1/3} \approx 11\kpc.
\end{equation}
The LMC disk lies within this radius, but the SMC is too far away to stay bound to the LMC.
\section{Lecture 8: Rotation Curve}
For a galaxy at a systemic velocity $V_{sys}$ inclined at an angle $i$ to face-on, the rotation curve we measure as a function of radial distance $R$ and azimuth $\phi$ is
\begin{equation}
V_r(R,i) = V_{sys} + V(R)\sin i \cos \phi
\end{equation}
\noindent so to determine $V(R)$ we must know the inclination and perhaps average over azimuth. In determining the rotation curve, we are often trying to weigh the galaxy. This is straightforward to do in some limiting cases. For a thin exponential disk that supplies its own gravity, the rotation curve can be written in terms of Bessel functions as
\begin{equation}
V^2(R) = 4\pi G \Sigma_0 h_R y^2 [I_0(y) K_0(y) - I_1(y) K_1(y)]
\end{equation}
\noindent where $\Sigma_0$ is the central mass surface density, $y\equiv R/2h_R$, and $I$ and $K$ are modified Bessel functions (mind the subscript!). We can relate this to the total disk mass $M_d = 2\pi\Sigma_0 h_R^2$, such that
\begin{equation}
V^2(R) = \frac{2GM_d}{h_R}f(y),
\end{equation}
where $f(y) = y^2 [I_0(y) K_0(y) - I_1(y) K_1(y)]$. Otherwise, if we examine the regions of a disk where the dark halo dominates the potential, then we can just use the radial force equation
\begin{equation}
\frac{V^2(R)}{R} = \frac{GM(<R)}{R^2}
\end{equation}
\noindent to estimate the interior mass from the rotation curve.
\section{Lecture 9: Fundamental Relations of Elliptical Galaxies}
There is a correlation between elliptical galaxy luminosity and velocity dispersion called the Faber-Jackson relation
\begin{equation}
\frac{L_V}{2\times10^{10}\Lsun} \approx \left( \frac{\sigma}{200~\km~\s^{-1}}\right)^{4}.
\end{equation}
There is another relation called the {\it Fundamental Plane} that connects the effective radius, surface brightness within the effective radius, and velocity dispersion as
\begin{equation}
\Reff \propto \sigma^{1.2} I_{e}^{-0.8}.
\end{equation}
Note that virial expectations (with a constant mass-to-light ratio) would suggest $\Reff \propto \sigma^2 I_{e}^{-1}$.
\section{Lecture 10: Dynamical Friction}
One of the most important phenomena in groups and clusters is {\it dynamical friction}, which is the process by which the galaxies in groups and clusters lose energy and angular momentum to the surrounding sea of dark matter and stars in the intragroup or intracluster medium. Dynamical friction causes galaxies to ``sink'' to the central regions of the group or cluster and thereby be assimilated into the larger system. So how does this process work? As with the impulse approximation we studied before, consider a {\it galaxy} of mass $M$ moving past a star (or clump of dark matter) with a mass $m$ at an impact parameter $b$. During the passage, the galaxy has a change of velocity perpendicular to its direction of motion of
\begin{equation}
\Delta V_{\perp} = \frac{2Gm}{bV}.
\end{equation}
\noindent This equation applies when the impact parameter $b$ is larger than the size of the galaxy with mass $M$, and when the velocity $V$ is large enough at separation $b$ such that $\Delta V_{\perp}\ll V$. We then require that
\begin{equation}
b \gg \frac{2G(M+m)}{V^2} \equiv 2 r_s.
\end{equation}
\noindent Note this radius differs from the strong encounter radius we used previously because $M\ne m$ in this case.
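For a rough sense of the scale involved (illustrative numbers, not taken from SG): a galaxy of mass $M\sim10^{11}\Msun$ moving at $V\sim1000~\km~\s^{-1}$ through a cluster has
\begin{equation}
2 r_s = \frac{2G(M+m)}{V^2} \approx \frac{2GM}{V^2} \approx 0.9~\kpc,
\end{equation}
\noindent so the weak-encounter approximation holds for impact parameters of more than a few kiloparsecs.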
The perturber must obtain an equal and opposite momentum, so the total kinetic energy in the perpendicular motion becomes
\begin{equation}
\Delta KE_{\perp} = \frac{M}{2}\left(\frac{2Gm}{bV}\right)^2 + \frac{m}{2}\left(\frac{2GM}{bV}\right)^2 = \frac{2G^2mM(M+m)}{b^2 V^2}.
\end{equation}
\noindent The smaller object acquires most of this energy, which must come at the expense of the forward motion of the larger object, whose velocity changes by an amount $\Delta V_{\parallel}$. Long before and long after the encounter, when the galaxy and the perturber are far apart, the total kinetic energy must be the same. We then have that
\begin{equation}
\frac{M}{2}V^2 = \Delta KE_{\perp} + \frac{M}{2}(V + \Delta V_{\parallel})^2 + \frac{m}{2}\left(\frac{M}{m} \Delta V_{\parallel} \right)^2
\end{equation}
\noindent If we take $\Delta V_{\parallel} \ll V$, then the terms proportional to $\Delta V_{\parallel}^2$ are very small and can be ignored. We can then find the amount by which each perturber reduces the velocity of the galaxy with mass $M$ as
\begin{equation}
- \Delta V_{\parallel} \approx \frac{\Delta KE_{\perp}}{M V} = \frac{2G^2 m(M+m)}{b^2 V^3}.
\end{equation}
OK, now what if there is a number density $n$ (per cubic parsec, say) of perturbers of mass $m$? We then have to integrate through the cylinder of radius $b$ to find the total effect on the galaxy of mass $M$. We find that
\begin{equation}
-\frac{dV}{dt} = \int_{b_{\mathrm{min}}}^{b_{\mathrm{max}}} n V \frac{2G^2 m (M+m)}{b^2 V^3} 2\pi b db = \frac{4\pi G^2(M+m)}{V^2} n m \ln \Lambda
\end{equation}
\noindent where $\Lambda\equiv b_{\mathrm{max}}/b_{\mathrm{min}}$. Some interesting things to note:
\begin{enumerate}
\item The slower the galaxy $M$ moves, the larger its deceleration.
\item If $V\ll\sigma$, where $\sigma$ is the velocity dispersion of perturbers, we find that $dV/dt\propto -V$. This is the same as what happens to a parachutist.
\item The net effect is to lower the kinetic energy of the galaxies' motions over time. Before the encounter, the kinetic energy of one of the galaxies is
\begin{equation}
E_0 = KE_0 + PE_0 = - KE_0
\end{equation}
\noindent since the potential energy is $ PE_0 = - 2KE_0$ in virial equilibrium. Dynamical friction increases the energy in random internal motions by $\Delta KE$. After the system again reaches virial equilibrium, the kinetic energy is less than before
\begin{equation}
KE_1 = - (E_0 + \Delta KE) = KE_0 - \Delta KE
\end{equation}
\noindent Stars that gain the most kinetic energy are ejected.
\item Groups have lower relative velocities, so dynamical friction should operate more efficiently there.
\end{enumerate}
\section{Lecture 10: Gravitational Lensing}
Einstein predicted that light passing a distance $b$ past a mass $M$ would be deflected by an angle
\begin{equation}
\label{eqn:alpha_grav_lens}
\alpha \approx \frac{4GM}{bc^2} = \frac{2R_s}{b}
\end{equation}
\noindent where $R_s = 2 GM/c^2$ is the {\it Schwarzschild radius}. For the Sun, $R_s\sim3~\km$. The approximation holds only for small deflections $\alpha\ll1$. Note this is exactly twice the deflection determined by the impulse approximation. {\bf Show figure 7.14 of SG.} Without the lens $L$, the object would appear at an angle $\beta = y/d_{S}$ as long as $d_{S} \gg y$. The light is bent by $\alpha$, so the object instead appears at an angle $\theta \approx x/d_S$ (for $d_S \gg x$). When the bending is small $x-y = \alpha d_{LS}$. The impact parameter in the lens plane is $b = \theta d_{L}$ as long as $d_L \gg b$.
If we divide Equation \ref{eqn:alpha_grav_lens} by $d_S$ we find
\begin{equation}
\theta - \beta = \frac{\alpha d_{LS}}{d_S} = \frac{1}{\theta} \frac{4 GM}{c^2} \frac{d_{LS}}{d_L d_S} \equiv \frac{1}{\theta} \theta_E^2
\end{equation}
\noindent where $\theta_E$ is called the {\it Einstein radius}. We have a quadratic equation for the angular distance $\theta$ between $L$ and the image's position:
\begin{equation}
\theta^2 - \beta \theta - \theta_E^2 = 0,~\mathrm{so}~\theta_{\pm} = \frac{\beta \pm \sqrt{\beta^2 + 4 \theta_E^2}}{2}.
\end{equation}
\noindent A star directly behind the lens with $\beta=0$ will be seen as a ring on the sky of radius $\theta_E$. When $\beta>0$, the image at $\theta_{+}$ is further away from the lens with $\theta_{+}>\beta$ and is outside the Einstein radius with $\theta_{+}>\theta_E$ (these were the images seen in the Sun's lensing). The image at $\theta_{-}$ is inverted, on the other side of the lens, and is within the Einstein radius. Stars in the disk plane of the Milky Way will lens each other. We often cannot resolve the lens and the source separately, but we will see the magnification of the star -- we call this microlensing. Gravitational lensing leaves the surface brightness unchanged but increases the area of an extended source on the sky. The increase of the apparent brightness is then just proportional to the increase in the area. {\bf Show figure 7.15 of SG.} Consider a small source $S'$ lying in an annulus centered on $L$ between radius $y$ and $y+\Delta y$. An image $I$ of $S'$ occupies the same angle $\delta \Phi$ but the distance from the center is expanded or contracted as $x/y = \theta/\beta$ while $\delta x/\delta y= d\theta/d\beta$. The ratio of the areas is
\begin{equation}
\frac{A_{\pm}(image)}{A(source)} = \left| \frac{\theta}{\beta}\frac{d\theta}{d\beta}\right| = \frac{1}{4}\left( \frac{\beta}{\sqrt{\beta^2 + 4 \theta_E^2}} + \frac{\sqrt{\beta^2 + 4 \theta_E^2}}{\beta}\pm 2 \right)
\end{equation}
\noindent The image at $\theta_{+}$ is always brighter than the source and is stretched in the tangential direction. The closer image is dimmer unless
\begin{equation}
\beta^2 < (3-2\sqrt{2})\theta_E^2 / \sqrt{2}
\end{equation}
\noindent or
\begin{equation}
\beta \lesssim 0.348 \theta_E.
\end{equation}
\section{Lecture 11: Expansion History}
The rate of expansion is determined by the gravitational effects of the energy densities the universe contains. We can model the expansion using Newtonian physics first, and then use general relativity to revise for a more correct answer. Consider a sphere of radius $r$ at time $t$ when the universe has a typical density $\rho(t)$, and assume $r$ is much less than any curvature radius in the universe. Assuming symmetry about $r=0$, the gravitational force at radius $r$ is just supplied by the mass within the sphere. If the sphere is large enough that pressure forces are small, then the force on an object of mass $m$ at radius $r$ is
\begin{equation}
m \frac{d^2 r}{dt^2} = - \frac{G m M(<r)}{r^2} = - \frac{4\pi Gm}{3}\rho(t)r.
\end{equation}
\noindent The radius of the sphere of matter is expanding with the rest of the universe, so $r\propto R(t)$. The test mass $m$ cancels, such that
\begin{equation}
\label{eqn:friedman_A}
\ddot{R}(t) = - \frac{4\pi G}{3} \rho(t) R(t).
\end{equation}
\noindent The mass within the sphere doesn't change, so $\rho(t) R^3(t)$ is constant.
We can multiply by $\dot{R}(t)$ to find
\begin{equation}
\frac{1}{2} \frac{d}{dt}[\dot{R}^2(t)] = - \frac{4 \pi G}{3} \frac{\rho(t_0)R^3(t_0)}{R^2(t)} \dot{R}(t)
\end{equation}
\noindent where $t_0$ is the present day. We then integrate to find
\begin{equation}
\label{eqn:friedman_B}
\dot{R}^2(t) = \frac{8\pi G}{3}\rho(t) R^2(t) - kc^2
\end{equation}
\noindent where $k$ is a constant of integration. It turns out that Equation \ref{eqn:friedman_B} is valid in GR, and it tells us that $k$ is the same constant as in the metric. We appeal to thermodynamics to tell us that as heat $\Delta Q$ flows into a volume $V$ its internal energy $E$ must increase, or it expands and does work against pressure
\begin{equation}
\Delta Q = \Delta E + p \Delta V = V \Delta(\rho c^2) + (\rho c^2 + p)\Delta V
\end{equation}
\noindent where $\rho$ includes all the forms of matter and energy. But no volume $V$ gains heat at the expense of another, so $\Delta Q = 0$ and
\begin{equation}
0 = \Delta \rho + \left(\rho + \frac{p}{c^2}\right)\frac{\Delta V}{V}
\end{equation}
\noindent or, since $V \propto R^3(t)$,
\begin{equation}
\frac{d\rho}{dt} = - 3 \frac{\dot{R}(t)}{R(t)}\left( \rho + \frac{p}{c^2}\right)
\end{equation}
\noindent Differentiating Equation \ref{eqn:friedman_B} and substituting for $d\rho/dt$ yields
\begin{equation}
\label{eqn:friedman_C}
\ddot{R}(t) = - \frac{4\pi G}{3} R(t) \left[ \rho(t) + \frac{3p(t)}{c^2}\right]
\end{equation}
\noindent So in GR, the pressure $p$ adds to the gravitational attraction. Equations \ref{eqn:friedman_B} and \ref{eqn:friedman_C} are called the {\it Friedmann equations}. For cool matter the pressure $p \sim \rho c_s^2$ where the sound speed $c_s \ll c$, and the pressure term in equation \ref{eqn:friedman_C} is negligible. Then $\rho(t)\propto R^{-3}$. For radiation and relativistic particles, $p\approx \rho c^2/3$ and $\rho(t) \propto R^{-4}(t)$. For matter and radiation $\rho + 3p/c^2$ is positive, and the universe decelerates. The quantity $\rho(t)R^2(t)$ decreases as $R(t)$ grows, so the right hand side of eqn \ref{eqn:friedman_B} becomes negative for large $R$ if $k=1$. Since $\dot{R}^2$ cannot be negative, $R$ will reach a maximum and turn around. For $k\le0$, the expansion never ends. GR allows for a {\it vacuum energy} with constant density $\rho_{\Lambda} = \Lambda/(8\pi G)$. Since $\rho_{\Lambda}$ cannot change, the $\Delta Q$ equation forces the pressure to be $p_{\Lambda} = -\rho_{\Lambda} c^2 = -\Lambda c^2/(8\pi G)$, which is more like a tension pulling out than a pressure pushing in. Once the universe expands so that $\rho_{\Lambda}$ is larger than the other components, the universe expands exponentially. This kind of expansion may have happened in the early universe (inflation), which enables us to explain several cosmological puzzles.
\section{Lecture 11: Growth of Structure}
The initial perturbations seeded by cosmic inflation likely had a power spectrum $P(k) \propto k$. These perturbations were affected by physical processes at later times to alter $P(k)$ on small scales. We see evidence of this in the cosmic microwave background.
As photons try to move out of the gravitational potential wells of the initial density perturbations, the photons experience a {\it gravitational redshift} from the potential perturbation $\Delta\Phi_g$, which changes the temperature $T$ of the radiation by an amount $\Delta T$ according to
\begin{equation}
\left.\frac{\Delta T}{T}\right|_{\mathrm{grav}} \sim \Delta \Phi_g c^{-2}
\end{equation}
\noindent The temperature is reduced where the potential is unusually deep since $\Delta \Phi_g$ is negative there, and time runs more slowly such that $\Delta t/t =\Delta \Phi_g/c^2$ and we see the gas at an earlier time when it was hotter. The temperature declines as $T\propto 1/ a(t)$, so we have
\begin{equation}
\left.\frac{\Delta T}{T}\right|_{\mathrm{time}} = -\frac{\Delta a}{a} = -\frac{2}{3}\frac{\Delta t}{t} = -\frac{2}{3}\frac{\Delta \Phi_g}{c^2}
\end{equation}
\noindent where we've used $a\propto t^{2/3}$ for the matter-dominated era. The net effect is that $\Delta T/T \sim \Delta \Phi_g / (3 c^2)$. If the region has density $\rho = \bar{\rho}(1+\delta)$ and a corresponding mass excess of $\Delta M = 4\pi \bar{\rho}R^3 \delta / 3$, then
\begin{equation}
3c^2\frac{\Delta T}{T} = \Delta \Phi_g \sim - \frac{2G\Delta M}{R} = - \frac{8\pi}{3} G \bar{\rho}R^2 \delta \approx - \delta(t)[\bar{H}(t)R]^2
\end{equation}
\noindent such that radiation from dense regions is {\it colder}. We can model the temperature map of the sky by expanding the pattern of the CMB in spherical harmonics as
\begin{equation}
\Delta T(\theta, \phi) = \sum_{l>1} \sum_{-l\le m \le l} a_l^m Y_l^m(\theta, \phi).
\end{equation}
\noindent The theoretical predictions are often expressed in terms of $C_l = \ave{|a_l^m|^2}$ averaged over $m$ since this does not depend on the direction of ``north'' in the map. A commonly plotted quantity is $\Delta_T^2 = T^2 l(l+1)C_l/(2\pi)$. Pressure forces can act to suppress structure, but only on scales smaller than the sound horizon. The sound horizon scale when the gas becomes transparent to photons (recombination) is
\begin{equation}
R(t_{rec}) \sigma_H = 3 c t_{rec} = \frac{2c}{H(t_{rec})} \approx \frac{2c}{H_0\sqrt{\Omega_m}(1+z_{rec})^{3/2}}.
\end{equation}
\noindent This works out to about $184/(h^2 \Omega_m)^{1/2}$~Mpc today. The angle $\theta_H$ subtended by the sound horizon depends on the angular-size distance to $z_{rec}$. When $\Omega_\Lambda\to0$ and $\Omega_0 z\gg1$, then $d_A\to 2c/(H_0 z \Omega_0)$. We then have
\begin{equation}
\theta_H \approx \frac{R(t_{rec})\sigma_H}{d_A(t_{rec})} \approx \sqrt{\frac{\Omega_0}{z_{rec}}} \approx 2^\circ \times \sqrt{\Omega_0},
\end{equation}
\noindent and only points separated by less than this angle on the sky can communicate before $t_{rec}$. For $\Omega_0=1$, $\Delta_T$ is largest on the scale of about a degree, where the {\it first acoustic peak} of the CMB is found. A model with $\Omega_0=0.3$ provides about half this angle. In a cosmology close to what we think ours is, it turns out the first peak is at $l\approx 220$, which corresponds to a size scale of about $105~\Mpc$ today, a sphere that contains $2.5\times10^{16}M_{\odot}$. The second peak is at $l\approx540$.
\section{Lecture 12: Tidal Torques}
Peculiar velocities grow as $\vv\propto t^{1/3}$ while distances grow as $d\propto a(t) \propto t^{2/3}$. In the linear regime, angular momentum then grows as $d \times v\propto t$. The region stops accruing angular momentum once it turns around and begins to collapse. Therefore, more overdense regions tend to have less time to spin up.
However, tidal torques are also stronger in dense regions, and so objects acquire roughly the same average angular momentum in relation to their mass or energy. A galaxy of radius $R$, mass $M$, and angular momentum $L$ will rotate at an angular speed
\begin{equation}
\omega \sim L / (MR^2).
\end{equation}
\noindent The angular speed of a circular orbit at radius $R$ is
\begin{equation}
\omega_c^2 R \sim G M/R^2.
\end{equation}
The energy is $E\sim - GM^2/R$. We therefore have
\begin{equation}
\frac{\omega}{\omega_c}\equiv\lambda = \frac{L}{MR^2} \times \frac{R^{3/2}}{\sqrt{GM}} = \frac{L|E|^{1/2}}{GM^{5/2}}.
\end{equation}
\noindent From N-body simulations, we expect that $\lambda\sim$ a few percent. Ellipticals have about this spin, but the Milky Way has $\lambda \approx 0.5$. Since the MW is a disk, energy dissipation can help amplify $\lambda$. This also argues for a dark halo, as otherwise the disk would not have time to form. Without a halo, $L$ and $M$ remain fixed as the disk moves in. Since $\lambda \propto |E|^{1/2}$ and $|E| \propto 1/R$, the radius must decrease by $100\times$ for $\lambda$ to grow by the required factor of ten. Disk material near the Sun would need to originate $800~\kpc$ from the center, but $M(<R)$ would be only the mass that now lies interior to the Sun's orbit. The orbital period of the Sun would then be $1000\times$ longer than observed, or about $240~\Gyr$. It would take many times the age of the universe to make the disk. Since the Milky Way has a large DM halo, the gas in the MW disk originates from a radius closer by a factor $M_d/M_{DM}$. The decrease in size is only a factor of 10. Shrinking at $200~\km~\s^{-1}$ from a radius of $80~\kpc$, the disk could have formed in $\lesssim2~\Gyr$.
\end{document}
{ "alphanum_fraction": 0.7135746858, "avg_line_length": 42.6619217082, "ext": "tex", "hexsha": "91faa9c18a58e5c8da35fbcdca95fb5d28be8af5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "95cd675c23b9c44242f428516ed3e0fca54d3b4f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "brantr/astro400B", "max_forks_repo_path": "final_review.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "95cd675c23b9c44242f428516ed3e0fca54d3b4f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "brantr/astro400B", "max_issues_repo_path": "final_review.tex", "max_line_length": 214, "max_stars_count": 1, "max_stars_repo_head_hexsha": "95cd675c23b9c44242f428516ed3e0fca54d3b4f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "brantr/astro400B", "max_stars_repo_path": "final_review.tex", "max_stars_repo_stars_event_max_datetime": "2015-05-03T23:30:58.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-03T23:30:58.000Z", "num_tokens": 11727, "size": 35964 }
\chapter{Appendix}

\begin{algorithm}[H]
\caption{Branch-and-Bound algorithm for a minimization problem}
\label{alg:lit:bab}
\begin{algorithmic}[1]
\Statex
\State $Incumbent=\infty$
\State Initialize the tree with root node, $n_0$, and solve the relaxed LP
\Let{$BestBound$}{LP solution of $n_0$}
\Let{$Nodes$}{$\{n_0\}$} \Comment{\emph{The set of open nodes, still to be fathomed}}
\Statex
\While{$Nodes\neq \emptyset$}
\State Select a node $n\in Nodes$ and remove it from $Nodes$
\State Choose a relaxed (fractional) variable, $x_i$, in the LP solution of $n$
\State Branch on $x_i$, splitting its domain into two subsets, creating nodes $n_a$, $n_b$
\State Solve the relaxed LP of each new node
\For{$n_{new} \in \{n_a, n_b\}$}
    \If{the relaxed LP is infeasible \textbf{or} $LPsol \geq Incumbent$}
        \State Fathom $n_{new}$ \Comment{\emph{Pruned by infeasibility or by bound}}
    \ElsIf{$LPsol$ is integer}
        \Let{$Incumbent$}{$LPsol$} \Comment{\emph{Found new best solution}}
    \Else
        \Let{$Nodes$}{$Nodes\cup\{n_{new}\}$} \Comment{\emph{Node must be explored further}}
    \EndIf
\EndFor
\Let{$BestBound$}{minimum LP solution over all $n'\in Nodes$} \Comment{\emph{Global lower bound}}
\EndWhile
\Statex
\State \Return $Incumbent$
\end{algorithmic}
\end{algorithm}

\begin{lstlisting}[caption={Full MiniZinc model for CP sub-problems},label={lst:appen:CPmodel},language=minizinc]
% Constraint Programming model for a station's sub-problem

include "cumulative.mzn";
include "disjunctive.mzn";
include "redefinitions.mzn";

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%
% INSTANCE INITIALISATION
int: nTasks;
int: nPrecs;
int: maxLoad;    % maximum makespan
set of int: TASK;
set of int: PREC = 1..nPrecs;
set of int: TIME = 0..maxLoad;
array[TASK] of int: dur;                  % duration
array[TASK] of set of TASK: suc;          % set of successors
array[TASK,TASK] of int: forwSU;          % forward setup times
array[TASK,TASK] of int: backSU;          % backward setup times
array[TASK] of set of TASK: followForw;   % allowed followers in forward load
array[TASK] of set of TASK: followBack;   % allowed followers in backward load
array[TASK] of set of TASK: precedeForw;  % allowed preceders in forward load
array[TASK] of set of TASK: precedeBack;  % allowed preceders in backward load

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%
% DECISION VARIABLES
array[TASK] of var TIME: s;          % start time
array[TASK,TASK] of var TIME: spair; % start time pairings
array[TASK,TASK] of var bool: y;     % forward direction following
array[TASK,TASK] of var bool: z;     % backward direction following
var TIME: load;                      % load

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%
% CONSTRAINTS
% Only one follower in either station load direction
constraint forall ( i in TASK )( sum( j in followForw[i] )( y[i,j] ) + sum( j in followBack[i] )( z[i,j] ) == 1 );

% Only one preceder in either station load direction
constraint forall ( j in TASK )( sum( i in precedeForw[j] )( y[i,j] ) + sum( i in precedeBack[j] )( z[i,j] ) == 1 );

% Exactly one backward setup
constraint sum( i in TASK, j in followBack[i] )( z[i,j] ) == 1 ;

% Precedence constraints
constraint forall ( i in TASK, j in suc[i] )( s[i] + dur[i] + forwSU[i,j]*y[i,j] <= s[j] );

% Forward station load respects setup times
constraint forall ( i in TASK, j in followForw[i] )( y[i,j] <-> ( s[i] + dur[i] + forwSU[i,j] == s[j] ) );

% Backward station load respects station load
constraint forall ( i in TASK )( s[i] + dur[i]
+ sum( j in followBack[i] )( backSU[i,j]*z[i,j] ) <= load ); %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~% % REDUNDANT CONSTRAINTS % Cumulative Global constraint cumulative( [ spair[i,j] | i in TASK, j in TASK ], [ dur[i] + forwSU[i,j] | i in TASK, j in TASK ], [ y[i,j] | i in TASK, j in TASK ], 1 ); constraint forall( i in TASK, j in TASK )( s[i] == spair[i,j] ); % Fix some ordering variables to zero constraint forall ( i in TASK, j in TASK where not( j in followForw[i] ) )( y[i,j] == 0 ); constraint forall ( i in TASK, j in TASK where not( j in followBack[i] ) )( z[i,j] == 0 ); %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~% % OBJECTIVE ann: my_search; % Solve solve :: my_search minimize load; %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~% % OUTPUT output if full_output == 0 then ["load = " ++ show(load) ++ "\n"] elseif full_output == 1 then ["load = " ++ show(load) ++ "\n"] ++ ["start = " ++ show(s) ++ "\n"] else [""] endif; \end{lstlisting} \begin{table}[tpb] \centering \caption{Breakdown of all 1076 adapted SBF2 instances} \vspace{2mm} \begin{tabular}{llrrrrr} \toprule Class & Creator & \#Inst. & $n$ & $m$ & $|E|$ & $OS$ \\ \midrule\midrule 1 & & 108 & 7-21 & 2-8 & 6-27 & \\ & {\tt mertens} & 6 & 7 & 2-6 & 6 & 52.40\\ & {\tt bowman8} & 1 & 8 & 5 & 8 & 75.00\\ & {\tt jaeschke} & 5 & 9 & 3-8 & 11 & 83.33\\ & {\tt jackson} & 6 & 11 & 3-8 & 13 & 58.18\\ & {\tt mansoor} & 3 & 11 & 2-4 & 11 & 60.00\\ & {\tt mitchell} & 6 & 21 & 3-8 & 27 & 70.95\\\midrule 2 & & 112 & 25-30 & 3-14 & 32-40 & \\ & {\tt roszieg} & 6 & 25 & 4-10 & 32 & 71.67\\ & {\tt heskia} & 6 & 28 & 3-8 & 40 & 22.49\\ & {\tt buxey} & 7 & 29 & 7-13 & 36 & 50.74\\ & {\tt sawyer30} & 9 & 30 & 5-14 & 32 & 44.83\\\midrule 3 & & 176 & 32-58 & 3-31 & 38-82 & \\ & {\tt lutz1} & 6 & 32 & 6-11 & 38 & 83.47\\ & {\tt gunther} & 7 & 35 & 7-14 & 43 & 59.50\\ & {\tt kilbrid} & 10 & 45 & 3-10 & 62 & 44.60\\ & {\tt hahn} & 5 & 53 & 4-8 & 82 & 83.82\\ & {\tt warnecke} & 16 & 58 & 14-31 & 70 & 59.10\\\midrule 4 & & 224 & 70-83 & 7-63 & 86-112 & \\ & {\tt tonge} & 16 & 70 & 7-23 & 86 & 59.04 \\ & {\tt wee-mag} & 24 & 75 & 31-63 & 87 & 22.70 \\ & {\tt arc83} & 16 & 83 & 8-21 & 112 & 59.10 \\\midrule 5 & & 144 & 89-94 & 12-49 & 116-181 & \\ & {\tt lutz2} & 11 & 89 & 24-49 & 116 & 77.60 \\ & {\tt lutz3} & 12 & 89 & 12-23 & 116 & 77.60 \\ & {\tt mukherje} & 13 & 94 & 13-25 & 181 & 44.80 \\\midrule 6 & & 208 & 111-148 & 7-51 & 175-176 & \\ & {\tt arc111} & 17 & 111 & 9-27 & 176 & 40.40 \\ & {\tt barthold} & 8 & 148 & 7-14 & 175 & 25.80 \\ & {\tt barthol2} & 27 & 148 & 25-51 & 175 & 25.80 \\\midrule 7 & & 104 & 297-297 & 25-50 & 423-423 & \\ & {\tt scholl} & 26 & 297 & 25-50 & 423 & 58.20 \\\midrule \multicolumn{2}{l}{Overall} & 1076 & 7-297 & 2-31 & 6-82 & \\ \bottomrule \end{tabular} \label{tab:appen:dataSBF2} \end{table} \begin{table}[tpb] \tiny \caption{Full results of FSBF-2 with (\ref{eq:mip:valIneq1}) on classes 1,2 and 3} \centering \vspace{2mm} \begin{tabular}{cclrrrrrr} \toprule Class & Alpha & Creator & \#Nodes & \%Gap & \#No solution & \#Optimal & \%Optimal & Runtime(s) \\\midrule\midrule 1 & 1.00 & {\tt mertens} & 117 & 0.00 & 0 & 6/6 & 100.00 & 0.46 \\ & & {\tt bowman8} & 0 & 0.00 & 0 & 1/1 & 100.00 & 0.40 \\ & & {\tt jaeschke} & 249 & 0.00 & 0 & 5/5 & 100.00 & 0.60 \\ & & {\tt jackson} & 3,798 & 0.00 & 0 & 6/6 & 100.00 & 6.31 \\ & & {\tt mansoor} & 5,237 & 0.00 & 0 & 3/3 & 100.00 & 3.95 \\ & & {\tt mitchell} & 234,143 & 0.79 & 0 & 5/6 
& 83.33 & 682.81 \\ & 0.75 & {\tt mertens} & 80 & 0.00 & 0 & 6/6 & 100.00 & 0.30 \\ & & {\tt bowman8} & 0 & 0.00 & 0 & 1/1 & 100.00 & 0.25 \\ & & {\tt jaeschke} & 106 & 0.00 & 0 & 5/5 & 100.00 & 0.40 \\ & & {\tt jackson} & 4,456 & 0.00 & 0 & 6/6 & 100.00 & 8.91 \\ & & {\tt mansoor} & 4,010 & 0.00 & 0 & 3/3 & 100.00 & 2.52 \\ & & {\tt mitchell} & 188,922 & 1.02 & 0 & 4/6 & 66.67 & 799.00 \\ & 0.50 & {\tt mertens} & 80 & 0.00 & 0 & 6/6 & 100.00 & 0.32 \\ & & {\tt bowman8} & 0 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\ & & {\tt jaeschke} & 143 & 0.00 & 0 & 5/5 & 100.00 & 0.47 \\ & & {\tt jackson} & 4,456 & 0.00 & 0 & 6/6 & 100.00 & 8.81 \\ & & {\tt mansoor} & 3,902 & 0.00 & 0 & 3/3 & 100.00 & 2.70 \\ & & {\tt mitchell} & 166,621 & 1.02 & 0 & 4/6 & 66.67 & 737.33 \\ & 0.25 & {\tt mertens} & 80 & 0.00 & 0 & 6/6 & 100.00 & 0.37 \\ & & {\tt bowman8} & 0 & 0.00 & 0 & 1/1 & 100.00 & 0.06 \\ & & {\tt jaeschke} & 107 & 0.00 & 0 & 5/5 & 100.00 & 0.42 \\ & & {\tt jackson} & 4,456 & 0.00 & 0 & 6/6 & 100.00 & 8.76 \\ & & {\tt mansoor} & 1,394 & 0.00 & 0 & 3/3 & 100.00 & 1.21 \\ & & {\tt mitchell} & 166,876 & 1.02 & 0 & 4/6 & 66.67 & 738.00 \\[1mm] \multicolumn{2}{l}{Overall} & & 43,437 & 0.16 & 0 & 101/108 & 93.52 & 166.57 \\\midrule 2 & 1.00 & {\tt roszieg} & 269,369 & 12.92 & 0 & 0/6 & 0.00 & 1800.47 \\ & & {\tt heskia} & 401,337 & 12.40 & 0 & 0/6 & 0.00 & 1800.89 \\ & & {\tt buxey} & 55,003 & 28.79 & 0 & 0/7 & 0.00 & 1800.70 \\ & & {\tt sawyer30} & 30,589 & 28.26 & 0 & 0/9 & 0.00 & 1800.84 \\ & 0.75 & {\tt roszieg} & 211,653 & 8.15 & 0 & 0/6 & 0.00 & 1800.44 \\ & & {\tt heskia} & 375,894 & 8.60 & 0 & 0/6 & 0.00 & 1800.88 \\ & & {\tt buxey} & 26,939 & 25.80 & 0 & 0/7 & 0.00 & 1800.70 \\ & & {\tt sawyer30} & 23,766 & 25.86 & 0 & 0/9 & 0.00 & 1800.86 \\ & 0.50 & {\tt roszieg} & 364,062 & 5.29 & 0 & 0/6 & 0.00 & 1800.41 \\ & & {\tt heskia} & 354,638 & 6.84 & 0 & 0/6 & 0.00 & 1800.68 \\ & & {\tt buxey} & 38,177 & 16.03 & 0 & 0/7 & 0.00 & 1800.57 \\ & & {\tt sawyer30} & 34,790 & 16.35 & 0 & 0/9 & 0.00 & 1800.67 \\ & 0.25 & {\tt roszieg} & 364,552 & 5.29 & 0 & 0/6 & 0.00 & 1800.41 \\ & & {\tt heskia} & 330,465 & 3.81 & 0 & 0/6 & 0.00 & 1800.71 \\ & & {\tt buxey} & 85,544 & 8.13 & 0 & 0/7 & 0.00 & 1800.64 \\ & & {\tt sawyer30} & 71,434 & 7.64 & 0 & 0/9 & 0.00 & 1800.59 \\[1mm] \multicolumn{2}{l}{Overall} & & 168,899 & 13.76 & 0 & 0/112 & 0.00 & 1800.66 \\\midrule 3 & 1.00 & {\tt lutz1} & 52,738 & 10.98 & 0 & 1/6 & 16.67 & 1733.60 \\ & & {\tt gunther} & 37,277 & 31.30 & 0 & 0/7 & 0.00 & 1800.91 \\ & & {\tt kilbrid} & 45,651 & 24.87 & 1 & 0/10 & 0.00 & 1801.75 \\ & & {\tt hahn} & 74,365 & 14.70 & 0 & 0/5 & 0.00 & 1801.53 \\ & & {\tt warnecke} & 4,813 & 58.44 & 0 & 0/16 & 0.00 & 1802.23 \\ & 0.75 & {\tt lutz1} & 49,364 & 10.60 & 0 & 0/6 & 0.00 & 1800.53 \\ & & {\tt gunther} & 39,351 & 22.45 & 0 & 0/7 & 0.00 & 1800.90 \\ & & {\tt kilbrid} & 85,124 & 18.87 & 2 & 0/10 & 0.00 & 1801.61 \\ & & {\tt hahn} & 112,021 & 7.18 & 0 & 0/5 & 0.00 & 1801.44 \\ & & {\tt warnecke} & 4,580 & 48.45 & 1 & 0/16 & 0.00 & 1802.40 \\ & 0.50 & {\tt lutz1} & 115,017 & 2.81 & 0 & 2/6 & 33.33 & 1706.07 \\ & & {\tt gunther} & 26,716 & 18.32 & 0 & 0/7 & 0.00 & 1800.84 \\ & & {\tt kilbrid} & 49,411 & 17.51 & 1 & 0/10 & 0.00 & 1801.55 \\ & & {\tt hahn} & 108,078 & 7.62 & 0 & 0/5 & 0.00 & 1801.18 \\ & & {\tt warnecke} & 7,121 & 41.94 & 1 & 0/16 & 0.00 & 1802.23 \\ & 0.25 & {\tt lutz1} & 63,440 & 1.76 & 0 & 4/6 & 66.67 & 733.06 \\ & & {\tt gunther} & 31,915 & 11.11 & 0 & 0/7 & 0.00 & 1800.79 \\ & & {\tt kilbrid} & 100,912 & 5.77 & 3 & 0/10 & 0.00 & 
1801.41 \\ & & {\tt hahn} & 161,258 & 4.92 & 0 & 0/5 & 0.00 & 1801.18 \\ & & {\tt warnecke} & 7,657 & 31.58 & 0 & 0/16 & 0.00 & 1802.41 \\[1mm] \multicolumn{2}{l}{Overall} & & 46,060 & 19.56 & 9 & 7/176 & 3.98 & 1759.67 \\ \bottomrule \end{tabular} \label{tab:appen:mipfsbf} \end{table} \begin{table}[tpb] \tiny \caption{Full results of SCBF-2 with (\ref{eq:mip:valIneq1}) on classes 1,2 and 3} \centering \vspace{2mm} \begin{tabular}{cclrrrrrr} \toprule Class & Alpha & Creator & \#Nodes & \%Gap & \#No solution & \#Optimal & \%Optimal & Runtime(s) \\\midrule\midrule 1 & 1.00 & {\tt mertens} & 623 & 0.00 & 0 & 6/6 & 100.00 & 0.31 \\ & & {\tt bowman8} & 568 & 0.00 & 0 & 1/1 & 100.00 & 0.36 \\ & & {\tt jaeschke} & 739 & 0.00 & 0 & 5/5 & 100.00 & 0.43 \\ & & {\tt jackson} & 11,149 & 0.00 & 0 & 6/6 & 100.00 & 9.00 \\ & & {\tt mansoor} & 10,954 & 0.00 & 0 & 3/3 & 100.00 & 5.38 \\ & & {\tt mitchell} & 1,123,442 & 11.10 & 0 & 0/6 & 0.00 & 1800.25 \\ & 0.75 & {\tt mertens} & 500 & 0.00 & 0 & 6/6 & 100.00 & 0.30 \\ & & {\tt bowman8} & 142 & 0.00 & 0 & 1/1 & 100.00 & 0.21 \\ & & {\tt jaeschke} & 596 & 0.00 & 0 & 5/5 & 100.00 & 0.28 \\ & & {\tt jackson} & 18,033 & 0.00 & 0 & 6/6 & 100.00 & 11.11 \\ & & {\tt mansoor} & 9,303 & 0.00 & 0 & 3/3 & 100.00 & 5.04 \\ & & {\tt mitchell} & 957,732 & 11.81 & 0 & 0/6 & 0.00 & 1800.24 \\ & 0.50 & {\tt mertens} & 500 & 0.00 & 0 & 6/6 & 100.00 & 0.27 \\ & & {\tt bowman8} & 34 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\ & & {\tt jaeschke} & 665 & 0.00 & 0 & 5/5 & 100.00 & 0.26 \\ & & {\tt jackson} & 18,033 & 0.00 & 0 & 6/6 & 100.00 & 10.85 \\ & & {\tt mansoor} & 11,228 & 0.00 & 0 & 3/3 & 100.00 & 6.38 \\ & & {\tt mitchell} & 1,116,662 & 12.23 & 0 & 0/6 & 0.00 & 1800.23 \\ & 0.25 & {\tt mertens} & 500 & 0.00 & 0 & 6/6 & 100.00 & 0.29 \\ & & {\tt bowman8} & 49 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\ & & {\tt jaeschke} & 545 & 0.00 & 0 & 5/5 & 100.00 & 0.28 \\ & & {\tt jackson} & 18,033 & 0.00 & 0 & 6/6 & 100.00 & 10.73 \\ & & {\tt mansoor} & 12,718 & 0.00 & 0 & 3/3 & 100.00 & 7.20 \\ & & {\tt mitchell} & 1,117,030 & 12.23 & 0 & 0/6 & 0.00 & 1800.24 \\[1mm] \multicolumn{2}{l}{Overall} & & 244,811 & 1.97 & 0 & 84/108 & 77.78 & 403.17 \\\midrule 2 & 1.00 & {\tt roszieg} & 991,067 & 20.31 & 0 & 0/6 & 0.00 & 1800.30 \\ & & {\tt heskia} & 726,379 & 11.69 & 2 & 0/6 & 0.00 & 1800.60 \\ & & {\tt buxey} & 685,017 & 16.08 & 4 & 0/7 & 0.00 & 1800.50 \\ & & {\tt sawyer30} & 530,560 & 27.54 & 4 & 0/9 & 0.00 & 1800.57 \\ & 0.75 & {\tt roszieg} & 734,178 & 18.44 & 0 & 0/6 & 0.00 & 1800.24 \\ & & {\tt heskia} & 756,204 & 17.72 & 2 & 0/6 & 0.00 & 1800.56 \\ & & {\tt buxey} & 558,135 & 24.44 & 3 & 0/7 & 0.00 & 1800.59 \\ & & {\tt sawyer30} & 497,528 & 29.45 & 4 & 0/9 & 0.00 & 1800.49 \\ & 0.50 & {\tt roszieg} & 1,195,487 & 16.73 & 0 & 0/6 & 0.00 & 1800.31 \\ & & {\tt heskia} & 711,764 & 13.75 & 2 & 0/6 & 0.00 & 1800.52 \\ & & {\tt buxey} & 609,114 & 22.60 & 2 & 0/7 & 0.00 & 1800.50 \\ & & {\tt sawyer30} & 556,436 & 19.70 & 4 & 0/9 & 0.00 & 1800.53 \\ & 0.25 & {\tt roszieg} & 1,193,838 & 16.73 & 0 & 0/6 & 0.00 & 1800.28 \\ & & {\tt heskia} & 652,072 & 11.64 & 2 & 0/6 & 0.00 & 1800.49 \\ & & {\tt buxey} & 580,369 & 23.58 & 0 & 0/7 & 0.00 & 1800.48 \\ & & {\tt sawyer30} & 614,005 & 25.58 & 2 & 0/9 & 0.00 & 1800.47 \\[1mm] \multicolumn{2}{l}{Overall} & & 701,617 & 19.75 & 31 & 0/112 & 0.00 & 1800.47 \\\midrule 3 & 1.00 & {\tt lutz1} & 716,824 & 18.94 & 0 & 0/6 & 0.00 & 1800.37 \\ & & {\tt gunther} & 482,696 & 10.68 & 5 & 0/7 & 0.00 & 1800.65 \\ & & {\tt kilbrid} & 318,856 & -- & 10 & 0/10 & 0.00 & 1801.17 
\\ & & {\tt hahn} & 450,264 & 21.67 & 0 & 0/5 & 0.00 & 1800.85 \\ & & {\tt warnecke} & 170,586 & -- & 16 & 0/16 & 0.00 & 1801.47 \\ & 0.75 & {\tt lutz1} & 644,760 & 16.40 & 0 & 0/6 & 0.00 & 1800.38 \\ & & {\tt gunther} & 525,585 & 21.13 & 4 & 0/7 & 0.00 & 1800.62 \\ & & {\tt kilbrid} & 320,037 & -- & 10 & 0/10 & 0.00 & 1801.08 \\ & & {\tt hahn} & 414,795 & 27.77 & 0 & 0/5 & 0.00 & 1800.86 \\ & & {\tt warnecke} & 167,942 & -- & 16 & 0/16 & 0.00 & 1801.36 \\ & 0.50 & {\tt lutz1} & 473,510 & 15.23 & 0 & 0/6 & 0.00 & 1800.42 \\ & & {\tt gunther} & 463,846 & 28.02 & 2 & 0/7 & 0.00 & 1800.60 \\ & & {\tt kilbrid} & 323,799 & -- & 10 & 0/10 & 0.00 & 1800.96 \\ & & {\tt hahn} & 363,883 & 18.67 & 0 & 0/5 & 0.00 & 1800.92 \\ & & {\tt warnecke} & 135,786 & -- & 16 & 0/16 & 0.00 & 1801.48 \\ & 0.25 & {\tt lutz1} & 475,795 & 16.00 & 0 & 0/6 & 0.00 & 1800.33 \\ & & {\tt gunther} & 427,559 & 23.37 & 2 & 0/7 & 0.00 & 1800.50 \\ & & {\tt kilbrid} & 356,573 & -- & 10 & 0/10 & 0.00 & 1800.97 \\ & & {\tt hahn} & 310,303 & 26.20 & 0 & 0/5 & 0.00 & 1800.70 \\ & & {\tt warnecke} & 111,564 & -- & 16 & 0/16 & 0.00 & 1801.41 \\[1mm] \multicolumn{2}{l}{Overall} & & 326,284 & 12.20 & 117 & 0/176 & 0.00 & 1801.00 \\ \bottomrule \end{tabular} \label{tab:appen:mipSCBF} \end{table}
{ "alphanum_fraction": 0.495857625, "avg_line_length": 42.9947229551, "ext": "tex", "hexsha": "21f0349298713258692acb75d3e271d121cfd3ec", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_forks_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_issues_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_line_length": 115, "max_stars_count": null, "max_stars_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_stars_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8327, "size": 16295 }
%------------------------- % Resume in Latex % Author : Ujjwal Singh % License : MIT %------------------------ \documentclass[letterpaper,11pt]{article} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage[pdftex]{hyperref} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \addtolength{\oddsidemargin}{-0.375in} \addtolength{\evensidemargin}{-0.375in} \addtolength{\textwidth}{1in} \addtolength{\topmargin}{-.5in} \addtolength{\textheight}{1.0in} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] %------------------------- % Custom commands \newcommand{\resumeItem}[2]{ \item\small{ \textbf{#1}{: #2 \vspace{-2pt}} } } \newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}} \renewcommand{\labelitemii}{$\circ$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} %------------------------------------------- %%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %----------HEADING----------------- \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textbf{\href{http://ujjwalksingh.com/}{\Large Ujjwal Kumar Singh}} & Email : \href{mailto:[email protected]}{[email protected]}\\ \href{http://ujjwalksingh.com/}{http://www.ujjwalksingh.com} & Mobile : +91 8826235246 \\ \end{tabular*} %-----------EDUCATION----------------- \section{Education} \resumeSubHeadingListStart \resumeSubheading {Indian Institute of Technology}{Kanpur, India} {Bachelor of Technology in Computer Science and Engineering; CGPA: 8.2}{Aug. 2010 -- Dec. 2014} \resumeSubHeadingListEnd %-----------EXPERIENCE----------------- \section{Experience} \resumeSubHeadingListStart \resumeSubheading {Mindtickle}{Pune, India} {SDEII}{Jun 2017 - Present} \resumeItemListStart \resumeItem{Deep Link and Cross-Platform Integration} {Node REST microservice to generate deep link to help routing on web and on mobile.} \resumeItem{Augment Reality} {Interactive virtual content for user in real environment. Uses image processing and spatial tracking to augment real conspicuous environment.} \resumeItem{Mobile Architecture} {Architect and Developer of main customer android app.} \resumeItemListEnd \resumeSubheading {ruly}{Delhi, India} {Co-Founder}{Oct 2015 - Feb 2017} \resumeItemListStart \resumeItem{Platform} {Social platform for individual legal needs and queries.} \resumeItem{Recommendations and Indexing} {Recommendations based on user activity and likes. Auto-suggestion for query asked based on indexing using solr and ES.} \resumeItem{Database} {Primary database - MySQL database using Amazon RDS. 
Document indexing using Amazon elastic search service.}
      \resumeItem{Data Collection}
        {Collected data for legal judgments, police stations all over India, and lawyers on social platforms.}
      \resumeItem{Content Discovery}
        {Designed the platform with relevant content discovery as a primary concern. Relevant Q\&As, judgments, and legal projects can be searched with a single text query.}
      \resumeItem{Analytics}
        {Logging of all user activities using Kafka and Google Cloud big data services.}
      \resumeItem{Android App}
        {Created a customisable Android app for lawyers and users.}
    \resumeItemListEnd
    \resumeSubheading
      {Samsung Electronics}{Noida, India}
      {Engineer}{July 2014 - Oct 2015}
      \resumeItemListStart
        \resumeItem{Ideation and Commercialisation}
          {Involves sketching and prototyping ideas for patents and developing commercial products for the general public.}
        \resumeItem{RPM: Resource Power Manager}
          {RPM is responsible for managing the power supply to the various components in a mobile device; work covered enhancement of battery life and monitoring of power consumption.}
      \resumeItemListEnd
  \resumeSubHeadingListEnd
%-----------PROJECTS-----------------
\section{Projects}
  \resumeSubHeadingListStart
    \resumeSubItem{Data-Mining Research Publication}
      {K-Dominant Skyline Join Queries: Extending the Join Paradigm to K-Dominant Skylines.}
    \resumeSubItem{Google Assistant}
      {Custom Google Assistant with the capability to book an Uber cab.}
    \resumeSubItem{Slack Bot}
      {Intelligent messaging Slack bot with small-talk capabilities using Dialogflow.}
    \resumeSubItem{Gaming Agent}
      {Agent capable of expertly playing FSM (Finite State Machine) games described using GDL (Game Description Language).}
    \resumeSubItem{Recommendation System}
      {Movie recommender system using collaborative filtering on MovieLens datasets.}
    \resumeSubItem{Hostel Management System}
      {Django-based application to manage a hall (alias for hostel at IITK).}
  \resumeSubHeadingListEnd
%--------ACHIEVEMENTS------------
\section{Achievements}
\resumeSubHeadingListStart
\resumeSubItem{Data Engineering Publication in IEEE Conference}
  {\href{https://ieeexplore.ieee.org/document/7929945/}{K-Dominant Skyline Join Queries: Extending the Join Paradigm to K-Dominant Skylines}}
\resumeSubItem{ACM ICPC International Collegiate Programming Contest 2012}
  {Kanpur site regional finalist}
\resumeSubHeadingListEnd
%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
 \resumeSubHeadingListStart
   \item{
     \textbf{Languages}{: Kotlin, Python, JS, SQL, Java, Scala}
     \hfill
     \textbf{Technologies}{: AWS, Kafka, Docker, Android, Django}
   }
 \resumeSubHeadingListEnd
%--------HOBBIES------------
\section{Hobbies}
\begin{description}
\item{
    {Reading non-fiction books, \href{https://medium.com/@ujjwal.singh}{tech blogging}, gaming}
}
\end{description}
%-------------------------------------------
\end{document}
{ "alphanum_fraction": 0.6833733493, "avg_line_length": 35.2592592593, "ext": "tex", "hexsha": "a4a0811908a0ce2a4b57d631e37cbd297388164a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5fdccf04562eddd5db984924b1367f3d1c539355", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ujjwalks/resume-latex", "max_forks_repo_path": "ujjwal_singh_resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5fdccf04562eddd5db984924b1367f3d1c539355", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ujjwalks/resume-latex", "max_issues_repo_path": "ujjwal_singh_resume.tex", "max_line_length": 163, "max_stars_count": null, "max_stars_repo_head_hexsha": "5fdccf04562eddd5db984924b1367f3d1c539355", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ujjwalks/resume-latex", "max_stars_repo_path": "ujjwal_singh_resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1760, "size": 6664 }
\clearpage \newpage \section{New methods} %% Présenter le problème et la factorisation matricielle %% %% In this section we present two new algorithms for estimating individual admixture coefficients and ancestral genotype frequencies assuming $K$ ancestral populations. In addition to genotypes, the new algorithms require individual geographic coordinates of sampled individuals. \paragraph{$Q$ and $G$-matrices} Consider a genotypic matrix, {\bf Y}, recording data for $n$ individuals at $L$ polymorphic loci for a $p$-ploid species (common values for $p$ are $p = 1,2$). For autosomal SNPs in a diploid organism, the genotype at locus $\ell$ is an integer number, 0, 1 or 2, corresponding to the number of reference alleles at this locus. In our algorithms, disjunctive forms are used to encode each genotypic value as the indicator of a heterozygote or a homozygote locus (Frichot et al. 2014). For a diploid organism each genotypic value $,0,1,2$ is encoded as $100$, $010$ and $001$. For $p$-ploid organisms, there are $(p+1)$ possible genotypic values at each locus, and each value corresponds to a unique disjunctive form. While our focus is on SNPs, the algorithms presented in this section extend to multi-allelic loci without loss of generality. Moreover, the method can be easily extended to genotype likelihoods by using the likelihood to encode each genotypic value~\citep{Korneliussen2014}. Our algorithms provide statistical estimates for the matrix ${\bf Q} \in \mathbb{R}^{K \times n}$ which contains the admixture coefficients, ${\bf Q}_{i,k}$, for each sampled individual, $i$, and each ancestral population, $k$. The algorithms also provide estimates for the matrix ${\bf G} \in \mathbb{R}^{(p+1)L \times K}$, for which the entries, ${\bf G}_{(p+1)\ell + j, k}$, correspond to the frequency of genotype $j$ at locus $\ell$ in population $k$. Obviously, the $Q$ and $G$-matrices must satisfy the following set of probabilistic constraints $$ \quad {\bf Q},{\bf G} \geq 0 \, , \quad \sum_{k=1}^K {\bf Q}_{i,k} = 1 \, , \quad \sum_{j=0}^p {\bf G}_{(p+1)\ell + j, k} = 1 \, , \quad j = 0,1,\dots, p, $$ for all $i, k$ and $\ell$. Using disjunctive forms and the law of total probability, estimates of {\bf Q} and {\bf G} can be obtained by factorizing the genotypic matrix as follows ${\bf Y}$=${\bf Q}\,{\bf G}^T$~\citep{Frichot2014}. Thus the inference problem can be solved by using constrained nonnegative matrix factorization methods~\citep{Lee1999, Cichocki2009}. In the sequel, we shall use the notations $\Delta_Q$ and $\Delta_G$ to represent the sets of probabilistic constraints put on the {\bf Q} and {\bf G} matrices respectively. \paragraph{Geographic weighting} Geography is introduced in the matrix factorization problem by using weights for each pair of sampled individuals. The weights impose regularity constraints on ancestry estimates over geographic space. The definition of geographic weights is based on the spatial coordinates of the sampling sites, $(x_i)$. Samples close to each other are given more weight than samples that are far apart. The computation of the weights starts with building a complete graph from the sampling sites. Then the weight matrix is defined as follows $$ w_{ij} = \exp( - {\rm dist}( x_i, x_j )^2/ \sigma^2), $$ \noindent where dist$( x_i, x_j )$ denotes the geodesic distance between sites $x_i$ and $x_j$, and $\sigma$ is a range parameter. Values for the range parameter can be investigated by using spatial variograms~\citep{Cressie1993}. 
To evaluate variograms, we extend the univariate variogram to genotypic data as follows \begin{equation} \gamma(h) = \frac{1}{2 |N(h)|} \sum_{i,j \in N(h)} \frac{1}{L} \sum_{l = 1}^{(p+1)L} |Y_{i,l} - Y_{j,l}|, \label{eq:gamma} \end{equation} \noindent where $N(h)$ is defined as the set of individuals separated by geographic distance $h$. % The function $\gamma$ can be approximated as follows % \begin{equation} % \hat{\gamma}(h) = {C}_0 \frac{1}{2 |N(h)|} \sum_{i,j \in N(h)} \| {\bf Q}_{i,.} - {\bf Q}_{j,.}\|^2 + {C}_1,~~~C_0, C_1 > 0, % \label{eq:gammahat} % \end{equation} % \noindent where $\| u \|^2$ is the squared norm of the vector $u$, and ${\bf Q}_{i,.}$ is the $i$th row of the admixture matrix ${\bf Q}$. Arguments that justify this approximation are given in Appendix~\ref{app:approx}. In applications, computing and visualizing the $\gamma$ function provides useful information on the level of spatial autocorrelation between individuals in the data. Next, we introduce the {\it Laplacian matrix} associated with the geographic weight matrix, {\bf W}. The Laplacian matrix is defined as ${\bf \Lambda}$ = {\bf D} $-$ {\bf W} where {\bf D} is a diagonal matrix with entries ${\bf D}_{i,i} = \sum_{j = 1}^n {\bf W}_{i,j}$, for $i = 1, \dots, n$~\citep{Belkin2003}. Elementary matrix algebra shows that~\citep{DengCai2011} $$ {\rm Tr} ({\bf Q}^T {\bf \Lambda} {\bf Q}) = \frac12 \sum_{i,j = 1}^n w_{ij} \| {\bf Q}_{i,.} - {\bf Q}_{j,.} \|^2 \, . $$ In our approach, assuming that geographically close individuals are more likely to share ancestry than individuals at distant sites is thus equivalent to minimizing the quadratic form ${\cal C}({\bf Q}) ={\rm Tr} ({\bf Q}^T {\bf \Lambda} {\bf Q})$ while estimating the matrix ${\bf Q}$. \paragraph{Least-squares optimization problems} Estimating the matrices ${\bf Q}$ and ${\bf G }$ from the observed genotypic matrix ${\bf Y}$ is performed through solving an optimization problem defined as follows~\citep{Caye2016} \begin{equation} \begin{aligned} & \underset{Q, G}{\text{min}} & & {\rm LS}({\bf Q}, {\bf G}) = \| {\bf Y} - {\bf QG}^T \|^2_{\rm F} + \alpha ' \frac{(p+1)L}{K \lambda_{\max}} {\cal C}({\bf Q}) , \\ & \text{s.t.} & & {\bf Q} \in \Delta_Q , \\ & & & {\bf G} \in \Delta_G . \\ \end{aligned} \label{eq:LS} \end{equation} \noindent The notation $\| {\bf M} \|_{\rm F}$ denotes the Frobenius norm of a matrix, {\bf M}. The regularization term is normalized by $(p+1)L/K \lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of the Laplacian matrix. With this normalization, both terms of the optimization problem~\eqref{eq:LS} are given the same order of magnitude. The regularization parameter $\alpha ' $ controls the regularity of ancestry estimates over geographic space. Large values of $\alpha ' $ imply that ancestry coefficients have similar values for nearby individuals, whereas small values ignore spatial autocorrelation in observed allele frequencies. In the rest of the article, we will use $\alpha ' = 1$ and $\alpha = (p+1)L/K \lambda_{\max}$. Using the least-squares approach, the number of ancestral populations, $K$, can be chosen after the evaluation of a cross-validation criterion for each $K$~\citep{Alexander2011, Frichot2014, Frichot2015}. 
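As a concrete illustration of the quantities defined above, the short script below sketches how the geographic weights, the Laplacian matrix and the penalized least-squares criterion of problem~\eqref{eq:LS} could be evaluated with standard numerical libraries. It is only meant to mirror the formulas: it assumes planar coordinates with Euclidean distances standing in for geodesic distances, and the function and variable names ({\tt geographic\_laplacian}, {\tt penalized\_ls}, {\tt coord}, {\tt sigma}) are ours rather than part of any released implementation.
\begin{verbatim}
import numpy as np

def geographic_laplacian(coord, sigma):
    # Heat-kernel weights w_ij = exp(-d_ij^2 / sigma^2) and Laplacian D - W.
    # coord is an (n, 2) array of sampling coordinates (planar approximation).
    d2 = ((coord[:, None, :] - coord[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(W, 0.0)  # self-weights cancel in D - W anyway
    return np.diag(W.sum(axis=1)) - W

def penalized_ls(Y, Q, G, Lap, p=2):
    # LS(Q, G) = ||Y - Q G^T||_F^2 + alpha * Tr(Q^T Lambda Q),
    # with alpha = (p + 1) L / (K lambda_max) as in the text.
    n, K = Q.shape
    L = Y.shape[1] // (p + 1)
    lam_max = np.linalg.eigvalsh(Lap).max()
    alpha = (p + 1) * L / (K * lam_max)
    fit = np.linalg.norm(Y - Q @ G.T, 'fro') ** 2
    penalty = np.trace(Q.T @ Lap @ Q)
    return fit + alpha * penalty
\end{verbatim}
\noindent Such a function can be used, for instance, to check that the criterion decreases along the iterations of the algorithms described below.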
\paragraph{The Alternating Quadratic Programming (AQP) method}
Because the polyhedra $\Delta_Q$ and $\Delta_G$ are convex sets and the LS function is convex with respect to each variable ${\bf Q}$ or ${\bf G}$ when the other one is fixed, the problem~\eqref{eq:LS} is amenable to the application of block coordinate descent~\citep{Bertsekas1995}. The AQP algorithm starts from initial values for the $G$ and $Q$-matrices, and alternates two steps. The first step computes the matrix {\bf G} while {\bf Q} is kept fixed, and the second step permutes the roles of {\bf G} and {\bf Q}. Let us assume that {\bf Q} is fixed and write {\bf G} in vectorized form, $g = {\rm vec({\bf G})} \in \mathbb{R}^{K(p + 1)L}$. The first step of the algorithm actually solves the following quadratic programming subproblem. Find
\begin{equation}
\begin{aligned}
g^\star = \underset{g \in \Delta_G}{\arg \min} ( -2 v^T_Q \, g + g^T {\bf D}_Q g ) \, ,
\end{aligned}
\label{eq:AQPg}
\end{equation}
\noindent where ${\bf D}_Q = {\bf I}_{(p+1)L} \otimes {\bf Q}^T {\bf Q}$ and $v_Q = {\rm vec}({\bf Q}^T {\bf Y})$. Here, $\otimes$ denotes the Kronecker product and ${\bf I}_d$ is the identity matrix of dimension $d$. Note that the block structure of the matrix ${\bf D}_Q$ allows us to decompose the subproblem~\eqref{eq:AQPg} into $L$ independent quadratic programming problems with $K(p + 1)$ variables. Now, consider that {\bf G} is the value obtained after the first step of the algorithm, and write {\bf Q} in vectorized form, $q = {\rm vec({\bf Q})} \in \mathbb{R}^{nK}$. The second step solves the following quadratic programming subproblem. Find
\begin{equation}
\begin{aligned}
q^\star = \underset{q \in \Delta_Q}{\arg \min} ( -2 v^T_G \, q + q^T {\bf D}_G q ) \, ,
\end{aligned}
\label{eq:AQPq}
\end{equation}
\noindent where ${\bf D}_G = {\bf I}_{n} \otimes {\bf G}^T {\bf G} + \alpha {\bf \Lambda} \otimes {\bf I}_K$ and $v_G = {\rm vec}({\bf G}^T{\bf Y}^T)$. Unlike subproblem~\eqref{eq:AQPg}, subproblem~\eqref{eq:AQPq} cannot be decomposed into smaller problems. Thus, the second step of the AQP algorithm requires solving a quadratic programming problem with $nK$ variables, which can be problematic for large samples ($n$ is the sample size). The AQP algorithm is described in detail in Appendix~\ref{algo:aqp}. For AQP, we have the following convergence result.
\begin{thm} \label{th}
The AQP algorithm converges to a critical point of problem~\eqref{eq:LS}.
\end{thm}
\begin{proof}
The quadratic convex functions defined in subproblems~\eqref{eq:AQPg} and~\eqref{eq:AQPq} have finite lower bounds. The convex sets $\Delta_Q$ and $\Delta_G$ are nonempty and compact. Thus the sequence generated by the AQP algorithm is well-defined, and has limit points. According to Corollary 2 of~\cite{Grippo2000}, we conclude that the AQP algorithm converges to a critical point of problem~\eqref{eq:LS}.
\end{proof}
\paragraph{Alternating Projected Least-Squares (APLS)}
In this paragraph, we introduce an APLS estimation algorithm which approximates the solution of problem~\eqref{eq:LS}, and reduces the complexity of the AQP algorithm. The APLS algorithm starts from initial values of the $G$ and $Q$-matrices, and alternates two steps. The matrix {\bf G} is computed while {\bf Q} is kept fixed, and {\it vice versa}. Assume that the matrix {\bf Q} is known. The first step of the APLS algorithm solves the following optimization problem.
Find
\begin{equation}
{\bf G}^\star = \arg \min \| {\bf Y} - {\bf QG}^T \|^2_{\rm F} \, .
\end{equation}
This operation can be done by considering $(p+1)L$ (the number of columns of ${\bf Y}$) independent optimization problems running in parallel. The operation is followed by a projection of ${\bf G}^\star$ on the polyhedron of constraints, $\Delta_G$. For the second step, assume that {\bf G} is set to the value obtained after the first step is completed. We compute the eigenvectors, {\bf U}, of the Laplacian matrix, and we define the diagonal matrix ${\bf \Delta}$ formed by the eigenvalues of ${\bf \Lambda}$ (the eigenvalues of ${\bf \Lambda}$ are non-negative real numbers). According to the spectral theorem, we have
$$
{\bf \Lambda} = {\bf U}^T {\bf \Delta} {\bf U} \, .
$$
\noindent After this operation, we project the data matrix {\bf Y} on the basis of eigenvectors as follows
$$
{\rm proj} ({\bf Y}) = {\bf U}{\bf Y} \, ,
$$
\noindent and, for each individual, we solve the following optimization problem
\begin{equation}
q_i^\star = \arg \min \| {\rm proj} ({\bf Y})_i - {\bf G}q \|^2 + \alpha \lambda_i \| q \|^2 \, ,
\label{eq:APSLq}
\end{equation}
\noindent where proj({\bf Y}$)_i$ is the $i$th row of the projected data matrix, proj({\bf Y}), and $\lambda_i$ is the $i$th eigenvalue of ${\bf \Lambda}$. The solutions, $q_i$, are then concatenated into a matrix, ${\rm conc}(q)$, and ${\bf Q}$ is defined as the projection of the matrix ${\bf U}^T {\rm conc}(q)$ on the polyhedron $\Delta_Q$. The complexity of step~\eqref{eq:APSLq} grows linearly with $n$, the number of individuals. While the theoretical convergence properties of AQP algorithms are lost for APLS algorithms, the APLS algorithms are expected to be good approximations of AQP algorithms. The APLS algorithm is described in detail in Appendix~\ref{algo:apls}.
\paragraph{Comparison with {\tt tess3}}
The algorithm implemented in a previous version of {\tt tess3} also provides an approximation of the solution of~\eqref{eq:LS}. The {\tt tess3} algorithm first computes a Cholesky decomposition of the Laplacian matrix. Then, by a change of variables, the least-squares problem is transformed into a sparse nonnegative matrix factorization problem~\citep{Caye2016}. Solving the sparse nonnegative matrix factorization problem relies on the application of existing methods~\citep{Kim2011, Frichot2014}. The methods implemented in {\tt tess3} have an algorithmic complexity that increases linearly with the number of loci and the number of clusters. They lead to estimates that accurately reproduce those of the Monte Carlo algorithms implemented in the Bayesian method {\tt tess} 2.3~\citep{Caye2016}. As for the AQP method, the previous {\tt tess3} algorithms have an algorithmic complexity that increases quadratically with the sample size.
\paragraph{Ancestral population differentiation statistics and local adaptation scans}
Assuming $K$ ancestral populations, the $Q$ and $G$-matrices obtained from the AQP and from the APLS algorithms were used to compute single-locus estimates of a population differentiation statistic similar to $F_{\rm ST}$~\citep{Martins2016}, as follows
$$
F^{Q}_{\rm ST} = 1 - \sum_{k=1}^K q_k \frac{f_k (1-f_k)}{f(1-f)} \, ,
$$
\noindent where $q_k$ is the average of ancestry coefficients over sampled individuals, $q_k = \sum_{i =1}^n q_{ik}/n$, for the cluster $k$, $f_k$ is the ancestral allele frequency in population $k$ at the locus of interest, and $f = \sum_{k = 1}^K q_k f_k$ (Martins et al. 2016).
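For clarity, this statistic can be computed locus by locus as in the following sketch (Python/NumPy, for illustration only; the variable names are ours, and the ancestral reference-allele frequencies are arranged in a $K \times L$ matrix):
\begin{verbatim}
import numpy as np

def fst_q(Q, F):
    # Q: n x K admixture coefficients; F: K x L ancestral allele frequencies.
    # Returns the L locus-specific F_ST^Q statistics.
    q = Q.mean(axis=0)                      # q_k, averaged over individuals
    f = q @ F                               # f = sum_k q_k f_k, per locus
    num = (q[:, None] * F * (1.0 - F)).sum(axis=0)
    return 1.0 - num / (f * (1.0 - f))
\end{verbatim}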
The locus-specific statistics were used to perform statistical tests of neutrality at each locus, by comparing the observed values to their expectations from the genome-wide background. The test was based on the squared $z$-score statistic, $z^2 = (n-K) F^{Q}_{\rm ST}/(1 - F^{Q}_{\rm ST})$, for which a chi-squared distribution with $K-1$ degrees of freedom was assumed under the null-hypothesis~\citep{Martins2016}. The calibration of the null-hypothesis was achieved by using genomic control to adjust the test statistic for background levels of population structure~\citep{Devlin1999, Francois2016}. After recalibration of the null-hypothesis, the control of the false discovery rate was achieved by using the Benjamini-Hochberg algorithm~\citep{Benjamini1995}.
\paragraph{{\tt R} package}
We implemented the AQP and APLS algorithms in the {\tt R} package {\tt tess3r}, available from GitHub and submitted to the Comprehensive R Archive Network (R Core Team, 2016).
\section{Simulated and real data sets}
\paragraph{Coalescent simulations}
We used the computer program {\tt ms} to perform coalescent simulations of neutral and outlier SNPs under spatial models of admixture~\citep{Hudson2002}. Two ancestral populations were created from the simulation of Wright\rq{}s two-island models. The simulated data sets contained admixed genotypes for $n$ individuals for which the admixture proportions varied continuously along a longitudinal gradient~\citep{Durand2009, Francois2010}. In those scenarios, individuals at each extreme of the geographic range were representative of their population of origin, while individuals at the center of the range shared intermediate levels of ancestry in the two ancestral populations~\citep{Caye2016}. For those simulations, the $Q$ matrix, ${\bf Q}_0$, was entirely described by the location of the sampled individuals.
Neutrally evolving ancestral chromosomal segments were generated by simulating DNA sequences with an effective population size $N_0 = 10^6$ for each ancestral population. The mutation rate per bp and generation was set to $\mu = 0.25 \times 10^{-7}$, the recombination rate per generation was set to $r = 0.25 \times 10^{-8}$, and the parameter $m$ was set to obtain neutral levels of $F_{\rm ST}$ ranging between $0.005$ and $0.10$. The number of base pairs for each DNA sequence was varied between 10k and 300k to obtain numbers of polymorphic loci ranging between 1k and 200k after filtering out SNPs with a minor allele frequency lower than 5$\%$. To create SNPs with values in the tail of the empirical distribution of $F_{\rm ST}$, additional ancestral chromosomal segments were generated by simulating DNA sequences with a migration rate $m_s$ lower than $m$. The simulations reproduced the reduced levels of diversity and the increased levels of differentiation expected under hard selective sweeps occurring at one particular chromosomal segment in ancestral populations~\citep{Martins2016}.
For each simulation, the sample size was varied in the range $n =$ 50--700. We compared the AQP and APLS algorithm estimates with those obtained with the {\tt tess3} algorithm. Each program was run 5 times. Using $K = 2$ ancestral populations, we computed the root mean squared error (RMSE) between the estimated and known values of the $Q$-matrix, and between the estimated and known values of the $G$-matrix.
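Because cluster labels are defined only up to a permutation, the estimated matrices must be aligned with the simulated ones before computing this error. The sketch below (Python/NumPy, for illustration only; it is not the code used in our experiments and the alignment-by-permutation step is an assumption on our part) computes the RMSE after matching clusters exhaustively, which is feasible for the small values of $K$ considered here:
\begin{verbatim}
import itertools
import numpy as np

def rmse_q(Q_hat, Q_true):
    # RMSE between estimated and known Q-matrices,
    # minimized over all relabelings of the K clusters.
    K = Q_true.shape[1]
    best = np.inf
    for perm in itertools.permutations(range(K)):
        err = np.sqrt(np.mean((Q_hat[:, list(perm)] - Q_true) ** 2))
        best = min(best, err)
    return best
\end{verbatim}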
To evaluate the benefit of spatial algorithms, we compared the statistical errors of APLS algorithms to the errors obtained with the {\tt snmf} method, which accurately reproduces the outputs of the {\tt structure} program~\citep{Frichot2014,Frichot2015}. To quantify the performance of neutrality tests as a function of ancestral and observed levels of $F_{\rm ST}$, we used the area under the precision-recall curve (AUC) for several values of the selection rate. Subsamples from a real data set were used to perform a runtime analysis of the AQP and APLS algorithms ({\it A. thaliana} data, see below). Runtimes were evaluated by using a single Intel Xeon 2.0 GHz processor unit.
\paragraph{Application to European ecotypes of {\it Arabidopsis thaliana}}
We used the APLS algorithm to survey spatial population genetic structure and to investigate the molecular basis of adaptation by considering SNP data from 1,095 European ecotypes of the plant species {\it A. thaliana} (214k SNPs, \cite{Horton2012}). The cross-validation criterion was used to evaluate the number of clusters in the sample, and a statistical analysis was performed to evaluate the range of the variogram from the data. We used {\tt R} functions of the {\tt tess3r} package to display interpolated admixture coefficients on a geographic map of Europe (R Core Team 2016). A gene ontology enrichment analysis using the software AMIGO~\citep{Carbon2009} was performed in order to evaluate which molecular functions and biological processes might be involved in local adaptation in Europe.
{ "alphanum_fraction": 0.7388514767, "avg_line_length": 122.461038961, "ext": "tex", "hexsha": "951395d48a1821873f7b5c2945a130d3cb5350ab", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cayek/Thesis", "max_forks_repo_path": "2Article/TESS3Article-master/Article/method.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cayek/Thesis", "max_issues_repo_path": "2Article/TESS3Article-master/Article/method.tex", "max_line_length": 1190, "max_stars_count": null, "max_stars_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cayek/Thesis", "max_stars_repo_path": "2Article/TESS3Article-master/Article/method.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5231, "size": 18859 }
\chapter{Basic concepts}\label{sec:basic_concepts}
\section{Main principles}
\subsection{Communication}
\subsubsection{Architecture}
A UAVCAN network is a decentralized peer network, where each peer (node) has a unique numeric identifier\footnote{Here and elsewhere in this specification, \emph{ID} and \emph{identifier} are used interchangeably unless specifically indicated otherwise.} --- \emph{node-ID} --- ranging from 0 up to a transport-specific upper boundary which is guaranteed to be not less than 127.
Nodes of a UAVCAN network can communicate using the following communication methods:
\begin{description}
\item[Message publication] --- The primary method of data exchange with one-to-many publish/subscribe semantics.
\item[Service invocation] --- The communication method for one-to-one request/response interactions\footnote{Like remote procedure call (RPC).}.
\end{description}
For each type of communication, a predefined set of data types is used, where each data type has a unique name.
Additionally, every data type definition has a pair of major and minor version numbers, which enable data type definitions to evolve in arbitrary ways while ensuring a well-defined migration path if backward-incompatible changes are introduced.
Some data types are standard and defined by the protocol specification (of which only a small subset is required); others may be specific to a particular application or vendor.
\subsubsection{Subjects and services}\label{sec:basic_subjects_and_services}
Message exchanges between nodes are grouped into \emph{subjects} by the semantic meaning of the message.
Message exchanges belonging to the same subject use the same message data type down to the major version (minor versions are allowed to differ) and pertain to the same function or process within the system.
Request/response exchanges between nodes are grouped into \emph{services} by the semantic meaning of the request and response, just as messages are grouped into subjects.
Requests and their corresponding responses that belong to the same service use the same service data type down to the major version (minor versions are allowed to differ; as a consequence, the minor data type version number of a service response may differ from that of its corresponding request) and pertain to the same function.
Each message subject is identified by a unique natural number -- a \emph{subject-ID}; likewise, each service is identified by a unique \emph{service-ID}.
The umbrella term \emph{port-ID} is used to refer either to a subject-ID or to a service-ID (port identifiers have no direct manifestation in the construction of the protocol, but they are convenient for discussion).
The sets of subject-IDs and service-IDs are orthogonal.
Port identifiers are assigned to various functions, processes, or data streams within the network at system definition time.
Generally, a port identifier can be selected arbitrarily by a system integrator by changing relevant configuration parameters of connected nodes, in which case such port identifiers are called \emph{non-fixed port identifiers}.
It is also possible to permanently associate any data type definition with a particular port identifier at data type definition time, in which case such port identifiers are called \emph{fixed port identifiers}; their usage is governed by rules and regulations described in later sections.
A port-ID used in a given UAVCAN network shall not be shared between functions, processes, or data streams that have different semantic meaning.
A port-ID used in a given UAVCAN network shall not be used with different data type definitions unless they share the same full name and the same major version number\footnote{More on data type versioning in section \ref{sec:dsdl_versioning}.}. A data type of a given major version can be used simultaneously with an arbitrary number of non-fixed different port identifiers, but not more than one fixed port identifier. \subsection{Data types} \subsubsection{Data type definitions} Message and service data types are defined using the \emph{data structure description language} (DSDL) (chapter \ref{sec:dsdl}). A DSDL definition specifies the name, major version, minor version, the data schemas, and an optional fixed port-ID of the data type among other less important properties. Message data types always define exactly one data schema, whereas service data types contain two independent schema definitions: one for request, and the other for response. \subsubsection{Regulation}\label{sec:basic_concepts_data_type_regulation} Data type definitions can be created by the UAVCAN specification maintainers or by its users, such as equipment vendors or application designers. Irrespective of the origin, data types can be included into the set of data type definitions maintained and distributed by the UAVCAN specification maintainers; definitions belonging to this set are termed \emph{regulated data type definitions}. The specification maintainers undertake to keep regulated definitions well-maintained and may occasionally amend them and release new versions, if such actions are believed to benefit the protocol. User-created (i.e., vendor-specific or application-specific) data type definitions that are not included into the aforementioned set are called \emph{unregulated data type definitions}. Unregulated definitions that are made available for reuse by others are called \emph{unregulated public data type definitions}; those that are kept closed-source for private use by their authors are called \emph{(unregulated) private data type definitions}\footnote{The word \emph{``unregulated''} is redundant because private data types cannot be regulated, by definition. Likewise, all regulated definitions are public, so the word \emph{``public''} can be omitted.}. Data type definitions authored by the specification maintainers for the purpose of supporting and advancing this specification are called \emph{standard data type definitions}. All standard data type definitions are regulated. Fixed port identifiers can be used only with regulated data type definitions or with private definitions. Fixed port identifiers must not be used with public unregulated data types, since that is likely to cause unresolvable port identifier collisions\footnote{% Any system that relies on data type definitions with fixed port identifiers provided by an external party (i.e., data types and the system in question are designed by different parties) runs the risk of encountering port identifier conflicts that cannot be resolved without resorting to help from said external party since the designers of the system do not have control over their fixed port identifiers. Because of this, the specification strongly discourages the use of fixed unregulated private port identifiers. If a data type definition is ever disclosed to any other party (i.e., a party that did not author it) or to the public at large it is important that the data type \emph{not} include a fixed port-identifier. }. 
This restriction shall be followed at all times by all compliant implementations and systems\footnote{%
In general, private unregulated fixed port identifiers are collision-prone by their nature, so they should be avoided unless there are very strong reasons for their usage and the authors fully understand the risks.
}.
\begin{UAVCANSimpleTable}{Data type taxonomy}{|l|X|X|}
& Regulated & Unregulated \\
\bfseries{Public} & Standard and contributed (e.g., vendor-specific) definitions.\newline Fixed port identifiers are allowed; they are called \emph{``regulated port-ID''}. & Definitions distributed separately from the UAVCAN specification.\newline Fixed port identifiers are \emph{not allowed}. \\
\bfseries{Private} & Nonexistent category. & Definitions that are not available to anyone except their authors.\newline Fixed port identifiers are permitted (although not recommended); they are called \emph{``unregulated fixed port-ID''}. \\
\end{UAVCANSimpleTable}
DSDL processing tools shall prohibit unregulated fixed port identifiers by default, unless they are explicitly configured otherwise.
Each of the two sets of valid port identifiers (which are subject identifiers and service identifiers) is segregated into three categories (the ranges are documented in chapter \ref{sec:application_layer}):
\begin{itemize}
\item Application-specific port identifiers. These can be assigned by changing relevant configuration parameters of the connected nodes (in which case they are called \emph{non-fixed} or runtime-assigned), or at the data type definition time (in which case they are called \emph{fixed unregulated}, and they generally should be avoided due to the risks of collisions as explained earlier).
\item Regulated non-standard fixed port identifiers. These are assigned by the specification maintainers for non-standard contributed vendor-specific public data types.
\item Standard fixed port identifiers. These are assigned by the specification maintainers for standard regulated public data types.
\end{itemize}
Data type authors who want to release regulated data type definitions or contribute to the standard data type set should contact the UAVCAN maintainers for coordination.
The maintainers will choose unoccupied fixed port identifiers for use with the new definitions, if necessary.
Since the set of regulated definitions is maintained in a highly centralized manner, it can be statically ensured that no identifier collisions will take place within it; also, since the identifier ranges used with regulated definitions are segregated, regulated port-IDs will not conflict with any other compliant UAVCAN node or system\footnote{%
The motivation for the prohibition of fixed port identifiers in unregulated public data types is derived directly from the above: since there is no central repository of unregulated definitions, collisions would be likely.
}.
\subsubsection{Serialization}
A DSDL description can be used to automatically generate the serialization and deserialization code for every defined data type in a particular programming language.
Alternatively, a DSDL description can be used to construct appropriate serialization code manually by a human.
DSDL ensures that the worst-case memory footprint and computational complexity per data type are constant and easily predictable.
Serialized message and service objects\footnote{%
Here and elsewhere, an \emph{object} means a value that is an instance of a well-defined type.
} are exchanged by means of the transport layer (chapter \ref{sec:transport_layer}), which implements automatic decomposition of long transfers into several transport frames\footnote{Here and elsewhere, a \emph{transport frame} means a block of data that can be atomically exchanged over the transport layer network, e.g., a CAN frame.} and reassembly from these transport frames back into a single atomic data block, allowing nodes to exchange serialized objects of arbitrary size (DSDL guarantees, however, that the minimum and maximum size of the serialized representation of any object of any data type is always known statically). \subsection{High-level functions} On top of the standard data types, UAVCAN defines a set of standard high-level functions including: node health monitoring, node discovery, time synchronization, firmware update, plug-and-play node support, and more. For more information see chapter \ref{sec:application_layer}. \begin{figure}[hbt] \centering \begin{tabular}{|c|c|l|c|l|c|} \hline \multicolumn{6}{|c|}{Applications} \\ \hline \qquad{} & Required functions & \qquad{} & Standard functions & \qquad{} & Custom functions \\ \cline{2-2} \cline{4-4} \cline{6-6} \multicolumn{2}{|c|}{Required data types} & \multicolumn{2}{c|}{Standard data types} & \multicolumn{2}{c|}{Custom data types} \\ \hline \multicolumn{6}{|c|}{Serialization} \\ \hline \multicolumn{6}{|c|}{Transport} \\ \hline \end{tabular} \caption{UAVCAN architectural diagram.\label{fig:architecture}} \end{figure} \section{Message publication} Message publication refers to the transmission of a serialized message object over the network to other nodes. This is the primary data exchange mechanism used in UAVCAN; it is functionally similar to raw data exchange with minimal overhead, additional communication integrity guarantees, and automatic decomposition and reassembly of long payloads across multiple transport frames. Typical use cases may include transfer of the following kinds of data (either cyclically or on an ad-hoc basis): sensor measurements, actuator commands, equipment status information, and more. Information contained in a published message is summarized in the table \ref{table:published_message_info}. \begin{UAVCANSimpleTable}{Published message properties}{|l X|}\label{table:published_message_info} Property & Description \\ Payload & The serialized message object. \\ Subject-ID & Numerical identifier that indicates how the information should be interpreted. \\ Source node-ID & The node-ID of the transmitting node (excepting anonymous messages). \\ Transfer-ID & A small overflowing integer that increments with every transfer of this message type from a given node. Used for message sequence monitoring, multi-frame transfer reassembly, and elimination of transport frame duplication errors for single-frame transfers. Additionally, Transfer-ID is crucial for automatic management of redundant transport interfaces. Its properties are explained in detail in the chapter \ref{sec:transport_layer}. \\ \end{UAVCANSimpleTable} \subsection{Anonymous message publication} Nodes that don't have a unique node-ID can publish only \emph{anonymous messages}. An anonymous message is different from a regular message in that it doesn't contain a source node-ID, and that it can't be decomposed across several transport frames. 
UAVCAN nodes do not have an identifier until they are assigned one, either statically (which is generally the preferred option for applications where a high degree of determinism and high safety assurances are required) or automatically (i.e., plug-and-play).
Anonymous messages are particularly useful for the plug-and-play feature, which is explored in detail in chapter~\ref{sec:application_layer}.
Anonymous messages cannot be decomposed into multiple transport frames, meaning that their payload capacity is limited to that of a single transport frame.
More information is provided in chapter~\ref{sec:transport_layer}.
\section{Service invocation}
Service invocation is a two-step data exchange operation between exactly two nodes: a client and a server.
The steps are\footnote{The request/response semantic is facilitated by means of hardware (if available) or software acceptance filtering and higher-layer logic. No additional support or non-standard transport layer features are required.}:
\begin{enumerate}
\item The client sends a service request to the server.
\item The server takes appropriate actions and sends a response to the client.
\end{enumerate}
Typical use cases for this type of communication include: node configuration parameter update, firmware update, an ad-hoc action request, file transfer, and other functions of similar nature.
Information contained in service requests and responses is summarized in table~\ref{table:service_req_resp_info}.
\begin{UAVCANSimpleTable}{Service request/response properties}{|l X|}\label{table:service_req_resp_info}
Property & Description \\
Payload & The serialized request/response object. \\
Service-ID & Numerical identifier that indicates how the service should be handled. \\
Client node-ID & Source node-ID during request transfer, destination node-ID during response transfer. \\
Server node-ID & Destination node-ID during request transfer, source node-ID during response transfer. \\
Transfer-ID & A small overflowing integer that increments with every call of this service type from a given node. Used for request/response matching, multi-frame transfer reassembly, and elimination of transport frame duplication errors for single-frame transfers. Additionally, Transfer-ID is crucial for automatic management of redundant transport interfaces. Its properties are explained in detail in chapter~\ref{sec:transport_layer}. \\
\end{UAVCANSimpleTable}
Both the request and the response contain the same values for all listed fields except the payload, whose content is application-defined.
Clients match responses with corresponding requests using the following fields: service-ID, client node-ID, server node-ID, and transfer-ID.
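The following non-normative sketch (Python) illustrates one possible bookkeeping structure for this matching logic. It is provided for explanatory purposes only; the type and attribute names used here are hypothetical and do not form part of this specification.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class TransferKey:
    # The four fields used to match a response with its pending request.
    service_id: int
    client_node_id: int
    server_node_id: int
    transfer_id: int   # small wrap-around counter maintained by the client

class PendingRequestTable:
    def __init__(self):
        self._pending = {}   # TransferKey -> completion callback

    def register(self, key, callback):
        self._pending[key] = callback

    def on_response(self, key, payload):
        # Deliver the response only if an identical key is pending;
        # otherwise the transfer is discarded as unsolicited or stale.
        callback = self._pending.pop(key, None)
        if callback is not None:
            callback(payload)
\end{verbatim}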
{ "alphanum_fraction": 0.7828362114, "avg_line_length": 56.7623762376, "ext": "tex", "hexsha": "031725aa3a8d25f515c21dfb3c7cb41062f03c69", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5fd0f0aed2a255007cefc0f29b48f2ba9cb88931", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "veronistar/specification", "max_forks_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5fd0f0aed2a255007cefc0f29b48f2ba9cb88931", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "veronistar/specification", "max_issues_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_line_length": 117, "max_stars_count": null, "max_stars_repo_head_hexsha": "5fd0f0aed2a255007cefc0f29b48f2ba9cb88931", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "veronistar/specification", "max_stars_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3469, "size": 17199 }
\section*{Affidavit} \vspace*{1.0cm} "I hereby confirm that the work presented has been performed and interpreted solely by myself except for where I explicitly identified the contrary. I assure that this work has not been presented in any other form for the fulfillment of any other degree or qualification. Ideas taken from other works in letter and in spirit are identified in every single case."\\ \newline \newline \newline \noindent \rule{5.5cm}{0.4pt} \phantom{ssssssssssssssssspace} \rule{5.5cm}{0.4pt}\\ \phantom{space}Place, Date \phantom{sssssssssssssssssssssssssssssssssssssssssssspace} Signature
{ "alphanum_fraction": 0.7912621359, "avg_line_length": 51.5, "ext": "tex", "hexsha": "20e06bf6b5ab7cb5d7b8d06300bb46498b55bf8d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_forks_repo_path": "tex/11_affidavit.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_issues_repo_path": "tex/11_affidavit.tex", "max_line_length": 371, "max_stars_count": null, "max_stars_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_stars_repo_path": "tex/11_affidavit.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 183, "size": 618 }
\section{Experiments} \label{sec:exp} \begin{table*}[t] \footnotesize \begin{tabular}{|l|l|l|} \hline Distributional Models & \mto & \vm \\ \hline \cite{Lamar:2010:LCU:1870658.1870736} & .708 & -\\ %algorithm name LDC \cite{Brown:1992:CNG:176313.176316}* & .678 & .630\\ %kcls(Och,1999) & .737 & .656\\ %\cite{goldwater-griffiths:2007:ACLMain} & .632 & .562\\ (Goldwater et al., 2007) & .632 & .562\\ \cite{Ganchev:2010:PRS:1859890.1859918}* & .625 & .548\\ \cite{maron2010sphere} & .688 (.0016)&-\\ Bigrams (Sec.~\ref{sec:bigram}) & \bgmto & \bgvm \\ Partitions (Sec.~\ref{sec:rpart}) & \rpmto & \rpvm \\ Substitutes (Sec.~\ref{sec:wordsub}) & \wsmto & \wsvm \\ \hline \end{tabular} \begin{tabular}{|l|l|l|} \hline Models with Additional Features & \mto & \vm \\ \hline \cite{Clark:2003:CDM:1067807.1067817}* & .712 & .655 \\ \cite{christodoulopoulos-goldwater-steedman:2011:EMNLP} & .728 & .661\\ \cite{bergkirkpatrick-klein:2010:ACL} & .755 & -\\ % Interesting in christo paper:73.9/67.7 \cite{Christodoulopoulos:2010:TDU:1870658.1870714} & .761 & .688\\ \cite{blunsom-cohn:2011:ACL-HLT2011} & .775 & .697\\ Substitutes and Features (Sec.~\ref{sec:feat}) & \ftmto & \ftvm \\ & & \\ & & \\ \hline \end{tabular} \caption{Summary of results in terms of the \mto and \vm scores. Standard errors are given in parentheses when available. Starred entries have been reported in the review paper \cite{Christodoulopoulos:2010:TDU:1870658.1870714}. Distributional models use only the identity of the target word and its context. The models on the right incorporate orthographic and morphological features.} \label{tab:results} \end{table*} In this section we present experiments that evaluate substitute vectors as representations of word context within the S-CODE framework. Section~\ref{sec:bigram} replicates the bigram based S-CODE results from \cite{maron2010sphere} as a baseline. The S-CODE algorithm works with discrete inputs. The substitute vectors as described in Section~\ref{sec:lm} are high dimensional and continuous. We experimented with two approaches to use substitute vectors in a discrete setting. Section~\ref{sec:rpart} presents an algorithm that partitions the high dimensional space of substitute vectors into small neighborhoods and uses the partition id as a discrete context representation. Section~\ref{sec:wordsub} presents an even simpler model which pairs each word with a random substitute. When the left-word -- right-word pairs used in the bigram model are replaced with word -- partition-id or word -- substitute pairs we see significant gains in accuracy. These results support our running hypothesis that paradigmatic features, i.e. potential substitutes of a word, are better determiners of syntactic category compared to left and right neighbors. Section~\ref{sec:feat} explores morphologic and orthographic features as additional sources of information and its results improve the state-of-the-art in the field of unsupervised syntactic category acquisition. Each experiment was repeated 10 times with different random seeds and the results are reported with standard errors in parentheses or error bars in graphs. Table~\ref{tab:results} summarizes all the results reported in this paper and the ones we cite from the literature. \subsection{Bigram model}\label{sec:bigram} In \cite{maron2010sphere} adjacent word pairs (bigrams) in the corpus are fed into the S-CODE algorithm as $X, Y$ samples. 
The algorithm uses stochastic gradient ascent to find the $\phi_x, \psi_y$ embeddings for left and right words in these bigrams on a single 25-dimensional sphere. In the end, each word $w$ in the vocabulary ends up with two points on the sphere, a $\phi_w$ point representing the behavior of $w$ as the left word of a bigram and a $\psi_w$ point representing it as the right word. The two vectors for $w$ are then concatenated to create a 50-dimensional representation. These 50-dimensional vectors are clustered using an instance-weighted k-means algorithm and the resulting groups are compared to the correct part-of-speech tags. Maron et al. \shortcite{maron2010sphere} report many-to-one scores of .6880 (.0016) for 45 clusters and .7150 (.0060) for 50 clusters (on the full PTB45 tag-set). If only the $\phi_w$ vectors are clustered without concatenation, the performance drops significantly to about .62.
To make a meaningful comparison, we re-ran the bigram experiments using our default settings and obtained a many-to-one score of \bgmto\ and a V-measure of \bgvm\ for 45 clusters. The following default settings were used: (i) each word was kept with its original capitalization, (ii) the learning rate parameters were adjusted to $\varphi_0=50$, $\eta_0=0.2$ for faster convergence in log likelihood, (iii) the number of s-code iterations was increased from 12 to 50 million, (iv) k-means initialization was improved using \cite{arthur2007k}, and (v) the number of k-means restarts was increased to 128 to improve clustering and reduce variance.
\subsection{Random partitions}\label{sec:rpart}
Instead of using left-word -- right-word pairs as inputs to S-CODE we wanted to pair each word with a paradigmatic representation of its context to get a direct comparison of the two context representations. To obtain a discrete representation of the context, the random-partitions algorithm first designates a random subset of substitute vectors as centroids to partition the space, and then associates each context with the partition defined by the closest centroid in cosine distance. Each partition thus defined gets a unique id, and word ($X$) -- partition-id ($Y$) pairs are given to S-CODE as input (see the sketch below). The algorithm cycles through the data until we get approximately 50 million updates. The resulting $\phi_x$ vectors are clustered using the k-means algorithm (no vector concatenation is necessary). Using default settings (64K random partitions, 25 s-code dimensions, $Z=0.166$), the many-to-one accuracy is \rpmto\ and the V-measure is \rpvm. To analyze the sensitivity of this result to our specific parameter settings we ran a number of experiments where each parameter was varied over a range of values.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{plot-p.pdf}
\caption{\mto is not sensitive to the number of partitions used to discretize the substitute vector space within our experimental range.}
\label{plot-p}
\end{figure}
Figure~\ref{plot-p} gives results where the number of initial random partitions is varied over a large range and shows the results to be fairly stable across two orders of magnitude.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{plot-d.pdf}
\caption{\mto falls sharply for fewer than 10 S-CODE dimensions, but more than 25 do not help.}
\label{plot-d}
\end{figure}
Figure~\ref{plot-d} shows that at least 10 embedding dimensions are necessary to get within 1\% of the best result, but there is no significant gain from using more than 25 dimensions.
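For concreteness, the discretization step of the random-partitions algorithm described above can be sketched as follows (illustrative Python only; the names are ours, and the dense nearest-centroid search shown here would need a more memory-frugal implementation at the scale of our experiments):
\begin{verbatim}
import numpy as np

def assign_partitions(substitute_vectors, n_partitions=65536, seed=0):
    # Pick a random subset of substitute vectors as centroids and map every
    # context to the id of its closest centroid in cosine distance.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(substitute_vectors), n_partitions, replace=False)
    C = substitute_vectors[idx]
    V = substitute_vectors / np.linalg.norm(substitute_vectors,
                                            axis=1, keepdims=True)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    return np.argmax(V @ C.T, axis=1)   # partition id for each token
\end{verbatim}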
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{plot-z.pdf}
\caption{\mto is fairly stable as long as the $\tilde{Z}$ constant is within an order of magnitude of the real $Z$ value.}
\label{plot-z}
\end{figure}
Figure~\ref{plot-z} shows that the constant $\tilde{Z}$ approximation can be varied within two orders of magnitude without a significant performance drop in the many-to-one score. For uniformly distributed points on a 25-dimensional sphere, the expected $Z\approx 0.146$. In the experiments we ran, we found the real $Z$ to always be in the 0.140--0.170 range. When the constant $\tilde{Z}$ estimate is too small, the attraction in Eq.~\ref{eq:attract} dominates the repulsion in Eq.~\ref{eq:repulse} and all points tend to converge to the same location. When $\tilde{Z}$ is too high, it prevents meaningful clusters from coalescing.
%%% I have seen the first, but the second is pure guess, need to
%%% look. The distances seem to be decreasing on that end as well!
We find the random-partitions algorithm to be fairly robust to different parameter settings and the resulting many-to-one score to be significantly better than the bigram baseline.
\subsection{Random substitutes}\label{sec:wordsub}
Another way to use substitute vectors in a discrete setting is simply to sample individual substitute words from them. The random-substitutes algorithm cycles through the test data and pairs each word with a random substitute picked from the pre-computed substitute vectors (see Section~\ref{sec:lm}).
We ran the random-substitutes algorithm to generate 14 million word ($X$) -- random-substitute ($Y$) pairs (12 substitutes for each token) as input to S-CODE. Clustering the resulting $\phi_x$ vectors yields a many-to-one score of \wsmto\ and a V-measure of \wsvm. This result is close to the previous result by the random-partitions algorithm, \rpmto, demonstrating that two very different discrete representations of context based on paradigmatic features give consistent results. Both results are significantly above the bigram baseline, \bgmto. Figure~\ref{plot-s} illustrates that the random-substitute result is fairly robust as long as the training algorithm can observe more than a few random substitutes per word.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{plot-s.pdf}
\caption{\mto is not sensitive to the number of random substitutes sampled per word token.}
\label{plot-s}
\end{figure}
\subsection{Morphological and orthographic features}\label{sec:feat}
Clark \shortcite{Clark:2003:CDM:1067807.1067817} demonstrates that using morphological and orthographic features significantly improves part-of-speech induction with an HMM-based model. Section~\ref{sec:related} describes a number of other approaches that show similar improvements. This section describes one way to integrate additional features into the random-substitute model.
%% In order to accommodate multiple feature types the CODE model needs to
%% be extended to handle more than two variables.
%% \cite{globerson2007euclidean} suggest the following likelihood
%% function:
%% \begin{eqnarray}
%% &\ell(\phi,& \psi^{(1)}, \ldots, \psi^{(K)}) = \label{eq:multicode}\\
%% &&\sum_k w_k \sum_{x,y^{(k)}} \bar{p}(x,y^{(k)}) \log p(x,y^{(k)}) \nonumber
%% \end{eqnarray}
%% \noindent where $Y^{(1)}, \ldots, Y^{(K)}$ are $K$ different variables
%% whose empirical joint distributions with $X$,
%% $\bar{p}(x,y^{(1)})\ldots\bar{p}(x,y^{(K)})$, are known.
%% Eq.~\ref{eq:multicode} then represents a set of CODE models
%% $p(x,y^{(k)})$ where each $Y^{(k)}$ has an embedding $\psi_y^{(k)}$
%% but all models share the same $\phi_x$ embedding. The weights $w_k$
%% reflect the relative importance of each $Y^{(k)}$.
%% We adopt this likelihood function, set all $w_k=1$, let $X$
%% represent a word, $Y^{(1)}$ represent a random substitute and
%% $Y^{(2)}, \ldots, Y^{(K)}$ stand for various morphological and
%% orthographic features of the word. With this setup, the training
%% procedure needs to change little: each time a word --
%% random-substitute pair is sampled, the relevant word -- feature pairs
%% are also generated and input to the gradient ascent algorithm.
The orthographic features we used are similar to the ones in \cite{bergkirkpatrick-EtAl:2010:NAACLHLT} with small modifications:
\begin{itemize}
\item Initial-Capital: this feature is generated for capitalized words with the exception of sentence-initial words.
\item Number: this feature is generated when the token starts with a digit.
\item Contains-Hyphen: this feature is generated for lowercase words with an internal hyphen.
\item Initial-Apostrophe: this feature is generated for tokens that start with an apostrophe.
\end{itemize}
%%% dy: which wsj? 1M? why not 156M??
We generated morphological features using the unsupervised algorithm Morfessor \cite{creutz05}. Morfessor was trained on the WSJ section of the Penn Treebank using default settings, and a perplexity threshold of 300. The program induced 5 suffix types that are present in a total of 10,484 word types. These suffixes were input to S-CODE as morphological features whenever the associated word types were sampled.
In order to incorporate morphological and orthographic features into S-CODE we modified its input. For each word -- random-substitute pair generated as in the previous section, we added word -- feature pairs to the input for each morphological and orthographic feature of the word. Words on average have 0.25 features associated with them. This increased the number of pairs input to S-CODE from 14.1 million (12 substitutes per word) to 17.7 million (additional 0.25 features on average for each of the 14.1 million words).
Using training settings similar to those in the previous section, the addition of morphological and orthographic features increased the many-to-one score of the random-substitute model to \ftmto\ and the V-measure to \ftvm. Both these results improve the state-of-the-art in part-of-speech induction significantly, as seen in Table~\ref{tab:results}.
\begin{figure*}[ht]
\centering
\vspace*{-25mm}
\includegraphics[width=\textwidth]{hinton.png}
\vspace*{-30mm}
\caption{Hinton diagram comparing most frequent tags and clusters.}
\label{plot-hinton}
\end{figure*}
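For reference, the many-to-one score reported throughout this section maps every induced cluster to its most frequent gold tag and measures the resulting tagging accuracy; a minimal sketch (illustrative Python, not the evaluation code we used) is given below:
\begin{verbatim}
from collections import Counter

def many_to_one(clusters, gold_tags):
    # Map each induced cluster to its most frequent gold tag,
    # then score the induced tagging against the gold standard.
    best_tag = {}
    for c in set(clusters):
        tags = [t for k, t in zip(clusters, gold_tags) if k == c]
        best_tag[c] = Counter(tags).most_common(1)[0][0]
    correct = sum(best_tag[c] == t for c, t in zip(clusters, gold_tags))
    return correct / len(gold_tags)
\end{verbatim}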
{ "alphanum_fraction": 0.7674017402, "avg_line_length": 47.4448398577, "ext": "tex", "hexsha": "72a08ad4380ccec06fab2ce5ce0fdbae9d68e60b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ai-ku/upos", "max_forks_repo_path": "papers/cl2012/emnlp12/experiments.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ai-ku/upos", "max_issues_repo_path": "papers/cl2012/emnlp12/experiments.tex", "max_line_length": 92, "max_stars_count": 4, "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_path": "papers/cl2012/emnlp12/experiments.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "num_tokens": 3585, "size": 13332 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{listings}
\usepackage[a4paper]{geometry}
\usepackage{hyperref}
\usepackage{graphicx} % package to manage images
\usepackage{wrapfig}
\usepackage{indentfirst}\setlength\parindent{2em}

\title{Manual: A Programming Environment for Kinba}
\author{Woo Min Park / woo\[email protected] }
\date{September 2017}

\begin{document}

\maketitle

\section*{Download and Run}
\noindent Download location - \url{https://github.com/wm9947/Programming-environment}

\noindent Short description of the application - \url{https://github.com/wm9947/Programming-environment/wiki}

The project uses localhost port numbers 5000, 7000, and 8000. Before running the program, make sure that these ports are not used by another program. If a conflicting port cannot be freed, you must change the port number at the source code level.

\subsection*{Scratch Extension}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{scratch.png}
\end{figure}
The sample project provided on GitHub can be opened with \textbf{Load Project} in the File menu. The project file extension is '.sb2'. The Scratch extension, written in JavaScript, can be added with \textbf{Load Experimental Extension} (right-click menu). If the extension has a syntax error, its blocks will not appear. The messages transmitted to the Data Transmitter can be inspected in the console of the browser's developer tools.

\subsection*{Data Transmitter}
\begin{lstlisting}[language=bash]
$ node DataTransmitter.js
\end{lstlisting}

\subsection*{Xtion}
\begin{lstlisting}[language=bash]
$ ./UserViewer
\end{lstlisting}
The application is located in './Desktop/woomin/NiTE/Sample/Bin/'. It transfers the joint locations obtained by the Xtion sensor (using OpenNI and NiTE) to the Data Transmitter. It contains a UDP client that sends the data.

\newpage
\section*{Environment Installation}

\subsection*{Node.js and WebSocket library}
\begin{lstlisting}[language=bash]
$ sudo apt-get install python-software-properties
$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
$ sudo apt-get install nodejs
$ sudo npm install websocket
\end{lstlisting}

\subsection*{Chrome}
\begin{lstlisting}[language=bash]
$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
$ sudo dpkg -i google-chrome-stable_current_amd64.deb
\end{lstlisting}

\subsection*{OpenNI}
\url{https://structure.io/openni}

\subsection*{Xtion}
\url{https://www.asus.com/3D-Sensor/Xtion_PRO/HelpDesk_Download/}

\end{document}
{ "alphanum_fraction": 0.7723735409, "avg_line_length": 27.0526315789, "ext": "tex", "hexsha": "62453613bfd00f6b5714f9dcef9d5eeb569b67ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5d05a0cd12757b6f81e58b0aa287cb605b907c04", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wm9947/Programming-environment", "max_forks_repo_path": "Manual/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5d05a0cd12757b6f81e58b0aa287cb605b907c04", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wm9947/Programming-environment", "max_issues_repo_path": "Manual/main.tex", "max_line_length": 193, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5d05a0cd12757b6f81e58b0aa287cb605b907c04", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wm9947/Programming-environment", "max_stars_repo_path": "Manual/main.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-19T11:04:21.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-19T11:04:21.000Z", "num_tokens": 688, "size": 2570 }