% Copyright (c) 2005 Nokia Corporation
%
% Licensed under the Apache License, Version 2.0 (the "License");
% you may not use this file except in compliance with the License.
% You may obtain a copy of the License at
%
% http://www.apache.org/licenses/LICENSE-2.0
%
% Unless required by applicable law or agreed to in writing, software
% distributed under the License is distributed on an "AS IS" BASIS,
% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
% See the License for the specific language governing permissions and
% limitations under the License.

\section{\module{contacts} --- A contacts-related services package}
\label{sec:contacts}

\declaremodule{extension}{contacts}
\platform{S60}
\modulesynopsis{A contacts-related services package.}

The \module{contacts} module offers an API to address book services, allowing the creation of contact information databases. The \module{contacts} module represents a Symbian contact database as a dictionary-like \class{ContactDb} object, which contains \class{Contact} objects and which is indexed using the unique IDs of those objects. A \class{Contact} object is itself a list-like object, which contains \class{ContactField} objects and which is indexed using field indices. Unique IDs and field indices are integers.

The \class{ContactDb} object supports only a limited subset of dictionary functionality: \code{__iter__}, \code{__getitem__}, \code{__delitem__}, \code{__len__}, \code{keys}, \code{values}, and \code{items}.

\class{ContactDb} objects represent a live view into the database. If a contact is changed outside your Python application, the changes are visible immediately, and conversely any changes you commit into the database are visible immediately to other applications. It is possible to lock a contact for editing, which prevents other applications from modifying the contact for as long as the lock is held. This can be done in, for example, a contacts editor application when a contact is opened for editing, very much like with the Contacts application in your Nokia device. If you try to modify a contact without locking it for editing, the contact is automatically locked before the modification and released immediately afterwards.

\subsection{Module Level Functions}
\label{subsec:contmod}

The following free functions (functions that do not belong to any class) are defined in the \module{contacts} module:

\begin{funcdesc}{open}{\optional{filename\optional{, mode}}}
Opens a contacts database and returns a \class{ContactDb} object. \var{filename} should be a full Unicode path name. If \var{filename} is not given, opens the default contacts database. If \var{mode} is not given, the database must already exist. If \var{mode} is '\code{c}', the database is created if it does not already exist. If \var{mode} is '\code{n}', a new, empty database is created, overwriting any previous database.
\end{funcdesc}

\begin{notice}[warning]
Using \code{open} together with the additional parameters \var{filename} or \var{mode} is intended for testing purposes only. Due to S60 SDK functionality, the \code{open} method can sometimes be unreliable with these parameters.
\end{notice}

\subsection{ContactDb Object}
\label{subsec:contactdb}

There is one default contact database, but it is possible to create several databases with the \code{open} function.
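The following minimal sketch (runnable only on an S60 device with Python for S60 installed) opens the default database and lists the contacts in it, using only the functions and attributes documented in this section:

\begin{verbatim}
import contacts

db = contacts.open()          # open the default contacts database
print len(db), "contacts"
for contact_id in db.keys():  # unique IDs of the Contact objects
    print contact_id, repr(db[contact_id].title)
\end{verbatim}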
\begin{classdesc*}{ContactDb}
\class{ContactDb} objects have the following methods:

\begin{methoddesc}[ContactDb]{add_contact}{}
Adds a new contact to the database. Returns a \class{Contact} object that represents the new contact. The returned object is already locked for modification. Note that a newly created contact will contain some empty default fields. If you do not want to use the default fields for anything, you can ignore them.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{find}{searchterm}
Finds the contacts that contain the given Unicode string as a substring and returns them as a list.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{import_vcards}{vcards}
Imports the vCard(s) in the given string into the database.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{export_vcards}{ids}
Converts the contacts corresponding to the IDs in the given tuple \var{ids} to vCards and returns them as a string.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{keys}{}
Returns a list of the unique IDs of all \code{Contact} objects in the database.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{compact_required}{}
Checks whether compacting the database is recommended. Returns a true value if more than 32K of space is unused and this comprises more than 50 percent of the database file, or if more than 256K is wasted in the database file.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{compact}{}
Compacts the database to its minimum size.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{__delitem__}{id}
Deletes the given contact from the database.
\end{methoddesc}

\begin{methoddesc}[ContactDb]{field_types}{}
Returns a list of dictionary objects that describe the supported field types, one dictionary per field type. The most important keys in each dictionary are \code{'type'} and \code{'location'}, which together identify the field type. \code{'type'} can have string values such as \code{'email_address'}. \code{'location'} can have the string values \code{'none'}, \code{'home'}, or \code{'work'}. Another important key is \code{'storagetype'}, which defines the storage type of the field. \code{'storagetype'} can have the string values \code{'text'}, \code{'datetime'}, \code{'item_id'}, or \code{'binary'}. Note that the \code{Contacts} extension does not support adding, reading, or modifying fields of any type other than \code{'text'} or \code{'datetime'}. The other content returned by \code{field_types} is considered advanced knowledge and is not documented here.
\end{methoddesc}
\end{classdesc*}

\subsection{Contact Object}
\label{subsec:contact}

A \class{Contact} object represents a live view into the state of a single contact in the database. You can access the fields either with a contact's numeric field index, as \code{contact[fieldindex]}, or using the \code{find} method. Attempting to modify a contact while it has been locked for editing in another application will raise the exception \code{ContactBusy}.

\begin{classdesc*}{Contact}
\class{Contact} objects have the following attributes:

\begin{memberdesc}[Contact]{id}
The unique ID of this \class{Contact}. Read-only.
\end{memberdesc}

\begin{memberdesc}[Contact]{title}
The title of this \class{Contact}. Read-only.
\end{memberdesc}

\class{Contact} objects have the following methods:

\begin{methoddesc}[Contact]{begin}{}
Locks the contact for editing. This prevents other applications from modifying the contact for as long as the lock is held.
This method raises the exception \code{ContactBusy} if the contact has already been locked.
\end{methoddesc}

\begin{methoddesc}[Contact]{commit}{}
Releases the lock and commits the changes made into the database.
\end{methoddesc}

\begin{methoddesc}[Contact]{rollback}{}
Releases the lock and discards all changes that were made. The contact remains in the state it was in before \code{begin}.
\end{methoddesc}

\begin{methoddesc}[Contact]{as_vcard}{}
Returns the contact as a string in vCard format.
\end{methoddesc}

\begin{methoddesc}[Contact]{add_field}{type \optional{, value \optional{, label=field_label}\optional{, location=location_spec}}}
Adds a new field to this \class{Contact}. This method raises \code{ContactBusy} if the contact has been locked by some other application. \var{type} can be one of the supported field types, given as a string. The following field types can be added at present:
\begin{itemize}
\item \code{city}
\item \code{company_name}
\item \code{country}
\item \code{date}
\item \code{dtmf_string}
\item \code{email_address}
\item \code{extended_address}
\item \code{fax_number}
\item \code{first_name}
\item \code{job_title}
\item \code{last_name}
\item \code{mobile_number}
\item \code{note}
\item \code{pager_number}
\item \code{phone_number}
\item \code{po_box}
\item \code{postal_address}
\item \code{postal_code}
\item \code{state}
\item \code{street_address}
\item \code{url}
\item \code{video{\_}number}
\item \code{wvid}
\end{itemize}
The following field types are recognized but cannot be created at present:
\begin{itemize}
\item \code{first{\_}name{\_}reading}
\item \code{last{\_}name{\_}reading}
\item \code{picture}
\item \code{speeddial}
\item \code{thumbnail{\_}image}
\item \code{voicetag}
\end{itemize}
The values of all supported field types are passed as strings or Unicode strings, except for \code{'date'}, which is a float that represents Unix time. For more information on Unix time, see Section \ref{subsec:datetime}, Date and Time.

\var{field{\_}label} is the name of the field shown to the user. If you do not pass a label, the default label for the field type is used.

\var{location{\_}spec}, if given, must be \code{'home'} or \code{'work'}. Note that not all combinations of type and location are valid. The settings of the contacts database currently in use determine which combinations are valid.
\end{methoddesc}

\begin{methoddesc}[Contact]{find}{\optional{type=field{\_}type}\optional{, location=field{\_}location}}
Finds the fields of this contact that match the given search specification. If no parameters are given, all fields are returned.
\end{methoddesc}

\begin{methoddesc}[Contact]{__delitem__}{fieldindex}
Deletes the given field from this contact. Note that since this changes the indices of all fields that appear after the deleted field, and since \class{ContactField} objects refer to fields by index, old \class{ContactField} objects that refer to fields after the deleted one will refer to different fields after this operation.
\end{methoddesc}
\end{classdesc*}

\subsection{ContactField Object}
\label{subsec:contactfield}

A \class{ContactField} represents a field of a \class{Contact} at a certain index. A \class{ContactField} has attributes, some of which can be modified. If the parent \class{Contact} has not been locked for editing, modifications are committed immediately to the database. If the parent \class{Contact} has been locked, the changes are committed only when \method{commit} is called on the \class{Contact}.
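As an illustration of the methods above, the following sketch creates a new contact, fills in a few fields and commits it, and then locks an existing contact before modifying it. The field types and the \var{location} value are taken from the lists above; \code{contacts.ContactBusy} is assumed to be the module-level name of the \code{ContactBusy} exception.

\begin{verbatim}
import contacts

db = contacts.open()

c = db.add_contact()                  # the new contact is already locked
c.add_field('first_name', u'Jane')
c.add_field('last_name', u'Doe')
c.add_field('mobile_number', u'+358501234567', location='work')
c.commit()                            # release the lock, write to the database

c = db[c.id]                          # fetch the contact again by its unique ID
try:
    c.begin()                         # lock it for editing
except contacts.ContactBusy:          # some other application holds the lock
    pass
else:
    c.add_field('note', u'Met at the conference')
    c.commit()
\end{verbatim}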
\begin{classdesc*}{ContactField} \class{ContactField} objects have the following attributes: \begin{memberdesc}[ContactField]{label} The user-visible label of this field. Read-write. \end{memberdesc} \begin{memberdesc}[ContactField]{value} The value of this field. Read-write. \end{memberdesc} \begin{memberdesc}[ContactField]{type} The type of this field. Read-only. \end{memberdesc} \begin{memberdesc}[ContactField]{location} The location of this field. This can be \code{'none'}, \code{'work'}, or \code{'home'}. \end{memberdesc} \begin{memberdesc}[ContactField]{schema} A dictionary that contains some properties of this field. The contents of this dictionary correspond to those returned by the \class{ContactDb} method \method{field{\_}types}. \end{memberdesc} \end{classdesc*}
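A short sketch of reading and updating a field through these attributes (it assumes the database is not empty and that the chosen contact has at least one mobile number field):

\begin{verbatim}
import contacts

db = contacts.open()
c = db[db.keys()[0]]                    # an arbitrary existing contact
for f in c.find(type='mobile_number'):  # ContactField objects
    print f.label, f.type, f.location, repr(f.value)
    f.value = u'+358407654321'          # written immediately unless c is locked
\end{verbatim}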
{ "alphanum_fraction": 0.7681197479, "avg_line_length": 33.6991150442, "ext": "tex", "hexsha": "bdccab085be139dafdcd4a1a011c08e56da5633d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tuankien2601/python222", "max_forks_repo_path": "Doc/lib/libcontacts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tuankien2601/python222", "max_issues_repo_path": "Doc/lib/libcontacts.tex", "max_line_length": 99, "max_stars_count": 1, "max_stars_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dillionhacker/python222", "max_stars_repo_path": "Doc/lib/libcontacts.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T13:55:02.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-17T13:55:02.000Z", "num_tokens": 2865, "size": 11424 }
\section{Definitions}

% \item Classification of security flaws \\
% \subsection{Classification of Security Flaws}

The development of security flaw seeding is organized around a specific classification of security flaws: simple (local) security flaws, security flaw chains (defined over the control flow), and composite security flaws (defined via state over the execution of the whole program). We define these according to discussions at NIST of classification work by MITRE.

\begin{enumerate}
\item Simple Security Flaws \\
A simple security flaw is an isolated, single security flaw that can be locally defined in the application source code. It can be roughly characterized as a security flaw for which the fix is a local modification.

\item Security Flaw Chains \\
A security flaw chain is a sequential set of operations along the control flow which collectively form a security flaw. Each link in the chain may or may not be a security flaw, and may not even be a bad practice (a bug affecting maintainability or violating a specific style guide). Collectively, the set of operations forms a security flaw. We are interested in evaluating a tool's ability to identify such security flaw chains, so seeding such flaws will be a focus of this work. However, since this work is likely to use the same infrastructure as the seeding of simple security flaws, the design of this support can be deferred.

\item Composite Security Flaws \\
A composite security flaw is a set of operations, each of which may or may not be a security flaw (the same as for the links of a security flaw chain). For the same reasons as for security flaw chains, we are interested in seeding these sorts of flaws to test static analysis tools. Security flaw chains and composite security flaws are likely related, but a composite security flaw may be defined over the control flow of the whole program, whereas a security flaw chain is defined over a significantly smaller (and more readily characterizable) subset of the control flow. Further design work specific to this is deferred.
\fixme{Need to verify/clarify the differences between the {\it security flaw chain} and the {\it composite security flaw}.}
\end{enumerate}

\section{Scope}

The design of a tool to seed security flaws and develop evaluation test suites requires addressing a range of related subjects. The following subsections provide more detail.

% required to handle more general security flaws is as follows:
% \begin{enumerate}
% \item Range of Security Flaws \\

\subsection{Range of Security Flaws}

We assume a range of 30--40 security flaws (yet to be identified) which can be seeded independently of each other. Some candidate lists of specific security flaws exist, for example the \href{http://nvd.nist.gov/cwe.cfm}{NIST list of security flaws} and the \href{https://www.securecoding.cert.org/confluence/display/seccode/CERT+Secure+Coding+Standards}{CERT Secure Coding Rules}. A simple list of ten flaws used in a recent study will be used until we can collectively agree on a better list. Note that some of the descriptions below are taken from the summaries for each security flaw on the CWE website; links to more information on Wikipedia, where available, plus links to the relevant CWE web pages, are also provided. The goal is to be as clear as possible, at this early point in the design, about the definitions of the security flaws that might be targets for this seeding research.
\begin{enumerate}
\item Buffer Overflow \\
There are many different types of buffer overflow cases to address; more information is available at: \href{http://en.wikipedia.org/wiki/Buffer_overflow}{Wikipedia: Buffer Overflow Security Flaws}, see also: \href{http://cwe.mitre.org/data/definitions/119.html}{CWE-119}.

\item Command Injection \\
Code injection can be used by an attacker to introduce (or "inject") code into a computer program to change the course of execution; more information is available at: \href{http://en.wikipedia.org/wiki/Command_injection}{Wikipedia: Command Injection}, see also: \href{http://cwe.mitre.org/data/definitions/78.html}{CWE-78}.

\item Use after {\em free} \\
Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code; more information is available at: \href{http://cwe.mitre.org/data/definitions/416.html}{CWE-416}.

\item Format string vulnerability \\
The software uses externally-controlled format strings in printf-style functions, which can lead to buffer overflows or data representation problems. More information is available at: \href{http://en.wikipedia.org/wiki/Format_string_vulnerabilities}{Wikipedia: Format string attack}, see also: \href{http://cwe.mitre.org/data/definitions/134.html}{CWE-134}.

\item Memory leaks \\
More information is available at: \href{http://en.wikipedia.org/wiki/Memory_leak}{Wikipedia: Memory leaks}, see also: \href{http://cwe.mitre.org/data/definitions/401.html}{CWE-401}.

\item Uninitialized variables \\
The code uses a variable that has not been initialized, leading to unpredictable results; more information is available at: \href{http://en.wikipedia.org/wiki/Uninitialized_variable}{Wikipedia: Uninitialized variables}, see also: \href{http://cwe.mitre.org/data/definitions/457.html}{CWE-457}.

\item Improper return types \\
If a function does not generate the correct return/status codes, or if a call site does not handle all possible return/status codes that could be generated by a function, then security issues may result. More information is available at: \href{http://cwe.mitre.org/data/definitions/389.html}{CWE-389}.

\item Null pointer dereference \\
A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit; more information is available at: \href{http://en.wikipedia.org/wiki/Pointer}{Wikipedia: Null pointers}, see also: \href{http://cwe.mitre.org/data/definitions/476.html}{CWE-476}.

\item Tainted Data / Unvalidated user input \\
% The software does not sufficiently verify the origin or authenticity of
% data, in a way that causes it to accept invalid data,
The software has an absent or incorrect protection mechanism that fails to properly validate input that can affect the control flow or data flow of a program; more information is available at: \href{http://en.wikipedia.org/wiki/Taint_checking}{Wikipedia: Taint checking}, see also:
% NIST has suggested that CWE-20 be used instead of CWE-345.
% \href{http://cwe.mitre.org/data/definitions/345.html}{CWE-345}.
\href{http://cwe.mitre.org/data/definitions/20.html}{CWE-20}.
\item TOCTOU (Time-of-check-to-time-of-use) \\
Time-of-check, time-of-use race conditions occur when, between the time a given resource (or its reference) is checked and the time that resource is used, a change occurs in the resource that invalidates the results of the check; more details on this security flaw are available at: \href{http://en.wikipedia.org/wiki/Time-of-check-to-time-of-use}{Wikipedia: Time-of-check-to-time-of-use}, see also: \href{http://cwe.mitre.org/data/definitions/367.html}{CWE-367}.

\commentout{
List that just arrived from NIST:

Here is a list of weaknesses we are most interested in. Many come from our
source code sec. spec. (We kept some from the spec off this list, because we
are not as interested in researching them.) We grouped them into categories
with CWE IDs. We do *not* consider this list Frozen Forever. Rather it is a
"living" document that you, we, and others will occasionally update with times
and circumstances.

* Insufficient input validation #20
   - OS Command Injection #78
   - Path Manipulation #73
   - Format string vulnerability #134
* Memory problems (we consider overflow and underflow the same problem):
   - Failure to Constrain Operations within Bounds ... #119
   - Heap Overflow #122
   - Stack Overflow #121
   - Improper Null Termination #170
   - Use After Free #416
   - Double Free #415
   - Sensitive information uncleared before release #266 (and #244)
   - Memory Leak #401
   - Null Pointer Dereference #476
* Numeric Errors
   - Integer Overflow #190
   - Integer Coercion Error (cast problem) #192
   - Divide by zero #369
* Timing issues
   - TOCTOU race #367
   - Race condition Within threads #366
   - Unrestricted Critical Resource Lock #412
* Security Features
   - Hard coded password #259
   - Leftover Debug Code #489
   - Backdoors #510
* Coding errors
   - Error Conditions, Return Values, Status Codes #389
   - Unchecked Return Value #252
   - Unchecked Error Condition #248
   - Dead code #561
   - Uninitialized Variable #457
}
\end{enumerate}

\section{Requirements}

This section lays out a design for the construction of a source-to-source translator that will automatically insert security flaws into existing application codes. The design is meant to be a collaborative effort with NIST.

\subsection{A Database of Program Information}
\label{sec:SpecificationOfDatabase}

An aspect of the design to support security flaw seeding is recording where in the application seeding occurs and matching detected security flaws back to the source code. Since the detection of the seeded security flaws is an issue associated with running the static analysis tools on the generated (seeded) application, it is outside the goals of this design document. However, seeding arbitrary applications does require managing information about which security flaw vulnerabilities were identified (see subsection \ref{sec:SpecificationOfVulnerabilities}) and where security flaws were seeded (see subsection \ref{sec:SeedingOfSecurityFlaws}). To support this we propose a database (of some sort) to record such details. Work at NIST may be helpful in defining an appropriate schema, based on their experience with collecting related, tool-independent information specific to security flaws.
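As a strawman only (the table and column names below are placeholders, not an agreed schema), the kind of information to be recorded could be captured with something as simple as two SQLite tables: one for the identified vulnerabilities (seeding opportunities) and one for the flaws actually seeded.

\begin{verbatim}
import sqlite3

conn = sqlite3.connect("bug_seeding.db")
cur = conn.cursor()

# One row per seeding opportunity (vulnerability) found in the input application.
cur.execute("""CREATE TABLE IF NOT EXISTS vulnerability (
    id        INTEGER PRIMARY KEY,
    flaw_type TEXT,     -- e.g. a CWE identifier such as 'CWE-119'
    file      TEXT,
    line      INTEGER,
    analysis  TEXT      -- analysis needed to resolve it (loop, pointer, ...)
)""")

# One row per security flaw actually seeded, pointing back to the vulnerability used.
cur.execute("""CREATE TABLE IF NOT EXISTS seeded_flaw (
    id               INTEGER PRIMARY KEY,
    vulnerability_id INTEGER REFERENCES vulnerability(id),
    variant          TEXT,  -- which point of the search space was generated
    seeded_file      TEXT,
    seeded_line      INTEGER
)""")

conn.commit()
\end{verbatim}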
% \item Specification of vulnerabilities (domain) \\

\subsection{Specification of Vulnerabilities (domain)}
\label{sec:SpecificationOfVulnerabilities}

For each pairing of a security flaw and an input application code, there is an evaluation of the application code to identify where such security flaws could exist (specifically, where such flaws could be seeded). This step defines the domain (mathematically) from which the application vulnerabilities can be mapped to versions of the application with seeded security flaws. For example, if an application does not contain arrays, then it might have no vulnerability to buffer overflow security flaws, and thus it may be inappropriate to seed such an application with this type of security flaw. It is not the goal of security flaw seeding to be a generator of every type of security flaw independent of the input applications (security flaw generators can in principle do this, but see subsection \ref{sec:SecurityFlawGenerator} for an alternative point of view). However, if a vulnerability for a security flaw is detected in the application code, then every possible variation of code that can seed that security flaw into the application may be appropriate.

For each security flaw, the design of the security flaw seeding tool would:
\begin{enumerate}
\item Identify the locations where the vulnerabilities could exist.
\item Classify the location of each vulnerability by what analysis is required to resolve it as a vulnerability (loop analysis, pointer analysis, inter-procedural analysis, etc.).
\item Record this information in a database (see subsection \ref{sec:SpecificationOfDatabase}) so that the locations of potential vulnerabilities can be saved separately from where security flaws are seeded into the application.
\end{enumerate}

% \item Specification of search space (range) \\

\subsection{Specification of Search Space (range)}
\label{sec:SpecificationOfSearchSpace}

The seeding of any single security flaw can happen in many ways. The mapping of the vulnerabilities (domain) to the space of possible seeded security flaws (range) may help define a convenient mathematical basis for the description of security flaw seeding. For example, a buffer overflow may be introduced using transformed literal values, scalar variables, statically declared arrays, heap-allocated arrays, arrays defined differently along different paths, etc. The number of fundamentally different ways in which a security flaw may be introduced (seeded) into source code defines the number of dimensions of the search space. A number of ways of introducing security flaws are general and so can form a security-flaw-invariant subspace. The dimensions of the search space can be considered to be the number of fundamentally different ways that a security flaw can be hidden.

Example search space dimensions are:
\begin{enumerate}
\item For values associated with a security flaw, the values may be:
\begin{enumerate}
\item Use of literal values (most trivial)
\item Use of scalar variables
\item Use of array values
\item Use of structure field values
\item Use of pointers to values (most difficult)
\end{enumerate}
\item For expressions in which security flaws are embedded, or which form part of the security flaw, the expressions can include any of the expression language constructs available in the target language.
Expressions can be:
\begin{enumerate}
\item unary operators (most trivial)
\item binary operators
\item function arguments
\item array index expressions
\item declaration initializers
\item template instantiations (template expressions)
\end{enumerate}
\item For statements containing vulnerabilities, embedding the security flaw in different statements, or using different types of statements in the seeding of the security flaw, allows a range of security flaws to be seeded. Statements can be:
\begin{enumerate}
\item declarations (variables, functions, classes, templates, etc.)
\item loop constructs
\item branch constructs
\item template instantiations (template functions)
\end{enumerate}
\end{enumerate}

In several cases, there can be a depth (nested levels) at which the values can be hidden. This defines a size along the respective dimension.

% \item Security flaw generator \\

\subsection{Security Flaw Test Code Generator}
\label{sec:SecurityFlawGenerator}

It is reasonable that the same infrastructure used to seed security flaws into existing applications could be used as a generator of security flaws independent of any input application. A simple code generator can be written using a range of techniques, from macros in m4, to scripting languages such as Perl and Python, to CPP macros using C/C++. However, leveraging the proposed work to build seeding via a source-to-source compiler may permit a trivial mode of operation as a simple code generator for the same security flaws without the seeding.

% Certainly a source-to-source compiler infrastructure can handle the generation
% of code as a trivial transformation, even if it is a larger hammer than required.

The proposed seeding tool could have a mode where it simply generates the different variations of the security flaws that it would otherwise seed into an input application. This may even be a useful mode for debugging and presenting the security flaws that would be introduced. From a compiler transformation perspective this adds little to the complexity of such a tool (at least if using ROSE); however, in this mode the transformation cannot be associated with a site in the source code of a recognized security flaw vulnerability. It is not yet clear if this detail is important.

% \item Seeding of individual security flaws \\

\subsection{Seeding of Individual Security Flaws}
\label{sec:SeedingOfSecurityFlaws}

A central aspect of the design of a security flaw seeding tool is the seeding of individual security flaws within each classification. Specifically, each security flaw can be seeded in isolation and treated orthogonally to all other security flaws. More sophisticated security flaws would be addressed by treating them either as security flaw chains or as composite security flaws.

For each classification of security flaws, we define the following steps:
\begin{enumerate}
\item Identify the location of potential vulnerabilities \\
Each security flaw has a relevant domain: the potential vulnerabilities (possible locations in the input application) where it can be introduced (seeded). See subsection \ref{sec:SpecificationOfVulnerabilities} (above) for more details.
\item Identify the space of transformations to represent the target bug \\
This defines the size of the search space (the size of the space representing the code that could be generated). See subsection \ref{sec:SpecificationOfSearchSpace} (above) for more details.
\item Define the subset of points in the space to evaluate \\
The space of transformations can be expected to be unreasonably large. It is reasonable to define subspaces and to explicitly generate seeded security flaws only for those subspaces, so that we can avoid an exhaustive search of extremely large spaces, which might generate too much code to test or take an unreasonable amount of time to evaluate using the static analysis tools. This is our experience from the generation of code to test performance optimizations associated with a research area in compiler optimization called {\em autotuning} (also referred to as {\em empirical analysis}).
\item Identify the granularity of the security flaw seeding \\
This is where the security flaw should be applied (e.g. expression, statement, basic block, function, or file). For example, depending on the granularity, a buffer overflow might be applied either within each loop of a loop nest or only within the innermost loop. Similarly, multiple versions of the code for any seeded security flaw may be copied at different levels of granularity, from file-level granularity to function-level granularity, or even block-level granularity. Each has consequences for the amount of code generated and for the degree to which the seeding of the security flaw is allowed to change the input application.
\item Generate a copy of the relevant portion of the AST \\
This supports the seeding of multiple variations of the security flaw. The AST copy mechanism in ROSE is well suited to this purpose: a copy of the AST subtree at a defined level of granularity is generated, and the security flaw is seeded into the copy. Attention should be paid to the way this copy is woven back into the control flow so that static analysis will not ignore the seeded bug as dead code.
\item Apply a transformation to modify each copy of the AST \\
Each AST subtree copy is transformed to insert a single security flaw.
\end{enumerate}

% \item Seeding likely to generate security flaw false positives \\

\subsection{Seeding of Security Flaw False Positives}
\label{sec:FalsePositiveEvaluation}

Since many static analysis tools are overly conservative in their program analysis, they are prone to falsely detecting security flaws (false positives). Because false positives are a significant source of noise in the detection of real security flaws, seeding code that merely appears to be a security flaw is yet another way to evaluate static analysis tools: the detection of any valid code seeded into an application as a security flaw would count as a false positive. There are a few strategies for introducing good code that would appear to be a security flaw; specifically, such seeded code should be hidden behind different types of complexity. Complexities that we think would be effective in fooling static analysis into reporting good code as security flaws are:
\begin{enumerate}
\item Introduction of code that uses variables defined (in the use-def sense) over complex paths (path-sensitive complexity).
\item Introduction of code that uses variables defined (in the use-def sense) across different function calls (context-sensitive complexity).
\item Introduce flaws in dead code (dead code complexity)
\item Introduce good code hidden behind code that appears dead (e.g.
complex switch statements; Duff's device) (dead code complexity)
\item Introduce good code with parts assembled from template metaprogramming (C++-specific template instantiation complexity)
\item Hide valid code behind code requiring pointer analysis, i.e. behind pointer manipulations of varying complexity that are not easily analyzable by different grades of pointer analysis.
\end{enumerate}
% \end{enumerate}

\section{Design}

The design of a translator to seed bugs into existing applications will be based on the ROSE source-to-source compiler infrastructure. Any source-to-source compiler infrastructure would likely work as well, or could be adapted. The steps are implemented via a traversal over the AST; additional graphs for more interesting program analysis may also be consulted. Information is passed between phases of the bug seeding via AST attributes placed on IR nodes of the AST. This approach is supported in many program analysis infrastructures, and is well supported in ROSE (which also includes visualization support to simplify understanding and debugging the security flaw seeding process). The design is addressed using a few phases:
\begin{enumerate}
\item Identification of security flaw vulnerabilities \\
This work identifies where in the source code a specific security flaw {\em could} be seeded; it does not detect existing security flaws in the input application. This part of the project is expected to be of value on its own, as a way to count and record the locations of opportunities for a flaw, so that the number of flaws identified by static analysis tools can be reported as a percentage of the opportunities for that flaw instead of just a total count. This sort of evaluation does not require the security flaw seeding that is much of the purpose of the rest of the design. Note that markers (AST attributes) are attached to relevant IR nodes during this phase. A data structure called {\tt SecurityVulnerabilityAttribute} is placed at each location in the source code where a specific type of vulnerability could exist (see figure xxx).
\item Clone fragments of code in the application to support seeding \\
The user decides whether flaws will be seeded into the application itself or into copied pieces of the application (clones). It is expected that if all possible security flaws were seeded into a single application, this would confuse the static analysis and we would not have a meaningful evaluation of static analysis tools. To support a more refined approach, fragments of different sizes are copied to support the seeding of flaws (with a definable number of flaws per clone).
\end{enumerate}
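To make the phases and the search-space discussion more concrete, the following deliberately simplified sketch (in Python, with made-up data; none of the names correspond to actual ROSE or NIST APIs) shows how seeding variants could be enumerated per vulnerability site and then subsampled, as suggested in subsection \ref{sec:SpecificationOfSearchSpace}:

\begin{verbatim}
import itertools, random

# A "site" stands in for one identified vulnerability: a source location plus
# the search-space dimensions that apply there (value kind, expression kind).
sites = [
    {"location": "foo.c:42",
     "value_kinds": ["literal", "scalar", "array", "pointer"],
     "expression_kinds": ["unary", "binary", "array_index"]},
]

def seeding_plan(sites, variants_per_site=4, seed=0):
    """Enumerate the full search space at each site, then keep only a subset."""
    rng = random.Random(seed)
    plan = []
    for site in sites:
        space = list(itertools.product(site["value_kinds"],
                                       site["expression_kinds"]))
        chosen = rng.sample(space, min(variants_per_site, len(space)))
        for variant in chosen:
            # each entry corresponds to one cloned fragment with one seeded
            # flaw, to be recorded in the seeding database
            plan.append((site["location"], variant))
    return plan

for location, variant in seeding_plan(sites):
    print(location, variant)
\end{verbatim}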
{ "alphanum_fraction": 0.6793139293, "avg_line_length": 58.8122270742, "ext": "tex", "hexsha": "a96c79e6ad5b76a889546ce8f9c962ccc2bd0163", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-02-19T12:29:49.000Z", "max_forks_repo_forks_event_min_datetime": "2019-02-19T01:27:51.000Z", "max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sujankh/rose-matlab", "max_forks_repo_path": "projects/bugSeeding/design.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sujankh/rose-matlab", "max_issues_repo_path": "projects/bugSeeding/design.tex", "max_line_length": 132, "max_stars_count": 4, "max_stars_repo_head_hexsha": "7597292cf14da292bdb9a4ef573001b6c5b9b6c0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maurizioabba/rose", "max_stars_repo_path": "projects/bugSeeding/design.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-12T05:32:47.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-17T13:52:21.000Z", "num_tokens": 5230, "size": 26936 }
\chapter*{Acknowledgements} Thanks to everyone, especially to my supervisors and my mum. And obviously to the ones who made that great template. Bacon ipsum dolour sit amet porchetta beef turkey, bacon turducken boudin hamburger venison ball tip. Brisket pork loin bresaola short loin ground round leberkas pastrami tongue jerky cow turducken beef ribs. Pork ribeye landjaeger prosciutto pig venison tenderloin. Swine beef ribs kielbasa, porchetta tenderloin salami venison pork belly tail. Bacon ipsum dolour sit amet porchetta beef turkey, bacon turducken boudin hamburger venison ball tip. Brisket pork loin bresaola short loin ground round leberkas pastrami tongue jerky cow turducken beef ribs. Pork ribeye landjaeger prosciutto pig venison tenderloin. Swine beef ribs kielbasa, porchetta tenderloin salami venison pork belly tail. Bacon ipsum dolour sit amet porchetta beef turkey, bacon turducken boudin hamburger venison ball tip. Brisket pork loin bresaola short loin ground round leberkas pastrami tongue jerky cow turducken beef ribs. Pork ribeye landjaeger prosciutto pig venison tenderloin. Swine beef ribs kielbasa, porchetta tenderloin salami venison pork belly tail. \ldots
{ "alphanum_fraction": 0.8284762698, "avg_line_length": 70.6470588235, "ext": "tex", "hexsha": "a8d100225e9704109c8138d0f1302775d5fa00a4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "kriskenesei/geo2020-tex", "max_forks_repo_path": "template_unchanged/front/acknow.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "kriskenesei/geo2020-tex", "max_issues_repo_path": "template_unchanged/front/acknow.tex", "max_line_length": 345, "max_stars_count": null, "max_stars_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "kriskenesei/geo2020-tex", "max_stars_repo_path": "template_unchanged/front/acknow.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 310, "size": 1201 }
\section{Preprocessing}
This section deals with the operations that can be executed outside the trim convergence loops, with the aim of boosting efficiency and improving code clarity. Preprocessing covers the assignment of time shape functions for the resolution of the rotor modes, the generation of blade mode shapes, and the setup of the trim problem.
{ "alphanum_fraction": 0.8313953488, "avg_line_length": 172, "ext": "tex", "hexsha": "161a5575d969025216d9196b271e82ddf2ebee43", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-04-20T15:44:18.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-27T21:21:19.000Z", "max_forks_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ananthsridharan/vtol_sizing", "max_forks_repo_path": "Autodoc/theory_prog_manual/Preprocessing.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_issues_repo_issues_event_max_datetime": "2021-10-04T18:19:59.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-08T10:26:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ananthsridharan/vtol_sizing", "max_issues_repo_path": "Autodoc/theory_prog_manual/Preprocessing.tex", "max_line_length": 320, "max_stars_count": 10, "max_stars_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ananthsridharan/vtol_sizing", "max_stars_repo_path": "Autodoc/theory_prog_manual/Preprocessing.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-22T18:49:25.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-24T10:20:52.000Z", "num_tokens": 62, "size": 344 }
\chapter{Installation}

\section{Binary Installation}

This is the recommended method. Only two things are required:

\begin{description}
\item [{Stereo~Pipeline~Tarball.}] The main Stereo Pipeline page is \\
\url{http://ti.arc.nasa.gov/tech/asr/intelligent-robotics/ngt/stereo/}. Download the \emph{Binary} option that matches the platform you wish to use. The required \ac{ISIS} version is listed next to the name; choose the newest version you have available.

\item [{USGS~ISIS~Binary~Distribution.}] The Stereo Pipeline depends on \ac{ISIS} version 3 from the \ac{USGS}\@. Their installation guide is at \url{http://isis.astrogeology.usgs.gov/documents/InstallGuide}. You must use their binaries as-is; if you need to recompile, you must follow the \emph{Source Installation} guide for the Stereo Pipeline in Section~\ref{sec:Source-Installation}.

Note also that the \ac{USGS} provides only the current version of \ac{ISIS} and the previous version (denoted with a `\texttt{\_OLD}' suffix) via their \texttt{rsync} service. If the current version is newer than the version of ISIS that the Stereo Pipeline is compiled against, be assured that we're working on rolling out a new version. In the meantime, you should be able to sync the previous version of ISIS, which should work with Stereo Pipeline. To do so, view the listing of modules that is provided via the `\texttt{rsync isisdist.wr.usgs.gov::}' command. You should see several modules listed with the `\texttt{\_OLD}' suffix. Select the one that is appropriate for your system, and \texttt{rsync} according to the instructions.

If you need to keep current with ISIS, but don't want to lose the ability to use Stereo Pipeline while we update our binaries to the new current version of ISIS, you may wish to retain the version of ISIS that matches your version of Stereo Pipeline.
\end{description}

\subsection{Quick Start}

\begin{description}

\item[{Fetch~Stereo~Pipeline}] ~\\
Download the Stereo Pipeline from \url{http://ti.arc.nasa.gov/stereopipeline}.

\item [{Fetch~ISIS~Binaries}] ~\\
\texttt{rsync -azv -\/-delete isisdist.wr.usgs.gov::isis3\_\textit{ARCH\_OS\_VERSION}/isis .}

\item [{Fetch~ISIS~Data}] ~\\
\texttt{rsync -azv --delete isisdist.wr.usgs.gov::isis3data/data/base data/}\\
\texttt{rsync -azv --delete isisdist.wr.usgs.gov::isis3data/data/\textit{MISSION} data/}

\item [{Untar~Stereo~Pipeline}] ~\\
\texttt{tar xzvf StereoPipeline-\textit{VERSION-ARCH-OS}.tar.gz}

% Verbatim has way too much white space. Couldn't seem to take care of it with
% vskip/vspace negative. Sigh.

\item [{Add~Stereo~Pipeline~to~Path~(optional)}] ~\\
bash: \texttt{export PATH="\textit{/path/to/StereoPipeline}/bin:\$\{PATH\}"} \\
csh: \texttt{setenv PATH "\textit{/path/to/StereoPipeline}/bin:\$\{PATH\}"}

\item[Set~Up~Isis] ~\\
bash: \\
\hspace*{2em}\texttt{export ISISROOT=\textit{/path/to/isisroot}} \\
\hspace*{2em}\texttt{source \$ISISROOT/scripts/isis3Startup.sh} \\
csh: \\
\hspace*{2em}\texttt{setenv ISISROOT \textit{/path/to/isisroot}} \\
\hspace*{2em}\texttt{source \$ISISROOT/scripts/isis3Startup.csh}

\item [{Try~It~Out!}] ~\\
See the next chapter (Chapter~\ref{ch:tutorial}) for an example!
\end{description}

\subsection{Common Traps}

Here are some errors you might see, and what they could mean. Treat these as templates for problems--in practice, the error messages might be slightly different.
\begin{verbatim}
stereo: error while loading shared libraries: libisis3.so: cannot open shared object file: No such file or directory
\end{verbatim}

You just need to set up your ISIS environment.

\begin{verbatim}
dyld: Library not loaded: $ISISROOT/lib/libisis3.dylib
  Referenced from: /some/path/goes/here/bin/program
  Reason: image not found
Trace/BPT trap
\end{verbatim}

You just need to set up your ISIS environment.

\begin{verbatim}
point2mesh E0201461-M0100115-PC.tif E0201461-M0100115-L.tif
[...]
 99% Vertices:   [************************************************************] Complete!
       > size: 82212 vertices
Drawing Triangle Strips
Attaching Texture Data
zsh: bus error  point2mesh E0201461-M0100115-PC.tif E0201461-M0100115-L.tif
\end{verbatim}

The source of this problem is an old version of OpenSceneGraph in your library path. Check your \verb#LD_LIBRARY_PATH# (for Linux), \verb#DYLD_LIBRARY_PATH# (for OSX), or your \verb#DYLD_FALLBACK_LIBRARY_PATH# (for OSX) to see if you have an old version listed, and remove it from the path if that is the case. It is not necessary to remove the old versions from your computer; you just need to remove the reference to them from your library path.

\pagebreak

\section{\label{sec:Source-Installation}Source Installation}

This method is for advanced users with moderate build system experience. Some dependencies, such as ISIS and its own dependencies \emph{(like SuperLU, Qwt, CSpice)}, use custom build systems. Because of this, and because of time constraints, we cannot help with questions about how to build dependencies.

\subsection{Dependency List}

This is a list of the direct dependencies of Stereo Pipeline. Some libraries (like \ac{ISIS}) have dependencies of their own which are not covered here.

\begin{figure}[h]
\centering
\includegraphics[width=5in]{graph/asp_deps.pdf}
\caption{Graph outlining some dependencies. Not all of ISIS's are shown.}
\end{figure}

\begin{description}
\item [{Boost}] (Required) \url{http://www.boost.org/}\\
Version 1.35 or greater is required. Along with the base library set, the Stereo Pipeline specifically requires: Program Options, Filesystem, Thread, and Graph.

\item [{LAPACK}] (Required)\\
There are many sources for LAPACK\@. For OSX, you can use the vecLib framework. For Linux, you can use the netlib LAPACK/CLAPACK distributions, Intel's MKL, or any of a number of others. The math is unfortunately not a hotspot in the code, though, so using a faster LAPACK implementation will not change much. Therefore, you should probably just use the LAPACK your package manager (RPM for Red Hat Linux, Yast for SuSE, etc.) has available.

\item [{ISIS}] (Recommended) \url{http://isis.astrogeology.usgs.gov/documents/InstallGuide}\\
The \ac{USGS} \acf{ISIS} library. This library handles the camera models and image formats used for instruments. \ac{ISIS} is usually downloaded and used as a binary distribution. Compilation of \ac{ISIS} from source can be challenging, and their support forums may provide assistance: \url{https://isis.astrogeology.usgs.gov/IsisSupport/}. Cleaning and modification of their source code may be required if you would like to use a newer version of ISIS's dependencies that may already be available on your system. Because of the extreme difficulty of building ISIS (most problems will need to be sorted out by the users themselves), we do not recommend that users try to build ISIS themselves. This leaves using the ISIS and ASP binaries available online as the only practical option.
\item [{OpenSceneGraph}] (Optional) \url{http://www.openscenegraph.org/}\\
OpenSceneGraph is required to run the \texttt{point2mesh} tool (see Section~\ref{point2mesh}). This library provides a convenient way of building OpenGL graphics through the method of scene graphs. It also provides a file format and utilities for displaying these scene graphs. The output file of \texttt{point2mesh} is an OpenSceneGraph binary scene graph format.

\item [{Vision~Workbench}] (Required) \url{http://ti.arc.nasa.gov/visionworkbench/}\\
Vision Workbench forms much of the core processing code of the Stereo Pipeline. Vision Workbench contains almost all of the image processing algorithms, such as image filters, image arithmetic, stereo correlation, and triangulation. This means that Stereo Pipeline is essentially a collection of applications that implement Vision Workbench in the context of ISIS.

\item [{Python 2.4+}] (Required) \url{http://www.python.org}\\
Some applications of Stereo Pipeline are actually Python scripts.
\end{description}

\subsection{Build System}

The build system is built on GNU autotools. In-depth information on autotools is available from \url{http://sources.redhat.com/autobook/}. The basics, however, are simple. To compile the source code, first run~\verb#./configure# from the top-level directory. This will search for the dependencies and enable the modules you requested. There are a number of options that can be passed to \verb#configure#; many of these options can also be placed into a \verb#config.options# file (in the form of \verb#VARIABLE="VALUE"#) in the same directory as \verb#configure#. Table \ref{tab:Supported-configure-options} lists the supported options.

\begin{table}
\begin{longtable}{|l|l|c|m{5cm}|}
\hline
\textbf{Variable Name} & \textbf{Configure option} & \textbf{Default} & \textbf{Function}\tabularnewline
\hline
\hline
\small\verb#PREFIX# & \small\verb#--prefix# & /usr/local & Set the install prefix (ex: binaries will go in \$PREFIX/bin)\tabularnewline
\hline
\small\verb#HAVE_PKG_XXX# & \small\verb#--with-xxx# & auto & Set to {}``no'' to disable package XXX, or a path to only search that path\tabularnewline
\hline
\small\verb#PKG_PATHS# & \small\verb#--with-pkg-paths# & many & Prepend to default list of search paths\tabularnewline
\hline
\small\verb#ENABLE_PKG_PATHS_DEFAULT# & \small\verb#--enable-pkg-paths-default# & yes & Append built-in list of search paths\tabularnewline
\hline
\small\verb#ENABLE_OPTIMIZE# & \small\verb#--enable-optimize# & 3 & Level of compiler optimization\tabularnewline
\hline
\small\verb#ENABLE_DEBUG# & \small\verb#--enable-debug# & no & How much debug information\tabularnewline
\hline
\small\verb#ENABLE_CCACHE# & \small\verb#--enable-ccache# & no & Use \verb#ccache# if available\tabularnewline
\hline
\small\verb#ENABLE_RPATH# & \small\verb#--enable-rpath# & no & Set RPATH on built binaries and libraries\tabularnewline
\hline
\small\verb#ENABLE_ARCH_LIBS# & \small\verb#--enable-arch-libs# & no & Pass in 64 or 32 to look for libraries by default in \verb#lib64# or \verb#lib32#\tabularnewline
\hline
\small\verb#ENABLE_PROFILE# & \small\verb#--enable-profile# & no & Use function profiling\tabularnewline
\hline
\small\verb#PKG_XXX_CPPFLAGS# & & & Append value to CPPFLAGS for package XXX\tabularnewline
\hline
\small\verb#PKG_XXX_LDFLAGS# & & & Prepend value to LDFLAGS for package XXX\tabularnewline
\hline
\small\verb#PKG_XXX_LIBS# & & & Override the required libraries for package XXX\tabularnewline
\hline
\small\verb#PKG_XXX_MORE_LIBS# & & & Append to
required libraries for package XXX\tabularnewline
\hline
\small\verb#ENABLE_EXCEPTIONS# & \small\verb#--enable-exceptions# & yes & Use C++ exceptions? Disable at own risk.\tabularnewline
\hline
\small\verb#ENABLE_MULTI_ARCH# & \small\verb#--enable-multi-arch# & no & OSX Only: Build \emph{Fat} binary with space-separated list of arches\tabularnewline
\hline
\small\verb#ENABLE_AS_NEEDED# & \small\verb#--enable-as-needed# & no & Pass --as-needed to GNU linker. Use at your own risk.\tabularnewline
\hline
\end{longtable}\caption{\label{tab:Supported-configure-options}Supported configure options}
\end{table}

\pagebreak

\section{\label{sec:Settings}Settings Optimization}

Finally, the last thing to be done for Stereo Pipeline is to set up Vision Workbench's render settings. This step is optional, but for best performance some thought should be applied here.

Vision Workbench is a multithreaded image processing library used by Stereo Pipeline. The settings by which Vision Workbench processes data are configured via a \texttt{~/.vwrc} file hidden in your home directory. Below is an example of one used on a Mac Pro workstation.

\begin{minipage}{0.94\linewidth}
\small\listinginput{1}{LogConf.example}
\end{minipage}

There are many options that can be set in a file like the example above. Let's cover the most important options and the concerns the user should have when selecting a value.

\pagebreak

\begin{description}

\item[default\_num\_threads \textnormal (default=2)] \hfill \\
This sets the maximum number of threads that can be used for rendering. When stereo's \texttt{subpixel\_rfne} is running, you'll probably notice 10 threads running when you have \texttt{default\_num\_threads} set to 8. This is not an error; you are seeing 8 threads used for rendering, 1 thread holding \texttt{main()}'s execution, and finally 1 optional thread acting as the interface to the file driver.

It is usually best to set this parameter equal to the number of processors on your system. Be sure to include the number of logical processors in your arithmetic if your system supports hyper-threading. Adding more threads for rasterization increases the memory demands of Stereo Pipeline. If your system is memory limited, it might be best to lower the \texttt{default\_num\_threads} option. Remember that 32-bit systems can only allocate 4 GB of memory per process. Despite Stereo Pipeline being a multithreaded application, it is still a single process.

\item[write\_pool\_size \textnormal (default=21)] \hfill \\
The \texttt{write\_pool\_size} option sets the maximum size of the pool of tiles waiting to be written to disk. Most file formats do not allow tiles to be written arbitrarily out of order. Most, however, will let rows of tiles be written out of order, while tiles inside a row must be written in order. Because of this constraint, after a tile is rasterized it might spend some time waiting in the `write pool' before it can be written to disk. If the `write pool' fills up, only the next tile in order can be rasterized, which makes Stereo Pipeline perform as if it were using only a single processor. Increasing the \texttt{write\_pool\_size} makes Stereo Pipeline better able to use all processing cores in the system. Having this value too large can mean excessive use of memory; 32-bit systems again can run out of memory if this value is too high, for the same reason as described for \texttt{default\_num\_threads}.
\item[system\_cache\_size \textnormal (default=805306368)] \hfill \\
Accessing a file from the hard drive can be very slow. It is especially bad if an application needs to make multiple passes over an input file. To increase performance, Vision Workbench will usually leave an input file stored in memory for quick access. This file storage is known as the `system cache', and its maximum size is dictated by \texttt{system\_cache\_size}. The default value is 768 MB.

Setting this value too high can cause your application to crash. It is usually recommended to keep this value around 1/4 of the maximum available memory on the system. For 32-bit systems, this means you should not set this value greater than 1 GB. This property is specified in bytes.

\item[0 = *.progress] \hfill \\
This line does not assign a value to progress; rather, it sets the logging level of progress bars. In the above example, this statement is made under the \texttt{[logfile console]} section. This means that only progress bars of type \texttt{ErrorMessage} will ever be printed to the console. If you wanted progress bars up to type \texttt{InfoMessage}, then the line in the log file should be changed to:

\begin{verbatim}
[logfile console]
20 = *.progress
\end{verbatim}

\end{description}
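Putting the settings above together, a minimal \texttt{.vwrc} for an 8-core machine with 16 GB of RAM might look like the following. Only the settings discussed in this section are shown, and the \texttt{[general]} section header is an assumption; compare against the \texttt{LogConf.example} listing shown earlier before copying it.

\begin{verbatim}
[general]
default_num_threads = 8
write_pool_size = 40
system_cache_size = 4294967296

[logfile console]
0 = *.progress
\end{verbatim}

Here the thread count matches the number of cores, and the cache size is roughly one quarter of the available memory (4 GB), per the guidance above.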
{ "alphanum_fraction": 0.7350325651, "avg_line_length": 48.0963855422, "ext": "tex", "hexsha": "beac8335c24785add488d6f628552ee1e65bb3cf", "lang": "TeX", "max_forks_count": 27, "max_forks_repo_forks_event_max_datetime": "2020-01-10T01:31:17.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-15T04:20:50.000Z", "max_forks_repo_head_hexsha": "8b9c0bcab258c41d10cb2973d97722765072a7bf", "max_forks_repo_licenses": [ "NASA-1.3" ], "max_forks_repo_name": "imagineagents/StereoPipeline", "max_forks_repo_path": "docs/book/installation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8b9c0bcab258c41d10cb2973d97722765072a7bf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "NASA-1.3" ], "max_issues_repo_name": "imagineagents/StereoPipeline", "max_issues_repo_path": "docs/book/installation.tex", "max_line_length": 270, "max_stars_count": 29, "max_stars_repo_head_hexsha": "8b9c0bcab258c41d10cb2973d97722765072a7bf", "max_stars_repo_licenses": [ "NASA-1.3" ], "max_stars_repo_name": "nasa/StereoPipeline", "max_stars_repo_path": "docs/book/installation.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-19T22:55:29.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-06T01:28:21.000Z", "num_tokens": 4112, "size": 15968 }
\subsection{SLAM Testing}
\label{subsec:slamtesting}

\input{figures/rosgraph/slam.tex}

SLAM (simultaneous localization and mapping) testing is performed in the simulation to evaluate how well the virtual room simulates the real room. In this test, the robot model moves around the room while mapping it using the SLAM method. As shown in Figure~\ref{fig:rosgraphslam}, the \lstinline{rtabmap} node is connected to the \lstinline{depth_camera_plugin} and \lstinline{navigation_plugin} nodes, which send the color image, depth image, and odometry data used in the room mapping process.

\input{figures/slam.tex}

As shown in Figure~\ref{fig:slam}, the room mapping process produces a three-dimensional map composed of point clouds that capture the shape of the room tested in the simulation. This map is displayed in the GUI owned by the \lstinline{rtabmap} node. The map also visualizes the path that the robot has traversed as a line with blue dots.
\chapter{Project Description}
\section{Basics}
%Can put this into the introduction
We use the following notation:
\begin{itemize}
	\item Matrices are written in uppercase bold, e.g. $\mathbf{X}$.
	\item Vectors are written in lowercase bold, e.g. $\mathbf{x}$.
	\item Scalars are written in lowercase or uppercase. Lowercase indicates that it is a counting variable and uppercase that it is one of the limits of a finite set.
	\item All functions, e.g. $f(x)$, are applied element-wise whenever the argument is a matrix, e.g. $x = \mathbf{X}$.
	\item The dot product is indicated by simple concatenation of two matrices/vectors, e.g. $\mathbf{X} \mathbf{y}$.
	\item Element-wise multiplication is indicated as $\mathbf{X} \otimes \mathbf{Z}$.
\end{itemize}

Neural networks are a statistical model for regression or classification. Even though they do not naturally produce any classification output, we can use the regressed output as a classifier. Overall we have two different modes: training and evaluation. In the training phase we use the back-propagation algorithm to train the network, adjusting it so that it produces output which is close to our target output. A neural network contains $L$ hidden layers, which are called hidden since their output is not directly observable. In our simple case the weights are randomly initialized and then updated using the back-propagation rule. A basic neural network therefore has the following parameters:
\begin{itemize}
	\item The weights from a neuron $k$ in layer $a$ to a neuron $j$ in layer $b$
	\item The input to the network, usually a vector $\mathbf{x}$ with $k$ values
	\item The target which should be estimated, usually a vector $\mathbf{t}$ with $q$ values.
	\item The learning rate $\eta$, which indicates how fast the network learns.
\end{itemize}
Moreover, we can add advanced techniques such as momentum to speed up the training.

\subsection{Gradient Descent}
Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, it takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent \cite{Kiwiel2001,Qian1999}.

Gradient descent is based on the observation that if the multivariable function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ decreases fastest if one goes from $\mathbf{a}$ in the direction of the negative gradient of $F$ at $\mathbf{a}$, $-\nabla F(\mathbf{a})$ \cite{Yuan1999}. It follows that, if
\begin{equation}
\mathbf{b} = \mathbf{a}-\gamma\nabla F(\mathbf{a})
\end{equation}
for $\gamma$ small enough, then $F(\mathbf{a})\geq F(\mathbf{b})$. With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local minimum of $F$, and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \dots$ such that
\begin{equation}
\mathbf{x}_{n+1}=\mathbf{x}_n-\gamma_n \nabla F(\mathbf{x}_n),\ n \ge 0.
\end{equation}
Finally we have \cite{Cauchy1847}:
\begin{equation}
F(\mathbf{x}_0)\ge F(\mathbf{x}_1)\ge F(\mathbf{x}_2)\ge \cdots.
\end{equation}

\subsection{Minibatch Stochastic Gradient Descent}
Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum:
\begin{equation}
Q(w) = \sum_{i=1}^n Q_i(w)
\end{equation}
where the parameter $w^*$ which minimizes $Q(w)$ is to be estimated. Each summand function $Q_i$ is typically associated with the $i$-th observation in the data set (used for training).
When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations:
\begin{equation}
w := w - \eta \nabla Q(w) = w - \eta \sum_{i=1}^n \nabla Q_i(w)
\end{equation}
where $\eta$ is the learning rate. Since in many cases a single update requires estimating the gradient over the whole dataset, the computation would take too much time. Oftentimes we only need a rough estimate of the gradient, not a precise one. What stochastic gradient descent does is remove the sum and use only a single sample $i$ to estimate the gradient:
\begin{equation}
w := w - \eta \nabla Q_i(w)
\end{equation}
In practice this turns out to be less effective than batch gradient descent, since we only get a very rough estimate of the gradient. Therefore the minibatch gradient descent technique is used. The difference here is that while batch gradient descent uses $n$ samples and stochastic gradient descent uses only $1$, minibatch gradient descent uses $b$ training samples, where $b \ll n$. Typical sizes for $b$ are in the range $b \in \{2,\ldots,100\}$.
Therefore, in minibatch stochastic gradient descent we randomly subset the training set, take only some samples out of it, and estimate the gradient from this seen sample set. The important thing is to randomize the training set, since otherwise only a small portion of the training set will be seen, which leads to wrong gradients.

\subsection{The Forward Backward procedure in detail}
To train the network we use the standard back-propagation algorithm, which essentially does a forward and a backward pass of the network in sequence.
Firstly we initialize the weights in a matrix.
\begin{align}
\mathbf{W}_{j \times k } =
 \begin{bmatrix}
  w_{11} & w_{12} & \ldots & w_{1k}\\
  w_{21} & \ddots  &  & \vdots\\
  \vdots  &   & \ddots & \vdots\\
  w_{j1} & \ldots & \ldots & w_{jk}\\
 \end{bmatrix},
 \mathbf{x}_{k \times 1} =
 \begin{bmatrix}
 x_{1} \\
 \vdots\\
 \vdots \\
 x_{k}
 \end{bmatrix}
\end{align}
\paragraph{Forward propagation} is one of the two passes the network needs to do (hence its name). In forward propagation the network calculates its predicted output. The output of the hidden layer $j$ of the network can be calculated as the weighted sum of the inputs
\begin{align}
\mathbf{nnet}_j = \sum_{k=1}^{n}w_{kj}x_k = \mathbf{W} \mathbf{x}
\end{align}
Even though this method should already work, we add a bias to every node to increase the learning speed and avoid the network getting stuck.
\begin{align}
\mathbf{nnet}_j = \sum_{k=1}^{n}w_{kj}x_k + b_j = \mathbf{W} \mathbf{x} + \mathbf{b}
\end{align}
Later we will see that back-propagation needs some variables which can already be precomputed during the forward step. These two variables are the output $\mathbf{o}_j$ of layer $j$ and the derivative of the output w.r.t.\ the weighted sum $\mathbf{nnet}_j$.
We store the derivatives of the current layer in a matrix $\mathbf{D}$:
\begin{align}
\mathbf{D}_j = \frac{\partial \mathbf{o}_j}{\partial \mathbf{nnet}_j}
\end{align}
In the case of a sigmoid activation function, we obtain:
\begin{align}
\mathbf{D}_j = \varphi \left( \mathbf{nnet}_j \right) \left( 1 - \varphi \left(\mathbf{nnet}_j \right) \right)
\end{align}
For the outputs, we simply feed forward the network and store the output $\mathbf{o}_j$ in our buffered lists.
\begin{align}
\mathbf{o}_j = \varphi \left( \mathbf{Wx+b} \right)
\end{align}
As soon as the feed-forward pass is done, we have produced an output of the network, which we denote as $\mathbf{o}_L$. Now we need to decide how close our output is to the targeted output $\mathbf{t}$. For this we use a cost function; in usual cases it is sufficient to use the mean squared error ($MSE$), which is defined as:
\begin{equation}
\label{eq:mse}
\mathbf{MSE} = \frac{1}{2} \left( \mathbf{o}_L - \mathbf{t} \right) ^2
\end{equation}
In (\ref{eq:mse}) we need to make sure that the last output layer has the same dimensions as the target output.
\paragraph{Backpropagation} starts once the forward pass is complete. The difficulty is that the hidden layers do not produce any observable output, so we cannot modify their weights directly, since we do not know how large their error was. With back-propagation we calculate the error at the output layer $L$ and then propagate this error back to the hidden layers. Thereby we update all $L-1$ weight matrices and biases.
At first we calculate the differences between the target output and the estimated output.
\begin{align}
\boldsymbol{err} &= \mathbf{o}_L - \mathbf{t}\\
\boldsymbol{\delta}_L &= \boldsymbol{err} \otimes \mathbf{D}_L\\
\nabla \mathbf{W}_L &= \mathbf{o}_L \boldsymbol{\delta}_L^T
\end{align}
We store both the deltas and the nablas to later update the weights (with the $\nabla$s) and the biases (using the $\boldsymbol{\delta}$s). From here on we calculate the remaining layers backwards:
\begin{align}
\boldsymbol{\delta}_i &= \mathbf{D}_i \otimes \left( \mathbf{W}_{i+1}^T \boldsymbol{\delta}_{i+1} \right)\\
\nabla \mathbf{W}_i &= \mathbf{o}_i \boldsymbol{\delta}_i^T
\end{align}
where we again store the deltas and the nablas to later update the weights.
To update the weights, we use the usual gradient descent update rule:
\begin{align}
\mathbf{W}^{*} = \mathbf{W} - \eta \nabla \mathbf{W}_i^T
\end{align}
To update the biases, we use essentially the same update rule, but only consider the given $\boldsymbol{\delta}_{i}$s.
\begin{equation}
\mathbf{b}^{*} = \mathbf{b} - \eta \boldsymbol{\delta}_i^T
\end{equation}
Finally, if we would like to improve the convergence speed we can apply momentum. Momentum adds extra ``velocity'' along the gradient direction by reusing the previous weight update $\Delta\mathbf{W}_{i-1}$ ($i$ denotes the current iteration), scaled by the momentum coefficient $\alpha$:
\begin{equation}
\mathbf{W}_{i+1} = \mathbf{W}_i - \eta \nabla \mathbf{W}_i + \alpha \Delta\mathbf{W}_{i-1}
\end{equation}
The momentum term is initialized with 0 and takes effect after one iteration of gradient descent.

\section{Implementation details}
OpenCL offers multiple language bindings. It is natively written in C, but also ships with a C++ wrapper. We used the C++ wrapper in our project since it is more natural to use in a C++ project. First, one needs to include the necessary header in any class that uses OpenCL.
\begin{lstlisting}[caption=OpenCL C++ header]
#include <CL/cl.hpp>
\end{lstlisting}
\paragraph{The interface to OpenCL} was written in C++11 and makes heavy use of the variadic template arguments feature. For this small scale task, the interface is definitely too complex, but it can be reused for any other OpenCL task.
\label{lst:mult}
\begin{lstlisting}[caption=Example Kernel function]
__kernel void mult(const int wSrc, __global const float* A,__global const float* B,__global float* output)
{
	const int idx = get_global_id(0);
	const int idy = get_global_id(1);
	output[idy*wSrc+idx] = A[idy*wSrc+idx] * B[idy*wSrc+idx];
}
\end{lstlisting}
OpenCL uses externally defined functions (coined kernels), which are compiled and run at runtime. Therefore one needs to write a kernel for one's needs. An example kernel can be seen in \ref{lst:mult}. In our case we defined various kernels for dot products and other matrix operations.
Since OpenCL is a multi-device, multi-platform GPU/CPU interface, one first needs to figure out which kind of platform (e.g. NVIDIA, AMD, Intel) the current machine is using and then decide which accelerator will be used (see \ref{lst:plat}).
\label{lst:plat}
\begin{lstlisting}[caption=Get Platforms and devices]
void exampleplatform(const char * programpath){
	std::vector<cl::Platform> all_platforms;
	cl::Platform::get(&all_platforms);
	//Assume having only one platform
	cl::Platform defaultplatform = all_platforms[0];
	std::vector<cl::Device> all_gpu_devices;
	//CL_DEVICE_TYPE_GPU can also be CL_DEVICE_TYPE_CPU for CPU
	defaultplatform.getDevices(CL_DEVICE_TYPE_GPU, &all_gpu_devices);
	//Assume having only one device / take the first
	cl::Device device = all_gpu_devices.front();
	//Init the current context with the device
	cl::Context context(device);
	//We wrote a helper function here to get the string data out of the kernel file
	const char* content = util::file_contents(programpath);
}
\end{lstlisting}
Moreover, since OpenCL code is compiled at runtime, one needs to extract the code from the kernel file (e.g. kernel.cl) and hand this content to OpenCL. As soon as the context is initialized we can run our kernels on the device. The problem here is that each kernel is defined with its own parameters and types, but we do not know these at compile time. To have a universal interface we used the variadic template feature to allow the programmer a very straightforward way to use any kernel function.
\label{lst:kernel}
\begin{lstlisting}[caption=Kernel usage]
void runKernel(const char *kernelname){
	cl::Program::Sources sources;
	//contents is the already read out content of the kernels file (e.g. kernels.cl)
	sources.push_back(std::make_pair(contents,strlen(contents)+1));
	//Init the program with the context and the source
	cl::Program program(context,sources);
	//Build the program on the device
	program.build({device});
	cl::CommandQueue queue(context,device);
	//The kernel object allows us to set args for the kernel
	cl::Kernel kernel_operator(program,kernelname);
	//kernel_operator allows us to send arguments to the kernel by calling
	//kernel_operator.setArg(ARGNUMBER,ARGUMENT);
	//Init space on the device using cl::Buffers
	//Send arguments to the device with
	// queue.enqueueWriteBuffer()
	// ...
	//Wait until the arguments did arrive on the device
	queue.finish();
	//Execute!
	cl::Event event;
	queue.enqueueNDRangeKernel(kernel_operator,cl::NullRange,cl::NDRange(10),cl::NullRange,NULL,&event);
	//Wait until the execution has finished
	event.wait();
	// Read out the results by calling
	// queue.enqueueReadBuffer()
}
\end{lstlisting}
The usual kernel execution can be seen in \ref{lst:kernel}. Even though it is recommended to use OpenCL in this fashion, such a function cannot cope with different arguments for different kernels. We would therefore need to initialize a device and context every time (or at least as a singleton) and then rewrite the argument passing for every kernel independently. This would take a lot of time if the GPU is used extensively. Our solution to that problem is as follows:
\begin{lstlisting}[caption=Adding kernel arguments in a dynamic way]
class OpenCL{
	//Constructor ..... etc

	// Hook that ends the compile-time iteration over the tuple
	template<std::size_t P=0,typename... Tp>
	typename std::enable_if<P == sizeof...(Tp), void>::type
	addkernelargs(std::tuple<Tp...> &&,cl::Kernel &,cl::CommandQueue &,std::vector<cl::Buffer> &) const{
		// Do nothing
	}

	// Start of the iteration
	template<std::size_t P = 0, typename... Tp>
	typename std::enable_if< P < sizeof...(Tp), void>::type
	addkernelargs(std::tuple<Tp...> && t,cl::Kernel &kernel,cl::CommandQueue &queue,std::vector<cl::Buffer> &outputbuffers) const{
		// Add the value of the current tuple element to the kernel args;
		// overload resolution decides which kind of kernel arg it is
		addkernelarg(P, std::get<P>(t), kernel,queue,outputbuffers);
		// Recurse to handle the remaining args
		addkernelargs<P + 1, Tp...>(std::forward<std::tuple<Tp...>>(t), kernel,queue,outputbuffers);
	}

	// Adding a std::vector to the kernel args
	template<typename T>
	void addkernelarg(std::size_t i, std::vector<T> const & arg, cl::Kernel & kernel,cl::CommandQueue &queue,std::vector<cl::Buffer> &outputbuffers) const{
		cl::Buffer buffer(this->context,CL_MEM_READ_WRITE,arg.size()*sizeof(T));
		queue.enqueueWriteBuffer(buffer,CL_FALSE,0,sizeof(T)*arg.size(),&(arg[0]));
		kernel.setArg(i,buffer);
		//Remember the buffer so the result can be read back later
		outputbuffers.push_back(buffer);
	}

	// Adding a plain array to the kernel args
	template<typename T,std::size_t N>
	void addkernelarg(std::size_t i, T const (& arg)[N], cl::Kernel & kernel,cl::CommandQueue &queue,std::vector<cl::Buffer> &outputbuffers) const{
		cl::Buffer buffer(this->context,CL_MEM_READ_WRITE,N*sizeof(T));
		queue.enqueueWriteBuffer(buffer,CL_FALSE,0,sizeof(T)*N,&arg);
		kernel.setArg(i,buffer);
		outputbuffers.push_back(buffer);
	}

	// Adding a scalar constant to the kernel; no buffer is needed,
	// the value is passed directly by value
	template<typename T>
	void addkernelarg(std::size_t i, T const & arg, cl::Kernel & kernel,cl::CommandQueue &,std::vector<cl::Buffer> &) const{
		kernel.setArg(i,arg);
	}
};
\end{lstlisting}
This recipe can be used to do the same for reading the buffers out after the kernel has finished. Therefore we can create a highly dynamic wrapper class for OpenCL.
\subsection{Using the Interface}
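A minimal usage sketch is given below. It follows the listings above, but the constructor and the exact call signature of the variadic entry point shown here (\texttt{runKernel} taking the kernel name followed by the kernel arguments) are assumptions for illustration and may differ from the actual implementation.
\begin{lstlisting}[caption=Hypothetical usage of the OpenCL wrapper]
// Element-wise multiplication of two vectors with the 'mult' kernel from above.
// The wrapper is assumed to compile the kernel file, forward the arguments via
// addkernelargs and read the result buffers back after execution.
std::vector<float> A(100, 2.0f), B(100, 3.0f), output(100, 0.0f);
const int wSrc = 10;

OpenCL opencl("kernels.cl");                   // assumed constructor
opencl.runKernel("mult", wSrc, A, B, output);  // assumed variadic entry point
// 'output' now holds the element-wise products computed on the device.
\end{lstlisting}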
\section{Experiments}
\label{s:experiments}

The AlexNet models were trained on a desktop with an 8-core Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz and an Nvidia Titan-X GPU with 12GB of onboard memory.
The Inception and ResNet models were trained on a server with two 22-core Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz CPUs and a Tesla P100 GPU with 16GB of onboard memory.

We trained both our Inception-V3 and ResNet-V2-101 models using the TensorFlow~\cite{tensorflow} Slim framework. We trained our AlexNet models by modifying the example code. All models were trained with an Adam optimizer~\cite{adam}. For Inception and ResNet, our best models were trained with a $0.8$ keep probability for dropout, an initial learning rate of $0.1$, and optimizer parameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate was decayed every 25 batches. We used a batch size of 128 for ImageNet and 60 for ResNet. Otherwise the initialization for Inception V3 is identical to that in the Inception~\cite{inception} paper. Our best AlexNet model was trained with a $0.5$ keep probability for dropout, an initial learning rate of $0.001$, and the TensorFlow default parameters for the Adam optimizer.

%% subsection: Grid Search
\input{grid_search}
\TOWRITE{ALL}{Proofread 3.3 pass 2} \label{sect:mgt} The management and decision-making approach of \TheProject has been tailored to the real needs of the project and \emph{overcomplexity has been avoided in favour of an efficient management organisation able to effectively guarantee project success}. To this end, the number of committees has been kept to the minimum and the rules will be flexible whilst still providing a good basis for efficient implementation of the project. Project management activities in \TheProject comprise a wide array of activities, including scientific and administrative management, guidance on decision making, contractual management, financial management, supervision of and compliance with ethical standards, management of knowledge and IPR issues, and coordination of communication activities. Most partners have extensive experience with EU funded projects, while two SMEs are recent start-ups. \TheProject management will thus be tailored to varying needs and requirements of individual partners. \subsubsection{Management} The project will be coordinated by \site{SRL}, represented by Benjamin Ragan-Kelley (Project Coordinator), who has been a developer and leader in the Jupyter project since its inception in 2014, and IPython before that, and a site and work package leader in prior H2020 project, OpenDreamKit, and currently leads the JupyterHub and Binder open source projects in the Jupyter community. The Project Coordinator will be assisted by a part-time (50\%) Project Manager, Katarina Subakova, who has significant experience with Horizon 2020, both as a project manager and as a head of Horizon 2020 Helpdesk. Additional feedback and expertise will be brought by Financial, Legal and European affairs officers from Simula. In addition, Hans Fangohr will act as Project Deputy, being constantly updated by the coordinator and manager on the project evolution in order to be able to temporarily take over the coordination in case the coordinator would be incapacitated. \subsubsection{Organisational structure and decision-making} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{management_structure.pdf} \caption{Management structure} \label{figure.management} \end{figure} The organisational structure, shown in the Figure~\ref{figure.management}, has been designed to enable efficient coordination of the project --- the development and evaluation of an Open Science toolkit integrating several previously separated tools and software and involving both academic actors and industrial stakeholders. It is jointly agreed on by \TheProject consortium and adapted to its size and composition, the tasks and duties of all partners involved. We have designed the management structure and procedures to deal in a flexible manner with the following challenges: \begin{compactitem} \item to integrate all consortium members and to mobilise their expertise, knowledge and networks at every stage of the project; \item to give the maximum attention to the end-users needs and requirements; \item to continuously involve expertise and knowledge of relevant stakeholders and their networks, and \item to efficiently coordinate the project implementation in a collaborative environment and ensure its sustainability. 
\end{compactitem} The design has been largely adapted from that of OpenDreamKit which has proved very effective for this type of project, with some simplification as suggested by past experience: \begin{itemize} \item Suppression of the Quality Review Board: its role can effectively be subsumed by the steering committee; also the head of OpenDreamKit's Quality Review Board, Hans Fangohr, will carry over the accumulated experience and best practices, and the lessons learned from the OpenDreamKit quality review board will be shared within the BOSSEE consortium. \item Suppression of the End User Group: as in OpenDreamKit, all participants are either end users themselves, or in close contact with such end users. To guarantee the effectiveness of this simplified structure, we include end users in the advisory board. \end{itemize} The coordinator acts as an intermediary between the Partners and the European Commission. The coordinator will oversee the project planning, monitor that execution is carried out in time and that the objectives are achieved and closely interact with the project officer for project monitoring and delivery of the performance indicators. The Project Manager will ensure efficient day-to-day management of the project, reporting, feedback to partners on administrative, financial and legal issues, tracking of resource allocation and consumption, and communication inside and outside the consortium. The resources of all partners will be mobilised by decentralisation of responsibilities through the assignment of leadership for work packages. Clear distribution of tasks, efficient decision making mechanisms and a sound financial management will safeguard the achievement of the project's objectives. \ifgrantagreement\else \subsubsection{Milestones} For a description of the milestones and their motivations see Section~\ref{sec:milestones}; a tabulation of the milestones, which work packages are involved, and a means of verification can be seen in Table~\ref{tab:milestonetable}. \milestonetable \fi \subsubsection{Project roles} %Project Coordinator and Project Manager can meet any time and at least %twice a week. The following bodies will form the organisational structure of the \TheProject project : Coordination Team (CT), Steering Committee (SC), Advisory Board (AB).%, End User Group (EUG) and Quality Review board (QRB). \begin{description} \item{\textbf{Coordination Team (CT)}} \nobreak\par \textbf{Members:} The CT is composed of the Work Package leaders and headed by the Project Coordinator, assisted by the Project Manager. \textbf{Responsibilities:} The CT is an executive body in charge of the project implementation and monitoring. It takes operational decisions necessary for the smooth execution of the project. \textbf{Tasks:} \begin{compactenum} \item Monitoring the timely execution of the tasks and achievement of the objectives; \item Preparation of scientific and financial progress reports; \item Controlling Work Package progress by assessing it through technical reports developed by the partners; \item Making proposals to the Steering Committee of re-allocation of tasks, resources and financial needs for the fulfilment of the work plan; \item Preparing the drafts and validating the project deliverables to be submitted to the Commission. \end{compactenum} \textbf{Meetings:} The coordination team will meet every 6 months. If necessary, extra meetings will be arranged. 
\item{\textbf{Steering Committee (SC)}}\nobreak\par

\textbf{Members:} The SC is chaired by the Project Coordinator and includes one representative from each partner organisation.

\textbf{Responsibilities:} The SC is the decision-making body in charge of the strategic orientation of the project. It takes decisions on scientific directions, re-allocation of resources, consortium changes and intellectual property rights.

\textbf{Meetings:} Annual meetings. If necessary, extra meetings will be arranged. Written minutes of each meeting will be produced, which shall be the formal record of all decisions taken. A procedure for comment and acceptance is proposed.

\textbf{Voting procedure:} The SC shall not deliberate until a quorum of three-fourths (3/4) of all Members are present (possibly through video-conference) or represented. Each Member shall have one vote. The SC will work on consensual decisions as much as possible and resort to voting only if unavoidable. Voting decisions shall be taken by a majority of two-thirds (2/3) of votes with a quorum of two-thirds (2/3) of the whole set of members. Exceptional decisions (large changes to the budget ($\geq$ 100k euros), evolution of the consortium, firing the coordinator, resolving ambiguity about whether something is a hard question) shall be taken by a majority of three-fourths (3/4) of votes with a quorum of three-fourths (3/4) of the whole set of members. Votes can be electronic.

\item{\textbf{Advisory board (AB)}} \nobreak\par

\textbf{Members:} top-level experts and end-users from partner and external organisations, from a variety of disciplines and from both the academic and industrial sectors. Together, they have a deep understanding of both market and technical problems, and an awareness of opportunities.

\textbf{Responsibilities:} to give an independent opinion on steering, scientific and innovation matters, in order to guarantee quality implementation of the project, adequacy to the end-users' needs, efficient innovation management, and project sustainability.

% \textbf{Tasks:} to control the project execution from the point of
% view of the end user needs and requirements, to test the tool and to
% detect its potential shortcomings at the early stages, to propose
% adaptation measures.

\textbf{Meetings:} at the request of the Steering Committee, including attending annual project meetings.

\end{description}

\subsubsection{Project management tools and procedures}

Project partners and management bodies will communicate through a dedicated project web platform, maintained by the Project Manager. WP leaders will monitor the progress of participants in their WP at least monthly, and participants will inform their WP leaders when problems are encountered. Major problems will be discussed in (teleconference) meetings with the Project Coordinator and Project Manager. Each WP leader will be free to organise extra meetings with WP partners, if necessary. Scientific and financial progress reports will be collected, assembled and transmitted to the Project Coordinator by the WP leaders through the web platform.

On the basis of the Progress Reports, the Coordination Team will monitor the progress of the project, identify bottlenecks and find solutions for these problems. Where needed, adaptations to the project plan will be made, with the aim of ensuring the delivery of the project results as agreed with the EC. Major adaptations need to be approved by the Steering Committee.

The Coordination Team will ensure efficient innovation management.
They will carefully monitor new opportunities in order to suggest, if necessary, new directions to the Steering Committee. For legal aspects, the latter will receive feedback from legal officers from the Coordinator's European affairs office, specialised in Intellectual Property.

Our management structure and procedures will ensure that our network of partners from both the academic and industrial sectors is focused on achieving the promised tasks and deliverables, efficiently managing the innovation process and largely opening the developed tools, insights and services to the EOSC and its final users. The partners will sign a Consortium Agreement, in which operational rules and decision-making procedures will be laid down.

\emph{CONFLICT RESOLUTION} - For the cases where contentious decisions need to be made, or conflicts are arising which cannot be solved bilaterally, steps to prevent further escalation, procedures to resolve disputes, and decision-making procedures are laid down in the Consortium Agreement. In case of conflicts or a needed decision in a task or work package, the WP Leader shall resolve the conflict and agree on a solution or decision. If a solution of the conflict cannot be found or if a Party does not accept the proposed solution, the issue must be forwarded to the Steering Committee (SC), which in its turn will try to resolve the issue together with the conflicting parties within 10 working days. If no agreement can be reached, the Project Coordinator (PC) can base its decision on the 2/3 majority vote of the SC or take a different decision. In any case, the Coordinator will take the final decision. All parties will make efforts to ensure that decisions are resolved at as low a level as possible.

\subsubsection{Quality and Innovation management}

\TheProject Project Quality Management will ensure that the project meets a high quality level as well as satisfying the EU Commission requirements. The basic approach to quality management is compatible with the International Project Management standards and guidelines (PMI).
\begin{itemize}
\item Quality planning: internal standards from participants and national and international standards and regulatory requirements will be considered to create the quality metric.
\item Quality control: the overall project performance will be evaluated on a regular basis to provide confidence that the project satisfies the expectations and EU provisions.
\item Quality assurance: specific project results will be monitored to determine if they comply with expectations and to eliminate causes of unsatisfactory performance. To this aim, deliverables have been planned to give the necessary information and evidence that quality standards have been achieved.
\end{itemize}

In the context of the Responsible Research Innovation paradigm promoted by the European Commission in the H2020 framework, \TheProject is willing to respond to societal desirability and societal acceptability questions, identifying new opportunities in the Open Science hub through innovation that is in the public interest. \TheProject consortium recognises innovation as an essential element for driving business growth and maintaining competitive advantage. The innovation management work will prevent \TheProject results from lacking expected functionalities, which would lead to user dissatisfaction, and will ensure that the ethical and societal aspects are incorporated into the design process from the beginning.
\subsubsection{Risks and risk management strategy}\label{sec:risks}

The risk in executing the project as planned has been carefully assessed and is actively managed. We base our plans on long-standing experience, and we bring together the world's experts in the relevant tools and techniques.

A key feature of this project is the involvement of a wide set of partners from multiple domains. While this ensures complementary coverage of a wide set of skills and provides robustness in different ways, we will have to ensure that all the partners work together as a closely knit team.

Our open source approach means that all our code and outputs will be open and visible to anybody at sites like GitHub and Bitbucket throughout the project. It is common for some users to run the latest development versions of computational and infrastructure software, thus beta-testing code between major releases. This reduces the risk of developing software which people won't use: where our design decisions or technical approaches are controversial, this will be detected early by those users, giving the consortium useful feedback to consider.

As part of the Management Work Package, and with support from the Coordination Team, the project coordinator will maintain and regularly update a Risk Management Plan; at the end of each Reporting Period, this updated plan will be included in the project's Technical Report. It will identify and categorise all potential strategic risks (legal, financial, human resources risks, etc.) to the successful delivery of the project, their probability and impact. For each risk area, mechanisms for risk mitigation will be identified and contingency actions will be proposed.

Risks will be evaluated in terms of project goals and objectives, according to the following four steps:
\begin{enumerate}
\item Identification of risks using a structured and rational approach to ensure that all areas are addressed.
\item Quantitative assessment and ranking of the risks.
\item Definition of procedures to reduce (or minimize) risk.
\item Monitoring and management of risks throughout the project life with milestone review and reassessment.
\end{enumerate}

Finally, as reported above, a conflict resolution mechanism will be put in place, whereby decision-making divergence and conflicts that cannot be solved at the Steering Committee (SC) level will be submitted to the Coordinator. The mediation and resolution process used is the following:
\begin{itemize}
\item Case presented by the involved parties.
\item Development of a fact-based and neutral report by the coordinator to be provided to the conflicting parties and SC.
\item Final decision to resolve the conflict made by SC.
\end{itemize}

\ifgrantagreement\else
An initial risk assessment appears as Table~\ref{risk-table}.

\TOWRITE{ALL}{risk about EOSC integration}

\begin{table}
  \begin{center}
    \begin{tabular}{|m{.2\textwidth}|m{.12\textwidth}|m{.58\textwidth}|}\hline
      Risk & Level without / with mitigation & Mitigation measures \\\hline
      \multicolumn{3}{|c|}{ \textit{General technical / scientific risks} } \\\hline
      Implementing infrastructure that does not match the needs of end users & High/Low &
      Many of the members of the consortium are themselves end-users with a diverse range of needs and points of view; hence the design of the proposal and the governance of the project are naturally steered by demand; besides, because we provide a toolkit, users have the flexibility to adapt the infrastructure to their needs.
In addition, the open source nature of the project facilitates and promotes the involvement of the wider community in terms of providing feedback and requesting additional features via platforms such as GitHub and Bitbucket on a regular basis. \\\hline Lack of predictability for tasks that are pursued jointly with the community & Medium/Low & The PIs have a strong experience managing community-developed projects where the execution of tasks depends on the availability of partners. Some tasks may end up requiring more manpower from \TheProject to be completed on time, while others may be entirely taken care of by the community. Reallocating tasks and redefining work plans is common practice needed to cater for a fast evolving context. Such random factors will be averaged out over the large number of independent tasks.\\\hline Reliance on external software components & Medium/Low & The non trivial software components \TheProject relies on are open source. Most are very mature and supported by an active community, which offers strong long run guarantees. The other components could be replaced by alternatives, or even taken over by the participants if necessary. \\\hline % \multicolumn{3}{|c|}{ % \textit{Use-case risks} % } % \\\hline % % & & \TOWRITE{WP4}{Risks related to use-cases in WP4} % \\\hline \multicolumn{3}{|c|}{ \textit{Management risks} } \\\hline Recruitment of highly qualified staff & High/Medium & Great care was taken coordinating with currently running projects to rehire personnel with strong track record, and identifying pool of candidates to hire from, notably in the developers community of software related to the project. This was favored by the partners' long history of training and outreach activities. In addition, we have a critical mass of qualified staff in the project enabling us to train and mentor new recruits. \\\hline Different groups not forming effective team & Medium/Low & The participants have a long track record of working collaboratively on code across multiple sites. Aggressive planning of project meetings, workshops and one-to-one partner visits will facilitate effective teamwork, combining face-to-face time at one site with remote collaboration.\\\hline % this also justifies our generous travel budget. Partner leaves the consortium & Low/High & If the GA requires a replacement in order to achieve the project's objectives, the consortium will invite a new relevant partner in. If a replacement is not necessary, the resources and tasks of the departing partner will be reallocated to the alternative ones within the consortium. \\\hline \multicolumn{3}{|c|}{ \textit{Dissemination risks} } \\\hline Impact of dissemination activities is lower than planned. & Low/Medium & Partners in the consortium have a proven track record at community building, training, dissemination, social media communication, and outreach, which reduces the risk. The Project Coordinator will monitor impact of all dissemination activities. If a deficiency is identified, the consortium will propose relevant corrective actions.\\\hline \end{tabular} \end{center} \caption{\label{risk-table}Initial Risk Assessment} \end{table} \fi %\TOWRITE{NT/Eugenia}{Impredictability} %\includegraphics[width=.94\textwidth]{Pictures/Impact-img1.png} % But: since Open Source softwares are freely accessible, security % and privacy issues are a concern. Anytime a resource is shared, % there is greater risk of unauthorised access and contaminated data. 
% Providers must demonstrate security solutions, which should include % physical security controlling access to the facility and protection % of user data from corruption and cyber attacks.} \TOWRITE{ALL}{ Add a paragraph about data management plan. What data will we produce, which data is available from the start, how do we handle it... } % LocalWords: mgt Paris-Sud UPSud Thiery Sage-Combinat decentralisation textwidth hline % LocalWords: textwidth Jupyter slmhnlnhfnhs hsfhs ghshsh includegraphics unauthorised %%% Local Variables: %%% mode: latex %%% TeX-master: "proposal" %%% End: % LocalWords: TOWRITE subsubsection organisational compactenum ipr Impredictability % LocalWords: textbf nobreak smallbreak
% This is a sample LaTeX input file. (Version of 9 April 1986) % % A '%' character causes TeX to ignore all remaining text on the line, % and is used for comments like this one. \documentclass{article} % Specifies the document style. % The preamble begins here. \title{A Sample Document} % Declares the document's title. \author{Leslie Lamport} % Declares the author's name. \date{December 12, 1984} % Deleting this command produces today's date. \begin{document} % End of preamble and beginning of text. \maketitle % Produces the title. This is a sample input file. Comparing it with the output it generates can show you how to produce a simple document of your own. \section{Ordinary Text} % Produces section heading. Lower-level % sections are begun with similar % \subsection and \subsubsection commands. The ends of words and sentences are marked by spaces. It doesn't matter how many spaces you type; one is as good as 100. The end of a line counts as a space. One or more blank lines denote the end of a paragraph. Since any number of consecutive spaces are treated like a single one, the formatting of the input file makes no difference to \TeX, % The \TeX command generates the TeX logo. but it makes a difference to you. When you use \LaTeX, % The \LaTeX command generates the LaTeX logo. making your input file as easy to read as possible will be a great help as you write your document and when you change it. This sample file shows how you can add comments to your own input file. Because printing is different from typewriting, there are a number of things that you have to do differently when preparing an input file than if you were just typing the document directly. Quotation marks like ``this'' have to be handled specially, as do quotes within quotes: ``\,`this' % \, separates the double and single quote. is what I just wrote, not `that'\,''. Dashes come in three sizes: an intra-word dash, a medium dash for number ranges like 1--2, and a punctuation dash---like this. A sentence-ending space should be larger than the space between words within a sentence. You sometimes have to type special commands in conjunction with punctuation characters to get this right, as in the following sentence. Gnats, gnus, etc.\ % `\ ' makes an inter-word space. all begin with G\@. % \@ marks end-of-sentence punctuation. You should check the spaces after periods when reading your output to make sure you haven't forgotten any special cases. Generating an ellipsis \ldots\ % `\ ' needed because TeX ignores spaces after % command names like \ldots made from \ + letters. % % Note how a `%' character causes TeX to ignore the % end of the input line, so these blank lines do not % start a new paragraph. with the right spacing around the periods requires a special command. \TeX\ interprets some common characters as commands, so you must type special commands to generate them. These characters include the following: \$ \& \% \# \{ and \}. In printing, text is emphasized by using an %% END OF FIRST PAGE {\em italic\/} % The \/ command produces the tiny % extra space that should be added % between a slanted and a following % unslanted letter. type style. \begin{em} A long segment of text can also be emphasized in this way. Text within such a segment given additional emphasis with\/ {\em Roman} type. Italic type loses its ability to emphasize and become simply distracting when used excessively. \end{em} It is sometimes necessary to prevent \TeX\ from breaking a line where it might otherwise do so. 
This may be at a space, as between the ``Mr.'' and ``Jones'' in ``Mr.~Jones'', % ~ produces an unbreakable interword space. or within a word---especially when the word is a symbol like \mbox{\em itemnum\/} that makes little sense when hyphenated across lines. Footnotes\footnote{This is an example of a footnote.} pose no problem. \TeX\ is good at typesetting mathematical formulas like \( x-3y = 7 \) or \( a_{1} > x^{2n} / y^{2n} > x' \). Remember that a letter like $x$ % $ ... $ and \( ... \) are equivalent is a formula when it denotes a mathematical symbol, and should be treated as one. \section{Displayed Text} Text is displayed by indenting it from the left margin. Quotations are commonly displayed. There are short quotations \begin{quote} This is a short a quotation. It consists of a single paragraph of text. There is no paragraph indentation. \end{quote} and longer ones. \begin{quotation} This is a longer quotation. It consists of two paragraphs of text. The beginning of each paragraph is indicated by an extra indentation. This is the second paragraph of the quotation. It is just as dull as the first paragraph. \end{quotation} Another frequently-displayed structure is a list. The following is an example of an {\em itemized} list. \begin{itemize} \item This is the first item of an itemized list. Each item in the list is marked with a ``tick''. The document style determines what kind of tick mark is used. \item This is the second item of the list. It contains another list nested inside it. The inner list is an {\em enumerated} list. \begin{enumerate} \item This is the first item of an enumerated list that is nested within the itemized list. \item This is the second item of the inner list. \LaTeX\ allows you to nest lists deeper than you really should. \end{enumerate} This is the rest of the second item of the outer list. It is no more interesting than any other part of the item. %% END OF SECOND PAGE \item This is the third item of the list. \end{itemize} You can even display poetry. \begin{verse} There is an environment for verse \\ % The \\ command separates lines Whose features some poets will curse. % within a stanza. % One or more blank lines separate stanzas. For instead of making\\ Them do {\em all\/} line breaking, \\ It allows them to put too many words on a line when they'd rather be forced to be terse. \end{verse} Mathematical formulas may also be displayed. A displayed formula is one-line long; multiline formulas require special formatting instructions. \[ x' + y^{2} = z_{i}^{2}\] Don't start a paragraph with a displayed equation, nor make one a paragraph by itself. \end{document} % End of document.
\section{Output}

PyLith currently supports output to HDF5/Xdmf and VTK files, which can be imported directly into a number of visualization tools, such as ParaView, Visit, and MayaVi. The HDF5 files can also be accessed directly via Matlab and Python modules, such as h5py and PyTables. PyLith v3.x supports output of the solution subfields (see Section \vref{sec:solution:observers}), all auxiliary fields (material properties, boundary condition parameters, fault interface parameters, etc.), and fields derived from the auxiliary field and/or solution, such as stress and strain. The HDF5 writer provides parallel binary output, whereas the VTK writer provides serial ASCII output. Additionally, with the VTK writer each time step is written to a separate file; the HDF5 writer puts all information for a given domain, boundary condition, or fault interface into a single file.

\subsection{Physics Observer (\object{OutputPhysics})}

Analogous to the \object{OutputSoln} objects discussed in Section \vref{sec:solution:observers} for output of the solution, the physics objects (material, boundary conditions, and fault interfaces) have \object{OutputPhysics} objects to provide output of the solution, properties, state variables, etc.

The parameters for the \object{OutputPhysics} are:
\begin{inventory}
\propertyitem{info\_fields}{List of auxiliary subfields to observe/output (default=all, which will output all of the subfields);}
\propertyitem{data\_fields}{List of solution subfields to observe/output (default=all, which will output all of the subfields);}
\facilityitem{writer}{Writer for data (default=\object{DataWriterHDF5});}
\facilityitem{trigger}{Trigger defining how often output is written (default=\object{OutputTriggerStep}); and}
\facilityitem{field\_filter}{Filter for output fields (default=\object{FieldFilterNone}).}
\end{inventory}

\begin{cfg}[\object{PhysicsOutput} parameters in a \filename{cfg} file]
<h>[pylithapp.problem.materials.elastic.observers.observer]</h>
<p>writer.filename</p> = output/step01-elastic.h5
<f>field_filter</f> = pylith.meshio.FieldFilterNone
\end{cfg}

\subsection{Output Field Filter (\facility{field\_filter})}

Sometimes fields may not be in the form a user wants. For example, the displacement solution subfield may have a basis order of 2 with degrees of freedom associated with edges, which is incompatible with most visualization packages that require information to be provided with a basis order of 1 (vertex data) or 0 (cell data). The \facility{field\_filter} facility of the \object{OutputPhysics} object allows passing a field through a ``filter'' to yield the desired form of the output. The default filter is \object{FieldFilterNone}, which simply passes the input field on to the \facility{writer} for output. The \object{FieldFilterProject} object provides the ability to project the input field to a lower basis order.

\subsubsection{Projecting to lower basis order (\object{FieldFilterProject})}

This filter projects a field to a lower basis order. It is most often used to reduce the basis order of solution subfields, e.g., displacement, from a basis order of 2 to a basis order of 1, so that the solution field can be visualized in ParaView and other common visualization tools.
The \object{FieldFilterProject} property is:
\begin{inventory}
\propertyitem{basis\_order}{Basis order of the projected field (default=1).}
\end{inventory}

\begin{cfg}[Using \object{FieldFilterProject} with solution output in a \filename{cfg} file]
<h>[pylithapp.problem.solution_observers.domain]</h>
<p>writer.filename</p> = output/step01-domain.h5

<f>field_filter</f> = pylith.meshio.FieldFilterProject
<p>field_filter.basis_order</p> = 0
\end{cfg}

\subsection{Data Writers (\facility{writer})}

\subsubsection{VTK Output (\object{DataWriterVTK})}

PyLith writes legacy (non-XML) VTK files. These are simple files with vertex coordinates, the mesh topology, and fields over vertices and/or cells. Each time step is written to a different file. The time stamp is included in the filename with the decimal point removed. This allows automatic generation of animations with many visualization packages that use VTK files. The default time stamp is the time in seconds, but this can be changed using the normalization constant to give a time stamp in years, tens of years, or any other value.

The parameters for the VTK writer are:
\begin{inventory}
\propertyitem{filename}{Name of VTK file.}
\propertyitem{time\_format}{C-style format string for time stamp in filename. The decimal point in the time stamp will be removed for compatibility with VTK visualization packages that provide seamless animation of data from multiple VTK files.}
\propertyitem{time\_constant}{Value used to normalize time stamp in VTK files (default is 1.0 s).}
\end{inventory}

\userwarning{We strongly discourage using VTK output. It is slow, inefficient, and not easily read from post-processing scripts. We strongly recommend using HDF5 output instead, which is the default starting in PyLith v3.0.0.}

\subsubsection{HDF5/Xdmf Output (\object{DataWriterHDF5}, \object{DataWriterHDF5Ext})}
\label{sub:HDF5/Xdmf-Output}

HDF5 files provide a flexible framework for storing simulation data with datasets in groups logically organized in a tree structure analogous to files in directories. HDF5 output offers parallel, multi-dimensional array output in binary files, so it is much faster and more convenient than the VTK output, which uses ASCII files and separate files for each time step. Standards for organizing datasets and groups in HDF5 files do not exist for general finite-element software in geodynamics. Consequently, PyLith uses its own simple layout shown in Figure \vref{fig:hdf5:layout}. In order for visualization tools, such as ParaView, to determine which datasets to read and where to find them in the hierarchy of groups within the HDF5 file, we create an Xdmf (eXtensible Data Model and Format, \url{www.xdmf.org}) metadata file that provides this information. This file is written when PyLith closes the HDF5 file at the end of the simulation. In order to visualize the datasets in an HDF5 file, one simply opens the corresponding Xdmf file (the extension is \filename{xmf}) in ParaView or Visit. The Xdmf file contains the relative path to the HDF5 file, so the files can be moved but must be located together in the same directory.

\important{The Xdmf format supports representation of two- and three-dimensional coordinates of points, scalar fields, and three-dimensional vector and tensor fields but not two-dimensional vector or tensor fields. Consequently, for two-dimensional vector fields we build a three-component vector from the two-component vector (x and y components) and a separate zero scalar field (z component).
For tensor fields, we create a scalar field for each of the tensor components, adding the component as a suffix to the name of the field.}

\begin{figure}[htbp]
\includegraphics{runpylith/figs/hdf5layout}
\caption{General layout of a PyLith HDF5 file. The orange rectangles with rounded corners identify the groups and the blue rectangles with sharp corners identify the datasets. The dimensions of the data sets are shown in parentheses. Most HDF5 files will contain either \texttt{vertex\_fields} or \texttt{cell\_fields} but not both.}
\label{fig:hdf5:layout}
\end{figure}

See Table \vref{tab:materials:statevars} in Section \vref{sec:material:parameters} for a table of component values for tensor output in HDF5 files. To avoid confusion about the ordering of components for tensor data, we separate the components in the Xdmf file.

The \object{DataWriterHDF5} and \object{DataWriterHDF5Ext} writers have a single property:
\begin{inventory}
\propertyitem{filename}{Name of the HDF5 file (default=output.h5; the Xdmf filename is generated from the same prefix).}
\end{inventory}

\begin{cfg}[\object{DataWriterHDF5Ext} parameters in a \filename{cfg} file]
<h>[pylithapp.timedependent.domain.output]</h>
<p>output_freq</p> = time_step
<p>time_step</p> = 1.0*yr
<p>cell_data_fields</p> = [displacement, velocity]
<f>writer</f> = pylith.meshio.DataWriterHDF5Ext
<p>writer.filename</p> = dislocation.h5
\end{cfg}

In this example, we change the writer from the default VTK writer to the HDF5 writer with external datasets (\object{DataWriterHDF5Ext}) for output over the domain.

HDF5 files do not contain self-correcting features that allow a file to be read if part of a dataset is corrupted. This type of error can occur if a job terminates abnormally in the middle or at the end of a simulation on a large cluster or other parallel machine. Fortunately, HDF5 also offers the ability to store datasets in external binary files with the locations specified by links in the HDF5 file. Note that the use of external data files results in one data file per dataset in addition to the HDF5 and Xdmf files. The external data files use the name of the HDF5 file with the dataset name added to the prefix and the \filename{h5} suffix replaced by \filename{dat}. The HDF5 files include relative paths to the external data files, so these files can also be moved, but they, too, must be kept together in the same directory. This provides a more robust method of output because one can generate an HDF5 file associated with the uncorrupted portions of the external data files should an error occur. Currently, PyLith does not include a utility to do this, but we plan to add one in a future release. Thus, there are two options when writing PyLith output to HDF5 files: (1) including the datasets directly in the HDF5 files themselves using the \object{DataWriterHDF5} object or (2) storing the datasets in external binary files with just metadata in the HDF5 files using the \object{DataWriterHDF5Ext} object. Both methods provide similar performance because they will use MPI I/O if it is available.

\userwarning{Storing the datasets within the HDF5 file in a parallel simulation requires that the HDF5 library be configured with the \commandline{-{}-enable-parallel} option.
The binary PyLith packages include this feature and it is a default setting when building HDF5 via the PyLith Installer.}

Accessing the datasets for additional analysis or visualization is nearly identical in the two methods because the use of external data files is completely transparent to the user, except for the presence of the additional files. Note that in order for ParaView to find the HDF5 and external data files, it must be run from the same relative location where the simulation was run. For example, if the simulation was run from a directory called ``work'' and the HDF5/Xdmf files were written to ``work/output'', then ParaView should be run from the ``work'' directory. See Table \vref{tab:materials:statevars} in Section \vref{sec:material:parameters} for a table of component values for tensor output.

\begin{cfg}[Selecting \object{DataWriterHDF5Ext} in a \filename{cfg} file]
<h>[pylithapp.problem]</h>
<f>solution_observers</f> = [domain]

<h>[pylithapp.problem.solution_observers.domain]</h>
<f>writer</f> = pylith.meshio.DataWriterHDF5Ext
<p>writer.filename</p> = output/step01-domain.h5
\end{cfg}

\subsubsection{HDF5 Utilities}
HDF5 includes several utilities for examining the contents of HDF5 files. \filename{h5dump} is very handy for dumping the hierarchy, dimensions of datasets, attributes, and even the dataset values to stdout.
\begin{shell}
# Dump the entire HDF5 file (not useful for large files).
$ h5dump mydata.h5
# Dump the hierarchy of an HDF5 file.
$ h5dump -n mydata.h5
# Dump the hierarchy with dataset dimensions and attributes.
$ h5dump -H mydata.h5
# Dump dataset 'vertices' in group '/geometry' to stdout.
$ h5dump -d /geometry/vertices mydata.h5
\end{shell}
We have also included a utility, \filename{pylith\_genxdmf} (see Section \vref{sec:pylith:genxdmf}), that generates an appropriate Xdmf file from a PyLith HDF5 file. This is very useful if you add fields to HDF5 files in post-processing and wish to view the results in ParaView or Visit.

\subsection{Output Triggers (\facility{trigger})}
\newfeature{3.0}
By default PyLith will write the requested output after every time step. In many cases we prefer to save the solution, state variables, etc.\ at a coarser temporal resolution. The \object{OutputTriggerStep} controls the decimation of the output by time step, and the \object{OutputTriggerTime} controls the decimation of the output by elapsed time. For uniform time stepping these are equivalent.

\subsubsection{Decimate by time step (\object{OutputTriggerStep})}
\object{OutputTriggerStep} decimates the output by skipping a user-specified number of time steps. The property is
\begin{inventory}
\propertyitem{num\_skip}{Number of solution steps to skip between writes (default=0).}
\end{inventory}

\subsubsection{Decimate by time (\object{OutputTriggerTime})}
\object{OutputTriggerTime} decimates the output by skipping a user-specified elapsed time. The property is
\begin{inventory}
\propertyitem{elapsed\_time}{Elapsed time between writes (default=0.0*s).}
\end{inventory}
\usertip{Due to roundoff error in determining the simulation time over many time steps, a simulation may occasionally skip writing output unexpectedly when using \object{OutputTriggerTime}. The best workaround is to use an \property{elapsed\_time} that is a fraction of the time step size smaller than the desired elapsed time, such as 0.9999*year instead of 1.0*year.}
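For example, a minimal sketch of time-based decimation for the domain solution observer might look like the following; the facility path and component name are illustrative assumptions based on the defaults listed above, so adjust them to match your own simulation setup.
\begin{cfg}[Possible \object{OutputTriggerTime} settings in a \filename{cfg} file (illustrative)]
<h>[pylithapp.problem.solution_observers.domain]</h>
<f>trigger</f> = pylith.meshio.OutputTriggerTime
<p>trigger.elapsed_time</p> = 0.9999*year
\end{cfg}
With these settings the observer writes output roughly once per year of simulated time rather than after every time step; with a uniform time step, the same effect can be obtained with \object{OutputTriggerStep} and an appropriate \property{num\_skip}.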
{ "alphanum_fraction": 0.7908366534, "avg_line_length": 47.3916083916, "ext": "tex", "hexsha": "802bfe57260468750050be6a98246bd564dd8096", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_path": "doc/userguide/runpylith/output.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_path": "doc/userguide/runpylith/output.tex", "max_line_length": 131, "max_stars_count": null, "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_path": "doc/userguide/runpylith/output.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3374, "size": 13554 }
\section{Endomorphisms of vector spaces}
{ "alphanum_fraction": 0.7906976744, "avg_line_length": 10.75, "ext": "tex", "hexsha": "659074094e6e249701b8782399f7763799ebfddd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/endomorphisms/01-00-Endomorphisms_of_vector_spaces.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/endomorphisms/01-00-Endomorphisms_of_vector_spaces.tex", "max_line_length": 40, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/endomorphisms/01-00-Endomorphisms_of_vector_spaces.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12, "size": 43 }
\title{Computer Architecture - CS 301} % You may change the title if you want.
\author{Rishit Saiya - 180010027, Assignment - 6}
\date{\today}

\documentclass[12pt]{article}
\usepackage{fullpage}
\usepackage{enumitem}
\usepackage{amsmath,mathtools}
\usepackage{amssymb}
\usepackage[super]{nth}
\usepackage{textcomp}
\usepackage{hyperref}

\begin{document}
\maketitle

%----------------------------------------------------------------
\section{}
We have seen some software-inspired methods to patch Data Hazards, as follows:
\begin{itemize}
    \item Re-ordering of instructions so as to avoid hazards.
    \item Adding \textbf{nop} instructions at specific points to avoid hazards.
\end{itemize}
These methods mitigate the hazard to a certain level, but they have their own drawbacks, as follows:
\begin{itemize}
    \item Since pipelining represents the internal structure of the architecture, we would like to expose fewer of its details to the compiler. Apart from the risk of higher complexity, exposing pipelining details also increases the attack surface and hence can prove fatal from a security perspective.
    \item Since the above methods have to be performed by the compiler, we can't completely rely on its ability.
    \item There will always have to be a compiler compatible with a specific hardware architecture, and this increases the manual effort; no generalisation would be possible.
\end{itemize}

So, in order to increase efficiency and accuracy, we implement more of a hardware approach. In this question we will be discussing the \textbf{Interlock} approaches. \\
There are 2 types of interlocks in general: Data Locks and Branch Locks. Data Locks are used for Data Hazards and Branch Locks are used for Control Hazards. We will elaborate on Data Locks here, because the question concerns Data Hazards.
\begin{itemize}
    \item \textbf{\textit{Data Lock:}} It does not allow a consumer instruction to move beyond the OF stage till it has fetched the correct values. \\
    We must understand that using the Data Lock interlock gives portability across variations in pipelines, but performance is compromised in simple versions of pipeline interlocking.\\
    The main task of the Data Lock is to stall the IF \& OF stages. Basically, when an instruction reaches the OF stage, it checks whether it has a conflict with any instruction in the EX, MA, or RW stages. If there is no conflict, the normal flow continues; if a conflict arises, the pipeline (IF \& OF stages) is stalled.\\
    We define a \textbf{\textit{Bubble}} as a nop instruction packet. This is used in the implementation of the Data Lock. \\
    Let's consider an example as follows:
\begin{verbatim}
add r1, r2, r3
sub r4, r1, r2
\end{verbatim}
    Here, in instruction[1], r1 is assigned the value of r2 + r3 in the 5th time cycle, and its value can be fetched by instruction[2] only after the 5th cycle. So, till the 6th time cycle, instruction[2] has to be stalled to avoid the hazard, and we do that using a bubble.
\begin{table}[]
\begin{center}
\begin{tabular}{|c|cccccccll}
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{\textbf{4}} & \multicolumn{1}{c|}{\textbf{5}} & \multicolumn{1}{c|}{\textbf{6}} & \multicolumn{1}{c|}{\textbf{7}} & \multicolumn{1}{c|}{\textbf{8}} & \multicolumn{1}{c|}{\textbf{9}} \\ \hline
\textbf{IF} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \\ \cline{1-7}
\textbf{OF} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{l}{} & & \\ \cline{1-1} \cline{3-8}
\textbf{EX} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{2}} & & \\ \cline{1-1} \cline{4-9}
\textbf{MA} & & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{2}} & \\ \cline{1-1} \cline{5-10}
\textbf{RW} & & & & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{2}} \\ \cline{1-1} \cline{6-10}
\end{tabular}
\end{center}
\caption{Pipeline - Data Lock Explanation}
\end{table}

    In Table 1, the above instructions are shown in the pipeline view. The rows are the pipeline stages, the columns are the time cycles, and each entry shows which instruction occupies that stage in that cycle. \textbf{B} is the Bubble.
\end{itemize}

\textbf{Minimum number of Bubbles Required:} \\
If 2 consecutive instructions, say X \& Y, execute back to back, i.e., if instruction Y depends on the result of instruction X and hence faces a data hazard, then the number of bubbles has to be selected appropriately. Let's assume Instruction X is in the RW stage in the $x^{th}$ cycle; then Instruction Y can fetch the correct values only at or after the $(x+1)^{th}$ cycle. So the minimum number of bubbles would be the number of bubbles required to stall Instruction Y at the OF stage to avoid the Data Hazard. Clearly, if there is no conflict, then the no. of bubbles will be \textbf{0}. \\
When an instruction reaches the OF stage, we check if it has a conflict with any of the instructions in the EX, MA, and RW stages respectively. If it has a conflict with the instruction at the EX stage, then the no. of bubbles will be \textbf{3}. If it has a conflict with the instruction at the MA stage, then the no. of bubbles will be \textbf{2}, and the no. of bubbles will be \textbf{1} if it is at the RW stage. So, in case of a conflict, the minimum no. of bubbles to be introduced would be \textbf{1}.

%----------------------------------------------------------------
\section{}
We have seen some software-inspired methods to patch Data Hazards, as follows:
\begin{itemize}
    \item Re-ordering of instructions so as to avoid hazards.
    \item Adding \textbf{nop} instructions at specific points to avoid hazards.
\end{itemize}
These methods mitigate the hazard to a certain level, but they have their own drawbacks, as follows:
\begin{itemize}
    \item Since pipelining represents the internal structure of the architecture, we would like to expose fewer of its details to the compiler.
    Apart from the risk of higher complexity, exposing pipelining details also increases the attack surface and hence can prove fatal from a security perspective.
    \item Since the above methods have to be performed by the compiler, we can't completely rely on its ability.
    \item There will always have to be a compiler compatible with a specific hardware architecture, and this increases the manual effort; no generalisation would be possible.
\end{itemize}

So, in order to increase efficiency and accuracy, we implement more of a hardware approach. In this question we will be discussing the \textbf{Forwarding} approaches. \\
Following the standard forwarding rules, the following paths are created: RW $\rightarrow$ OF, RW $\rightarrow$ EX, RW $\rightarrow$ MA (load to store) and MA $\rightarrow$ EX (ALU instructions, load, store). \\
Let's consider an example as follows:
\begin{verbatim}
add r1, r2, r3
sub r4, r1, r2
\end{verbatim}
We know that Table 1 shows the correct pipelining with stalls, and we finally need the correct data fetch for r1 without any hazard. So, to be more efficient, we forward values along the above paths.

\begin{table}[]
\begin{center}
\begin{tabular}{|c|cccccc}
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{\textbf{4}} & \multicolumn{1}{c|}{\textbf{5}} & \multicolumn{1}{c|}{\textbf{6}} \\ \hline
\textbf{IF} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{1-4}
\textbf{OF} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \textbf{} & \textbf{} & \textbf{} \\ \cline{1-1} \cline{3-5}
\textbf{EX} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \textbf{} & \textbf{} \\ \cline{1-1} \cline{4-6}
\textbf{MA} & & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \textbf{} \\ \cline{1-1} \cline{5-7}
\textbf{RW} & & & & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} \\ \cline{1-1} \cline{6-7}
\end{tabular}
\end{center}
\caption{Pipeline - Forwarding Explanation}
\end{table}

Let's consider the pipeline shown in Table 2 to understand this better. In Table 2, we see that in time cycle 4, instruction[1] reaches the MA stage and the correct value of r1 is available. At the same time, instruction[2] is in the EX stage and needs the correct r1 value. So we \textbf{forward} the r1 value from the MA stage to the EX stage. This process is defined as Forwarding. \\
Clearly, this process doesn't require stalling and hence doesn't require any bubble. There is an exception when a load instruction is involved. Let's consider the following example.
\begin{verbatim}
ld r1, 5[r2]
mul r3, r1, r4
\end{verbatim}
Why is this case (Load Use Hazard) unique? It is because the value of r1 becomes available only in time cycle 5, and, as mentioned above, instruction[2] would require the correct value of r1 at the same time; clearly this is not possible using forwarding alone. So we need to tweak the idea of forwarding in this unique case by introducing bubbles. Table 3 explains the pipelining in that case.
\begin{table}[]
\begin{center}
\begin{tabular}{|c|ccccccll}
\hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{\textbf{4}} & \multicolumn{1}{c|}{\textbf{5}} & \multicolumn{1}{c|}{\textbf{6}} & \multicolumn{1}{c|}{\textbf{7}} & \multicolumn{1}{c|}{\textbf{8}} \\ \hline
\textbf{IF} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \\ \cline{1-4}
\textbf{OF} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \textbf{} & \textbf{} & \textbf{} & & \\ \cline{1-1} \cline{3-7}
\textbf{EX} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{2}} & & \\ \cline{1-1} \cline{4-8}
\textbf{MA} & & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2 | B}} & \multicolumn{1}{c|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{2}} & \\ \cline{1-1} \cline{5-9}
\textbf{RW} & & & & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2 | B}} & \multicolumn{1}{l|}{\textbf{B}} & \multicolumn{1}{c|}{\textbf{2}} \\ \cline{1-1} \cline{6-9}
\end{tabular}
\end{center}
\caption{Pipeline - Load Use Hazard}
\end{table}

The forwarding here takes place from the RW stage of instruction[1] to the EX stage of instruction[2]. \\
\textbf{Minimum number of Bubbles Required:} \\
As mentioned above, it depends on how the conflict arises between the instructions. In this case, the minimum number of bubbles to be introduced will be \textbf{0}, because in forwarding we add wires that carry data directly to the stage that needs it, which improves performance. In the load-use case, where we do need to introduce a bubble, the number of bubbles needed is \textbf{1}.
%----------------------------------------------------------------
\end{document}
{ "alphanum_fraction": 0.5943643512, "avg_line_length": 90.3552631579, "ext": "tex", "hexsha": "a950598790d4b03e76f5c44a7b52cc93de0d71d5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_forks_repo_path": "Assignment-6/180010027_RishitSaiya.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_issues_repo_path": "Assignment-6/180010027_RishitSaiya.tex", "max_line_length": 545, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1e73e590e88664dcc4ca652a599cdc2cde07a41a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rishitsaiya/Computer-Architecture-Theory", "max_stars_repo_path": "Assignment-6/180010027_RishitSaiya.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-25T17:20:42.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-25T17:20:42.000Z", "num_tokens": 3949, "size": 13734 }
% !TEX root = Documentation.tex
\label{create-user.php}
\section{create-user.php}
The create-user.php site offers the opportunity to create a new user and to set the rights for this specific user. User creation follows the idea of dropping rights, which only allows a user to create another user with fewer rights than he has himself. To ensure this, create-user.php is only visible to users with a role higher than student. This includes the following roles: admin [role code: 4], professor [role code: 3], scientific coworker [role code: 2]. \\
Furthermore, only professors and admins are allowed to create users at a different institution; scientific coworkers are only allowed to create users at the same institution as their own. In the same way, the erasing of users is limited to professors and admin users.

\paragraph{Create a new user}
Creating a user leads to modifications in the Dim\_User table and the authentication database. These database updates are performed when the user clicks on the ``create user'' button, which calls the ``create\_user\_script.php'' file to process the form data and insert it into the database.

\paragraph{Create new institution}
The creation of new institutions is also limited to people logged in as professors and admins. Each department is created separately, even if an institution owns several departments taking part in the 2Org-Cows project. This improves the adjustability of the rights attached to the specific department in case of split user rights on certain datasets. The data entered for the creation of an institution are processed by the create\_institution\_script.php file.

\paragraph{Delete user}
Furthermore, create-user.php allows the deletion of existing users. This is limited to users with role code 4 or 3 (admin and professor); therefore the form is only visible to users of those roles. To prevent accidental deletions, a separate checkbox has to be checked to delete a user. User deletion is limited to users from the same institution and excludes the current user. The deletion of the user is performed by the delete\_user\_script.php.

\paragraph{Update password}
To allow users who forgot their password to regain access to their accounts, users with higher roles can set the password for users in their institution. As there is, up till now, no further security in the password updates, these actions are limited to high-profile users. In case it becomes necessary, further protection mechanisms for the password updates, such as requesting a secret, could be implemented. The password updates are processed by the set-password.php script.

\paragraph{Grant rights}
As the data in the database are bound to specific groups, the rights management on the group level has to be performed with the administration as well. Therefore, all departments are retrieved from the database and listed in a table. To improve the overview of the departments, the table is split into institutions and departments. Finally, the table includes one checkbox per department, which sets a value in the corresponding table. The table also shows, via the checkboxes, which departments have been granted access rights to the data tables of the institution invoked. As all permissions are stored in a translation table in the database, a separate script updates this table and sets the value for the access to the data.

\paragraph{CSRF protection}
As there are multiple forms on one page, the CSRF protection tokens are applied as split calculation tokens.
In this case, the \$\_SESSION token just contains a random number, generated with \texttt{uniqid()}. For the hidden input field, this number is part of the input for an SHA256 hash calculation with \texttt{hash\_hmac()}, where a string and the number from the \$\_SESSION variable are the input. The same operation takes place in all the scripts; the output is then compared to the submitted \$\_POST variable, using \texttt{strcmp()}.

\subsection{create\_user\_script.php}
The create\_user\_script.php is the processing file for the entries to create a new user in the web interface for the database. Basically, the script just performs the validity checks of the form data provided in create-user.php and an \textbf{INSERT} query to the MySQL database.\\
At the beginning, the script checks whether the provided check bytes submitted in create-user.php match those from the session. Again, this is a CSRF protection in case the session has been hijacked. If there are no problems with the CSRF protection, the script checks the validity of the form data provided. This check already partly takes place in the HTML, where the attribute ``required'' is set for all fields, but a control in the script should avoid empty fields in the database.

\paragraph{Type checking of entries}
For the validity checks, the following steps are taken:
\begin{itemize}
\item validity check of the e-mail address provided
\item check whether the desired username is available
\item check whether the password matches the control field
\item check whether the user permissions entered in the form are allowed to be set by the user invoked
\item check whether the user invoked is allowed to select an institution different from his own
\item check whether the entered permissions are met by the other data entered into the database, such as a check whether a user role is possible or whether a group exists.
\end{itemize}
All manual input in free-text fields, such as the username, is not checked again by the script and is taken for granted as true. During the script, only one value is changed automatically, which is the username, as it is converted to a lower-case string.

\paragraph{Data gathering from different tables}
If any of these checks fails, the user receives an error notice and is directed back to the form. As the group, department and country entries are stored in a different way in the database than in the user session, the database values have to be retrieved from the database. Therefore, the corresponding institution, department and country are queried from the Dim\_Group table. As the country is stored as an integer value, the corresponding ID\_Country has to be queried from Dim\_Country in the next step.\\
The passwords are stored as bcrypt hash values in a separate database; the hashing is performed after the validity checks of the form.

\paragraph{Data insertion}
The next step is to insert all data into the Dim\_User table in the agri\_star\_001 database. During this step, the ID\_User value, which identifies the user during further operation, is set to the resulting auto-increment value of the primary key from the database. To allow a sufficiently large number of users, the ID\_User field is of the type BIGINT. The insert is performed in an object-oriented style: the query is generated first, then the values are bound to the query (\texttt{mysqli::prepare(\$query)}, \texttt{mysqli::bind\_param(parameters)}), and then the query is executed (\texttt{mysqli::execute}).
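The following is a minimal, hypothetical sketch of this prepare/bind/execute flow; the column list and variable names are assumptions for illustration only and do not reproduce the actual script:

\begin{verbatim}
// Hypothetical sketch; column names and variables are assumptions.
// The form values are assumed to have passed the validity checks above.
$stmt = $mysqli->prepare(
    "INSERT INTO Dim_User (username, first_name, last_name, e_mail, ID_Group, user_role)
     VALUES (?, ?, ?, ?, ?, ?)");
$stmt->bind_param("ssssii", $username, $firstName, $lastName, $email, $idGroup, $role);
$stmt->execute();
$idUser = $mysqli->insert_id;  // auto-increment primary key, reused as ID_User
\end{verbatim}

Using bound parameters keeps the user-supplied form values out of the SQL string itself, which matters here because most free-text fields are not validated any further by the script.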
The transaction ID of that insert is retrieved, as it is used as the ID\_User to map the data in the auth database to the corresponding user. In the next step, the username, the password hash and the ID\_User are inserted into the auth database, using the same process as for the Dim\_User table.\\
The script quits with a success message.

\label{create_institution_script.php}
\subsection{create\_institution\_script.php}
The create\_institution\_script.php is the processing script for the entries in the institution form. Again, it is basically an \textbf{INSERT} query to the database, where no external values have to be retrieved from other tables in the database.

\paragraph{Type checking}
The only type checking for an input field performed by the create\_institution\_script.php is the check of the provided e-mail address. This check only verifies a basic match with e-mail address rules. Furthermore, a logical check is performed on the selected country, where the script checks whether the selected country exists in the database. In all other cases, the user input is not checked again by the script and is taken for granted as true. If all fields are filled, the provided data is put into the Dim\_Group table in the database, using a similar method as for the users.

\paragraph{Rights management}
To allow group-wise right grants, a separate table keeps track of the rights granted between all institutions in Dim\_Group. Therefore, it is important to create connections between all groups created during the project. In order to do this, the transaction ID is retrieved from the database and all other ID\_Group values are queried from the database. Then these values are written into the grants table, where one column is the giving group and another column is the receiving group. \\
Finally, the script quits with a success message.

\subsection{delete\_user\_script.php}
The deletion of a user does not lead to all his data being wiped out of the Dim\_User and the auth table; instead, the user is simply kept from successfully logging into the frontend. Therefore, the password in the auth table is updated to a cryptographically secure random binary string. Furthermore, the column user\_not\_active in the Dim\_User table is set to 1, which no longer shows the user in any active state. The script exits with a success message and refers to the create-user.php page.

\subsection{set-password.php}
In case a user has forgotten his password, it can be reset by a user with a role higher than 2 (professor and admin). As there is no further check whether the user taking the action is allowed to set the password, this is a type of trust system, which should exist in the institutions taking part in the project. The set-password.php script does the following steps:
\begin{itemize}
\item check whether the two password fields match
\item hash the password, using the bcrypt standard with a cost factor of 12, to increase the computational requirements for password cracking
\item save the password hash to the auth database
\end{itemize}
If all entries are fine, the script quits successfully and refers to the create-user.php page again.

\subsection{grant-rights.php}
The permissions administered in the rights table are written into the database by grant-rights.php. Unlike the former scripts, which basically perform an INSERT query on the database, grant-rights.php first checks whether an entry corresponding to the two mentioned groups exists in the grants table.
This only occurs if the user has selected a group with which he wants to share data. Otherwise, nothing happens to the table, and groups that are not selected are neither inserted nor updated. In case the entry with the two groups does not exist, both ID\_Group values and the value for data access (1) are written into the database.\\
If the pairing of both groups already exists, as it was created during a group creation, the script just updates the access value to one.\\
To withdraw rights from existing groups, the corresponding checkboxes on the admin page have to be unchecked. As the content of the checkboxes is transmitted as an array containing all group numbers delivered by the checked checkboxes, the grant-rights.php script can check for the existence of all shown groups in the array. If an ID\_Group value does not come up in the array, the connected checkbox has either not been checked or has been unchecked.\\
As the first step, the script gathers from the database all groups to which the current user's group has granted rights; entries for these groups must exist, as they have at least been created during the rights permission process. In the loop that builds the array from the database results, the script checks whether the currently processed group comes up in the checkbox array from the form. If the ID\_Group value does not come up in the checkbox array, an SQL query is executed to replace the value for data access (1) with a 0. Otherwise, the loop ends when the array has been completely parsed.\\
Then the script exits with a success message, but does not indicate which results have been achieved. This can easily be done by checking the checkboxes, which show the current rights status for the other groups.
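As a rough, hypothetical sketch of the withdraw loop described above (the table and column names are assumptions for illustration, not the actual schema):

\begin{verbatim}
// Hypothetical sketch of the withdraw loop; table/column names are assumptions.
$granted = $mysqli->prepare(
    "SELECT receiving_group FROM grants
     WHERE giving_group = ? AND data_access = 1");
$granted->bind_param("i", $ownGroup);
$granted->execute();
$result = $granted->get_result();

while ($row = $result->fetch_assoc()) {
    // $checkedGroups holds the ID_Group values submitted by the checked checkboxes.
    if (!in_array($row['receiving_group'], $checkedGroups)) {
        $revoke = $mysqli->prepare(
            "UPDATE grants SET data_access = 0
             WHERE giving_group = ? AND receiving_group = ?");
        $revoke->bind_param("ii", $ownGroup, $row['receiving_group']);
        $revoke->execute();
    }
}
\end{verbatim}

Groups whose checkboxes remain checked are skipped by the loop, so their access value stays at 1.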
{ "alphanum_fraction": 0.7980275491, "avg_line_length": 108.5752212389, "ext": "tex", "hexsha": "b6bdcdddc1bea1fb22c4f3c9b3bf5a9aa58ccd56", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c983a34872a70eb36e441de7dca8796dbf0fa3fc", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "hollyclergyman/2Org-Cows-Docs", "max_forks_repo_path": "create-user.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c983a34872a70eb36e441de7dca8796dbf0fa3fc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "hollyclergyman/2Org-Cows-Docs", "max_issues_repo_path": "create-user.tex", "max_line_length": 203, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c983a34872a70eb36e441de7dca8796dbf0fa3fc", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "hollyclergyman/2Org-Cows-Docs", "max_stars_repo_path": "create-user.tex", "max_stars_repo_stars_event_max_datetime": "2017-04-24T07:43:01.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-24T07:43:01.000Z", "num_tokens": 2569, "size": 12269 }
\documentclass{article}
\usepackage[pdftex]{graphicx}
\pdfcompresslevel=9
\pdfoutput=1

\topmargin 0in
\textheight 8in
\oddsidemargin 0in
\evensidemargin 0in
\textwidth 6.5in
\parskip 0.1in
\parindent 0in

\renewcommand{\arraystretch}{2}
\newenvironment{Matrix}[1]{\left[ \begin{array}{#1}}{\end{array} \right]}
\newcommand{\Real}{{\mbox{\rm I}\hspace*{-2pt}\mbox{\rm R}}}
\newcommand{\Code}[1]{{\tt #1}}

\begin{document}
\DeclareGraphicsExtensions{.pdf,.png}

\begin{center}
\large\bf Directions for Running TriangleIntersection2.exe
\end{center}

\section{Introduction}

This is a Microsoft Windows application that is built on top of the \Code{FWinApplication} and \Code{MagicFM} libraries. The program illustrates four intersection routines for triangles in 2D:
\begin{enumerate}
\item Test for intersection of stationary triangles.
\item Find the intersection set of stationary triangles.
\item Test for intersection of moving triangles during a specified time interval and report the first time of contact.
\item Find the intersection of moving triangles during a specified time interval and report the first time of contact and the intersection set at that time.
\end{enumerate}

When you run the program, a window is displayed and shows two triangles, one drawn in red (triangle $0$) and one drawn in blue (triangle $1$). Figure 1 shows the initial window.
%
% FIGURE 1
%
\begin{center}
\includegraphics{Figure1.png}

Figure 1. The initial window for TestTriTri.exe.
\end{center}
%
The status bar shows the active triangle (initially $-1$ since no triangle is active). The status bar also shows the velocity of each triangle as a speed and an angle index for an angle measured from the positive x--axis. Given an angle index of $N \in \{0,\ldots,31\}$, the actual angle is $\theta = 2\pi N/32$. The initial angles are both zero, so the direction of velocity is $(1,0)$. The speeds are both zero, so the velocities themselves are $(0,0)$. A triangle is activated by clicking on it with the left mouse button.

The upper portion of the window shows the type of intersection test: \Code{TEST}, \Code{FIND}, \Code{TEST VEL}, or \Code{FIND VEL}. These correspond to the four tests mentioned above, in that order. To toggle among the types, press the \Code{`t'} key. After the type is a statement about whether or not the triangles are intersecting.

\section{Moving the Triangles}

You can move a triangle by clicking on an interior point with the left mouse button and dragging to a new location. In the \Code{TEST} case, Figure 2 shows the two stationary triangles that have been dragged so that they intersect.
%
% FIGURE 2
%
\begin{center}
\includegraphics{Figure2.png}

Figure 2. Two intersecting triangles in the \Code{TEST} query.
\end{center}

\section{Changing the Triangle Vertices}

You can also change a triangle vertex by clicking with the left mouse button near a vertex and dragging it. Figure 3 shows the two stationary triangles that have been modified and moved. The \Code{FIND} case is used. In this case, the intersection set is calculated and displayed in purple.
%
% FIGURE 3
%
\begin{center}
\includegraphics{Figure3.png}

Figure 3. Two modified and intersecting triangles in the \Code{FIND} query.
\end{center}

\section{Changing the Triangle Velocities}

Activate the triangle whose velocity you want to change. The speed can be modified by pressing the \Code{`<'} and \Code{`>'} keys. The minimum speed is zero and the maximum speed is $64$. The angle index can be modified by pressing the \Code{`+'} and \Code{`-'} keys.
In the \Code{TEST} and \Code{FIND} cases, only the status bar changes. However, in the \Code{TEST VEL} and \Code{FIND VEL} cases, triangles are drawn in dark red and dark blue that correspond to the original triangles moved by their velocities over an interval of $1$ unit of time. The paths of the vertices over that time interval are also drawn. Figure 4 shows a typical image for two modified and moved triangles in the \Code{TEST VEL} case.
%
% FIGURE 4
%
\begin{center}
\includegraphics{Figure4.png}

Figure 4. Two modified and moved triangles in the \Code{TEST VEL} query, both triangles having nonzero velocities.
\end{center}

\section{Test or Find Intersections for Moving Triangles}

In the cases \Code{TEST VEL} and \Code{FIND VEL}, if the triangles do intersect at first time $T \in [0,1]$, the program draws the triangles moved only over the interval $[0,T]$ so that they are just touching. As you drag one original triangle towards the other, you will notice that the moved triangles vary in distance from the originals, this being the case since $T$ is changing as you drag the triangles. If $T = 0$, the triangles are initially intersecting.

In the case \Code{FIND VEL}, the intersection set at first time of contact is drawn in purple, either a single point or a line segment. If $T = 0$ in this case, the polygon of intersection is drawn. Figure 5 shows a case where $T > 0$.
%
% FIGURE 5
%
\begin{center}
\includegraphics{Figure5.png}

Figure 5. Two modified and moved triangles in the \Code{FIND VEL} query, both triangles having nonzero velocities.
\end{center}
%
Figure 6 shows the case $T = 0$.
%
% FIGURE 6
%
\begin{center}
\includegraphics{Figure6.png}

Figure 6. Two modified and moved triangles in the \Code{FIND VEL} query, both triangles having nonzero velocities, but are intersecting at time $0$.
\end{center}
%
Note that the moved triangles are drawn on top of the original ones, despite the fact that both have nonzero velocities. This is the case since $T = 0$ causes the triangles to be moved by a zero distance.

\section{Exiting the Program}

Press the \Code{`q'} or \Code{ESC} keys or select the x--button at the upper right of the window.

\end{document}
{ "alphanum_fraction": 0.7317560813, "avg_line_length": 37.2795031056, "ext": "tex", "hexsha": "cc634fec1ff86b7125e86b20319df5582422203a", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-09-11T20:17:27.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-17T10:01:14.000Z", "max_forks_repo_head_hexsha": "65e85c0e31e82d612c793d980dc4b73fa186c76c", "max_forks_repo_licenses": [ "Linux-OpenIB" ], "max_forks_repo_name": "acidicMercury8/xray-1.0", "max_forks_repo_path": "sdk/MagicSoftware/FreeMagic/Applications/TriangleIntersection2/TestTriTri.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "65e85c0e31e82d612c793d980dc4b73fa186c76c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Linux-OpenIB" ], "max_issues_repo_name": "acidicMercury8/xray-1.0", "max_issues_repo_path": "sdk/MagicSoftware/FreeMagic/Applications/TriangleIntersection2/TestTriTri.tex", "max_line_length": 79, "max_stars_count": 10, "max_stars_repo_head_hexsha": "65e85c0e31e82d612c793d980dc4b73fa186c76c", "max_stars_repo_licenses": [ "Linux-OpenIB" ], "max_stars_repo_name": "acidicMercury8/xray-1.0", "max_stars_repo_path": "sdk/MagicSoftware/FreeMagic/Applications/TriangleIntersection2/TestTriTri.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-20T20:24:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-04T06:40:27.000Z", "num_tokens": 1608, "size": 6002 }
\documentclass{howto}
\usepackage{ltxmarkup}
\usepackage{times}
\usepackage{distutils}

\title{Installing Python Modules}

% The audience for this document includes people who don't know anything
% about Python and aren't about to learn the language just in order to
% install and maintain it for their users, i.e. system administrators.
% Thus, I have to be sure to explain the basics at some point:
% sys.path and PYTHONPATH at least. Should probably give pointers to
% other docs on "import site", PYTHONSTARTUP, PYTHONHOME, etc.
%
% Also, I need to take into account that most modules out there don't
% (yet) use Distutils: briefly explain the old Makefile.pre.in
% convention (maybe move material from the E&E manual to here?), and
% explain where to copy .py and .so files manually if the distribution
% doesn't provide a mechanism for doing so.
%
% Finally, it might be useful to include all the material from my "Care
% and Feeding of a Python Installation" talk in here somewhere. Yow!

\author{Greg Ward}
\authoraddress{E-mail: \email{[email protected]}}

\makeindex

\begin{document}

\maketitle

\begin{abstract}
\noindent
This document describes the Python Distribution Utilities (``Distutils'') from the end-user's point-of-view, describing how to extend the capabilities of a standard Python installation by building and installing third-party Python modules and extensions.
\end{abstract}

%\begin{abstract}
%\noindent
%Abstract this!
%\end{abstract}

\tableofcontents

\section{Introduction}
\label{intro}

Although Python's extensive standard library covers many programming needs, there often comes a time when you need to add some new functionality to your Python installation in the form of third-party modules. This might be necessary to support your own programming, or to support an application that you want to use and that happens to be written in Python.

In the past, there has been little support for adding third-party modules to an existing Python installation. With the introduction of the Python Distribution Utilities (Distutils for short) in Python 2.0, this is starting to change. Not everything will change overnight, though, so while this document concentrates on installing module distributions that use the Distutils, we will also spend some time dealing with the old ways.

This document is aimed primarily at the people who need to install third-party Python modules: end-users and system administrators who just need to get some Python application running, and existing Python programmers who want to add some new goodies to their toolbox. You don't need to know Python to read this document; there will be some brief forays into using Python's interactive mode to explore your installation, but that's it. If you're looking for information on how to distribute your own Python modules so that others may use them, see the \citetitle[../dist/dist.html]{Distributing Python Modules} manual.

\subsection{Best case: trivial installation}
\label{trivial-install}

In the best case, someone will have prepared a special version of the module distribution you want to install that is targeted specifically at your platform and is installed just like any other software on your platform. For example, the module developer might make an executable installer available for Windows users, an RPM package for users of RPM-based Linux systems (Red Hat, SuSE, Mandrake, and many others), a Debian package for users of Debian-based Linux systems (Debian proper, Caldera, Corel, etc.), and so forth.
In that case, you would download the installer appropriate to your platform and do the obvious thing with it: run it if it's an executable installer, \code{rpm --install} it if it's an RPM, etc. You don't need to run Python or a setup script, you don't need to compile anything---you might not even need to read any instructions (although it's always a good idea to do so anyways). Of course, things will not always be that easy. You might be interested in a module distribution that doesn't have an easy-to-use installer for your platform. In that case, you'll have to start with the source distribution released by the module's author/maintainer. Installing from a source distribution is not too hard, as long as the modules are packaged in the standard way. The bulk of this document is about building and installing modules from standard source distributions. \subsection{The new standard: Distutils} \label{new-standard} If you download a module source distribution, you can tell pretty quickly if it was packaged and distributed in the standard way, i.e. using the Distutils. First, the distribution's name and version number will be featured prominently in the name of the downloaded archive, e.g. \file{foo-1.0.tar.gz} or \file{widget-0.9.7.zip}. Next, the archive will unpack into a similarly-named directory: \file{foo-1.0} or \file{widget-0.9.7}. Additionally, the distribution will contain a setup script \file{setup.py}, and a \file{README.txt} (or possibly \file{README}), which should explain that building and installing the module distribution is a simple matter of running \begin{verbatim} python setup.py install \end{verbatim} If all these things are true, then you already know how to build and install the modules you've just downloaded: run the command above. Unless you need to install things in a non-standard way or customize the build process, you don't really need this manual. Or rather, the above command is everything you need to get out of this manual. \subsection{The old way: no standards} \label{old-way} Before the Distutils, there was no infrastructure to support installing third-party modules in a consistent, standardized way. Thus, it's not really possible to write a general manual for installing Python modules that don't use the Distutils; the only truly general statement that can be made is, ``Read the module's own installation instructions.'' However, if such instructions exist at all, they are often woefully inadequate and targeted at experienced Python developers. Such users are already familiar with how the Python library is laid out on their platform, and know where to copy various files in order for Python to find them. This document makes no such assumptions, and explains how the Python library is laid out on three major platforms (Unix, Windows, and Mac~OS), so that you can understand what happens when the Distutils do their job \emph{and} know how to install modules manually when the module author fails to provide a setup script. Additionally, while there has not previously been a standard installation mechanism, Python has had some standard machinery for building extensions on Unix since Python \XXX{version?}. This machinery (the \file{Makefile.pre.in} file) is superseded by the Distutils, but it will no doubt live on in older module distributions for a while. This \file{Makefile.pre.in} mechanism is documented in the ``Extending \& Embedding Python'' manual, but that manual is aimed at module developers---hence, we include documentation for builders/installers here. 
All of the pre-Distutils material is tucked away in section~\ref{pre-distutils}.

\section{Standard Build and Install}
\label{standard-install}

As described in section~\ref{new-standard}, building and installing a module distribution using the Distutils is usually one simple command:
\begin{verbatim}
python setup.py install
\end{verbatim}
On Unix, you'd run this command from a shell prompt; on Windows, you have to open a command prompt window (``DOS box'') and do it there; on Mac~OS, things are a tad more complicated (see below).

\subsection{Platform variations}
\label{platform-variations}

You should always run the setup command from the distribution root directory, i.e. the top-level subdirectory that the module source distribution unpacks into. For example, if you've just downloaded a module source distribution \file{foo-1.0.tar.gz} onto a Unix system, the normal thing to do is:
\begin{verbatim}
gunzip -c foo-1.0.tar.gz | tar xf -    # unpacks into directory foo-1.0
cd foo-1.0
python setup.py install
\end{verbatim}

On Windows, you'd probably download \file{foo-1.0.zip}. If you downloaded the archive file to \file{C:\textbackslash{}Temp}, then it would unpack into \file{C:\textbackslash{}Temp\textbackslash{}foo-1.0}; you can use either a GUI archive manipulator (such as WinZip) or a command-line tool (such as \program{unzip} or \program{pkunzip}) to unpack the archive. Then, open a command prompt window (``DOS box''), and run:
\begin{verbatim}
cd c:\Temp\foo-1.0
python setup.py install
\end{verbatim}

On Mac~OS, you have to go through a bit more effort to supply command-line arguments to the setup script:
\begin{itemize}
\item hit option-double-click on the script's icon (or option-drop it onto the Python interpreter's icon)
\item press the ``Set unix-style command line'' button
\item set the ``Keep stdio window open on termination'' option if you're interested in seeing the output of the setup script (which is usually voluminous and often useful)
\item when the command-line dialog pops up, enter ``install'' (you can, of course, enter any Distutils command-line as described in this document or in the ``Distributing Python Modules'' document: just leave off the initial \code{python setup.py} and you'll be fine)
\end{itemize}
\XXX{this should change: every Distutils setup script will need command-line arguments for every run (and should probably keep stdout around), so all this should happen automatically for setup scripts}

\subsection{Splitting the job up}
\label{splitting-up}

Running \code{setup.py install} builds and installs all modules in one run. If you prefer to work incrementally---especially useful if you want to customize the build process, or if things are going wrong---you can use the setup script to do one thing at a time. This is particularly helpful when the build and install will be done by different users---e.g., you might want to build a module distribution and hand it off to a system administrator for installation (or do it yourself, with super-user privileges).

For example, you can build everything in one step, and then install everything in a second step, by invoking the setup script twice:
\begin{verbatim}
python setup.py build
python setup.py install
\end{verbatim}
(If you do this, you will notice that running the \command{install} command first runs the \command{build} command, which---in this case---quickly notices that it has nothing to do, since everything in the \file{build} directory is up-to-date.)
You may not need this ability to break things down often if all you do is install modules downloaded off the 'net, but it's very handy for more advanced tasks. If you get into distributing your own Python modules and extensions, you'll run lots of individual Distutils commands on their own.

\subsection{How building works}
\label{how-build-works}

As implied above, the \command{build} command is responsible for putting the files to install into a \emph{build directory}. By default, this is \file{build} under the distribution root; if you're excessively concerned with speed, or want to keep the source tree pristine, you can change the build directory with the \longprogramopt{build-base} option. For example:
\begin{verbatim}
python setup.py build --build-base=/tmp/pybuild/foo-1.0
\end{verbatim}
(Or you could do this permanently with a directive in your system or personal Distutils configuration file; see section~\ref{config-files}.) Normally, this isn't necessary.

The default layout for the build tree is as follows:
\begin{verbatim}
--- build/ --- lib/
or
--- build/ --- lib.<plat>/
               temp.<plat>/
\end{verbatim}
where \code{<plat>} expands to a brief description of the current OS/hardware platform. The first form, with just a \file{lib} directory, is used for ``pure module distributions''---that is, module distributions that include only pure Python modules. If a module distribution contains any extensions (modules written in C/C++, or Java for JPython), then the second form, with two \code{<plat>} directories, is used. In that case, the \file{temp.\filevar{plat}} directory holds temporary files generated by the compile/link process that don't actually get installed. In either case, the \file{lib} (or \file{lib.\filevar{plat}}) directory contains all Python modules (pure Python and extensions) that will be installed.

In the future, more directories will be added to handle Python scripts, documentation, binary executables, and whatever else is needed to handle the job of installing Python modules and applications.

\subsection{How installation works}
\label{how-install-works}

After the \command{build} command runs (whether you run it explicitly, or the \command{install} command does it for you), the work of the \command{install} command is relatively simple: all it has to do is copy everything under \file{build/lib} (or \file{build/lib.\filevar{plat}}) to your chosen installation directory.

If you don't choose an installation directory---i.e., if you just run \code{setup.py install}---then the \command{install} command installs to the standard location for third-party Python modules. This location varies by platform and by how you built/installed Python itself.
On Unix and Mac OS, it also depends on whether the module distribution being installed is pure Python or contains extensions (``non-pure''):

\begin{tableiv}{l|l|l|c}{textrm}%
  {Platform}{Standard installation location}{Default value}{Notes}
  \lineiv{Unix (pure)}
    {\filenq{\filevar{prefix}/lib/python2.0/site-packages}}
    {\filenq{/usr/local/lib/python2.0/site-packages}}
    {(1)}
  \lineiv{Unix (non-pure)}
    {\filenq{\filevar{exec-prefix}/lib/python2.0/site-packages}}
    {\filenq{/usr/local/lib/python2.0/site-packages}}
    {(1)}
  \lineiv{Windows}
    {\filenq{\filevar{prefix}}}
    {\filenq{C:\textbackslash{}Python}}
    {(2)}
  \lineiv{Mac~OS (pure)}
    {\filenq{\filevar{prefix}:Lib}}
    {\filenq{Python:Lib} \XXX{???}}
    {}
  \lineiv{Mac~OS (non-pure)}
    {\filevar{prefix}:Mac:PlugIns}
    {\filenq{Python:Mac:PlugIns}\XXX{???}}
    {}
\end{tableiv}

\noindent Notes:
\begin{description}
\item[(1)] Most Linux distributions include Python as a standard part of the system, so \filevar{prefix} and \filevar{exec-prefix} are usually both \file{/usr} on Linux. If you build Python yourself on Linux (or any Unix-like system), the default \filevar{prefix} and \filevar{exec-prefix} are \file{/usr/local}.
\item[(2)] The default installation directory on Windows was \file{C:\textbackslash{}Program Files\textbackslash{}Python} under Python 1.6a1, 1.5.2, and earlier.
\end{description}

\filevar{prefix} and \filevar{exec-prefix} stand for the directories that Python is installed to, and where it finds its libraries at run-time. They are always the same under Windows and Mac~OS, and very often the same under Unix.

You can find out what your Python installation uses for \filevar{prefix} and \filevar{exec-prefix} by running Python in interactive mode and typing a few simple commands. Under Unix, just type \code{python} at the shell prompt; under Windows, run ``Python 2.0 (interpreter)'' \XXX{right?}; under Mac~OS, \XXX{???}. Once the interpreter is started, you type Python code at the \samp{>>> } prompt. For example, on my Linux system, I type the three Python statements shown below, and get the output as shown, to find out my \filevar{prefix} and \filevar{exec-prefix}:

\begin{verbatim}
Python 1.5.2 (#1, Apr 18 1999, 16:03:16)  [GCC pgcc-2.91.60 19981201 (egcs-1.1.1 on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import sys
>>> sys.prefix
'/usr'
>>> sys.exec_prefix
'/usr'
\end{verbatim}

If you don't want to install to the standard location, or if you don't have permission to write there, then you need to read about alternate installations in the next section.

% This rather nasty macro is used to generate the tables that describe
% each installation scheme. It's nasty because it takes two arguments
% for each "slot" in an installation scheme, there will soon be more
% than five of these slots, and TeX has a limit of 10 arguments to a
% macro. Uh-oh.

\newcommand{\installscheme}[8]
  {\begin{tableiii}{lll}{textrm}
          {Type of file}
          {Installation Directory}
          {Override option}
     \lineiii{pure module distribution}
             {\filevar{#1}\filenq{#2}}
             {\longprogramopt{install-purelib}}
     \lineiii{non-pure module distribution}
             {\filevar{#3}\filenq{#4}}
             {\longprogramopt{install-platlib}}
     \lineiii{scripts}
             {\filevar{#5}\filenq{#6}}
             {\longprogramopt{install-scripts}}
     \lineiii{data}
             {\filevar{#7}\filenq{#8}}
             {\longprogramopt{install-data}}
   \end{tableiii}}

\section{Building Extensions: Tips and Tricks}
\label{building-ext}

(This is the section to read for people doing any sort of interesting build.
Things to talk about:
\begin{itemize}
\item the \file{Setup} file (any platform now, but Unix-biased)
\item CFLAGS and LDFLAGS (must implement them first!)
\item using non-MS compilers on Windows (how to convert
  Python's library, ...)
\end{itemize}

\subsection{Tweaking compiler/linker flags}
\label{tweak-flags}

\subsection{Using non-Microsoft compilers on Windows}
\label{non-ms-compilers}

\XXX{One place to look: \url{http://www.cyberus.ca/~g_will/pyExtenDL.shtml}}

\section{Alternate Installation}
\label{alt-install}

Often, it is necessary or desirable to install modules to a location
other than the standard location for third-party Python modules.  For
example, on a Unix system you might not have permission to write to the
standard third-party module directory.  Or you might wish to try out a
module before making it a standard part of your local Python
installation; this is especially true when upgrading a distribution
already present: you want to make sure your existing base of scripts
still works with the new version before actually upgrading.

The Distutils \command{install} command is designed to make installing
module distributions to an alternate location simple and painless.  The
basic idea is that you supply a base directory for the installation, and
the \command{install} command picks a set of directories (called an
\emph{installation scheme}) under this base directory in which to install
files.  The details differ across platforms, so read whichever of the
following sections applies to you.

\subsection{Alternate installation: Unix (the home scheme)}
\label{alt-install-prefix}

Under Unix, there are two ways to perform an alternate installation.  The
``prefix scheme'' is similar to how alternate installation works under
Windows and Mac~OS, but is not necessarily the most useful way to
maintain a personal Python library.  Hence, we document the more
convenient and commonly useful ``home scheme'' first.

The idea behind the ``home scheme'' is that you build and maintain a
personal stash of Python modules, probably under your home directory.
Installing a new module distribution is as simple as

\begin{verbatim}
python setup.py install --home=<dir>
\end{verbatim}

where you can supply any directory you like for the
\longprogramopt{home} option.  Lazy typists can just type a tilde
(\code{\textasciitilde}); the \command{install} command will expand this
to your home directory:

\begin{verbatim}
python setup.py install --home=~
\end{verbatim}

The \longprogramopt{home} option defines the installation base directory.
Files are installed to the following directories under the installation
base:

\installscheme{home}{/lib/python}
              {home}{/lib/python}
              {home}{/bin}
              {home}{/share}

\subsection{Alternate installation: Unix (the prefix scheme)}
\label{alt-install-home}

The ``prefix scheme'' is useful when you wish to use one Python
installation to perform the build/install (i.e., to run the setup
script), but install modules into the third-party module directory of a
different Python installation (or something that looks like a different
Python installation).  If this sounds a trifle unusual, it is---that's
why the ``home scheme'' comes first.  However, there are at least two
known cases where the prefix scheme will be useful.

First, consider that many Linux distributions put Python in \file{/usr},
rather than the more traditional \file{/usr/local}.  This is entirely
appropriate, since in those cases Python is part of ``the system''
rather than a local add-on.
However, if you are installing Python modules from source, you probably
want them to go in \file{/usr/local/lib/python1.\filevar{X}} rather than
\file{/usr/lib/python1.\filevar{X}}.  This can be done with

\begin{verbatim}
/usr/bin/python setup.py install --prefix=/usr/local
\end{verbatim}

Another possibility is a network filesystem where the name used to write
to a remote directory is different from the name used to read it: for
example, the Python interpreter accessed as \file{/usr/local/bin/python}
might search for modules in \file{/usr/local/lib/python1.\filevar{X}},
but those modules would have to be installed to, say,
\file{/mnt/\filevar{@server}/export/lib/python1.\filevar{X}}.  This
could be done with

\begin{verbatim}
/usr/local/bin/python setup.py install --prefix=/mnt/@server/export
\end{verbatim}

In either case, the \longprogramopt{prefix} option defines the
installation base, and the \longprogramopt{exec-prefix} option defines
the platform-specific installation base, which is used for
platform-specific files.  (Currently, this just means non-pure module
distributions, but could be expanded to C libraries, binary executables,
etc.)  If \longprogramopt{exec-prefix} is not supplied, it defaults to
\longprogramopt{prefix}.  Files are installed as follows:

\installscheme{prefix}{/lib/python1.\filevar{X}/site-packages}
              {exec-prefix}{/lib/python1.\filevar{X}/site-packages}
              {prefix}{/bin}
              {prefix}{/share}

There is no requirement that \longprogramopt{prefix} or
\longprogramopt{exec-prefix} actually point to an alternate Python
installation; if the directories listed above do not already exist, they
are created at installation time.

Incidentally, the real reason the prefix scheme is important is simply
that a standard Unix installation uses the prefix scheme, but with
\longprogramopt{prefix} and \longprogramopt{exec-prefix} supplied by
Python itself (as \code{sys.prefix} and \code{sys.exec\_prefix}).  Thus,
you might think you'll never use the prefix scheme, but every time you
run \code{python setup.py install} without any other options, you're
using it.

Note that installing extensions to an alternate Python installation has
no effect on how those extensions are built: in particular, the Python
header files (\file{Python.h} and friends) installed with the Python
interpreter used to run the setup script will be used in compiling
extensions.  It is your responsibility to ensure that the interpreter
used to run extensions installed in this way is compatible with the
interpreter used to build them.  The best way to do this is to ensure
that the two interpreters are the same version of Python (possibly
different builds, or possibly copies of the same build).  (Of course, if
your \longprogramopt{prefix} and \longprogramopt{exec-prefix} don't even
point to an alternate Python installation, this is immaterial.)

\subsection{Alternate installation: Windows}
\label{alt-install-windows}

Since Windows has no conception of a user's home directory, and since
the standard Python installation under Windows is simpler than that
under Unix, there's no point in having separate \longprogramopt{prefix}
and \longprogramopt{home} options.  Just use the \longprogramopt{prefix}
option to specify a base directory, e.g.

\begin{verbatim}
python setup.py install --prefix="\Temp\Python"
\end{verbatim}

to install modules to the
\file{\textbackslash{}Temp\textbackslash{}Python} directory on the
current drive.

The installation base is defined by the \longprogramopt{prefix} option;
the \longprogramopt{exec-prefix} option is not supported under Windows.
Files are installed as follows: \installscheme{prefix}{} {prefix}{} {prefix}{\textbackslash{}Scripts} {prefix}{\textbackslash{}Data} \subsection{Alternate installation: Mac~OS} \label{alt-install-macos} Like Windows, Mac~OS has no notion of home directories (or even of users), and a fairly simple standard Python installation. Thus, only a \longprogramopt{prefix} option is needed. It defines the installation base, and files are installed under it as follows: \installscheme{prefix}{:Lib:site-packages} {prefix}{:Lib:site-packages} {prefix}{:Scripts} {prefix}{:Data} See section~\ref{platform-variations} for information on supplying command-line arguments to the setup script with MacPython. \section{Custom Installation} \label{custom-install} Sometimes, the alternate installation schemes described in section~\ref{alt-install} just don't do what you want. You might want to tweak just one or two directories while keeping everything under the same base directory, or you might want to completely redefine the installation scheme. In either case, you're creating a \emph{custom installation scheme}. You probably noticed the column of ``override options'' in the tables describing the alternate installation schemes above. Those options are how you define a custom installation scheme. These override options can be relative, absolute, or explicitly defined in terms of one of the installation base directories. (There are two installation base directories, and they are normally the same---they only differ when you use the Unix ``prefix scheme'' and supply different \longprogramopt{prefix} and \longprogramopt{exec-prefix} options.) For example, say you're installing a module distribution to your home directory under Unix---but you want scripts to go in \file{\textasciitilde/scripts} rather than \file{\textasciitilde/bin}. As you might expect, you can override this directory with the \longprogramopt{install-scripts} option; in this case, it makes most sense to supply a relative path, which will be interpreted relative to the installation base directory (your home directory, in this case): \begin{verbatim} python setup.py install --home=~ --install-scripts=scripts \end{verbatim} Another Unix example: suppose your Python installation was built and installed with a prefix of \file{/usr/local/python}, so under a standard installation scripts will wind up in \file{/usr/local/python/bin}. If you want them in \file{/usr/local/bin} instead, you would supply this absolute directory for the \longprogramopt{install-scripts} option: \begin{verbatim} python setup.py install --install-scripts=/usr/local/bin \end{verbatim} (This performs an installation using the ``prefix scheme,'' where the prefix is whatever your Python interpreter was installed with--- \file{/usr/local/python} in this case.) If you maintain Python on Windows, you might want third-party modules to live in a subdirectory of \filevar{prefix}, rather than right in \filevar{prefix} itself. This is almost as easy as customizing the script installation directory---you just have to remember that there are two types of modules to worry about, pure modules and non-pure modules (i.e., modules from a non-pure distribution). For example: \begin{verbatim} python setup.py install --install-purelib=Site --install-platlib=Site \end{verbatim} The specified installation directories are relative to \filevar{prefix}. Of course, you also have to ensure that these directories are in Python's module search path, e.g. 
by putting a \file{.pth} file in \filevar{prefix} (\XXX{should have a
section describing .pth files and cross-ref it here}).

If you want to define an entire installation scheme, you just have to
supply all of the installation directory options.  The recommended way
to do this is to supply relative paths; for example, if you want to
maintain all Python module-related files under \file{python} in your
home directory, and you want a separate directory for each platform that
you use your home directory from, you might define the following
installation scheme:

\begin{verbatim}
python setup.py install --home=~ \
                        --install-purelib=python/lib \
                        --install-platlib=python/lib.$PLAT \
                        --install-scripts=python/scripts --install-data=python/data
\end{verbatim}

or, equivalently,

\begin{verbatim}
python setup.py install --home=~/python \
                        --install-purelib=lib \
                        --install-platlib='lib.$PLAT' \
                        --install-scripts=scripts --install-data=data
\end{verbatim}

\code{\$PLAT} is not (necessarily) an environment variable---it will be
expanded by the Distutils as it parses your command line options (just
as it does when parsing your configuration file(s)).

Obviously, specifying the entire installation scheme every time you
install a new module distribution would be very tedious.  Thus, you can
put these options into your Distutils config file (see
section~\ref{config-files}):

\begin{verbatim}
[install]
install-base=$HOME
install-purelib=python/lib
install-platlib=python/lib.$PLAT
install-scripts=python/scripts
install-data=python/data
\end{verbatim}

or, equivalently,

\begin{verbatim}
[install]
install-base=$HOME/python
install-purelib=lib
install-platlib=lib.$PLAT
install-scripts=scripts
install-data=data
\end{verbatim}

Note that these two are \emph{not} equivalent if you supply a different
installation base directory when you run the setup script.  For example,

\begin{verbatim}
python setup.py install --install-base=/tmp
\end{verbatim}

would install pure modules to \filevar{/tmp/python/lib} in the first
case, and to \filevar{/tmp/lib} in the second case.  (For the second
case, you probably want to supply an installation base of
\file{/tmp/python}.)

You probably noticed the use of \code{\$HOME} and \code{\$PLAT} in the
sample configuration file input.  These are Distutils configuration
variables, which bear a strong resemblance to environment variables.  In
fact, you can use environment variables in config files---on platforms
that have such a notion---but the Distutils additionally define a few
extra variables that may not be in your environment, such as
\code{\$PLAT}.  (And of course, you can only use the configuration
variables supplied by the Distutils on systems that don't have
environment variables, such as Mac~OS (\XXX{true?}).)  See
section~\ref{config-files} for details.

\XXX{need some Windows and Mac~OS examples---when would custom
installation schemes be needed on those platforms?}

\section{Distutils Configuration Files}
\label{config-files}

\section{Pre-Distutils Conventions}
\label{pre-distutils}

\subsection{The Makefile.pre.in file}
\label{makefile-pre-in}

\subsection{Installing modules manually}
\label{manual-install}

\end{document}
{ "alphanum_fraction": 0.7610475675, "avg_line_length": 43.0548696845, "ext": "tex", "hexsha": "657849698686f76dbec63d0b80e51385ffa1160d", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-03-27T01:55:17.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-16T08:14:13.000Z", "max_forks_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "marcosptf/cpython-2.0.1", "max_forks_repo_path": "Doc/inst/inst.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_issues_repo_issues_event_max_datetime": "2021-05-03T21:20:50.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-18T15:48:14.000Z", "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "marcosptf/cpython-2.0.1", "max_issues_repo_path": "Doc/inst/inst.tex", "max_line_length": 91, "max_stars_count": 5, "max_stars_repo_head_hexsha": "73c739a764e8b1dc84640e73b880bc66e1916bca", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "marcosptf/cpython-2.0.1", "max_stars_repo_path": "Doc/inst/inst.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T21:47:20.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-26T21:53:36.000Z", "num_tokens": 7506, "size": 31387 }
\cleardoublepage

\chapter{Introduction}
\label{cha:introduction}

Function-as-a-Service (FaaS) is a cutting-edge service model that has emerged with recent advances in cloud computing.
Cloud functions allow custom code to be executed in response to an event.
In most cases, developers need only worry about their actual code, as event queuing, underlying infrastructure, dynamic scaling, and dispatching are all handled by the cloud provider~\cite{Baldini2017-zf,McGrath2017-or}.

This scalable and flexible event-based programming model is a great fit for IoT event and data processing.
Consider as an example a connected button and lightbulb.
When the button is pressed, it sends an event to a function in the cloud, which in turn sends a command to the lamp to turn on the light.
The three components are easily connected, and only the actual function code would need to be provided.
Thanks to managed FaaS, this approach also scales from two devices to thousands of devices without any additional configuration.

Current FaaS platforms do provide these benefits for the IoT; however, using them in this way is inefficient.
Sending all events and data to the cloud for processing leads to a high load on the network and high response latency~\cite{paper_zhang_cloud_is_not_enough_GDP,paper_bermbach_fog_computing}.
It is much more efficient to process IoT data closer to their service consumers, such as our lightbulb and button, which is the idea behind fog computing~\cite{Bermbach2020-sf,Pfandzelter2019-so}.
Positioned in the same network, our button may send its event to an edge function placed, for example, on a common gateway.
This also introduces additional transparency about data movement within the network and alleviates some security concerns about cloud computing~\cite{Bonomi2012-if,paper_bermbach_fog_computing}.

Currently, however, there are no open FaaS platforms that are built specifically for IoT data processing at the edge.
State-of-the-art platforms are instead built for powerful cloud hardware or for web-based services, or they are proprietary software that is not extensible.
We therefore make the following contributions in this thesis:

\begin{enumerate}
    \item We discuss the unique challenges of IoT data processing and edge computing and derive requirements for an edge FaaS platform (Chapter~\ref{cha:background})
    \item We introduce \textit{tinyFaaS}, a novel FaaS platform architecture that addresses the requirements we have identified (Chapter~\ref{cha:systemdesign})
    \item We evaluate \textit{tinyFaaS} through a proof-of-concept prototype and a number of experiments in which we compare it to state-of-the-art FaaS platforms, namely Kubeless and Lean OpenWhisk (Chapter~\ref{cha:evaluation})
\end{enumerate}
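To make the button-and-lightbulb example above concrete, the following minimal sketch shows what such a cloud or edge function could look like in Python.
It is purely illustrative and not part of \textit{tinyFaaS}: the handler name, the structure of the event, and the lamp's HTTP endpoint are assumptions made only for this example.

\begin{verbatim}
# Illustrative sketch only -- the handler name, event fields, and the
# lamp URL are assumptions, not part of any particular FaaS platform.
import json
import urllib.request

LAMP_URL = "http://lamp.local/api/state"  # hypothetical lightbulb endpoint


def handle(event):
    """Called by the FaaS platform whenever the button emits an event."""
    if event.get("action") == "pressed":
        payload = json.dumps({"power": "on"}).encode("utf-8")
        request = urllib.request.Request(
            LAMP_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # Forward the command to the lamp and report the outcome.
        with urllib.request.urlopen(request) as response:
            return {"status": response.status}
    return {"status": "ignored"}
\end{verbatim}

Deployed on a managed cloud FaaS platform, such a handler scales without further configuration; deployed on an edge FaaS platform running on a local gateway, the same handler keeps both the event and the resulting command inside the local network.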
{ "alphanum_fraction": 0.8157033806, "avg_line_length": 88.7419354839, "ext": "tex", "hexsha": "4be04086b3042cc27492a4c98630a164f9dfdd6d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "22f2ac5bd44c3afd05ab32863d55a4f7736edd20", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "pfandzelter/TU-Thesis-Template", "max_forks_repo_path": "mainmatter/01_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "22f2ac5bd44c3afd05ab32863d55a4f7736edd20", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "pfandzelter/TU-Thesis-Template", "max_issues_repo_path": "mainmatter/01_introduction.tex", "max_line_length": 227, "max_stars_count": null, "max_stars_repo_head_hexsha": "22f2ac5bd44c3afd05ab32863d55a4f7736edd20", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "pfandzelter/TU-Thesis-Template", "max_stars_repo_path": "mainmatter/01_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 601, "size": 2751 }
% \documentclass[a5paper,openany,10pt]{memoir} % Packages \usepackage[nobottomtitles]{titlesec} \usepackage{fontspec} \setmainfont{TeX Gyre Pagella} %palatino clone \usepackage[yyyymmdd,hhmmss]{datetime} % \usepackage{microtype} \usepackage{etoolbox} \usepackage{lettrine} \usepackage[british, latin]{babel} \usepackage{paracol} \usepackage{ifthen} \usepackage{scalerel} \usepackage{verse,anyfontsize} \usepackage{stackengine} \usepackage[hidelinks, final]{hyperref} \usepackage{multicol} \usepackage{afterpage} \usepackage[]{gitinfo2} % local \usepackage{rubrics} \usepackage{styling} % Config \pagestyle{giplain} \thispagestyle{empty} % ========= % layout \setulmarginsandblock{0.5in}{0.7in}{*} \setlrmarginsandblock{0.5in}{0.5in}{*} \checkandfixthelayout % headers \setsecnumdepth{chapter} \setsecheadstyle{\LARGE\raggedright\centering} \setsubsecheadstyle{\Large\scshape\raggedright\centering} \setsubsubsecheadstyle{\color{red}\large\scshape\raggedright\centering} \setparaheadstyle{\color{red}\large\raggedright\centering} \setafterparaskip{1.5ex plus .2ex} % multicols \renewcommand{\columnseprulecolor}{\color{red}} \setlength{\multicolsep}{0em} \setlength{\columnseprule}{0.4pt} \colseprulecolor{red} % page \openany \input{propers} \makeatletter \newcommand{\oneraggedpage}{\let\mytextbottom\@textbottom \let\mytexttop\@texttop \raggedbottom \afterpage{% \global\let\@textbottom\mytextbottom \global\let\@texttop\mytexttop}} \makeatother \renewcommand{\X}{\textcolor{red}{\grecross}} \renewcommand{\r}{\textcolor{red}{\gothRbar.}} \renewcommand{\v}{\textcolor{red}{\gothVbar.}} % \renewcommand{\red}[2][\centering]{%do the red.... % % call with empty optional argument for justified text; or % % \raggedright.] % % \filbreak % {\textcolor{red}{\noindent#2}#1}\par\nopagebreak\medskip\nopagebreak} \newcommand{\ps}[2][]{%typeset `psalmus' \paragraph{Psalmus #2#1}} % \medskip\nopagebreak % {\textcolor{red}{\noindent Psalmus #2#1}\centering\par\nopagebreak\vspace{-0.5\baselineskip}\nopagebreak} } \newcommand{\dropcap}[2]{% \lettrine{\textcolor{red}{#1}}{#2}} \renewcommand{\red}[2][\centering]{%do the red.... % call with empty optional argument for justified text; or % \raggedright.] % \filbreak {\noindent\textcolor{red}{\itshape#2}#1\par\nopagebreak\medskip\nopagebreak}\nopagebreak} \newcommand{\black}[2][0.5]{%say the black... 
\columnratio{#1} \begin{paracol}{2} \setlength{\parindent}{0em} \setlength{\parskip}{0.2\baselineskip} #2 \end{paracol} \medskip } \renewcommand{\X}{\textcolor{red}{\grecross}} \renewcommand{\l}[2][]{\selectlanguage{latin} \ifthenelse{\equal{s}{#1}}% {\emph{S.}~}% {}% \ifthenelse{\equal{m}{#1}}% {\emph{M.}~}% {}% #2\par\selectlanguage{british}\switchcolumn} \newcommand{\e}[2][]{% \ifthenelse{\equal{s}{#1}}% {\emph{P.}~}% {}% \ifthenelse{\equal{m}{#1}}% {\emph{S.}~}% {}% #2\par\switchcolumn*} \renewcommand{\S}{\emph{S.}~} \newcommand{\M}{\emph{M.}~} \newcommand{\amen}{\l{Amen.}\e{Amen.}} \newcommand{\dominusvobiscum}{% \l[s]{Dóminus vobíscum.} \e[s]{The Lord be with you.} \l[m]{Et cum spíritu tuo.} \e[m]{And with thy spirit.}} \newcommand{\oremus}{ \l{Orémus.} \e{Let us pray.} } \newcommand{\domineexaudi}{ \l{\v~Dómine, exáudi oratiónem meam.} \e{\v~O Lord, hear my prayer.} \l{\r~Et clamor meus ad te véniat.} \e{\r~And let my cry come unto thee.} } \newcommand{\deogratias}{ \l{\r~Deo grátias.} \e{\r~Thanks be to God.} } \newcommand{\onecolumntranslation}[1]{% {\small #1}} \newcommand{\threecolumntranslation}[1]{% \begin{multicols}{3} \small #1 \end{multicols}} % modified from https://tex.stackexchange.com/questions/163334/using-lettrine-or-equivalent-inside-verse-environment \def\startverse#1\\#2\\{% \begin{minipage}{4in}% \firstline#1\relax% \def\verselineB{#2}% \if Q\versalletter\def\descstrut{\strut}\else\def\descstrut{}\fi% \def\Versal{\textcolor{red}{% \scalerel*{$\fontsize{28}{30}\selectfont\versalletter$}% {\def\stacktype{L}\stackon{T\descstrut}{T}}}}% \setbox0=\hbox{\Versal\,}% \def\leftoffset{-.5\wd0}% \hspace*{\wd0}\hspace{\leftoffset}% \verselineA\\% \hspace*{\wd0}\hspace{\leftoffset}% \llap{\smash{\box0}}% \hspace{0.5em}\verselineB\strut% \end{minipage}\\% } \def\firstline#1#2 #3\relax{\def\versalletter{#1}\def\verselineA{\textsc{#2} #3}} \begin{document} \selectlanguage{latin} \midsloppy \setlength{\parskip}{\medskipamount} \setlength{\parindent}{0em} % title page \TitlePage{Sacra Missa}{\masstype}{\feast} \section{Introit} \red{When the celebrant reaches the altar:} \gregorioscore{intr.gtex} \onecolumntranslation{\introittranslation} \section{Kyrie} \red{Immediately after the Introit:} \gregorioscore{kyrie.gtex} \section{Gloria} \red{Immediately after the Kyrie:} \gregorioscore{gloria.gtex} \section{Collect} \gregorioscore{dominus_vobiscum.gtex} \black{\collect} \gregorioscore{amen.gtex} \section{Lesson} \red{The server chants the lesson:} \gregorioscore{lesson.gtex} \threecolumntranslation{\lessontranslation} \section{Responsory} \red{After the Chanting of the Epistle, or during the (secret) reading if so appointed:} \gregorioscore{grad.gtex} \gregorioscore{alleluia.gtex} \onecolumntranslation{\responsorytranslation} \section{Gospel} \red{The choir sings the responses at the Holy Gospel:} \gregorioscore{dominus_vobiscum.gtex} \gregorioscore{sequenti.gtex} \black{\gospel} \section{Response prior to the Offertory} \red{Just before the chanting of the Offertory:} \gregorioscore{dominus_vobiscum.gtex} \section{Offertory} \red{Immediately after the Celebrant chants the previous “Oremus”:} \gregorioscore{offert.gtex} \onecolumntranslation{\offertorytranslation} \red{A motet or hymn may be sung here.} \section{Responses at the Preface} \gregorioscore{pref.gtex} \section{Sanctus and Benedictus} \red{Immediately after the Celebrant concludes the chanting of the Preface:} \gregorioscore{sanctus.gtex} \section{Amen at the end of the Canon of the Mass} \gregorioscore{amen.gtex} 
\section{Pater Noster}
\red{The Celebrant alone chants the “Pater Noster.” The Choir sings the conclusion:}
\gregorioscore{sed.gtex}

\section{Breaking of the Host}
\gregorioscore{amen.gtex}
\gregorioscore{dominus_vobiscum.gtex}

\section{Agnus Dei}
\red{After the preceding ``Et cum spiritu tuo:''}
\gregorioscore{amen.gtex}
\gregorioscore{pax_domini.gtex}
\gregorioscore{agnus.gtex}

\subsubsection{Communion}
\red{Once the Celebrant begins to distribute Holy Communion:}
\gregorioscore{communio.gtex}
\onecolumntranslation{\communiontranslation}
\red{A motet or hymn may be sung here.}

\subsubsection{Post-Communion Collect}
\gregorioscore{dominus_vobiscum.gtex}
\black{\postcommunion}
\gregorioscore{amen.gtex}

\subsubsection{Dismissal}
\gregorioscore{dominus_vobiscum.gtex}
\gregorioscore{ite.gtex}

\ifdefempty{\marian}{}{%
\section{Marian Antiphon}
\red{At the conclusion of Mass the Marian antiphon proper to the season is sung:}
\gregorioscore{marian.gtex}
\red{A motet or hymn may follow.}
}

\begin{center}
  \+
\end{center}

\vfill

% Typeset by John Morris in ten-point Palatino.

Translations from the Knox Bible, divinumofficium.com, the St. Dominic Missal, and my own.
The latest version of this booklet can always be found at \url{https://github.com/2e0byo/StCuthbertsMasses}.
Comments? Suggestions? Found a mistake? Open an issue in the repository.

{\centering\footnotesize\texttt{Compositum \today\ hora \currenttime}\par}

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-engine: lualatex
%%% End:
{ "alphanum_fraction": 0.7298181342, "avg_line_length": 25.9965986395, "ext": "tex", "hexsha": "a88c63c230ec669b9f749e284bfa2845ea3a514e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b837fb86e07fc510939d89a1ff82494af69df683", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "2e0byo/StCuthbertsMasses", "max_forks_repo_path": "s-stanislai-episcopi-et-martyris/missalette.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "b837fb86e07fc510939d89a1ff82494af69df683", "max_issues_repo_issues_event_max_datetime": "2022-02-04T21:17:32.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-04T21:17:32.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "2e0byo/StCuthbertsMasses", "max_issues_repo_path": "s-stanislai-episcopi-et-martyris/missalette.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "b837fb86e07fc510939d89a1ff82494af69df683", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "2e0byo/StCuthbertsMasses", "max_stars_repo_path": "s-stanislai-episcopi-et-martyris/missalette.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2729, "size": 7643 }
\center \section{Aim 3 Title} \raggedright \subsection{Introduction to Aim 3} An introduction to Aim 3. \subsection{Background to Aim 3} This section will include the most relevant literature addressing this aim. \subsection{Methods} Maybe you'll discuss some methods. \subsubsection{Some crucial details about the method} It'll probably have a sub(sub)heading. \subsubsection{Conceptual model, research questions and hypotheses} Blah blah blah. \subsection{Results of Aim 3} Blah blah blah. \subsection{Discussion of Aim 3} Blah blah blah. \subsection{Conclusion of Aim 3} Blah blah blah.
{ "alphanum_fraction": 0.7866666667, "avg_line_length": 20, "ext": "tex", "hexsha": "b7c37f186bf99cca09a4f6e1fef2b50e10a2762f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bca1e60389eff101e7a630025c314425aa88e0a9", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "greenstick/ohsu-dissertation-template", "max_forks_repo_path": "sections/aim-3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bca1e60389eff101e7a630025c314425aa88e0a9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "greenstick/ohsu-dissertation-template", "max_issues_repo_path": "sections/aim-3.tex", "max_line_length": 70, "max_stars_count": 2, "max_stars_repo_head_hexsha": "85b7f3a5c1506822986b76ec4646c9f2a81cbdb8", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "jdrusso/dissertation", "max_stars_repo_path": "sections/aim-3.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-13T17:18:31.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-20T02:50:44.000Z", "num_tokens": 149, "size": 600 }
\documentclass[10pt,letterpaper]{moderncv} \usepackage[utf8]{inputenc} \moderncvtheme[red]{classic} \firstname{Babak} \familyname{Amin~Azad} \title{Curriculum Vitae} \address{Department of Computer Science \\ Stony Brook University \\ Stony Brook, NY, 11794\\} % \mobile{+1~(234)~567~890} % \phone{+2~(345)~678~901} % \fax{+3~(456)~789~012} \email{[email protected]} \homepage{https://silverf0x00.com} % \extrainfo{additional information} % \photo[64pt][0.4pt]{picture} % \quote{Some quote (optional)} \begin{document} \makecvtitle \section{Education} % \cventry{years}{degree}{institution}{location}{grade}{description} \cventry{2017-Present}{PhD in Computer Science}{Stony Brook University}{NY}{}{} \cventry{2017-2020}{MSc in Computer Science}{Stony Brook University}{NY}{}{} \cventry{2010-2015}{BSc in Software Engineering}{Shahid Beheshti University}{Tehran}{}{} \section{Employment} % \cventry{years}{position}{company}{location}{grade}{description} \cventry{2021}{Software Engineer Intern}{Cloudflare}{CA}{}{Developed QUIC \& HTTP/3 fingerprinting methods} \cventry{2020}{Software Engineer Intern}{Cloudflare}{CA}{}{Designed and built red team automation platform for bot management} \cventry{2018-Present}{Research Assistant}{PragSecLab, Stony Brook University}{NY}{}{Static \& dynamic code analysis, software debloating, bot detection, browser fingerprinting, large scale internet studies} \cventry{2017-2018}{Teaching Assistant}{Stony Brook University}{NY}{}{Undergraduate operating systems and introduction to security courses} \cventry{2014-2016}{Cyber Security Analyst, Incident Response Team}{Kashef Banking Security}{Tehran}{}{ \textbf{Notable projects:}\\ \textit{Website Monitoring and Deface Detection Service} \\ \textit{Banking Websites SSL Configuration Report and Hardening Guide} \\ \textit{Mobile Banking Software Security Report and Secure Android Development Guide} } \cventry{2013-2016}{Freelance Web Developer}{}{}{}{Design and development of personal websites, online businesses, migration of desktop applications to web} \section{Talks \& Publications} \cventry{2021}{Good Bot, Bad Bot: Characterizing Automated Browsing Activity}{IEEE S\&P 2021}{Online}{}{} \cventry{2020}{Less is More: Introducing an Automated Debloating Pipeline based on Dynamic Web Application Usage}{TPCP Software Security Summer School (SSSS '20)}{Online}{}{} \cventry{2020}{Web Runner 2049: Evaluating Third-Party Anti-bot Services}{DIMVA 2020 (Best Video Presentation Award) }{Online}{}{} \cventry{2020}{Taming The Shape Shifter: Detecting Anti-fingerprinting Browsers}{DIMVA 2020}{Online}{}{} \cventry{2020}{Less is More: Web Application Attack Surface Reduction Through Software Debloating}{Georgia Tech Cybersecurity Lecture Series}{Online}{}{} \cventry{2019}{Gas What? I can see you GasPots. Studying the fingerprintability of ICS honeypots in the wild }{ACSAC 2019}{PR}{}{} \cventry{2019}{Less is More: Quantifying the Security Benefits of Debloating Web Applications}{OWASP Global AppSec 2019}{DC}{}{} \cventry{2019}{Less is More: Quantifying the Security Benefits of Debloating Web Applications}{USENIX Security 2019}{CA}{}{} \cventry{2018}{Fingerprinting users on the web. 
The good, the bad and the ugly.}{POSCON 2018}{Online}{}{}
\cventry{2016}{Penetration Testing Methods for Android Applications }{1st Offseconf Conference}{Tehran}{}{}
\cventry{2016}{Ransomware Threats and Mitigation Techniques }{5th Annual Conference on E-Banking and Payment Systems}{Tehran}{}{}

\section{Service}
\cventry{2020}{OWASP GLOBAL APPSEC SAN FRANCISCO}{Review Committee}{}{}{}
\cventry{2020}{Usenix Security}{Artifact Evaluation Committee}{}{}{}
\cventry{2020}{IEEE S\&P}{Poster Review Committee}{}{}{}
\cventry{2020}{IEEE S\&P}{Shadow PC}{}{}{}
\cventry{2020}{MADWEB}{External Reviewer}{}{}{}
\cventry{2020}{NECCDC}{Proctor}{}{}{}
\cventry{2019}{DIMVA}{External Reviewer}{}{}{}
\cventry{2019}{IMC}{Shadow PC}{}{}{}

\end{document}
{ "alphanum_fraction": 0.7247641509, "avg_line_length": 48.7356321839, "ext": "tex", "hexsha": "c499e2e1ce554e6625205f841adef9d2ad6d616a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "81102b58ce8db4878f0dcc3c4e7d60f312a5b363", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "silverfoxy/moderncv", "max_forks_repo_path": "babak_cv.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "81102b58ce8db4878f0dcc3c4e7d60f312a5b363", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "silverfoxy/moderncv", "max_issues_repo_path": "babak_cv.tex", "max_line_length": 211, "max_stars_count": null, "max_stars_repo_head_hexsha": "81102b58ce8db4878f0dcc3c4e7d60f312a5b363", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "silverfoxy/moderncv", "max_stars_repo_path": "babak_cv.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1241, "size": 4240 }
\documentclass[12pt,oneside,letterpaper]{memoir} \usepackage[american]{babel} \usepackage{natbib} \usepackage{appendix} \DisemulatePackage{setspace} \usepackage[onehalfspacing]{setspace} \usepackage{geometry} \usepackage[loadonly]{enumitem}[] \newlist{hanglist}{enumerate}{1} \setlist[hanglist]{label=\arabic*.} \setlist[hanglist,1]{leftmargin=0pt} \usepackage{amssymb,amsmath,amsthm} % \usepackage{theorem} \usepackage{mathpartir} \usepackage{listings} \usepackage{microtype} \usepackage{hyperref} \hypersetup{pdfborder = 0 0 0} % Surround parts of graphics with box \usepackage{boxedminipage} \usepackage{longtable} \usepackage{booktabs} \usepackage{ifxetex,ifluatex,ifpdf} %% Avoid ugly "Type 3" fonts \usepackage{lmodern} \usepackage[LY1]{fontenc} %% Substitute your favorite serif and sans fonts here.... \usepackage{tgpagella} \usepackage[LY1]{eulervm} \ifpdf \usepackage[pdftex]{graphicx} \else \usepackage{graphicx} \fi %% This is my infobox class \usepackage{xcolor} \usepackage{mdframed} \definecolor{infoblue}{RGB}{238, 247, 250} % Helpful demo of using mdframded % https://latex.org/forum/viewtopic.php?p=80700&sid=e0de1d6e3f5658d0778166a187b2a7ec#p80700 \newenvironment{infobox} { \begin{singlespace}\begin{small} % \raggedright % \textbf{Example: }% }{ \end{small}\end{singlespace} } \surroundwithmdframed[ hidealllines=true, backgroundcolor=infoblue ]{infobox} % Trick to have markdown inside of LaTeX environments get rendered % # https://github.com/jgm/pandoc/issues/3145#issuecomment-302787889 \let\Begin\begin \let\End\end % % \usepackage{url} % % % \urlstyle{same} % % Add - to url's default breaks % \def\UrlBreaks{\do\.\do\@\do\\\do\/\do\!\do\_\do\|\do\;\do\>\do\]% % \do\)\do\,\do\?\do\&\do\'\do+\do\=\do\#\do-} % % \DeclareUrlCommand\doi{% % \urlstyle{rm}\renewcommand\UrlLeft{doi:\ }% % \renewcommand\UrlRight{}} % % UrlRight{\href{http://dx.doi.org/##1}{\uri@doipre##1\uri@doipost}}}% % % % % via: https://tex.stackexchange.com/a/68532/55985 % % \DeclareUrlCommand\doi{\def\UrlLeft##1\UrlRight{\href{http://dx.doi.org/##1}{doi:##1}}} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` %% Make R markdown code chunks work \usepackage{array} \ifxetex \usepackage{fontspec,xltxtra,xunicode} \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \newfontfamily\unicodefont{Junicode} % \usepackage{polyglossia} % \setdefaultlanguage[variant=american]{english} \else \ifluatex \usepackage{fontspec} \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \else \usepackage[utf8]{inputenc} \fi \fi \usepackage{color} \usepackage{fancyvrb} \DefineShortVerb[commandchars=\\\{\}]{\|} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line % \newenvironment{Shaded} % { % \begin{spacing}{0.9} % % \raggedright % % \textbf{Example: }% % }{ % \end{spacing} % } % https://github.com/nightsense/snow/blob/master/colors/snow.vim \definecolor{gry0}{HTML}{f2f3f5} \definecolor{gry1}{HTML}{dfe3ea} \definecolor{gry2}{HTML}{768294} \definecolor{gry3}{HTML}{525d6e} \definecolor{gryc}{HTML}{303842} \definecolor{srch}{HTML}{e8bc4a} \definecolor{gryp}{HTML}{a0adc0} \definecolor{sprd}{HTML}{ff0000} \definecolor{spbl}{HTML}{0087ff} \definecolor{spcy}{HTML}{008787} \definecolor{spmg}{HTML}{d700d7} \definecolor{red_}{HTML}{c34853} \definecolor{gold}{HTML}{8e6e08} \definecolor{gren}{HTML}{2a843c} \definecolor{cyan}{HTML}{008895} \definecolor{blue}{HTML}{0779c5} \definecolor{mgnt}{HTML}{a158a8} \usepackage{framed} 
\definecolor{shadecolor}{HTML}{f2f3f5} \newenvironment{Shaded}{\begin{snugshade}\begin{singlespace}\begin{small}}{\end{small}\end{singlespace}\end{snugshade}} \newcommand{\KeywordTok}[1]{\textcolor{cyan}{#1}} \newcommand{\DataTypeTok}[1]{\textcolor{mgnt}{#1}} \newcommand{\CharTok}[1]{\textcolor{blue}{#1}} \newcommand{\StringTok}[1]{\textcolor{blue}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor{blue}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor{blue}{#1}} \newcommand{\OtherTok}[1]{\textcolor{blue}{#1}} \newcommand{\DecValTok}[1]{\textcolor{blue}{#1}} \newcommand{\BaseNTok}[1]{\textcolor{blue}{#1}} \newcommand{\FloatTok}[1]{\textcolor{blue}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor{gry2}{\textbf{\textit{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor{gry2}{\textbf{\textit{#1}}}} \newcommand{\CommentVarTok}[1]{\textcolor{gry2}{\textbf{\textit{#1}}}} \newcommand{\ControlFlowTok}[1]{\textcolor{gren}{\textbf{#1}}} \newcommand{\OperatorTok}[1]{\textcolor{gren}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\CommentTok}[1]{\textcolor{gry3}{\textit{#1}}} \newcommand{\PreprocessorTok}[1]{\textcolor{gry3}{\textit{#1}}} \newcommand{\InformationTok}[1]{\textcolor{gry3}{\textbf{\textit{#1}}}} \newcommand{\WarningTok}[1]{\textcolor{gry3}{\textbf{\textit{#1}}}} \newcommand{\NormalTok}[1]{\textcolor{gryc}{{#1}}} \newcommand{\FunctionTok}[1]{\textcolor{gryc}{#1}} \newcommand{\VariableTok}[1]{\textcolor{gryc}{#1}} \newcommand{\ConstantTok}[1]{\textcolor{gryc}{#1}} \newcommand{\SpecialCharTok}[1]{\textcolor{gryc}{#1}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} %%% NB: the ``deposit'' chapter- and page- styles should conform to UW %%% requirements. If you are producing a pretty version of your %%% dissertation for web use later, you will certainly want to make %%% your own chapter and page styles. 
%% Thomas Dye’s southall chapter style \newlength{\headindent} \newlength{\rightblock} \makechapterstyle{deposit}{% \setlength{\headindent}{36pt} \setlength{\rightblock}{\textwidth} \addtolength{\rightblock}{-\headindent} \setlength{\beforechapskip}{2\baselineskip} \setlength{\afterchapskip}{2\baselineskip} \setlength{\midchapskip}{0pt} \renewcommand{\chaptitlefont}{\huge\rmfamily\raggedright} \renewcommand{\chapnumfont}{\chaptitlefont} \renewcommand{\printchaptername}{} \renewcommand{\chapternamenum}{} \renewcommand{\afterchapternum}{} \renewcommand{\printchapternum}{% \begin{minipage}[t][\baselineskip][b]{\headindent} {\vspace{0pt}\chapnumfont%%%\figureversion{lining} \thechapter} \end{minipage}} \renewcommand{\printchaptertitle}[1]{% \hfill\begin{minipage}[t]{\rightblock} {\vspace{0pt}\chaptitlefont ##1\par}\end{minipage}} \renewcommand{\afterchaptertitle}{% \par\vspace{\baselineskip}% \hrulefill \par\nobreak\noindent \vskip\afterchapskip} } \makepagestyle{deposit} \makeatletter \renewcommand{\chaptermark}[1]{\markboth{#1}{}} \renewcommand{\sectionmark}[1]{\markboth{#1}{}} \makeevenfoot{deposit}{}{}{} \makeoddfoot{deposit}{}{}{} \makeevenhead{deposit}{\thepage}{}{} \makeoddhead{deposit}{}{}{\thepage} \makeatother %%% set up page numbering for chapter pages to satisfy UW requirements %%% NB: You will want to delete until the ``SNIP'' mark if you are %%% making a ``nice'' copy \copypagestyle{chapter}{plain} \makeoddfoot{chapter}{}{}{} \makeevenhead{chapter}{\thepage}{}{} \makeoddhead{chapter}{}{}{\thepage} \copypagestyle{part}{plain} \makeoddfoot{part}{}{}{} \makeevenhead{part}{\thepage}{}{} \makeoddhead{part}{}{}{\thepage} %%% SNIP % \input{includes/defs} %% put lstnewenvironment declarations here, if you're using listings %% end lstnewenvironment declarations %% I put convenience definitions that will go in several chapters here %%%%% begin convenience definitions \makeatletter \newcommand{\wb@episource}{} \newenvironment{wbepi}[1]{\begin{quote}\renewcommand{\wb@episource}{#1}\itshape}{\par\upshape \raggedleft --- \textsc{\wb@episource}\\ \end{quote}} \makeatother %%%%% end convenience definitions % \input{includes/thesisdefs} % thesisdefs.tex % This is mostly adapted from withesis.cls. The original copyright % notice for withesis.cls follows, preceded by two percent signs (%%): %% withesis.cls %% LaTeX Style file for the University of Wisconsin-Madison Thesis Format %% Adapted from the Purdue University Thesis Format %% Originally by Dave Kraynie %% Edits by Darrell McCauley %% Adapted to UW-Madison format by Eric Benedict (Noted with <EB>) %% Updated to LaTeX2e by Eric Benedict 24 July 00 %% %%============================================================================= %% Licensed under the Perl Artistic License. %% see: http://www.ctan.org/tex-archive/help/Catalogue/licenses.artistic.html %% for more info... %%============================================================================= % withesis.cls is available from CTAN. The modifications to this file % are also licensed under the Perl Artistic License. 
% --wb, 2008 \makeatletter \newcounter{tocpage} \newcounter{lofpage} \newcounter{lotpage} \newcounter{listofheading} \newcommand\@thesistitlemedskip{0.25in} \newcommand\@thesistitlebigskip{0.55in} \newcommand{\degree}[1]{\gdef\@degree{#1}} \newcommand{\project}{\gdef\@doctype{A masters project report}} \newcommand{\prelim}{\gdef\@doctype{A preliminary report}} \newcommand{\thesis}{\gdef\@doctype{A thesis}} \newcommand{\dissertation}{\gdef\@doctype{A dissertation}} \newcommand{\department}[1]{\gdef\@department{(#1)}} \newcommand{\oralexamdate}[1]{\gdef\@oralexamdate{#1}} \newcommand{\committeeone}[1]{\gdef\@committeeone{#1}} \newcommand{\committeetwo}[1]{\gdef\@committeetwo{#1}} \newcommand{\committeethree}[1]{\gdef\@committeethree{#1}} \newcommand{\committeefour}[1]{\gdef\@committeefour{#1}} \newcommand{\committeefive}[1]{\gdef\@committeefive{#1}} \newcommand{\committeesix}[1]{\gdef\@committeesix{#1}} \newcommand{\committeeseven}[1]{\gdef\@committeeseven{#1}} \newcommand{\school}[1]{\gdef\@school{#1}} \newenvironment{titlepage} {\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty} % \c@page\z@ -- deleted: count title page in thesis }{\if@restonecol\twocolumn \else \newpage \fi} \gdef\@degree{Doctor of Philosophy} \gdef\@doctype{A dissertation} \gdef\@department{(Electrical Engineering)} \gdef\@oralexamdate{} \gdef\@committeeone{} \gdef\@committeetwo{} \gdef\@committeethree{} \gdef\@committeefour{} \gdef\@committeefive{} \gdef\@committeesix{} \gdef\@committeeseven{} \gdef\@school{} \renewcommand{\maketitle}{% \begin{titlepage} %----------------------------------------------------------------------------- % -- The thesis office doesn't like thanks on title page. Put it in % -- the acknowledgments. This is here so you don't have to change % -- your titlepage when converting from report style. -> from Purdue, but I % left it here since it seems compatible with UW-Madison, Eric %----------------------------------------------------------------------------- \def\thanks##1{\typeout{Warning: `thanks' deleted from thesis titlepage.}} \let\footnotesize\small \let\footnoterule\relax \setcounter{page}{1} %sets new margins for title page so that committee members can be placed there % \vspace*{0.1in} \begin{center} {\textbf{\expandafter\uppercase\expandafter{\@title}}} \\[\@thesistitlebigskip] by \\[\@thesistitlemedskip] \@author \\[\@thesistitlebigskip] \@doctype\ submitted in partial fulfillment of \\ the requirements for the degree of\\[\@thesistitlebigskip] \@degree \\[\@thesistitlemedskip] \@department \\[\@thesistitlebigskip] at the \\[\@thesistitlebigskip] \MakeUppercase{\@school}\\[\@thesistitlebigskip] \@date \\[\@thesistitlebigskip] \end{center} % section added by Steven Baumgart on 3/2012 % adds committee list to the title page % add or delete committee members as you need to, these are defined in the dissertation.tex document % comment out other things as you need to as well. 
- SB \noindent Date of final oral examination: \@oralexamdate \hspace*{\fill} \\[\@thesistitlemedskip] \noindent The dissertation is approved by the following members of the Final Oral Committee:\\* $for(signature)$\indent $signature$$sep$\\*$endfor$ \end{titlepage} \setcounter{footnote}{0} \setcounter{page}{1} %title page is NOT counted \let\thanks\relax \let\maketitle\relax \let\degree\relax \let\project\relax \let\prelim\relax \let\department\relax \gdef\@thanks{}\gdef\@degree{}\gdef\@doctype{} \gdef\@department{} %\gdef\@author{}\gdef\@title{} } %============================================================================= % ABSTRACT %============================================================================= % The abstract should begin with two single-spaced lines describing % the author and title in a standard format. After these lines comes % the standard abstract. %============================================================================= \def\abstract{ \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} \relax\markboth{Abstract}{Abstract}} \def\endabstract{\par\newpage} %============================================================================= % UMI ABSTRACT %============================================================================= % The UMI abstract should begin with the author and title in a standard format. % After the author comes the advisor and university. After these lines comes % a bunch of double spaced text to make up the standard abstract. % After the abstract, the advisor's approval signature follows. % This page is not numbered and is delivered seperately to the thesis office. %============================================================================= % \def\advisortitle#1{\gdef\@advisortitle{#1}} \def\abstractadvisor#1{\gdef\@abstractadvisor{#1}} \def\abstractschool#1{\gdef\@abstractschool{#1}} \def\abstractyear#1{\gdef\@abstractyear{#1}} \gdef\@abstractadvisor{Professor Advisor} \gdef\@abstractschool{The University} \gdef\@abstractyear{2000} \def\umiabstract{ % \thispagestyle{empty} % \addtocounter{page}{-1} \addcontentsline{toc}{chapter}{Abstract} % \relax\markboth{Abstract}{Abstract}} \begin{center} {\textbf{\expandafter\uppercase\expandafter{Abstract}}}\\ \vspace{6pt} {\@title}\\ \vspace{6pt} by \\ \vspace{6pt} \@author \\ \vspace{12pt} \@abstractschool, \@abstractyear\\ Under the supervision of \@abstractadvisor \end{center} } %============================================================================ % VERBATIMFILE %============================================================================ % \verbatimfile{<filename>} for verbatim inclusion of a file % - Note that the precise layout of line breaks in this file is important! % - added the \singlespace - EB %============================================================================ \def\verbatimfile#1{\begingroup \singlespace \@verbatim \frenchspacing \@vobeyspaces \input#1 \endgroup } %============================================================================= % SEPARATOR Pages % Creates a blank page with a text centered horizontally and vertically. % The page is neither counted nor numbered. % These pages are required in the thesis format before sections such % as appendices, vita, bibliography, etc. 
%============================================================================= \def\separatorpage#1{ \newpage \thispagestyle{empty} \addtocounter{page}{-1} \null \vfil\vfil \begin{center} {\textbf{#1}} \end{center} \vfil\vfil \newpage} % COPYRIGHTPAGE % The copyright must do the following: % - start a new page with no number % - place the copyright text centered at the bottom. \def\copyrightpage{ \newpage \thispagestyle{empty} % No page number \addtocounter{page}{-1} \chapter*{} % Required for \vfill to work \begin{center} \vfill \copyright\ Copyright by \@author\ \@date\\ All Rights Reserved \end{center} \par\newpage} % GLOSSARY % The glossary environment must do the following: % - produce the table of contents entry for the glossary % - start a new page with GLOSSARY centered two inches from the top \def\glossary{ \chapter*{GLOSSARY} \addcontentsline{toc}{chapter}{Glossary}} \def\endglossary{\par\newpage} % NOMENCLATURE % The nomenclature environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with NOMENCLATURE centered two inches from the top \def\nomenclature{\separatorpage{DISCARD THIS PAGE} \chapter*{Nomenclature} \addcontentsline{toc}{chapter}{NOMENCLATURE}} \def\endnomenclature{\par\newpage} % CONVENTIONS % The conventions environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with CONVENTIONS centered two inches from the top \def\conventions{\separatorpage{DISCARD THIS PAGE} \chapter*{Conventions} \addcontentsline{toc}{chapter}{CONVENTIONS}} \def\endconventions{\par\newpage} % COLOPHON % The colophon environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with COLOPHON centered two inches from the top \def\colophon{ \chapter*{Colophon} \addcontentsline{toc}{chapter}{Colophon}} \def\endcolophon{\par\newpage} % LIST OF SYMBOLS % The list of symbols environment must do the following: % - produce the table of contents entry for the list of symbols section % - start a new page with LIST OF SYMBOLS centered two inches from the top \def\listofsymbols{ \eject \chapter*{LIST OF SYMBOLS} \addcontentsline{toc}{chapter}{LIST OF SYMBOLS}} \def\endlistofsymbols{\par\newpage} % VITA % The vita environment must do the following: % - produce a separator page with the word vita centered % - produce the table of contents entry for the vita % - start a new page with VITA centered two inches from the top \def\vita{ \chapter*{VITA} \addcontentsline{toc}{chapter}{VITA}} \def\endvita{\par\newpage} % ACKNOWLEDGMENTS \def\acks{ \chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{Acknowledgments}} \def\endacks{\par\newpage} % DEDICATION % The dedication environment must do the following: % - start a new page % - center the text vertically % - include the text in a center environment \def\dedication{ \newpage \null\vfil \begin{center}\Large} \def\enddedication{\end{center}\par\vfil\newpage} %============================================================================= % DATE %============================================================================= %\def\today{\ifcase\month\or %January\or February\or March\or April\or May\or June\or %July\or August\or September\or October\or November\or December\fi %\space\number\day, \number\year} \newcount\@testday \def\today{\@testday=\day \ifnum\@testday>30 \advance\@testday by -30 \else\ifnum\@testday>20 \advance\@testday by -20 \fi\fi \number\day\ \ 
\ifcase\month\or January \or February \or March \or April \or May \or June \or July \or August \or September \or October \or November \or December \fi\ \number\year }

% Single counter for theorems and theorem-like environments:
\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{assertion}[theorem]{Assertion}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{figger}[theorem]{Figure}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}

%=============================================================================
% BIBLIOGRAPHY
%=============================================================================
% The thebibliography environment executes the following commands:
%
%   o start a new 'chapter' with BIBLIOGRAPHY as the heading
%   o produce a separator page for the bibliography
%
% \def\newblock{\hskip .11em plus .33em minus -.07em} --
%   Defines the `closed' format, where the blocks (major units of
%   information) of an entry run together.
%
% \sloppy -- Used because it's rather hard to do line breaks in
%   bibliographies.
%
% \sfcode`\.=1000\relax --
%   Causes a `.' (period) not to produce an end-of-sentence space.
%=============================================================================
% \altbibtitle
%   The default title for the References chapter is ``LIST OF REFERENCES''.
%   Since some people prefer ``BIBLIOGRAPHY'', the command
%   \altbibtitle has been added to change the chapter title.
%   This command does nothing more than change REFERENCES to BIBLIOGRAPHY.
%============================================================================
\def\@bibchaptitle{Bibliography}
\def\altbibtitle{\def\@bibchaptitle{Bibliography}}
\def\thebibliography#1{
  %\separatorpage{\@bibchaptitle}
  \global\@bibpresenttrue
  \chapter*{\@bibchaptitle\markboth{\@bibchaptitle}{\@bibchaptitle}}
  \addcontentsline{toc}{chapter}{\@bibchaptitle}
  \vspace{0.375in} % added to match 4 line requirement
  \interlinepenalty=10000 % added to prevent breaking of bib entries
  \singlespace\list
  {[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth
  \advance\leftmargin\labelsep
  \usecounter{enumi}}
  \def\newblock{\hskip .11em plus .33em minus -.07em}
  \sloppy
  \sfcode`\.=1000\relax}
\let\endthebibliography=\endlist

\makeatother

\title{$title$}
\author{$author$}
\department{$program$}
\school{$school$}
\oralexamdate{$oral-date$}
\date{$year$}
\abstractadvisor{$abstract-advisor$}
\abstractschool{$abstract-school$}
\abstractyear{$abstract-year$}

% \committeeone{Iam A. Professor, Professor, Geography}
% \committeetwo{Iam A. Professor, Professor, Geography}
% \committeethree{Iam A. Professor, Assistant Professor, Geography}
% \committeefour{Iam A. Professor, Associate Professor, Geography}
% \committeefive{Iam A. Professor, Associate Professor, History}
% \committeesix{Iam A. Professor, Associate Professor, History}

% If you use any additional committee members you will need to uncomment
% the corresponding lines 107 and/or 108 in thesisdef.tex. You may also need
% to adjust the value for \newcommand\@thesistitlemedskip{0.25in} and
% \newcommand\@thesistitlebigskip{0.55in} in line 32 to get it all to fit.
%\committeesix{Iam A. Professor, Associate Professor, Geography}
%\committeeseven{Iam A. Professor, Professor, Computer Sciences}

% This makes the page numbers Roman (i, ii, etc)
\clearpage\pagenumbering{roman}

\begin{document}

\newgeometry{left=1in,right=1in,bottom=1in,top=1in}
\maketitle

% set margins for rest of book
$if(geometry)$
\newgeometry{$geometry$}
$else$
\restoregeometry
$endif$

\copyrightpage

\begin{umiabstract}
$abstract$
\end{umiabstract}

\begin{dedication}
\emph{$dedication$}
\end{dedication}

%% BEGIN PAGESTYLE
%%% You can pick a pagestyle if you want; see the memoir class
%%% documentation for more info. The default ``deposit'' option meets
%%% the UW thesis typesetting requirements but is probably
%%% unsatisfactory for making a version of your dissertation that
%%% won't be deposited to the graduate school (e.g. for web or a nice
%%% printed copy)
\chapterstyle{deposit}
\pagestyle{deposit}

%% CONTENTS, TABLES, FIGURES
\renewcommand{\printtoctitle}[1]{\chapter*{#1}}
\renewcommand{\printloftitle}[1]{\chapter*{#1}}
\renewcommand{\printlottitle}[1]{\chapter*{#1}}
\renewcommand{\tocmark}{}
\renewcommand{\lofmark}{}
\renewcommand{\lotmark}{}
\renewcommand{\tocheadstart}{}
\renewcommand{\lofheadstart}{}
\renewcommand{\lotheadstart}{}
\renewcommand{\aftertoctitle}{}
\renewcommand{\afterloftitle}{}
\renewcommand{\afterlottitle}{}
\renewcommand{\cftchapterfont}{\normalfont}
\renewcommand{\cftsectionfont}{\normalfont}
\renewcommand{\cftchapterpagefont}{\normalfont}
\renewcommand{\cftchapterpresnum}{\bfseries}
\renewcommand{\cftchapterleader}{}
\renewcommand{\cftsectionleader}{}
\renewcommand{\cftchapterafterpnum}{\cftparfillskip}
\renewcommand{\cftsectionafterpnum}{\cftparfillskip}

% Sets width of page number in table of contents so that
% 3-digit page numbers don't touch the section title
\setpnumwidth{2em}

% Sets spacing for list of figures/tables
% so that e.g. "12.10" doesn't touch the caption.
%
% \cftsetindents{<kind>}{indent}{numwidth}.
\cftsetindents{figure}{0em}{2.5em}
\cftsetindents{table}{0em}{2.5em}

$if(contents-title)$
\renewcommand{\contentsname}{$contents-title$}
$else$
\renewcommand{\contentsname}{Contents}
$endif$
$if(lof-title)$
\renewcommand{\listfigurename}{$lof-title$}
$else$
\renewcommand{\listfigurename}{List of Figures}
$endif$
$if(lot-title)$
\renewcommand{\listtablename}{$lot-title$}
$else$
\renewcommand{\listtablename}{List of Tables}
$endif$

\tableofcontents
\clearpage
\listoftables
\clearpage
\listoffigures
\clearpage

%% ACKNOWLEDGMENTS are part of front matter
\begin{acks}
%% This template had support for a fancy epigraph at the start of the
%% acknowledgements.
%
% \begin{wbepi}{David C.~Makinson (1965)}
% It is customary for authors of academic books to include in their prefaces
% statements such as this: ``I am indebted to ... for their invaluable help;
% however, any errors which remain are my sole responsibility.'' Occasionally an
% author will go further. Rather than say that if there are any mistakes then he
% is responsible for them, he will say that there will inevitably be some mistakes
% and he is responsible for them....
%
% Although the shouldering of all responsibility is usually a social ritual, the
% admission that errors exist is not --- it is often a sincere avowal of belief.
% But this appears to present a living and everyday example of a situation which
% philosophers have commonly dismissed as absurd; that it is sometimes rational to
% hold logically incompatible beliefs.
% \end{wbepi}
% I sometimes am inconsistent

$if(acknowledgments)$
$acknowledgments$
$else$
$acknowledgements$
$endif$
\end{acks}

% switch to arabic page numbers
\clearpage\pagenumbering{arabic}

$body$

% I removed any bibliography and appendix stuff because Pandoc will do it.

\end{document}
{ "alphanum_fraction": 0.6854536508, "avg_line_length": 31.4137931034, "ext": "tex", "hexsha": "c46a9a767a53be144eb1615e8b28953e9ca95aa5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-09-28T16:47:15.000Z", "max_forks_repo_forks_event_min_datetime": "2018-09-28T16:47:15.000Z", "max_forks_repo_head_hexsha": "c00bb3e4d61e6fb3a9a03b2a4cf3b0579459875b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tjmahr/buckydown", "max_forks_repo_path": "inst/rmarkdown/templates/thesis/skeleton/template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c00bb3e4d61e6fb3a9a03b2a4cf3b0579459875b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tjmahr/buckydown", "max_issues_repo_path": "inst/rmarkdown/templates/thesis/skeleton/template.tex", "max_line_length": 147, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c00bb3e4d61e6fb3a9a03b2a4cf3b0579459875b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tjmahr/buckydown", "max_stars_repo_path": "inst/rmarkdown/templates/thesis/skeleton/template.tex", "max_stars_repo_stars_event_max_datetime": "2018-09-26T21:15:25.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-26T21:15:25.000Z", "num_tokens": 7827, "size": 26419 }
% $Id$
%
\section{\label{ref:FMradio}FM Radio}
\opt{RECORDER_PAD}{
  \note{The early V2 models were in fact FM Recorders in disguise, so they
    had the FM radio still mounted. Rockbox enables it if present - in case
    this menu does not show on your unit you can skip this chapter.\\}
}
\opt{sansa}{
  \note{Not all Sansas have a radio receiver. Generally all American models
    do, but European models might not. Rockbox will display the radio menu
    only if it can find a radio receiver in your Sansa.}
}

\screenshot{main_menu/images/ss-fm-radio-screen}{The FM radio screen}{}

This menu option switches to the radio screen. The FM radio has the ability
to remember station frequency settings (presets). Since stations and their
frequencies vary depending on location, it is possible to load these settings
from a file. Such files should have the filename extension \fname{.fmr} and
reside in the directory \fname{/.rockbox/fmpresets} (note that this directory
does not exist after the initial Rockbox installation; you should create it
manually). To load the settings, i.e. a set of FM stations, from a preset
file, just ``play'' it from the file browser. Rockbox will ``remember'' and
use it in \setting{PRESET} mode until another file has been selected. Some
preset files are available here: \wikilink{FmPresets}.

\opt{recording}{
  \opt{swcodec}{
    It is also possible to record the FM radio while listening. To start
    recording, enter the FM radio settings menu with \ActionFMMenu{} and then
    select \setting{Recording}. At this point, you will be switched to the
    \setting{Recording Screen}. Further information on \setting{Recording}
    can be found in \reference{ref:Recording}.
  }
}
\opt{masf}{\note{The radio will shorten battery life, because the MAS-chip
    is set to record mode for instant recordings.}
}

\begin{btnmap}
    \ActionFMPrev, \ActionFMNext
    \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMPrev, \ActionRCFMNext}
    & Change frequency in \setting{SCAN} mode or jump to next/previous
      station in \setting{PRESET} mode.\\
    %
    Long \ActionFMPrev, Long \ActionFMNext
    \opt{HAVEREMOTEKEYMAP}{& Long \ActionRCFMPrev, Long \ActionRCFMNext}
    & Seek to next station in \setting{SCAN} mode.\\
    %
    \opt{SANSA_FUZEPLUS_PAD}{
      \ActionFMPrevPreset, \ActionFMNextPreset
      & Jump to next/previous preset station.\\
    }
    %
    \ActionFMSettingsInc, \ActionFMSettingsDec
    \opt{HAVEREMOTEKEYMAP}{
      & \opt{IRIVER_RC_H100_PAD}{\ActionRCFMVolUp, \ActionRCFMVolDown}%
        \nopt{IRIVER_RC_H100_PAD}{\ActionRCFMSettingsInc, \ActionRCFMSettingsDec}%
    }
    & Change volume.\\
    \opt{RECORDER_PAD}{
      \ButtonPlay
      & Freeze all screen updates. May enhance radio reception in some cases.\\
    }
    %
    \ActionFMExit
    \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMExit}
    & Leave the radio screen with the radio playing.\\
    %
    \ActionFMStop
    \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMStop}
    & Stop the radio and return to \setting{Main Menu}.\\%
    %
    \nopt{ONDIO_PAD}{%
      \nopt{RECORDER_PAD}{\ActionFMPlay
        \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMPlay}
        & Mute radio playback.\\}%
      %
      \ActionFMMode
      \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMMode}
      & Switch between \setting{SCAN} and \setting{PRESET} mode.\\
      %
      \ActionFMPreset
      \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMPreset}
      & Open a list of radio presets. You can view all the presets that you
        have, and switch to the station.\\
    }%
    %
    \ActionFMMenu
    \opt{HAVEREMOTEKEYMAP}{& \ActionRCFMMenu}
    & Display the FM radio settings menu.\\
    %
    % software hold targets
    \nopt{hold_button}{%
      \opt{SANSA_CLIP_PAD}{\ButtonHome+\ButtonSelect}
      \opt{SANSA_FUZEPLUS_PAD}{\ButtonPower}
      \opt{ONDIO_PAD}{\ButtonMenu+\ButtonDown}
      & Key lock (software hold switch) on/off.\\
    }%
\end{btnmap}

\begin{description}
\item[Saving a preset:]
  Up to 64 of your favourite stations can be saved as presets.
  \opt{RECORDER_PAD}{Press \ButtonFTwo{} to go to the presets list,
    press \ButtonFOne{} to add a preset.}%
  \nopt{RECORDER_PAD}{%
    \ActionFMMenu{} to go to the menu, then select \setting{Add preset}.%
  }
  Enter the name (maximum number of characters is 32).
  Press \ActionKbdDone{} to save.

\item[Selecting a preset:]
  \opt{ONDIO_PAD}{\ActionFMMenu{} to open the menu, select \setting{Preset}}%
  \nopt{ONDIO_PAD}{\ActionFMPreset}
  to go to the presets list. Use \ActionFMSettingsInc{} and
  \ActionFMSettingsDec{} to move the cursor and then press \ActionStdOk{}
  to select. Use \ActionStdCancel{} to leave the preset list without
  selecting anything.

\item[Removing a preset:]
  \opt{ONDIO_PAD}{\ActionFMMenu{} to open the menu, select \setting{Preset}}%
  \nopt{ONDIO_PAD}{\ActionFMPreset}
  to go to the presets list. Use \ActionFMSettingsInc{} and
  \ActionFMSettingsDec{} to move the cursor and then press
  \ActionStdContext{} on the preset that you wish to remove, then select
  \setting{Remove Preset}.

\opt{RECORDER_PAD,ONDIO_PAD}{
\item[Recording:]
  \opt{RECORDER_PAD}{Press \ButtonFThree}%
  \opt{ONDIO_PAD}{Double press \ButtonMenu}
  to start recording the currently playing station.
  Press \ButtonOff{} to stop recording.%
  \opt{RECORDER_PAD}{ Press \ButtonPlay{} again to seamlessly start
    recording to a new file.}
  The settings for the recording can be changed in the
  \opt{RECORDER_PAD}{\ButtonFOne{} menu}%
  \opt{ONDIO_PAD}{respective menu reached through the FM radio settings
    menu (Long \ButtonMenu)}
  before starting the recording.
  See \reference{ref:Recordingsettings} for details of recording settings.
}
\end{description}

\note{The radio will turn off when starting playback of an audio file.}
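The manual does not document the internal layout of \fname{.fmr} files here; ready-made files can simply be downloaded from \wikilink{FmPresets}. Purely as an illustration, and assuming the plain-text layout used by the files on that wiki page (one station per line, the frequency in Hz followed by a colon and the station name), a hand-made preset file might look like this (the station names and frequencies below are made up):

\begin{verbatim}
87600000:Example FM 87.6
94200000:Example Radio 94.2
101100000:Example Rock 101.1
\end{verbatim}

Save such a file with the \fname{.fmr} extension into \fname{/.rockbox/fmpresets} and ``play'' it from the file browser as described above.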
{ "alphanum_fraction": 0.6345208845, "avg_line_length": 42.5620915033, "ext": "tex", "hexsha": "b8c76291dbd348f5a07979b0c16714a1cd6cbc5a", "lang": "TeX", "max_forks_count": 15, "max_forks_repo_forks_event_max_datetime": "2020-11-04T04:30:22.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-21T13:58:13.000Z", "max_forks_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_forks_repo_path": "manual/main_menu/fmradio.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_issues_repo_issues_event_max_datetime": "2018-05-18T05:33:33.000Z", "max_issues_repo_issues_event_min_datetime": "2015-07-04T18:15:33.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_issues_repo_path": "manual/main_menu/fmradio.tex", "max_line_length": 88, "max_stars_count": 24, "max_stars_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC", "max_stars_repo_path": "manual/main_menu/fmradio.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-05T14:09:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-10T08:43:56.000Z", "num_tokens": 1667, "size": 6512 }
\documentclass{article}
\usepackage[margin=1in]{geometry}

\title{}
\author{Aakash Pahuja}
\date{} % No date on doc

\begin{document}
\maketitle % To include title on top of doc

\section{Introduction}
This article is about citations. I read a good book~\cite{Aakash2018} and I
want to cite that book here, and I also read an article~\cite{Pahuja2018}.
I like to use \LaTeX~\cite{latex}.

\section{References}
\bibliography{myFirstBib}
\bibliographystyle{plain}

\subsection{This is a subsection}

\section{Conclusion}

\end{document}
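The example above cites three keys, but the companion \texttt{myFirstBib.bib} database is not included here. A minimal sketch of what such a file could contain is shown below; the first two entries are placeholders invented purely for illustration (only the \texttt{latex} entry refers to a real book), so the titles and publishers should not be taken as factual.

\begin{verbatim}
% myFirstBib.bib -- hypothetical entries matching the cite keys used above
@book{Aakash2018,
  author    = {Pahuja, Aakash},
  title     = {A Good Book (placeholder title)},
  publisher = {Some Publisher},
  year      = {2018}
}

@article{Pahuja2018,
  author  = {Pahuja, Aakash},
  title   = {An Interesting Article (placeholder title)},
  journal = {Some Journal},
  year    = {2018}
}

@book{latex,
  author    = {Lamport, Leslie},
  title     = {{LaTeX}: A Document Preparation System},
  publisher = {Addison-Wesley},
  year      = {1994}
}
\end{verbatim}

Running \texttt{bibtex} on the document between \texttt{pdflatex} passes resolves the cite keys against this file.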
{ "alphanum_fraction": 0.7551789077, "avg_line_length": 18.3103448276, "ext": "tex", "hexsha": "c860047beb7b100b3b2931a69a93b1f3ed415b55", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "536ce3115ef1ac50150db57d5f47a3acc9cf6813", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "dgr8akki/Latex", "max_forks_repo_path": "Bibliography/bibliography.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "536ce3115ef1ac50150db57d5f47a3acc9cf6813", "max_issues_repo_issues_event_max_datetime": "2018-10-02T14:51:01.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-02T13:03:08.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "dgr8akki/Latex", "max_issues_repo_path": "Bibliography/bibliography.tex", "max_line_length": 177, "max_stars_count": 1, "max_stars_repo_head_hexsha": "536ce3115ef1ac50150db57d5f47a3acc9cf6813", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dgr8akki/Latex", "max_stars_repo_path": "Bibliography/bibliography.tex", "max_stars_repo_stars_event_max_datetime": "2018-10-03T18:07:45.000Z", "max_stars_repo_stars_event_min_datetime": "2018-10-03T18:07:45.000Z", "num_tokens": 158, "size": 531 }
\section{Some section}
Here are some examples of how to use the basics of this document template.

\begin{itemize}
  \item Citation~\cite{ref:proc:powersaving2}.
  \item Footnote text\footnote{Here is some description.}
  \item Reference to picture no.~\ref{fig:pic}, graph~\ref{grp:graph}, table~\ref{tbl:table}.
\end{itemize}

\insertpicture[fig:pic]{tux.png}{5cm}{We love linux.}

\insertgraph[grp:graph]{graph.png}{12cm}{We love \LaTeX.}

\begin{inserttable}[tbl:table]{|l|c|r|}{Some table example.}
  \hline
  Here & is & some \\
  \hline
  ultimately & amazing & example. \\
  \hline
\end{inserttable}
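The \texttt{\textbackslash insertpicture}, \texttt{\textbackslash insertgraph}, and \texttt{inserttable} helpers used above are defined elsewhere in this template and their real definitions are not shown here. A minimal sketch of how such wrappers could be written, assuming they are thin wrappers around the standard \texttt{figure}, \texttt{\textbackslash includegraphics}, and \texttt{table} constructs, is:

\begin{verbatim}
% Hypothetical helper definitions -- the template's real versions may differ.
% \insertpicture[label]{file}{width}{caption}
\newcommand{\insertpicture}[4][]{%
  \begin{figure}[ht]
    \centering
    \includegraphics[width=#3]{#2}
    \caption{#4}\label{#1}
  \end{figure}}

% \insertgraph[label]{file}{width}{caption} -- same shape as \insertpicture
\newcommand{\insertgraph}[4][]{%
  \begin{figure}[ht]
    \centering
    \includegraphics[width=#3]{#2}
    \caption{#4}\label{#1}
  \end{figure}}

% \begin{inserttable}[label]{column spec}{caption} ... \end{inserttable}
\newenvironment{inserttable}[3][]{%
  \begin{table}[ht]
    \centering
    \caption{#3}\label{#1}
    \begin{tabular}{#2}}{%
    \end{tabular}
  \end{table}}
\end{verbatim}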
{ "alphanum_fraction": 0.6911519199, "avg_line_length": 29.95, "ext": "tex", "hexsha": "25c3f49fff6229a70ed731af29a0df71233f789e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-02T14:50:23.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-02T14:50:23.000Z", "max_forks_repo_head_hexsha": "10a384aa75bb30c11816655e8b29d8a4c64c52ba", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Dominik-97/latex-template", "max_forks_repo_path": "content/01obsah.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "10a384aa75bb30c11816655e8b29d8a4c64c52ba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Dominik-97/latex-template", "max_issues_repo_path": "content/01obsah.tex", "max_line_length": 92, "max_stars_count": null, "max_stars_repo_head_hexsha": "10a384aa75bb30c11816655e8b29d8a4c64c52ba", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Dominik-97/latex-template", "max_stars_repo_path": "content/01obsah.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 196, "size": 599 }
\begin{publications}
\section*{Published Papers}
\begin{enumerate}
  \item \textbf{Xuda~Zhou}, Zidong~Du, Shijin~Zhang, Lei~Zhang, Huiying~Lan, Shaoli~Liu, Ling~Li, Qi~Guo, Tianshi~Chen, Yunji~Chen: Addressing Sparsity in Deep Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018.
  \item \textbf{Xuda~Zhou}, Zidong~Du, Qi Guo, Chengsi Liu, Chao Wang, Xuehai Zhou, Ling Li, Tianshi Chen, Yunji Chen: Cambricon-S: Addressing Irregularity in Sparse Neural Networks Through A Cooperative Software/Hardware Approach. MICRO 2018.
\end{enumerate}
\end{publications}
{ "alphanum_fraction": 0.7775919732, "avg_line_length": 46, "ext": "tex", "hexsha": "67f650a18cb36e036fa8ae0791b0b6f9f0d8bbaf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4c92658dfd4069b3697b1590a0b2b9b61ef35019", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "ustcanycall/graduate", "max_forks_repo_path": "chapters/publications.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4c92658dfd4069b3697b1590a0b2b9b61ef35019", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "ustcanycall/graduate", "max_issues_repo_path": "chapters/publications.tex", "max_line_length": 253, "max_stars_count": null, "max_stars_repo_head_hexsha": "4c92658dfd4069b3697b1590a0b2b9b61ef35019", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "ustcanycall/graduate", "max_stars_repo_path": "chapters/publications.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 201, "size": 598 }
% BASIC SETTINGS
\documentclass[a4paper,12pt]{article} % Set paper size and document type
\usepackage{lmodern}   % Use a slightly nicer looking font
\usepackage{url}       % Proper formatting for URLs
\usepackage{graphicx}  % Handle inclusion of non-PDF graphics
\usepackage{subfig}    % Allow sub-figures inside a figure
\usepackage{enumitem}  % Allow lists to pick up numbering where the last list left off

% Change margins - default margins are too broad
\usepackage[margin=20mm]{geometry}

% SOURCE CODE LISTING SETTINGS
% https://en.wikibooks.org/wiki/LaTeX/Source_Code_Listings
\usepackage{listings}
\usepackage{color}

% Color definitions for source code listings
\definecolor{mygreen}{rgb}{0,0.6,0}
\definecolor{mygray}{rgb}{0.5,0.5,0.5}
\definecolor{mymauve}{rgb}{0.58,0,0.82}

% Formatting (line breaks, spacing, etc...) for code
\lstset{
  backgroundcolor=\color{white},   % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}
  basicstyle=\footnotesize,        % the size of the fonts that are used for the code
  breakatwhitespace=false,         % sets if automatic breaks should only happen at whitespace
  breaklines=true,                 % sets automatic line breaking
  captionpos=b,                    % sets the caption-position to bottom
  commentstyle=\color{mygreen},    % comment style
  deletekeywords={...},            % if you want to delete keywords from the given language
  escapeinside={\%*}{*)},          % if you want to add LaTeX within your code
  extendedchars=true,              % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
  frame=single,                    % adds a frame around the code
  keepspaces=true,                 % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
  keywordstyle=\color{blue},       % keyword style
  otherkeywords={*,...},           % if you want to add more keywords to the set
  numbers=left,                    % where to put the line-numbers; possible values are (none, left, right)
  numbersep=5pt,                   % how far the line-numbers are from the code
  numberstyle=\tiny\color{mygray}, % the style that is used for the line-numbers
  rulecolor=\color{black},         % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
  showspaces=false,                % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
  showstringspaces=false,          % underline spaces within strings only
  showtabs=false,                  % show tabs within strings adding particular underscores
  stepnumber=2,                    % the step between two line-numbers. If it's 1, each line will be numbered
  stringstyle=\color{mymauve},     % string literal style
  tabsize=2,                       % sets default tabsize to 2 spaces
  title=\lstname                   % show the filename of files included with \lstinputlisting; also try caption instead of title
}

% Set document title and author
\title{\LaTeX \space{} Example \#1}
\author{Jeremy Pedersen}
\date{2019-02-18} % If date is left blank, it will be hidden

% Document body
\begin{document}

\maketitle % Insert the title, author, and date

\section{First section} % Create a section

We can use the listing package to place source code into our documents from a file:

% Include code from a file with filename indicated
\vspace{5mm}
\lstinputlisting[language=Python]{test_code.py}
\vspace{5mm}

\noindent Or we can quote a range of line numbers from the file (lines 19 and 20, for example):

% Include only specific lines from a file
\vspace{5mm}
\lstinputlisting[language=Python, firstline=19, lastline=20]{test_code.py}
\vspace{5mm}

% Forces content onto the next page, useful for documents which should look nice when printed out
\clearpage

\noindent We can also place code directly into \LaTeX{} without importing it from a file:

% Include code inline
\vspace{5mm}
\begin{lstlisting}[language=Python]
print("Hi, I'm Python 3!")
\end{lstlisting}
\vspace{5mm}

\subsection{First subsection} % Create a subsection

We can create and format mathematical expressions like so:

\vspace{2mm}
% Format a mathematical expression
$$x' = x \cdot s \cos \theta - y \cdot s \sin \theta + t_x$$
$$y' = x \cdot s \sin \theta + y \cdot s \cos \theta + t_y$$
\vspace{2mm}

\noindent We can also make a nice list:

% Make a list
\vspace{2mm}
\begin{enumerate}
\item I am the first thing in the list
\item I am the second thing in the list
\end{enumerate}
\vspace{2mm}

\noindent We can inline mathematical expressions such as this one "$4 \sigma_0$" using the "\$" sign. We can make mathematical expressions that occupy their own line, like this:

\vspace{2mm}
$$u = (x - x_0) \frac{1}{4 \sigma_0} \cos \theta_0 - (y - y_0) \frac{1}{4 \sigma_0} \sin \theta_0 + 4 = (0 - 16) \frac{1}{4} - 0 + 4$$
\vspace{2mm}

\subsection{Second subsection}

We can also make tables and charts using the array type like so:

% Make an array or table
\vspace{2mm}
\[
\phi = \left\{
\begin{array}{l l}
\theta_0 + \theta_{pt}         & if \ \theta_0 + \theta_{pt} \in [0,2 \pi)\\
\theta_0 + \theta_{pt} + 2 \pi & if \ \theta_0 + \theta_{pt} < 0\\
\theta_0 + \theta_{pt} - 2 \pi & if \ \theta_0 + \theta_{pt} \ge 2 \pi\\
\end{array}
\right\}
\]
\vspace{2mm}

\noindent I can start an enumerated list of items here...

\vspace{5mm}
\begin{enumerate}
\item One thing
\item Another thing
\end{enumerate}
\vspace{5mm}

\noindent And then...

\section{Second section}

...I can continue it here!

\vspace{5mm}
\begin{enumerate}[resume]
\item Yet more stuff
\item Some other things
\end{enumerate}
\vspace{5mm}

Inserting figures is also relatively easy to do:

\vspace{5mm}
% Insert a figure with an image
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{test_image.jpg}
\caption{I can embed images too}
\end{figure}

\noindent We can make a table with centered elements:

\vspace{5mm}
\[
\left[
\begin{array}{c c c}
1.1754  & -0.8334 & 193.4191\\
0.2062  & 1.0380  & -141.0333\\
-0.0008 & 0.0007  & 1.0000\\
\end{array}
\right]
\]
\vspace{5mm}

\noindent There you go! That should be enough to get you started on LaTeX!

\end{document}
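One basic feature the walkthrough above stops short of showing is cross-referencing. As an optional addition that is not part of the original example, labelling the embedded figure and referring to it from the text could look like this:

\begin{verbatim}
% Label the figure inside its environment...
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{test_image.jpg}
\caption{I can embed images too}
\label{fig:embedded_image}
\end{figure}

% ...and refer to it from anywhere in the text:
As shown in Figure~\ref{fig:embedded_image}, embedding images is easy.
\end{verbatim}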
{ "alphanum_fraction": 0.6980436177, "avg_line_length": 34.2637362637, "ext": "tex", "hexsha": "b277fc280b29e452ae3387e8d637eeff29bb6883", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c5f6731b5cb92bebd1ca2a6cce6ba154c5d5beca", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "jeremypedersen/latexHints", "max_forks_repo_path": "latex_hints.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c5f6731b5cb92bebd1ca2a6cce6ba154c5d5beca", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "jeremypedersen/latexHints", "max_issues_repo_path": "latex_hints.tex", "max_line_length": 168, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c5f6731b5cb92bebd1ca2a6cce6ba154c5d5beca", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "jeremypedersen/latexHints", "max_stars_repo_path": "latex_hints.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-29T07:47:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-27T05:11:39.000Z", "num_tokens": 1790, "size": 6236 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\hspace{0.1in} Computational Music (or Music Computation) and Music Synthesis}
\label{computationalmusic}

I was first introduced to evolutionary music when I was looking into evolutionary computation, which is a field in artificial intelligence (AI), a couple of years back. That is, computer scientists are creating music using AI. \\

Dr. Peter J. Bentley (\url{http://www.cs.ucl.ac.uk/staff/p.bentley/}) leads the Digital Biology Interest Group (\url{http://www.cs.ucl.ac.uk/staff/P.Bentley/igdigitalbiology/index2.html}) at University College London (UCL). In an interview with the BBC for his book, ``Digital Biology,'' he demonstrates how evolutionary algorithms (a technique in AI) can be used to create music (``evolutionary'' jazz music) that can complement a jazz musician, or improvise and lead a jazz band. \\

The AI software plays jazz along with musicians, and ``adapts'' to complement the musician during its ``evolution'' by tuning in to the way ``its'' band members play jazz music. Alternatively, the AI software can ``evolve'' and improvise its jazz piece during its ``evolution'', while real jazz musicians (people) can accommodate its style of music (just like musicians would accommodate John Coltrane or Miles Davis when they improvise). \\

You can listen to a sample of this demonstration at the end of the interview that is provided in the audio clip at: audio clip for evolutionary jazz music (\url{http://www.peterjbentley.com/digitalbiology3.html}). \\

I think this is neat. AI software for evolutionary music allows musicians to play cool stuff when they are alone or when one or two members of a jazz band are missing. What may surprise the musicians is that the evolutionary process of such AI software is stochastic in nature... It generates notes randomly, and selectively adds them to synthesize jazz music based on rules/grammar in music theory for a given genre. \\

Other examples and resources of evolutionary music are MusiGenesis (\url{http://www.musigenesis.com/}), Evolutionary Sound System (\url{http://www.mcld.co.uk/evolutionarysound/}), GeneticDrummer (\url{http://dostal.inf.upol.cz/evm.html}), Evolutionary Music Bibliography (\url{http://www.ist.rit.edu/~jab/EvoMusic/EvoMusBib.html}), and ``Evolutionary Computation and its application to art and design'' by Craig Reynolds (\url{http://www.red3d.com/cwr/evolve.html}). \\

Also, see articles from Wikipedia on Algorithmic composition (\url{http://en.wikipedia.org/wiki/Algorithmic_composition}), Pop Music Automation (\url{http://en.wikipedia.org/wiki/Pop_music_automation}), Computer music (\url{http://en.wikipedia.org/wiki/Computer_music}), Generative music (\url{http://en.wikipedia.org/wiki/Generative_music}), and Evolutionary music (\url{http://en.wikipedia.org/wiki/Evolutionary_music}). \\

Around this time, I noticed USC's dominance in research on entertainment technology and computational music (or music computation). \\

Later, when I was reading about Joseph Costello (former CEO of Cadence Design Systems), I learned of electronic implementations (as VLSI systems, or hardware if you like; \url{http://en.wikipedia.org/wiki/Sound_synthesis}) of speech (\url{http://en.wikipedia.org/wiki/Speech_synthesis}) or sound synthesis (\url{http://en.wikipedia.org/wiki/Software_synthesizer})... In the June 2010 edition of the IEEE Transactions on Very Large Scale Integration Systems (\url{http://ieeexplore.ieee.org/servlet/opac?punumber=92}), there is a journal paper on a VLSI design of a ``scalable and programmable sound synthesizer'' (\url{http://dx.doi.org/10.1109/TVLSI.2009.2017197}). This is pretty cool! \\

Recently, I encountered a reference from the June 25, 2010 edition (\url{http://technews.acm.org/archives.cfm?fo=2010-06-jun/jun-25-2010.html}) of ACM TechNews (\url{http://queue.acm.org/technews.cfm}) about ``a computer program (\url{http://www.csmonitor.com/Innovation/Tech/2010/0617/How-a-computer-program-became-classical-music-s-hot-new-composer}) that composes classical music by following rules of music its [computer] programmer taught it''. Ain't that awesome? AI software that can compose classical music! \\

Some resources for computational music (or music computation) include: USC's Music Computation and Cognition (MuCoaCo) group (\url{http://www-bcf.usc.edu/~mucoaco/}), the MCM Workshop on Computational Music Analysis (\url{http://www.neurogems.org/music/index.html}), and Computational Music for Indian classical music (\url{http://www.computationalmusic.com/}). \\

Some resources for music synthesis include: ``The Amateur Gentleman's Introduction to the Principles of Music Synthesis'' by Beau Sievers (\url{http://beausievers.com/synth/synthbasics/}); Stanford University's Center for Computer Research in Music and Acoustics (\url{https://ccrma.stanford.edu/groups}); the CERL Sound Group, Computer-based Education Research Laboratory at the University of Illinois at Urbana-Champaign (\url{http://www.cerlsoundgroup.org/main.html}); and Dr. Lippold Haken's web page (\url{http://www.cerlsoundgroup.org/CSGdesc/LippoldHome.html}). \\

Also, see \url{http://www.acs.psu.edu/users/smithsh/synth.html} and \url{http://www.informatics.sussex.ac.uk/users/nc81/courses/cm1/cm1bibliography.html} for further information on computational music (or music computation) and music synthesis. \\

\vspace{1cm}
\ \\
Claim: \\
There is art in engineering and computer science. \\
\ \\
Proof by existence: \\
Look at the conference proceedings for the track on ``Artificial Life Models for Musical Applications II: Searching for musical creativity'' (\url{http://galileo.cincom.unical.it/esg/Music/ALMMAII/ALMMAII_file/home.html}) at the Workshop of the $8^{th}$ International Conference on the Simulation and Synthesis of Living Systems (ALife VIII), which was held in 2002 at the University of New South Wales in Sydney, Australia. \\
\ \\
Lemma: \\
The aforementioned audio clip found on Dr. Bentley's web page. \\
\ \\
Conjecture: \\
Computer scientists can replace musicians with computers and software... It is preposterous of me to suggest this conjecture, and I apologize for any grievance caused. \\

\vspace{1cm}

Yes!!! Computer scientists have used artificial intelligence to create music - opera, jazz, and classical music. And also poems, jokes, stories, and visual art (e.g., evolutionary art [\url{http://en.wikipedia.org/wiki/Evolutionary_art}] or algorithmic art [\url{http://en.wikipedia.org/wiki/Algorithmic_art}])... \\

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{\hspace{0.1in} Addendum \#1: Computational Music}
\label{compmusicaddendum1}

See \url{http://www.nsf.gov/news/special_reports/science_nation/index.jsp} or \url{http://www.nsf.gov/news/special_reports/science_nation/musicman.jsp} for a project where electrical engineers collaborate with musicians. That is, the musicians will play their musical instruments, while the technologies (developed by the electrical engineers) will transcribe the music and subsequently synthesize that music to accompany the musician(s). \\

Prof. Mark Bocko from the University of Rochester's Department of Electrical and Computer Engineering works on:
\vspace{-0.3cm}
\begin{enumerate}
  \itemsep -4pt
  \item musical acoustics (physics of wind instruments)
  \item automated transcription
  \item model-based music synthesis
  \item musical sound representations
  \item audio watermarking and steganography
\end{enumerate}

Prof. Bocko mentions that the audio/data compression technologies developed in his research projects can be used to enable musicians to perform in geographically distributed locations. This requires technologies to synchronize the musicians, so that they can play in harmony. That said, USC had been working on this for several years at least. Check out the ``Distributed Immersive Performance'' (DIP) project at: \url{http://imsc.usc.edu/dip/index.html}. Related projects include ``Network Stream Reliability and Synchronization'' (\url{http://imsc.usc.edu/research/project/netstream/index.html}) and ``Video Streaming Over the Internet'' (\url{http://imsc.usc.edu/research/project/videostream/index.html}). \\

You can check out other cool research on entertainment technologies at the Integrated Media Systems Center: \url{http://imsc.usc.edu/}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{\hspace{0.1in} Addendum \#2: Resources for Computational Music}
\label{compmusicaddendum2}

Resources for computational music:
\vspace{-0.3cm}
\begin{enumerate}
  \itemsep -4pt
  \item Texas A\&M University:
    \vspace{-0.3cm}
    \begin{enumerate}
      \itemsep -2pt
      \item Philip Galanter. Available online at: \url{http://philipgalanter.com/about/}; last accessed on November 3, 2010.
    \end{enumerate}
  \item University of Coimbra:
    \vspace{-0.3cm}
    \begin{enumerate}
      \itemsep -2pt
      \item Department of Informatics Engineering:
        \vspace{-0.2cm}
        \begin{enumerate}
          \itemsep -2pt
          \item Penousal Machado:
            \vspace{-0.1cm}
            \begin{enumerate}
              \itemsep -1pt
              \item \url{http://eden.dei.uc.pt/~machado/}
              \item Evolutionary Art
              \item Computational Creativity
              \item Evolutionary Computation
              \item Artificial Artists
              \item Computer-Aided Creativity
            \end{enumerate}
        \end{enumerate}
    \end{enumerate}
  \item Indiana University, Bloomington: Music Informatics Program, \url{http://www.music.informatics.indiana.edu/}
  \item \cite{Romero2008}
  \item \cite{Miranda2007}
\end{enumerate}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{\hspace{0.1in} Terms in Computational Music}
\label{compmusicterms}

Terms in computational music:
\vspace{-0.3cm}
\begin{enumerate}
  \itemsep -4pt
  \item atonal and aleatoric (randomly generated) music
  \item computer generated music
\end{enumerate}
{ "alphanum_fraction": 0.7595517066, "avg_line_length": 81.7916666667, "ext": "tex", "hexsha": "c40bc87e7741a580301ebfe3378860e3940ca1ab", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-12-01T04:51:15.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-01T04:51:15.000Z", "max_forks_repo_head_hexsha": "af17d1b753127db7a57f314e99bd1be2f708356a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eda-ricercatore/eecs-stem-outreach", "max_forks_repo_path": "notes/computational_music.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "af17d1b753127db7a57f314e99bd1be2f708356a", "max_issues_repo_issues_event_max_datetime": "2021-06-25T15:16:14.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-19T20:55:15.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eda-ricercatore/eecs-stem-outreach", "max_issues_repo_path": "notes/computational_music.tex", "max_line_length": 708, "max_stars_count": null, "max_stars_repo_head_hexsha": "af17d1b753127db7a57f314e99bd1be2f708356a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eda-ricercatore/eecs-stem-outreach", "max_stars_repo_path": "notes/computational_music.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2415, "size": 9815 }
\section{Schedule}

\subsection{Gantt Chart}
% The initial task break-down schedule displayed graphically as a Gantt chart.
% (see https://www.fool.com/the-blueprint/gantt-chart/)
% Gantt charts list the tasks, show the length of each task, and dependencies
% between tasks.
\begin{figure}[h]
\includegraphics[width=14cm]{docs/vision_and_scope/Gantt.PNG}
\end{figure}

\subsection{Key Milestones}
% Milestones are points in the development where you have implemented some
% number of major features. For senior project, you generally have five
% milestones: one at the end of 491, three in 492, and one in 493.
% In this section, divide the Major Features between the milestones. You should
% front-load the schedule as much as possible. That is, leave 10% of the work
% for the last milestone.

Milestone 1: Research and Implement Pipeline
\begin{itemize}
\itemsep0em
\item Research software to use for the project
\item Setup TTS Web Service
\item Find a placeholder for \LaTeX\ to SSML/MathML
\item Find a parsing service for \LaTeX\ file
\item Create simple web service for uploading files
\item Create list of major commands and environments in \LaTeX\
\end{itemize}

Milestone 2: Parse \LaTeX\ file to SSML/MathML
\begin{itemize}
\itemsep0em
\item Understand how to use parser
\item \LaTeX\ basic commands and environments will be parsed to a marked up file
\end{itemize}

Milestone 3: Implement Base \LaTeX\ Packages
\begin{itemize}
\itemsep0em
\item Implement package conversion
\item Refine wavelengths (text analysis)
\end{itemize}

Milestone 4: Support Additional Packages
\begin{itemize}
\itemsep0em
\item Support additional \LaTeX\ packages in converter
\item Continue to refine wavelengths (text analysis)
\end{itemize}

Milestone 5: Add UI Functionality/Additional Features
\begin{itemize}
\itemsep0em
\item Setup web browser for public usage
\item Integrate with other applications such as Overleaf
\end{itemize}

\subsection{Resource Assignments}
% Resources include budgets, consumables like paint, and computing equipment
% like servers and development systems. Assign these resources, if any, to the
% tasks, listed above.
%
The only costly resources for our project will be the cloud-based TTS service and the server that will host our \TeX 2Speech application. Our top candidate for the TTS service, Amazon Polly, has a free trial period which extends beyond the development window for the initial program's release, so this resource will not be a concern for the time being. The server hosting will be a bit more problematic, but will thankfully only be essential during the final phases of development. It will primarily be needed to test the UI and request handling portions of the project, so once we reach that stage a timesheet will have to be created to allot computation time.

\subsection{Individual Responsibilities}
% Who is responsible for which parts of the development effort.
%
As it stands, the project is currently undergoing a research period leading to future possibilities in task designation. Once a suitable foundation of background knowledge and experience in each aspect of the process is gained, we will then be able to appoint individual responsibilities based on acquired expertise.
{ "alphanum_fraction": 0.7797943134, "avg_line_length": 52.4761904762, "ext": "tex", "hexsha": "95efecef786707e277e1d1e8b7487b2e8adbb438", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-04-14T17:51:26.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-30T18:18:40.000Z", "max_forks_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "willsower/latex2speech", "max_forks_repo_path": "Documentation/sample_input_files/schedule.tex", "max_issues_count": 50, "max_issues_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_issues_repo_issues_event_max_datetime": "2021-07-14T14:22:45.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-15T23:03:43.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "willsower/latex2speech", "max_issues_repo_path": "Documentation/sample_input_files/schedule.tex", "max_line_length": 659, "max_stars_count": 3, "max_stars_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "willsower/latex2speech", "max_stars_repo_path": "Documentation/sample_input_files/schedule.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-30T20:35:39.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-17T22:13:23.000Z", "num_tokens": 776, "size": 3306 }
\section{Test Harness}
\label{sec:harness}
% high level reference to test harness
% is found in /src/doc/dev_guide/testing.tex

%%%%%%%%%%%%%%%%%%%%
% Test Harness Overview
The Test Harness is a flexible test control system intended to provide a thorough parameter space exploration of remapping and redistribution of distributed arrays and fields. The parameter space is defined through configuration files which are interpreted at run time to perform the desired tests. The Test Harness is integrated into the Unit test framework, enabling the Test Harness to be built and run as part of the Unit tests. The test results are reported to a single standard-out file which is located with the unit test results.

The motivation for employing such a hierarchy of configuration files is to allow a high degree of customization of the test configurations by combining individual specification files. Complex combinations of test cases are easily specified in high level terms. Each class has its own collection of specification files tailored to the needs of that class.

\subsection{Specifying Test Harness Tests}
The test harness code consists of a single executable that uses customizable configuration files, which are located within the \texttt{<classdir>/tests/harness\_config} directory of each supported class (currently only ESMF\_ARRAY and ESMF\_FIELD). There are three ways to invoke the test harness: as an integral part of running the unit tests; as a stand-alone test invoked through gmake; and as a stand-alone test invoked from the command line. Running the harness along with the unit tests provides frequent regression testing of the redistribution and regridding features. Running the test harness in stand-alone mode using gmake is useful for isolating faults in failed test cases. Running the test harness from the command line provides the most control over Test Harness execution. Understanding the underlying program allows the developer full access to test harness features, which is useful when developing makefiles, scripts, and configuration files.

\subsection{Running Test Harness Tests as Part of Unit Tests}
When running as an integral part of the unit tests, the makefile contained in the \texttt{<classdir>/tests} directory of each supported class is executed through the run\_unit\_tests target. As part of this target, the makefile selects the desired test harness target using a class specific environment variable (e.g. ESMF\_TESTHARNESS\_FIELD). A default case is provided in case the environment variable is not set.

The environment variables currently defined are:
\begin{itemize}
\item ESMF\_TESTHARNESS\_ARRAY - set target for Array class tests
\item ESMF\_TESTHARNESS\_FIELD - set target for Field class tests
\end{itemize}

The targets currently supported for the ESMF\_TESTHARNESS\_ARRAY variable are:
\begin{itemize}
\item RUN\_ESMF\_TestHarnessArray\_default - Used to verify functionality of the test harness
\item RUN\_ESMF\_TestHarnessArray\_1 - Basic test of 2D and 3D redistribution
\item RUN\_ESMF\_TestHarnessArray\_2 - Quick test of 2D and 3D redistribution
\item RUN\_ESMF\_TestHarnessArrayUNI\_default - Used to verify functionality of the test harness in uni-PET mode
\item RUN\_ESMF\_TestHarnessArrayUNI\_1 - Basic test of 2D and 3D redistribution in uni-PET mode
\item RUN\_ESMF\_TestHarnessArrayUNI\_2 - Quick test of 2D and 3D redistribution in uni-PET mode
\end{itemize}

The targets currently supported for the ESMF\_TESTHARNESS\_FIELD variable are:
\begin{itemize}
\item RUN\_ESMF\_TestHarnessField\_default - Used to verify functionality of the test harness
\item RUN\_ESMF\_TestHarnessField\_1 - Basic test of 2D and 3D regrid
\item RUN\_ESMF\_TestHarnessFieldUNI\_default - Used to verify functionality of the test harness in uni-PET mode
\item RUN\_ESMF\_TestHarnessFieldUNI\_1 - Basic test of 2D and 3D regrid in uni-PET mode
\end{itemize}

Each target selects the desired sequence of test cases for that target. For example, a series of test cases could be developed to run on different days, providing partial test coverage on a particular day but complete coverage over a week. In this case, each day would have a separate target and the environment variable would be set for that day's test case. An example of a test harness target appears below.

\begin{verbatim}
RUN_ESMF_TestHarnessField_1:
        $(MAKE) TESTHARNESSCASE=field_1 NP=4 run_test_harness
\end{verbatim}

In this example, the test harness target is RUN\_ESMF\_TestHarnessField\_1. This target will execute a single test harness test case. Additional lines can be added if additional steps are desired. The next section describes the details of invoking a test case.

\subsection{Invoking a Single Test Harness Test Case Using gmake}
A single test harness test case can be invoked from the \texttt{<classdir>/tests} directory through the make command with the local parameters \texttt{TESTHARNESSCASE} and \texttt{NP} and the \texttt{run\_test\_harness} target. For example,
\begin{verbatim}
gmake TESTHARNESSCASE=field_1 NP=4 run_test_harness
\end{verbatim}
will run the test harness test case \texttt{field\_1} on \texttt{4} processors. Each test case is defined by a series of configuration files in the \texttt{<classdir>/tests/harness\_config} directory. All of the configuration files for a particular test case are prefixed with \texttt{<casename>\_}, where \texttt{<casename>} is a unique name for the test. The top level configuration file has the suffix \texttt{\_test.rc}. Thus, the top level configuration file for the \texttt{field\_1} test case is \texttt{field\_1\_test.rc}. The next section describes invoking the Test Harness from the command line.

\subsection{Invoking a Single Test Harness Test Case from the Command Line}
The Test Harness is built by invoking gmake from the \$ESMF\_DIR/src/test\_harness/src directory or by building all the unit tests. This creates the executable file ESMF\_TestHarnessUTest in the currently selected test directory.
To run the executable, enter:

\begin{verbatim}
<run path>ESMF_TestHarnessUTest <cmd args>

where
   <run path> is the path to the executable, and
   <cmd args> are the optional command arguments.

The cmd args are as follows:
   -path <config path>
   -case <case filename>
   -xml  <xml filename>
   -norun
\end{verbatim}

The path argument sets the path to the configuration files. All of the configuration files for a test case must reside in the same directory. If this argument is not present, the current working directory will be used as the path.

The case argument is the name of the top level configuration file, which is described in a later section of this document. If this argument is not present, test\_harness.rc will be used.

The xml argument instructs the test harness to generate an XML test case summary file in the current working directory.

The norun argument instructs the test harness not to run the test cases. The configuration files will be parsed and, if selected, an XML file will be generated. The XML file can be post-processed to generate human readable test configuration summary reports.

If running under mpi, you need to prefix this command with the proper mpirun command.

The next section describes the format of the top level resource file.

\subsection{Top Level Configuration File}
\label{sec:harness_toplevelfile}
As mentioned above, the top level configuration file will be located in the \texttt{<classdir>/tests/harness\_config} directory and named \texttt{<casename>\_test.rc}. The top level configuration file specifies the test class, the format for reporting the test results, and the location and file names containing the \textit{problem descriptor files}.

Originally, the test harness had a single configuration file for each class and supported only two test cases, a non-exhaustive case and an exhaustive case. This turned out to be too restrictive, and test cases are now selected through environment variables. Unfortunately, some of the old constructs remain until the obsolete feature is removed. Currently, the test harness only uses the non-exhaustive test case for each configuration file. So, referencing the example below, only the nonexhaustive tag is actually used by the test harness.

Also note that comments are preceded by the {\#} sign, that single parameter values follow a single colon {:} punctuation mark, and that tables of multiple parameter values follow, and are terminated by, a double set of colon {::} punctuation marks. This file is read by the \texttt{ESMF\_Config} class methods, and therefore must adhere to their specific syntax requirements. The entries can be in any order, but the name tags must be exact - including CAPITALIZATION. While it is not strictly necessary, the file names are enclosed in quotation marks, either single or double, to guarantee they are read correctly.

\begin{verbatim}
# Field test Harness Config file

# test class
test_class: FIELD

# report a summary of the test configurations before actually conducting tests
setup_report: TRUE

# test result report - options are:
#   test_report: FULL    - full report presenting both success and failure configs
#   test_report: FAILURE - report only failure configurations
#   test_report: SUCCESS - report only successful configurations
#   test_report: NONE    - no report
test_report: FAILURE

# descriptor file for the nonexhaustive case
nonexhaustive::
  'nonexhaustive_descriptor.rc'
::  # end of list

# descriptor files for the exhaustive case (obsolete, but keep in for awhile)
exhaustive::
  'exhaustive_descriptor.rc'
::  # end of list
\end{verbatim}

The argument for the tag \texttt{test\_class:} specifies the ESMF class to be tested; here it is Field. The tag \texttt{setup\_report:} specifies whether a setup report is written to the test report. The tag \texttt{test\_report:} specifies the style of report to be constructed. This report is appended to the standard-out file which is located with the Unit test results. The tag \texttt{nonexhaustive::} delimits a table which contains the file names of problem descriptor files pertaining to the current test configuration. The \texttt{exhaustive} tag is obsolete and will be removed when time permits; it has been replaced by the Test Harness environment variables, which select the desired test case from the available portfolio.

%%%%%%%%%%%%%%%%%%
\subsubsection{Problem descriptor file}
\label{sec:harness_problemdescriptorfile}
The problem descriptor files contain \textit{descriptor strings} that describe the family of problems with the same memory topology, distribution, and grid association. The files also contain the names of \textit{specifier files} which complete the descriptions contained within the descriptor strings. This structure allows a high level of customization of the test suite.

The problem descriptor file contains only one table, again conforming to the \texttt{ESMF\_Config} class standard. The contents of the table must be delimited by the tag \texttt{problem\_descriptor\_string::}. The first element on the line, enclosed by quotes, is the problem descriptor string itself. Since the descriptor strings contain gaps and special characters, it is necessary to enclose the strings in quotation marks to guarantee proper parsing. The problem descriptor table may contain any number of descriptor strings, up to the limit imposed by the \texttt{ESMF\_Config} class, each on a new line. Lines can be continued by use of an ampersand {\&} as a continuation symbol at the beginning of any continued line. The syntax of the problem descriptor string follows the field taxonomy syntax. The following is a basic redistribution example.

\begin{verbatim}
# Basic redistribution example #####################################
problem_descriptor_string::
  '[B1G1;B2G2] --> [B1G1;B2G2]'  -d Dist.rc  -g Grid.rc
  '[B1G1;B2G2] --> [B1G2;B2G1]'  -d DistGrid.rc &
        otherDistGrid.rc yetanotherDistGrid.rc &
        -g Grid.rc anotherGrid.rc
::  # end of list
\end{verbatim}

In the above example, the two problem descriptor strings specify that a redistribution test is to be conducted, indicated by the syntax \texttt{-->}, between a pair of rank two blocks of memory. Following the problem descriptor string are multiple flags and the names of specifier files. Each flag indicates a portion of the configuration space which is defined by the contents of the indicated specifier files.
\begin{center}
\begin{tabular}{| c | c |}
\hline
{\em argument} & {\em definition} \\
\hline
\hline
-d & DELayout/DistGrid specification \\
-g & Grid specification \\
\hline
\end{tabular}
\end{center}

The filenames following a flag are used to specify the values for the parameter associated with that flag. The specified values from each file are combined to define that parameter's span of values. All the files associated with all the flags are combined to define the full ensemble of tests. The first problem descriptor string in the example indicates that the ensemble of distribution configurations to be tested is specified by the configurations contained within the file \texttt{Dist.rc}. Similarly, the range of grid configurations is specified within the single file \texttt{Grid.rc}. For the second problem descriptor string, the ensemble of distribution configurations to be tested is specified by the union of configurations contained within the three files \texttt{DistGrid.rc}, \texttt{otherDistGrid.rc}, and \texttt{yetanotherDistGrid.rc}. While in the first case the range of grid configurations is specified by the single file \texttt{Grid.rc}, the second case adds the configurations found in \texttt{anotherGrid.rc} to the ensemble.

%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Problem descriptor string syntax}
The problem descriptor string is contained in the problem descriptor file and describes a class of tests to be conducted. The basic syntax describes a contiguous chunk of memory spanned by some sort of logically rectangular indexing, where the leftmost index varies fastest in memory. Each semicolon delineated entry is associated with a native array dimension. The index locations are replaced by signifiers that express associations and distributions of the memory through the use of short descriptors. For example, $[ G1 \; ; G2 ]$ indicates that a 2D logically rectangular block of memory is associated with a 2D grid in its natural order. The signifier $G$ represents an undistributed tensor grid. Reversing the grid signifiers, $[ G2; G1 ]$, indicates that the fastest varying dimension is instead associated with the second grid dimension. Specific information about the grid, such as its size, type, and topology, is left to be defined by the specifier files. It is the associations between memory and the grid that are stressed with this syntax. To distribute the grid, a distribution signifier is needed. The block distribution of each memory dimension is indicated by the signifier $B$. Therefore $[B1 \; G1; \; B2 \; G2; \; G3]$ signifies that a 3D logically rectangular block of memory, which has a 3D associated grid, is distributed in its first two dimensions, and not its third.

%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Grid syntax}
As we have already seen, the symbol $G$ in the problem descriptor string indicates that a dimension of a tensor grid is associated with a memory location. Alternatively, an unstructured grid is indicated with the symbol $U$. As an example, consider a block of memory. To associate a tensor grid with specific dimensions of that block, the symbol $G$ is used with a numerical suffix to indicate a specific dimension of the grid. The specific aspects of this grid are left undefined at this point; only the fact that a particular dimension of a grid is associated with a particular dimension of the memory block is implied by the grid syntax.

The complete syntax for a tensor grid specification is $G i_g \; + H \{ \# : \# \}$ where
\begin{itemize}
\item $i_g$ is the index of the grid axis, and
\item $H \{ \# : \# \}$ is the optional signifier indicating the lower and upper sizes of the halo, where the $\#$ symbols represent integer values. The option of separate lower and upper values allows the indication of asymmetric haloes.
\end{itemize}
The symbol to indicate a halo is appended to the grid description through use of a $+$ sign. Its absence indicates that there is no halo. Currently, only symmetric haloes are supported. To illustrate the use of this syntax, consider the following example.
\begin{center}
\begin{verbatim}
[ G1 ; G2 + H{1:1} ]
\end{verbatim}
\end{center}
This string indicates that each memory location of a 2D block of memory is associated with a grid dimension: the first memory location with the first grid dimension, and the second memory location with the second grid dimension. In addition, the second memory index has a symmetric halo of size one at the low end and size one at the high end.

At times a memory location might have no grid association. The symbol $\ast$ is used in this case as a placeholder. For example,
\begin{center}
\begin{verbatim}
[ G2 ; G1 ; * ]
\end{verbatim}
\end{center}
indicates that only the first two memory locations are associated with grid dimensions. The symbol $\ast$ located in the last memory location is a placeholder that indicates that location in memory has no associations. We will see later that the $\ast$ can be used to indicate that a memory location has both no grid and no distribution association. In this example, the grid association has been reversed from its natural order. The first memory location is associated with the second grid dimension, and the second memory location with the first grid dimension.

%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Distribution syntax}
The distribution syntax describes how a contiguous chunk of memory is distributed among the DEs. The syntax is analogous to the grid description syntax. Three styles of memory distribution are supported:
\begin{description}
\item[simple block] $B \; i_D$; where $i_D$ is the distribution space axis.
\item[full block-cyclic] $C \; i_D$; where $i_D$ is the distribution space axis.
\item[arbitrary] $A$
\end{description}
Assuming a two dimensional distribution, the expression $[ B1 \; G1; \; B2 \; G2 ]$ indicates that the two dimensional logically rectangular block of memory is
\begin{itemize}
\item associated with a two dimensional grid, in its natural order,
\item distributed in its first memory dimension according to the first distribution axis, and
\item distributed in its second memory dimension according to the second distribution axis.
\end{itemize}
The order of the distribution signifiers can be reversed just as with the grid association. As was mentioned before, the symbol $\ast$ is used as a placeholder to indicate a lack of association.
If the third memory location of an undistributed block of memory has no grid association, it would look like this; \begin{center} \begin{verbatim} [ G1; G2; * ] \end{verbatim} \end{center} Alternatively if the first two dimensions of a block of memory are distributed, but all three have associated grid dimensions it would take the form; \begin{center} \begin{verbatim} [ B1 G1; B2 G2; * G3 ] \end{verbatim} \end{center} An example of where the third memory location would have a grid association, but no distribution, is the case where the domain has been decomposed along the horizontal, but not the vertical. This is common for a 3D field such as the ocean or atmosphere. Lastly, if the last memory location has no associations, it would look like; \begin{center} \begin{verbatim} [ B1 G1; B2 G2; * * ] \end{verbatim} \end{center} The purpose of the $\ast$ symbols is increase readability by acting as a place holder. An illustrative example of this memory configuration is a two dimensional spatial field with multiple species or tracers represented by the third dimension. %%%%%%%%%%%%%%%%%%%%%%% \paragraph{Transformation method} The action of redistribution and remapping are signified by the symbols $-- \!\!\! >$ and $=\chi=>$ respectively, where $\chi$ refers to a character key representing a specific interpolation method. The currently supported interpolation methods are $B$ for first order bilinear and $P$ for the patch recovery method, which provides a smoother first order surface, which can meaningly differentiated. Future implementations will include $C$, for first order conservative, $S$ for second order bilinear, and $N$ for nearest neighbor. The character $X$ is reserved for unknown or user provided methods. \begin{center} \begin{tabular}{| c | c |} \hline {$\chi$} & {\em Action} \\ \hline \hline B & first order bilinear interpolation \\ P & patch recovery interpolation \\ & \\ C & first order conservative interpolation \\ S & second order conservative interpolation \\ N & nearest neighbor distance weighted average interpolation \\ X & unknown or user provided interpolation \\ \hline \end{tabular} \end{center} The problem descriptor string \begin{center} \begin{verbatim} [B1 G1; B2 G2 ] --> [B1 G1; B2 G2 ] \end{verbatim} \end{center} indicates an ensemble of redistribution problems where a 2D distributed source field, associated with a 2D grid, is redistributed into a new 2D {\em destination} distribution. Alternatively the example \begin{center} \begin{verbatim} [ B1 G1; B2 G2 ] =C=> [ B1 G1; B2 G2 ] \end{verbatim} \end{center} indicates the interpolation of one distributed 2D gridded field to another 2D grid using a first order conservative method. The interpolation method of first order conservative is indicated by placing a character $C$ in between a pair of equal $=$ signs. %%%%%%%%%%%%%%%%%%%%%%% \paragraph{Staggering syntax (Not currently implemented)} In addition to the default grid description, the stagger of the associated grid can be indicated by appending a key with an @ sign to the end of the block memory syntax. Since the stagger relates to the whole grid and not individual components of the grid, it is natural for it to be located outside of the block memory syntax $[ \cdots ]$. \begin{center} \begin{verbatim} [ B1 G1; B2 G2 ] @{#,#} \end{verbatim} \end{center} The key consists of values enclosed by a pair of brackets $\{ \cdots \}$. The rank of these values, equals the rank of the associated grid. 
Thus in the above example, there are two values indicating the stagger since the grid is of rank two. The stagger location key represents the location of a field with respect to the cell center location. It is indicated by relative cartesian coordinates of a unit square, cube, etc. To illustrate this further, consider the example of a 2D grid. The cell is represented by a unit square with the xy axis placed at its center, with the positive x-axis oriented {\em East} and the positive y-axis oriented {\em North}. The actual values are suppressed; only the directions $+$ and $0$ are used. This geometry is for reference purposes only, and does not literally represent the shape of an actual cell. The cell center, located at the origin of the cell, is indicated by $\{ 0,0 \}$. The corner of the cell is indicated by $\{ +,+ \}$, where the ones have been dropped to just leave the plus signs. The cell face normal to the X axis (right wall) is indicated by $\{ +, 0 \}$, while the face normal to the Y axis (top wall) is indicated by $\{ 0,+ \}$. This approach generalizes well to higher dimensions.
\begin{center}
\begin{tabular}{| c | c |}
\hline
{\em argument } & {\em definition} \\
\hline \hline
Center & \{ 0,0 \} \\
Corner & \{ +,+ \} \\
Face normal to X axis & \{+,0 \} \\
Face normal to Y axis & \{0,+ \} \\
\hline
\end{tabular}
\end{center}
With these four locations, it is possible to indicate the standard Arakawa grid staggerings. A key of $\{ 0,0 \}$ would be equivalent to an Arakawa A-grid, while a key of $\{ +,+ \}$ would represent an Arakawa B-grid. Components of the C and D grids would be indicated by the wall positions $\{ +,0 \}$ and $\{ 0,+ \}$.
\begin{center}
\begin{tabular}{| c | c |}
\hline
{\em Grid Stagger } & {\em coordinates} \\
\hline \hline
A-grid & \{ 0,0 \} \\
B-grid & \{ +,+ \} \\
C-grid \& D-grid & \{ +,0 \} and \{ 0,+ \} \\
\hline
\end{tabular}
\end{center}
Therefore a key of $\{ 0,0 \}$ in the previous example would indicate that the stagger is located at the cell center location, or the A-grid. Typically the stagger coordinates are suppressed for the A-grid. For example, the string
\begin{verbatim}
[B1 G1; B2 G2 ] =C=> [B1 G1; B2 G2 ] @{+,+}
\end{verbatim}
indicates that a collection of regridding tests is to be run where a block distributed two dimensional field is interpolated from one two dimensional grid onto a second two dimensional grid, using a first order conservative method. The source field data is located at the cell centers (an A-grid stagger location) and the destination field is to be located at the northeast corner (a B-grid stagger location). Additional information about the pair of grids and their distributions must be specified to run an actual test. This process generalizes to higher dimensions. Consider
\begin{verbatim}
[B1 G1; B2 G2 ; G3 ] =C=> [B1 G1; B2 G2 ; G3 ] @{+,+,+}
\end{verbatim}
which indicates that a collection of regridding tests is to be run between a 3D source grid (cell centered) and a 3D destination grid where the field stagger location is the upper corner of the 3D cell (B-grid horizontally and top of the cell vertically). A further discussion of grid staggers can be found in the section on the ESMF Grid class in the Reference Manual.
%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{General Data Structures}
It is possible to represent more general data structures which cannot be described as simple logically rectangular blocks of memory. These embedded structures are represented by combining multiple contiguous data chunks separated by commas.
For example, in models with nested grids or which employ multi-grid methods, it is useful to have an indexed structure of logically rectangular blocks of memory, where each block is generally a different shape and size. Such a structure can be represented with parentheses as delimiters, such as
\begin{center}
\begin{verbatim}
( tile , [ B1 G1; B2 G2 ] )
\end{verbatim}
\end{center}
Here we have a collection of 2D blocks of memory, each block being associated with a 2D grid and being distributed in both dimensions.
%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Specifier files}
The \textit{problem descriptor strings} are augmented by two types of \textit{specifier files} which complete the description of the test configuration. The specifier files indicate which members of the family described by the \texttt{problem descriptor string} are to be tested. The two types of specifier files define the grid and the distribution information needed to completely define a test, such as the type, size, and coordinates of the grid, the test function to be interpolated or redistributed, and the size of the distribution.

The nature of the grid specifier file varies depending on whether the test is a redistribution or a regridding. A redistribution test takes a field associated with a grid and rearranges it in processor space. So while a source and destination distribution are needed, only a single grid is necessary to conduct the test. A regridding test, on the other hand, takes a field associated with a source grid and interpolates it to a destination grid that is also part of a field. In this case both source and destination grids are needed. It is also assumed that a source and destination distribution are specified.
\begin{center}
\begin{tabular}{| c | c |}
\hline
{\em Redistribution} & source grid \\
 & source \& destination distribution \\
\hline \hline
{\em Regridding} & source \& destination grids \\
 & source \& destination distribution \\
\hline
\end{tabular}
\end{center}
%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Grid Specification}
\label{sec:harness_gridspecifier}
For the purposes of the test harness, a grid is completely defined in terms of its type, size, coordinates, and units. From this information either a rectilinear or a curvilinear grid is generated. The first step of the grid generation is to generate a rectilinear grid of specified size, range of coordinates, and either uniform or gaussian spacing between the coordinates. If only a rectilinear grid is needed, the grid generation is finished. If a curvilinear grid is desired, the rectilinear grid is taken as a base grid and its coordinates are smoothly stretched and/or shrunk according to an analytical function, to produce a curvilinear mesh.

\paragraph{Curvilinear Grid Coordinate Generation (For Future Implementation)}
If a curvilinear grid is desired, the rectilinear grid is taken as a base grid and its coordinates are smoothly stretched and/or shrunk according to an analytical function, to produce a curvilinear mesh.
The algebraic grid generation takes the form
\begin{eqnarray*}
X_{ij} &=& \hat{x}_{i} + \epsilon \Delta x \Gamma_{x}(x_{i},y_{j}) \\
Y_{ij} &=& \hat{y}_{j} + \epsilon \Delta y \Gamma_{y}(x_{i},y_{j})
\end{eqnarray*}
where the new curvilinear coordinates $X_{ij}$ and $Y_{ij}$ are functions of the original rectilinear grid coordinates $\hat{x}_{i}$ and $\hat{y}_{j}$, a small parameter $0 < \epsilon < 1$, the smallest mesh spacings $\Delta x$ and $\Delta y$, and an analytical function of the coordinates $\Gamma$. Four forms of $\Gamma$ are supported: a contracted grid, an expanded grid, a symmetric sine perturbed grid, and an asymmetric sine perturbed grid.
\begin{center}
\begin{tabular}{| l | l |}
\hline
{\em key word } & {\em formula }$\Gamma(x_{i},y_{j})$ \\
\hline \hline
contracting & $\Gamma_{x}(x_{i},y_{j}) = -(x_{i}/L_{x})(1-x_{i}/L_{x})(0.5-x_{i}/L_{x} )(y_{j}/L_{y})(1-y_{j}/L_{y})$ \\
 & $\Gamma_{y}(x_{i},y_{j}) = -(y_{j}/L_{y})(1-y_{j}/L_{y})(0.5-y_{j}/L_{y} )(x_{i}/L_{x})(1-x_{i}/L_{x})$ \\
\hline
expanding & $\Gamma_{x}(x_{i},y_{j}) = +(x_{i}/L_{x})(1-x_{i}/L_{x} )(0.5-x_{i}/L_{x} )(y_{j}/L_{y})(1-y_{j}/L_{y})$ \\
 & $\Gamma_{y}(x_{i},y_{j}) = +(y_{j}/L_{y})(1-y_{j}/L_{y})(0.5-y_{j}/L_{y} )(x_{i}/L_{x})(1-x_{i}/L_{x})$ \\
\hline
symmetric\_sin & $ \Gamma_{x}(x_{i},y_{j}) = \sin(4 \pi x_{i}/L_{x}) \cos(6 \pi y_{j}/L_{y})$ \\
 & $ \Gamma_{y}(x_{i},y_{j}) = \sin(4 \pi x_{i}/L_{x}) \cos(6 \pi y_{j}/L_{y})$ \\
\hline
asymmetric\_sin & $ \Gamma_{x}(x_{i},y_{j}) = \sin(4 \pi x_{i}/L_{x}) \cos(6 \pi y_{j}/L_{y})$ \\
 & $ \Gamma_{y}(x_{i},y_{j}) = \sin(4 \pi y_{j}/L_{y}) \cos(6 \pi x_{i}/L_{x})$ \\
\hline
\end{tabular}
\end{center}
These four choices for $\Gamma$ translate into four options for analytically generated curvilinear grids. Examples of these four curvilinear grid options are plotted in figure~\ref{fig:harness_smoothGrids}.
\begin{center}
\begin{figure}
\begin{tabular}{ l l}
\scalebox{0.3}{\includegraphics{Harness_ContractedGrid} } & \scalebox{0.3}{\includegraphics{Harness_ExpandedGrid} } \\
\scalebox{0.3}{\includegraphics{Harness_SymmetricGrid} } & \scalebox{0.3}{\includegraphics{Harness_AsymmetricGrid} }
\end{tabular}
\caption{The four options for smooth curvilinear grids; clockwise from the top left corner, a contracted grid, an expanded grid, a symmetric sine perturbed grid, and an asymmetric sine perturbed grid.}
\label{fig:harness_smoothGrids}
\end{figure}
\end{center}
In all, there are two types of rectilinear grids (uniform and gaussian), each with three connectivity options (no connection, a spherical pole connection, and a periodic option). A large variety of grids can be formed by mixing and matching the grid types and options. For example a standard latitude-longitude grid on a sphere is formed by using the pair of grid types uniform\_periodic and uniform\_pole. A gaussian grid is formed with uniform\_periodic and gaussian\_pole. And a regional grid on a sphere, without periodicity, is formed with uniform and uniform. A summary of the rectilinear grid types is given below.
\begin{center}
\begin{tabular}{| c | l |}
\multicolumn{2}{c}{Rectilinear grid options} \\
\hline
{\em grid type } & {\em modifier} \\
\hline \hline
UNIFORM & {\em none } \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
GAUSSIAN & {\em none } \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
\end{tabular}
\end{center}
The various curvilinear grid types are created in the same way, by mixing and matching grid type options. A regional expanding grid is formed using the grid types expanding and expanding.
Likewise a rotated regional grid is created by using the grid types rotation\_15degrees and rotation\_15degrees. A summary of the curvilinear grid types is given below.
\begin{center}
\begin{tabular}{| c | l |}
\multicolumn{2}{c}{Curvilinear grid options} \\
\hline
{\em grid type } & {\em modifier} \\
\hline \hline
CONTRACTING & {\em none} \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
EXPANDING & {\em none} \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
SYMMETRIC\_SIN & {\em none} \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
ASYMMETRIC\_SIN & {\em none} \\
 & \_POLE \\
 & \_PERIODIC \\
\hline
rotation\_15degrees & {\em none} \\
\hline
rotation\_30degrees & {\em none} \\
\hline
\end{tabular}
\end{center}
It should be noted that the rotated grid type only supports non-connected domains, and therefore has no option for \_pole and \_periodic connectivity.
%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Specifier file syntax for redistribution}
Since a redistribution test takes either an array or a field and rearranges it in processor space, the test requires only a single grid for the source and destination. The information required is:
\begin{center}
\begin{tabular}{| l |}
\hline
{\em parameters to define a grid for redistribution } \\
\hline \hline
Grid rank \\
Grid dimensions \\
Range of the coordinate axis \\
Coordinate units \\
\hline
\end{tabular}
\end{center}
This information is provided by the redistribution grid specifier file. An example of a redistribution grid specifier file is provided below.
\begin{center}
\begin{verbatim}
# grid.rc
########################################
map_type: REDISTRIBUTION

# regular rectilinear Grid specification
map_redist::
#rank  spacing     size  range (min/max)     units
#
2      'UNIFORM'   120   -3.14159  3.14159   'RAD'
       'UNIFORM'    90   -1.57     1.57      'RAD'
#
2      'UNIFORM'   400   -180      180       'DEG_E'
       'GAUSSIAN'  200   -88       88        'DEG_N'
::
\end{verbatim}
\end{center}
The first piece of information required for the file is the \texttt{map\_type} key. It should be set to \texttt{REDISTRIBUTION} to indicate that it is intended to define grids for a redistribution test. Next is a configuration table which specifies multiple grids. The specification can be stretched over multiple table lines by use of the continuation symbol $\&$. The order of information is as follows:
\begin{center}
\begin{tabular}{| l |}
\hline
{\em parameters to define a grid for redistribution } \\
\hline \hline
Grid rank \\
Grid axis type \\
Grid size \\
Range of the coordinate axis \\
Coordinate units \\
\hline
\end{tabular}
\end{center}
There are two 2D grids specified in the file. The first is a standard uniformly spaced latitude-longitude grid, where none of the grid coordinates has any topological connections. The horizontal coordinates are in radians. The second 2D grid is a gaussian spherical grid. The gaussian grid has a spherical topology and is in degrees. Four types of coordinate units are supported: degrees, radians, meters, and kilometers. Each has multiple equivalent key words.
\begin{center}
\begin{tabular}{| l | l |}
\multicolumn{2}{c}{ Supported units } \\
\hline
{\em name } & {\em key word} \\
\hline \hline
Degrees & DEGREES \\
 & DEG \\
 & DEG\_E \\
 & DEG\_N \\
\hline
Radians & RADIANS \\
 & RAD \\
\hline
Meters & METERS \\
 & M \\
\hline
Kilometers & KILOMETERS \\
 & KM \\
\hline
\end{tabular}
\end{center}
%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Specifier file syntax for regridding}
The process of regridding takes a distributed test field from a source grid and interpolates it to a distributed destination grid. Therefore it requires information specifying both a source and destination grid, and an explicit test function to interpolate. The information required is:
\begin{center}
\begin{tabular}{| l |}
\hline
{\em parameters to define a grid for regridding } \\
\hline \hline
source and destination grid rank \\
source and destination grid axis type \\
source and destination grid dimensions \\
source and destination range of coordinate axis \\
source and destination coordinate units \\
test function with parameters to interpolate \\
\hline
\end{tabular}
\end{center}
This information is provided by the regridding grid specifier file. An example of a regridding grid specifier file is provided below.
\begin{center}
\begin{verbatim}
# grid.rc
########################################
map_type: REGRID
################################################################################
# grid | source | grid    | grid      | grid  | units | destination |
# rank | tag    | spacing | dimension | range |       | tag         |
################################################################################
# Grid specification for regridding
#rank spacing size range (min/max) units
map_regrid::
# example of a pair of 2D periodic grids
2  SRC  UNIFORM_PERIODIC  120  -3.14159  3.14159  RADIANS  &
        UNIFORM_POLE       90  -1.57     1.57     RADIANS  &
   DST  UNIFORM_PERIODIC  120  -180      180      DEG_E    &
        GAUSSIAN_POLE      88  -88       88       DEG_N    &
   FUNCTION  CONSTANT  2.1  0.1  END
::
\end{verbatim}
\end{center}
The first piece of information required for the file is the \texttt{map\_type} key. It should be set to \texttt{REGRID} to indicate that it is intended to define grids for a regridding test. Next is a configuration table which specifies multiple pairs of grids. The specification can be stretched over multiple table lines by use of the continuation symbol $\&$. The order of information is as follows:
\begin{center}
\begin{tabular}{| l |}
\hline
{\em parameters to define a grid for regridding } \\
\hline \hline
Grid rank \\
Source grid axis type \\
Source grid size \\
Source range of the coordinate axis \\
Source coordinate units \\
Destination grid axis type \\
Destination grid size \\
Destination range of the coordinate axis \\
Destination coordinate units \\
Test function and parameters \\
\hline
\end{tabular}
\end{center}
The regridding grid specifier file contains a pair of 2D grids. The source grid is a standard latitude-longitude grid on a spherical topology. The destination grid is a gaussian grid, also on a spherical topology. The source grid is in radians, while the destination grid is in degrees. The test function in this example sets a constant field. The one new piece of information in the regridding specifier files is the predefined test functions. The test functions provide a physical field to be interpolated and are generated as an analytical function of the grid coordinates.
Supported options include:
\begin{center}
\begin{tabular}{| l | l | l |}
\multicolumn{3}{c}{ Test functions } \\
\hline
{\em name } & {\em function} & {\em parameters} \\
\hline \hline
constant & set value at all locations & value, relative error \\
coordinate & set to a multiple of the coordinate values & scale, relative error \\
spherical harmonic & periodic function of the grid coordinates & amplitudes and phases, relative error \\
\hline
\end{tabular}
\end{center}
The CONSTANT test function sets the field to the value of the first parameter following the name. The second value is the relative error threshold. For example, the test function specification:
\begin{center}
\begin{verbatim}
&  FUNCTION  CONSTANT  2.1  0.1  END
\end{verbatim}
\end{center}
indicates that the field is set to the constant value of $2.1$, with a relative error threshold of $0.1$. This test function specification holds for grids of any rank.

The COORDINATE test function sets the field to the value of one of the grid coordinates multiplied by the value of the first parameter following the name. The second value is the relative error threshold. For example, the test function specification:
\begin{center}
\begin{verbatim}
&  FUNCTION  COORDINATEX  0.5  0.1  END
\end{verbatim}
\end{center}
indicates that the field is set to one half of the X coordinate values. For a 2D grid the coordinate options include COORDINATEX and COORDINATEY. For a 3D grid there is the additional option of COORDINATEZ. Again, the relative error threshold is $0.1$. This test function specification holds for grids of any rank.

The SPHERICAL\_HARMONIC test function sets the field to the periodic harmonic function $|a+b| + a*\cos(2\pi * k_x * x/L_x ) + b*\sin(2\pi * l_y * y/L_y )$. The parameters follow the order
\begin{center}
\begin{verbatim}
&  FUNCTION  SPHERICAL_HARMONIC  a  kx  b  ly  rel_error  END
\end{verbatim}
\end{center}
This test function specification is only valid for 2D grids.
%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Distribution Specification}
\label{sec:harness_distributionsspecifier}
The purpose of the distribution specification is to characterize the distribution of the grid. It differs from the grid specification in that it is designed to specify the distribution both as absolute values, such as $2 \times 3$ PETs, and in relative terms based on fractions of the total number of PETs. This second option is intended to allow the distribution space to scale with changing machine resources without having to change the distribution specification file. Here is an example of a distribution specification file for block and block cyclic distributions.
\begin{center}
\begin{verbatim}
##################################################
# descriptive | source | source | operator | destination | dest | operator | end
# string      | tag    | rank   | & value  | tag         | rank | & value  | tag
##################################################
# table specifying 2D to 2D distributions
distgrid_block_2d2d::
# example with two fixed distribution sizes
'(1,2)-->(2,1)'  'SRC' 2 'D1==' 1 'D2==' 2  'DST' 2 'D1==' 2 'D2==' 1 'END'
# example with one fixed and one variable distribution size
'(1,n)-->(n,1)'  'SRC' 2 'D1==' 1 'D2=*' 1  'DST' 2 'D1=*' 1 'D2==' 1 'END'
# example with variable distribution sizes
'(2n,n/2)-->(n/2,2n)'  'SRC' 2 'D1=*' 2 'D2=*' 0.5  &
                       'DST' 2 'D1=*' 0.5 'D2=*' 2 'END'
# another example with variable distribution sizes
'(2n,n/2)-->(2n,(n/2)-1)'  'SRC' 2 'D1=*' 2 'D2=*' 0.5  &
                           'DST' 2 'D1=*' 2 'D2=*' 0.5 'D2=+' 1 'END'
::

# table specifying 3D to 3D distributions
distgrid_block_3d3d::
# example with two fixed distribution sizes
'(1,2,1)-->(2,1,1)'  'SRC' 3 'D1==' 1 'D2==' 2 'D3==' 1  &
                     'DST' 3 'D1==' 2 'D2==' 1 'D3==' 1 'END'
::
\end{verbatim}
\end{center}
The contents of the file are tables which specify pairs of distributions. Each table specifies a particular rank of distributions. The first table consists of 2D to 2D distributions, while the second table consists of a 3D to 3D pair of distributions. The order of information is as follows:
\begin{center}
\begin{tabular}{| l |}
\hline
{\em parameters to define a grid distribution } \\
\hline \hline
Descriptive string \\
Source key \\
Source distribution rank \\
Source distribution axis and size for each dimension of the distribution \\
Destination key \\
Destination distribution rank \\
Destination distribution axis and size for each dimension of the distribution \\
Termination key \\
\hline
\end{tabular}
\end{center}
The first example in the table starts with a string used by the report as a brief description of the distribution, such as $(1,2)-->(2,1)$, which indicates a source distribution of $ 1 \times 2$ and a destination distribution of $ 2 \times 1$. Next comes a key word, either \texttt{SRC} or \texttt{DST}, to indicate the beginning of the source and destination descriptions, respectively. Following each tag is a specification of the distribution. In the first case a fixed source distribution is specified by the entries $D1==1$ and $D2==2$. This indicates that the source distribution is fixed to be $ 1 \times 2$, provided that the test is running with at least two processors. Likewise the destination distribution is fixed to $ 2 \times 1$.

The second example illustrates how to indicate a scalable distribution. Again the entry $D1==1$ indicates that the first dimension of the distribution is set to one, but the entry $D2=*1$ has a different meaning. It takes the total number of PETs, NPETS, and scales it by one. Therefore the source distribution becomes $ 1 \times NPETS$. It automatically scales with the number of PETs. Likewise, the destination distribution is set to $NPETS \times 1$.

The third example is completely dynamic. Since both $D1$ and $D2$ are scalable, each dimension starts with $N$ PETs, where $N$ is the square root of $NPETS$ rounded down to the nearest integer. Therefore $N\times N \leq NPETS$. So if $NPETS=4$, then $N=2$. If $NPETS=6$, then $N$ still equals $2$. This base value is then modified by the indicated entries.
In this case the source distribution is $2 N \times 1/2 N$, since the tag $D1=* 2$ indicates that the first dimension is the result of the base value being multiplied by two, and the second dimension is the result of the base value being multiplied by one half. Likewise, the destination distribution is set to $1/2 N \times 2N$, for any value of $NPETS$. For a rank three scalable distribution, $N$ is the cube root of NPETS rounded down to the nearest integer, and so on for higher rank distributions.

The fourth example illustrates the last option in the syntax. Again, since the distribution specifies two scalable distributions, $N$ is the square root of $NPETS$ rounded down to the nearest integer. The source distribution is exactly the same as in the third example, but the destination distribution has an additional entry $D2=+ 1$. The first dimension of the destination distribution is set to $2N$ by the entry $D1=* 2$. The second dimension is first set to $1/2 N$ by the entry $D2=* 0.5$, but then modified further to $(1/2)N + 1$ by the entry $D2=+ 1$. The resulting destination distribution is $2N \times ((1/2)N + 1)$. The operations that modify the size of the distribution space are applied in the order in which they appear. The entries $D2=* 0.5$ and $D2=+ 1$ are not identical to $D2=+ 1$ and $D2=* 0.5$; the latter would result in a dimension of size $(1/2)(N+1)$. Three operations are supported:
\begin{center}
\begin{tabular}{| c | c |}
\hline
\multicolumn{2}{|c|}{\em Distribution specification operations} \\
\hline \hline
D\# == & specify a fixed value \\
D\# =* & multiply a base value by a constant \\
D\# =+ & add a constant to the base value \\
\hline
\end{tabular}
\end{center}
%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Class Specification}
The class specific specifier file is currently unused, but is made available for future test expansion.
%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%
\subsection{Reporting test results}
The test harness offers the option of producing a human readable report on the test results. The report consists of a concise summary of the test configuration along with the test results. The test configuration is described in terms of the Field Taxonomy syntax and user provided strings. The intent is not to provide an exhaustive description of the test, but rather to provide a useful description of the failed tests.

Consider a pair of problem descriptor strings describing an ensemble of remapping tests.
\begin{center}
\begin{verbatim}
[ B1 G1; B2 G2 ] =C=> [ B1 G1; B2 G2 ] @{+,+}
[ B1 G1; B2 G2 ] =B=> [ B1 G1; B2 G2 ] @{+,+}
\end{verbatim}
\end{center}
Suppose the associated specifier files indicate that the source grid is rectilinear and is 100 X 50 in size. The destination grid is also rectilinear and is 80 X 20 in size. The remapping is conducted from the A-grid position of the source grid to the B-grid stagger of the destination grid. Both grids are block distributed in two ways, 1 X NPETS and NPETS X 1. And suppose that the first dimension of both the source and destination grids is periodic.
If the test succeeds for the conservative remapping, but fails for one of the first order bilinear remapping configurations, the reported results could look something like
\begin{center}
\begin{verbatim}
SUCCESS: [B1 G1; B2 G2 ] =C=> [B1 G1; B2 G2 ] @{+,+}
FAILURE: [B1{1} G1{100}+P; B2{npets} G2{50} ] =B=> [B1{1} G1{80}+P; B2{npets} G2{20} ] @{+,+} failure at line 101 of test.F90
SUCCESS: [ B1{npets} G1{100} +P; B2{1} G2{50} ] =B=> [ B1{npets} G1{80}+P; B2{1} G2{20} ] @{+,+}
\end{verbatim}
\end{center}
Notice that the problem descriptor string differs from that of the configuration files. This is because it represents specific realizations of the ensemble. For example
\begin{center}
\begin{verbatim}
[ B1{npets} G1{80}+P; B2{1} G2{20} ] @{+,+}
\end{verbatim}
\end{center}
indicates that the 2D block of memory is periodic in the first dimension by the addition of the $+P$ to the first grid key. The size of the grid is indicated as $80 \times 20$ by the size arguments appended to the grid indicators $G1 \{80 \}$ and $G2 \{ 20 \}$. The size of the distribution is indicated in the same way as $npets \times 1$ by the block distribution indicators $B1 \{ npets \}$ and $B2 \{ 1 \}$.

The report indicates that all the test configurations for the conservative remapping are successful. This is indicated by the key word SUCCESS, which is followed by the successful problem descriptor string. Since all of the tests in the first case pass, there is no need to include any of the specifier information. For the second ensemble of tests, one configuration passed, while the other failed. In this case, since there is a mixture of successes and failures, the report includes specifier information for all the configurations to help indicate the source of the test failure. The supplemental information is not a complete problem description, since it lacks items such as the physical coordinates of the grid and the nature of the test field, but it includes information crucial to isolating the failed test.
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Computing}, pdfauthor={Bill Last Updated:}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} 
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs}
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
  \thm@preskip=8pt plus 2pt minus 4pt
  \thm@postskip=\thm@preskip
}
\makeatother
\usepackage[]{natbib}
\bibliographystyle{apalike}

\title{Computing}
\author{Bill Last Updated:}
\date{22 March, 2020}

\begin{document}
\frontmatter
\maketitle

{
\setcounter{tocdepth}{1}
\tableofcontents
}
\mainmatter

\hypertarget{my-section}{%
\chapter*{Preface: Motivation}\label{my-section}}
\addcontentsline{toc}{chapter}{Preface: Motivation}

All the notes here are about computing. While I have tried my best, there are probably still some typos and errors. Please feel free to let me know if you find any. Thank you!

\hypertarget{monte-carlo-approximation}{%
\chapter{Monte Carlo approximation}\label{monte-carlo-approximation}}

Since GLMMs can use the EM algorithm in their maximum likelihood calculations (see McCulloch, 1994), it is practically useful to rehearse EM and other computing techniques.

Example: calculate the probability \(p(z>2)\) when \(z \sim N(0,1)\).

To use Monte Carlo approximation, we can use an indicator function, which determines whether each sample from \(N(0,1)\) is included in the calculation of the integral.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{Nsim=}\DecValTok{10}\OperatorTok{^}\DecValTok{4}
\NormalTok{indicator=}\ControlFlowTok{function}\NormalTok{(x)\{}
\NormalTok{y=}\KeywordTok{ifelse}\NormalTok{((x}\OperatorTok{>}\DecValTok{2}\NormalTok{),}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{)}
\KeywordTok{return}\NormalTok{(y)\}}
\NormalTok{newdata<-}\KeywordTok{rnorm}\NormalTok{(Nsim, }\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{ )}
\NormalTok{mc=}\KeywordTok{c}\NormalTok{(); v=}\KeywordTok{c}\NormalTok{(); upper=}\KeywordTok{c}\NormalTok{(); lower=}\KeywordTok{c}\NormalTok{()}
\ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{Nsim)}
\NormalTok{\{}
\NormalTok{mc[j]=}\KeywordTok{mean}\NormalTok{(}\KeywordTok{indicator}\NormalTok{(newdata[}\DecValTok{1}\OperatorTok{:}\NormalTok{j]))}
\NormalTok{v[j]=(j}\OperatorTok{^}\NormalTok{\{}\OperatorTok{-}\DecValTok{1}\NormalTok{\})}\OperatorTok{*}\KeywordTok{var}\NormalTok{(}\KeywordTok{indicator}\NormalTok{(newdata[}\DecValTok{1}\OperatorTok{:}\NormalTok{j]))}
\NormalTok{upper[j]=mc[j]}\OperatorTok{+}\FloatTok{1.96}\OperatorTok{*}\KeywordTok{sqrt}\NormalTok{(v[j])}
\NormalTok{lower[j]=mc[j]}\OperatorTok{-}\FloatTok{1.96}\OperatorTok{*}\KeywordTok{sqrt}\NormalTok{(v[j])}
\NormalTok{\}}
\KeywordTok{library}\NormalTok{(ggplot2)}
\NormalTok{values=}\KeywordTok{c}\NormalTok{(mc,upper,lower)}
\NormalTok{type=}\KeywordTok{c}\NormalTok{(}\KeywordTok{rep}\NormalTok{(}\StringTok{"mc"}\NormalTok{,Nsim),}\KeywordTok{rep}\NormalTok{(}\StringTok{"upper"}\NormalTok{,Nsim),}\KeywordTok{rep}\NormalTok{(}\StringTok{"lower"}\NormalTok{,Nsim))}
\NormalTok{iter=}\KeywordTok{rep}\NormalTok{(}\KeywordTok{seq}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\NormalTok{Nsim),}\DecValTok{3}\NormalTok{)}
\NormalTok{data=}\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{val=}\NormalTok{values, }\DataTypeTok{tp=}\NormalTok{type, }\DataTypeTok{itr=}\NormalTok{iter)}
\NormalTok{Rcode<-}\KeywordTok{ggplot}\NormalTok{(data,}\KeywordTok{aes}\NormalTok{(itr,val,}\DataTypeTok{col=}\NormalTok{tp))}\OperatorTok{+}\KeywordTok{geom_line}\NormalTok{(}\DataTypeTok{size=}\FloatTok{0.5}\NormalTok{)}
\NormalTok{Rcode}\OperatorTok{+}\KeywordTok{geom_hline}\NormalTok{(}\DataTypeTok{yintercept=}\DecValTok{1}\OperatorTok{-}\KeywordTok{pnorm}\NormalTok{(}\DecValTok{2}\NormalTok{),}\DataTypeTok{color=}\StringTok{"green"}\NormalTok{,}\DataTypeTok{size=}\FloatTok{0.5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Warning: Removed 2 rows containing missing values (geom_path).
\end{verbatim}

\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-1-1.pdf}

\hypertarget{importance-sampling}{%
\chapter{Importance sampling}\label{importance-sampling}}

In importance sampling, samples are generated from a distribution different from the distribution of interest. Specifically, assume that we want to calculate the expected value of \(h(x)\), and \(x \sim f(x)\).

\[E(h(x))=\int h(x) f(x) dx = \int h(x) \frac{f(x)}{g(x)} g(x) dx \]

We can sample \(x_i\) from \(g(x)\) and then calculate the mean of \(h(x_i) \frac{f(x_i)}{g(x_i)}\). Using the same example as above, we can use a shifted exponential distribution to help calculate the integral for the normal distribution.
Specifically,

\[\int_2^{\infty} \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}x^2}dx = \int_2^{\infty} \frac{\frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}x^2}}{e^{-(x-2)}} e^{-(x-2)}dx \]

The idea is that we can generate \(x_i\) from the shifted exponential distribution with density \(e^{-(x-2)}\) (for \(x > 2\)), and then insert them into the target ``expected value function'' \(\frac{\frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}x^2}}{e^{-(x-2)}}\). Thus, as you can see, importance sampling is based on the law of large numbers (i.e., if the same experiment or study is repeated independently a large number of times, the average of the results of the trials must be close to the expected value). We can use it to calculate an integral by linking the integral to the definition of an expected value.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{Nsim=}\DecValTok{10}\OperatorTok{^}\DecValTok{4}
\NormalTok{normal_density=}\ControlFlowTok{function}\NormalTok{(x)}
\NormalTok{\{y=(}\DecValTok{1}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(}\DecValTok{2}\OperatorTok{*}\NormalTok{pi))}\OperatorTok{*}\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\FloatTok{0.5}\OperatorTok{*}\NormalTok{(x}\OperatorTok{^}\DecValTok{2}\NormalTok{))}
\KeywordTok{return}\NormalTok{(y)\}}
\NormalTok{x=}\DecValTok{2}\OperatorTok{-}\KeywordTok{log}\NormalTok{(}\KeywordTok{runif}\NormalTok{(Nsim))}
\NormalTok{ImpS=}\KeywordTok{c}\NormalTok{(); v=}\KeywordTok{c}\NormalTok{(); upper=}\KeywordTok{c}\NormalTok{(); lower=}\KeywordTok{c}\NormalTok{()}
\ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{Nsim)}
\NormalTok{\{}
\NormalTok{ImpS[j]=}\KeywordTok{mean}\NormalTok{(}\KeywordTok{normal_density}\NormalTok{(x[}\DecValTok{1}\OperatorTok{:}\NormalTok{j])}\OperatorTok{/}\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{(x[}\DecValTok{1}\OperatorTok{:}\NormalTok{j]}\OperatorTok{-}\DecValTok{2}\NormalTok{)))}
\NormalTok{v[j]=(j}\OperatorTok{^}\NormalTok{\{}\OperatorTok{-}\DecValTok{1}\NormalTok{\})}\OperatorTok{*}\KeywordTok{var}\NormalTok{(}\KeywordTok{normal_density}\NormalTok{(x[}\DecValTok{1}\OperatorTok{:}\NormalTok{j])}\OperatorTok{/}\KeywordTok{exp}\NormalTok{(}\OperatorTok{-}\NormalTok{(x[}\DecValTok{1}\OperatorTok{:}\NormalTok{j]}\OperatorTok{-}\DecValTok{2}\NormalTok{)))}
\NormalTok{upper[j]=ImpS[j]}\OperatorTok{+}\FloatTok{1.96}\OperatorTok{*}\KeywordTok{sqrt}\NormalTok{(v[j])}
\NormalTok{lower[j]=ImpS[j]}\OperatorTok{-}\FloatTok{1.96}\OperatorTok{*}\KeywordTok{sqrt}\NormalTok{(v[j])}
\NormalTok{\}}
\KeywordTok{library}\NormalTok{(ggplot2)}
\NormalTok{values=}\KeywordTok{c}\NormalTok{(ImpS,upper,lower)}
\NormalTok{type=}\KeywordTok{c}\NormalTok{(}\KeywordTok{rep}\NormalTok{(}\StringTok{"mc"}\NormalTok{,Nsim),}\KeywordTok{rep}\NormalTok{(}\StringTok{"upper"}\NormalTok{,Nsim),}\KeywordTok{rep}\NormalTok{(}\StringTok{"lower"}\NormalTok{,Nsim))}
\NormalTok{iter=}\KeywordTok{rep}\NormalTok{(}\KeywordTok{seq}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\NormalTok{Nsim),}\DecValTok{3}\NormalTok{)}
\NormalTok{data=}\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{val=}\NormalTok{values, }\DataTypeTok{tp=}\NormalTok{type, }\DataTypeTok{itr=}\NormalTok{iter)}
\KeywordTok{ggplot}\NormalTok{(data,}\KeywordTok{aes}\NormalTok{(itr,val,}\DataTypeTok{col=}\NormalTok{tp))}\OperatorTok{+}\KeywordTok{geom_line}\NormalTok{(}\DataTypeTok{size=}\FloatTok{0.5}\NormalTok{)}\OperatorTok{+}
\KeywordTok{geom_hline}\NormalTok{(}\DataTypeTok{yintercept=}\DecValTok{1}\OperatorTok{-}\KeywordTok{pnorm}\NormalTok{(}\DecValTok{2}\NormalTok{),}\DataTypeTok{color=}\StringTok{"green"}\NormalTok{,}\DataTypeTok{size=}\FloatTok{0.5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Warning: Removed 2 rows containing missing values (geom_path).
\end{verbatim}

\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-2-1.pdf}

\hypertarget{newton-raphson-algorithm}{%
\chapter{Newton Raphson algorithm}\label{newton-raphson-algorithm}}

The main purpose of the Newton Raphson algorithm is to calculate the root of a function (e.g., \(x^2-3=0\)). We know that in order to find the MLE, we need to calculate the first derivative of the log-likelihood function and then set it to zero, \(\ell^{'}(x)=0\). Thus, we can use the same Newton Raphson method to help with the MLE maximization as well.

There are different ways to understand the Newton Raphson method, but I find the geometric approach the easiest way to explain it.

\begin{figure}
\centering
\includegraphics{Newton.jpg}
\caption{Credit of this figure: \url{https://www.math.ubc.ca/~anstee/math104/newtonmethod.pdf}}
\end{figure}

Specifically, suppose that you want to calculate the root of a function \(f(x)=0\). We assume the root is \(r\). However, we do not know that, and we randomly guess a point of \(a\). Thus, we can get a tangent line with slope \(f^{'}(a)\) through the point \((a,f(a))\). Since we know the slope and one of its points, we can write the function for this tangent line.

\[y-f(a)=f^{'}(a)(x-a)\]

To calculate the \(x\)-intercept, namely \(b\) in the figure, we can set \(y=0\) and get the following:

\[-f(a)=f^{'}(a)(x-a) \Rightarrow x \; (\text{i.e., } b)= a-\frac{f(a)}{f^{'}(a)}\]

If the difference \(|a-b|\) is large, we know that our original guess of \(a\) is not good. We had better use \(b\) as the next guess, and calculate its tangent line again. To generalize, we can write it as follows.

\[x_{t+1}=x_{t}-\frac{f(x_t)}{f^{'}(x_t)}\]

Okay, the method above finds the root of a function. For MLE, we can also use this method to find the root of \(\ell ^{'}=0\). We can write it as follows.

\[x_{t+1}=x_{t}-\frac{\ell^{'}(x_t)}{\ell^{''}(x_t)}\]

Often, \(x\) is not just a single unknown parameter, but a vector. For this case, we can write it as follows.

\[\beta_{t+1}=\beta_{t}-\frac{\ell^{'}(\beta_t)}{\ell^{''}(\beta_t)}\]

\hypertarget{calculate-the-root}{%
\subsection{Calculate the root}\label{calculate-the-root}}

\(x^3-5=0\)

Note that this is not a maximization problem in itself; it is a root-finding problem. However, we can treat \(x^3-5\) as if it were the first derivative of some objective function, that is, \(f^{'}(x)=x^3-5=0\), and apply the same update (which itself comes from a first order Taylor approximation). As we can see in the following plot, it converges very quickly.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{f_firstorder=}\ControlFlowTok{function}\NormalTok{(x)\{x}\OperatorTok{^}\DecValTok{3-5}\NormalTok{\}}
\NormalTok{f_secondorder=}\ControlFlowTok{function}\NormalTok{(x)\{}\DecValTok{3}\OperatorTok{*}\NormalTok{x}\OperatorTok{^}\DecValTok{2}\NormalTok{\}}
\NormalTok{x_old=}\DecValTok{1}\NormalTok{;tolerance=}\FloatTok{1e-3}
\NormalTok{max_its=}\DecValTok{2000}\NormalTok{;iteration=}\DecValTok{1}\NormalTok{;difference=}\DecValTok{2}
\NormalTok{c_iteration<-}\KeywordTok{c}\NormalTok{() }\CommentTok{## to collect numbers generated in the iteration process }
\ControlFlowTok{while}\NormalTok{(difference}\OperatorTok{>}\NormalTok{tolerance }\OperatorTok{&}\StringTok{ }\NormalTok{iteration}\OperatorTok{<}\NormalTok{max_its)\{}
\NormalTok{ x_updated=x_old}\OperatorTok{-}\NormalTok{(}\KeywordTok{f_firstorder}\NormalTok{(x_old)}\OperatorTok{/}\KeywordTok{f_secondorder}\NormalTok{(x_old))}
\NormalTok{ difference=}\KeywordTok{abs}\NormalTok{(x_updated}\OperatorTok{-}\NormalTok{x_old);}
\NormalTok{ iteration=iteration}\OperatorTok{+}\DecValTok{1}\NormalTok{;}
\NormalTok{ x_old=x_updated}
\NormalTok{ c_iteration<-}\KeywordTok{c}\NormalTok{(c_iteration,x_updated)\}}
\KeywordTok{plot}\NormalTok{(c_iteration,}\DataTypeTok{type=}\StringTok{"b"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-3-1.pdf}

\hypertarget{logistic-regression}{%
\subsection{Logistic regression}\label{logistic-regression}}

Suppose we have \(n\) observations and \(m\) variables.

\[\begin{bmatrix} x_{11} & x_{12} & x_{13} & ... & x_{1m}\\ x_{21} & x_{22} & x_{23} & ... & x_{2m} \\ ...\\ x_{n1} & x_{n2} & x_{n3} & ... & x_{nm} \end{bmatrix}\]

Typically, we add a vector of \(1\)s, which is used to estimate the constant.

\[\begin{bmatrix} 1 & x_{11} & x_{12} & x_{13} & ... & x_{1m}\\ 1 & x_{21} & x_{22} & x_{23} & ... & x_{2m} \\ ...\\ 1 & x_{n1} & x_{n2} & x_{n3} & ... & x_{nm} \end{bmatrix}\]

We also observe a vector of \(n\) values \(y_i\), which is a binary variable:

\[Y = \begin{bmatrix}1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ ...\\ 1 \\ \end{bmatrix}\]

Using the content from the MLE chapter, we can get:

\[\mathbf{L}=\prod_{i=1}^{n} p_i^{ y_i}(1-p_i)^{(1-y_i)}\]

Further, we can get a log-transformed format.

\[\log (\mathbf{L})=\sum_{i=1}^{n}[y_i \log (p_i) + (1-y_i) \log(1-p_i)]\]

Given that \(p_i=\frac{e^{\beta_0+\beta_1x_1+...+\beta_mx_m}}{1+e^{\beta_0+\beta_1x_1+...+\beta_mx_m}}=\frac{e^{\beta^Tx}}{1+e^{\beta^Tx}}\), we can rewrite it as follows:

\[\log (\mathbf{L})=\ell=\sum_{i=1}^{n}[y_i \log (\frac{e^{\beta^Tx_i}}{1+e^{\beta^Tx_i}}) + (1-y_i) \log(1-\frac{e^{\beta^Tx_i}}{1+e^{\beta^Tx_i}})]\]

Before taking the derivative, we define

\[\frac{e^{\beta^Tx_i}}{1+e^{\beta^Tx_i}} = p(\beta ^T x_i)\]

\[\log (\mathbf{L})=\ell=\sum_{i=1}^{n}[y_i \log (p(\beta ^T x_i)) + (1-y_i) \log(1-p(\beta ^T x_i))]\]

Note that \(\frac{\partial p(\beta ^T x_i)}{\partial (\beta ^T x_i)} = p(\beta ^T x_i)(1-p(\beta ^T x_i))\). We will use it later.
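As a quick check of this identity (nothing beyond the definitions above is used), write \(z=\beta^T x_i\), so that \(p(z)=\frac{e^{z}}{1+e^{z}}\) and \(1-p(z)=\frac{1}{1+e^{z}}\). Then

\[\frac{\partial p(z)}{\partial z} = \frac{e^{z}(1+e^{z})-e^{z}\cdot e^{z}}{(1+e^{z})^{2}} = \frac{e^{z}}{1+e^{z}} \cdot \frac{1}{1+e^{z}} = p(z)\,(1-p(z)).\]

With this identity in hand, the gradient of the log likelihood can be derived as follows.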
\[\begin{aligned} \nabla \ell &= \sum_{i=1}^{n} [y_i \frac{1}{p(\beta ^T x_i)} \frac{\partial p(\beta ^T x_i)}{\partial (\beta ^T x_i)}\frac{\partial (\beta ^T x_i)}{\partial \beta}+(1-y_i) \frac{1}{1-p(\beta ^T x_i)}(-1)\frac{\partial p(\beta ^T x_i)}{\partial (\beta ^T x_i)}\frac{\partial (\beta ^T x_i)}{\partial \beta}] \\ &= \sum_{i=1}^{n} x_i^T[y_i \frac{1}{p(\beta ^T x_i)} p(\beta ^T x_i)(1-p(\beta ^T x_i))+(1-y_i) \frac{1}{1-p(\beta ^T x_i)}(-1)p(\beta ^T x_i)(1-p(\beta ^T x_i))] \\ &= \sum_{i=1}^{n} x_i^T[y_i \frac{1}{p(\beta ^T x_i)} p(\beta ^T x_i)(1-p(\beta ^T x_i))-(1-y_i) \frac{1}{1-p(\beta ^T x_i)}p(\beta ^T x_i)(1-p(\beta ^T x_i))] \\ &= \sum_{i=1}^{n} x_i^T[y_i (1-p(\beta ^T x_i))-(1-y_i) p(\beta ^T x_i)] \\ &=\sum_{i=1}^{n} x_i^T[y_i-y_ip(\beta ^T x_i)-p(\beta ^T x_i)+y_i p(\beta ^T x_i)] \\ &=\sum_{i=1}^{n} x_i^T[y_i-p(\beta ^T x_i)] \\ &= \sum_{i=1}^{n} x_i^T[y_i-\frac{e^{\beta^Tx_i}}{1+e^{\beta^Tx_i}}] \end{aligned}\] As noted, the Newton Raphson algorithm needs the second order. \[\begin{aligned} \nabla^2 \ell &=\frac{\partial \sum_{i=1}^{n} x_i^T[y_i-p(\beta ^T x_i)]}{\partial \beta} \\ &=-\sum_{i=1}^{n} x_i^T\frac{\partial p(\beta ^T x_i) }{\partial \beta}\\ &=-\sum_{i=1}^{n} x_i^T\frac{\partial p(\beta ^T x_i) }{\partial (\beta^Tx_i)} \frac{\partial (\beta^Tx_i)}{\partial \beta}\\ &=-\sum_{i=1}^{n} x_i^T p(\beta ^T x_i)(1-p(\beta ^T x_i))x_i \end{aligned}\] The following are the data simulation (3 IVs and 1 DV) and Newton Raphson analysis. \begin{Shaded} \begin{Highlighting}[] \CommentTok{# Data generation} \KeywordTok{set.seed}\NormalTok{(}\DecValTok{123}\NormalTok{)} \NormalTok{n=}\DecValTok{500} \NormalTok{x1_norm<-}\KeywordTok{rnorm}\NormalTok{(n)} \NormalTok{x2_norm<-}\KeywordTok{rnorm}\NormalTok{(n,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{)} \NormalTok{x3_norm<-}\KeywordTok{rnorm}\NormalTok{(n,}\DecValTok{4}\NormalTok{,}\DecValTok{6}\NormalTok{)} \NormalTok{x_combined<-}\KeywordTok{cbind}\NormalTok{(}\DecValTok{1}\NormalTok{,x1_norm,x2_norm,x3_norm) }\CommentTok{# dimension: n*4} \NormalTok{coefficients_new<-}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{) }\CommentTok{#true regression coefficient} \NormalTok{inv_logit<-}\ControlFlowTok{function}\NormalTok{(x,b)\{}\KeywordTok{exp}\NormalTok{(x}\OperatorTok{%*%}\NormalTok{b)}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{+}\KeywordTok{exp}\NormalTok{(x}\OperatorTok{%*%}\NormalTok{b))\}} \NormalTok{prob_generated<-}\KeywordTok{inv_logit}\NormalTok{(x_combined,coefficients_new)} \NormalTok{y<-}\KeywordTok{c}\NormalTok{()} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{n) \{y[i]<-}\KeywordTok{rbinom}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,prob_generated[i])\}} \CommentTok{# Newton Raphson} \CommentTok{#We need to set random starting values.} \NormalTok{beta_old<-}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{)} \NormalTok{tolerance=}\FloatTok{1e-3} \NormalTok{max_its=}\DecValTok{2000}\NormalTok{;iteration=}\DecValTok{1}\NormalTok{;difference=}\DecValTok{2} \NormalTok{W<-}\KeywordTok{matrix}\NormalTok{(}\DecValTok{0}\NormalTok{,n,n)} \ControlFlowTok{while}\NormalTok{(difference}\OperatorTok{>}\NormalTok{tolerance }\OperatorTok{&}\StringTok{ }\NormalTok{iteration}\OperatorTok{<}\NormalTok{max_its)} \NormalTok{ \{} \CommentTok{# The first order} \NormalTok{ 
f_firstorder<-}\KeywordTok{t}\NormalTok{(x_combined)}\OperatorTok{%*%}\NormalTok{(y}\OperatorTok{-}\KeywordTok{inv_logit}\NormalTok{(x_combined,beta_old))}
\CommentTok{# The second order}
\KeywordTok{diag}\NormalTok{(W) =}\StringTok{ }\KeywordTok{inv_logit}\NormalTok{(x_combined,beta_old)}\OperatorTok{*}\NormalTok{(}\DecValTok{1}\OperatorTok{-}\KeywordTok{inv_logit}\NormalTok{(x_combined,beta_old))}
\NormalTok{ f_secondorder<-}\OperatorTok{-}\KeywordTok{t}\NormalTok{(x_combined)}\OperatorTok{%*%}\NormalTok{W}\OperatorTok{%*%}\NormalTok{x_combined}
\CommentTok{# Calculate the beta_updated}
\NormalTok{ beta_updated=beta_old}\OperatorTok{-}\NormalTok{(}\KeywordTok{solve}\NormalTok{(f_secondorder)}\OperatorTok{%*%}\NormalTok{f_firstorder)}
\NormalTok{ difference=}\KeywordTok{max}\NormalTok{(}\KeywordTok{abs}\NormalTok{(beta_updated}\OperatorTok{-}\NormalTok{beta_old));}
\NormalTok{ iteration=iteration}\OperatorTok{+}\DecValTok{1}\NormalTok{;}
\NormalTok{ beta_old=beta_updated\}}
\NormalTok{beta_old}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##              [,1]
##         0.9590207
## x1_norm 1.7974165
## x2_norm 3.0072303
## x3_norm 3.9578107
\end{verbatim}

\hypertarget{metropolis-hastings}{%
\chapter{Metropolis Hastings}\label{metropolis-hastings}}

Metropolis--Hastings is an MCMC method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. By using the samples, we can plot the distribution (through a histogram), or we can calculate an integral (e.g., when you need to calculate an expected value). (Side note: does this remind you of importance sampling? Very similar!)

Basic logic (my own summary):

\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item Set up a random starting value of \(x_0\).
\item Sample a \(y_0\) from the instrumental (proposal) distribution \(q(x)\).
\item Calculate the following:
\end{enumerate}

\(p =\frac{f(y_0)}{f(x_0)}\frac{q(x_0)}{q(y_0)}\)

\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\setcounter{enumi}{3}
\item \(\rho=\min(p, 1)\)
\item \(x_{1}=\begin{cases} y_0 & \text{with probability } \rho \\ x_0 & \text{with probability } 1-\rho \end{cases}\)
\item Repeat \(n\) times (\(n\) is set subjectively).
\end{enumerate}

Use a normal pdf to sample from a gamma distribution:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{alpha=}\FloatTok{2.7}\NormalTok{; beta=}\FloatTok{6.3} \CommentTok{# I randomly chose alpha and beta values for the target gamma function}
\NormalTok{Nsim=}\DecValTok{5000} \CommentTok{## define the number of iteration }
\NormalTok{X=}\KeywordTok{c}\NormalTok{(}\KeywordTok{rgamma}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{)) }\CommentTok{# initialize the chain from random starting numbers}
\NormalTok{mygamma<-}\ControlFlowTok{function}\NormalTok{(Nsim,alpha,beta)\{}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{Nsim)\{}
\NormalTok{ Y=}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{)}
\NormalTok{ rho=}\KeywordTok{dgamma}\NormalTok{(Y,alpha,beta)}\OperatorTok{*}\KeywordTok{dnorm}\NormalTok{(X[i}\DecValTok{-1}\NormalTok{])}\OperatorTok{/}\NormalTok{(}\KeywordTok{dgamma}\NormalTok{(X[i}\DecValTok{-1}\NormalTok{],alpha,beta)}\OperatorTok{*}\KeywordTok{dnorm}\NormalTok{(Y))}
\NormalTok{ X[i]=X[i}\DecValTok{-1}\NormalTok{] }\OperatorTok{+}\StringTok{ }\NormalTok{(Y}\OperatorTok{-}\NormalTok{X[i}\DecValTok{-1}\NormalTok{])}\OperatorTok{*}\NormalTok{(}\KeywordTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{)}\OperatorTok{<}\NormalTok{rho)}
\NormalTok{\}}
\NormalTok{X}
\NormalTok{\}}
\KeywordTok{hist}\NormalTok{(}\KeywordTok{mygamma}\NormalTok{(Nsim,alpha,beta), }\DataTypeTok{breaks =} \DecValTok{100}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-5-1.pdf}

\hypertarget{em}{%
\chapter{EM}\label{em}}

The EM algorithm is an iterative method to find ML or maximum a posteriori (MAP) estimates of parameters.

Direct Ref: \url{http://www.di.fc.ul.pt/~jpn/r/EM/EM.html}

Suppose that we only observe \(X\), and do not know \(Z\). We thus need to construct the complete data \((X, Z)\). Given \(p(Z|X,\theta)\), we can compute the likelihood of the complete dataset:

\[p(X, Z|\theta)=p(Z|X,\theta)p(X|\theta)\]

The EM algorithm:

\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\setcounter{enumi}{-1}
\item We have \(X\) and \(p(Z|X,\theta)\).
\item Randomly assign a \(\theta_0\), since we do not know it.
\item E-step: \(Q_{\theta_i} = E_{Z|X,\theta_i}[\log p(X,Z|\theta)]\)
\item M-step: compute \(\theta_{i+1} \leftarrow \arg\max Q_{\theta_i}\)
\item If \(\theta_i\) and \(\theta_{i+1}\) are not close enough, \(\theta_i \leftarrow \theta_{i+1}\). Go to step (2).
\end{enumerate}

For examples, you can refer to the following link: \url{http://www.di.fc.ul.pt/~jpn/r/EM/EM.html} (It is em\_R.r in the R\_codes folder. Personally, I can also refer to Quiz 2 in 536.)
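To make the steps concrete, here is a minimal illustrative sketch in R (it is not the em\_R.r example from the link above): EM for a two-component Gaussian mixture in which the variances are assumed known (equal to one), so only the means and the mixing weights are estimated. The observed data \(X\) are the draws \texttt{x}; the unobserved \(Z\) are the component labels.

\begin{verbatim}
# A minimal EM sketch (illustrative only): two-component Gaussian mixture,
# known unit variances; estimate the means and the mixing weights.
set.seed(123)
x  <- c(rnorm(200, -2, 1), rnorm(200, 3, 1))  # observed X; labels Z unobserved
mu <- c(-1, 1)       # step (1): rough starting values for the means
w  <- c(0.5, 0.5)    # starting mixing weights
for (it in 1:200) {
  # E-step: responsibilities p(Z = 1 | X, theta_i)
  d1 <- w[1] * dnorm(x, mu[1], 1)
  d2 <- w[2] * dnorm(x, mu[2], 1)
  r1 <- d1 / (d1 + d2)
  # M-step: weighted means and weights maximize the expected
  # complete-data log likelihood Q
  mu_new <- c(sum(r1 * x) / sum(r1), sum((1 - r1) * x) / sum(1 - r1))
  w_new  <- c(mean(r1), 1 - mean(r1))
  converged <- max(abs(mu_new - mu)) < 1e-8
  mu <- mu_new
  w  <- w_new
  if (converged) break
}
mu  # estimated means, close to -2 and 3
w   # estimated mixing weights, close to 0.5 and 0.5
\end{verbatim}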
\hypertarget{references}{% \chapter{References}\label{references}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item The UBC PDF about Newton \end{enumerate} \url{https://www.math.ubc.ca/~anstee/math104/newtonmethod.pdf} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \tightlist \item Some other pages about Newton and logistic regression \end{enumerate} \url{http://www.win-vector.com/blog/2011/09/the-simpler-derivation-of-logistic-regression/} \url{https://stats.stackexchange.com/questions/344309/why-using-newtons-method-for-logistic-regression-optimization-is-called-iterati} \url{https://tomroth.com.au/logistic/} \url{https://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf} \url{https://www.stat.cmu.edu/~cshalizi/402/lectures/14-logistic-regression/lecture-14.pdf} \url{http://hua-zhou.github.io/teaching/biostatm280-2017spring/slides/18-newton/newton.html} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item MH \end{enumerate} \url{https://www.youtube.com/watch?v=VGRVRjr0vyw} \hypertarget{twitter-example}{% \chapter{Twitter Example}\label{twitter-example}} The following is part of my course project for Stat 536. It aims to replicate part of the findings from Barbera (2015) Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data. Political Analysis 23 (1). Note that, the following model is much simpler than that in the original paper. \hypertarget{model}{% \section{Model}\label{model}} Suppose that a Twitter user is presented with a choice between following or not following another target \(j \in \{ 1, ..., m\}\). Let \(y_{j}=1\) if the user decides to follow \(j\), and \(y_{j}=0\) otherwise. \[y_{j}=\begin{cases} 1 & Following \\ 0 & Not Following \end{cases}\] \[p(y_{j}=1|\theta) = \frac{exp(- \theta_0|\theta_1 - x_j|^2)}{1+exp(- \theta_0|\theta_1 - x_j|^2)}\] We additionally know the priors of \(\theta\). \[\theta_i \sim N(0,10^2) (i = 0, 1)\] The likelihood function is as follows. \[L(Y|\theta)=\prod_{j=1}^{m} (\frac{exp(- \theta_0|\theta_1 - x_j|^2)}{1+exp(- \theta_0|\theta_1 - x_j|^2)})^{y_j}(1-\frac{exp(- \theta_0|\theta_1 - x_j|^2)}{1+exp(- \theta_0|\theta_1 - x_j|^2)})^{(1-y_j)}\] Thus, the posterior is as follows. 
\[L(Y|\theta) \cdot N(\theta_0|0,10) \cdot N(\theta_1|0,10)\] \[\propto \prod_{j=1}^{m} (\frac{exp(- \theta_0|\theta_1 - x_j|^2)}{1+exp(- \theta_0|\theta_1 - x_j|^2)})^{y_j}(1-\frac{exp(- \theta_0|\theta_1 - x_j|^2)}{1+exp(- \theta_0|\theta_1 - x_j|^2)})^{(1-y_j)}\cdot exp(-\frac{1}{2}(\frac{\theta_0}{10})^2)\cdot exp(-\frac{1}{2}(\frac{\theta_1}{10})^2)\] \begin{Shaded} \begin{Highlighting}[] \CommentTok{#Establish the function for logistic regression} \NormalTok{Expit<-}\ControlFlowTok{function}\NormalTok{(x)\{}\KeywordTok{exp}\NormalTok{(x)}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{+}\KeywordTok{exp}\NormalTok{(x))\}} \CommentTok{#Construct the posterior - in a log-format} \CommentTok{#To make sure that the estimate of theta_1 is stable, } \CommentTok{#the following code wants to make sure that theta_0 is always greater than zero.} \NormalTok{log_post<-}\ControlFlowTok{function}\NormalTok{(Y, X, theta)} \NormalTok{ \{} \ControlFlowTok{if}\NormalTok{(theta[}\DecValTok{1}\NormalTok{]}\OperatorTok{<=}\DecValTok{0}\NormalTok{)\{post=}\OperatorTok{-}\OtherTok{Inf}\NormalTok{\}} \ControlFlowTok{if}\NormalTok{(theta[}\DecValTok{1}\NormalTok{]}\OperatorTok{>}\DecValTok{0}\NormalTok{)\{} \NormalTok{ prob1<-}\KeywordTok{Expit}\NormalTok{(}\OperatorTok{-}\NormalTok{theta[}\DecValTok{1}\NormalTok{]}\OperatorTok{*}\NormalTok{((theta[}\DecValTok{2}\NormalTok{]}\OperatorTok{-}\NormalTok{X)}\OperatorTok{^}\DecValTok{2}\NormalTok{))} \NormalTok{ likelihood<-}\KeywordTok{sum}\NormalTok{(}\KeywordTok{dbinom}\NormalTok{(Y,}\DecValTok{1}\NormalTok{,prob1,}\DataTypeTok{log =} \OtherTok{TRUE}\NormalTok{))} \NormalTok{ priors<-}\KeywordTok{sum}\NormalTok{(}\KeywordTok{dnorm}\NormalTok{(theta,}\DecValTok{0}\NormalTok{,}\DecValTok{10}\NormalTok{,}\DataTypeTok{log=}\OtherTok{TRUE}\NormalTok{))} \NormalTok{ post=likelihood}\OperatorTok{+}\NormalTok{priors\}} \KeywordTok{return}\NormalTok{(post)} \NormalTok{ \}} \NormalTok{Bayes_logit<-}\ControlFlowTok{function}\NormalTok{ (Y,X,}\DataTypeTok{n_samples=}\DecValTok{2000}\NormalTok{)} \NormalTok{\{} \CommentTok{#Initial values} \NormalTok{ theta<-}\KeywordTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{,}\DecValTok{5}\NormalTok{)} \CommentTok{#store data} \NormalTok{ keep.theta<-}\KeywordTok{matrix}\NormalTok{(}\DecValTok{0}\NormalTok{,n_samples,}\DecValTok{2}\NormalTok{)} \NormalTok{ keep.theta[}\DecValTok{1}\NormalTok{,]<-theta} \CommentTok{#acceptance and rejection } \NormalTok{ acc<-att<-}\KeywordTok{rep}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{2}\NormalTok{)} \CommentTok{#current log posterior} \NormalTok{ current_lp<-}\KeywordTok{log_post}\NormalTok{(Y,X,theta)} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{2}\OperatorTok{:}\NormalTok{n_samples) } \NormalTok{ \{} \ControlFlowTok{for}\NormalTok{(j }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\DecValTok{2}\NormalTok{)} \NormalTok{ \{} \CommentTok{#attempt + 1} \NormalTok{ att[j]<-att[j]}\OperatorTok{+}\DecValTok{1} \NormalTok{ can_theta<-theta} \NormalTok{ can_theta[j]<-}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{,theta[j],}\FloatTok{0.5}\NormalTok{)} \CommentTok{#candidate of log posterior} \NormalTok{ candidate_lp<-}\KeywordTok{log_post}\NormalTok{(Y,X,can_theta)} \NormalTok{ Rho<-}\KeywordTok{min}\NormalTok{(}\KeywordTok{exp}\NormalTok{(candidate_lp}\OperatorTok{-}\NormalTok{current_lp),}\DecValTok{1}\NormalTok{)} \NormalTok{ Random_probability<-}\KeywordTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{)} \ControlFlowTok{if}\NormalTok{ 
(Random_probability}\OperatorTok{<}\NormalTok{Rho)}
\NormalTok{ \{}
\NormalTok{ theta<-can_theta}
\NormalTok{ current_lp<-candidate_lp}
\CommentTok{#acceptance + 1, as long as Random_probability<Rho}
\NormalTok{ acc[j]<-acc[j]}\OperatorTok{+}\DecValTok{1}
\NormalTok{ \}}
\NormalTok{ \}}
\CommentTok{#save theta}
\NormalTok{ keep.theta[i,]<-theta}
\NormalTok{ \}}
\CommentTok{#Return: including theta and acceptance rate}
\KeywordTok{list}\NormalTok{(}\DataTypeTok{theta=}\NormalTok{keep.theta,}\DataTypeTok{acceptance_rate=}\NormalTok{acc}\OperatorTok{/}\NormalTok{att)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

\hypertarget{simulating-data-of-senators-on-twitter}{%
\section{Simulating Data of Senators on Twitter}\label{simulating-data-of-senators-on-twitter}}

Assume that we have 100 senators, 50 Democrats and 50 Republicans, whose ideology we know. Assume that Democrats have negative ideology scores to indicate that they are more liberal, whereas Republicans have positive scores to indicate that they are more conservative. The following code simulates the senator data.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Republicans are more conservative, and they have positive numbers.}
\NormalTok{Republicans<-}\KeywordTok{c}\NormalTok{()}
\NormalTok{Republicans<-}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{50}\NormalTok{,}\DecValTok{1}\NormalTok{,}\FloatTok{0.5}\NormalTok{)}
\NormalTok{No_Republicans<-}\KeywordTok{rep}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{50}\NormalTok{,}\DecValTok{1}\NormalTok{)}
\NormalTok{Part_}\DecValTok{1}\NormalTok{<-}\KeywordTok{cbind}\NormalTok{(No_Republicans,Republicans)}
\CommentTok{# Democrats are more liberal, and they have negative numbers.}
\NormalTok{Democrats<-}\KeywordTok{c}\NormalTok{()}
\NormalTok{Democrats<-}\KeywordTok{rnorm}\NormalTok{(}\DecValTok{50}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\FloatTok{0.5}\NormalTok{)}
\NormalTok{No_Democrats<-}\KeywordTok{rep}\NormalTok{(}\DecValTok{51}\OperatorTok{:}\DecValTok{100}\NormalTok{,}\DecValTok{1}\NormalTok{)}
\NormalTok{Part_}\DecValTok{2}\NormalTok{<-}\KeywordTok{cbind}\NormalTok{(No_Democrats,Democrats)}
\NormalTok{Data_Elites<-}\KeywordTok{rbind}\NormalTok{(Part_}\DecValTok{1}\NormalTok{,Part_}\DecValTok{2}\NormalTok{)}
\NormalTok{Data_Elites<-}\KeywordTok{as.data.frame}\NormalTok{(Data_Elites)}
\KeywordTok{colnames}\NormalTok{(Data_Elites) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Elite_No"}\NormalTok{,}\StringTok{"Elite_ideology"}\NormalTok{)}
\KeywordTok{head}\NormalTok{(Data_Elites)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##   Elite_No Elite_ideology
## 1        1      1.0541992
## 2        2      0.3805544
## 3        3      1.3568577
## 4        4      0.9922547
## 5        5      1.0089966
## 6        6      0.8878271
\end{verbatim}

\hypertarget{simulating-data-of-conservative-users-on-twitter-and-model-testing}{%
\section{Simulating Data of Conservative Users on Twitter and Model Testing}\label{simulating-data-of-conservative-users-on-twitter-and-model-testing}}

Assume that we observe one Twitter user, who is more conservative. To simulate Twitter following data for this user, I assign this user to follow more Republican senators. Thus, if the Metropolis-Hastings algorithm works as intended, we would expect to see a positive estimated value for their ideology. Importantly, as we can see in the histogram below, the estimated value is indeed positive, providing preliminary evidence for the statistical model and the algorithm.
In addition, for the acceptance rate, we can see that the constant has a lower number than ideology, since we only accept a constant when it is positive. \begin{Shaded} \begin{Highlighting}[] \CommentTok{#This user approximately follows 45 Republican Senators and 10 Democrat Senators. } \NormalTok{Data_user<-}\KeywordTok{as.data.frame}\NormalTok{(}\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\KeywordTok{ifelse}\NormalTok{(}\KeywordTok{runif}\NormalTok{(}\DecValTok{50}\NormalTok{)}\OperatorTok{<}\NormalTok{.}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{),}\KeywordTok{ifelse}\NormalTok{(}\KeywordTok{runif}\NormalTok{(}\DecValTok{50}\NormalTok{)}\OperatorTok{<}\NormalTok{.}\DecValTok{8}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{))), }\DecValTok{100}\NormalTok{, }\DecValTok{1}\NormalTok{)} \KeywordTok{colnames}\NormalTok{(Data_user)<-}\KeywordTok{c}\NormalTok{(}\StringTok{"R_User"}\NormalTok{)} \NormalTok{Data_combined<-}\KeywordTok{cbind}\NormalTok{(Data_Elites,Data_user)} \NormalTok{X_data<-Data_combined}\OperatorTok{$}\NormalTok{Elite_ideology} \NormalTok{Y_data<-Data_combined}\OperatorTok{$}\NormalTok{R_User} \NormalTok{fit_C<-}\KeywordTok{Bayes_logit}\NormalTok{(Y_data,X_data)} \NormalTok{fit_C}\OperatorTok{$}\NormalTok{acceptance_rate} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.1320660 0.5557779 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(fit_C}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{1}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Constant (Conservative Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Iteration Process"}\NormalTok{,}\DataTypeTok{ylab=}\StringTok{"Estimated Scores"}\NormalTok{,}\DataTypeTok{type=}\StringTok{"l"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-8-1.pdf} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(fit_C}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{2}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Estimated Ideology Scores (Conservative Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Iteration Process"}\NormalTok{,}\DataTypeTok{ylab=}\StringTok{"Ideology Scores"}\NormalTok{,}\DataTypeTok{type=}\StringTok{"l"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-8-2.pdf} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{hist}\NormalTok{(fit_C}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{2}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Estimated Ideology Scores (Conservative Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Ideology Scores"}\NormalTok{,}\DataTypeTok{breaks =} \DecValTok{100}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-8-3.pdf} \hypertarget{simulating-data-of-liberal-users-on-twitter-and-model-testing}{% \section{Simulating Data of Liberal Users on Twitter and Model Testing}\label{simulating-data-of-liberal-users-on-twitter-and-model-testing}} To further verify the Metropolis Hastings algorithm, I plan to test the opposite estimate. Specifically, assume that we observe another user, who is more liberal. To simulate Twitter following data for this user, I assign this user to follow more Democrat senators. In this case, we would expect to see a negative value for their estimated ideology. 
As we can see in the histogram shown below, as expected, the estimated value is negative, providing convergent evidence for the model and the algorithm. \begin{Shaded} \begin{Highlighting}[] \CommentTok{#This user approximately follows 10 Republican Senators and 45 Democrat Senators. } \NormalTok{Data_user<-}\KeywordTok{as.data.frame}\NormalTok{(}\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\KeywordTok{ifelse}\NormalTok{(}\KeywordTok{runif}\NormalTok{(}\DecValTok{50}\NormalTok{)}\OperatorTok{<}\NormalTok{.}\DecValTok{8}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{),}\KeywordTok{ifelse}\NormalTok{(}\KeywordTok{runif}\NormalTok{(}\DecValTok{50}\NormalTok{)}\OperatorTok{<}\NormalTok{.}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{))), }\DecValTok{100}\NormalTok{, }\DecValTok{1}\NormalTok{)} \KeywordTok{colnames}\NormalTok{(Data_user)<-}\KeywordTok{c}\NormalTok{(}\StringTok{"L_User"}\NormalTok{)} \NormalTok{Data_combined<-}\KeywordTok{cbind}\NormalTok{(Data_Elites,Data_user)} \NormalTok{X_data<-Data_combined}\OperatorTok{$}\NormalTok{Elite_ideology} \NormalTok{Y_data<-Data_combined}\OperatorTok{$}\NormalTok{L_User} \NormalTok{fit_L<-}\KeywordTok{Bayes_logit}\NormalTok{(Y_data,X_data)} \NormalTok{fit_L}\OperatorTok{$}\NormalTok{acceptance_rate} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.1585793 0.5092546 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(fit_L}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{1}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Constant (Liberal Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Iteration Process"}\NormalTok{,}\DataTypeTok{ylab=}\StringTok{"Estimated Scores"}\NormalTok{,}\DataTypeTok{type=}\StringTok{"l"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-9-1.pdf} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{plot}\NormalTok{(fit_L}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{2}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Estimated Ideology Scores (Liberal Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Iteration Process"}\NormalTok{,}\DataTypeTok{ylab=}\StringTok{"Ideology Scores"}\NormalTok{,}\DataTypeTok{type=}\StringTok{"l"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-9-2.pdf} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{hist}\NormalTok{(fit_L}\OperatorTok{$}\NormalTok{theta[,}\DecValTok{2}\NormalTok{],}\DataTypeTok{main=}\StringTok{"Estimated Ideology Scores (Liberal Users)"}\NormalTok{,} \DataTypeTok{xlab=}\StringTok{"Ideology Scores"}\NormalTok{,}\DataTypeTok{breaks =} \DecValTok{100}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{bookdown-demo_files/figure-latex/unnamed-chunk-9-3.pdf} \backmatter \bibliography{book.bib,packages.bib} \end{document}
{ "alphanum_fraction": 0.7231473095, "avg_line_length": 56.4338138925, "ext": "tex", "hexsha": "35ec0f2eb994f33825408de84c434be73fc90a28", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8ccb9002349dbb06f30dc4712a92385dfc3cb721", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "williamdingpullman/computing", "max_forks_repo_path": "docs/bookdown-demo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8ccb9002349dbb06f30dc4712a92385dfc3cb721", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "williamdingpullman/computing", "max_issues_repo_path": "docs/bookdown-demo.tex", "max_line_length": 624, "max_stars_count": null, "max_stars_repo_head_hexsha": "8ccb9002349dbb06f30dc4712a92385dfc3cb721", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "williamdingpullman/computing", "max_stars_repo_path": "docs/bookdown-demo.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14476, "size": 43059 }
\lab{Complex Integration}{Integration in the Complex Plane}
\objective{Understand some simple uses of residues and singularities in the complex plane.}

In the previous lab, we looked at visual representations of the roots and singularities of complex functions.
Here we look more at singularities and what they can be used to compute.

\section*{Laurent Series and Singular Points}

We will now introduce another form of series representation of functions.
A Laurent series of a function is a series of the form
\[\sum_{n= -\infty}^{\infty} a_n (z-z_0)^n\]
It can be proven that
\[a_n = \frac{1}{2\pi i} \int_C \frac{f(z)}{(z-z_0)^{n+1}} dz\]
where $C$ is a contour which passes counterclockwise around the singularity exactly once.
When $f$ does not have a singularity at $z_0$ this representation degenerates to a normal Taylor Series (with the derivatives evaluated by the formula for the $n$th derivative of an analytic function).
These sorts of series are considered in greater detail in the text.

The built-in function \li{sympy.series} can evaluate the series expansion of a function at a singularity, for example
\begin{lstlisting}
import sympy as sy
z = sy.Symbol('z')
(1/sy.sin(z)).series(z,0,8)
\end{lstlisting}

The Laurent series representation provides a simple way to classify singularities in the complex plane.
Isolated singular points can be classified as removable singular points, poles, or essential singular points.
These definitions will also be discussed in greater detail in the text.

Here we will show another method for visualizing complex functions.
In Lab \ref{Lab:complex_intro} we presented color plots as a useful method for visualizing functions in the complex plane.
Here we will also show how to use surface plots to visualize the modulus of complex functions.
You have to take care when a surface plot is centered about a singularity.
We must account for the fact that the function is not going to be defined at all of the points we use in our graph.
We will also have to limit the $z$ axis on the plot to avoid creating a plot that is dominated exclusively by the extremely large and extremely small values of the function.
Plotting libraries like Mayavi and Matplotlib do allow thresholding of 3D plots via the use of floating point values of \li{nan}, but doing this may result in graphs having jagged edges where they have been cut.
We can avoid the jagged edges by artificially adding a small lip of constant values to the plot as well.
This can be done with Matplotlib as follows:
\begin{lstlisting}
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D #needed to create 3d plots

x_bounds, y_bounds = (-1.,1.), (-1.,1.)
res = 400
# Set the threshold at which to cut the 'z' values
threshold = 2.
# space in 'z' values to use to create a lip on the plot
lip = .5

x = np.linspace(x_bounds[0], x_bounds[1], res)
y = np.linspace(y_bounds[0], y_bounds[1], res)
X, Y = np.meshgrid(x, y, copy=False)
Z = 1 / (X + 1.0j * Y)
Z = np.abs(Z)

# Set the values between threshold and
# threshold + lip to be equal to threshold.
# This forms a somewhat more concrete
# edge at the top of the plot.
Z[(threshold+lip>Z)&(Z>threshold)] = threshold
# Do the same thing for the negative restriction on 'z'.
Z[(-threshold-lip<Z)&(Z<-threshold)] = -threshold
# Set anything that is larger to np.nan so it doesn't get plotted.
Z[np.absolute(Z) >= threshold + lip] = np.nan

# Now actually plot the data.
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap="coolwarm")
plt.show()
\end{lstlisting}

You may notice that this still leaves some very jagged edges around the singularity.
Much more detailed code would be needed to obtain good surface plots around a singular point.
Figure \ref{fig:inv_surfaces} shows surface plots of some relatively well-behaved singular points.

\begin{figure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=\textwidth]{absinvz.png}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=\textwidth]{absinvz2.png}
\end{subfigure}
\caption{Surface plots of the absolute value of $\frac{1}{z}$ and $\frac{1}{z^2}$ about the origin.}
\label{fig:inv_surfaces}
\end{figure}

\begin{problem}
Write a function that takes in a function, $x$ and $y$ bounds, and a resolution, and plots the modulus of the function as a 3D plot.
Remember to take into account that the plot may be centered around a singularity.
\end{problem}

\begin{comment}
Color plots, when used in their default form, suffer from similar limitations.
On the other hand, we can do things like taking the absolute value, then scaling the values of a function logarithmically to mitigate some of the effect that the rapid growth of the plot values has on the colors in the plot.
We can also allow the color map used by the plotting library to repeat itself as the values of the function get larger and larger.
This can be done by taking the sine of the values.
Combining these things together, we obtain expressions like $\sin\left(\log\left|\text{Re}\left(Z\right)\right|\right)$.

\begin{warn}
When generating color plots like this, you will have to check for floating point values of \li{nan} and \li{inf}.
One easy way to get rid of them is to set those values to zero.
\end{warn}

\begin{problem}
Write a function that produces color plots of $\sin\left(\log\left|f\left(z\right)\right|\right)$ for a given function $f$.
Have it accept bounds for the interval where you want the plot generated.
Also have an argument that tells the function whether to use the real part, imaginary part, or the absolute value of the function.
\end{problem}
\end{comment}

\section*{Residues}

The number $a_{-1}$ in the Laurent expansion for a function $f$ at a point $z_0$ is called the residue of $f$ at $z_0$.
The formula for the coefficients of the Laurent series provides one way to compute residues.
It is sometimes easier to evaluate a residue using limits or some other formula.
One good way to do this is to use the following formula: Let $f(z)=\frac{p(z)}{q(z)}$ and let $p$ and $q$ be holomorphic at $z_0$.
Let $p(z_0) \neq 0$ and $q(z_0)=0$.
Suppose that $z_0$ is a pole of order $1$ of $f$.
Then
\[\Res{z=z_0} f(z) = \frac{p(z_0)}{q'(z_0)}\]

A natural consequence of the Laurent series expansion of $f(z)$ and $f'(z)$ at a pole $z_0$ is that, where $d$ is the order of the pole at $z_0$,
\[- \Res{z=z_0} \frac{f'(z)}{f(z)} = d\]
This is a useful bit of information that may be used to simplify computation of residues or of the Laurent series expansion of a function since it allows us to avoid evaluating a long series of integrals we already know will evaluate to $0$.

There is also a natural relationship between the residues of a function at its poles and its partial fraction decomposition.
Let $f = \frac{p}{q}$ where $p$ and $q$ are polynomials, $q$ has no repeated roots, and the degree of $p$ is less than the degree of $q$.
This means that $f$ has a partial fraction representation \[f = \sum \frac{c_i}{z - z_i}\] where the $c_i$ are appropriately chosen coefficients and $z_i$ are the distinct zeros of $q$. Consider what happens when we take the residue of this sum. Since integrals distribute over sums, so do residues, so we may say \[\Res{z=z_i} f = \sum \Res{z=z_i} \frac{c_i}{z - z_i}\] Since the zeros of $q$ are distinct, and the function $\frac{1}{z - z_i}$ is holomorphic wherever $z \neq z_i$, we see that \[\Res{z=z_i} f = c_i \Res{z=z_i} \frac{1}{z-z_i} = c_i\] This means that we can use residues to compute partial fraction decompositions, so long as the function in the denominator does not have repeated roots. \begin{problem} Write a Python function that, given polynomial objects $p$ and $q$, computes the partial fraction decomposition of the function $\frac{p}{q}$. Assume that the degree of $p$ is less than the degree of $q$ and that $q$ has no repeated roots. Return two arrays. The first should contain the coefficients in the partial fraction decomposition. The second should contain the corresponding zeros of $q$. The \li{poly1d} object in NumPy makes it easy to work with polynomials: \begin{lstlisting} >>> from numpy import poly1d # To represent the polynomial y = x^2 - 1, we pass in a list of its coefficients, highest power to lowest power. >>> my_polynomial = poly1d([1., 0., -1]) >>> my_polynomial.roots array([ 1., -1.]) #The roots are 1 and -1 >>> my_polynomial.deriv() poly1d([ 2., 0.]) #The derivative is the polynomial y = 2x \end{lstlisting} \end{problem} \section*{Evaluating Indefinite Integrals Using Residues} One convenient use of residues is the evaluation of integrals that are difficult to evaluate symbolically in other ways. Often, when we cannot directly assign a value to one of these integrals, residues can still help us evaluate the Cauchy principal value of the integral. Recall that, for an integral $\int_{-\infty}^{\infty} f(x)dx$, the Cauchy principal value is $\lim_{r\to \infty} \int_{-r}^{r} f(x) dx$. This limit may exist, even though the integral itself may not. The methods for using residues to evaluate such integrals will be discussed in greater details in the text. Here we consider one example. \begin{figure} \includegraphics[width=\textwidth]{contour1.pdf} \caption{A contour used for integration using residues.} \label{complexint:c1} \end{figure} Consider the integral $\int_{-\infty}^{\infty}\frac{z^2}{z^4+1}$. Let $f(z)=\frac{z^2}{z^4+1}$. Notice that this function has poles at $e^{\frac{\pi i}{4}}$, $e^{\frac{3\pi i}{4}}$, $e^{\frac{5\pi i}{4}}$, and $e^{\frac{7\pi i}{4}}$. For notation, let these be $p_0$, $p_1$, $p_2$, and $p_3$. For some real $R>1$, consider the contour $C$ from $-R$ to $R$ and counterclockwise along the circle centered at $0$ of radius $R$ back to -$R$. This contour (a semi circle) is shown in Figure \ref{complexint:c1} with $R = 2$. Let $A$ be this second portion of $C$. Since $R>1$, $p_0$ and $p_1$ lie inside the contour, and we have \[\int_C f(z)dz = 2\pi i (\Res{z=p_0} f(z) +\Res{z=p_1} f(z))\] So, rewriting, we have \[\int_{-R}^R f(z) dz = 2\pi i (\Res{z=p_0} f(z) +\Res{z=p_1} f(z)) - \int_A f(z) dz\] so \[\int_{-\infty}^{\infty} f(z) dz = \lim_{R\to \infty} \int_{-R}^R f(z) dz = 2\pi i (\Res{z=p_0} f(z) +\Res{z=p_1} f(z)) - \lim_{R\to \infty} \int_A f(z) dz\] We would like to show that $\lim_{R\to\infty} \int_A f(z) dz = 0$, so note that on $A$, $\abs{z}=R$. 
With some effort, it follows from the triangle inequality that $\abs{z^4+1}\geq \abs{\abs{z}^4-1} = R^4 -1$, so we have that
\[\abs{\int_A f(z) dz}\leq \int_A \abs{f(z)} dz \leq \int_A \frac{R^2}{R^4 -1}dz = \pi R \frac{R^2}{R^4-1}\]
so $\lim_{R\to\infty} \int_A f(z) dz = 0$ as desired.
This then implies that
\[\int_{-\infty}^{\infty} f(z) dz = 2\pi i (\Res{z=p_0} f(z) +\Res{z=p_1} f(z))\]
Evaluating the residues at $p_0$ and $p_1$, we have
\[\int_{-\infty}^{\infty} f(z) dz = \frac{\pi}{\sqrt{2}}\]

\begin{comment}
If a function has a singularity on the real line, we can often still evaluate the value of $\int_{-\infty}^{\infty} f(z) dz$ using a similar argument as before, but we must now indent the path along the real axis around the singularity; then, as we take the limit as our outer contour moves out toward infinity, we can let the small contour around the singularity approach the singularity.
An example of a contour like this is shown in Figure \ref{complexint:c2}.
It is centered at the origin with $R=2$ on the outer circle and $R=\frac{1}{2}$ on the inner circle.

\begin{figure}
\includegraphics[width=\textwidth]{contour2.pdf}
\caption{An example of a contour that has been modified to avoid a singularity at the origin.}
\label{complexint:c2}
\end{figure}

The following is a useful theorem involving these ``indented path'' methods.

\begin{theorem}
Consider a function $f$ with a pole of order $1$ at $z=x_0$ with a Laurent series representation in a punctured disk of radius $R$ about $x_0$ and residue $B_0$ at $x_0$.
Let $C_r$ be the upper half of a circle $\abs{z-x_0}=r$ where $r<R$, oriented in the clockwise direction; then
\[\lim_{r\to 0} \int_{C_r} f(z) dz = - B_0 \pi i\]
\end{theorem}

As a consequence of this,
\end{comment}

We will not consider functions that have a singularity on the real line.
So, for a function $f$ on $\mathbb{C}$ with no poles on the real line, where $A$ is the sum of the residues of $f$ in the upper half plane,
\[\int_{-\infty}^{\infty} f(z) dz = 2\pi i A\]
whenever this integral exists.
When we can say that the integral of $f$ over the upper half of a circle centered at $0$ goes to $0$ as the radius of the circle increases, and we can also say that the Cauchy principal value of the integral of $f$ over $\mathbb{R}$ exists, this formula gives a proper numerical value for the Cauchy principal value of the integral of $f$ over $\mathbb{R}$.

\begin{problem}
Write a function that computes the sum $2\pi i A$ (where $A$ is defined as above) for a function $f$ of the form $\frac{p}{q}$ where $p$ and $q$ are polynomials of the same form as in Problem 2.
\end{problem}

Integration techniques using residues can also be extended to integration around branch points, some types of integrals involving sines and cosines, inverse Laplace transforms, and many other difficult integration problems.

\begin{problem}
In the text, the zero and pole counting formula was stated and proved.
An immediate consequence of that theorem is that, for a meromorphic function $f$ and a positively oriented simple closed curve $\gamma$ that does not pass through any zeros or poles of $f$,
\[\int_\gamma \frac{f'\left(z\right)}{f\left(z\right)} dz = 2 \pi i \left(a - b\right)\]
where $a$ is the number of zeros of $f$ on the interior of $\gamma$ (counting multiplicities) and $b$ is the number of poles of $f$ on the interior of $\gamma$ (also counting multiplicities).
This formula gives another way to count the number of zeros of a polynomial on the interior of a given contour, besides plotting the color plot and counting them manually as taught in the previous lab.
Write a Python function that counts the number of zeros of a polynomial on the interior of the unit circle and then displays the color plot of the polynomial on the unit square.
\end{problem}
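To close, here is an informal numerical check of the worked integral example from the previous section (this snippet is added purely as an illustration and is not one of the lab problems). The residues of $\frac{z^2}{z^4+1}$ at its simple poles are given by $\frac{p(z_0)}{q'(z_0)}$, and summing the residues in the upper half plane recovers $\int_{-\infty}^{\infty} \frac{z^2}{z^4+1} dz = \frac{\pi}{\sqrt{2}}$:
\begin{lstlisting}
import numpy as np
from numpy import poly1d

# f(z) = z^2 / (z^4 + 1)
p = poly1d([1., 0., 0.])          # numerator z^2
q = poly1d([1., 0., 0., 0., 1.])  # denominator z^4 + 1
dq = q.deriv()

# Residues p(z0) / q'(z0) at the simple poles in the upper half plane.
upper_poles = [z0 for z0 in q.roots if z0.imag > 0]
integral = 2j * np.pi * sum(p(z0) / dq(z0) for z0 in upper_poles)

print(integral.real)       # approximately 2.2214
print(np.pi / np.sqrt(2))  # the value pi / sqrt(2) derived above
\end{lstlisting}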
{ "alphanum_fraction": 0.7351928617, "avg_line_length": 61.6147186147, "ext": "tex", "hexsha": "8527658bf9e7593d405075b875daaf844199f7c4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-11-05T14:45:03.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-05T14:45:03.000Z", "max_forks_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "joshualy/numerical_computing", "max_forks_repo_path": "Vol1B/Complex2-Integration/Complex2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "joshualy/numerical_computing", "max_issues_repo_path": "Vol1B/Complex2-Integration/Complex2.tex", "max_line_length": 389, "max_stars_count": null, "max_stars_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "joshualy/numerical_computing", "max_stars_repo_path": "Vol1B/Complex2-Integration/Complex2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4083, "size": 14233 }
% \chapter{Further Study of GCN}
\section{Discussion of GCN}

Kipf et al.\cite{DBLP:journals/corr/KipfW16} modeled and trained the GCN on the whole graph, which reveals two shortcomings of the model. First, the whole graph must be stored in CPU/GPU memory in every epoch, which is computationally expensive and makes it hard to train on large-scale graph networks. Second, mini-batch gradient descent cannot be applied to the model, even though it would increase the stochasticity of training. (A small illustrative sketch of this full-batch propagation is given at the end of this chapter.)

\section{Possible Improvements}

Improvements based on mini-batch training, which involve considerable complexity and are beyond the mathematical ability of the author, are surveyed below through reviews of other research.

\subsection{GCNSAGE}

\textit{Inductive Representation Learning on Large Graphs} by Hamilton et al.\cite{DBLP:journals/corr/HamiltonYL17} presents the GCNSAGE model, which utilizes low-dimensional node embeddings together with sampling and aggregating node features. The implementation of each training iteration in GCNSAGE is given in Table \ref{implementation-gcnsage}.

\begin{table}[H]
    \centering
    \noindent\rule{.7\textwidth}{1pt}
    \medskip
    \begin{varwidth}{.7\textwidth}
    \begin{itemize}
        \item Randomly select a minibatch $\upsilon_B\in \upsilon_L$ of nodes;
        \item Build a computation graph that only contains the activations $h_{\upsilon}^{(l)}$ and $\bar{h}_{\upsilon}^{(l)}$ needed for the current minibatch;
        \item Get the predictions by forward propagation as $Z^{(l+1)} = \left(\hat{P}^{(l)}(H^{(l)} - \bar{H}^{(l)}) + P\bar{H}^{(l)}\right)W^{(l)}$;
        \item Get the gradients by backward propagation, and update the parameters by SGD;
        \item Update the historical activations;
    \end{itemize}
    \end{varwidth}
    \medskip
    \noindent\rule{.7\textwidth}{1pt}
    \caption{GCNSAGE Implementation}
    \label{implementation-gcnsage}
\end{table}

\subsection{FastGCN}

\textit{FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling} by Chen et al.\cite{DBLP:journals/corr/abs-1801-10247}

\subsection{N-GCN}

\cite{DBLP:journals/corr/abs-1802-08888}
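As referenced in the discussion above, the following is a minimal NumPy sketch of the standard full-batch propagation rule $H^{(l+1)} = \mathrm{ReLU}(\hat{A} H^{(l)} W^{(l)})$ of Kipf et al. This is a simplified illustration only, not code from any of the cited papers; the function name and the dense matrix representation are choices made for this sketch. Note that the normalized adjacency matrix and the activations of all nodes must be held in memory at every layer in every epoch, which is exactly the cost that the mini-batch methods reviewed above try to avoid.

\begin{verbatim}
import numpy as np

def gcn_layer(A, H, W):
    """One full-batch GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized (n x n) adjacency
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation
\end{verbatim}

Stacking two such layers, with a softmax in place of the final ReLU, recovers the two-layer model of Kipf et al.; every call still multiplies by the full $n \times n$ normalized adjacency, which is why the whole graph has to fit in memory.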
{ "alphanum_fraction": 0.728, "avg_line_length": 44.2708333333, "ext": "tex", "hexsha": "5dd6337ae59b73f56040fe7b7a4f181f456a7b11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e455bb554d22319b4dbbcc2430970b890a67faaa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "primus2019/BDTA-Course-Project", "max_forks_repo_path": "doc/chapters/chapter-3-further-study-of-gcn.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e455bb554d22319b4dbbcc2430970b890a67faaa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "primus2019/BDTA-Course-Project", "max_issues_repo_path": "doc/chapters/chapter-3-further-study-of-gcn.tex", "max_line_length": 408, "max_stars_count": null, "max_stars_repo_head_hexsha": "e455bb554d22319b4dbbcc2430970b890a67faaa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "primus2019/BDTA-Course-Project", "max_stars_repo_path": "doc/chapters/chapter-3-further-study-of-gcn.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 577, "size": 2125 }
\chapter[\texorpdfstring{UN$^\infty$ in Weakly Orthogonal Systems}{UN in Weakly Orthogonal Systems}]{\texorpdfstring{Unique Normal Forms in\\Weakly Orthogonal Systems}{Unique Normal Forms in Weakly Orthogonal Systems}}\label{chap:unwo}

Finitary orthogonal term rewriting systems have unique normal forms. In fact, weak orthogonality is enough to establish this property for finitary systems \citep[Chapter 4]{terese-03}. To what extent can these results be lifted to infinitary rewriting?

In the infinitary setting, orthogonal TRSs exhibit the infinitary unique normal forms (UN$^\infty$) property \citep{kennaway-95,klop-de-vrijer-05}. We might expect this property to generalise to weakly orthogonal systems. After all, the motivation for allowing trivial critical pairs in these systems is that, intuitively, they are witnesses of harmless overlap. However, this intuition turns out to be unjustified in the infinitary case.

\section{A Counterexample}\label{sec:counterexample}

We describe a simple counterexample showing that weak orthogonality does not imply the UN$^\infty$ property \citep{endrullis-10}. We work in a signature with unary function symbols $D$ and $U$.\footnote{We can think of $D$ and $U$ as `down' and `up'. The original formulation of this TRS uses $P$ and $S$ (`predecessor' and `successor'), but to avoid notational conflicts with the \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} constructor for \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\coqdocinductive{nat}} in \Coq, we proceed with this modification.}

In the notation of terms, we omit the brackets around arguments and assume right-associativity of function symbol application, e.g.\ writing $DUx$ for $D(U(x))$. A notation for finite repetitions of a function symbol $f$ terminated by a term $t$ is defined by
\begin{inparaenum}[(i)]
\item $f^0 t = t$ and
\item $f^{n+1} t = f f^n t$.
\end{inparaenum}
The infinite nesting $fff \ldots$ of $f$ is written $f^\omega$.

Consider the TRS consisting of the two left-linear rewrite rules $\rho_1$ and $\rho_2$:
\begin{align*}
\rho_1 \, : \, DUx \to x \qquad \qquad \qquad \rho_2 \, : \, UDx \to x
\end{align*}
This system has two critical pairs $\langle Dx, Dx \rangle$ and $\langle Ux, Ux \rangle$, both of which are trivial, establishing weak orthogonality.

The infinite term $\du = D^1 U^2 D^3 U^4 \ldots$ has two normal forms. It rewrites to $U^\omega$ in $\omega$ many $\rho_1$-steps and to $D^\omega$ in $\omega$ many $\rho_2$-steps. This contradicts UN$^\rewrites$ and therefore also UN$^\infty$.
% give the overlap from which the critical pairs come?

Other interesting properties of this TRS (e.g.\ weak normalisation is not preserved under rewriting) and a translation to the infinitary $\lambda \beta \eta$-calculus are discussed by \citet{endrullis-10}.

\subsection{\texorpdfstring{Rewriting $\du$ to $U^\omega$}{Rewriting DUUDDD... to UUU...}}\label{sub:counterexample}

We show briefly what rewriting $\du$ to $U^\omega$ amounts to. Rewriting $\du$ to $D^\omega$ is done in a similar way.

An obvious way to define $\du$ by corecursion is via auxiliary terms $\du'_n$ parameterised by $n$ as follows:
\begin{align*}
\du = \du'_0 \qquad \qquad \qquad \du'_n = U^n D^{n + 1} \du'_{n + 2}
\end{align*}
But a more useful definition for our present purposes, and the one we stick with, is the slight reformulation:
% TODO: why more useful?
\begin{align*} \du = \du'_0 \qquad \qquad \qquad \du'_n = D^{2 n + 1} U^{2 n + 2} \du'_{n + 1} \end{align*} For any term $t$ and natural numbers $n,m$ we have $U^n D^{m+1} U^{m+1} t \rightarrow_{\rho_1} U^n D^m U^m t$ and thus $U^n D^m U^m t \rewrites U^n t$ by iterating $m$ such steps. Instantiating $m$ with $2 n + 1$ and $t$ with $U \du'_{n + 1}$, we obtain %$S^n P^{2 n + 1} S^{2 n + 1} S \du'_{n + 1} \equiv S^n \du'_n$ $U^n \du'_n \rewrites U^{n+1} \du'_{n + 1}$ for any $n$. Concatenating these sequences, iterating $n$ from $0$ onwards, we arrive at $\du \rewrites U^\omega$. % TODO: why not use definitions: % vu(n) = U^n vd(n+1) % vd(n) = D^n vu(n+1) \section{The Counterexample in \Coq} We implement the counterexample from Section~\ref{sec:counterexample} using the \Coq development described in Chapter~\ref{chap:implementation}. The rewrite rules $\rho_1$ and $\rho_2$ are straightforwardly defined and shown to be left-linear. By a simple proof we obtain that all critical pairs are trivial and hence that the TRS is weakly orthogonal. \begin{singlespace} \begin{coqdoccode} %\coqdocnoindent %\coqdockw{Definition} %\coqdef{ExampleUNWO.UNWOtrs}{UNWO\_trs}{\coqdocdefinition{$\mathcal{R}$}} %:= %\coqdocdefinition{$\rho_1$} :: %\coqdocdefinition{$\rho_2$} :: %\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nil}{\coqdocconstructor{nil}}.\coqdoceol %\coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{wo$_\mathcal{R}$} : \coqref{Rewriting.weaklyorthogonal}{\coqdocdefinition{weakly\_orthogonal}} %\coqref{ExampleUNWO.UNWOtrs}{\coqdocdefinition{$\mathcal{R}$}}.\coqdoceol \coqdocdefinition{$\mathcal{R}$}.\coqdoceol \end{coqdoccode} \end{singlespace} We introduce the notation \begin{coqdoccode}\coqdocvariable{f} @ \coqdocvariable{t}\end{coqdoccode} to mean \begin{coqdoccode}\coqref{Term.Fun}{\coqdocconstructor{Fun}} \coqdocvariable{f} (\coqdocdefinition{vcons} \coqdocvariable{t} (\coqdocdefinition{vnil} \coqref{Term.term}{\coqdocinductive{term}}))\end{coqdoccode}. For brevity, in the following discussion we focus on the function symbol $U$ and omit analoguous definitions using the function symbol $D$. The infinite term $U^\omega$ is defined by corecursion and finite repetitions $U^n t$ are defined by recursion (and are assumed to generalise to contexts with the same notation). \begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{CoFixpoint} \coqdef{ExampleUNWO.repeatU}{repeat\_U}{\coqdocdefinition{U$^\omega$}} : \coqref{Term.term}{\coqdocinductive{term}} := \coqdocconstructor{U} @ \coqref{ExampleUNWO.repeatU}{\coqdocdefinition{U$^\omega$}}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Fixpoint} \coqdef{ExampleUNWO.Unt}{Unt}{\coqdocdefinition{U}}$^\coqdocvar{n}$ \coqdocvar{t} :=\coqdoceol \coqdocindent{1.00em} \coqdockw{match} \coqdocvariable{n} \coqdockw{with}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\coqdocconstructor{O}} \ensuremath{\Rightarrow} \coqdocvariable{t}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvar{n} \ensuremath{\Rightarrow} \coqdocconstructor{U} @ (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocvariable{t})\coqdoceol \coqdocindent{1.00em} \coqdockw{end}.\coqdoceol \end{coqdoccode} \end{singlespace} Unfortunately, $\du$ is not as easily defined. 
Although clearly productive, direct translations of the corecursive definitions in Subsection~\ref{sub:counterexample} do not satisfy \Coq's guardedness condition (see also Section~\ref{sec:guardedness}). The conclusion of a \emph{trial and error} approach is that we must use anonymous cofix constructions. The definition we proceed with is the following: % wording 'trial and error approach' is not so nice \begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{CoFixpoint} \coqdef{ExampleUNWO.psi'}{psi'}{\coqdocdefinition{$\du'$}} \coqdocvar{n} : \coqref{Term.term}{\coqdocinductive{term}} :=\coqdoceol \coqdocindent{1.00em} (\coqdocvar{cofix} \coqdocvar{Ds} (\coqdocvar{d} : \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\coqdocinductive{nat}}) :=\coqdoceol \coqdocindent{2.00em} \coqdockw{match} \coqdocvariable{d} \coqdockw{with}\coqdoceol \coqdocindent{2.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\coqdocconstructor{O}} \ensuremath{\Rightarrow} \coqdocconstructor{D} @ (\coqdocvar{cofix} \coqdocvar{Us} (\coqdocvar{u} : \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\coqdocinductive{nat}}) :=\coqdoceol \coqdocindent{7.50em} \coqdockw{match} \coqdocvariable{u} \coqdockw{with}\coqdoceol \coqdocindent{7.50em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\coqdocconstructor{O}} \ensuremath{\Rightarrow} \coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} (\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{n})\coqdoceol \coqdocindent{7.50em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvar{u} \ensuremath{\Rightarrow} \coqdocconstructor{U} @ \coqdocconstructor{U} @ (\coqdocvariable{Us} \coqdocvariable{u})\coqdoceol \coqdocindent{7.50em} \coqdockw{end}) (\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{n})\coqdoceol \coqdocindent{2.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvar{d} \ensuremath{\Rightarrow} \coqdocconstructor{D} @ \coqdocconstructor{D} @ (\coqdocvariable{Ds} \coqdocvariable{d})\coqdoceol \coqdocindent{2.00em} \coqdockw{end}) \coqdocvariable{n}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Definition} \coqdef{ExampleUNWO.psi}{psi}{\coqdocdefinition{$\du$}} := \coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} 0.\coqdoceol \end{coqdoccode} \end{singlespace} We now prove that $U^\omega$ and $D^\omega$ are (distinct) normal forms. This is a tedious proof that essentially consists of analysing the occurrence of a redex in these terms, yielding that there can not be such an occurrence. 
\begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{nf$_{\text{U}^\omega}$} : \coqref{Rewriting.normalform}{\coqdocdefinition{normal\_form}} (\coqdocvar{system} := \coqdocdefinition{$\mathcal{R}$}) \coqref{ExampleUNWO.repeatU}{\coqdocdefinition{U$^\omega$}}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{nf$_{\text{D}^\omega}$} : \coqdocdefinition{normal\_form} (\coqdocvar{system} := \coqdocdefinition{$\mathcal{R}$}) \coqdocdefinition{D$^\omega$}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{neq$^{\text{U}^\omega}_{\text{D}^\omega}$} : \ensuremath{\lnot} \coqref{ExampleUNWO.repeatU}{\coqdocdefinition{U$^\omega$}} \coqref{TermEquality.termbis}{$\bis$} \coqdocdefinition{D$^\omega$}.\coqdoceol \end{coqdoccode} \end{singlespace} Constructing a rewrite sequence from $\du$ to $U^\omega$ is done in much the same way as described in Subsection~\ref{sub:counterexample}. First, we define the parameterised step that is used in the rewrite sequence. It eliminates one pair of $D, U$ constructors in a term of the form $U^n D^{m+1} U^{m+1} t$. The omitted argument of the \coqref{Rewriting.Step}{\coqdocconstructor{Step}} constructor (denoted by \coqdoclemma{\_}) is a proof of $\rho_1 \in \mathcal{R}$. Note that \coqdocvariable{x} is the variable that is used in both rewrite rules. %TODO: names of definitions are not so nice and we need some notes %about the fact lemmas \begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Definition} \coqdef{ExampleUNWO.sigma}{sigma}{\coqdocdefinition{$\sigma$}} \coqdocvar{t} (\coqdocvar{y} : \coqdocdefinition{X}) : \coqref{Term.term}{\coqdocinductive{term}} :=\coqdoceol \coqdocindent{1.00em} \coqdockw{match} \coqdocdefinition{beq\_var} \coqdocvariable{y} \coqdocvariable{x} \coqdockw{with}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{true}{\coqdocconstructor{true}} \ensuremath{\Rightarrow} \coqdocvariable{t}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{false}{\coqdocconstructor{false}} \ensuremath{\Rightarrow} \coqref{Term.Var}{\coqdocconstructor{Var}} \coqdocvariable{y}\coqdoceol \coqdocindent{1.00em} \coqdockw{end}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdef{ExampleUNWO.facttermbisUmDSnUSnt}{fact\_term\_bis\_UmDSnUSnt}{\coqdoclemma{fact$_\pi^\text{source}$}} : \ensuremath{\forall} (\coqdocvar{n} \coqdocvar{m} : \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\coqdocinductive{nat}}) (\coqdocvar{t} : \coqref{Term.term}{\coqdocinductive{term}}),\coqdoceol \coqdocindent{1.00em} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ $\Box$)[\coqref{Substitution.substitute}{\coqdocdefinition{substitute}} (\coqref{ExampleUNWO.sigma}{\coqdocdefinition{$\sigma$}} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t})) (\coqref{Rewriting.lhs}{\coqdocprojection{lhs}} \coqdocdefinition{$\rho_1$})] \coqref{TermEquality.termbis}{$\bis$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^{\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{m}}$ \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^{\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{m}}$ \coqdocvariable{t}.\coqdoceol 
\coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdef{ExampleUNWO.facttermbisUmDnUnt}{fact\_term\_bis\_UmDnUnt}{\coqdoclemma{fact$_\pi^\text{target}$}} : \ensuremath{\forall} (\coqdocvar{n} \coqdocvar{m} : \coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\coqdocinductive{nat}}) (\coqdocvar{t} : \coqref{Term.term}{\coqdocinductive{term}}),\coqdoceol \coqdocindent{1.00em} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ $\Box$)[\coqref{Substitution.substitute}{\coqdocdefinition{substitute}} (\coqref{ExampleUNWO.sigma}{\coqdocdefinition{$\sigma$}} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t})) (\coqref{Rewriting.rhs}{\coqdocprojection{rhs}} \coqdocdefinition{$\rho_1$})] \coqref{TermEquality.termbis}{$\bis$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Definition} \coqdef{ExampleUNWO.pi}{pi}{\coqdocdefinition{$\pi$}} \coqdocvar{n} \coqdocvar{m} \coqdocvar{t} : \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^{\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{m}}$ \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^{\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{m}}$ \coqdocvariable{t} \coqref{Rewriting.step}{$\rightarrow_\mathcal{R}$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t} :=\coqdoceol \coqdocindent{1.00em} \coqref{Rewriting.Step}{\coqdocconstructor{Step}} \coqdocdefinition{$\rho_1$} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ $\Box$) (\coqref{ExampleUNWO.sigma}{\coqdocdefinition{$\sigma$}} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t})) \coqdoclemma{\_} (\coqref{ExampleUNWO.facttermbisUmDSnUSnt}{\coqdoclemma{fact$_\pi^\text{source}$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t}) (\coqref{ExampleUNWO.facttermbisUmDnUnt}{\coqdoclemma{fact$_\pi^\text{target}$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t}).\coqdoceol \end{coqdoccode} \end{singlespace} Generalising these rewrite steps \coqref{ExampleUNWO.pi}{\coqdocdefinition{$\pi$}}, we construct the rewrite sequences \coqref{ExampleUNWO.phia}{\coqdocdefinition{$\varphi_a$}}. In their recursive definition, the \coqdocdefinition{snoc} function is used to prepend \begin{coqdoccode}(\coqref{ExampleUNWO.pi}{\coqdocdefinition{$\pi$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t})\end{coqdoccode} to \begin{coqdoccode}(\coqref{ExampleUNWO.phia}{\coqdocdefinition{$\varphi_a$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t})\end{coqdoccode}. Doing some arithmetic, we obtain that these rewrite sequences can be used to define rewrite sequences \coqref{ExampleUNWO.phib}{\coqdocdefinition{$\varphi_b$}} of a more useful type. 
\begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Fixpoint} \coqdef{ExampleUNWO.phia}{phia}{\coqdocdefinition{$\varphi_a$}} \coqdocvar{n} \coqdocvar{m} \coqdocvar{t} : \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocdefinition{D}$^\coqdocvariable{m}$ \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{m}$ \coqdocvariable{t} \coqref{Rewriting.sequence}{$\rewrites_\mathcal{R}$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocvariable{t} :=\coqdoceol \coqdocindent{1.00em} \coqdockw{match} \coqdocvariable{m} \coqdockw{with}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqdocconstructor{O} \ensuremath{\Rightarrow} \coqref{Rewriting.Nil}{\coqdocconstructor{Nil}} (\coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ \coqdocvariable{t})\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqdocconstructor{S} \coqdocvar{m} \ensuremath{\Rightarrow} \coqref{Rewriting.snoc}{\coqdocdefinition{snoc}} (\coqref{ExampleUNWO.pi}{\coqdocdefinition{$\pi$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t}) (\coqref{ExampleUNWO.phia}{\coqdocdefinition{$\varphi_a$}} \coqdocvariable{n} \coqdocvariable{m} \coqdocvariable{t})\coqdoceol \coqdocindent{1.00em} \coqdockw{end}.\coqdoceol \coqdocemptyline \coqdocnoindent %\coqdockw{Program Definition} \coqdockw{Definition} % here the S(2 n) is not correct (?) -- i think it is \coqdef{ExampleUNWO.phib}{phib}{\coqdocdefinition{$\varphi_b$}} \coqdocvar{n} : \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ (\coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} \coqdocvariable{n}) \coqref{Rewriting.sequence}{$\rewrites_\mathcal{R}$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^{\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{n}}$ (\coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} (\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{n})) :=\coqdoceol \coqdocindent{1.00em} \coqref{ExampleUNWO.phia}{\coqdocdefinition{$\varphi_a$}} \coqdocvariable{n} (\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} (2 $\times$ \coqdocvariable{n})) (\coqdocdefinition{U} @ \coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} (\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\coqdocconstructor{S}} \coqdocvariable{n})).\coqdoceol \end{coqdoccode} \end{singlespace} We concatenate all rewrite sequences \coqref{ExampleUNWO.phib}{\coqdocdefinition{$\varphi_b$}} to construct rewrite sequences from $\du$ to a term that is equal to $U^\omega$ up to any given depth. 
\begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Fixpoint} \coqdef{ExampleUNWO.phic}{phic}{\coqdocdefinition{$\varphi_c$}} \coqdocvar{n} : \coqref{ExampleUNWO.psi}{\coqdocdefinition{$\du$}} \coqref{Rewriting.sequence}{$\rewrites_\mathcal{R}$} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ (\coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} \coqdocvariable{n}) :=\coqdoceol \coqdocindent{1.00em} \coqdockw{match} \coqdocvariable{n} \coqdockw{with}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqdocconstructor{O} \ensuremath{\Rightarrow} \coqref{Rewriting.Nil}{\coqdocconstructor{Nil}} \coqref{ExampleUNWO.psi}{\coqdocdefinition{$\du$}}\coqdoceol \coqdocindent{1.00em} \ensuremath{|} \coqdocconstructor{S} \coqdocvar{n} \ensuremath{\Rightarrow} \coqref{Rewriting.append}{\coqdocdefinition{concat}} (\coqref{ExampleUNWO.phic}{\coqdocdefinition{$\varphi_c$}} \coqdocvariable{n}) (\coqref{ExampleUNWO.phib}{\coqdocdefinition{$\varphi_b$}} \coqdocvariable{n})\coqdoceol \coqdocindent{1.00em} \coqdockw{end}.\coqdoceol \end{coqdoccode} \end{singlespace} The definition of the final rewrite sequence \coqref{ExampleUNWO.phi}{\coqdocdefinition{$\varphi$}} is done by combining \coqref{ExampleUNWO.phic}{\coqdocdefinition{$\varphi_c$}} with a proof that the target terms converge to $U^\omega$. \begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Lemma} \coqdef{ExampleUNWO.conv}{conv}{\coqdoclemma{conv$_{\varphi_c}$}} : \coqref{Rewriting.converges}{\coqdocdefinition{converges}} (\coqdockw{fun} \coqdocvar{n} \ensuremath{\Rightarrow} \coqref{ExampleUNWO.Unt}{\coqdocdefinition{U}}$^\coqdocvariable{n}$ (\coqref{ExampleUNWO.psi'}{\coqdocdefinition{$\du'$}} \coqdocvariable{n})) \coqref{ExampleUNWO.repeatU}{\coqdocdefinition{U$^\omega$}}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Definition} \coqdef{ExampleUNWO.phi}{phi}{\coqdocdefinition{$\varphi$}} : \coqref{ExampleUNWO.psi}{\coqdocdefinition{$\du$}} \coqref{Rewriting.sequence}{$\rewrites_\mathcal{R}$} \coqref{ExampleUNWO.repeatU}{\coqdocdefinition{U$^\omega$}} := \coqref{Rewriting.Lim}{\coqdocconstructor{Lim}} \coqref{ExampleUNWO.phic}{\coqdocdefinition{$\varphi_c$}} \coqref{ExampleUNWO.conv}{\coqdoclemma{conv$_{\varphi_c}$}}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{wf$_\varphi$} : \coqref{Rewriting.wf}{\coqdocdefinition{wf}} \coqref{ExampleUNWO.phi}{\coqdocdefinition{$\varphi$}}.\coqdoceol \end{coqdoccode} \end{singlespace} We can prove $\du \rewrites D^\omega$ in a similar way and conclude by proving our main theorem. \begin{singlespace} \begin{coqdoccode} \coqdocnoindent \coqdockw{Lemma} \coqdoclemma{no\_un$_\mathcal{R}$} : \ensuremath{\lnot} \coqref{Rewriting.uniquenormalforms}{\coqdocdefinition{unique\_normal\_forms}} \coqdocdefinition{$\mathcal{R}$}.\coqdoceol \coqdocemptyline \coqdocnoindent \coqdockw{Theorem} \coqdoclemma{no\_un\_wo} : \ensuremath{\lnot} \ensuremath{\forall} \coqdocvar{F} \coqdocvar{X} \coqdocvar{$\mathcal{R}$},\coqdoceol \coqdocindent{1.00em} \coqdocdefinition{weakly\_orthogonal} (\coqdocvar{F} := \coqdocvariable{F}) (\coqdocvar{X} := \coqdocvariable{X}) \coqdocvariable{$\mathcal{R}$} \ensuremath{\rightarrow} \coqdocdefinition{unique\_normal\_forms} \coqdocvariable{$\mathcal{R}$}.\coqdoceol \end{coqdoccode} \end{singlespace}
{ "alphanum_fraction": 0.7564821768, "avg_line_length": 43.6204238921, "ext": "tex", "hexsha": "27c1308d0699a54c777570a9b247a6fb599d7773", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "martijnvermaat/documents", "max_forks_repo_path": "vu/master-project/unwo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "martijnvermaat/documents", "max_issues_repo_path": "vu/master-project/unwo.tex", "max_line_length": 158, "max_stars_count": 1, "max_stars_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "martijnvermaat/documents", "max_stars_repo_path": "vu/master-project/unwo.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-28T14:38:06.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-28T14:38:06.000Z", "num_tokens": 8446, "size": 22639 }
\section{Introduction}
\label{sec:introduction}

\subsection{Product overview}
\label{sec:overview}

\Friending{} is an online dating, friendship, and social networking mobile application that features user-created questionnaires and multiple-choice questions. \Friending{} has two primary features: joining groups to find people similar to you and registering for events happening in your local area.

You can create a group around one of your interests, then define questionnaires that can be used to match other members of the group. You can join these groups and fill in questionnaires to be matched with members of the group. The questionnaire answers are used to determine a mutual match based on your responses and those of other members. You will receive a notification if a mutual match is found.

Each event is managed by an event host who controls the event. Each event has a questionnaire that you will fill out to join. You can invite people to join your event, and when you are ready, start the process of pairing up people in the event. The participants of your event will then be matched into pairs. You will receive a notification when a match is set.

\subsection{Home Pages}
\label{sec:homepages}

When opening \Friending{} for the first time, you will be presented with \Screenshot{Start} (Figure~\ref{fig:start}). If you have already signed in before, you will be taken to \Screenshot{Sign In} (Figure~\ref{fig:signin}):

\Screenshot{Start} is the opening page for \Friending{}. It allows you to learn more about \Friending{} through a carousel. Simply \action{Tap} on the \keys{Start the tour} button, and you will be taken to \Screenshot{Onboarding} (Figure~\ref{fig:onboarding}). You can \action{Swipe Left} or \action{Swipe Right} to move through the descriptions to learn more about \Friending. When you are ready to begin, \action{Tap} on the \keys{Continue} button. You will be taken to \Screenshot{Sign Up} (Figure~\ref{fig:signup}). When you have successfully created your account or logged in, you will be presented with \Screenshot{Home} (Figure~\ref{fig:home}):

When you are in the application, you can use the \Screenshot{Navigation Menu} (Figure~\ref{fig:navigation}) to quickly navigate throughout the application. \action{Tap} on the \appbutton{menu} button to open the \Screenshot{Navigation Menu}. The \Screenshot{Navigation Menu} is accessible on top-level pages such as, but not limited to, \Screenshot{Home} (Figure~\ref{fig:home}), \Screenshot{Groups} (Figure~\ref{fig:home}) and \Screenshot{My Questionnaires} (Figure~\ref{fig:home}). The links will take you to the following locations:

\begin{itemize}
\item Profile $\rightarrow$ Profile Settings (Figure 7.1).
\item Preferences $\rightarrow$ Profile Settings (Figure 7.2).
\item Notifications $\rightarrow$ Notifications (Figure 10.5).
\item Events $\rightarrow$ Parties (Figure 9.3).
\item Groups $\rightarrow$ Community Search (Figure 8.1).
\item Questionnaires $\rightarrow$ Community Templates (Figure 11.4).
\item Billing $\rightarrow$ Settings (Figure 6.1).
\item Settings $\rightarrow$ Settings (Figure 5.3).
\item Sign out $\rightarrow$ Start Page (Figure 1.1).
\end{itemize}
{ "alphanum_fraction": 0.7708658956, "avg_line_length": 74.3953488372, "ext": "tex", "hexsha": "a89a6c348090f7a22a3a75900f215de3f68bab3d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-20T00:12:14.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-20T00:12:14.000Z", "max_forks_repo_head_hexsha": "a670c70ccd63de262aa18ef2b952f7675e8fc83d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jrbeverly/friending-manual", "max_forks_repo_path": "src/section/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a670c70ccd63de262aa18ef2b952f7675e8fc83d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jrbeverly/friending-manual", "max_issues_repo_path": "src/section/introduction.tex", "max_line_length": 484, "max_stars_count": null, "max_stars_repo_head_hexsha": "a670c70ccd63de262aa18ef2b952f7675e8fc83d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jrbeverly/friending-manual", "max_stars_repo_path": "src/section/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 768, "size": 3199 }
\chapter*{Acknowledgements}

My foremost thanks go to my supervisor, Prof. Mark Plumbley, for his enthusiasm, encouragement, wisdom and guidance over the years. Huge thanks also to my co-supervisors: Dr. Nick Bryan-Kinns, Dr. Janko \'{C}ali\'{c} and Prof. David Frohlich, for their invaluable suggestions, feedback and advice, and for helping me navigate an unfamiliar field of research.

I have been extremely privileged to have been able to undertake this research as part of my employment at the BBC. My heartfelt thanks go out to Prof. Graham Thomas, Dr. Frank Melchior, Chris Pike and Samantha Chadwick for giving me this rare opportunity, and for their steadfast support throughout the process. Many thanks to my colleagues in the BBC R\&D Audio Team for their patience, support and help with proofreading, but also for being brilliant people to work with.

This work was only made possible through the involvement of colleagues from BBC Radio, who volunteered their time despite their demanding workload. My sincere thanks go to all of the radio producers who took the time to participate in the user studies and contribute to the design of the interfaces. Thanks also to Deborah Cohen, Hugh Levinson and John Goudie for allowing me access to their teams, and to Jonathan Glover for his advocacy and encouragement. Thanks to Liam Bowes, Carl Garner and Richard Sargeant from Anoto for their support in the development of the digital pen interface, and to Matt Haynes and Matt Shotton from BBC R\&D, whose software formed part of the semantic speech editor.

Finally, I'd personally like to thank my wife, Nancy, for her love, encouragement, patience and belief in me throughout this process. I couldn't have done it without you.

\vfill

\noindent The work in this thesis was fully funded by the British Broadcasting Corporation as part of the BBC Audio Research Partnership.
{ "alphanum_fraction": 0.8007438895, "avg_line_length": 64.8965517241, "ext": "tex", "hexsha": "73c66e931cd09e83be3d7c9f5e1754f9c9eddc4c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a122f3785b2be538a700069dcb3a9774a09c8c40", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "chrisbaume/thesis", "max_forks_repo_path": "acknowledgements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a122f3785b2be538a700069dcb3a9774a09c8c40", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "chrisbaume/thesis", "max_issues_repo_path": "acknowledgements.tex", "max_line_length": 120, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a122f3785b2be538a700069dcb3a9774a09c8c40", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "chrisbaume/thesis", "max_stars_repo_path": "acknowledgements.tex", "max_stars_repo_stars_event_max_datetime": "2019-03-14T00:41:00.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-14T00:41:00.000Z", "num_tokens": 417, "size": 1882 }
This chapter has been adapted from the published paper titled \textit{The GALAH survey: a catalogue of carbon-enhanced stars and CEMP candidates} \cite{2019MNRAS.483.3196C}, whose first author is the author of this doctoral thesis. The computer code used is published on the GitHub platform \footnote{\url{https://github.com/kcotar/GALAH-survey-Carbon-enhanced-stars}} and the results of the analysis as a catalogue on the VizieR service \footnote{\url{http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=J/MNRAS/483/3196}}.

In the previous chapter, we saw that the chemical tagging of open cluster stars is far from a straightforward process in large spectroscopic studies and relies on many assumptions. One of them is that the analysed spectra belong to normal, non-active stars whose outer layers were not polluted by other stars or otherwise modified during their lifetime. To improve the accuracy of chemical tagging, such peculiarities in the acquired spectra must be detected and the derived parameters effectively filtered out before any analysis is performed. One such peculiarity is chemical enrichment, which is usually a result of pollution by nearby evolved stars, novae, and supernovae. This chapter deals with the identification of carbon-enhanced stars using supervised and unsupervised classification algorithms.

The chapter begins with a brief introduction to carbon-enhanced stars and their historical and present importance in Section \ref{sec:intro_cemp}. It is followed by a description of the algorithms used for the detection of carbon-enhanced stars in Section \ref{sec:classification_cemp}. Properties of the classified objects are investigated in Section \ref{sec:analysis_cemp}, CEMP candidates are the focus of Section \ref{sec:cemp_cemp}, and Section \ref{sec:asiago_cemp} describes a follow-up study of one of them. Final remarks are given in Section \ref{sec:summary_cemp}.

\section{Introduction}
\label{sec:intro_cemp}

Chemically peculiar stars whose spectra are dominated by carbon molecular bands were first identified by \citet{1869AN.....73..129S}. Their spectra are characterised by enhanced carbon absorption bands of CH, CN, SiC$_2$, and C$_{2}$ molecules, also known as Swan bands. Possible sources of enhancement are dredge-up events in evolved stars \cite{1983ApJ...275L..65I}, enrichment by carbon-rich stellar winds from a pulsating asymptotic giant branch (AGB) star that settle on a main-sequence companion \cite{1995MNRAS.277.1443H}, or primordial enrichment \cite{2016ApJ...833...20Y}.

Historically, high-latitude carbon stars, presumed to be giants, were used as probes to measure the Galactic rotation curve \cite{2013Ap.....56...68B}, the velocity dispersion in the Galactic halo \cite{1991AJ....101.2220B}, and to trace the gravitational potential of the Galaxy. Because of their strong spectral features, the most prominent candidates can easily be identified from large photometric surveys \cite{2002AJ....124.1651M, 2004AJ....127.2838D}. Specific photometric systems \cite{1960MNRAS.120..287G, 1968AJ.....73..313M, 1970A&AS....1..199H} were defined in the past to discover and further classify stars with enhanced carbon features in their spectra. The properties of those systems were catalogued, compared, and homogenised by \citet{2000A&AS..147..361M} and \citet{2003A&A...401..781F}.
Other useful data come from low-resolution spectroscopic surveys, whose classifications identified from a few hundred to a few thousand such objects \cite{2001A&A...375..366C, 2013ApJ...765...12G, 2013AJ....146..132L, 2016ApJS..226....1J, 2018ApJS..234...31L}. High-resolution spectroscopy is required to search for candidates with less pronounced molecular absorption features or to determine their stellar chemical composition. Multiple studies have been carried out to determine accurate abundances of metal-poor stars \cite{1997ApJ...488..350N, 2002ApJ...567.1166A, 2004A&A...416.1117C, 2005ESASP.560..433B, 2006AJ....132..137C, 2007ApJ...655..492A, 2007ApJ...670..774N, 2011ApJ...742...54H, 2013ApJ...762...26Y, 2014AJ....147..136R, 2015ApJ...807..173H, 2015ApJ...807..171J}. Such detailed abundance information is especially important for the analysis and classification of chemically peculiar objects \cite{2013ApJ...778...56C}.

Today, the most sought after of all carbon-enhanced stars are the carbon-enhanced metal-poor (CEMP) ones, whose fraction among metal-poor stars increases with decreasing metallicity \Meh\ \cite{1992AJ....103.1987B, 1997ApJ...488..350N, 1999ASPC..165..264R, 2005ApJ...633L.109C, 2005ApJ...625..825L, 2005AJ....130.2804R, 2006ApJ...652.1585F, 2007PhDT........22M, 2012ApJ...744..195C, 2013AJ....146..132L, 2013ApJ...762...27Y, 2014ApJ...797...21P, 2018ApJ...861..146Y}. Amongst these, those near the main-sequence turn-off are expected to be of particular importance, as they may have accreted enough material from their AGB companion to produce an observable change in their atmospheric chemical composition \cite{2004ApJ...611..476S, 2014MNRAS.441.1217S, 2015ApJ...807..173H}. The accreted material could provide insight into the production efficiency of neutron-capture elements in AGB stars \cite{2007ApJ...655..492A}. Multiple studies show that the peculiar observed abundance pattern and carbon enrichment in a certain type of CEMP stars could be explained by the supernova explosions of first-generation stars that enriched the interstellar medium \cite{2003Natur.422..871U, 2005ApJ...619..427U, 2014ApJ...785...98T, 2018MNRAS.tmp.2127B}. The exact origin and underlying physical processes governing multiple classes of CEMP stars are not yet fully understood and are a topic of ongoing research \cite{2014ApJ...788..180C, 2016ApJ...833...20Y, 2018MNRAS.475.4781C}. Classification into multiple sub-classes is performed using the abundance information of neutron-capture elements \cite{2005ARA&A..43..531B, 2013A&A...552A.107S, 2015ApJ...814..121H, 2016ApJ...833...20Y} that are thought to originate from different astrophysical phenomena responsible for the synthesis of those elements.

\section{Detection procedure}
\label{sec:classification_cemp}

To search for carbon-enhanced stars in the GALAH data set, we focused on spectral features that are distinguishable and are known markers of carbon enhancement. Instead of using one very weak atomic carbon absorption line (at 6587.61~\AA), used by \TC\ to determine the \Cfe\ abundance, we focused on a region between 4718 and 4760~\AA\ observed in the blue arm, which covers 4718--4903~\AA\ in its rest frame. In this range we can, depending on the radial velocity of the star, observe at least four Swan band features \cite{1927RSPTA.226..157J} with their band heads located at approximately 4715, 4722, 4737, and 4745~\AA. The bands are caused by molecules that contain carbon.
Therefore their shape is not a simple single absorption line, but a wedge-like absorption feature: a sequence of molecular vibrational bands. Historically, the bands have also widely been used for the exploration of cometary spectra. Carbon enhancement is observable in spectra as a strong additional absorption feature (Figure \ref{fig:carbon_example}) that is strongest at the wavelength of the band head and gradually weakens towards shorter wavelengths. The most prominent feature, accessible in all of the spectra, is located at 4737~\AA\ and produced by the $^{12}$C$^{12}$C molecule. If other carbon features, like the one produced by the $^{13}$C$^{12}$C molecule at 4745~\AA\ (shown in Figure \ref{fig:carbon_example2}), are present in the spectrum, the carbon isotope ratio $^{12}$C/$^{13}$C in a star can be determined. Its determination was not attempted in the scope of this chapter.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{rich_160515003401143.png}
\caption{Same type of plot as in Figure \ref{fig:carbon_example}, but presenting an example of a metal-rich star with multiple strong Swan features around 4737 and 4745~\AA. The latter is not fitted by the orange line in the bottom panel as it is rarely present in the analysed spectra. The presented star has the 2MASS identifier J13121354-3533120 and is a known Galactic carbon star \cite{2001BaltA..10....1A}.}
\label{fig:carbon_example2}
\end{figure}

The detection of spectral features was tackled using two different classification procedures. First, a supervised procedure was used to identify the most prominent spectra with carbon enhancement. The supervised selection was based on two assumptions: the location of those features in the spectrum and their shape. These assumptions were complemented by an unsupervised dimensionality reduction algorithm that had no prior knowledge about the desired outcome. The goal of the dimensionality reduction was to transform n-dimensional spectra onto a 2D plane where differences between them are easier to analyse. The unsupervised algorithm was able to discern the majority of carbon-enhanced spectra from the rest of the data set and enabled us to discover spectra with less prominent carbon enhancement features.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{cemp_cand_150412003601009.png}
\caption{Example of a metal-poor carbon-enhanced candidate with a strong Swan absorption feature at 4737~\AA, caused by carbon C$_2$ molecules. The first panel shows the stellar spectrum in blue and a corresponding reference median spectrum in red. The reference median spectrum was computed as the per-pixel median flux value of spectra with similar stellar parameters (selection defined in Section \ref{sec:supervised}) as the spectrum shown in blue. The spectral difference (second panel) and division (third panel) were computed from those two spectra. The middle panel shows in orange a linear fit to the spectral difference that was used to identify spectra with reduction problems at the blue border of the spectral range. The black horizontal line gives the median value of the spectral difference. The orange curve in the last panel shows a fit that was used to determine the strength of the observed carbon feature.
The spectrum shown belongs to a star with the Two Micron All-Sky Survey (2MASS, \cite{2006AJ....131.1163S}) identifier J11494247+0051023 and an iron abundance \Feh\ of $-1.17$, as determined by \TC.}
\label{fig:carbon_example}
\end{figure}

\subsection{Supervised classification}
\label{sec:supervised}

To search for additional absorption features that are usually not found in spectra of chemically normal stars, we first built a spectral library of median spectra based on an initial rough estimate of the stellar physical parameters derived by the automatic reduction pipeline, described in detail by \citet{2017MNRAS.464.1259K}. The median spectrum for every observed spectrum in our data set was computed from physically similar spectra. Their effective temperature \Teff\ had to be in the range of $\Delta$\Teff~=~$\pm75$~K, surface gravity \Logg\ in the range of $\Delta$\Logg~=~$\pm0.125$~dex, and iron abundance \Feh\ in the range of $\Delta$\Feh~=~$\pm0.05$~dex around the stellar parameters of the investigated spectrum. The median spectrum was calculated only for observed targets with at least five similar spectra in the defined parameter range and with a minimal signal-to-noise ratio of SNR~=~$15$ per resolution element, as determined for the blue spectral arm. All considered spectra were resampled to a common wavelength grid with 0.04~\AA\ wide bins and then median combined.

The automatic reduction pipeline \cite{2017MNRAS.464.1259K} performed the normalisation of the spectra along with the radial velocity determination and the corresponding shift to the rest frame. We checked that the spectral normalisation and radial velocity determination are adequate also for carbon-enhanced stars. The normalisation procedure was done using a low-order polynomial that is not strongly affected by the Swan band features or similar spectral structures. The radial velocity of a star was determined as an average of radial velocities that were independently determined for the blue, green, and red spectral arm. If one of the arms had a radial velocity deviating by more than two times the difference between the other two, it was excluded from the average (further details in \citet{2017MNRAS.464.1259K}). Therefore the final velocity should be correct even if one of those arms contains features that are not found in the set of reference spectra used in the cross-correlation procedure.

With the limitation that at least five spectra must be used for the computation of the median spectrum, some possibly carbon-enhanced stars could be excluded from the supervised classification. The final number of spectra analysed by this method was 558,053. Spectra for which we were able to determine the median spectrum of physically similar objects were analysed further. In the next step, we tried to determine possible carbon enhancement by calculating a flux difference and flux division between the observed stellar and median spectra, as shown in Figure \ref{fig:carbon_example}. In order to describe the position, shape, and amplitude of the Swan feature with its head at 4737~\AA, we fitted a function that is based on a Log Gamma ($\log{}\Gamma$) distribution. The distribution, with three free parameters, was fitted to the division curve, where the Swan feature is most pronounced. The division curve, shown in the bottom panel of Figure \ref{fig:carbon_example}, was computed by dividing the observed spectrum by its corresponding median spectrum.
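As an illustration of this step, the short Python sketch below computes such a per-pixel median reference spectrum and the corresponding difference and division curves. It assumes that all spectra have already been normalised, shifted to the rest frame, and resampled to a common wavelength grid; the function and variable names are illustrative and do not correspond to the actual GALAH pipeline code.

\begin{verbatim}
import numpy as np

# Common wavelength grid with 0.04 A wide bins over the analysed Swan region
WAVE_GRID = np.arange(4718.0, 4760.0, 0.04)

def median_reference(flux, params, target, d_teff=75.0, d_logg=0.125,
                     d_feh=0.05, min_similar=5):
    """Per-pixel median spectrum of stars physically similar to `target`.

    flux   -- (N, M) normalised rest-frame fluxes resampled to WAVE_GRID
    params -- (N, 3) rough (teff, logg, feh) estimates of the same stars
    target -- (teff, logg, feh) of the investigated spectrum
    Returns None when fewer than `min_similar` similar spectra are available.
    """
    teff, logg, feh = target
    similar = ((np.abs(params[:, 0] - teff) <= d_teff) &
               (np.abs(params[:, 1] - logg) <= d_logg) &
               (np.abs(params[:, 2] - feh) <= d_feh))
    if similar.sum() < min_similar:
        return None
    return np.median(flux[similar], axis=0)

def difference_and_division(observed, reference):
    """Difference and division curves used in the diagnostic plots."""
    return observed - reference, observed / reference
\end{verbatim}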
The fitted function $f$ can be written as:
\begin{equation}
f(\lambda) = f_0 - \log{}\Gamma(\lambda, S, \lambda_0, A).
\label{equ:loggamma}
\end{equation}
The shape of the curve is defined by an additional offset $f_0$, the shape parameter $S$, the constant centre wavelength $\lambda_0$ that was not fitted, and the amplitude $A$ of the $\log{}\Gamma$ distribution. $\lambda$ represents the rest wavelengths of the observed spectrum. This function was selected because its sharp rise followed by a gradual descent matches well the shape of the residual absorption observed in the Swan regions. The steepness of the rising part is determined by the parameter $S$ (a lower value indicates a steeper rise) and its vertical scaling by the parameter $A$. We are not aware of any other profile shapes used for fitting Swan bands in the literature.

To narrow down possible solutions for the best-fitting curve, we used the following priors and limits. The initial value of the parameter $f_0$ was set to the median of all pixel values in the division curve and allowed to vary between $0.5$ and $1.5$. The limiting values are, however, never reached. The centre of the $\log{}\Gamma$ distribution $\lambda_0$ was set to $4737$~\AA\ and was allowed to vary by $2$~\AA. The wavelength limits were set to minimise the number of misfitted solutions, where the best fit would describe nearby spectral absorption lines not present in the median spectra or a problematic spectral feature caused by the spectral data reduction, as shown in Figure \ref{fig:bad_fit1}. We did not set any limits on the parameters $A$ and $S$ in order to catch fitted solutions describing a spectral difference that deviates from the expected shape of the molecular absorption band. By integrating the area between the offset $f_0$ and the fitted curve, we calculated the strength of the Swan band. The integral (\texttt{swan\_integ} in Table \ref{tab:out_table}) is computed between 4730 and 4738~\AA. It should not be used as a substitute for a carbon abundance measurement, but only to sort the detections of carbon-enhanced stars by the perceivable strength of their Swan band.

\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{variable_A_S.pdf}
\caption{Visualisation of the effect of the shape and amplitude parameters during the function fitting explained in Section \ref{sec:supervised}.
The left panel shows the effect of varying the shape parameter $S$, and the right panel the effect of varying the amplitude parameter $A$ in Equation \ref{equ:loggamma}.}
\label{fig:varibles_loggamma}
\end{figure}

\begin{table}
\centering
\caption{List and description of the fields in the published catalogue of detected objects and objects matched with multiple literature sources.}
\label{tab:out_table}
\begin{tabular}{l c l}
\hline
Field & Unit & Description \\
\hline \hline
\texttt{source\_id} & & {\it Gaia} DR2 source identifier \\
\texttt{sobject\_id} & & Unique internal per-observation star ID \\
\texttt{ra} & deg & Right ascension from 2MASS, J2000 \\
\texttt{dec} & deg & Declination from 2MASS, J2000 \\
\texttt{det\_sup} & bool & Detected by the supervised fitting method \\
\texttt{det\_usup} & bool & Detected by the t-SNE method \\
\texttt{swan\_integ} & \AA & Swan band strength if determined \\
\texttt{teff} & K & \TC\ effective temperature \Teff \\
\texttt{e\_teff} & K & Uncertainty of determined \Teff \\
\texttt{logg} & $\log$(cm/s$^{2}$) & \TC\ surface gravity \Logg \\
\texttt{e\_logg} & $\log$(cm/s$^{2}$) & Uncertainty of determined \Logg \\
\texttt{feh} & dex & \TC\ iron abundance \Feh \\
\texttt{e\_feh} & dex & Uncertainty of determined \Feh \\
\texttt{flag\_cannon} & int & \TC\ flags in a bit mask format \\
\texttt{type} & & G for giants and D for dwarfs \\
\texttt{rv\_var} & bool & Is the radial velocity variable; \\
 & & minimal radial velocity change of $0.5$~\kms \\
\texttt{li\_strong} & bool & Shows strong lithium absorption; \\
 & & \TC\ \Life\ had to be >~$1.0$ \\
\texttt{cemp\_cand} & bool & Is the star a CEMP candidate; \\
 & & \TC\ \Feh\ had to be <~$-1.0$ \\
\texttt{bib\_code} & & ADS bibcode of the matched literature source \\
\hline
\end{tabular}
\end{table}

With so many spectra in our data set, unexpected reduction and analysis problems can hinder the selection of carbon-enhanced stars. In the first iteration, the results were ordered only by the value of the integrated Swan band region, but this proved to select too many spectra with reduction problems. Most of the problematic detections were caused by the incorrect normalisation of spectra with strong, non-carbon molecular bands. This normalisation issue is most noticeable at the border of the spectral range, where the Swan bands are located in the case of HERMES spectra. There, the normalisation can be poorly defined in the case of numerous nearby absorption lines. In order to prevent misdetections, additional limits on the shape ($S \leq 1$) and amplitude ($A \leq 1$) of the $\log{}\Gamma$ distribution were used to filter out faulty fitting solutions. The impact of varying $A$ and $S$ on the shape of the function is shown in Figure \ref{fig:varibles_loggamma}. Selecting too large a value of $S$ causes the function to be too narrow, as shown in Figure \ref{fig:bad_fit1}. Similarly, too large an amplitude $A$ results in a function with reduced skewness, as fitted to the spectrum in Figure \ref{fig:bad_fit2}. Figure \ref{fig:bad_fit2} represents a misfitted example where the function $f(\lambda)$ was fitted to the absorption lines of a double-lined spectroscopic binary, producing a shape that is not characteristic of the analysed molecular band head. To remove spectra with reduction problems or peculiarities that would result in a wrongly determined strength of the Swan band, we also analysed the slope of the spectral difference and its integral within the limits of the Swan bands.
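As a concrete illustration of the fitting and integration step, the sketch below fits the profile from Equation \ref{equ:loggamma} to a division curve with \texttt{scipy} and integrates the fitted excess absorption between 4730 and 4738~\AA. It is only a simplified sketch: the \texttt{scipy.stats.loggamma} parametrisation stands in for the $\log{}\Gamma$ profile, small positive lower bounds are placed on $S$ and $A$ to keep the profile well defined (the actual analysis left them unconstrained and filtered the fits afterwards), and all names are illustrative rather than taken from the published code.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import loggamma

LAMBDA_0 = 4737.0  # nominal wavelength of the Swan band head [A]

def swan_model(wave, f0, s, lam0, a):
    # f(lambda) = f0 - A * logGamma(lambda; shape S, centre lambda_0)
    return f0 - a * loggamma.pdf(wave, s, loc=lam0)

def fit_swan(wave, division):
    """Fit the profile to a division curve and return the best-fitting
    parameters together with the integrated band strength (swan_integ)."""
    p0 = [np.median(division), 0.5, LAMBDA_0, 0.5]
    lower = [0.5, 1e-3, LAMBDA_0 - 2.0, 1e-3]
    upper = [1.5, np.inf, LAMBDA_0 + 2.0, np.inf]
    popt, _ = curve_fit(swan_model, wave, division, p0=p0,
                        bounds=(lower, upper))
    f0 = popt[0]
    in_band = (wave >= 4730.0) & (wave <= 4738.0)
    # area between the constant offset f0 and the fitted curve
    swan_integ = np.trapz(f0 - swan_model(wave[in_band], *popt),
                          wave[in_band])
    return popt, swan_integ
\end{verbatim}

Fits whose best-fitting $S$ or $A$ exceed the limits described above would then be discarded before the final visual inspection.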
One of the spectral trends that we are trying to catch with these indicators is shown in Figure \ref{fig:bad_fit2}, where the spectral difference and its linear fit rise steeply at the border of the spectrum. By visual inspection of the diagnostic plots produced by the algorithm (such as the one shown in Figure \ref{fig:carbon_example}), we limited the final selection to 400 spectra with the strongest carbon enhancement that was still visually recognisable. The last selected spectrum is shown in Figure \ref{fig:carbon_last_supervised}. Selecting spectra with lower enhancement could introduce wrong classifications of stars whose apparent enhancement is driven by spectral noise, data reduction, or any other process that has a subtle effect on the spectral shape.

\subsection{Unsupervised classification}
\label{sec:unsupervised}

With numerous spectra of different stellar types, chemical composition, and degree of carbon enhancement, some might show different carbon features or be insufficiently distinctive to be picked out by the supervised algorithm described above. Another analysis technique, which is becoming increasingly popular, is a dimensionality reduction procedure named t-distributed Stochastic Neighbor Embedding (t-SNE, \cite{van2008visualizing}) that has already proved to be beneficial for the comparison and sorting of unknown spectral features in the same data set \cite{2017ApJS..228...24T}. With this method, we aim to reduce the number of dimensions while preserving the structure of the analysed data. In our case of a stellar spectrum, we have numerous dimensions (one per wavelength bin), among which we want to look for groups of data points. This search for groups could be done in the original space with many dimensions, but the visualisation of such a data set is problematic. As the relations between wavelength bins are not linear, methodologies such as Principal Component Analysis (PCA) have to be supplemented with non-linear methodologies, t-SNE for example.

An additional problem we are dealing with is the density of the investigated spectra. As peculiar spectra are usually in a minority, the algorithm must recover global and local structures alike. t-SNE is able to map data sets with high variations in density, so it is able to resolve small, local details as well as the global picture. This feature is achieved by the internal use of a Student's t-distribution. The algorithm is fine-tuned using the perplexity parameter, which controls whether t-SNE is more sensitive to global or local structures. This property is comparable to setting the number of nearest neighbours taken into consideration. The role of the perplexity can be tested in an online interactive tool produced by \citet{wattenberg2016how}. After computing similarities between all pairs of the investigated spectra, the algorithm tries to find the best transformation that optimally arranges the spectra in a 2D plane. As the transformation is variable and non-linear, the actual distance between two objects in the final 2D plane does not linearly depend on the spectral similarity measure. This property of the t-SNE algorithm additionally ensures a more homogeneous coverage of the 2D plane in comparison to other dimensionality reduction methods.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{tsne_circles.png}
\caption{t-SNE projection of 588,681 observed spectra ranging between 4720 and 4890~\AA. Red dots (756 spectra) mark a clump in the projection that was manually selected to contain carbon-enhanced spectra.
Superimposed blue dots represent carbon-enhanced spectra determined by the supervised algorithm. Outside the t-SNE selected clump, we have 224 spectra that were determined to be carbon-enhanced only by the supervised method. All other analysed spectra are shown in grey shades, depending on their density in the 2D projection. Two ellipses indicate regions where the majority of CEMP candidates are located in the projection.}
\label{fig:tsne_plot}
\end{figure}

The t-SNE projection shown in Figure \ref{fig:tsne_plot} was computed from normalised spectra between 4720 and 4890~\AA. To maximise the number of analysed spectra, no limiting cuts other than the validity of the wavelength solution in this arm (bit 1 in \texttt{red\_flag} set to 0 by the reduction pipeline \cite{2017MNRAS.464.1259K}) were used. This resulted in 588,681 individual spectra being analysed by the automatic unsupervised algorithm. This is $\sim30$k more spectra than in the case of the supervised classification, where we applied stricter criteria for the selection of analysed spectra (Section \ref{sec:supervised}).

Without any prior knowledge about the location of objects of interest in the obtained projection, we would have to visually analyse every significant clump of stars in order to discover whether the carbon-enhanced population is located in one of them. This can be simplified by adding the results of the supervised classification into this new projection. In Figure \ref{fig:tsne_plot}, the stars identified by the supervised classification are shown as blue dots plotted over grey dots representing all spectra that went into the analysis. The majority of blue dots are located in a clump on the left side of the projection. A high concentration of objects detected by the supervised method leads us to believe that this isolated clump represents carbon-enhanced objects in the t-SNE projection. To select stars inside the clump, we manually drew a polygon around it. Inspection of other blue-labelled spectra outside the main clump revealed that their slight carbon enhancement could not be identified by the t-SNE similarity metric, as another spectral feature might have dominated the spectral comparison.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{tsne_params_notitle.png}
\caption{Distribution of all available measurements of \Teff\ (left panel) and \Feh\ (right panel) as determined by \TC. Dots, representing analysed spectra in the t-SNE projection, are colour coded by their parameter values. Colours and their corresponding values are explained by a colourbar under the graph.}
\label{fig:tsne_teff_feh}
\end{figure}

Additional exploration of the t-SNE projection revealed two smaller groups of metal-poor carbon-enhanced spectra located inside the ellipses shown in Figure \ref{fig:tsne_plot}. A confirmation that those regions are populated with metal-poor stars can be found in Figure \ref{fig:tsne_teff_feh}, where the dots representing spectra in the projection are colour coded by \Feh\ and \Teff. To maximise the number of those objects in the published catalogue, we manually checked all undetected spectra in the vicinity of the detected ones. This produced an additional 13 CEMP detections.

\subsubsection{t-SNE limitations}

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{tsne_cemp.pdf}
\caption{A collection of spectra that were determined to be mutually very similar by the t-SNE algorithm.
Out of 46 spectra inside the right black ellipse in Figure \ref{fig:tsne_plot}, we identified 8 carbon-enhanced spectra with visually very different and distinctive spectra in the region from 4734 to 4737~\AA, which is also depicted in this figure. For easier visual recognition, they are coloured in red.}
\label{fig:tsne_cemps}
\end{figure}

While checking the local neighbourhood of some of the blue dots in Figure \ref{fig:tsne_plot} that are strewn across the t-SNE projection, we identified a possible limitation of our approach for the automatic detection of specific peculiar spectra if their number is very small compared to the complete data set. Figure \ref{fig:tsne_cemps} shows a collection of a few carbon-enhanced spectra, taken from the right ellipsoidal region in Figure \ref{fig:tsne_plot}, that are embedded among other, normal spectra. As they are quite different from the others, they were pushed against the edge of a larger cluster in the projection, but their number is not sufficient to form a distinctive group of points in the projection. Therefore any automatic algorithm that would try to distinguish those objects based solely on a local density of points would most probably fail.

Another characteristic of the t-SNE projection that we must be aware of is how it computes the similarity between analysed spectra. The combined similarity, which is computed as a sum of per-pixel differences, carries no information about where in the spectrum those differences occur. The red spectrum in Figure \ref{fig:tsne_30close}, with a slight signature of carbon enhancement in the range between 4734 and 4737~\AA, has been placed among spectra with similar physical properties. Its slight carbon enhancement and spectral noise comparable to that of other spectra in its vicinity are probably the reason why it was placed in such a region of the t-SNE projection. This unrecognisability could be mitigated by using a smaller portion of the spectrum in the dimensionality reduction, which could, at the same time, lead to a loss of other vital information about a star.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{150206004301057_tsne_close30.pdf}
\caption{Spectral comparison between one of the detected carbon-enhanced stars in red and its 30 closest neighbours in the t-SNE projection shown as black curves. The enhancement in the spectrum was probably not sufficiently distinct and was dominated by the spectral noise. Therefore the spectrum was placed among other physically similar spectra without visible enhancement.}
\label{fig:tsne_30close}
\end{figure}

\section{Characteristics of candidates}
\label{sec:analysis_cemp}

The final list of detected carbon-enhanced stars consists of 918 stars, corresponding to 993 spectra detected by at least one of the described methods. Among them, 63 stars were observed and identified at least twice and up to a maximum of four times. Those identifications belong to repeated observations that were performed at different epochs. Because not all of the observed spectra were considered in the classification procedure (due to the limitations described in Section \ref{sec:classification_cemp}), this is not the final number of stars with repeated observations. By searching among the complete observational data set, the number of carbon-enhanced stars with repeated observations increases to 90. Out of those 90 stars, every repeated observation of 56 stars was classified as being carbon-enhanced.
In total, we detected $76.5$~\% of the carbon-enhanced spectra among repeated observations where at least one of the repeats has been classified as having enhanced carbon features in its spectrum. The unclassified instances usually have a low SNR value that could decrease their similarity to other carbon-enhanced stars in the t-SNE analysis, or have incorrect stellar parameters and were therefore compared to an incorrect median spectrum during the supervised analysis.

\subsection{Radial velocity variations}
\label{sec:binaries}

With repeated observations in the complete observational data set, we can look into the measured radial velocities and investigate the fraction of possible variables among the detected stars. For certain types of carbon-enhanced objects, the fraction of variables should be high or even close to 100~\% \cite{2016ApJ...826...85S}. Taking into account all of the repeated observations in our data set, and not just the repeats among the identified spectra, 52 out of 90 stars show a minimum velocity change of $0.5$~\kms\ (70 stars with a minimum change of $0.25$~\kms) and a maximum of $45$~\kms\ over different time spans ranging from days to years. The detailed distribution is presented in Figure \ref{fig:rv_rep_dist}. The threshold was selected in such a way as to be significantly larger than the estimated typical accuracy of $\sim0.1$~\kms\ as determined for the \Gh\ radial velocities by \citet{2018arXiv180406344Z}. Any radial velocity change can hint at the presence of a secondary member or at intrinsic stellar pulsation \cite{2002AA...390..987B, 2010JApA...31..177L, 2012A&A...544A..10B}, as carbon-enhanced stars are found among all long-period variable classes (Mira, SRa, and SRb \cite{2013A&A...553A..93B, 2014A&A...568A.100B}). Follow-up observations are needed to determine their carbon sub-class and, subsequently, the reason behind the variations of their radial velocity.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{rv_rep_dist.png}
\caption{Distribution of the maximal velocity change between repeated observations of the stars that were classified as carbon-enhanced.}
\label{fig:rv_rep_dist}
\end{figure}

% TODO As mentioned below, add here a comparison plot for non-CE stars of the same spectral types - the difference should show that the RV variations of CE stars are significant.

Visual inspection of the variable candidates revealed that none of them shows an obvious doubling of spectral absorption lines, a characteristic of a double-lined binary system. Therefore we can conclude that none of them is a member of a binary in which both components are of comparable luminosity and the difference between their projected radial velocities is high enough to produce a double-lined spectrum. From our simulations with median spectra, such line splitting becomes visually evident at a velocity difference of $\sim$14~\kms. If the components do not contribute the same amount of flux, the minimal difference increases to $\sim$20~\kms.

The chemical peculiarity of a dwarf carbon-enhanced star (dC) that exhibits enhancement of C$_2$ in its spectrum could be explained by its interaction with a primary star in a binary system \cite{2018ApJ...856L...2M}. The chemically enhanced material is thought to be accreted from the evolved AGB companion. Fewer than thirty such systems, showing signs of the existence of an invisible evolved companion that might have enriched the dC with carbon, have been identified spectroscopically to date \cite{1986ApJ...300..314D, 2018ApJ...856L...2M, 2018MNRAS.479.3873W}.
This low number of confirmations gives us the possibility to greatly extend the list with every additional confirmed object. The only detected dC star (for the criteria see Section \ref{sec:cannon_params}) with repeated observations shows a radial velocity that is unchanged at the level of $0.1$~\kms\ during the two years between consecutive observations. Hence, it cannot be classified as a possible binary system from those two observations alone. The lack of clear evidence for binarity among dC stars, especially among the most metal-poor, can also be explained by another enrichment mechanism. \citet{2018MNRAS.477.3801F} showed that a substantial fraction of those stars belongs to the halo population based on their kinematic information. We performed a similar analysis in Section \ref{sec:cemp_cemp}, supported by Figures \ref{fig:orbits_vxvyvz} and \ref{fig:orbits_zmax}. Combined with the results of \citet{2016ApJ...833...20Y}, who classified the prototype dC star \mbox{G 77-61} as a CEMP-no star, a class known to have an intrinsically low binarity fraction \cite{2014MNRAS.441.1217S, 2016A&A...586A.160H}, their carbon enhancement may be of primordial origin.

\subsection{Stellar parameters}
\label{sec:cannon_params}

For the analysis of stellar parameters, we used values determined by the \TC\ data interpolation method that was trained on actual observed HERMES spectra. To exclude any potentially erroneous parameters, we applied a strict flagging rule of \texttt{flag\_cannon}=0 (an extensive description of the flagging procedure can be found in \citet{buder2018}), thus obtaining a set of 347 objects with trustworthy stellar parameters. Such a large percentage of flagged objects could be attributed to their nature, as the additional elemental enhancement that we are looking for might not be part of the training set. A raised quality flag would hint that the spectrum is different from any other in the training set or that the fit is uncertain and has a large $\chi^2$. Therefore flagged parameters have to be used with care, especially on the border of, and outside, the training set. The majority (338) of the unflagged detected objects are giants, and only nine are confirmed to be dwarf stars based on their spectroscopic stellar parameters (Figure \ref{fig:kiel_plot}).

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Kiel_diagram.png}
\caption{Kiel diagram for a subset of 338 detected carbon-enhanced stars with valid stellar parameters in red. Uncertain positions of flagged stars are shown with grey dots. The dashed orange line illustrates the border between giants and dwarfs.}
\label{fig:kiel_plot}
\end{figure}

\subsection{S-process elements}
\label{sec:sprocess}

Focusing on the spectral signatures of the detected objects inside and outside the t-SNE selected clump (Figure \ref{fig:tsne_plot}), we can further investigate which spectral feature might have contributed to their separation. The distributions of their abundances in Figure \ref{fig:sprocess_hist} and the strength of spectral features corresponding to the same elements in Figure \ref{fig:sprocess_spectrum} hint at an enhancement of s-process elements among stars inside the selected clump. The abundance difference varies slightly from element to element. In general, the abundance peaks are separated by about $1$~dex. This additional enhancement might be another reason, besides the carbon enhancement, for the algorithm to cluster all of those stars as being different from the majority of spectra.
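The control group used in Figure \ref{fig:sprocess_hist} consists of the closest t-SNE neighbours of the detections lying outside the clump. A minimal sketch of one possible way to build such a control sample is given below; the use of a k-d tree and the array names are assumptions of this illustration, not a description of the published code.

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def tsne_control_group(tsne_xy, detected_idx, clump_idx):
    """For every detection lying outside the t-SNE clump, return the index
    of its nearest undetected neighbour in the 2D projection."""
    outside = np.setdiff1d(detected_idx, clump_idx)
    normal = np.setdiff1d(np.arange(len(tsne_xy)), detected_idx)
    tree = cKDTree(tsne_xy[normal])          # undetected spectra only
    _, nearest = tree.query(tsne_xy[outside], k=1)
    return outside, normal[nearest]
\end{verbatim}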
\begin{figure}
\centering
\includegraphics[width=\textwidth]{sprocess_hist.png}
\caption{Distribution of s-process element abundances for stars in three different groups. The most enhanced group, in green, represents carbon-enhanced stars located in the t-SNE selected clump of stars. The red distribution presents all other detections that are placed around the projection, outside the clump. As a control group, the same distribution is shown in black for their closest t-SNE neighbours, therefore the black and red distributions contain an equal number of objects. No abundance quality flags were used to limit the abundance measurements.}
\label{fig:sprocess_hist}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{sprocess_spectra.pdf}
\caption{Spectral subset around the absorption features in the blue arm that were used to determine the abundances of Fe and s-process elements. The same colour coding is used as in Figure \ref{fig:sprocess_hist}. Spectra inside the t-SNE determined clump are shown in red, and outside it in green. The median of all spectra is shown with a bold line of the same colour. The shaded area gives the wavelength range considered in the computation of abundances. The abundance difference is the result of green spectra having lower Fe enhancement and higher enhancement (deeper absorption lines) of s-process elements.}
\label{fig:sprocess_spectrum}
\end{figure}

\subsection{Lithium abundance}
\label{sec:lithium}

The derivation of elemental abundances for known carbon-enhanced stars has shown that some of them can exhibit strongly enhanced levels of Li in their atmosphere \cite{1991A&A...245L...1A}. Lithium is thought to be produced by hot-bottom burning \cite{1974ApJ...187..555S} and brought to the surface from the stellar interior. Investigation of the Li line at 6707~\AA\ revealed 32 such stars. Their spectra, centred around the Li feature, show a greatly varying degree of Li abundance in Figure \ref{fig:li_abund}.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{li_string_ch.pdf}
\caption{Spectral subset of 32 lithium-rich carbon-enhanced stars among the identified stars. The highlighted wavelength region is used by \TC\ to determine the lithium abundance of a star.}
\label{fig:li_abund}
\end{figure}

\subsection{Sub-classes}
\label{sec:subclasses}

Following a revision of the original MK classification \cite{1941ApJ....94..501K} introduced by \citet{1996ApJS..105..419B}, carbon stars are separated into five different classes named \mbox{C-H}, \mbox{C-R}, \mbox{C-J}, \mbox{C-N}, and Barium stars. Of all the spectral indices proposed for the spectral classification, we are only able to measure a small part of the Swan C$_2$ bands and the Ba II line at 6496~\AA. For a more detailed classification of the detected objects into the proposed classes, we would need to carry out additional observations with a different spectroscopic setup to cover all the significant features. Additionally, the features caused by the $^{13}$C$^{12}$C molecule are strongly enhanced only for a handful of spectra in our data set, therefore we did not perform any isotopic ratio analysis or identification of possible C-J objects, which are characterised by strong Swan bands produced by the heavier isotopes. According to the abundance trends presented in Section \ref{sec:sprocess} and the classification criteria defined by \citet{1996ApJS..105..419B}, we could argue that the stars selected from the t-SNE projection belong to the C-N sub-class.
Their s-process elements are clearly enhanced over solar values (see Figure \ref{fig:sprocess_hist}), but the actual values should be treated with care as they are mostly flagged by \TC\ (having the quality flag \texttt{flag\_cannon}~>~0). This uncertainty might come from the fact that the training set does not cover carbon-enhanced stars and/or stars with such an enhancement of s-process elements.

\subsection{Match with other catalogues}

In the literature we can find numerous published catalogues of carbon-enhanced (CH) stars \cite{2001A&A...375..366C, 2001BaltA..10....1A,2016ApJS..226....1J} and CEMP stars \cite{2007ApJ...658..367K, 2010A&A...509A..93M, 2010AJ....139.1051P, 2014ApJ...797...21P, 2015A&A...581A..22A, 2017yCat..18330020Y} observed by different telescopes and analysed in inhomogeneous ways. Most of those analyses were also performed on spectra of lower resolving power than HERMES, therefore some visual differences are expected for wide molecular bands. By matching the published catalogues with the GALAH observations that were analysed by our procedures, we identified 44 stars that matched with at least one of the catalogues. Of these, 28 were found in CH catalogues and 16 in CEMP catalogues.

From the stars recognised as CEMPs in the literature, we were able to detect only 1 star using the described methods. Visual assessment of the diagnostic plots provided by our analysis pipeline proved that the remaining 15 CEMP matches do not express any observable carbon enhancement in the Swan bands and were therefore impossible to detect with the combination of our algorithms. The reason for this difference between our and the literature results might lie in the CEMP selection procedure employed by the aforementioned literature. Every considered study selects its set of interesting stars from one or multiple literature sources based on values of \Meh\ and \Cfe\ that were measured from atomic spectral lines and not molecular lines.

The match is larger in the case of CH matches, where we were able to confirm 11 out of 33 possible matched carbon-enhanced stars. As the observed molecular bands are prominent features in the spectra, we explored possible reasons for our low detection rate. Visual inspection of the spectra of the remaining undetected matched stars proved that they also show no or barely noticeable carbon enhancement in the spectral region of the Swan bands, therefore the reason must lie in the detection procedures used in the cited literature. \citet{2001A&A...375..366C} used low-resolution spectra to evaluate the enhancement of C$_2$ and CN bands. The results are also summarised in their electronic table. In it, all of our undetected stars are marked as containing enhanced CN bands but no C$_2$ bands. Combining this with Figure \ref{fig:ch_xmatch}, we speculate that those stars occupy a narrow range of parameter space where C$_2$ is not expressed and therefore undetectable in the HERMES spectra.

The number of successfully detected stars matched between the surveys could also be influenced by the different excitation temperatures of the analysed carbon-rich molecules. The frequently studied photometric G band (centred at 4640~\AA\ with a FWHM of 1200~\AA), which is not present in our spectra, covers a spectral region rich in CH molecular features whose temperature dependence is different from that of the C$_2$ molecule. The presence of those bands is identified by classifying a carbon-enhanced star into the C-H sub-class (see Section \ref{sec:subclasses}).
As we detected all C-H stars identified by \citet{2016ApJS..226....1J} that are also present in the GALAH data set, we are unable to discuss the selection effect in the \Teff\ range between $\sim$5100 and $\sim$5300~K where those three stars were found.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{ch_comb.png}
\caption{Comparison between the stellar parameters of detected (green histogram) and undetected (red histogram) carbon-enhanced stars found in the literature.}
\label{fig:ch_xmatch}
\end{figure}

The position of all stars matched with the literature is also visualised on the \mbox{t-SNE} projection in Figure \ref{fig:tsne_ref_ch}, where it can be clearly seen that they lie outside the selected clump with identified carbon enhancement and are strewn across the projection. Close inspection of the spectra that are located near the aggregation of CEMP stars from the literature revealed no visible carbon enhancement. The enhancement is present neither in the form of molecular bands nor expressed as a stronger atomic carbon line. They are therefore indistinguishable from other metal-poor stars with similar physical parameters.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{cemps_meh_feh.png}
\caption{Correlation between published metallicities and the \TC\ iron abundance for the stars that were classified as CEMPs in the literature. As some of those stars were taken from multiple literature sources, we also have multiple determinations of \Meh\ for them. This can be identified as horizontal clusters of dots at different \Meh, but with the same \Feh. Where available, uncertainties of the parameters are shown. The dashed line follows a 1:1 relation.}
\label{fig:cemps_feh}
\end{figure}

\section{Metal-poor candidates}
\label{sec:cemp_cemp}

CEMP stars are defined in the literature as having low metallicity \Meh \textless $-1$ and strong carbon enrichment \Cfe \textgreater $+1$. In the scope of this analysis, we assume that our measurement of \Feh\ is a good approximation for the metallicity. To verify this, we compared the \Meh\ values of CEMP stars found in the literature with the \Feh\ derived by \TC\ for the same stars. The relation between them is shown in Figure \ref{fig:cemps_feh}. We see that our values start deviating from the published values at metallicities below $-1.5$. Below that threshold the differences are in the range of $\sim1$~dex, but the same trend is obvious for both data sets. A general offset between \Feh\ and \Meh\ is expected as the latter gives abundance information for all elements and not just iron. The uncertainty of the published \Meh, derived from multiple sources, can reach up to $0.5$~dex.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Fe_H_cannon.png}
\caption{Histogram of \Feh\ for the detected carbon-enhanced stars with valid \TC\ stellar parameters in blue and for every detected carbon-enhanced star in grey. Two vertical lines are located at iron abundances of $-1.0$ and $-0.5$.}
\label{fig:feh_candidates}
\end{figure}

Taking the unflagged \TC\ parameters and abundances of the detected objects, we can determine possible CEMP candidates among our sample. As also shown by Figure \ref{fig:feh_candidates}, our set of carbon-enhanced stars consists of 41 objects with \Feh \textless $-0.5$ and 2 objects with \Feh \textless $-1.0$. If we also include potentially incorrect parameters, the number of objects with \Feh \textless $-1.0$ increases to 28, which is equal to $2.8$~\% of the detected carbon-enhanced spectra.
In any case, none of them has a valid determination of the carbon abundance. Analysing HERMES spectra in order to determine the carbon abundance is difficult because the automatic analysis is based on only one very weak atomic absorption line that is believed to be free of any blended lines. Consequently, we are also not able to measure the \CO\ abundance ratio, as the majority of the determined \Cfe\ abundances are flagged as unreliable. Complementary observations are needed to determine the abundance and confirm the suggested CEMP candidates. The low number of metal-poor candidates could also be explained by the design of the HERMES spectrograph, as its spectral bands were not selected with the aim of searching for and confirming the most metal-poor stars with \Feh \textless $-3.0$.

With the release of the {\it Gaia} DR2 data \cite{2018A&A...616A...1G}, the metallicity of stars can also be compared with their Galactic orbits. To determine the distribution of the detected stars among different Galactic components, we performed an orbital integration in the \texttt{MWPotential2014} Galactic potential using the \texttt{galpy} package \cite{2015ApJS..216...29B}. In order to construct complete 6D kinematic information, the {\it Gaia} parallax and proper motion measurements were supplemented with the GALAH radial velocities. The results shown in Figure \ref{fig:orbits_zmax} suggest that our CEMP candidates could belong to two different components of the Galaxy. Stars with maximal $z$~<~4~kpc most probably belong to the thick disk and stars with $z$~>~5~kpc to the halo population, which is inherently metal-poor. This is also supported by their angular momentum in Figure \ref{fig:orbits_zmax} and their Galactic velocities shown in Figure \ref{fig:orbits_vxvyvz}.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{carbon_orbits_vy_vxvz.png}
\caption{Toomre diagram used to identify possible local halo stars among our detected carbon-enhanced stars, especially CEMP candidates. Halo stars in this diagram are located above the red circular line, satisfying the velocity condition $\left|\mathbf{v} - \mathbf{v_{LSR}} \right|$~>~210~\kms\ (the threshold taken from \citet{2018ApJ...860L..11K}). CEMP candidates are marked with star symbols.}
\label{fig:orbits_vxvyvz}
\end{figure}

When looking at the distribution of \Feh\ for the complete set of observed stars, we find a distribution comparable to that of the carbon-enhanced stars. Similarly, about $1.8$~\% of stars are found to be metal-poor with \Feh \textless $-1.0$.

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{carbon_orbits_zmax_lznorm.png}
\caption{Distributions of the maximal height above/below the Galactic plane reached by the detected stars on their orbit around the centre of the Galaxy. Heights are compared with their normalised angular momentum L$_z$, where L$_\odot$~=~$2033.6$~\kms~kpc. The vertical dashed line at L$_z$~=~$1000$~\kms~kpc highlights the transition from the halo to the disk population, where the majority of the halo stars are located below this threshold (the threshold was visually estimated from similar plots in \citet{2018ApJ...860L..11K}). CEMP candidates are marked with red star symbols.}
\label{fig:orbits_zmax}
\end{figure}

\section{Follow-up observation}
\label{sec:asiago_cemp}

\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{asiago_cemp2.pdf}
\caption{Subsets from the follow-up Asiago spectrum with a resolving power comparable, but not identical, to the HERMES spectrum.
It contains multiple spectral features used to evaluate carbon enhancement in a star and its carbon sub-class. Relevant spectral features (single line or tip of a molecular band) are marked with vertical dashed green lines and labels that represent a molecule or an element that is responsible for the features shown in the individual panel. The 2MASS identifier of the observed star is J11333341-0043060. In the wavelength ranges where it is available, the GALAH spectrum of the same star is shown in red.} \label{fig:asiago} \end{figure} To further classify and analyse one of the detected objects, a star with 2MASS identifier J11333341-0043060 was selected for a follow-up observation. We acquired its high-resolution Echelle spectrum (with a resolving power R $\sim 20,000$), using a spectrograph mounted on the $1.82$~m Copernico telescope located at Cima Ekar (Asiago, Italy). Because only a few of our detected candidates are observable from the Asiago observatory, we selected the best observable CEMP candidate, whose \Feh\ was determined by \TC\ to be $-0.96$. The selected star, with V = $12.79$, was near the faint limit of the telescope used, therefore a low SNR was expected. The one-hour long exposure of the selected object was fully reduced, normalised order by order, and shifted to the rest frame. Although the acquired spectrum covers a much wider and continuous spectral range (from 3900 to 7200~\AA) than the HERMES spectra, only the subsets relevant for the classification of carbon-enhanced stars are presented in Figure \ref{fig:asiago}. They were identified by visually matching our observed spectrum with the published moderate-resolution spectral atlas \cite{1996ApJS..105..419B} of peculiar carbon stars. Where available, the GALAH spectrum is shown alongside the Asiago spectrum. Carbon enhancement is not expected to vary over a period of several years, therefore both spectra should show similar features. The second and fourth panels in Figure \ref{fig:asiago} confirm that both observations indicate a similar degree of carbon enhancement. Following the classification criteria of carbon stars, we determined that the star belongs to the C-H sub-class. The definitive features for this class are strong molecular CH bands, a prominent secondary P-branch head near 4342~\AA\ (top panel in Figure \ref{fig:asiago}), and noticeable Ba II lines at 4554 and 6496~\AA\ \cite{2018ApJS..234...31L}, which are all present in the spectrum. The star definitely does not have a high ratio between $^{13}$C and $^{12}$C isotopes as the Swan features corresponding to $^{13}$C are clearly not present, therefore it cannot be of the C-J sub-class. Following the current state of knowledge \cite{1990ApJ...352..709M, 2016AA...586A.158J, 2016ApJ...826...85S} that most, if not all, C-H stars show clear evidence for binarity, we compared the radial velocities of the two observations. These values hint at the variability of the object as the follow-up radial velocity ($126.75 \pm 1.63$ \kms) deviates by more than $3$~\kms\ from the velocity ($123.43 \pm 0.08$ \kms) observed as part of the GALAH survey. The time span between the two observations is more than 2.5~years, with the exact JDs being $2458090.702$ for the Asiago spectrum and $2457122.095$ for the GALAH spectrum. Further observations along the variability period would be needed to confirm whether it is a multiple stellar system.
\section{Conclusions} \label{sec:summary_cemp} This work explores stellar spectra acquired by the HERMES spectrograph in order to discover peculiar carbon-enhanced stars, which were observed in the scope of multiple observing programmes conducted with the same spectrograph. We show that the spectra of such stars are sufficiently different from other stellar types to be recognisable in high-resolution spectra with limited wavelength ranges. This can be done using a supervised or unsupervised method. The latter was used to identify observed stars solely on the basis of the acquired spectra. By combining both methodologies we identified 918 unique stars with evident signs of carbon enhancement, of which 12 were already reported in the literature. Out of all matched objects from the literature, we were unable to detect and confirm 16 ($57$~\%) CH and 15 ($93$~\%) CEMP stars with our procedures. As some of those objects were proven to contain carbon enhancement detectable outside the HERMES wavelength ranges, this would have to be taken into account to say more about the underlying population of carbon-enhanced stars. In addition to a detection bias imposed by the analysis of C$_2$ bands and the exclusion of CN and CH molecular bands that might be excited in different temperature ranges, the varying degree of carbon enhancement also has to be accounted for in accurate population studies. As shown by \citet{2016ApJ...833...20Y}, CEMP stars can be found within a wide range of absolute carbon abundances. When an object selection is performed with a pre-defined threshold, as in the case of our supervised methodology, this may reduce the number of objects in only one of the sub-classes. In the case of CEMP stars, this selection may influence the number of CEMP-no stars that are known to have lower absolute carbon abundances \cite{2016ApJ...833...20Y}. The identified objects were separated into dwarf and giant populations using their stellar atmospheric parameters, which were also used to select possible CEMP candidates. All of the detections with multiple observations at different epochs were investigated for signs of variability. More than half of the repeats show signs of variability in their measured radial velocities. This could be an indicator that we are looking at a pulsating object or a multiple stellar system. With a follow-up observation of one of the identified stars, we were able to confirm the existence of carbon-rich molecules in its atmosphere in a wider wavelength range. The acquired spectrum was also used to determine its sub-class. Variation in radial velocity points to a possible variable nature of the star or binarity that is common for C-H stars. Follow-up observations are required to confirm the variability of radial velocities observed for some of the detected carbon-enhanced stars and to further investigate their nature. Careful spectral analysis, with the inclusion of carbon enhancement in the models, is needed to confirm the metallicity levels of the metal-poor candidates. The list of detected stars presented in this chapter is accessible as an electronic table through the CDS. Its detailed structure is presented in Table \ref{tab:out_table}. The list also includes stars from the literature, matched with our observations, for which we were unable to confirm their carbon enhancement. The list could be used to plan further observations, allowing a better understanding of these objects.
\begin{figure} \centering \includegraphics[width=\textwidth]{tsne_refpapers.png} \caption{t-SNE projection with marked known carbon-enhanced and CEMP objects from multiple different catalogues found in the literature that are also part of our analysed set of spectra. The dense clump of known CEMP stars is located close to our region of detected CEMP stars.} \label{fig:tsne_ref_ch} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{last_170515005101173.png} \caption{Plot equivalent to Figure \ref{fig:carbon_example}, showing the last of the 400 spectra, ordered by their degree of carbon enhancement, selected by the supervised methodology.} \label{fig:carbon_last_supervised} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{bad_fit1_150902002901051.png} \caption{Plot equivalent to Figure \ref{fig:carbon_example}, but representing a grossly exaggerated carbon enhancement produced by a fit to a reduction problem (a cosmic ray in a subtracted sky spectrum).} \label{fig:bad_fit1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{bad_fit2_150603001801056.png} \caption{Plot equivalent to Figure \ref{fig:carbon_example}, but representing a fit to the absorption lines of a double-lined spectroscopic binary. The final fit is not skewed as would be expected in the case of carbon enhancement.} \label{fig:bad_fit2} \end{figure}
{ "alphanum_fraction": 0.7985183938, "avg_line_length": 174.6911764706, "ext": "tex", "hexsha": "6f4321231cee2d852729b298746bc732bd4df3fc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "486d96b3206fc015043cd7ab9a61164bb9085f3b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kcotar/PhD-thesis", "max_forks_repo_path": "04-peculiars_1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "486d96b3206fc015043cd7ab9a61164bb9085f3b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kcotar/PhD-thesis", "max_issues_repo_path": "04-peculiars_1.tex", "max_line_length": 2159, "max_stars_count": null, "max_stars_repo_head_hexsha": "486d96b3206fc015043cd7ab9a61164bb9085f3b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kcotar/PhD-thesis", "max_stars_repo_path": "04-peculiars_1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13881, "size": 59395 }
\section{Methodology} This work simulates multiple transition scenarios to advanced reactors requiring \gls{HALEU} for fuel, then quantifies and compares the front-end material requirements of each scenario. Five different fuel cycle scenarios are modeled using \Cyclus \cite{huff_fundamental_2016}, an open-source, agent-based fuel cycle simulator. \Cyclus defines facilities, institutions, and regions as agents within a fuel cycle simulation and models material transactions between agents according to a dynamic resource exchange. The first scenario models only the \glspl{LWR} deployed in the United States and provides a reference for comparison of the material requirements of the transitions. The next two scenarios model no-growth transitions to the \gls{USNC} \gls{MMR} or the X-Energy Xe-100 reactor. The last two scenarios model a 1\% annual growth transition to the \gls{USNC} \gls{MMR} or the X-Energy Xe-100 reactor. Table \ref{tab:simulations} summarizes each of the scenarios. \begin{table}[ht] \centering \caption{Summary of the fuel cycle scenarios} \label{tab:simulations} \begin{tabular}{c c c} \hline Scenario Number & Reactors Present & Growth \\\hline 1 & \glspl{LWR} & N/A \\ 2 & \glspl{LWR} and \gls{USNC} \gls{MMR} & None \\ 3 & \glspl{LWR} and X-Energy Xe-100 reactor & None \\ 4 & \glspl{LWR} and \gls{USNC} \gls{MMR} & 1\% \\ 5 & \glspl{LWR} and X-Energy Xe-100 reactor & 1\% \\\hline \end{tabular} \end{table} The \gls{IAEA} \gls{PRIS} database \cite{noauthor_power_1989} provided the grid connection date and power level of each \gls{LWR}, and the decommission date for any reactor closed before December 2020. Reactor lifetimes assume 60 years of operation after the grid connection date for any \gls{LWR} still in operation in December 2020. The simulations do not consider research or experimental reactors, and therefore only include reactors with power levels above 400 MWe. The mass of fuel in the \gls{LWR} reactor cores, including the mass required for each refueling, was obtained from supplementary sources \cite{todreas_nuclear_2012,cacuci_handbook_2010}. All \glspl{LWR} are assumed to have an 18-month cycle length. A variety of open-source, non-proprietary documents supplied data about the advanced reactors \cite{mitchell_usnc_2020, hawari_development_2018, venneri_neutronic_2015, harlan_x-energy_2018, hussain_advances_2018}. This includes the power output, enrichment level, fuel form, reactor lifetime, fuel burnup, mass of uranium in the core, and cycle time, shown in Table \ref{tab:reactor_summary}. The uranium mass in the core was calculated based on information found in public, non-proprietary sources. Knowing a reactor's \gls{EOL} burnup, power output, and cycle length allows for calculating its initial load of fissile materials. For the \gls{MMR}, a burnup of 42.7 MWd/kgU \cite{hawari_development_2018}, a power output of 40 MWth, and a cycle length of 2,042 \gls{EFPD} \cite{venneri_neutronic_2015} result in the batch mass shown in Table \ref{tab:reactor_summary}. The following simulations assume that this value remains constant for the power output and cycle length specified in the same table. The mass of uranium needed for the fresh \gls{TRISO}-based fuel pebbles for the Xe-100 reactor was found by calculating the total volume of uranium oxycarbide (UCO) \gls{TRISO} kernels in a fuel pebble, then multiplying by the number of pebbles in the Xe-100 core \cite{harlan_x-energy_2018}.
The total kernel volume in the core is then multiplied by the density of UCO and the mass fraction of uranium. The Xe-100 reactor has a greater power output, requires a higher enrichment level, and has a longer lifetime than the \gls{MMR}. However, the \gls{MMR} has a longer cycle time than the Xe-100 because it does not require refueling once it is operational. Both advanced reactors require fuel composed of \gls{TRISO} particles, but in different forms. Refueling of the Xe-100 reactor is modeled as a replacement of 1/7 of the core mass every six months. \begin{table}[ht] \caption{Advanced reactor design specifications} \label{tab:reactor_summary} \begin{tabular}{p{4cm} p{3cm} p{3.8cm} } \hline Design Criteria & \gls{USNC} \gls{MMR} \cite{mitchell_usnc_2020} & X-Energy Xe-100 \cite{harlan_x-energy_2018,hussain_advances_2018} \\\hline Reactor type & Modular HTGR & Modular HTGR \\ Power Output (MWe) & 10 & 75 \\ Enrichment (\% $^{235}U$) & 13 & 15.5 \\ Cycle Length (years) & 20 & online refueling \\ Fuel form & \gls{TRISO} compacts & \gls{TRISO} pebbles \\ Reactor Lifetime (yrs) & 20 & 60 \\ Mass of uranium per refueling (kg) & 1912.9 & 223.87 \\ Burnup (MWd/kg U) & 42.7 & 163 \\ \hline \end{tabular} \end{table} Each of the simulations models reactor deployment and operation from 1965 to 2090, with the transition to advanced reactors beginning in 2025 for the applicable scenarios. Therefore, in the no-growth scenarios the power demand remains constant at the power produced in 2025. A linear (for no growth) or exponential (for 1\% growth) equation models the energy demand of each transition scenario. A \Cycamore GrowthRegion archetype \cite{huff_fundamental_2016} defines the energy demand of the scenario and determines if additional facilities are required to meet the demand. The \glspl{LWR} are deployed using the \Cycamore DeployManagerInst archetype, and the advanced reactors are deployed as needed to meet the prescribed power demand of the scenario using the \Cycamore ManagerInst archetype \cite{huff_fundamental_2016}. The \Cycamore DeployManagerInst deploys facilities according to a manually defined schedule, with the \Cycamore GrowthRegion recognizing each facility deployed and how it contributes to the specified capacity. The scenarios model the fuel cycle from the uranium mine to final disposal in the \gls{HLW} Sink. Figure \ref{fig:fuel_cycle} shows the fuel cycle modeled in each simulation. Scenario 1 includes only the facilities in blue. All facilities are used in Scenarios 2-5, and the facility in red is the advanced reactor deployed in the scenario. Although the simulations model the back end of the fuel cycle, quantifying any waste is considered outside the scope of this work.
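As a cross-check of the values in Table \ref{tab:reactor_summary}, the \gls{MMR} batch mass quoted earlier follows directly from the stated thermal power, cycle length, and discharge burnup:
\[
m_{\mathrm{U}} = \frac{P_{\mathrm{th}}\, t_{\mathrm{cycle}}}{\mathrm{BU}}
= \frac{40\ \mathrm{MW_{th}} \times 2042\ \mathrm{EFPD}}{42.7\ \mathrm{MWd/kgU}}
\approx 1913\ \mathrm{kg\ U}.
\]
The Xe-100 pebble mass is instead obtained from the kernel-volume calculation described above.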
\begin{figure}[ht] \centering \begin{tikzpicture}[node distance=1.5cm] \node (mine) [facility] {Uranium Mine}; \node (mill) [facility, below of=mine] {Uranium Mill}; \node (conversion) [facility, below of=mill] {Conversion}; \node (enrichment) [facility, below of=conversion]{Enrichment}; \node (fabrication) [facility, below of=enrichment]{Fuel Fabrication}; \node (reactor) [facility, below of=fabrication, xshift=-2cm]{LWR}; \node (adv_reactor) [transition, below of=fabrication, xshift=2cm]{Advanced Reactor}; \node (wetstorage) [facility, below of=reactor]{Wet Storage}; \node (drystorage) [facility, below of=wetstorage]{Dry Storage}; \node (sinkhlw) [facility, below of=drystorage, xshift=2cm]{HLW Sink}; \node (sinkllw) [facility, right of=enrichment, xshift=2cm] {LLW Sink}; \draw [arrow] (mine) -- node[anchor=east]{Natural U} (mill); \draw [arrow] (mill) -- node[anchor=east]{U$_3$O$_8$}(conversion); \draw [arrow] (conversion) -- node[anchor=east]{UF$_6$}(enrichment); \draw [arrow] (enrichment) -- node[anchor=east]{Enriched U}(fabrication); \draw [arrow] (enrichment) -- node[anchor=south]{Tails}(sinkllw); \draw [arrow] (fabrication) -- node[anchor=east]{Fresh UOX}(reactor); \draw [arrow] (fabrication) -- node[anchor=west]{TRISO fuel}(adv_reactor); \draw [arrow] (reactor) -- node[anchor=east]{Spent UOX}(wetstorage); \draw [arrow] (wetstorage) -- node[anchor=east]{Cool Spent UOX}(drystorage); \draw [arrow] (drystorage) -- node[anchor=east]{Casked Spent UOX}(sinkhlw); \draw [arrow] (adv_reactor) -- node[anchor=west]{Spent TRISO Fuel}(sinkhlw); \end{tikzpicture} \caption{Fuel cycle facilities and material flow between facilities. Facilities in red are deployed in the transition scenarios.} \label{fig:fuel_cycle} \end{figure} Recipes define the composition of the materials shown in Figure \ref{fig:fuel_cycle}. Yacout et al. \cite{yacout_visionverifiable_2006} supplied recipes for spent and fresh \gls{LWR} fuel assuming a 51 MWd/kg-U burnup. All other recipes capture the necessary uranium isotopic ratios, but do not include other elements. This work does not include neutronic or depletion simulations to determine any material compositions. The enrichment facility assumes the feed material is \gls{NU}, and the tails assay is 0.2\%.
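The front-end natural uranium and separative work implied by these assumptions follow the standard two-stream enrichment mass balance. The sketch below is not part of the simulation input and is shown only for illustration; it computes the feed and SWU required for a given product mass, product enrichment, natural uranium feed, and the 0.2\% tails assay assumed above.
\begin{verbatim}
import math

def value_fn(x):
    # separative potential V(x) for a U-235 weight fraction x
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def front_end(product_kg, x_p, x_f=0.00711, x_t=0.002):
    # feed and SWU for product_kg of uranium enriched to x_p,
    # with natural uranium feed (x_f) and 0.2% tails (x_t)
    feed = product_kg * (x_p - x_t) / (x_f - x_t)
    tails = feed - product_kg
    swu = (product_kg * value_fn(x_p) + tails * value_fn(x_t)
           - feed * value_fn(x_f))
    return feed, swu

# e.g. one MMR batch: ~1913 kg U at 13% enrichment (values from the text)
print(front_end(1912.9, 0.13))
\end{verbatim}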
{ "alphanum_fraction": 0.7128294111, "avg_line_length": 53.9239766082, "ext": "tex", "hexsha": "1fcdc80d030635ec9feeba8c01beb84eff69a23a", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-16T13:12:19.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-16T13:12:19.000Z", "max_forks_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "abachma2/2021-bachmann-epjn", "max_forks_repo_path": "revise/methods.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_issues_repo_issues_event_max_datetime": "2021-10-05T13:24:12.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-08T17:29:46.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "abachma2/2021-bachmann-epjn", "max_issues_repo_path": "revise/methods.tex", "max_line_length": 97, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a64fa461f9507ee27a6e8a2449a08fb3bdbad485", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "abachma2/2021-bachmann-epjn", "max_stars_repo_path": "revise/methods.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-30T12:56:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-30T12:56:38.000Z", "num_tokens": 2543, "size": 9221 }
\documentclass[12pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{makeidx} \usepackage{graphicx} \usepackage{lmodern} \usepackage{kpfonts} \usepackage{apacite} \usepackage{multirow} \usepackage{framed} \setlength{\parindent}{0em} \usepackage[left=3cm,right=3cm,top=2cm,bottom=2cm]{geometry} \author{Gareth James, Daniela Witten, Trevor Hastie \& Robert Tibshirani} \title{An Introduction to Statistical Learning\\ \large{(Summary by \textit{Pablo Vivas})}} \date{January 13, 2020} \begin{document} \maketitle \section{Introduction} Statistical learning refers to a vast set of tools for \textbf{understanding data}. These tools can be classified as: \begin{itemize} \item supervised \item unsupervised \end{itemize} Broadly speaking, supervised statistical learning involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as diverse as business, medicine, astrophysics, and public policy. With unsupervised statistical learning, there are inputs but no supervising output; nevertheless we can learn relationships and structure from such data. The datasets used in this book are: \begin{itemize} \item Wage \item Stock Market \item Gene Expression \end{itemize} \textbf{\textit{Some history}} \begin{framed} Legendre and Gauss published papers on the method of \textit{least squares}.\\ Fisher proposed \textit{linear discriminant analysis} in 1936.\\ In the 1940s, various authors put forth an alternative approach, \textit{logistic regression}.\\ In the early 1970s, Nelder and Wedderburn coined the term \textit{generalized linear models} for an entire class of statistical learning methods that include both linear and logistic regression as special cases.\\ In the mid-1980s, Breiman, Friedman, Olshen, and Stone introduced \textit{classification and regression trees}.\\ Hastie and Tibshirani coined the term \textit{generalized additive models} in 1986 for a class of non-linear extensions to generalized linear models. \end{framed} \section{Statistical Learning} Suppose that we observe a quantitative response $Y$ and $p$ different predictors, $X_{1},X_{2},...,X_{p}$. \textbf{We assume} that there is some relationship between $Y$ and $X =(X_{1},X_{2},...,X_{p})$, which can be written in the very general form: \begin{center} \boxed{Y = f(X) + \epsilon} \end{center} Here $f$ is some fixed but unknown function of $X_{1},X_{2},...,X_{p}$, and $\epsilon$ is a random error term, which is independent of $X$ and has mean zero. In this formulation, $f$ represents the systematic information that $X$ provides about $Y$.\\ \newline \textit{Why Estimate $f$?} \begin{itemize} \item \textit{Prediction}: In many situations, a set of inputs $X$ are readily available, but the output $Y$ cannot be easily obtained. Since the error term averages to zero, we can predict $Y$ using \begin{center} \boxed{\hat{Y}=\hat{f}(X)} \end{center} where $\hat{f}$ represents our estimate for f, and $\hat{Y}$ represents the resulting prediction for $Y$.
\textbf{In this setting, $\hat{f}$ is often treated as a black box}, in the sense that one is not typically concerned with the exact form of $\hat{f}$, provided that it yields accurate predictions for $Y$.\\ \newline The accuracy of $\hat{Y}$ as a prediction for Y depends on two quantities, which we call the \textit{reducible error} and the \textit{irreducible error}. $\hat{f}$ will not be a perfect estimate for f, and this inaccuracy will introduce some error. This error is reducible because we can potentially improve the accuracy of $\hat{f}$ by using the most appropriate statistical learning technique to estimate f. However, even if it were possible to form a perfect estimate for f, so that our estimated response took the form $\hat{Y}=f(X)$, our prediction would still have some error in it! This is because Y is also a function of $\epsilon$, which, by definition, cannot be predicted using X. Therefore, variability associated with $\epsilon$ also affects the accuracy of our predictions. This is known as the irreducible error, because no matter how well we estimate f, we cannot reduce the error introduced by $\epsilon$. \item \textit{Inference}: We are often interested in understanding the way that Y is affected as $X_{1},X_{2},...,X_{p}$ change. In this situation we wish to estimate $f$, but our goal is not necessarily to make predictions for Y. We instead want to understand the relationship between X and Y, or more specifically, to understand how Y changes as a function of $X_{1},X_{2},...,X_{p}$. \textbf{Now $\hat{f}$ cannot be treated as a black box}, because we need to know its exact form. In this setting, one may be interested in answering the following questions: \begin{itemize} \item Which predictors are associated with the response? \item What is the relationship between the response and each predictor? \item Can the relationship between Y and each predictor be adequately summarized using a linear equation, or is the relationship more complicated? \end{itemize} \end{itemize} \newpage \textit{How Do We Estimate $f$?} \begin{itemize} \item \textit{Parametric Models}: Parametric methods involve a two-step model-based approach. \begin{enumerate} \item First, we make an assumption about the functional form, or shape, of f. For example, one very simple assumption is that f is linear in X: \begin{center} \boxed{f(X)=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{p}X_{p}} \end{center} This is a \textit{linear model}. Once we have assumed that f is linear, the problem of estimating f is greatly simplified. Instead of having to estimate an entirely arbitrary p-dimensional function f(X), one only needs to estimate the $p+1$ coefficients $\beta_{0},\beta_{1},...,\beta_{p}$. \item After a model has been selected, we need a procedure that uses the training data to fit or train the model. In the case of the linear model, we need to estimate the parameters $\beta_{0},\beta_{1},...,\beta_{p}$. That is, we want to find values of these parameters such that \begin{center} $Y \approx \beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{p}X_{p}$ \end{center} \end{enumerate} This approach reduces the problem of estimating f down to one of estimating a set of parameters. Assuming a parametric form for f simplifies the problem of estimating f because it is generally much easier to estimate a set of parameters than it is to fit an entirely arbitrary function f. The potential disadvantage of a parametric approach is that the model we choose will usually not match the true unknown form of f.
If the chosen model is too far from the true f, then our estimate will be poor. We can try to address this problem by choosing flexible models that can fit many different possible functional forms for f. But in general, fitting a more flexible model requires estimating a greater number of parameters. These more complex models can lead to a phenomenon known as overfitting the data, which essentially means they follow the errors, or noise, too closely. \item \textit{Non-Parametric Models}: Non-parametric methods do not make explicit assumptions about the functional form of f. Instead they seek an estimate of f that gets as close to the data points as possible without being too rough or wiggly. Such approaches can have a major advantage over parametric approaches: by avoiding the assumption of a particular functional form for f, they have the potential to accurately fit a wider range of possible shapes for f. Any parametric approach brings with it the possibility that the functional form used to estimate f is very different from the true f, in which case the resulting model will not fit the data well. In contrast, non-parametric approaches completely avoid this danger, since essentially no assumption about the form of f is made. But non-parametric approaches do suffer from a major disadvantage: since they do not reduce the problem of estimating f to a small number of parameters, a very large number of observations (far more than is typically needed for a parametric approach) is required in order to obtain an accurate estimate for f. \end{itemize} \newpage \textit{Prediction Accuracy and Model Interpretability: The Trade-Off}\\ \newline One might reasonably ask the following question: \textit{why would we ever choose to use a more restrictive method instead of a very flexible approach?} There are several reasons that we might prefer a more restrictive model. If we are mainly interested in \textbf{inference}, then restrictive models are much more interpretable. For instance, when inference is the goal, the linear model may be a good choice since it will be quite easy to understand the relationship between Y and $X_{1},X_{2},...,X_{p}$. In contrast, very flexible approaches can lead to such complicated estimates of f that it is difficult to understand how any individual predictor is associated with the response. \\ \newline In some settings, however, we are only interested in \textbf{prediction}, and the interpretability of the predictive model is simply not of interest. For instance, if we seek to develop an algorithm to predict the price of a stock, our sole requirement for the algorithm is that it predict accurately— interpretability is not a concern. In this setting, we might expect that it will be best to use the most flexible model available. Surprisingly, this is not always the case! We will often obtain more accurate predictions using a less flexible method. This phenomenon, which may seem counterintuitive at first glance, has to do with the potential for overfitting in highly flexible methods.\\ \newline Figure 1 provides an illustration of the trade-off between flexibility and interpretability for some methods. 
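As a toy illustration of this trade-off (not taken from the book), the sketch below fits a rigid parametric model and a more flexible non-parametric model to the same simulated data and compares their errors on held-out observations. Python and scikit-learn are used here purely for illustration.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# simulated data: Y = f(X) + eps with a non-linear f
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

lin = LinearRegression().fit(X_tr, y_tr)                   # inflexible, interpretable
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)   # flexible, harder to interpret

for name, model in [("linear", lin), ("5-NN", knn)]:
    print(name, mean_squared_error(y_te, model.predict(X_te)))
\end{verbatim}
Whether the flexible fit actually wins on the held-out data depends on the noise level and the sample size, which is precisely the point of the trade-off discussed above.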
\begin{figure}[hbtp] \centering \includegraphics[scale=0.5]{F1.png} \caption{A representation of the tradeoff between flexibility and interpretability, using different statistical learning methods} \end{figure} \textit{Supervised vs Unsupervised Learning}\\ \newline \textit{Regression vs Classification}\\ \newline \textit{Quality of fit}\\ \newline \begin{center} \boxed{MSE = \frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{f}(x_{i}))^{2}} \end{center} \textit{Bias vs Variance}\\ \newline \begin{center} \boxed{E\left[ \left( y_{0}-\hat{f}(x_{0}) \right)^{2} \right]=Var(\hat{f}(x_{0}))+[Bias(\hat{f}(x_{0}))]^{2}+Var(\epsilon)} \end{center} \section{Linear Regression} \section{Classification} \section{Resampling Methods} \section{Linear Model Selection and Regularization} \section{Moving Beyond Linearity} \section{Tree-Based Methods} \section{Support Vector Machines} \section{Unsupervised Learning} \end{document}
{ "alphanum_fraction": 0.776119403, "avg_line_length": 79.3161764706, "ext": "tex", "hexsha": "838c39d354583940d4661180568612ca02addef5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4a40864bf37123ea82adfb74d596d5779623af56", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pablo-vivas/Lecturas", "max_forks_repo_path": "1/Resumen.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4a40864bf37123ea82adfb74d596d5779623af56", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pablo-vivas/Lecturas", "max_issues_repo_path": "1/Resumen.tex", "max_line_length": 1100, "max_stars_count": null, "max_stars_repo_head_hexsha": "4a40864bf37123ea82adfb74d596d5779623af56", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pablo-vivas/Lecturas", "max_stars_repo_path": "1/Resumen.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2702, "size": 10787 }
\lesson{1}{Nov 01 2021 Mon (08:16:07)}{The Legislative Branch}{Unit 3} \subsubsection*{Bicameral Legislature} \begin{definition}[Bicameral] Congress is \bf{\it{Bicameral}}. That means that it has two Houses. They are: \begin{itemize} \item The \bf{House of Representatives} \item The \bf{Senate} \end{itemize} For a bill to become a law, it must pass by a majority in both Houses of Congress. The \bf{House} gives more representation to states with larger populations. In contrast, state representation is equal in the \bf{Senate}, so smaller states have greater relative power there. \end{definition} As the nation grew, the number of \bf{House} members grew steadily until $1911$. That's when Congress set a fixed number of $435$ representatives. The \bf{U.S. Census Bureau} conducts an official count of the population every $10$ years, mainly for the purpose of apportionment. Each state must also have a representative. \bf{House} members are expected to be more sensitive than \bf{Senators} to \it{State} and \it{Local} issues. The \bf{House} members are more dependent on support from constituents since they are \it{"Closer to the people"}. Each representative speaks for about $700,000$ people. But, senators reflect stability through their longer terms. \subsubsection*{House} \begin{itemize} \item At least one representative per state, based on population. \item Each member of the \bf{House} represents over a half-million people. \item $435$ members total. \item Two-year terms. \end{itemize} \subsubsection*{Senate} \begin{itemize} \item Two senators per state, equal representation for states. \item $100$ members total. \item Six-year terms, one-third of the \bf{Senate} is up for re-election every two years. \end{itemize} \subsubsection*{Qualifications For Congress} The \bf{House} and \bf{Senate} both have age, citizenship, and residency requirements. Members of both the \bf{House} and \bf{Senate} must be residents of the state people elect them to represent. While the Constitution does not require the \bf{House} members to live in the specific district they represent, they usually do so by tradition. Notice that the \bf{Senate} qualifications are higher for age and years of citizenship. The framers of the Constitution intended the \bf{Senate} to be the more stable \bf{House} of Congress with members likely to have greater political experience. \subsubsection*{House} \begin{figure}[h] \begin{quote} \it{"No Person shall be a Representative who shall not have attained to the Age of twenty five Years, and been seven Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State in which he shall be chosen."} \end{quote} \caption{From Article I, Section 2} \end{figure} These are the qualifications: \begin{itemize} \item At least 25 years old \item A U.S. citizen for at least seven years \item Resident of the state chosen to represent \end{itemize} \subsubsection*{Senate} \begin{figure}[h] \begin{quote} \it{"No Person shall be a Senator who shall not have attained to the Age of thirty Years, and been nine Years a Citizen of the United States, and who shall not, when elected, be an Inhabitant of that State for which he shall be chosen."} \end{quote} \caption{From Article I, Section 3} \end{figure} These are the qualifications: \begin{itemize} \item At least 30 years old \item A U.S.
citizen for at least nine years \item Resident of the state chosen to represent \end{itemize} \subsubsection*{Powers of Congress} The Constitution specifies the areas where Congress may pass legislation in \bf{Article I, Section 8}. Some of these topics include collecting taxes and establishing the process for naturalization. Congress allocates the funds collected to support specific programs and establishes agencies and rules to administer them. The final statement in \bf{Article I, Section 8} is not a specific power. It states that: \begin{figure}[h] \begin{quote} \it{"Congress has the power to make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the government of the United States, or in any department or officer thereof."} \end{quote} \caption{From Article I, Section 8} \end{figure} We call this line the \it{"Elastic Clause"} because it \bf{"Stretches"} the powers of Congress to cover areas the framers of the Constitution could not have anticipated. Interpreting the phrase \it{"Necessary and Proper"} has been controversial over the course of U.S. history. People debate what is \it{"Necessary and Proper"} and what authority is implied by the powers listed in the Constitution. For example, since Congress has the power \it{"to raise and support an Army and Navy"}, officials decided the power implied the ability to create an air force. The framers could not have anticipated the invention of airplanes, but Congress assumed that the power to maintain a military force would certainly extend to new technology and needs. \subsubsection*{House} \begin{itemize} \item All bills that will create revenue, meaning raise money through taxes, must originate in the \bf{House}. \item Only the \bf{House} can impeach high officials, like the president or a Supreme Court justice. \end{itemize} \subsubsection*{Senate} \begin{itemize} \item The Senate must approve all foreign treaties made by the president. \item Only the Senate can conduct trials for and convict impeached federal officials. \item The Senate must confirm important appointments made by the president. \end{itemize} \subsubsection*{House vs Senate} The structures of the \bf{House} and the \bf{Senate} are different. The Constitution grants many specific powers that are shared by both houses, while reserving some other powers for one house or the other. The leadership in each house differs as well. The \bf{Speaker of the House} is the presiding officer of the \bf{House of Representatives}. Members of the political party holding a majority of the \bf{House} seats choose who will serve as the \bf{Speaker}. The \bf{Speaker} sets the agenda for the \bf{House}, determines who should serve on which committees, and decides which committees will study introduced bills. The \bf{Speaker} is second in the line of presidential succession. That means the Speaker would become acting president should the president and vice president become unable to perform their official duties. The \bf{Senate's} presiding officer is technically the \bf{Vice President} according to the Constitution. However, this role has very little power, since the \bf{Vice President} can only vote on bills in the case of a tie in the \bf{Senate}.
The \bf{President Pro Tempore} is the highest-ranking member of the \bf{Senate} and presides only when the \bf{Vice President} is absent, which happens a lot. Usually, the \bf{President Pro Tempore} is the member of the majority political party who has served in the \bf{Senate} the longest. The role of the \bf{President Pro Tempore} is similar to that of the \bf{Speaker of the House}. The \bf{President Pro Tempore} is third in the line of succession after the Speaker of the House. Both the \bf{Senate} and the \bf{House} give a leadership position to a member of the political party in the minority as well. \subsubsection*{Congress Making Laws} \begin{figure}[h] \begin{quote} \it{"Each House may determine the Rules of its Proceedings… (and) shall keep a Journal of its Proceedings, and from time to time publish the same."} \end{quote} \caption{From Article I, Section 5} \end{figure} Each house keeps a record of its actions and debates. You can access current and historical actions of Congress on the Internet. Congress determines many specifics of the lawmaking process for itself. But, it takes guidance from the Constitution. It can determine when to hold a vote on a bill and when to send it to another group of Congress members for further study and edits. New legislation must pass a majority of both the \bf{House} and the \bf{Senate} before being sent to the \bf{President} for approval to become a law. The \bf{House} conducts its business more quickly than the \bf{Senate} and has limits on the length of debate for a bill. The \bf{Senate} has no limit on debate. Sometimes a senator will intentionally delay voting on a bill by talking as long as possible or reading a long piece of text. This is called a \bf{Filibuster}. A vote of cloture can end a \bf{Filibuster} and force a vote, with at least $60$ senators in agreement. \subsubsection*{Law Making Process} \paragraph{Bill Introduction} \bf{House}: A bill is introduced in the \bf{House}. It's given a number and referred to a committee. \bf{Senate}: A bill is introduced in the \bf{Senate}. It's given a number and referred to a committee. All of the bills start as an idea. Bills may begin in either the \bf{House of Representatives} or the \bf{Senate}. The representative or senator introduces his or her bill. The idea is read, given a number, and referred to a committee or subcommittee. The \bf{"H"} or \bf{"S"} in front of a bill number tells you whether it began in the \bf{House} or the \bf{Senate}. About $9,000$ bills are introduced each year in Congress, but only about $10\%$ become laws. \paragraph{Subcommittee Review} \bf{House}: A \bf{House} subcommittee often first considers the bill. It votes on advancing the bill to the full committee. \bf{Senate}: A \bf{Senate} subcommittee often first considers the bill. It votes on advancing the bill to the full committee. Most of the work on bills happens in committees. The legislators might edit the language, make additions, or take things out. Together, the \bf{House} and \bf{Senate} have $40$ committees and $160$ subcommittees. Representatives and senators will serve on committees or subcommittees that match their background or areas of interest. \paragraph{Committee Review} \bf{House}: A full committee in the \bf{House} considers and may change the bill. Then it will vote on advancing the bill. \bf{Senate}: A full committee in the \bf{Senate} considers and may change the bill. Then it votes on advancing the bill. Work in the full committee is the hardest step for a bill to pass.
Only $1$ in $6$ bills will pass this step. The full committee may make more edits to the bill's language before taking a vote. If the bill does pass, it will go to the whole \bf{House} or \bf{Senate} for debate. In the \bf{House}, rules for the debate are first set up in the \bf{House Rules Committee}. \paragraph{House Rules Committee} \bf{House}: The \bf{House Rules Committee} may make special rules for debating and changing the bill. The \bf{House} votes on the rules. The \bf{House Rules Committee} will set rules for debate on the bill. Examples of these rules can include the time allowed for debate. It might limit the number of changes that can be made to the bill. The \bf{Senate} has fewer rules and unlimited time for debate. \paragraph{Floor Action} \bf{House}: The \bf{House} debates and may change the bill. Members vote on its passage. \bf{Senate}: The \bf{Senate} debates and may change the bill. Members vote on its passage. Debate of the whole \bf{House} or \bf{Senate} is called the \it{"Floor Action"}. This step is challenging because representatives and senators come from different parts of the nation. Each member has different interests and concerns. The bill needs approval from a majority of people in both chambers to advance. \paragraph{House-Senate Compromise} \bf{House}: \bf{House} and \bf{Senate} compromise on differences between each version of the bill. \bf{Senate}: \bf{House} and \bf{Senate} compromise on differences between each version of the bill. A conference committee with members from both the \bf{House} and \bf{Senate} meets. They discuss differences between each chamber's version of the bill. They will compromise on the differences to create a single bill. A compromise happens when both sides give up some things they originally wanted. \paragraph{Final House, Senate Vote} \bf{House}: The \bf{House} debates the compromise bill and votes on it. \bf{Senate}: The \bf{Senate} debates the compromise bill and votes on it. The compromise bill is sent back to both the \bf{House} and the \bf{Senate}. Members in each debate the compromise version of the bill. They vote on whether to approve it. Approval from both the \bf{House} and the \bf{Senate} is required on the exact same version of the bill for it to continue. \subsubsection*{Committees Organization} Committees do most of the work of lawmaking and have great power in determining whether a bill continues or dies. In fact, most bills originate from the work of committees. Members of committees research issues and potential solutions, debate ideas, and discuss the interests of different groups of people affected by the issues and bills. Usually people who have seniority in Congress serve on the major committees. Generally, members of a committee have a background or special interest in the topic focus of the committee. \bf{Standing}–Bills are usually referred to a standing committee. These committees are permanent and have a specific area of responsibility. \bf{Ad hoc}–An ad hoc committee is temporary and forms to create or enforce mandates for a specific reason. Congress disbands an ad hoc committee when the need is fulfilled. \bf{Joint}–A joint committee, consisting of members from both the \bf{House} and the \bf{Senate}, forms to address important issues needing special attention from both. Joint committees make unified recommendations to the whole Congress. \bf{Conference}–A conference committee has members from both houses and forms to settle differences between similar bills passed separately in the House and Senate.
\bf{House Rules Committee}–This committee, specific to the \bf{House}, sets rules of debate for particular bills. \bf{The Joint Economic Committee (JEC)} studies and makes recommendations to the whole Congress regarding national economic conditions. A committee hearing is not a criminal or civil trial proceeding like in the judiciary system. Instead, congressional hearings are public meetings intended to gather information to inform decisions on policy and law. \subsubsection*{Influences on a Congress Member's Vote} When a member of Congress takes office, they pledge to represent the interests of the people from their district or state. However, even within a small district, ideas and concerns can be very diverse. Remember, each House member represents over a half-million people! So how do they decide how to vote on bills? Many factors influence the decisions made by members of Congress. Lawmakers study and consider their constituents' desires concerning proposed legislation. If polls and communications indicate that a majority of constituents want a bill to pass, lawmakers may be more likely to vote in favor of it. Lawmakers' own values and beliefs also influence how they vote on bills. Often they must depend on their own opinions, especially when their constituents are deeply divided on an issue. Interest groups and political parties also seek to sway members of Congress to vote for or against particular bills. Ultimately, the individuals who represent us in Congress have the difficult job of considering all the information and opinions available and making the best decision for their constituents. The amount of influence constituents, personal opinions, or organizations like interest groups and political parties have on a lawmaker's vote varies by issue and differs for each member of Congress. \bf{Republican Senator Susan Collins} from Maine was interviewed by reporters before an initial vote on a new economic investment bill. The bill met resistance, as is typical in congressional negotiations. However, a committee made up of both Democrats and Republicans, including Collins, put together a bill that passed in the Senate. The House presented its own stipulations for the bill, which is also normal in policy making. The multiple steps and delays in considering new bills help make sure the process and the final bill are thorough. Representatives must serve their constituents and consider the long-term impact of their decisions. Though sometimes the public thinks Congress moves too slowly, debate, compromise, and time are essential ingredients in creating new laws. \newpage
{ "alphanum_fraction": 0.77623329, "avg_line_length": 80.1232227488, "ext": "tex", "hexsha": "add5fd7a024f3d1e83a7e87e53000922defa4ef3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_forks_repo_licenses": [ "Info-ZIP" ], "max_forks_repo_name": "SingularisArt/notes", "max_forks_repo_path": "Grade-10/semester-1/hs-government/unit-3/lesson-1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Info-ZIP" ], "max_issues_repo_name": "SingularisArt/notes", "max_issues_repo_path": "Grade-10/semester-1/hs-government/unit-3/lesson-1.tex", "max_line_length": 992, "max_stars_count": 6, "max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_stars_repo_licenses": [ "Info-ZIP" ], "max_stars_repo_name": "SingularisArt/notes", "max_stars_repo_path": "Grade-10/semester-1/hs-government/unit-3/lesson-1.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z", "num_tokens": 3816, "size": 16906 }
\chapter{Improving Model Generalization Performance} \label{chapter:experiments2} This chapter is divided into three different sections which will address the problems left open in \Cref{chapter:experiments}, namely, the low generalization performance on the validation and test sets in comparison to the training set. For this purpose, \Cref{section:balance} studies the impact of dataset size, data augmentation methods, and class balancing techniques. Subsequently, \Cref{section:ensemble} attempts to improve this approach's performance by finding a suitable way of ensembling multiple models from different architectures. Afterward, \Cref{section:outdist} presents results of different procedures to deal with test images that fall outside the training distribution. Finally, \Cref{section:discussion} discusses and compares the obtained results to other state-of-the-art skin lesion diagnosis classifiers from the \ac{ISIC} 2019 challenge. \par \section{Data Study} \label{section:balance} This section will explore the influence of factors like dataset size, data augmentation, and class balancing on the overall generalization capability of different variants of DenseNet pre-trained models. For each experiment, a benchmark has been set taking into consideration the results obtained in \Cref{chapter:experiments}. More specifically: \begin{itemize} \item Pre-trained models from the DenseNet architecture will be used, as they are part of one of the best performing architectures in this dataset (see \autoref{fig:pre_trained_model_val_comp}), while also having a comparatively small number of trainable parameters for all of their 3 tested variants, namely, DenseNet121, DenseNet169, and DenseNet201 (see \autoref{tables:pretrainedmodels}); \item A global average pooling layer is introduced at the end of the convolutional base in order to reduce the number of parameters for the classifier; \item The original pre-trained model's classifier is replaced by a new classifier composed of one fully-connected layer with 512 neurons which uses the ReLU activation function, and one softmax layer with 8 neurons to output each class's probability; \item For the transfer learning approach, all the layers' parameters from the convolutional base of the pre-trained model are extracted and fine-tuned, which yields the best performance for the \ac{ISIC} 2019 dataset (see the results presented in \Cref{section:models}); \item Following the work done by Gessert \textit{et al.} \cite{gessert2018}, the Adam optimizer \cite{adam} is used, with $\rho_{1} = 0.9$ and $\rho_{2}=0.999$; \item The convolutional base is frozen for the initial 2 epochs with a learning rate of $10^{-3}$. The initial fine-tuning learning rate is $10^{-4}$, but is reduced by a factor of 10 when validation loss stops improving for 8 epochs. Each model trains for a maximum of 100 epochs, but early stopping is performed whenever validation loss stops improving for 16 epochs; \item For each epoch, all the samples are shuffled before being fed into the network; \item Each batch is composed of 8 samples; \item For each training process, three models are saved: the model that obtained the highest \ac{BMA} on the validation set, the model with the lowest loss on the validation set, and finally the resulting model from the last epoch. However, the performance on the test set is evaluated according to the model which attained the highest \ac{BMA} on the validation set.
\end{itemize} \subsection{Impact of Dataset Size} \label{section:dataset_impact} DenseNet201 has a considerably large number of trainable parameters when compared with other models such as DenseNet121 (see \autoref{tables:pretrainedmodels}), which in theory means that it is easier for the model to memorize samples, and therefore overfit. As such, by lowering the capacity of the network, one will force it to learn more generalizable patterns within the training data. To put this to the test, results for variations of the DenseNet with a smaller number of parameters, specifically, DenseNet121 and DenseNet169, were compared with DenseNet201 in \autoref{fig:densenet_variations_5000_comp}. Results show that indeed lowering the network's capacity reduces the amount of overfitting, but not by a significant margin. \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth]{figs/densenet_variations_5000_comp_2.pdf} \caption[Different hyperparameter tuned DenseNet pre-trained models trained with 5000 samples vs \ac{BMA} on train, validation and test sets.]{Different hyperparameter tuned DenseNet pre-trained models trained with 5000 samples vs \ac{BMA} on train, validation and test sets. Online data augmentation is turned on.} \label{fig:densenet_variations_5000_comp} \end{figure} Moreover, the DenseNet121 model in \autoref{fig:densenet_variations_5000_comp} still shows a considerable amount of overfitting, as the validation \ac{BMA} does not approach the train \ac{BMA}. Presumably, these results stem from an insufficient number of training samples relative to the number of free parameters in the network. The dataset used so far has been an undersampled version of the \ac{ISIC} 2019 challenge dataset, more specifically, 5000 training samples that keep the same class distribution as the original \ac{ISIC} 2019 challenge dataset. This means that underrepresented classes like dermatofibroma have too few samples to learn strong generalizable features (see \autoref{fig:distribution}). \par Indeed, the results in \autoref{fig:samples_comp} show that an increase in dataset size enables the DenseNet201 model to attain a significantly better validation \ac{BMA} score. So far, this has been the only comparison that was able to significantly improve generalization performance and reduce overfitting, corroborating the importance of the dataset used for training deep neural networks. However, it should be noted that even with the full dataset (20517 training samples), overfitting remains an issue that should be addressed. \par \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{figs/densenet201_samples_comp.pdf} \caption[Number of training samples vs train and validation \ac{BMA} for the DenseNet201 pre-trained model.]{Number of training samples vs train and validation \ac{BMA} for the DenseNet201 pre-trained model. Online data augmentation is turned on.} \label{fig:samples_comp} \end{figure} Furthermore, by observing \autoref{fig:samples_bma_over_epochs}, one can see that models trained with more samples take longer to converge. Presumably, this is related to the learning rate scheduler, because in models with more samples, validation loss keeps improving for longer with high learning rates (\textit{e.g.}, $10^{-4}$), which makes the scheduler decrease the learning rate later (see \autoref{fig:samples_lr_over_epochs}). In contrast, models trained with fewer samples stop improving the validation loss earlier, which makes the learning rate scheduler decrease the learning rate earlier.
Consequently, as models tend to converge to a local minimum with low learning rates, models trained with more samples take longer to converge because they take longer to reach low learning rates. \par \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/densenet201_samples_bma_over_epochs.pdf} \caption[Number of training samples vs train and validation \ac{BMA} over epochs for the DenseNet201 model.]{Number of training samples vs train and validation \ac{BMA} over epochs for the DenseNet201 model. Online data augmentation is turned on.} \label{fig:samples_bma_over_epochs} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/densenet201_samples_lr_over_epochs.pdf} \caption{Influence of the number of samples on the learning rate over epochs for the DenseNet201 model.} \label{fig:samples_lr_over_epochs} \end{figure} The results in \autoref{fig:densenet_variations_5000_comp} show that one could use a smaller pre-trained DenseNet model while maintaining similar performance, which would overall be beneficial to reduce overfitting. However, with a larger dataset, one could argue that the gap between models with a high and low number of trainable parameters would increase. Therefore, an experiment with different variations of the DenseNet has been made with 20517 training samples, to assess if this hypothesis is correct. \par Nevertheless, results in \autoref{fig:densenet_variations_20264_comp} show no significant improvements between different variants of the DenseNet architecture. Even though DenseNet201 obtained the best performance in comparison with its smaller versions, this improvement is not substantial, as the difference between DenseNet121 and DenseNet201 on test data is about 0.08\%. That difference becomes even less relevant when taking into consideration that the \ac{BMA} gap between train and test for DenseNet121 is lower than for both DenseNet169 and DenseNet201. As such, the smaller model DenseNet121 proves to be a better choice as it slightly reduces overfitting while keeping similar performance to bigger models on the test set. \par \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth]{figs/densenet_variations_20264_comp.pdf} \caption[Different hyperparameter tuned DenseNet pre-trained models trained with 20518 samples vs \ac{BMA} on train, validation and test sets]{Different hyperparameter tuned DenseNet pre-trained models trained with 20518 samples vs \ac{BMA} on train, validation and test sets. Online data augmentation is turned on.} \label{fig:densenet_variations_20264_comp} \end{figure} Finally, with the full dataset, the DenseNet121 model is able to attain a \ac{BMA} of 76.9\% and an accuracy of 84.9\% over the test set. Furthermore, results show considerable improvements on the confusion matrix in \autoref{fig:unbalanced_full_dataset_conf_matrix} when compared with the results from \autoref{fig:hyperparameter_tuned_conf_matrix}. Overall, every single class attains better performance on the test set, but this effect is much more noticeable in classes with fewer samples. Presumably, these results could be further improved with a larger training dataset, which should be a focus point for future benchmark challenges of \ac{ISIC}.
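For concreteness, the fine-tuning protocol described at the beginning of this section (pre-trained convolutional base with global average pooling, a 512-neuron ReLU layer, an 8-way softmax classifier, Adam, learning-rate reduction on plateau, and early stopping) can be sketched as follows. The use of TensorFlow/Keras here is an assumption made for illustration, not a statement about the exact implementation; the checkpoint below monitors validation loss, whereas the reported model selection uses the validation \ac{BMA}, which would require a custom metric.
\begin{verbatim}
import tensorflow as tf

base = tf.keras.applications.DenseNet121(include_top=False,
                                         weights="imagenet",
                                         pooling="avg")
x = tf.keras.layers.Dense(512, activation="relu")(base.output)
out = tf.keras.layers.Dense(8, activation="softmax")(x)
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy")

callbacks = [
    # divide the learning rate by 10 when validation loss stalls for 8 epochs
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                         factor=0.1, patience=8),
    # stop training after 16 epochs without improvement
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=16),
    # keep the weights of the best validation model
    tf.keras.callbacks.ModelCheckpoint("best_val.h5", monitor="val_loss",
                                       save_best_only=True),
]
# the two warm-up epochs with a frozen base at lr = 1e-3 are omitted here
# model.fit(train_data, validation_data=val_data, epochs=100,
#           batch_size=8, callbacks=callbacks)
\end{verbatim}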
\par
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet201_20517_unbalanced_conf_matrix.png}
\caption[Confusion matrix of the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model trained with the full unbalanced training dataset.]{Confusion matrix of the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model trained with the full unbalanced training dataset (20517 samples). Online data augmentation was turned on as a measure to reduce overfitting.}
\label{fig:unbalanced_full_dataset_conf_matrix}
\end{figure}
\subsection{Impact of Augmentation Techniques}
\label{section:dg_impact}
Considering the results from \autoref{fig:samples_comp}, there is a considerable amount of overfitting even with the full training dataset. Moreover, the results in \autoref{fig:unbalanced_full_dataset_conf_matrix} still show a considerable lack of generalization performance for underrepresented classes. As such, different strategies of data augmentation can be used to help alleviate these problems. One such strategy is offline data augmentation, which attempts to create a larger dataset by generating synthetic samples through augmentation techniques before the training process starts. \par
However, there is a wide range of image processing methods for augmenting images (\textit{e.g.}, rotations, distortions) and they can be combined to produce samples that are substantially different from their originals. As such, depending on the number of augmentations to be considered, there could be a huge number of different combinations of them, especially considering that some of them have variations and parameters which can substantially change the augmentation (\textit{e.g.}, the magnitude of distortion, the angle of rotation). Therefore, it is important to determine which combinations of data augmentation techniques significantly improve the overall performance for the problem of skin lesion classification. \par
Figure \ref{fig:data_aug_group_bal_acc_comp} shows a comparison between different augmentation groups applied to the DenseNet121 model trained with 20518 class balanced samples (approximately 2565 samples per class). Moreover, underrepresented classes are oversampled through the augmentation techniques from each of the groups, and overrepresented classes are undersampled by randomly choosing samples from that class without repetition. In turn, this means that 9089 samples are generated through data augmentation and the rest of the images are original (11420 samples). Furthermore, the experiments use the augmentation techniques described in \Cref{section:augmentation}, which are spread across the different augmentation groups. More specifically:
\begin{itemize}
\item Group 0: No augmentation techniques are applied. For data augmentation as a class balancing measure (offline data augmentation), oversampling is done by randomly copying samples from a specific class with repetition.
This is the baseline against which the other groups should be compared;
\item Group 1: Composed of slight variations to the original image, such as rotations of 90 degrees (clockwise), flips from top to bottom or vice versa, flips from left to right or vice versa, and crops that keep at least 85\% of the original image pixels to ensure that most of the lesion stays visible;
\item Group 2: Composed of the augmentation techniques of group 1 plus some adjustments on pixel intensities, specifically, brightness changes from 50\% to 150\% (100\% being the original image and 0\% being a black image), contrast changes from 50\% to 150\% (100\% being the original image and 0\% being a solid grey image), and coloration changes from 50\% to 150\% (100\% being the original and 0\% being a black and white image);
\item Group 3: Composed of the augmentation techniques of group 1 plus perspective transformations. These transformations include tilts on either the left or right side, tilts forward and backward, skews from a random corner of an image, and finally, shears with a variation anywhere from 20 degrees towards the left to 20 degrees towards the right side of the image or anywhere from 20 degrees towards the top to 20 degrees towards the bottom of the image;
\item Group 4: Composed of the augmentation techniques of group 1 plus noise induction augmentation techniques. Elastic distortions are applied with a grid size of 8, meaning that the image is divided into an 8x8 grid and each region is distorted independently. Random erasing is also applied, which takes a random area equivalent to 25\% of the original image and replaces it with random pixels, meaning that each pixel has a random intensity, ultimately producing noise in that area.
\end{itemize}
In groups 1, 2, 3, and 4 there is a 0.5 probability of applying each specific augmentation technique in the group's augmentation pipeline, which means that the same sample augmented multiple times will likely produce different augmented samples. This is a desirable property, as producing the same synthetic sample over and over again would introduce a lot of repetition in highly undersampled classes. Consequently, it could lead to problems like overfitting, because the network would try to minimize the loss for the same repeated samples rather than learning generalizable knowledge from variations of that sample. \par
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figs/data_aug_group_bal_acc_comp.pdf}
\caption[\ac{BMA} of train, validation and test sets of different offline data augmentation groups with the DenseNet121 trained on 20518 balanced samples.]{\ac{BMA} of train, validation and test sets of different offline data augmentation groups with the DenseNet121 trained on 20518 balanced samples. Online data augmentation is turned off.}
\label{fig:data_aug_group_bal_acc_comp}
\end{figure}
Considering the results from \autoref{fig:data_aug_group_bal_acc_comp}, data augmentation does overall improve the performance in comparison with group 0, which does not employ any type of augmentation. Moreover, the results indicate that simpler augmentation groups like groups 1 and 2 perform slightly better when compared to more complex augmentation groups such as groups 3 and 4, which is to be expected in the context of skin lesion classification, because such groups alter important information about the lesion itself.
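To make the group definitions and the per-technique probability of 0.5 more concrete, the snippet below sketches a group 1 style pipeline; it is a minimal NumPy approximation with a hypothetical function name, so the exact library and parameters used in this work may differ.
\begin{verbatim}
# Minimal sketch of a group 1 style augmentation: 90-degree rotation,
# vertical/horizontal flips and a crop keeping at least 85% of the pixels,
# each applied with probability 0.5. Purely illustrative.
import numpy as np

def augment_group1(img, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    out = img
    if rng.random() < 0.5:            # 90-degree rotation
        out = np.rot90(out)
    if rng.random() < 0.5:            # top-bottom flip
        out = np.flipud(out)
    if rng.random() < 0.5:            # left-right flip
        out = np.fliplr(out)
    if rng.random() < 0.5:            # crop keeping >= 85% of the area
        h, w = out.shape[:2]
        scale = rng.uniform(np.sqrt(0.85), 1.0)
        ch, cw = int(h * scale), int(w * scale)
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        out = out[top:top + ch, left:left + cw]
    return out
\end{verbatim}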
\par As described in \Cref{chapter:sota}, one of the methods used to identify skin lesions is the ABCDE method, which relies on features like the shape and color of the lesion to be diagnosed. Therefore, by changing such features, one is potentially removing or changing important information about the lesion itself, which consequently will make the network perform worse. For example, group 4 introduces noise-inducing transformations; however, there is no assurance that random erasing will not erase important information from the lesion itself. Moreover, group 4 also applies distortions to the lesion samples, but such a technique can also alter the shape of the lesion in such a way that it is no longer characteristic of its original lesion type. Similarly, by changing the colors of the lesion in group 2, important color information might be lost, and color is often a key property used by dermatologists to diagnose a lesion (\textit{e.g.}, in the ABCDE method). \par
However, offline data augmentation is not the only method of applying data augmentation. Specifically, according to the literature presented in \Cref{chapter:sota}, online data augmentation is a commonly used technique to reduce overfitting in deep learning based skin lesion classification models. While offline data augmentation can be used as a method to balance samples across classes, in online data augmentation, images are generated per iteration during training and differ in each epoch. In other words, in each epoch, the network is fed a different variation of the original image, rather than the same image. Moreover, the results presented in \autoref{fig:data_aug_group_bal_acc_comp} show that turning off online data augmentation causes a huge gap between train and validation/test \ac{BMA} (overfitting), as the training \ac{BMA} is 100\% for all 5 groups. Therefore, experiments were made to assess the impact of different combinations of online data augmentation and offline data augmentation (as a class balancing measure). As shown in \autoref{fig:data_aug_mode_bal_acc_comp}, online data augmentation significantly reduces the gap between train and validation and overall improves the \ac{BMA} on the test set, both when employed with and without offline data augmentation. Even though the model with offline and online data augmentation performed better, it also increased overfitting in comparison with the model trained only with online data augmentation. Presumably, these results reflect the concerns discussed in \Cref{section:limitations}, namely that data augmentation can lead to a model that easily discriminates synthetic examples but does not generalize the attained knowledge to real data \cite{Ching2018}. \par
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figs/data_aug_mode_bal_acc_comp.pdf}
\caption[\ac{BMA} of the train, validation and test sets of different combinations of offline and online data augmentation modes with a hyperparameter tuned DenseNet121 trained on 20518 balanced samples.]{\ac{BMA} of the train, validation and test sets of different combinations of offline and online data augmentation modes with the DenseNet121 trained on 20518 balanced samples. "No DA" is equivalent to data augmentation group 0 and "Only Offline DA" is equivalent to group 1 in \autoref{fig:data_aug_group_bal_acc_comp}. "Only Online DA" simply employs online data augmentation during the training process.
"Online and Offline DA" combines the class balancing measures from offline data augmentation and the overfitting measures from online data augmentation during training. } \label{fig:data_aug_mode_bal_acc_comp} \end{figure} In the experiments showed so far, online data augmentation always uses augmentation group 1, which as shown in \autoref{fig:data_aug_mode_bal_acc_comp}, significantly reduces overfitting. However, an interesting question can be made of whether other augmentation groups can have the same effect on overfitting and generalization performance for online data augmentation. As such, an experiment with different augmentation groups for online data augmentation with an unbalanced dataset has been done, where the dataset used is the original 20518 samples to not bias the obtained results (no offline data augmentation). \par \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/data_aug_online_group_comp.pdf} \caption{\ac{BMA} of train, validation and test sets of different online data augmentation groups with the DenseNet121 trained on 20518 unbalanced samples (no offline data augmentation).} \label{fig:data_aug_online_group_comp} \end{figure} Figure \ref{fig:data_aug_online_group_comp} shows that online data augmentation significantly reduces overfitting for all 4 augmentation groups. Furthermore, group 3 seems to almost eliminate overfitting, because the train, validation, and test \ac{BMA} scores are quite similar. Group 3 also displays the best generalization performance on the test set in comparison with other groups. In contrast, group 1 performs slightly worse than group 3 on the test set, but significantly increases the amount of overfitting because the gap between train and validation/test is much larger. Other groups like 2 and 4 also reduce overfitting, but significantly reduce generalization performance on the test set. \par Presumably, these results stem from the nature of online data augmentation. Considering an input image, in each iteration of the training process, online data augmentation will produce a different image. This means that samples generated by the augmentation procedures of group 3, will produce significantly different images in each iteration of the training process (see examples in \autoref{fig:augmentations}). In contrast, group 1 will produce somewhat similar images across all iterations because it is composed of simple augmentation techniques such as flips, rotations, and crops. This means that when one uses group 3 for online data augmentation during the training process, the model is obliged to learn a different feature each time a sample is fed to the network because it sees a significantly different sample in each iteration. Therefore, the model is forced to learn generalizable knowledge across these iterations rather than memorize equal or similar samples, like in augmentation group 1. \par Unfortunately, these results came late in the development process of this work which meant that the next experiments were made using the augmentation group 1 for online data augmentation rather than group 3. Even though the \ac{BMA} scores of group 1 and 3 are similar for the test set in \autoref{fig:data_aug_online_group_comp}, the findings of using group 3 present a major way of reducing overfitting that should be further explored in future \ac{ISIC} challenges. 
\par
\subsection{Impact of Class Balancing through Data Augmentation}
\label{section:class_bal_impact}
The results in \autoref{fig:unbalanced_full_dataset_conf_matrix} show that, even when using the full dataset, classification of underrepresented classes (\textit{e.g.}, squamous cell carcinoma or dermatofibroma) has overall worse performance than that of classes with more samples (\textit{e.g.}, melanocytic nevus). Moreover, the results in \autoref{fig:data_aug_group_bal_acc_comp} have shown that offline data augmentation as a class balancing measure significantly improves the \ac{BMA} scores across the train, validation and test sets. Therefore, one could hypothesize that this method can further improve the generalization performance of the network on underrepresented classes. \par
Results in \autoref{fig:densenet201_bma_comp} show an experiment where a balanced model with 2533 samples per class is compared against an unbalanced model with 20518 samples in terms of their \ac{BMA} over the train, validation, and test sets. Results show that the balanced model attains a better \ac{BMA} score on all three sets. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figs/densenet201_bma_comp.pdf}
\caption[Comparison of the \ac{BMA} of balanced and unbalanced datasets with 20518 samples for the train, validation and test sets using the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model.]{Comparison of the \ac{BMA} of balanced and unbalanced datasets with 20518 samples for the train, validation and test sets using the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model. Online data augmentation is turned on using group 1.}
\label{fig:densenet201_bma_comp}
\end{figure}
However, when doing the same comparison with the accuracy metric, results show that the accuracy over the unbalanced dataset is better than that of the balanced dataset (see \autoref{fig:densenet201_acc_comp}). It is evident from the results that offline data augmentation improves performance on underrepresented classes, as shown by the \ac{BMA} increase, but because overrepresented classes are undersampled, accuracy takes a hit as these classes attain slightly lower performance. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figs/densenet201_acc_comp.pdf}
\caption[Comparison of the accuracy of balanced and unbalanced datasets with 20518 samples for the train, validation and test sets using the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model.]{Comparison of the accuracy of balanced and unbalanced datasets with 20518 samples for the train, validation and test sets using the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model. Online data augmentation is turned on using group 1.}
\label{fig:densenet201_acc_comp}
\end{figure}
From the results shown, one can hypothesize that reducing the undersampling of overrepresented classes, while keeping classes balanced using offline data augmentation, would yield better overall accuracy and \ac{BMA} scores. Figure \ref{fig:densenet201_balanced_samples_metrics_comp} shows the influence of different balanced dataset sizes on the \ac{BMA} and accuracy scores. Larger balanced datasets are created by decreasing the undersampling of original samples in overrepresented classes and increasing the oversampling through offline data augmentation in underrepresented classes.
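A minimal sketch of this balancing procedure is given below: for a target number of samples per class, overrepresented classes are randomly undersampled without repetition, while underrepresented classes keep all their originals and are topped up with augmented copies; the function and variable names are hypothetical.
\begin{verbatim}
# Minimal sketch of offline class balancing: undersample overrepresented
# classes without repetition, oversample underrepresented classes with
# augmented copies until the target per-class size is reached.
import numpy as np

def balance_class(samples, target, augment, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    if len(samples) >= target:        # undersample without repetition
        idx = rng.choice(len(samples), size=target, replace=False)
        return [samples[i] for i in idx]
    balanced = list(samples)          # keep every original sample
    while len(balanced) < target:     # top up with synthetic samples
        balanced.append(augment(samples[rng.integers(len(samples))]))
    return balanced
\end{verbatim}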
Furthermore, experiments were made with class balanced datasets of 10000, 20000, 30000, 40000, 50000, 60000, 70000, and 83432 samples, in which the dataset with 83432 samples has no undersampling applied, meaning that no original samples are discarded. Therefore, each class of the dataset with 83432 samples has 10429 samples, because the most overrepresented class in the training set is the melanocytic nevus (NV) with 10429 samples. \par
Considering the results from \autoref{fig:densenet201_balanced_samples_metrics_comp}, it seems that increasing the dataset through offline data augmentation does improve the \ac{BMA} and accuracy scores on both the test and validation sets. Furthermore, on the training dataset, the \ac{BMA} scores approximate the accuracy scores, which is to be expected because the training dataset is balanced (recall \Cref{section:metrics}). \par
The \ac{BMA} and accuracy scores are close together on the test and validation sets for smaller datasets such as those with 10000, 20000, and 30000 samples. However, with larger datasets, the gap between the \ac{BMA} and accuracy scores starts to increase. This gap is due to an overall increase in the generalization performance of overrepresented classes, while underrepresented classes do not keep improving. This happens because on smaller datasets overrepresented classes are undersampled, but most if not all of the underrepresented classes are already being oversampled. Therefore, performance on overrepresented classes keeps rising, which greatly influences the accuracy scores, while underrepresented classes keep the same performance, which in turn has little impact on the \ac{BMA} scores. These results support the notion that, past a certain number of synthetically generated samples, oversampling has close to no effect on the generalization performance of underrepresented classes. \par
Furthermore, although oversampling the original dataset with class balancing seems to reduce overfitting, one can argue that this is due to the decrease in the undersampling of overrepresented classes rather than the oversampling process. One can confirm such a hypothesis by looking at the discrepancy between the accuracy and the \ac{BMA} on larger datasets. \par
It is also clear from the results that synthetic samples do not have the same impact on generalization performance as original samples. For example, the improvements in the \ac{BMA} scores shown in \autoref{fig:densenet201_balanced_samples_metrics_comp} are considerably lower than the ones obtained in \autoref{fig:densenet_variations_20264_comp}, in which the dataset is increased with original samples rather than synthetic ones. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figs/densenet201_balanced_samples_metrics_comp_proper.pdf}
\caption[Comparison of train, validation and test \ac{BMA} and accuracy scores for different balanced dataset sizes.]{Comparison of train, validation and test \ac{BMA} and accuracy scores for different balanced dataset sizes using the described oversampling and undersampling techniques. The pre-trained model used is the hyperparameter optimized and fine-tuned DenseNet121. Online data augmentation is turned on using augmentation group 1.}
\label{fig:densenet201_balanced_samples_metrics_comp}
\end{figure}
Finally, the DenseNet121 model trained with online data augmentation (group 1) on the class balanced dataset with 83432 samples is able to attain an accuracy of $86.70\%$ and a \ac{BMA} of $81.5\%$ (see \autoref{fig:densenet201_balanced_samples_metrics_comp}).
These results show the remarkable impact of online and offline data augmentation techniques on the generalization performance of the network. This represents a significant improvement over the results obtained in \Cref{section:dataset_impact}, where the model was not using class balanced samples. Presumably, this jump in performance is due to an increase in sample size for classes that are underrepresented in the original dataset. One can confirm such a hypothesis by looking at \autoref{fig:82400_balanced_conf_matrix}, which, in comparison with \autoref{fig:unbalanced_full_dataset_conf_matrix}, shows that the network is more capable of correctly classifying underrepresented classes. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet201_82400_balanced_conf_matrix.pdf}
\caption{Confusion matrix of the hyperparameter optimized and fine-tuned DenseNet121 pre-trained model, trained with the balanced and oversampled \ac{ISIC} 2019 dataset with 83432 samples. Online data augmentation was turned on during training with augmentation group 1.}
\label{fig:82400_balanced_conf_matrix}
\end{figure}
\section{Model Ensemble}
\label{section:ensemble}
The literature suggests that the most successful approaches towards \ac{ISIC} challenges are those based on an ensemble of classifiers \cite{humanvsisic2018}. Even though this method might not be practical for real-world scenarios due to the added computational requirements, it is useful to assess what improvements it could bring to the generalization performance of this approach. \par
As seen in \Cref{section:skin_transfer_learning}, in order to properly create an ensemble one must combine different classifiers trained under different conditions (\textit{e.g.}, model architectures, hyperparameters) or with different data (\textit{e.g.}, bagging). However, in this work an ensemble of different models pre-trained on ImageNet is created, since different pre-trained models were already studied in \Cref{section:models}. Furthermore, an important aspect to consider is that simply selecting different pre-trained models within the same architecture is not enough, because the different variations within a model architecture are just scaled up/down versions of a baseline model. Averaging the predictions of such models would not significantly increase the overall performance because each model would produce similar results (\textit{e.g.}, VGG16 and VGG19, or EfficientNetB2 and EfficientNetB0). Therefore, the choice of the models to use from each architecture in the ensemble is based on the results from \autoref{fig:pre_trained_model_val_comp}. More specifically, this approach selects the best performing model from each architecture: VGG16, ResNet152, InceptionResNetV2 and EfficientNetB2. The exception to this criterion is the DenseNet, as the chosen model is the DenseNet121 instead of the DenseNet201 (see the arguments behind this decision in \Cref{section:dataset_impact}). \par
Furthermore, different combinations of these models will be presented in order to attain the best possible performance. For simplicity, all models use the hyperparameters optimized for the DenseNet in \Cref{section:hyperparameters}. \par
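Before detailing the training conditions, the averaging scheme used to build each ensemble can be sketched as follows: the softmax probability vectors produced by the individual models are averaged and the class with the highest mean probability is selected. The snippet assumes Keras-style models exposing a \texttt{predict} method that returns per-class probabilities and is purely illustrative.
\begin{verbatim}
# Illustrative averaging ensemble: mean of the models' softmax outputs,
# followed by an argmax over the averaged per-class probabilities.
import numpy as np

def ensemble_predict(models, x):
    probs = np.mean([m.predict(x) for m in models], axis=0)
    return np.argmax(probs, axis=1)
\end{verbatim}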
All the models were trained under the same conditions based on the previous section's results, more specifically:
\begin{itemize}
\item A global average pooling layer is introduced at the end of the convolutional base in order to reduce the number of parameters for the classifier;
\item The original pre-trained model's classifier is replaced by a new classifier composed of one fully-connected layer with 512 neurons that use the ReLU activation function, and one softmax layer with 8 neurons that outputs each class's probability;
\item All layers' weights are transferred from the pre-trained model and then fine-tuned, which, as shown in \Cref{section:models}, yields higher performance since the parameters continue to be optimized for the target dataset;
\item Following the work done by Gessert \textit{et al.} \cite{gessert2018}, the Adam optimizer \cite{adam} is used, with $\rho_{1} = 0.9$ and $\rho_{2}=0.999$;
\item The convolutional base is frozen for the initial 2 epochs with a learning rate of $10^{-3}$. The initial fine-tuning learning rate is $10^{-4}$ but is reduced by a factor of 10 when validation loss stops improving for 8 epochs. Each model trains for a maximum of 100 epochs but early stopping is performed whenever validation loss stops improving for 16 epochs;
\item For each epoch, all the samples are shuffled before being fed into the network;
\item Each batch is composed of 8 samples;
\item Both online and offline data augmentation are used with random crops, rotations, and flips, each with a probability of 0.5. Offline data augmentation is used as a class balancing measure and online data augmentation as a method to alleviate overfitting;
\item For each training process, 3 models are saved: the model that obtained the highest \ac{BMA} on the validation set, the model with the lowest loss on the validation set, and finally the resulting model from the last epoch. However, the performance on the test set is evaluated according to the model which attained the highest \ac{BMA} on the validation set.
\end{itemize}
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{|l|X|X|X|}
\hline
Approach Name & Pre-trained models & \ac{BMA} & Accuracy \\ \hline
EfficientNetB2 approach & EfficientNetB2 & $\approx$ 0.820 & $\approx$ 0.851 \\ \hline
DenseNet121 approach & DenseNet121 & $\approx$ 0.815 & $\approx$ 0.867 \\ \hline
InceptionResNetV2 approach & InceptionResNetV2 & $\approx$ 0.811 & $\approx$ 0.862 \\ \hline
ResNet152 approach & ResNet152 & $\approx$ 0.797 & $\approx$ 0.858 \\ \hline
VGG16 approach & VGG16 & $\approx$ 0.753 & $\approx$ 0.824 \\ \hline
Ensemble of best 2 models & EfficientNetB2, DenseNet121 & $\approx$ 0.842 & $\approx$ 0.878 \\ \hline
Ensemble of best 3 models & EfficientNetB2, DenseNet121, InceptionResNetV2 & $\approx$ 0.846 & $\approx$ 0.891 \\ \hline
Ensemble of best 4 models & EfficientNetB2, DenseNet121, InceptionResNetV2, ResNet152 & $\approx$ 0.841 & $\approx$ 0.894 \\ \hline
Ensemble of all 5 models & EfficientNetB2, DenseNet121, InceptionResNetV2, ResNet152, VGG16 & $\approx$ 0.836 & $\approx$ 0.893 \\ \hline
\end{tabularx}
\caption[\ac{BMA} and Accuracy of different models and ensembles for 8-class classification of skin lesions.]{\ac{BMA} and Accuracy of different models and ensembles for 8-class classification of skin lesions. The softmax probabilities of an ensemble's models are averaged together in order to create each ensemble.
All models (including models within ensembles) are fine-tuned and use the best hyperparameters found in \Cref{section:hyperparameters}. The training dataset is composed of 83432 class balanced samples (offline data augmentation), where synthetic samples are created using augmentation group 1. Online data augmentation is turned on with augmentation group 1 as a measure to reduce overfitting.}
\label{tables:82400_ensemble}
\end{table}
Results from \autoref{tables:82400_ensemble} show significant improvements of all ensembles over single-model performance on both \ac{BMA} and accuracy, with the ensemble of the best 3 models being the best performing one, reaching nearly 90\% accuracy. Presumably, all the ensembles have superior performance to single models because one is averaging predictions across models with significantly different architectures. This produces significant variations within the softmax probabilities, which, when averaged, produce better generalization performance. \par
One can also observe that ensembling the largest number of models is not necessarily the best approach. For example, the ensemble of the best 4 models and the ensemble of all 5 models yield worse \ac{BMA} scores than the ensemble of the best 3. This can be attributed to the discrepancy in performance between VGG16 and ResNet152 and the rest of the models (EfficientNetB2, DenseNet121, InceptionResNetV2), which, as a consequence of the averaging scheme, brings the performance metrics down. However, the models composing the ensemble of the best 3 all have similar performance, which makes them good candidates for an ensemble with an averaging scheme. Moreover, one can assume that if more models with performance similar to EfficientNetB2, DenseNet121, and InceptionResNetV2 were available, the ensemble would attain better \ac{BMA} and accuracy scores. Finally, the confusion matrix of the ensemble of the best 3 models is shown in \autoref{fig:densenet201_best_ensembe_3}. Results show that overall every class attains slightly better performance when compared to the single-model performance of DenseNet121 in \autoref{fig:82400_balanced_conf_matrix}. This performance increase is also reflected in the \ac{BMA} and accuracy values, which become 0.846 and 0.891, respectively. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet201_best_ensembe_3.pdf}
\caption[Confusion matrix of the ensemble composed with the best 3 pre-trained models.]{Confusion matrix of the ensemble composed with the best 3 pre-trained models, namely DenseNet121, EfficientNetB2 and InceptionResNetV2. Each model is trained with 83432 balanced samples with the best hyperparameters obtained in \Cref{section:hyperparameters}. Online data augmentation is turned on.}
\label{fig:densenet201_best_ensembe_3}
\end{figure}
\section{Out of Training Distribution Detection}
\label{section:outdist}
The models presented in \Cref{section:balance} and \Cref{section:ensemble} show that deep \ac{CNN}s can be effective predictors for the task of skin lesion classification. However, no out-of-distribution detection is being performed to identify "unknown" samples. In other words, the model is unaware of whether a test sample is part of the original training data distribution (\textit{i.e.}, the original 8 classes) or not. As presented in \Cref{chapter:sota}, in the \ac{ISIC} 2019 challenge there is no single established way of identifying out-of-distribution samples, which results in a multitude of methodologies being employed by different authors.
As a means to compare such methods, this work will explore 3 of them, namely:
\begin{itemize}
\item Following the submission by Zhou \textit{et al.} \cite{isic2019second}, experiments will be made with different top-1 softmax probability thresholds;
\item The \ac{ODIN} \cite{odin} will be integrated within the single-model approach, following Wang's submission to \ac{ISIC} 2019 \cite{Wang};
\item Finally, like Gessert \textit{et al.} \cite{isic2019first} (\ac{ISIC} 2019 winner), results of out-of-distribution detection will be presented by training a model with an additional outlier class composed of additional out-of-distribution samples.
\end{itemize}
A benchmark has been set up to test and compare these 3 approaches, more specifically:
\begin{itemize}
\item For simplicity of the results, the single best model approach towards 8-class classification, namely the hyperparameter tuned DenseNet121 model, will be used instead of the ensemble presented in \Cref{section:ensemble};
\item A global average pooling layer is introduced at the end of the convolutional base in order to reduce the number of parameters for the classifier;
\item The original pre-trained model's classifier is replaced by a new classifier composed of one fully-connected layer with 512 neurons that use the ReLU activation function, and one softmax layer with either 8 or 9 neurons that outputs each class's probability;
\item For the transfer learning approach, all the layers' parameters from the convolutional base of the pre-trained model are extracted and fine-tuned, which yields the best performance for this specific dataset (see the results presented in \Cref{section:models});
\item Following the work done by Gessert \textit{et al.} \cite{gessert2018}, the Adam optimizer \cite{adam} is used, with $\rho_{1} = 0.9$ and $\rho_{2}=0.999$;
\item The convolutional base is frozen for the initial 2 epochs with a learning rate of $10^{-3}$. The initial fine-tuning learning rate is $10^{-4}$ but is reduced by a factor of 10 when validation loss stops improving for 8 epochs. Each model trains for a maximum of 100 epochs but early stopping is performed whenever validation loss stops improving for 16 epochs;
\item For each epoch, all the samples are shuffled before being fed into the network;
\item Each batch is composed of 8 samples;
\item Online and offline data augmentation are used, with random crops, rotations, and flips, each with a probability of 0.5 (augmentation group 1). Offline data augmentation is used as a class balancing measure and online data augmentation as a method to alleviate overfitting;
\item For each training process, 3 models are saved: the model that obtained the highest \ac{BMA} on the validation set, the model with the lowest loss on the validation set, and finally the resulting model from the last epoch. However, the performance on the test set is evaluated according to the model which attained the highest \ac{BMA} on the validation set.
\end{itemize}
\subsection{Softmax Threshold}
Multi-class classification models based on deep neural networks often use a softmax layer as the last layer of the classifier. The softmax probabilities of a softmax layer always add up to 1 and are spread across all classes with non-zero values. As such, the network will presumably try to optimize the softmax probabilities by maximizing the softmax of the ground truth class and minimizing the probabilities for the other classes.
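The top-1 softmax threshold rule examined in the following experiments can be written in a few lines, as sketched below; the probability matrix, the example threshold of 0.7 and the index used for the extra "unknown" class are illustrative placeholders.
\begin{verbatim}
# Illustrative top-1 softmax thresholding: samples whose highest softmax
# probability falls below the threshold are labelled as "unknown".
import numpy as np

UNKNOWN = 8  # hypothetical index for the extra "unknown" class

def apply_softmax_threshold(probs, threshold=0.7):
    preds = np.argmax(probs, axis=1)
    preds[np.max(probs, axis=1) < threshold] = UNKNOWN
    return preds
\end{verbatim}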
Even though the softmax predictions of an out-of-distribution sample in an 8-class model will add up to 1, one can assume that the highest score will be lower than that of an in-distribution sample, because the model will likely not be confident in its predictions. \par
Therefore, a threshold is imposed on the softmax so that, if the top softmax probability is lower than that threshold, the sample is labeled as "unknown", that is, as an out-of-distribution sample. However, there is no clear threshold value that will benefit the model the most. The experiments shown in \autoref{fig:densenet121_threshold_comp} suggest that low thresholds have close to no impact on the classification of the out-of-distribution samples, while high thresholds negatively impact the model performance.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figs/densenet121_threshold_comp.pdf}
\caption{Comparison of different top-1 softmax threshold values for the detection of out of training distribution samples from the test set.}
\label{fig:densenet121_threshold_comp}
\end{figure}
Considering the results, the lack of impact of low threshold values stems from the fact that the DenseNet model often outputs high softmax scores for a specific class (see the examples in \autoref{fig:softmax_examples}). This is an expected attribute of these types of deep learning models, because they were optimized for 8 classes as opposed to 9, which means that they will try to maximize softmax scores towards one of the 8 classes rather than considering that a sample might not be part of the initial 8-class distribution of the training dataset. \par
This property is also reflected in the results for higher thresholds, because such thresholds start negatively impacting correctly classified samples from other classes, as seen in the confusion matrix in \autoref{fig:densenet121_conf_matrix_thresh_0_7}. These results stand in stark contrast with the ones obtained by Hendrycks \textit{et al.} \cite{Hendrycks2019} for their baseline approach towards out-of-distribution detection (recall \Cref{section:out_of_distribution}). However, one should take into consideration that the in-distribution dataset used by them is very different from the out-of-distribution samples. More specifically, they presented results with \ac{MNIST} \cite{LeCun1998} as an in-distribution dataset and out-of-distribution datasets such as Gaussian noise, uniform noise, and black and white CIFAR-10 \cite{Krizhevskya} samples. Unfortunately, for the problem of skin lesion classification, in- and out-of-distribution samples can be quite similar, which makes this approach sub-optimal.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet121_conf_matrix_thresh_0_7.pdf}
\caption[Confusion matrix of the test set predictions on the hyperparameter tuned DenseNet121 trained with 8 classes where out of distribution samples are identified using a top-1 softmax threshold of 0.7.]{Confusion matrix of the predictions of the test set containing out of training distribution samples. The used model is the hyperparameter tuned DenseNet121 trained with 83432 class balanced samples from 8 different classes, and online data augmentation is performed with group 1 as a method to reduce overfitting.
Out of distribution samples (\textit{i.e.}, "unknown" samples) are identified using a top-1 softmax threshold of 0.7.}
\label{fig:densenet121_conf_matrix_thresh_0_7}
\end{figure}
\subsection{\ac{ODIN}}
As demonstrated in \Cref{section:out_of_distribution}, there are 3 parameters within the \ac{ODIN} that need tuning, namely: the temperature scaling $T$, the perturbation magnitude $\varepsilon$ and the threshold $\sigma$. However, as this tuning procedure has already been done by Hsin-Wei Wang \cite{Wang} for the same dataset (the \ac{ISIC} 2019 challenge dataset), it was decided not to tune these parameters, but rather to use the values already tuned by this author. More specifically, $T = 2$, $\varepsilon = 0.0002$ and $\sigma = 0.90385$. \par
Results with this method present an accuracy of 69.08\% and a \ac{BMA} of 65.35\% over the test set. As shown in the confusion matrix of \autoref{fig:densenet121_conf_matrix_odin}, it is clear that \ac{ODIN} is not performing well enough, as the predictions for samples of the unknown class are evenly distributed across the 9 classes. It is hypothesized that \ac{ODIN} is not a good option for out-of-distribution detection in skin lesion classification because in- and out-of-distribution samples are very similar to each other. This means that techniques like temperature scaling and adding small perturbations to the inputs have a small impact on the classification results. \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet121_conf_matrix_odin.pdf}
\caption[Confusion matrix of the test set predictions on the hyperparameter tuned DenseNet121 trained with 8 classes where out of distribution samples are identified using the \ac{ODIN} detector.]{Confusion matrix of the predictions of the test set containing out of training distribution samples. The used model is the hyperparameter tuned DenseNet121 trained with 83432 class balanced samples from 8 different classes, and online data augmentation is performed with group 1 as a method to reduce overfitting. Out of distribution samples (\textit{i.e.}, "unknown" samples) are identified using \ac{ODIN} \cite{odin}.}
\label{fig:densenet121_conf_matrix_odin}
\end{figure}
These results contrast with the ones obtained by Liang \textit{et al.} \cite{odin}, where the in- and out-of-distribution samples were quite different from each other. More specifically, the in-distribution images belonged to CIFAR-10 \cite{Krizhevskya}, which consists of 60000 images in 10 classes, and the out-of-distribution samples were either from a uniform noise distribution, a Gaussian noise distribution, the Tiny ImageNet dataset (a subset of the ImageNet dataset \cite{Deng2010}) composed of 200 classes, or the LSUN dataset \cite{Yu2015} composed of 30 classes (10 scene categories and 20 object categories). From this, one can assume that \ac{ODIN} only works well when the training dataset is very different from the out-of-distribution samples. \par
\subsection{Outlier Class}
From the results so far, it is evident that only using the softmax probabilities is not enough to determine whether a sample belongs to the training data distribution. As such, the final approach towards out-of-distribution detection is to re-train the original DenseNet121 model with a 9th class composed of the out-of-distribution data described in \Cref{chapter:environment}. \par
Samples are split into train, validation, and test in a stratified way, just like in the previous experiments (see \Cref{section:split}).
This means that very few samples are used to train, validate, and test this class. Therefore, to alleviate this problem, the outlier class is oversampled through data augmentation group 1 so that all classes are balanced (just like in the final approach defined in \Cref{section:data}). In practice, this means that a DenseNet121 is trained with 93861 samples (most of them synthetic). \par
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figs/densenet121_conf_matrix_unknown_train.pdf}
\caption[Confusion matrix of the test set predictions on the hyperparameter tuned DenseNet121 trained with in and out of distribution samples]{Confusion matrix of the test set predictions on the hyperparameter tuned DenseNet121 trained with 93861 class balanced samples from 9 different classes. The 9th class is composed of the out of distribution data described in \Cref{chapter:environment}. Online data augmentation is performed with group 1 as a method to reduce overfitting.}
\label{fig:densenet121_conf_matrix_9thclass}
\end{figure}
As shown in the confusion matrix in \autoref{fig:densenet121_conf_matrix_9thclass}, the results are considerably better than those of the previous approaches, with an overall accuracy of 84.72\% and a \ac{BMA} of 74.98\%. However, it is still very apparent that the "unknown" class has significantly lower performance than the rest of the classes. These findings can be attributed to the small number of rather heterogeneous samples in this class used to train the model. One can also observe that many out-of-distribution samples (UNK) are classified as melanocytic nevus (NV); again, one can speculate that this might be due to a lack of data for the "unknown" class. \par
\subsection{Approach Comparison}
Finally, these 3 approaches are compared in \autoref{tables:unknown_comp} with respect to the test \ac{BMA} and accuracy scores. It is evident from the results that training an outlier class outperforms both softmax thresholding and the \ac{ODIN} method. These findings corroborate the results attained in the literature; specifically, Hsin-Wei Wang reported that the \ac{ODIN} classifier performed poorly due to an insignificant difference between in- and out-of-distribution samples \cite{Wang}. In contrast, approaches such as the one by Gessert \textit{et al.} \cite{isic2019first} attained one of the best results in 9-class classification, presumably because of the availability of a large in-house dataset used to train an outlier class composed of out-of-distribution samples (in-distribution being the original 8 classes). As such, more effort should be put into gathering a larger set of "unknown" samples. \par
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{|l|X|X|}
\hline
Out-of-distribution method & \ac{BMA} & Accuracy \\ \hline
Softmax threshold (0.7) & 0.721 & 0.823 \\ \hline
\ac{ODIN} & 0.654 & 0.691 \\ \hline
Outlier class & 0.750 & 0.847 \\ \hline
\end{tabularx}
\caption{Comparison of \ac{BMA} and accuracy scores on the test set for the different methodologies to detect out of training distribution samples.}
\label{tables:unknown_comp}
\end{table}
However, it is important to point out some limitations of the outlier class approach. Specifically, there are very few "unknown" samples within the test set, which might be an indicator that the results are biased. Furthermore, a good portion of the samples from the "unknown" test set belongs to "Lentigo NOS".
This might also negatively impact the training process for the outlier class approach, because the most effective way of minimizing the loss within the "unknown" class is to correctly classify "Lentigo NOS" samples. Moreover, the test set has more "Lentigo NOS" samples than the rest of the classes, which can also bias the test set results. \par
\section{Results Discussion}
\label{section:discussion}
Finally, one can now compare the different methods employed throughout this work in \autoref{tables:approach_evoution}. The experiments in \Cref{section:balance} corroborated the importance of dataset size for deep neural networks and significantly improved the performance of the model in comparison to the DenseNet201 employed in \Cref{chapter:experiments}. However, sufficient labeled data for skin lesions is hard to come by, which highlights the importance of data augmentation. The presented comparisons show that simple data augmentation techniques can significantly improve generalization performance through offline data augmentation as a class balancing measure. Furthermore, while online data augmentation proved to be an excellent approach to reduce overfitting, offline data augmentation has a large impact on the generalization performance of underrepresented classes. \par
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{|l|X|X|}
\hline
Approach Step & \ac{BMA} & Accuracy \\ \hline
Pre-trained model choice (see \Cref{section:models}) & $\approx$ 0.579 & $\approx$ 0.746 \\ \hline
Hyperparameter tuned model (see \Cref{section:hyperparameters}) & $\approx$ 0.585 & $\approx$ 0.754 \\ \hline
Full dataset model without online DA (see \Cref{section:dg_impact}) & $\approx$ 0.636 & $\approx$ 0.799 \\ \hline
Full dataset model with online DA (see \Cref{section:dg_impact}) & $\approx$ 0.769 & $\approx$ 0.849 \\ \hline
Class balanced model (see \Cref{section:class_bal_impact}) & $\approx$ 0.815 & $\approx$ 0.867 \\ \hline
Ensemble of 3 models (see \Cref{section:ensemble}) & $\approx$ 0.846 & $\approx$ 0.891 \\ \hline
Model with out-of-distribution detection (see \Cref{section:outdist}) & $\approx$ 0.750 & $\approx$ 0.847 \\ \hline
\end{tabularx}
\caption{\ac{BMA} and accuracy on the test set for the different models created throughout this work.}
\label{tables:approach_evoution}
\end{table}
This study confirms that ensemble learning can further improve the results by using different \ac{CNN} architectures. Experimentation also underlines the importance of choosing the best models for the ensemble; otherwise, the averaging scheme results in a decrease in performance. However, one should assess whether such an approach is worthwhile for a practical application of these classifiers. For example, if one were to use the best model ensemble, namely the 3 model ensemble with DenseNet121, InceptionResNetV2, and EfficientNetB2, then one could expect a performance increase of about 3\% (see \autoref{tables:approach_evoution}), but it would require approximately 3 times as much time to predict the class label, since all the models must be evaluated, which might become a limiting factor when deploying this classifier into a production environment. Therefore, such an approach is useful for benchmark challenges, but it could be impractical for real-world usage. \par
The presented approach achieves competitive results in the \ac{ISIC} 2019 challenge without the need for additional external data in the original 8 classes.
It outperforms the state-of-the-art approaches for 8-class classification in both single-model performance and multi-model performance through ensembling (see \autoref{tables:results_comparisson}). Results for 9-class classification show a significant decrease in both \ac{BMA} and accuracy scores in comparison with 8-class classification, due to the underwhelming results of the out-of-distribution detection. \par
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{|l|X|X|X|X|}
\hline
\multirow{2}{*}{Approach} & \multicolumn{2}{l|}{In distribution (8 Classes)} & \multicolumn{2}{l|}{Out of distribution (9 Classes)} \\ \cline{2-5}
 & $BMA_{test}$ & $A_{test}$ & $BMA_{test}$ & $A_{test}$ \\ \hline
\textbf{3 Model Ensemble} & 0.846 & 0.891 & 0.775 & 0.858 \\ \hline
\textbf{DenseNet121} & 0.815 & 0.867 & 0.750 & 0.847 \\ \hline
Gessert \textit{et al.} \cite{isic2019first} & 0.725 & NA & 0.636 & NA \\ \hline
Zhou \textit{et al.} \cite{isic2019second} & 0.753 & NA & 0.607 & NA \\ \hline
Hsin-Wei Wang \cite{Wang} & 0.828 & NA & 0.505 & NA \\ \hline
\end{tabularx}
\caption[Approach comparison with state-of-the-art approaches on 8- and 9-class classification for the \ac{ISIC} 2019.]{Approach comparison with state-of-the-art approaches on 8- and 9-class classification for the \ac{ISIC} 2019. The proposed approaches are highlighted in bold, namely the "3 Model Ensemble" and the "DenseNet121". The in distribution models (8 classes) are trained with 83432 class balanced samples and the out of distribution models (9 classes) are trained with 93861 class balanced samples, where the 9th class is composed of samples from a different distribution than the original 8 classes (see \autoref{subsection:unknown}). The approaches from Gessert \textit{et al.} \cite{isic2019first}, Zhou \textit{et al.} \cite{isic2019second}, and Hsin-Wei Wang \cite{Wang} are ensembles, as these are their best performing models. For detailed single-model performance of these approaches, refer to their original papers.}
\label{tables:results_comparisson}
\end{table}
The obtained results for 8-class classification of skin lesions in the \ac{ISIC} 2019 dataset reflect the extensive experimentation done throughout this work, which led, step by step, to a systematically better performing model. However, some limitations are still present; more specifically, the approach taken for the "unknown" class was unable to attain good generalization performance, with the highest performing method being the outlier class. This is a fairly limited approach in the sense that one does not know the real-world distribution of this class. Even though other strategies such as softmax thresholding and the \ac{ODIN} were experimented with, those methods proved to be very ineffective in skin lesion diagnosis. \par
Even though the presented results are superior to the state-of-the-art (Gessert \textit{et al.} \cite{isic2019first}) on 9-class classification, it should be noted that the test set used to compute the \ac{BMA} and accuracy for the 9-class classification in this approach was part of the unknown dataset gathered from the \ac{ISIC} archive (see \autoref{subsection:unknown}), rather than the 9-class test set originally used to compare different approaches on \ac{ISIC} 2019. This is the case because the \ac{ISIC} organization does not provide the ground truth data related to the \ac{ISIC} 2019 challenge test data. Furthermore, they do not allow new submissions to this challenge's leaderboard, which would otherwise allow this approach's metrics to be computed on the official test set. \par
\makeatletter \newbox\Wrappedcontinuationbox \newbox\Wrappedvisiblespacebox \newcommand*\Wrappedvisiblespace {\textcolor{red}{\textvisiblespace}} \newcommand*\Wrappedcontinuationsymbol {\textcolor{red}{\llap{\tiny$\m@th\hookrightarrow$}}} \newcommand*\Wrappedcontinuationindent {3ex } \newcommand*\Wrappedafterbreak {\kern\Wrappedcontinuationindent\copy\Wrappedcontinuationbox} % Take advantage of the already applied Pygments mark-up to insert % potential linebreaks for TeX processing. % {, <, #, %, $, ' and ": go to next line. % _, }, ^, &, >, - and ~: stay at end of broken line. % Use of \textquotesingle for straight quote. \newcommand*\Wrappedbreaksatspecials {% \def\PYGZus{\discretionary{\char`\_}{\Wrappedafterbreak}{\char`\_}}% \def\PYGZob{\discretionary{}{\Wrappedafterbreak\char`\{}{\char`\{}}% \def\PYGZcb{\discretionary{\char`\}}{\Wrappedafterbreak}{\char`\}}}% \def\PYGZca{\discretionary{\char`\^}{\Wrappedafterbreak}{\char`\^}}% \def\PYGZam{\discretionary{\char`\&}{\Wrappedafterbreak}{\char`\&}}% \def\PYGZlt{\discretionary{}{\Wrappedafterbreak\char`\<}{\char`\<}}% \def\PYGZgt{\discretionary{\char`\>}{\Wrappedafterbreak}{\char`\>}}% \def\PYGZsh{\discretionary{}{\Wrappedafterbreak\char`\#}{\char`\#}}% \def\PYGZpc{\discretionary{}{\Wrappedafterbreak\char`\%}{\char`\%}}% \def\PYGZdl{\discretionary{}{\Wrappedafterbreak\char`\$}{\char`\$}}% \def\PYGZhy{\discretionary{\char`\-}{\Wrappedafterbreak}{\char`\-}}% \def\PYGZsq{\discretionary{}{\Wrappedafterbreak\textquotesingle}{\textquotesingle}}% \def\PYGZdq{\discretionary{}{\Wrappedafterbreak\char`\"}{\char`\"}}% \def\PYGZti{\discretionary{\char`\~}{\Wrappedafterbreak}{\char`\~}}% } % Some characters . , ; ? ! / are not pygmentized. % This macro makes them "active" and they will insert potential linebreaks \newcommand*\Wrappedbreaksatpunct {% \lccode`\~`\.\lowercase{\def~}{\discretionary{\hbox{\char`\.}}{\Wrappedafterbreak}{\hbox{\char`\.}}}% \lccode`\~`\,\lowercase{\def~}{\discretionary{\hbox{\char`\,}}{\Wrappedafterbreak}{\hbox{\char`\,}}}% \lccode`\~`\;\lowercase{\def~}{\discretionary{\hbox{\char`\;}}{\Wrappedafterbreak}{\hbox{\char`\;}}}% \lccode`\~`\:\lowercase{\def~}{\discretionary{\hbox{\char`\:}}{\Wrappedafterbreak}{\hbox{\char`\:}}}% \lccode`\~`\?\lowercase{\def~}{\discretionary{\hbox{\char`\?}}{\Wrappedafterbreak}{\hbox{\char`\?}}}% \lccode`\~`\!\lowercase{\def~}{\discretionary{\hbox{\char`\!}}{\Wrappedafterbreak}{\hbox{\char`\!}}}% \lccode`\~`\/\lowercase{\def~}{\discretionary{\hbox{\char`\/}}{\Wrappedafterbreak}{\hbox{\char`\/}}}% \catcode`\.\active \catcode`\,\active \catcode`\;\active \catcode`\:\active \catcode`\?\active \catcode`\!\active \catcode`\/\active \lccode`\~`\~ } \makeatother \let\OriginalVerbatim=\Verbatim \makeatletter \renewcommand{\Verbatim}[1][1]{% %\parskip\z@skip \sbox\Wrappedcontinuationbox {\Wrappedcontinuationsymbol}% \sbox\Wrappedvisiblespacebox {\FV@SetupFont\Wrappedvisiblespace}% \def\FancyVerbFormatLine ##1{\hsize\linewidth \vtop{\raggedright\hyphenpenalty\z@\exhyphenpenalty\z@ \doublehyphendemerits\z@\finalhyphendemerits\z@ \strut ##1\strut}% }% % If the linebreak is at a space, the latter will be displayed as visible % space at end of first line, and a continuation symbol starts next line. % Stretch/shrink are however usually zero for typewriter font. \def\FV@Space {% \nobreak\hskip\z@ plus\fontdimen3\font minus\fontdimen4\font \discretionary{\copy\Wrappedvisiblespacebox}{\Wrappedafterbreak} {\kern\fontdimen2\font}% }% % Allow breaks at special characters using \PYG... macros. 
\Wrappedbreaksatspecials
% Breaks at punctuation characters . , ; ? ! and / need catcode=\active
\OriginalVerbatim[#1,codes*=\Wrappedbreaksatpunct]%
}
\makeatother

% Exact colors from NB
\definecolor{incolor}{HTML}{303F9F}
\definecolor{outcolor}{HTML}{D84315}
\definecolor{cellborder}{HTML}{CFCFCF}
\definecolor{cellbackground}{HTML}{F7F7F7}

% prompt
\makeatletter
\newcommand{\boxspacing}{\kern\kvtcb@left@rule\kern\kvtcb@boxsep}
\makeatother
\newcommand{\prompt}[4]{
    {\ttfamily\llap{{\color{#2}[#3]:\hspace{3pt}#4}}\vspace{-\baselineskip}}
}

% Prevent overflowing lines due to hard-to-break entities
\sloppy

% Setup hyperref package
\hypersetup{
  breaklinks=true,  % so long urls are correctly broken across lines
  colorlinks=true,
  urlcolor=urlcolor,
  linkcolor=linkcolor,
  citecolor=citecolor,
  }

% Slightly bigger margins than the latex defaults
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}

\begin{document}

\maketitle

Based on issue \href{https://github.com/taruma/hidrokit/issues/89}{\#89}: \textbf{NRECA modelling}

Issue references:

\begin{itemize}
\item Crawford, N. H., \& Thurin, S. M. (1981). \emph{Hydrologic estimates for small hydroelectric projects}. 1800 Massachusetts Avenue NW, Washington, DC 20036: National Rural Electric Cooperative Association (NRECA).
\item Megariansyah, Taruma S. (2015): Kajian Penerapan Model NRECA di Bendung Pamarayan, Skripsi Program Sarjana, Universitas Katolik Parahyangan.
\end{itemize}

Problem description:

\begin{itemize}
\item Obtain discharge values from the NRECA model with parameters such as \emph{Moist Storage} \texttt{MSTOR} (\(mm\)), \emph{Groundwater Storage} \texttt{GSTOR} (\(mm\)), \texttt{PSUB}, \texttt{GWF}, \emph{Crop Factor} \texttt{CF}, \texttt{C}, and \emph{Area} \texttt{AREA} (\(m^2\)).
\end{itemize}

Solution strategy:

\begin{itemize}
\item Calculations and variable naming follow the Crawford \& Thurin (1981) paper.
\item Create a function for each calculation used in the model (marked by a leading \texttt{\_} in the function name).
\item Create a \texttt{model\_NRECA()} function with clearly defined parameters so that it can be developed further for model calibration or for finding the best parameter set.
\end{itemize}

Notes:

\begin{itemize}
\item The dataset used is taken from Megariansyah (2015).
\item The input data are monthly observations of rainfall \texttt{PREP} (\(mm\)) and evapotranspiration \texttt{PET} (\(mm\)).
\end{itemize}
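For orientation, the monthly water-balance relations implemented by the helper functions in the KODE section below can be summarised as follows. This is a condensed sketch distilled from the code (using the same variable names as the code); refer to Crawford \& Thurin (1981) for the authoritative definitions and parameter bounds.

\begin{align*}
\text{NOMINAL}  &= 100 + C \cdot \overline{\text{PRECIP}}_{\text{annual}} \\
\text{STORAT}   &= \text{STORAGE}/\text{NOMINAL}, \qquad \text{PRERAT} = \text{PRECIP}/\text{PET} \\
\text{ETRAT}    &= \min\!\left(\tfrac{1}{2}\,\text{STORAT} + \left(1 - \tfrac{1}{2}\,\text{STORAT}\right)\text{PRERAT},\; 1\right) \\
\text{AET}      &= \text{ETRAT} \cdot \text{PET} \cdot \text{CF}, \qquad \text{WATBAL} = \text{PRECIP} - \text{AET} \\
\text{EXMST}    &= \text{EXMRAT} \cdot \text{WATBAL}, \qquad \text{DELSTOR} = \text{WATBAL} - \text{EXMST} \\
\text{GWRECH}   &= \text{PSUB} \cdot \text{EXMST}, \qquad \text{GWSTOR2} = \text{GWSTOR1} + \text{GWRECH} \\
\text{GWFLOW}   &= \text{GWF} \cdot \text{GWSTOR2}, \qquad \text{DFLOW} = \text{EXMST} - \text{GWRECH} \\
\text{FLOW}     &= \text{DFLOW} + \text{GWFLOW} \\
\text{DISCHARGE} &= \frac{(\text{FLOW}/1000) \cdot \text{AREA}}{\text{DAYS} \cdot 86400}
\end{align*}

where \(\overline{\text{PRECIP}}_{\text{annual}}\) is the mean annual precipitation and the excess-moisture ratio \(\text{EXMRAT}\) equals \(0\) when \(\text{WATBAL} < 0\), \(1 - 0.5\,(2 - \text{STORAT})^2\) when \(\text{STORAT} > 1\), and \(0.5\,\text{STORAT}^2\) otherwise.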
\hypertarget{persiapan-dan-dataset}{% \section{PERSIAPAN DAN DATASET}\label{persiapan-dan-dataset}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{k+kn}{from} \PY{n+nn}{google}\PY{n+nn}{.}\PY{n+nn}{colab} \PY{k+kn}{import} \PY{n}{files} \PY{k}{try}\PY{p}{:} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{NRECA\PYZus{}sample.xlsx}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{except}\PY{p}{:} \PY{n}{dataset\PYZus{}path} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{files}\PY{o}{.}\PY{n}{upload}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{dataset} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{n}{dataset\PYZus{}path}\PY{p}{,} \PY{n}{header}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{index\PYZus{}col}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{parse\PYZus{}dates}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \PY{n}{dataset}\PY{o}{.}\PY{n}{info}\PY{p}{(}\PY{p}{)} \PY{n}{dataset}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{Verbatim}[commandchars=\\\{\}] <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 120 entries, 1999-01-01 to 2008-12-01 Data columns (total 2 columns): PRECIP 120 non-null float64 PET 120 non-null float64 dtypes: float64(2) memory usage: 2.8 KB \end{Verbatim} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] PRECIP PET 1999-01-01 507.000000 142.63 1999-02-01 374.228918 128.84 1999-03-01 211.762683 138.19 1999-04-01 219.793874 138.32 1999-05-01 132.121255 125.36 \end{Verbatim} \end{tcolorbox} \hypertarget{kode}{% \section{KODE}\label{kode}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\PYZsh{} FUNGSI NRECA} \PY{k}{def} \PY{n+nf}{\PYZus{}STORAT}\PY{p}{(}\PY{n}{STORAGE}\PY{p}{,} \PY{n}{NOMINAL}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{STORAGE}\PY{o}{/}\PY{n}{NOMINAL} \PY{k}{def} \PY{n+nf}{\PYZus{}STORAGE}\PY{p}{(}\PY{n}{STORAGE}\PY{p}{,} \PY{n}{DELSTOR}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{STORAGE} \PY{o}{+} \PY{n}{DELSTOR} \PY{k}{def} \PY{n+nf}{\PYZus{}PRERAT}\PY{p}{(}\PY{n}{PRECIP}\PY{p}{,} \PY{n}{PET}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{PRECIP}\PY{o}{/}\PY{n}{PET} \PY{k}{def} \PY{n+nf}{\PYZus{}ETRAT}\PY{p}{(}\PY{n}{STORAT}\PY{p}{,} \PY{n}{PRERAT}\PY{p}{)}\PY{p}{:} \PY{n}{VAL} \PY{o}{=} \PY{p}{(}\PY{n}{STORAT} \PY{o}{/} \PY{l+m+mi}{2}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{n}{STORAT} \PY{o}{/} \PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)} \PY{o}{*} 
\PY{n}{PRERAT}\PY{p}{)} \PY{k}{return} \PY{n}{VAL} \PY{k}{if} \PY{n}{VAL} \PY{o}{\PYZlt{}} \PY{l+m+mi}{1} \PY{k}{else} \PY{l+m+mi}{1} \PY{k}{def} \PY{n+nf}{\PYZus{}AET}\PY{p}{(}\PY{n}{ETRAT}\PY{p}{,} \PY{n}{PET}\PY{p}{,} \PY{n}{CF}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{ETRAT} \PY{o}{*} \PY{n}{PET} \PY{o}{*} \PY{n}{CF} \PY{k}{def} \PY{n+nf}{\PYZus{}WATBAL}\PY{p}{(}\PY{n}{PRECIP}\PY{p}{,} \PY{n}{AET}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{PRECIP} \PY{o}{\PYZhy{}} \PY{n}{AET} \PY{k}{def} \PY{n+nf}{\PYZus{}EXMRAT}\PY{p}{(}\PY{n}{WATBAL}\PY{p}{,} \PY{n}{STORAT}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{WATBAL} \PY{o}{\PYZlt{}} \PY{l+m+mi}{0}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{0} \PY{k}{if} \PY{n}{STORAT} \PY{o}{\PYZgt{}} \PY{l+m+mi}{1}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{l+m+mf}{0.5} \PY{o}{*} \PY{p}{(}\PY{l+m+mi}{2} \PY{o}{\PYZhy{}} \PY{n}{STORAT}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{k}{return} \PY{l+m+mf}{0.5} \PY{o}{*} \PY{p}{(}\PY{n}{STORAT}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{k}{def} \PY{n+nf}{\PYZus{}EXMST}\PY{p}{(}\PY{n}{EXMRAT}\PY{p}{,} \PY{n}{WATBAL}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{EXMRAT} \PY{o}{*} \PY{n}{WATBAL} \PY{k}{def} \PY{n+nf}{\PYZus{}DELSTOR}\PY{p}{(}\PY{n}{WATBAL}\PY{p}{,} \PY{n}{EXMST}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{WATBAL} \PY{o}{\PYZhy{}} \PY{n}{EXMST} \PY{k}{def} \PY{n+nf}{\PYZus{}GWRECH}\PY{p}{(}\PY{n}{PSUB}\PY{p}{,} \PY{n}{EXMST}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{PSUB} \PY{o}{*} \PY{n}{EXMST} \PY{k}{def} \PY{n+nf}{\PYZus{}GWSTOR2}\PY{p}{(}\PY{n}{GWSTOR\PYZus{}1}\PY{p}{,} \PY{n}{GWRECH}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{GWSTOR\PYZus{}1} \PY{o}{+} \PY{n}{GWRECH} \PY{k}{def} \PY{n+nf}{\PYZus{}GWSTOR1}\PY{p}{(}\PY{n}{GWSTOR\PYZus{}2}\PY{p}{,} \PY{n}{GWFLOW}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{GWSTOR\PYZus{}2} \PY{o}{\PYZhy{}} \PY{n}{GWFLOW} \PY{k}{def} \PY{n+nf}{\PYZus{}GWFLOW}\PY{p}{(}\PY{n}{GWRAT}\PY{p}{,} \PY{n}{GWSTOR\PYZus{}2}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{GWRAT} \PY{o}{*} \PY{n}{GWSTOR\PYZus{}2} \PY{k}{def} \PY{n+nf}{\PYZus{}DFLOW}\PY{p}{(}\PY{n}{EXMST}\PY{p}{,} \PY{n}{GWRECH}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{EXMST} \PY{o}{\PYZhy{}} \PY{n}{GWRECH} \PY{k}{def} \PY{n+nf}{\PYZus{}FLOW}\PY{p}{(}\PY{n}{GWFLOW}\PY{p}{,} \PY{n}{DFLOW}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{GWFLOW} \PY{o}{+} \PY{n}{DFLOW} \PY{k}{def} \PY{n+nf}{\PYZus{}DISCHARGE}\PY{p}{(}\PY{n}{FLOW}\PY{p}{,} \PY{n}{AREA}\PY{p}{,} \PY{n}{DAYS}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{p}{(}\PY{n}{FLOW}\PY{o}{/}\PY{l+m+mi}{1000}\PY{p}{)} \PY{o}{*} \PY{n}{AREA} \PY{o}{/} \PY{p}{(}\PY{n}{DAYS} \PY{o}{*} \PY{l+m+mi}{24} \PY{o}{*} \PY{l+m+mi}{60} \PY{o}{*} \PY{l+m+mi}{60}\PY{p}{)} \PY{k}{def} \PY{n+nf}{\PYZus{}NOMINAL}\PY{p}{(}\PY{n}{C}\PY{p}{,} \PY{n}{PRECIP\PYZus{}MEAN\PYZus{}ANNUAL}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{l+m+mi}{100} \PY{o}{+} \PY{n}{C}\PY{o}{*}\PY{n}{PRECIP\PYZus{}MEAN\PYZus{}ANNUAL} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{k}{def} \PY{n+nf}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{p}{,} \PY{n}{MSTOR}\PY{p}{,} \PY{n}{GSTOR}\PY{p}{,} \PY{n}{PSUB}\PY{p}{,} \PY{n}{GWF}\PY{p}{,} \PY{n}{CF}\PY{p}{,} \PY{n}{C}\PY{p}{,} \PY{n}{AREA}\PY{p}{,} \PY{n}{as\PYZus{}df}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} 
\PY{n}{report}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{discharge}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} sub\PYZus{}df} \PY{n}{data} \PY{o}{=} \PY{n}{df}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{p}{[}\PY{n}{precip\PYZus{}col}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{p}{]}\PY{p}{]} \PY{c+c1}{\PYZsh{} info df} \PY{n}{nrows} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{} initialization} \PY{n}{storage}\PY{p}{,} \PY{n}{storat}\PY{p}{,} \PY{n}{prerat} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{nrows}\PY{p}{)} \PY{k}{for} \PY{n}{\PYZus{}} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)} \PY{n}{etrat}\PY{p}{,} \PY{n}{aet}\PY{p}{,} \PY{n}{watbal} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{nrows}\PY{p}{)} \PY{k}{for} \PY{n}{\PYZus{}} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)} \PY{n}{exmst}\PY{p}{,} \PY{n}{exmrat}\PY{p}{,} \PY{n}{delstor}\PY{p}{,} \PY{n}{gwrech} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{nrows}\PY{p}{)} \PY{k}{for} \PY{n}{\PYZus{}} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{)} \PY{n}{gwstor1}\PY{p}{,} \PY{n}{gwstor2}\PY{p}{,} \PY{n}{gwflow} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{nrows}\PY{p}{)} \PY{k}{for} \PY{n}{\PYZus{}} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)} \PY{n}{dflow}\PY{p}{,} \PY{n}{flow}\PY{p}{,} \PY{n}{discharge} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{nrows}\PY{p}{)} \PY{k}{for} \PY{n}{\PYZus{}} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\PYZsh{} calculation} \PY{n}{precip\PYZus{}mean\PYZus{}annual} \PY{o}{=} \PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{n}{precip\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{n}{by}\PY{o}{=}\PY{n}{data}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{year}\PY{p}{)} \PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{p}{)} \PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{nominal} \PY{o}{=} \PY{n}{\PYZus{}NOMINAL}\PY{p}{(}\PY{n}{C}\PY{p}{,} \PY{n}{precip\PYZus{}mean\PYZus{}annual}\PY{p}{)} \PY{n}{days} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{days\PYZus{}in\PYZus{}month} \PY{n}{precip} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{n}{pet} \PY{o}{=} \PY{n}{data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{nrows}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{i} \PY{o}{!=} \PY{l+m+mi}{0}\PY{p}{:} \PY{n}{storage}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}STORAGE}\PY{p}{(}\PY{n}{storage}\PY{p}{[}\PY{n}{i} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{delstor}\PY{p}{[}\PY{n}{i} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{n}{storage}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}STORAGE}\PY{p}{(}\PY{n}{MSTOR}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{storat}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}STORAT}\PY{p}{(}\PY{n}{storage}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{nominal}\PY{p}{)} \PY{n}{prerat}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}PRERAT}\PY{p}{(}\PY{n}{precip}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{pet}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{etrat}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} 
\PY{n}{\PYZus{}ETRAT}\PY{p}{(}\PY{n}{storat}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{prerat}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{aet}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}AET}\PY{p}{(}\PY{n}{etrat}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{pet}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{CF}\PY{o}{=}\PY{n}{CF}\PY{p}{)} \PY{n}{watbal}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}WATBAL}\PY{p}{(}\PY{n}{precip}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{aet}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{exmrat}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}EXMRAT}\PY{p}{(}\PY{n}{watbal}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{storat}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{exmst}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}EXMST}\PY{p}{(}\PY{n}{exmrat}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{watbal}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{delstor}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}DELSTOR}\PY{p}{(}\PY{n}{watbal}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{exmst}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{gwrech}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}GWRECH}\PY{p}{(}\PY{n}{PSUB}\PY{p}{,} \PY{n}{exmst}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{if} \PY{n}{i} \PY{o}{!=} \PY{l+m+mi}{0}\PY{p}{:} \PY{n}{gwstor1}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}GWSTOR1}\PY{p}{(}\PY{n}{gwstor2}\PY{p}{[}\PY{n}{i} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{gwflow}\PY{p}{[}\PY{n}{i} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{n}{gwstor1}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}GWSTOR1}\PY{p}{(}\PY{n}{GSTOR}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{gwstor2}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}GWSTOR2}\PY{p}{(}\PY{n}{gwstor1}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{gwrech}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{gwflow}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}GWFLOW}\PY{p}{(}\PY{n}{GWF}\PY{p}{,} \PY{n}{gwstor2}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{dflow}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}DFLOW}\PY{p}{(}\PY{n}{exmst}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{gwrech}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{flow}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}FLOW}\PY{p}{(}\PY{n}{gwflow}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{dflow}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{discharge}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{\PYZus{}DISCHARGE}\PY{p}{(}\PY{n}{flow}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{n}{AREA}\PY{p}{,} \PY{n}{days}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} results} \PY{k}{if} \PY{n}{report}\PY{o}{.}\PY{n}{lower}\PY{p}{(}\PY{p}{)} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{full}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{results} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{stack}\PY{p}{(}\PY{p}{(} \PY{n}{days}\PY{p}{,} \PY{n}{precip}\PY{p}{,} \PY{n}{pet}\PY{p}{,} \PY{n}{storage}\PY{p}{,} \PY{n}{storat}\PY{p}{,} \PY{n}{prerat}\PY{p}{,} \PY{n}{etrat}\PY{p}{,} \PY{n}{aet}\PY{p}{,} \PY{n}{watbal}\PY{p}{,} \PY{n}{exmrat}\PY{p}{,} \PY{n}{delstor}\PY{p}{,} \PY{n}{gwrech}\PY{p}{,} \PY{n}{gwstor1}\PY{p}{,} \PY{n}{gwstor2}\PY{p}{,} \PY{n}{gwflow}\PY{p}{,} \PY{n}{dflow}\PY{p}{,} \PY{n}{flow}\PY{p}{,} \PY{n}{discharge} \PY{p}{)}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{columns\PYZus{}name} \PY{o}{=} \PY{p}{[} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DAYS}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} 
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{STORAGE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{STORAT}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRERAT}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ETRAT}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{AET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{WATBAL}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{EXMRAT}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DELSTOR}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWRECH}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWSTOR1}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWSTOR2}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWFLOW}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DFLOW}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{FLOW}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DISCHARGE}\PY{l+s+s1}{\PYZsq{}} \PY{p}{]} \PY{k}{elif} \PY{n}{report}\PY{o}{.}\PY{n}{lower}\PY{p}{(}\PY{p}{)} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{partial}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{results} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{stack}\PY{p}{(}\PY{p}{(} \PY{n}{precip}\PY{p}{,} \PY{n}{pet}\PY{p}{,} \PY{n}{storage}\PY{p}{,} \PY{n}{gwstor2}\PY{p}{,} \PY{n}{flow}\PY{p}{,} \PY{n}{discharge}\PY{p}{)}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{columns\PYZus{}name} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{STORAGE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWSTOR2}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{FLOW}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DISCHARGE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{elif} \PY{n}{report}\PY{o}{.}\PY{n}{lower}\PY{p}{(}\PY{p}{)} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{flow}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{results} \PY{o}{=} \PY{n}{flow} \PY{n}{columns\PYZus{}name} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{FLOW}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{elif} \PY{n}{report}\PY{o}{.}\PY{n}{lower}\PY{p}{(}\PY{p}{)} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{discharge}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{results} \PY{o}{=} \PY{n}{discharge} \PY{n}{columns\PYZus{}name} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{DISCHARGE}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{else}\PY{p}{:} \PY{k}{raise} \PY{n+ne}{ValueError}\PY{p}{(} \PY{n+nb}{str}\PY{p}{(}\PY{n}{report}\PY{p}{)} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ not identified. 
}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Use full / partial / flow / discharge.}\PY{l+s+s1}{\PYZsq{}} \PY{p}{)} \PY{k}{if} \PY{n}{as\PYZus{}df}\PY{p}{:} \PY{k}{return} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(} \PY{n}{data}\PY{o}{=}\PY{n}{results}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{data}\PY{o}{.}\PY{n}{index}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n}{columns\PYZus{}name} \PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{k}{return} \PY{n}{results} \end{Verbatim} \end{tcolorbox}

\hypertarget{fungsi}{%
\section{FUNCTIONS}\label{fungsi}}

\hypertarget{fungsi-model_nreca}{%
\subsection{\texorpdfstring{The \texttt{model\_NRECA()} function}{The model\_NRECA() function}}\label{fungsi-model_nreca}}

The \texttt{model\_NRECA()} function generates discharge values using the model developed by Crawford \& Thurin (1981). For the bounds of each parameter used in this model, please refer to the related paper. This \emph{notebook} focuses on how the function is used.

The function has 10 required arguments and 2 optional arguments, grouped into 3 parts:

\begin{itemize}
\item Dataset arguments:
\begin{itemize}
\item \texttt{df}: the dataset as a \texttt{pandas.DataFrame}.
\item \texttt{precip\_col}: name of the rainfall (precipitation) column.
\item \texttt{pet\_col}: name of the potential evapotranspiration column.
\end{itemize}
\item Model parameter arguments:
\begin{itemize}
\item \texttt{MSTOR}: \emph{Moist Storage} (\(mm\)).
\item \texttt{GSTOR}: \emph{Groundwater Storage} (\(mm\)).
\item \texttt{PSUB}: \emph{Percent Discharge to Subbase} (\(0.3-0.8\)).
\item \texttt{GWF}: \emph{Groundwater Fraction} (\(0.2-0.9\)).
\item \texttt{CF}: \emph{Crop Factor}.
\item \texttt{C}: catchment characteristic coefficient (\(\{0.2,0.25\}\)).
\item \texttt{AREA}: catchment area (\(m^2\)).
\end{itemize}
\item Optional function arguments:
\begin{itemize}
\item \texttt{as\_df}: \texttt{True} (default), the output is a \texttt{pandas.DataFrame}; if \texttt{False}, the output is a \texttt{numpy.ndarray}.
\item \texttt{report}: \texttt{discharge} (default). Several values are available for \texttt{report}:
\begin{itemize}
\item \texttt{full}: the output includes every variable computed in the model, such as \texttt{WATBAL}, \texttt{ETRAT}, etc.
\item \texttt{partial}: the output only includes the columns \texttt{PRECIP}, \texttt{PET}, \texttt{STORAGE}, \texttt{GWSTOR2}, \texttt{FLOW}, and \texttt{DISCHARGE}.
\item \texttt{flow}: the output only includes the \texttt{FLOW} column.
\item \texttt{discharge}: the output only includes the \texttt{DISCHARGE} column.
\end{itemize}
\end{itemize}
\end{itemize}

Units of the model outputs:

\begin{itemize}
\item \texttt{PRECIP}, \texttt{PET}, \texttt{STORAGE}, \texttt{AET}, \texttt{WATBAL}, \texttt{EXMST}, \texttt{DELSTOR}, \texttt{GWRECH}, \texttt{GWSTOR1}, \texttt{GWSTOR2}, \texttt{GWFLOW}, \texttt{DFLOW}, \texttt{FLOW} are in \(mm\)/month.
\item \texttt{STORAGE}, \texttt{STORAT}, \texttt{PRERAT}, \texttt{ETRAT}, \texttt{EXMRAT} are ratios.
\end{itemize}

\hypertarget{as_dftrue-reportdischarge-default}{%
\subsubsection{\texorpdfstring{\texttt{as\_df=True}, \texttt{report=\textquotesingle{}discharge\textquotesingle{}} (\emph{default})}{as\_df=True, report='discharge' (default)}}\label{as_dftrue-reportdischarge-default}}

The output is a \texttt{pandas.DataFrame}.
\begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{MSTOR}\PY{o}{=}\PY{l+m+mi}{1000}\PY{p}{,} \PY{n}{GSTOR}\PY{o}{=}\PY{l+m+mi}{100}\PY{p}{,} \PY{n}{PSUB}\PY{o}{=}\PY{l+m+mf}{0.4}\PY{p}{,} \PY{n}{GWF}\PY{o}{=}\PY{l+m+mf}{0.2}\PY{p}{,} \PY{n}{CF}\PY{o}{=}\PY{l+m+mf}{0.6}\PY{p}{,} \PY{n}{C}\PY{o}{=}\PY{l+m+mf}{0.25}\PY{p}{,} \PY{n}{AREA}\PY{o}{=}\PY{l+m+mf}{1450.6e6}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] DISCHARGE 1999-01-01 130.474329 1999-02-01 124.872153 1999-03-01 66.438083 1999-04-01 70.787505 1999-05-01 41.998656 \end{Verbatim} \end{tcolorbox} Sebagian argumen bisa disimpan dalam bentuk \emph{dictionary}. \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{parameter} \PY{o}{=} \PY{p}{\PYZob{}} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{MSTOR}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1000}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GSTOR}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{100}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PSUB}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mf}{0.4}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{GWF}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mf}{0.2}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{CF}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mf}{0.6}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{C}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mf}{0.25}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{AREA}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mf}{1450.6e6} \PY{p}{\PYZcb{}} \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] DISCHARGE 1999-01-01 130.474329 1999-02-01 124.872153 1999-03-01 66.438083 1999-04-01 70.787505 1999-05-01 41.998656 \end{Verbatim} \end{tcolorbox} \hypertarget{as_dffalse}{% \subsubsection{\texorpdfstring{\texttt{as\_df=False}}{as\_df=False}}\label{as_dffalse}} Keluaran berupa \texttt{numpy.array} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{,} 
\PY{n}{as\PYZus{}df}\PY{o}{=}\PY{k+kc}{False}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] array([130.47432853, 124.87215318, 66.43808331, 70.78750455, 41.99865614, 28.45640939, 24.28274088, 19.7163445 , 12.90107429, 44.81356496, 53.93315909, 90.06341788, 95.31267461, 145.23779804, 117.57476262, 78.75685187, 91.24644372, 57.71641073, 37.27870151, 22.07183021, 24.97489079, 22.58939002, 35.79315297, 26.23750108, 138.0338475 , 235.17042934, 116.11823068, 87.70033456, 104.97934937, 109.51542424, 116.47710193, 34.51540744, 71.42737681, 33.81058852, 34.87405532, 36.58594762, 110.40885462, 112.85425887, 66.53789581, 97.93354457, 35.04151568, 22.39154574, 47.55438496, 16.65090467, 13.76474786, 10.65657899, 119.65349912, 57.66475933, 137.39388673, 199.12246399, 159.01388739, 172.29628498, 133.74151565, 44.77420904, 34.66390377, 27.73112302, 36.38175359, 120.31395471, 77.61700498, 126.38336198, 114.21457768, 123.66605913, 174.79252429, 195.67904295, 99.25002079, 61.22475824, 71.16772612, 31.42972227, 57.45303916, 44.65959205, 50.75185708, 91.32761479, 141.91241982, 103.15499701, 96.64076894, 63.34595895, 91.80413108, 133.77947175, 81.64626021, 31.2233851 , 56.84185425, 53.0918533 , 111.44781119, 124.64417381, 133.68443001, 82.44572213, 103.95773659, 78.36551337, 70.09140577, 28.12253459, 21.77228484, 17.41782788, 14.39873771, 11.14740984, 26.2002207 , 71.83216778, 77.60770993, 94.77473292, 112.44031817, 75.38748734, 52.32065854, 67.14854195, 35.46514257, 18.59214774, 15.3695088 , 21.20863532, 10.74189578, 49.26924462, 91.25050052, 170.94322505, 77.98879732, 81.89122336, 59.30935231, 27.5965865 , 19.29362528, 28.03710337, 13.98514355, 65.98636155, 69.40576331, 54.21634462]) \end{Verbatim} \end{tcolorbox} \hypertarget{report}{% \subsection{\texorpdfstring{\texttt{report}}{report}}\label{report}} \hypertarget{reportfull}{% \subsubsection{\texorpdfstring{\texttt{report=\textquotesingle{}full\textquotesingle{}}}{report='full'}}\label{reportfull}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{,} \PY{n}{report}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{full}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] DAYS PRECIP PET {\ldots} DFLOW FLOW DISCHARGE 1999-01-01 31.0 507.000000 142.63 {\ldots} 194.919612 240.908894 130.474329 1999-02-01 28.0 374.228918 128.84 {\ldots} 151.288962 208.252249 124.872153 1999-03-01 31.0 211.762683 138.19 {\ldots} 68.030474 122.671834 66.438083 1999-04-01 30.0 219.793874 138.32 {\ldots} 73.035300 126.486428 70.787505 1999-05-01 31.0 132.121255 125.36 {\ldots} 30.693325 77.546671 41.998656 [5 rows x 18 columns] \end{Verbatim} \end{tcolorbox} \hypertarget{reportpartial}{% 
\subsubsection{\texorpdfstring{\texttt{report=\textquotesingle{}partial\textquotesingle{}}}{report='partial'}}\label{reportpartial}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{,} \PY{n}{report}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{partial}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] PRECIP PET STORAGE GWSTOR2 FLOW DISCHARGE 1999-01-01 507.000000 142.63 1000.000000 229.946408 240.908894 130.474329 1999-02-01 374.228918 128.84 1096.555980 284.816435 208.252249 124.872153 1999-03-01 211.762683 138.19 1141.332627 273.206798 122.671834 66.438083 1999-04-01 219.793874 138.32 1156.797186 267.255638 126.486428 70.787505 1999-05-01 132.121255 125.36 1171.873560 234.266727 77.546671 41.998656 \end{Verbatim} \end{tcolorbox} \hypertarget{reportflow}{% \subsubsection{\texorpdfstring{\texttt{report=\textquotesingle{}flow\textquotesingle{}}}{report='flow'}}\label{reportflow}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{,} \PY{n}{report}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{flow}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] \prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] FLOW 1999-01-01 240.908894 1999-02-01 208.252249 1999-03-01 122.671834 1999-04-01 126.486428 1999-05-01 77.546671 \end{Verbatim} \end{tcolorbox} \hypertarget{reportdischarge-default}{% \subsubsection{\texorpdfstring{\texttt{report=\textquotesingle{}discharge\textquotesingle{}} (default)}{report='discharge' (default)}}\label{reportdischarge-default}} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \prompt{In}{incolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{model\PYZus{}NRECA}\PY{p}{(}\PY{n}{df}\PY{o}{=}\PY{n}{dataset}\PY{p}{,} \PY{n}{precip\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PRECIP}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{pet\PYZus{}col}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{PET}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{parameter}\PY{p}{,} \PY{n}{report}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{discharge}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \begin{tcolorbox}[breakable, size=fbox, boxrule=.5pt, pad at break*=1mm, opacityfill=0] 
\prompt{Out}{outcolor}{ }{\boxspacing} \begin{Verbatim}[commandchars=\\\{\}] DISCHARGE 1999-01-01 130.474329 1999-02-01 124.872153 1999-03-01 66.438083 1999-04-01 70.787505 1999-05-01 41.998656 \end{Verbatim} \end{tcolorbox}

\hypertarget{changelog}{%
\section{Changelog}\label{changelog}}

\begin{verbatim} - 20191214 - 1.0.0 - Initial \end{verbatim}

\hypertarget{copyright-2019-taruma-sakti-megariansyah}{%
\paragraph{\texorpdfstring{Copyright © 2019 \href{https://taruma.github.io}{Taruma Sakti Megariansyah}}{Copyright © 2019 Taruma Sakti Megariansyah}}\label{copyright-2019-taruma-sakti-megariansyah}}

Source code in this notebook is licensed under an \href{https://choosealicense.com/licenses/mit/}{MIT License}. Data in this notebook is licensed under a \href{https://creativecommons.org/licenses/by/4.0/}{Creative Commons Attribution 4.0 International} license.

% Add a bibliography block to the postdoc
\end{document}
{ "alphanum_fraction": 0.5898283676, "avg_line_length": 59.8177777778, "ext": "tex", "hexsha": "b936d5b0bf72b61b347eb053ce96d4eaf07826cb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4e6fc1a2eb93ac43d619f8f485e5bf812514f052", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hidrokit/manual", "max_forks_repo_path": "booklet_hidrokit/0.4.0/tex/manual/taruma_0_3_5_hk89_NRECA.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4e6fc1a2eb93ac43d619f8f485e5bf812514f052", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hidrokit/manual", "max_issues_repo_path": "booklet_hidrokit/0.4.0/tex/manual/taruma_0_3_5_hk89_NRECA.tex", "max_line_length": 503, "max_stars_count": null, "max_stars_repo_head_hexsha": "4e6fc1a2eb93ac43d619f8f485e5bf812514f052", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hidrokit/manual", "max_stars_repo_path": "booklet_hidrokit/0.4.0/tex/manual/taruma_0_3_5_hk89_NRECA.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 23792, "size": 53836 }
\chapter{INTRODUCTION}

The Global Positioning System (GPS) is widely used outdoors, but it cannot provide good indoor localization \cite{pulkkinen2011semi, varshavsky2007gsm}. To tackle this problem, many techniques have been developed.

One intuitive approach is multi-lateration, like GPS. Similar to GPS, we need to measure the distance from a device to several ``indoor satellites''. \cite{hu2013pharos} uses LED bulbs and visible light modulation to broadcast a light bulb's position to a receiver; the receiver then estimates its distance to the bulb as a function of received signal strength (RSS). In practice, users must hold their phone face up to get a signal from at least 3 bulbs, which requires dense bulb placement. Setting up even one bulb is already expensive, since you need to let the bulb know its accurate position. \cite{whitehouse2007practical} uses received signal strength (RSS) to estimate the distance from a receiver (e.g. a cellphone) to a transmitter (e.g. a Wi-Fi access point). However, this approach requires a very specific environment (certain vegetation, etc.) and device installation (including, but not limited to, elevation and transmission power adjustment), which is not always possible. In general, multi-lateration methods have difficulty balancing 2 factors: 1) the cost of the indoor satellites, and 2) the accuracy of the distance estimation.

Another category is called fingerprinting. A fingerprint is a list of values that is location sensitive. For example, \cite{chung2011indoor} uses an array of e-compasses to obtain geo-magnetism as the fingerprint. It requires

Typical fingerprinting algorithms have two phases: a training phase and a positioning phase. In the training phase, samples are gathered in many places; each sample consists of one fingerprint and its location. In the positioning phase, a new fingerprint's location is guessed by comparing it to the fingerprints collected in the training phase. Current fingerprint choices vary: \cite{varshavsky2007gsm} uses GSM signals as the fingerprint, and \cite{bahl2000radar} uses signals from 3 base stations as the fingerprint. \cite{brunato2005statistical} uses SVM (support vector machines), while \cite{pulkkinen2011semi} uses a semi-supervised manifold learning technique (what is the difference?). CSI with deep learning and FM signals \cite{chen2012fm} have also been used as fingerprints; visual fingerprints have higher power consumption.

However, there are several problems with current fingerprinting localization. It needs a large training set to get good localization, and collecting one can be very expensive. It is also not robust to infrastructure or environment changes: say the fingerprints consist of Wi-Fi RSS; if some access points are removed or moved to a new place, the fingerprints change. In that case, localization is no longer accurate until better training data is obtained, which leads to more cost.

What can we do to improve fingerprinting localization accuracy efficiently? How can we get the same accuracy with fewer training fingerprints? How can we make localization robust to changes? To answer the first two questions, we use Fingerprint based Trajectory Reconstruction. For the third question, we use Geometric based KNN localization using Sensor Dissimilarity Information to build a localization framework that is infrastructure-free, which minimizes the need to retrain the system.

Fingerprint based Trajectory Reconstruction

While conventional fingerprinting methods try to locate a fingerprint by kNN, SVM or manifold regularization, Fingerprint based Trajectory Reconstruction takes historical fingerprints into account.
The intuition is that a single fingerprint contains noise, which results in inaccurate localization, so we use previously estimated positions from the same device to prevent overfitting and obtain better localization.

Geometric based KNN localization using Sensor Dissimilarity Information

Conventional fingerprinting localization relies on location-sensitive features; ideally, those features can be obtained no matter when, where, or with which device we measure. However, that is not always true: in real life, even when we stay at the same place, feature values may vary or be missing entirely, and those variations result in inaccurate localization. In Geometric based KNN localization using Sensor Dissimilarity Information, we assume that some pairs of devices can communicate with each other and that a dissimilarity measurement between them can be obtained; the fingerprint for one device is then the set of dissimilarities to its neighbors, instead of a fixed set of features that is hopefully measurable everywhere. To locate all of the devices, we assume the locations of some of them are known. We locate the remaining devices using kNN in a metric space in which all devices are embedded.
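To make the conventional two-phase pipeline concrete, the positioning phase can be sketched as a weighted kNN estimate over training fingerprints. This is only a minimal illustration with hypothetical RSS fingerprints and coordinates; the trajectory reconstruction and dissimilarity-based methods proposed above build on top of this baseline rather than replacing it.

\begin{verbatim}
import numpy as np

def knn_locate(train_fp, train_xy, query_fp, k=3):
    # Weighted kNN positioning: average the known locations of the k
    # training fingerprints closest to the query fingerprint.
    d = np.linalg.norm(train_fp - query_fp, axis=1)  # distance in signal space
    idx = np.argsort(d)[:k]                          # k nearest fingerprints
    w = 1.0 / (d[idx] + 1e-6)                        # closer samples weigh more
    return (train_xy[idx] * w[:, None]).sum(axis=0) / w.sum()

# Hypothetical training data: RSS from 3 access points at known (x, y) spots.
train_fp = np.array([[-40., -70., -60.], [-55., -50., -65.], [-70., -45., -50.]])
train_xy = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])
print(knn_locate(train_fp, train_xy, np.array([-50., -60., -62.])))
\end{verbatim}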
{ "alphanum_fraction": 0.8213517004, "avg_line_length": 105.5909090909, "ext": "tex", "hexsha": "79a108a559c804c81435d6f5a79d8db8c8d776f1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "740937a6a2f4fd6a43c5317f5564de1e0a60a2c5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SYGong/sygong.github.io", "max_forks_repo_path": "_drafts/Untitled-1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "740937a6a2f4fd6a43c5317f5564de1e0a60a2c5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SYGong/sygong.github.io", "max_issues_repo_path": "_drafts/Untitled-1.tex", "max_line_length": 854, "max_stars_count": null, "max_stars_repo_head_hexsha": "740937a6a2f4fd6a43c5317f5564de1e0a60a2c5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SYGong/sygong.github.io", "max_stars_repo_path": "_drafts/Untitled-1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 930, "size": 4646 }
\documentclass{report}

\def\glossaryname{Glossary}
\def\theglossary{\chapter{\glossaryname} \begin{infomap}}
\def\endtheglossary{\end{infomap}}
\makeglossary

\begin{document}
\pagenumbering{roman}

\title {Design documentation for MPPT-TEG design}
\author {Gavin Hurlbut \\ Originally written starting September 17, 2019}
\maketitle
\tableofcontents
\eject
\listoffigures
\listoftables
\eject

\setcounter{chapter}{0}
\renewcommand\chaptername{\space}
\renewcommand\thechapter{}

\chapter{Introduction}

I have for a while now been interested in alternate energy sources. This comes to a head particularly when I'm being ``off the grid'' for a time, which usually means while camping. I like to get away from it all and go camping, but being as how I am a geek at nature, I usually end up bringing electronics like my cell phone and even my laptop with me. Coding up a pet project while lying in a hammock enjoying nature is very tempting. The problem is: I end up running out of battery power far too fast.

While stumbling around the web looking for trouble... er ideas... I stumbled across a site where this enterprising dude made a hot water system that operated with no power, using the heat of the campfire and convection (and siphoning) to circulate water through a coil of copper tubing that he built his campfire on top of. He had spectacular results, heating a large cooler full of water nearly to a boil.

I was also reading about how using a thermo-electric cooling (TEC) device in reverse as a thermo-electric generator (TEG) was possible. This is done using the corollary of the Peltier effect (which cools one side of a bi-metal junction and heats the other when electricity is applied), which is known as the Seebeck effect. The Seebeck effect is described as ``a phenomenon in which a temperature difference between two dissimilar electrical conductors or semiconductors produces a voltage difference between the two substances.'' \cite{WI}

This set my engineering mind in motion, and I began to design a power generation unit that would use the hot water heated by the campfire (or other means if necessary), and use it to generate power by presenting a (hopefully) large temperature differential across TEG units. To do this, I would need to have another closed-circuit water flow that extracts heat from the hot water ``tank'' source reservoir using an immersion wort chiller coil (from homebrewing), pumps it through water-cooled heat-exchange blocks that are directly attached to the ``hot'' side of the TEGs, and then recycles it back into another reservoir, where the input of the wort chiller can then cycle it through again. The ``cold'' side of the TEG is attached to a large heatsink with air blowing across it.

For safety, to keep the TEGs from overheating, I plan to use a solenoid-controlled valve to divert the hot water right back into the reservoir rather than passing it through the heat exchangers on the TEG units. I also have the pump controlled by the CPU so it can be shut right off.

From the multiple TEG units, I want to charge lead-acid batteries (either deep-cycle batteries or sealed gel-cells) and also lithium-ion batteries (18650 format) that would be used for bootstrapping the system. This requires a common bus that all cells are powering, which might cause me issues with load-sharing, but we shall see.
I also decided (as I am an engineer after all) that I needed copious monitoring, including many temperature sensors, and also, if possible, the flow rate of the circulating water, and also voltages, currents, etc. all over the place. And, of course, I want to have an LCD screen on there to display the measurements and a WiFi connection to do an IoT-type connection via my cellphone.

I found a nice ``Thermo-electric cooling'' device on Amazon that I should be able to leverage with minimal work to work in this way. It has 8 TEG units installed (4 on each side of the square heatsink, with a fan blowing through the middle).

As I am used to using Atmel MCUs (AVR, especially) I was thinking of using Arduino for the control, but I remembered that I was recently introduced to the Cypress PSoC series, with the newer ones using ARM Cortex M-series CPU cores. Challenge accepted. My design is based around two PSoC devices: a PSoC 4L series one to do the MPPT calculations and generate the corresponding PWM control waveforms and phased clock for the second DC-DC converter, and a PSoC 5LP device that wrangles the large pile of data and publishes it in CBOR format to a custom IoT server, and also writes it to an SD card, and has a touchscreen LCD GUI.

\eject
\setcounter{page}{1}
\pagestyle{headings}
\pagenumbering{arabic}
\setcounter{chapter}{0}
\renewcommand\chaptername{Chapter}
\renewcommand\thechapter{\arabic{chapter}}

\chapter{DC-DC Design Calculations}

\section{MPPT DC-DC converter}

The first DC-DC converter is there to extract the most power possible from the TEG module. This power point is at a voltage half-way between $V_{short}$ and $V_{open}$, and a current half-way between $I_{open}$ and $I_{short}$. As $V_{short} = 0V$ and $I_{open} = 0A$, this value can be easily estimated via linear regression, as the V/I relationship at any operating point (temperature differential) is a linear slope from a Y-intercept of $V_{open}$ down to an X-intercept of $I_{short}$. Given any two points along that line, we can calculate the two intercepts, and thus calculate the expected MPPT point. Concretely, for two measured operating points $(I_1, V_1)$ and $(I_2, V_2)$, the slope is $m = (V_2 - V_1)/(I_2 - I_1)$, which gives $V_{open} = V_1 - m I_1$ and $I_{short} = -V_{open}/m$, with the expected maximum power point at $(I_{short}/2, V_{open}/2)$.

This DC-DC converter is a boost converter that aims to convert to a maximum voltage of 18V, but will do so by controlling the current flow to maximize the power. This is not the normal mode used in a boost controller, and to protect the output filter capacitors, an 18V zener diode is in place to ensure the voltage cannot swing too high. As we are aiming for a ``constant'' V/I point depending on the thermal difference across the TEG, and also on its characteristics, the output voltage may vary.

\section{Output DC-DC Converter}

To be able to harness all of the TEG units at the same time, the voltage output of each must be at the same level. Even this isn't simple to arrange, as the normal ``simple'' method to gang supplies together is to feed into the common bus using diodes, and the tolerance in the forward voltage drop of the diodes makes it unlikely that the multiple feeds will be at the exact same voltage, which will cause some sources to take ``full'' load, and some to take nearly none. To mitigate this, I will be taking the feedback for the second DC-DC converter directly from the output bus (after diodes) if possible.

This output DC-DC converter (per TEG channel) is another boost converter. This time the input/output characteristics are more the normal case for a converter, so it was easier to model.
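As a rough sanity check on the conversion ratio involved here, note that for an ideal boost converter in continuous conduction (an idealisation that ignores losses, so the real controller duty cycle will differ) the steady-state duty cycle is
\[
D \approx 1 - \frac{V_{in}}{V_{out}},
\]
so boosting a 9-18V input up to the 24V bus described in the next paragraph corresponds to duty cycles between roughly 0.25 and 0.625.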
Essentially, we will be converting 9-18V input to a regulated 24V output, and then tying all of the 24V outputs together onto a 24V bus. The battery chargers (and local power supplies) will be regulated down off the 24V bus primarily.

\section{+12V DC-DC Converter}

The +12V supply is used for 3 external things: the fans on the heat sink, the water pump, and the solenoid on the dump valve. As the primary purpose of this design is to charge 12V lead-acid batteries, I am feeding the 12V rail (via diodes) from the 12V DC-DC converter, and from both attached 12V lead-acid batteries. The idea is that once we are charging, the batteries should be at around 13.2V and will thus be sourcing all of the load on the 12V rail.

\section{+3.3V DC-DC Converter}

The +3.3V supply runs nearly all of the electronics in the design. The DC-DC converter is sourced (via diodes) from the lithium-ion battery or from the +24V rail. This allows for a bootstrapping of the system to get it up to speed before any significant power can be generated, and also allows it to shut down nicely. Once the +24V rail exceeds the LiIon battery (about 4.6V) the system electronic load should be coming from the +24V rail.

\section{+5V DC-DC Converter}

The +5V supply is also sourced in the same way as the +3.3V rail. Not much of the design uses +5V (mostly in the battery charging bits).

\eject
\addcontentsline{toc}{chapter}{Bibliography}
\nocite{*}
\bibliography{\jobname}
\bibliographystyle{alpha}
\end{document}
{ "alphanum_fraction": 0.7819512781, "avg_line_length": 50.503030303, "ext": "tex", "hexsha": "2c8df0afe19aacd1c28f8da4a2f05b16aec28514", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "468eace6b2ab321a8bd8f8fda57a7c7f3c6eb443", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Beirdo/MPPT-TEG-PSoc", "max_forks_repo_path": "documentation/tex/mppt-teg.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "468eace6b2ab321a8bd8f8fda57a7c7f3c6eb443", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Beirdo/MPPT-TEG-PSoc", "max_issues_repo_path": "documentation/tex/mppt-teg.tex", "max_line_length": 159, "max_stars_count": 1, "max_stars_repo_head_hexsha": "468eace6b2ab321a8bd8f8fda57a7c7f3c6eb443", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Beirdo/MPPT-TEG-PSoc", "max_stars_repo_path": "documentation/tex/mppt-teg.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-21T18:28:50.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-21T18:28:50.000Z", "num_tokens": 2121, "size": 8333 }
% ----------------------------------------------------------------------------
% Acknowledgments:
% If the candidate does not wish to include acknowledgments, simply delete this page
\chapter*{Acknowledgments}

Text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text. Optional text.
{ "alphanum_fraction": 0.6864754098, "avg_line_length": 54.2222222222, "ext": "tex", "hexsha": "058c45c3d83d92be90543054303e7cd09cd50464", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6d7eb32eb7d8a98e8add329876452b5bd590b664", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "thiagokokada/qualificacaoIMEUSP", "max_forks_repo_path": "pretextual/agradecimentos.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6d7eb32eb7d8a98e8add329876452b5bd590b664", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "thiagokokada/qualificacaoIMEUSP", "max_issues_repo_path": "pretextual/agradecimentos.tex", "max_line_length": 87, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6d7eb32eb7d8a98e8add329876452b5bd590b664", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "thiagokokada/qualificacaoIMEUSP", "max_stars_repo_path": "pretextual/agradecimentos.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-30T07:37:22.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-30T07:37:22.000Z", "num_tokens": 90, "size": 488 }
\section*{Structure of exam \modulecode}

The general shape of a theoretical exam for \texttt{\modulecode} is made up of highly structured open questions.

\paragraph{Question 1: } \ \\
\textbf{General shape of the question:} \textit{Given the following class definitions, and a piece of code that uses them, fill in the stack, heap, and PC with all steps taken by the program at runtime.}

\textbf{Concrete example of question:}

\lstset{numbers=left,basicstyle=\ttfamily\small}\lstset{language=[Sharp]C}
\begin{lstlisting}
public interface Option<T> {
  U Visit<U>(Func<U> onNone, Func<T, U> onSome);
}

public class Some<T> : Option<T> {
  T value;
  public Some(T value) { this.value = value; }
  public U Visit<U>(Func<U> onNone, Func<T, U> onSome) {
    return onSome(value);
  }
}

public class None<T> : Option<T> {
  public U Visit<U>(Func<U> onNone, Func<T, U> onSome) {
    return onNone();
  }
}
...
Option<int> number = new Some<int>(5);
int inc_number = number.Visit(
  () => { throw new Exception("Expecting a value..."); },
  i => i + 1);
Console.WriteLine(inc_number);
\end{lstlisting}

\textbf{Points:} \textit{3 (30\% of total).}

\textbf{Grading:} \textit{one point per correctly filled-in execution step.}

\textbf{Associated learning objective:} \glsfirst{impbeh}

\ \\

\paragraph{Question 2: } \ \\
\textbf{General shape of question:} \textit{Given the following class definitions, and a piece of code that uses them, fill in the declarations, class definitions, and PC with all steps taken by the compiler while type checking.}

\textbf{Concrete example of question:}

\lstset{numbers=left,basicstyle=\ttfamily\small}\lstset{language=[Sharp]C}
\begin{lstlisting}
public interface Option<T> {
  U Visit<U>(Func<U> onNone, Func<T, U> onSome);
}

public class Some<T> : Option<T> {
  T value;
  public Some(T value) { this.value = value; }
  public U Visit<U>(Func<U> onNone, Func<T, U> onSome) {
    return onSome(value);
  }
}

public class None<T> : Option<T> {
  public U Visit<U>(Func<U> onNone, Func<T, U> onSome) {
    return onNone();
  }
}
...
Option<int> number = new Some<int>(5);
int inc_number = number.Visit(
  () => { throw new Exception("Expecting a value..."); },
  i => i + 1);
Console.WriteLine(inc_number);
\end{lstlisting}

\textbf{Points:} \textit{3 (30\% of total).}

\textbf{Grading:} \textit{one point per correctly filled-in type checking step.}

\textbf{Associated learning objective:} \glsfirst{undbeh}

\paragraph{Question 3: }
\textbf{General shape of question:} \textit{Given the following UML diagram of a design pattern, fill in the missing parts.}

\textbf{Concrete example of question:}
\begin{center}
\includegraphics[width=\textwidth / 2]{decorator_uml}
\end{center}

\textbf{Points:} \textit{4 (40\% of total).}

\textbf{Grading:} \textit{one point per correctly filled-in block of code.}

\textbf{Associated learning objective:} \glsfirst{abs}

\ \\
{ "alphanum_fraction": 0.7022874701, "avg_line_length": 28.4368932039, "ext": "tex", "hexsha": "b772073b959d16be9997d962fecb733683442805", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4198e0aa83c565eb33418927366192383daa3ed4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hogeschool/INFSEN01-2", "max_forks_repo_path": "Modulewijzer/ExamStructure.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4198e0aa83c565eb33418927366192383daa3ed4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hogeschool/INFSEN01-2", "max_issues_repo_path": "Modulewijzer/ExamStructure.tex", "max_line_length": 229, "max_stars_count": 2, "max_stars_repo_head_hexsha": "4198e0aa83c565eb33418927366192383daa3ed4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hogeschool/INFSEN01-2", "max_stars_repo_path": "Modulewijzer/ExamStructure.tex", "max_stars_repo_stars_event_max_datetime": "2017-06-30T14:24:14.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-06T08:04:30.000Z", "num_tokens": 860, "size": 2929 }
\subsubsection{\stid{4.05} STDM07-VeloC: Very Low Overhead Transparent Multilevel Checkpoint/Restart}

\paragraph{Overview}
The VeloC project aims to provide a checkpoint/restart framework that leverages multi-level checkpointing (the combination of several resilience strategies and heterogeneous storage) to ensure that ECP applications run to completion with minimal performance overhead. It delivers a production-ready solution that increases development productivity by reducing the complexity of having to deal with a heterogeneous storage stack and multiple vendor APIs. VeloC offers a client library that can be used directly by the applications or indirectly by I/O libraries (e.g., HDF5, ADIOS, PnetCDF) to capture local application states, which are then coordinated and persisted using a resilience engine. The VeloC software intends to serve ECP applications such as HACC, NWChem, QMCPACK, LatticeQCD and EXAALT.

\paragraph{Key Challenges}
Applications typically employ simple checkpoint-restart mechanisms that directly use a parallel file system to survive failures. However, I/O bottlenecks due to concurrent writes are a known limitation in this case. At Exascale, failures are more frequent, but the available file system I/O bandwidth per compute unit decreases. Therefore, there is a need to checkpoint more frequently, yet simple checkpointing becomes even less scalable and efficient. To compensate for the decreasing I/O bandwidth per compute unit, the storage stack is becoming increasingly heterogeneous (e.g. NVRAM, burst buffers, key-value stores, etc.). This aspect introduces a new level of complexity for application developers: they need to adapt their checkpoint strategy to a variety of new storage sub-systems that may or may not be available on every machine. This is further amplified by the diversity of vendors that offer custom APIs. Thus, it is important to provide a scalable high-performance checkpointing solution that can leverage the heterogeneous storage stack transparently without sacrificing ease of use and flexibility.

\paragraph{Solution Strategy}
To address these challenges, VeloC is based on several design principles. First, it implements multi-level checkpointing. Specifically, it persists the local checkpoints to other nodes using collaborative resilience strategies (e.g., partner replication and partner erasure coding) and to external storage that may include a complex heterogeneous hierarchy (e.g., parallel file system, burst buffers, I/O nodes, key-value stores, etc.). Based on experience from previous efforts such as FTI~\cite{FTI} and SCR~\cite{SCR}, this strategy greatly improves performance and scalability. Second, it offers a simple API that enables users to either manually capture checkpoints into node-local files or to define memory regions that are automatically serialized into node-local files. All interactions with the complex heterogeneous storage stack are transparently handled by VeloC, which facilitates ease-of-use. Third, VeloC separates the management of node-local checkpoints from the actual implementation of the resilience strategies by using a client library that interfaces with a resilience engine. The engine can be linked directly into the application and run in synchronous mode, or it can be part of a separate backend process that facilitates an asynchronous mode where the resilience strategies are applied in the background to avoid blocking the application. This ultimately leads to better performance and scalability.
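To illustrate the idea of node-local capture followed by a synchronous or asynchronous flush, the following is a minimal conceptual sketch in Python. The class and function names (\texttt{CheckpointEngine}, \texttt{protect}, \texttt{checkpoint}) are hypothetical illustrations only and do not correspond to the actual VeloC API; the sketch merely mimics the client/engine split and the two flush modes described above.

\begin{verbatim}
import shutil, threading, queue, pickle
from pathlib import Path

class CheckpointEngine:
    """Toy resilience engine: write a node-local checkpoint, then flush it to
    'external' storage either synchronously or via a background thread."""

    def __init__(self, local_dir, remote_dir, asynchronous=True):
        self.local = Path(local_dir);  self.local.mkdir(parents=True, exist_ok=True)
        self.remote = Path(remote_dir); self.remote.mkdir(parents=True, exist_ok=True)
        self.asynchronous = asynchronous
        self.protected = {}                      # region id -> picklable object
        self.pending = queue.Queue()
        if asynchronous:
            threading.Thread(target=self._flusher, daemon=True).start()

    def protect(self, region_id, obj):
        """Register a 'memory region' (here: any picklable object)."""
        self.protected[region_id] = obj

    def checkpoint(self, name, version):
        """Level 1: serialize protected regions to fast node-local storage."""
        path = self.local / f"{name}.{version}.ckpt"
        with open(path, "wb") as f:
            pickle.dump(self.protected, f)
        # Level 2: persist to slower external storage (e.g. a parallel file system).
        if self.asynchronous:
            self.pending.put(path)                       # non-blocking flush
        else:
            shutil.copy(path, self.remote / path.name)   # blocking flush

    def _flusher(self):
        while True:
            path = self.pending.get()
            shutil.copy(path, self.remote / path.name)
            self.pending.task_done()

# Hypothetical usage in an application time-stepping loop:
engine = CheckpointEngine("/tmp/scratch", "/tmp/pfs", asynchronous=True)
state = {"temperature": [20.0] * 1024}
engine.protect("field", state)
for step in range(3):
    state["temperature"] = [t + 0.1 for t in state["temperature"]]
    engine.checkpoint("heat", version=step)
engine.pending.join()                                    # wait for pending flushes
\end{verbatim}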
\paragraph{Recent Progress}
We have interviewed and met with several ECP application teams in an effort to understand their checkpointing needs. Most of our current efforts involved the HACC, ECP NWChem-X, Lattice QCD and QMCPACK teams. These interviews helped us understand the needs of the ECP applications and ensure that they are reflected in the VeloC API specification. In the last several months, the VeloC team has developed a stable version of the VeloC API specification, which is currently in the process of being implemented.

In parallel with the VeloC API definition, we also worked on the architecture and design of the VeloC software. The VeloC software design is based on a modular and scalable architecture, as shown in Figure~\ref{fig:arch}. The design details of the VeloC software can be found in the VeloC design document, submitted to ECP during past milestones.

Working off the design specification, we implemented the VeloC software according to the design principles introduced in the previous section. We have worked on understanding, refactoring, and modularizing several components of the Fault Tolerance Interface (FTI) and Scalable Checkpoint/Restart (SCR) library to extract self-contained modules that can be used in the VeloC software. We recently completed the integration of these self-contained modules and other driver modules into a flexible engine that gives VeloC the capability of running in synchronous mode directly in the application processes or in asynchronous mode in a separate active backend process.

\begin{figure}[t]
\centering
\begin{subfloat}[Architecture: modular client and engine (running in synchronous and asynchronous mode)\label{fig:arch}]
\centering
\includegraphics[width=.47\textwidth]{projects/2.3.4-DataViz/2.3.4.05-VeloC/veloc-arch}
\end{subfloat}
\hfill
%%
\begin{subfloat}[Results: checkpointing overhead for a weak scaling experiment (local, local + sync mode, local + async mode). Lower is better\label{fig:res}]
\centering
\includegraphics[width=.47\textwidth]{projects/2.3.4-DataViz/2.3.4.05-VeloC/veloc-res}
\end{subfloat}
% \end{center}
\caption{VeloC: Very Low Overhead Checkpoint-Restart}%
\label{fig:veloc}%
\end{figure}

We have run an initial stress test of VeloC on the ANL Theta platform (KNL nodes, local SSD, Lustre parallel file system) using a heat distribution application benchmark. The test consists of a weak scaling experiment (64 processes per node) where the checkpointing overhead (runtime with a checkpoint vs. baseline without) is measured for three approaches: (1) node-local checkpoints written to the SSD; (2) node-local checkpoints followed by a synchronous flush to the parallel file system; (3) node-local checkpoints followed by an asynchronous flush to the parallel file system. The results are depicted in Figure~\ref{fig:res}. As can be observed, asynchronous checkpoints are faster than synchronous checkpoints by a significant margin and more scalable.

We are also interacting with ALCF to understand the constraints for VeloC deployment on CORAL systems, such as the configuration of the resource and job manager and the possibility of running two MPI executions (application + back-end) on each node.

\paragraph{Next Steps}
The team will be working on the VeloC software release for the next quarter. Our goal is to finalize the integration and testing of all modules, clean up the code, and run end-to-end tests. Furthermore, we will add documentation and tutorials to facilitate easy adoption by our user community.
{ "alphanum_fraction": 0.8124470787, "avg_line_length": 51.347826087, "ext": "tex", "hexsha": "769a9125d36ad0244253bbe76ab6462d602ae518", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_forks_repo_path": "projects/2.3.4-DataViz/2.3.4.05-VeloC/2.3.4.05-Veloc.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_issues_repo_path": "projects/2.3.4-DataViz/2.3.4.05-VeloC/2.3.4.05-Veloc.tex", "max_line_length": 160, "max_stars_count": null, "max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_stars_repo_path": "projects/2.3.4-DataViz/2.3.4.05-VeloC/2.3.4.05-Veloc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1580, "size": 7086 }
The $\Delta{}t$ component represents the time that it takes for the sound emitted by source $S$ to reach the position of virtual microphone $m$. Following from (\ref{distance}), the value of $d_{\vec{mS}}$, when combined with the speed of sound constant $c$, can be used to find the time-domain component.

\subsubsection{Calculating the speed of sound}

The speed of sound in real spaces is generally considered to be approximately \SI[per-mode=fraction]{343}{\m\per\s}. However, it is not a constant. The speed of sound can change based on a number of environmental parameters. To account for this, the speed of sound can be adjusted by some value, $\Delta{}c$, which is implemented in this version as a number $\Delta{}c \in [-10, 10]$, so that:

\begin{equation}\label{speedofsound}
c = 343 + \Delta{}c
\end{equation}

Following the usual formula for time:

\begin{equation}\label{timeFormula}
t = \frac{d}{c}
\end{equation}

Substitutions can be made based on (\ref{distance}) and (\ref{speedofsound}) to give the expansion of $\Delta{}t$:

\begin{equation}\label{deltat}
\Delta{}t = \frac{d_{\vec{v}}}{343 + \Delta{}c}
\end{equation}
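As a short worked example (with assumed illustration values, not taken from the implementation: $d_{\vec{v}} = \SI{3.43}{\m}$ and $\Delta{}c = 0$), equation (\ref{deltat}) gives

\[
\Delta{}t = \frac{3.43}{343 + 0} = \SI{10}{\ms},
\]

while at the extreme $\Delta{}c = 10$ the same distance yields $3.43 / 353 \approx \SI{9.72}{\ms}$, so the adjustment shifts the delay by only a few percent.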
{ "alphanum_fraction": 0.7310774711, "avg_line_length": 51.0454545455, "ext": "tex", "hexsha": "d04e8e731f2487227dc5882eab25e9f251b88eb0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a78161b4b99b50e481c133ba8a5049d548561954", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "jmclark85/StereoPairsEmulator", "max_forks_repo_path": "Technical Documentation/deltatcalc.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "a78161b4b99b50e481c133ba8a5049d548561954", "max_issues_repo_issues_event_max_datetime": "2020-08-26T18:24:14.000Z", "max_issues_repo_issues_event_min_datetime": "2020-08-26T18:24:14.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "jmclark85/StereoPairsEmulator", "max_issues_repo_path": "Technical Documentation/deltatcalc.tex", "max_line_length": 378, "max_stars_count": null, "max_stars_repo_head_hexsha": "a78161b4b99b50e481c133ba8a5049d548561954", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "jmclark85/StereoPairsEmulator", "max_stars_repo_path": "Technical Documentation/deltatcalc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 313, "size": 1123 }
\section{Mathematical Analysis}
\subsection{Derivation of link-level successful transmission probability}
As the first step, we study the successful transmission probability $p_s(r)$ for any given link with distance $r$. Without loss of generality, the device (i.e., the transmitter) labeled by $x_0$ is assumed to be located at the origin. The other end of this link (i.e., the base station), with label $y_i$, is at distance $r$ from the device. By definition, $p_s(r)$ is expressed as follows:
\begin{align}
p_{s} \left( r\right) & =\mathbb{P}\left\lbrace \frac{H \exp(G) r^{-\gamma}}{\sum_{x_j \in \Phi_{m}} H_{x_j} \exp(G_{x_j}) r_{x_j}^{-\gamma}} \geq \theta \right\rbrace \nonumber\\
%& =\mathbb{P} \left\lbrace \frac{H_{x_i} r_{x_i}^{-\gamma}}{\sum_{x_j \in \Phi_{m}} H_{x_j} \exp(G_{x_j}-G_{x_i}) r_{x_j}^{-\gamma}} \geq \theta \right\rbrace \nonumber
\end{align}
Let $I=\sum_{x_j \in \Phi_{m}} H_{x_j} \exp(G_{x_j}) r_{x_j}^{-\gamma}$, which is the cumulative interference suffered by device $x_0$. Thus, we have:
\begin{align}
\label{eq:def_ps}
p_{s}\left( r \right) &= Pr \left\lbrace H \geq I \theta\exp(-G)r ^{\gamma} \right\rbrace \nonumber\\
% The following line can be hidden, not necessary in FINAL_VERSION
&=\mathbb{E}_{G}\left\lbrace \mathbb{E}_{I} \left[ \exp(-\theta \exp(-g) r^{\gamma} I ) \vert G = g\right]\right\rbrace \nonumber\\
&= \mathbb{E}_{G} \left[ \mathcal{L}_{I}\left\lbrace \theta \exp(-G) r^{\gamma}\right\rbrace \right]
\end{align}
It is observed that $p_s\left( r \right)$ is actually the expectation (with respect to $G$) of a conditional Laplace transform of the cumulative interference $I$ evaluated at the point $\theta \exp(-G) r^{\gamma}$.
\begin{align}
\label{eq:interferece-laplace-transform}
&\mathcal{L}_{I}\left( s \right) = \mathbb{E}\left[ \exp(-sI)\right] \nonumber\\
&= \mathbb{E}\left[ \exp(-s\sum_{x_j \in \Phi_m} H_{x_j} \exp(G_{x_j}) r_{x_j}^{-\gamma})\right] \nonumber \\
&= \mathbb{E}_{\Phi_m} \left\lbrace \prod_{x_j\in \Phi_m} \mathbb{E}\left[\exp(-sH_{x_j} \exp(G_{x_j}) r_{x_j}^{-\gamma})\right] \right\rbrace ,
\end{align}
which is actually the probability generating functional (PGFL) of a Poisson point process. Applying Campbell's theorem,
%\qsong{Here I give the wikipedia link for this theorem, in the final version, I will give a reference of one book: https://en.wikipedia.org/wiki/Campbell's_theorem_(probability)}
\begin{align}
\label{eq:laplace_trans_I}
\mathcal{L}_{I}\left( s \right) &=\exp\left\lbrace -\lambda_m \pi \mathbb{E}\left[ H^{\frac{2}{\gamma}}\exp(\frac{2}{\gamma}G)\right] \Gamma(1-\frac{2}{\gamma})s^{\frac{2}{\gamma}} \right\rbrace
\end{align}
This result has been derived in~\cite[eq.3.20]{haenggi2009interference}.
Since $H$ and $G$ are independent, we have:
% The following two formulas can be removed in FINAL_VERSION
%\begin{align}
%\label{eq:lognormal_moment}
%\mathbb{E}\left[ \exp(\frac{2}{\gamma}G)\right] &= \exp \left\lbrace \left( \frac{\sqrt{2}\beta\sigma}{\gamma}\right) ^2\right\rbrace
%\end{align}
%\begin{align}
%\label{eq:expo_moment}
%\mathbb{E}\left[ H^{\frac{2}{\gamma}} \right] &= \Gamma(1+\frac{2}{\gamma})
%\end{align}
%Substituting $(\ref{eq:expo_moment})(\ref{eq:lognormal_moment})$ into $(\ref{eq:laplace_trans_I})$, we have:
\begin{align}
\mathcal{L}_{I}\left( s \right) &= \exp\left\lbrace -p\lambda_m \pi \Gamma(1+\frac{2}{\gamma}) \Gamma(1-\frac{2}{\gamma}) \exp \left( \frac{\sqrt{2}\sigma}{\gamma}\right) ^2 s^{\frac{2}{\gamma}} \right\rbrace
\end{align}
Let $s=\theta \exp(-G) r^{\gamma}$; thus
\begin{align}
\label{eq:conditional_lp_trans}
\mathcal{L}_{I}\left\lbrace \theta \exp(-G) r^{\gamma}\right\rbrace = \exp\left\lbrace -A r^2\exp(-\frac{2}{\gamma}G)\right\rbrace ,
\end{align}
where $A=p\lambda_m\pi \Gamma(1+\frac{2}{\gamma}) \Gamma(1-\frac{2}{\gamma}) \exp \left( \frac{\sqrt{2}\sigma}{\gamma}\right) ^2 \theta^{\frac{2}{\gamma}}$ and $\sigma = 0.1\ln(10)\sigma_{dB}$. With substitution of $(\ref{eq:conditional_lp_trans})$ into $(\ref{eq:def_ps})$, it remains to calculate the expectation with respect to $G$:
\begin{align}
\label{eq:def_ps_2}
p_{s}(r) &= \mathbb{E}_{G}\left[ \exp(-A r^2 \exp(-\frac{2}{\gamma}G)) \right] \nonumber\\
&=\int_{-\infty}^{+\infty} \exp(-A r^2 e^{x})\frac{1}{\sqrt{2\pi}\frac{2}{\gamma}\sigma} \exp(-\frac{x^2}{2(\frac{2}{\gamma}\sigma)^2}) dx \nonumber\\
&=\mathbb{E}_{X}\left[ \exp(-Ar^2X)\right],
\end{align}
which is the expectation with respect to a new log-normal random variable $X \sim LN(0, \frac{2}{\gamma}\sigma)$.
% The following should be removed in FINAL_VERSION
%\begin{align}
%p_{s}(r) &= \mathbb{E}\left[ \exp(-Ar^2X)\right] \nonumber\\
%&= \mathcal{L}_{X} \left[ Ar^2\right]
%\end{align}
We do not attempt to obtain a closed-form expression for $p_{s}(r)$ at this point, because keeping it in this form simplifies the subsequent analysis.
% For now we do not seek the closed-form solution for p_s, because this form is more convenient for the next step of the calculation.
%A closed form expression of the Laplace transform of the lognormal distribution does not
%exist. According to reference~\cite{asmussen2016laplace}, the Laplace transform of a log-normal random variable can be accurately approximated as follows:
%\begin{align}
%\label{eq:laplace-transform-lognormal-form-1}
%\mathcal{L} \left\lbrace X \right\rbrace \left( s \right)
%&= \frac{\exp(-\frac{W(s \sigma_{X}^2 e^{\mu_{X}} )^2 + 2W(s \sigma_{X}^2 e^{\mu_{X}})}{2\sigma_{X}^2})}{\sqrt{1 + W(s \sigma_{X}^2 e^{\mu_{X}})}},
%\end{align}
%where $W\left( \cdot \right)$ is the Lambert W function~\cite{corless1996lambertw}, which is defined as the solution in principal branch of the
%equation $W\left(x\right) e^{W \left( x\right) }= x$.

\subsection{Derivation of outage probability over an infinite plane}
\subsubsection{Nearest BS attachment method}
As a comparison reference, we now consider the outage probability $P_{f1}$ over an infinite plane using the traditional method: the device attaches to the nearest BS. We denote the distance to the nearest base station as $r$. Its PDF, obtained from the void probability of the PPP, is:
\begin{align}
\label{eq:pdf_nearest_distance}
f\left( r\right) = 2 \pi \lambda_b r \exp(-\lambda_b \pi r^2), \quad r \in \left[ 0, +\infty\right)
\end{align}
In this case, we only need to take the expectation with respect to $r$ to obtain the outage probability $P_{f1}$.
\begin{align}
P_{f1}&= \mathbb{E}\left[ 1-p_{s}\left(r\right) \right] \nonumber\\
&= 1-\int_{0}^{+\infty} \mathbb{E}\left[ \exp(-A r^2 X)\right] 2 \pi \lambda_b r \exp(-\lambda_b \pi r^2) dr \nonumber\\
&= 1 - \mathbb{E}\left[ \frac{1}{\frac{AX}{\pi\lambda_b}+1} \right] \nonumber
\end{align}
For simplicity of notation in what follows, let $B= \frac{A}{\pi \lambda_b}$. We now focus on the term $\mathbb{E}\left[ \frac{1}{BX+1} \right]$:
\begin{align}
\label{eq:mean_bx+1_step1}
&\mathbb{E}\left[ \frac{1}{BX+1} \right] = \int_{-\infty}^{+\infty} \frac{1}{ Be^t+1} \cdot \frac{1}{\sqrt{2\pi} \sigma_X} \exp\left\lbrace -\frac{t^2}{2 \sigma_X^2}\right\rbrace dt \nonumber\\
&= \int_{-\infty}^{+\infty} \frac{1}{ e^t+1} \cdot \frac{1}{\sqrt{2\pi} \sigma_X} \exp\left\lbrace -\frac{(t-\ln(B))^2}{2 \sigma_X^2}\right\rbrace dt
%&= \int_{-\infty}^{+\infty} \frac{1}{1+e^{-(t-\ln(B))}} \cdot \frac{1}{\sqrt{2\pi} \sigma_X} \exp\left\lbrace -\frac{t^2}{2 \sigma_X^2}\right\rbrace dt \nonumber\\
%&= \int_{-\infty}^{+\infty} \frac{1}{1+\exp{-\left( \frac{t-\frac{\ln(B)}{\sigma_X}}{\frac{1}{\sigma_X}}\right) }} \cdot \frac{1}{\sqrt{2\pi}} \exp\left\lbrace -\frac{t^2}{2}\right\rbrace dt \nonumber\\
%& = \int_{-\infty}^{+\infty} \frac{\phi\left( t \right) }{1+\exp{-\left( \frac{t-\frac{\ln(B)}{\sigma_X}}{\frac{1}{\sigma_X}}\right) }} dt,
\end{align}
which is actually a logistic-normal integral. According to~\cite{crooks2009logistic}, this kind of integral can be accurately approximated by a reparameterized logistic function (i.e., a sigmoid function). Thus,
\begin{align}
\label{eq:analytical_result_approach_2}
P_{f1} &= 1 -\frac{1}{1 + \exp\left\lbrace \left( 1 +\frac{\pi \sigma_X^2}{8} \right)^{-\frac{1}{2}} \ln(B) \right\rbrace} \nonumber\\
&=1-\frac{1}{1 + B^{\left( 1 +\frac{\pi \sigma^2}{2\gamma^2} \right)^{-\frac{1}{2}}}},
\end{align}
where $B= \frac{p\lambda_m}{\lambda_b}\Gamma(1+\frac{2}{\gamma}) \Gamma(1-\frac{2}{\gamma}) \exp \left( \frac{\sqrt{2}\sigma}{\gamma}\right) ^2 \theta^{\frac{2}{\gamma}}$.

\subsubsection{BS reception diversity method}
We now study the outage probability $P_{f2}$ over an infinite plane when applying BS reception diversity. In this case, a transmission is in outage if and only if none of the base stations in the infinite plane has received the packet. Thus, $P_{f2}$ is expressed as follows:
\begin{align}
\label{eq:definition_pf1}
P_{f2} &= \mathbb{E}\left[ \prod_{r_i \in \Phi_{b}} (1-p_{s}(r_i)) \right], \text{with } r_i \in \left[ r_{\text{min}}, +\infty\right),
\end{align}
where $r_i$ refers to the distance between the device and the BS with label $i$. Note that $r_i$ is no less than the distance to the nearest BS, denoted by $r_{\text{min}}$, whose probability density function is given in $(\ref{eq:pdf_nearest_distance})$. Expression $(\ref{eq:definition_pf1})$ is actually the probability generating functional (PGFL) of the Poisson point process $\Phi_{b}$ of base stations, which states for some function $f(x)$ that $\mathbb{E}\left[ \prod_{x \in \Phi}f(x) \right] = \exp(-\lambda\int_{\mathbb{R}^2}(1-f(x))dx)$.
Thus, $(\ref{eq:definition_pf1})$ can be expressed as follows:
\begin{align}
P_{f2} &= \exp\left\lbrace -2\pi \lambda_{b} \mathbb{E}_{r_{\text{min}}} \left[ \int_{r_{\text{min}}}^{+\infty} p_{s}(r)rdr \right] \right\rbrace ,
\end{align}
\begin{align}
P_{f2} &= \exp\left\lbrace -\mathbb{E}_{r_{\text{min}}} \left[ \int_{r_{\text{min}}}^{+\infty} \mathbb{E}_{X}\left[ \exp(-AXr^2)\right] 2\pi\lambda_{b} rdr \right] \right\rbrace \nonumber\\
&= \exp\left\lbrace -\pi \lambda_{b} \mathbb{E}_{r_{\text{min}}} \left[\mathbb{E}_{X}\left[ \int_{r_{\text{min}}}^{+\infty} 2r\exp(-AXr^2)dr\right] \right] \right\rbrace \nonumber\\
&= \exp\left\lbrace -\pi \lambda_{b} \mathbb{E}_{r_{\text{min}}} \left[\mathbb{E}_{X}\left[ \frac{e^{-Ar_{\text{min}}^2X} }{AX} \right] \right] \right\rbrace \nonumber\\
&= \exp\left\lbrace -\pi \lambda_{b} \mathbb{E}_{X} \left[\mathbb{E}_{r_{\text{min}}}\left[ \frac{e^{-Ar_{\text{min}}^2X} }{AX} \right] \right] \right\rbrace \nonumber\\
&= \exp\left\lbrace -\mathbb{E}_{X} \left[ \frac{1}{\left( \frac{AX}{\pi\lambda_b}\right)\left( \frac{AX}{\pi\lambda_b}+1\right) }\right] \right\rbrace \nonumber\\
&= \exp\left\lbrace \mathbb{E}_{X} \left[ \frac{1}{BX + 1}\right] - \mathbb{E}_{X} \left[ \frac{1}{BX}\right] \right\rbrace
\end{align}
\begin{align}
\mathbb{E}_{X} \left[ \frac{1}{BX + 1}\right] = \frac{1}{1 + B^{\left( 1 +\frac{\pi \sigma^2}{2\gamma^2} \right)^{-\frac{1}{2}}}}
\end{align}
\begin{align}
\mathbb{E}_{X} \left[ \frac{1}{BX}\right] =\frac{1}{\frac{p\lambda_m}{\lambda_b}\Gamma(1+\frac{2}{\gamma}) \Gamma(1-\frac{2}{\gamma})\theta^{\frac{2}{\gamma}}}
\end{align}
Hence, we have:
\begin{align}
\label{eq:analytical_result_approach_1}
P_{f2} &= \exp(\frac{1}{1 + B^{\left( 1 +\frac{\pi \sigma^2}{2\gamma^2} \right)^{-\frac{1}{2}}}}-\frac{1}{B\exp\left\lbrace -\left(\frac{\sqrt{2}\sigma}{\gamma}\right) ^2 \right\rbrace } ),
\end{align}
where $B= \frac{p\lambda_m}{\lambda_b}\Gamma(1+\frac{2}{\gamma}) \Gamma(1-\frac{2}{\gamma}) \exp \left( \frac{\sqrt{2}\sigma}{\gamma}\right) ^2 \theta^{\frac{2}{\gamma}}$. Surprisingly, we observe that with BS reception diversity, in this limiting case, the outage probability has the same form as the case where only Rayleigh fading is considered. In practical systems, however, we still observe that shadowing improves the outage probability, since a real network cannot be infinite and the BS population is always limited.
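To illustrate how these expressions can be checked numerically, the following is a minimal Monte Carlo sketch of the system model above (Python/NumPy; the densities, SIR threshold and shadowing value are arbitrary illustration parameters, not the values used in our evaluation). Note that the analytical $P_{f2}$ treats the per-BS outage events as independent, whereas the simulation shares the same interferer realization across all BSs, so only an approximate match should be expected.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustration parameters (assumed)
lambda_m, lambda_b = 1e-5, 1e-6      # device / BS densities [1/m^2]
p, gamma, theta = 1.0, 4.0, 1.0      # activity prob., path-loss exponent, SIR threshold
sigma = 0.1 * np.log(10) * 8.0       # shadowing, sigma_dB = 8 dB
R, runs = 5000.0, 2000               # simulation window radius [m], Monte Carlo runs

def ppp(lam):
    """Draw a homogeneous PPP in a disc of radius R centred at the origin."""
    n = rng.poisson(lam * np.pi * R**2)
    r, phi = R * np.sqrt(rng.random(n)), 2 * np.pi * rng.random(n)
    return r * np.cos(phi), r * np.sin(phi)

out_nearest, out_div = 0, 0
for _ in range(runs):
    xb, yb = ppp(lambda_b)                      # base stations
    xi, yi = ppp(p * lambda_m)                  # active interfering devices
    if len(xb) == 0 or len(xi) == 0:
        continue
    db = np.hypot(xb, yb)                       # device (at origin) -> BS distances
    # Signal power at each BS: Rayleigh fading x log-normal shadowing x path loss
    sig = rng.exponential(size=len(db)) * np.exp(rng.normal(0, sigma, len(db))) * db**(-gamma)
    # Interference at each BS from all active devices
    d = np.hypot(xb[:, None] - xi[None, :], yb[:, None] - yi[None, :])
    I = (rng.exponential(size=d.shape) * np.exp(rng.normal(0, sigma, d.shape)) * d**(-gamma)).sum(axis=1)
    sir = sig / I
    out_nearest += sir[np.argmin(db)] < theta   # attach to the nearest BS
    out_div += np.all(sir < theta)              # BS reception diversity

print("P_f1 (nearest BS)   ~", out_nearest / runs)
print("P_f2 (BS diversity) ~", out_div / runs)
\end{verbatim}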
{ "alphanum_fraction": 0.6738519213, "avg_line_length": 83.8357142857, "ext": "tex", "hexsha": "1389bf9599b8ffa85df4b9db383450485c401b08", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hansomesong/PhD-Thesis", "max_forks_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hansomesong/PhD-Thesis", "max_issues_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_line_length": 606, "max_stars_count": null, "max_stars_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hansomesong/PhD-Thesis", "max_stars_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4392, "size": 11737 }
\chapter{Background}
\label{sec:chapterlabel2}

In this chapter, we will present all the necessary background concepts as well as some related work on recommendation systems and reinforcement learning methods that help to understand the work of this dissertation project. First, we give an overview of the existing recommendation system techniques and approaches that have been used to solve the recommendation problem. Then, we introduce the basic concepts needed to define a reinforcement learning task and review the first attempts made in the field to create reinforcement learning agents that learn to give recommendations to users. Finally, recent work on reinforcement learning algorithms using deep learning is reviewed.

\section{Recommender Systems}

In recent years, there has been growing focus on the study of Recommender Systems (RSs), where different techniques have been published in order to address the problem of information overload on the Internet. Consequently, the evolution of RS and the web usually go hand-in-hand. Resnick and Varian \cite{resnick1997recommender} defined RS as systems that help users limit their search by supplying a list of items that might interest them. Such systems have become an active application area for diverse Information Retrieval, Web Mining and Machine Learning techniques that help to solve typical recommendation problems. Additionally, different RS mechanisms have been categorized depending on the way they analyze the data sources to find affinities between users and items.

Figure \ref{fig:iomatrix} shows the most general setting in which recommender systems are studied. Having a rating matrix of n users and m items representing the current user preferences, the task of an RS is to predict all missing ratings $r_{a,i}$ for the active user a, and then recommend the item(s) with the highest predicted rating \cite{melville2011recommender}. Nevertheless, the user ratings matrix is typically sparse, as most users do not rate most items. Different approaches to solve this task can be categorized into the following types.

\begin{figure}[t]
\centering
\includegraphics[scale=0.9]{images/uimatrix}
\caption[Typical user rating matrix]{Typical user ratings matrix, where each cell $r_{u,i}$ corresponds to the rating of user u for item i.}
\label{fig:iomatrix}
\end{figure}

\subsection{Collaborative Filtering}

In Collaborative Filtering (CF) systems, items are recommended by exploiting the similarities amongst several users based on the feedback of previously consumed items. Usually, databases that fit into a CF approach store the past user interactions in the form of explicit ratings (e.g., on a 1 to 5 scale), or implicit ratings (e.g. a song played by a user, or an item bought by her). In general, CF methods are subdivided into memory-based and model-based approaches.

\begin{itemize}
\item \textbf{Memory-based CF:} items are recommended using a weighted combination of ratings in a subset of users that are chosen based on their rating similarity to the target user. The most commonly used measure is the Pearson correlation coefficient \cite{resnick1994grouplens}.
%that measures the extent to which there is a linear dependence between the ratings of two users.
Alternatively, the similarity between the ratings of two users (represented as a vector in an m-dimensional space) can be computed using the Cosine similarity or the adjusted Cosine similarity measure, which overcomes the issue of users with different rating behavior schemes \cite{yildirim2008random}.
Previous studies found that correlation \cite{breese1998empirical} and adjusted cosine similarity \cite{yildirim2008random} perform slightly better.

Memory-based CF methods have been extended and improved over the years. Linden, Smith, and York \cite{linden2003amazon} proposed an \textit{Item-based} (or item-item) CF method to overcome the scalability problem that arises when conventional memory-based algorithms have to find similarities between millions of users and items. This approach, which matches a user's rated items using Pearson correlation, led to faster online systems and improved recommendations.

\item \textbf{Model-based CF:} provides recommendations by using statistical models for predicting user ratings. The most widely used models are Bayesian classifiers, neural networks, fuzzy systems, genetic algorithms, latent features and matrix factorization \cite{bobadilla2013recommender}. For instance, CF methods can be turned into a classification problem, where a classifier is built for each active user, representing items as features over users and the available ratings as labels. However, latent factor and matrix factorization models have emerged as a state-of-the-art methodology in this class of techniques \cite{koren2009matrix}.
\end{itemize}

%\subsubsection*{Matrix Factorization}
%\label{sec:matrixFactorization}
%
%To reduce the high levels of sparsity in RS databases, matrix factorization in conjunction with dimensionality reduction techniques have been used. Having a sets U of users, and a set D of items, let $\mathbf{R}$ of size $|U| \times |D|$ be the matrix that contains all the ratings that the users have assigned to the items. Assuming that we would like to discover K latent features, the task is to find two matrices $\mathbf{P}$ (a $|U| \times K$ matrix) and $\mathbf{Q}$ (a $|D| \times K$ matrix) such that their product approximates $\mathbf{R}$:
%
%\begin{equation}
%\label{}
%\mathbf{R} \approx \mathbf{P} \times \mathbf{Q^{T}} = \hat{\mathbf{R}}
%\end{equation}
%
%Each row of $\mathbf{P}$ would represent the strength of the associations between a user and the features. Similarly, each row of $\mathbf{Q}$ would represent the strength of the associations between an item and the features. One way to obtain $\mathbf{P}$ and $\mathbf{Q}$ is to first initialize both matrices with some values, and calculate how different their product is to $\mathbf{M}$ by using gradient descent with a regularization term in order to minimize the difference between $\mathbf{R}$ and the predicted preference matrix $\mathbf{\hat{R}}$ (e.g using the Mean Square Error). The update rules and the cost function are defined as follows:
%
%\begin{equation}
%\label{}
%p^{'}_{ik}=p_{ik}+\alpha\frac{\partial}{\partial p_{ik}}e^{2}_{ij}=p_{ik}+\alpha(2e_{ij}q_{kj}-\beta p_{ik})
%\end{equation}
%
%\begin{equation}
%\label{}
%q^{'}_{kj}=q_{kj}+\alpha\frac{\partial}{\partial q_{kj}}e^{2}_{ij}=q_{kj}+\alpha(2e_{ij}p_{ik}-\beta p_{kj})
%\end{equation}
%
%\begin{equation}
%\label{}
%e^2_{ij}=(r_{ij}-\hat{r{ij}})^2=(r_{ij}-\sum_{k=1}^{K}p^{'}_{ik}q^{'}_{kj})^2
%\end{equation}
%
%Another model-based technique combines Latent Semantic Index (LSI) and the reduction method Singular Value Decomposition (SVD) are typically to solve the same problem[***](Using singular value decomposition approximation for collaborative filtering). Although SVD methods provide good prediction results, it is computationally very expensive, therefore it might be only efficient in static off-line settings.
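To make the memory-based formulation above concrete, the following is a small sketch (with a toy rating matrix invented purely for illustration) of a user-based prediction using Pearson correlation over co-rated items and a similarity-weighted average of the neighbours' mean-centred ratings:

\begin{verbatim}
import numpy as np

# Toy user x item rating matrix; 0 marks a missing rating
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)

def pearson(u, v):
    """Pearson correlation between two users over their co-rated items."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item."""
    sims = [(pearson(R[user], R[v]), v) for v in range(len(R))
            if v != user and R[v, item] > 0]
    sims = sorted(sims, reverse=True)[:k]
    mean_u = R[user][R[user] > 0].mean()
    if not sims or all(s == 0 for s, _ in sims):
        return mean_u                            # fall back to the user's mean
    num = sum(s * (R[v, item] - R[v][R[v] > 0].mean()) for s, v in sims)
    den = sum(abs(s) for s, _ in sims)
    return mean_u + num / den

print(round(predict(R, user=0, item=2), 2))      # predicted rating of user 0 for item 2
\end{verbatim}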
\subsection{Content-based RS}

Content-based (CB) RS (compared to pure CF, which only utilizes the user rating matrix) tries to make a better personalized recommendation by exploiting the knowledge about a user (e.g. demographic information), or the properties of items (e.g. the genre of a movie). Several approaches have treated this problem as an information retrieval (IR) task, where the content associated with the user's preferences is treated as a query, and the unrated items are scored with relevance/similarity to this query \cite{balabanovic1997fab}. Alternatively, CB-RS has also been treated as a classification task using algorithms such as k-Nearest Neighbors (k-NN), decision trees, and neural networks \cite{pazzani1997learning}.

\subsection{Hybrid approaches}

In order to leverage the strengths of both CB-RS and CF methods, several hybrid approaches have been proposed. For instance, Melville et al. presented in \cite{melville2002content} a general framework for content-boosted collaborative filtering, where content-based predictions are applied to convert a sparse user ratings matrix into a full ratings matrix, and then a CF method is used to provide recommendations. In essence, this approach has been shown to perform better than pure CF, pure CB-RS, and a linear combination of the two.

\subsection{The cold-start problem}

The cold-start problem is a common issue that happens in recommender systems when it is not possible to make reliable recommendations due to an initial lack of ratings \cite{bobadilla2013recommender}. There are three kinds of known cold-start problems: new community, new item and new user. However, the latter represents one of the greatest difficulties faced by an RS in operation. Since new users have not yet provided any rating in the RS, they cannot receive any personalized recommendations when using memory-based CF. Usually, when users enter their first ratings they expect the RS to offer them personalized recommendations, but there are not enough ratings yet to be able to make reliable predictions. As a consequence, new users feel that the RS does not offer the service they expected. This problem is often faced using hybrid approaches such as CF-CB RS, CF-demographic based RS, or CF-social based RS \cite{bobadilla2013recommender}. However, these methods can be combined with clustering techniques over items in order to improve the prediction quality.

\subsection{Trends in Recommendation Systems}

Recent studies in CF showed that combining conceptual (explicit) and usage (implicit) information can improve the quality of web recommendations. Bobadilla et al. \cite{bobadilla2013recommender} presented a taxonomy for RS which unifies the current recommender methods and algorithms that can be applied to incorporate memory-based, social and content-based information into a CF system depending on the type of information available. The taxonomy, depicted in Figure \ref{fig:taxonomy}, also details at its higher levels the current evaluation methods for RS in terms of quality measures, diversity and novelty.

\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{images/taxonomyrs}
\caption[Recommender Systems taxonomy]{Recommender System taxonomy. Source: \textit{Recommender systems survey}\cite{bobadilla2013recommender}}
\label{fig:taxonomy}
\end{figure}

Moreover, Bobadilla et al. argue that one of the most widely used algorithms for CF is the k-nearest neighbors (kNN) model.
In general, kNN generates recommendations by executing the following tasks: (1) determine the k nearest user neighbors; (2) aggregate the ratings of the neighborhood for items not rated by the active user a; and (3) select the top-N recommendations from the predictions obtained in step 2.

On the other hand, a more robust model that combines the advantages of Support Vector Machines with factorization models was introduced by Rendle in \cite{rendle2010factorization}. The Factorization Machine (FM) model is able to compute all interactions between variables using factorized parameters, even in scenarios with huge sparsity like recommender systems. Later, Rendle presented in \cite{rendle2012factorization} the \textit{LibFM} software package, which implements the three learning methods that have been proposed for FMs: Stochastic Gradient Descent (SGD) \cite{rendle2010factorization}, Alternating Least-Squares (ALS) \cite{rendle2011fast} and Markov Chain Monte Carlo (MCMC) \cite{freudenthaler2011bayesian}.

Wang et al. \cite{wang2015collaborative} generalized recent advances in Deep Learning \cite{bengio2013representation} from i.i.d. input and applied them to non-i.i.d. (e.g. CF-based) input by presenting hierarchical Bayesian models that jointly perform deep representation learning for the content information and CF for the ratings matrix. Additionally, Wu et al. in \cite{wu2016collaborative} proposed a Collaborative Denoising Auto-Encoder (CDAE) model which formulates the top-N recommendation problem using Denoising Auto-Encoders to learn the latent factors from corrupted inputs. Experiments carried out over different domains have shown that the two deep learning approaches can significantly outperform the state of the art. Therefore, recent research has started to focus on defining deep representational approaches that can lead to better recommendations.

\section{Reinforcement Learning}

Reinforcement learning (RL) \cite{kaelbling1996reinforcement} is a family of machine learning algorithms that optimize sequential decision making processes based on scalar evaluations or rewards. Similarly to an n-armed bandit model \cite{katehakis1987multi}, RL considers the problem as a goal-directed agent interacting with an uncertain environment, where the main objective is to perform actions that maximize the expected sum of future reward for each state in the long term. An important feature that distinguishes RL from other types of learning is that it uses training information that evaluates the actions taken and learns from its own experience, like a trial-and-error search, rather than being instructed with the correct actions (as in supervised learning algorithms).

RL uses a formal framework which defines the continuous interaction between the \textit{agent} and an \textit{environment} in terms of states, actions, and rewards \cite{sutton1998reinforcement}. At each time step $t$, the agent receives some state representation of the environment $s_t \in \mathcal{S}$, where $\mathcal{S}$ is the set of possible states. Then it selects an action $a_t \in \mathcal{A}(s_t)$, where $\mathcal{A}(s_t)$ is the set of actions available in the observed state. One time step later, the agent receives a numerical reward $r_{t+1} \in \mathcal{R}$, and the environment turns into a new state $s_{t+1}$. Figure \ref{fig:rlenvironment} shows the whole interaction in an agent-environment interface.
\begin{figure}[t]
\centering
\includegraphics[scale=2]{images/rlenvironment}
\caption[Agent-Environment interface in a Reinforcement Learning problem]{The Agent-Environment interface in a Reinforcement Learning problem. Source: Reinforcement Learning: a Survey \cite{sutton1998reinforcement} }
\label{fig:rlenvironment}
\end{figure}

This framework is intended to be a simple way of representing four essential features of the artificial intelligence problem: (1) a \textit{policy} (commonly stochastic) defines a mapping from the perceived states of the environment to actions to be taken when in those states; (2) a \textit{reward} function maps each state-action pair to a single number that represents how good or bad the new state is for the agent; (3) a \textit{value} function specifies the total amount of reward an agent can expect to accumulate over the future, starting from a given state. This function is considered the most important as it is used by the agent during decision-making and planning; and optionally (4) a model that mimics the behavior of the environment (transitions and rewards) and provides a way of deciding which action to perform considering possible future situations before they are actually experienced.

In general, agents are categorized depending on how the reinforcement learning problem needs to be modeled. \textit{Value-based} and \textit{Policy-based} agents are solely based on the value and policy functions, respectively. \textit{Actor-critic} agents use both policy and value functions; \textit{Model-free} agents use a policy and/or value function but no model; and \textit{Model-based} agents have all of the properties mentioned above.

On the other hand, the \textit{return} $R_t$ is defined as the sum of the discounted future rewards over an episode of $T$ time steps, which the agent actually seeks to maximize:
\begin{equation}
\label{eq:return}
R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \ldots = \sum^{\infty}_{k=0} \gamma^k r_{t+k+1}
\end{equation}
where $\gamma, 0 \leq \gamma \leq 1$ is a discount factor that determines the present value of future rewards.

\subsection{Markov Decision Processes}

A Markov Decision Process (MDP) is a model for sequential stochastic decision problems. As such, it is widely used in applications where an autonomous agent is influencing its surrounding environment through actions \cite{shani2005mdp}. It is defined by a tuple $\langle S, A, R, Pr \rangle$ representing a Markov chain with values and decisions that follows the Markov property: \textit{a state S is Markov if and only if $\mathbb{P}[S_{t+1}|S_t] = \mathbb{P}[S_{t+1}|S_1,\ldots,S_t]$. In other words, in an MDP the future is independent of the past given the present}. Therefore, if a reinforcement learning task satisfies the Markov property and the state and action spaces are finite, then it can be modeled as a finite Markov Decision Process. A particular finite MDP is defined by its state and action sets, a reward function $R$ (equation \ref{eq:expectedRew}) that assigns a real value to each state/action pair, and a state-transition function $Pr$ (equation \ref{eq:transitionsProb}) which provides the probability of transitioning between every pair of states given each action. Altogether, these quantities completely specify the most important aspects of the dynamics of a finite MDP.
\begin{equation}
\label{eq:expectedRew}
\mathcal{R}^a_{ss'} = \mathbb{E}\left[ r_{t+1} \,|\, s_t=s, a_t=a, s_{t+1}=s' \right]
\end{equation}
\begin{equation}
\label{eq:transitionsProb}
\mathcal{P}^a_{ss'} = Pr\left\lbrace s_{t+1} = s' \,|\, s_t=s, a_t=a \right\rbrace
\end{equation}

Therefore, the decision-maker's goal is to find an optimal policy $\pi$, such that at each stage of the decision process, the agent needs only to obtain the current state $s$ and execute the action $a=\pi(s)$. Various exact and approximate algorithms have been proposed for estimating the optimal policy $\pi^*$, such as policy iteration, value iteration, Monte-Carlo learning, temporal difference, Q-learning, SARSA, etc. \cite{kaelbling1996reinforcement}\cite{sutton1998reinforcement}.

\subsection{A model-free Reinforcement Learning Setup}
\label{sec:model-free}

A reinforcement learning setup consists of an agent interacting with an environment $E$ at discrete time steps. At each time step $t$, the agent receives an observation $s_t$, takes an action $a_t$ and receives a reward $r_t$. Additionally, the environment $E$ may be stochastic, so it can be modeled as an MDP with a state space $\mathcal{S}$, action space $\mathcal{A}$, an initial state distribution $\rho(s_1)$, transition dynamics $\rho(s_{t+1}|s_t, a_t)$, and reward function $r(s_t, a_t)$. On the other hand, the agent's behavior is defined by a policy $\pi$, which maps states to a probability distribution over the actions, $\pi : \mathcal{S} \rightarrow P(\mathcal{A})$. Finally, the return from a state is defined as the sum of the discounted future rewards $R_t = \sum^T_{i=t} \gamma^{(i-t)}r(s_i, a_i)$. As the return depends on the actions chosen and therefore on $\pi$, it may be stochastic. The goal in reinforcement learning is to learn a policy which maximizes the expected return from a start distribution $J = \mathbb{E}_{r_i,s_i \sim E,a_i \sim \pi}[R_1]$.

The action-value function is used in many RL algorithms and describes the expected return after taking an action $a_t$ in state $s_t$ and thereafter following policy $\pi$:
\begin{equation}
\label{eq:action-valueFcn}
Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_{i \geq t}, s_{i > t} \sim E, a_{i > t} \sim \pi} [R_t| s_t, a_t]
\end{equation}

Many approaches in RL opt to represent the action-value function as a recursive relationship using the Bellman equation (eq. \ref{eq:bellmanEq}). Nevertheless, if the target policy is deterministic, we can describe it as a function $\mu: \mathcal{S} \rightarrow \mathcal{A}$ and avoid the inner expectation, as shown in equation \ref{eq_Qdeterministic}. As the expectation depends only on the environment, it is possible to learn $Q^{\mu}$ off-policy, using transitions which are generated from a different stochastic behavior policy $\beta$.
\begin{equation}
\label{eq:bellmanEq}
Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_{t}, s_{t+1} \sim E} [r(s_t, a_t) + \gamma \mathbb{E}_{a_{t+1} \sim \pi} [Q^{\pi}(s_{t+1}, a_{t+1})] ]
\end{equation}
\begin{equation}
\label{eq_Qdeterministic}
Q^{\mu}(s_t, a_t) = \mathbb{E}_{r_{t}, s_{t+1} \sim E} [r(s_t, a_t) + \gamma Q^{\mu}(s_{t+1}, \mu(s_{t+1})) ]
\end{equation}

Q-learning \cite{watkins1992q} is a commonly used off-policy algorithm which uses a \textit{greedy} policy $\mu(s) = \arg\max_{a}Q(s, a)$ in order to estimate the action that gives the maximum reward.
As the algorithm considers function approximators parameterized by $\theta^Q$, it can be used as an optimizer that minimizes the loss:
\begin{equation}
\label{}
L(\theta^Q) = \mathbb{E}_{s_{t} \sim \rho^{\beta}, a_t \sim \beta, r_t \sim E} [(Q(s_t, a_t | \theta^Q) - y_t)^2]
\end{equation}
where $y_t = \mathbb{E}_{s'\sim \mathcal{E}} [r(s_t, a_t) + \gamma Q(s_{t+1}, \mu(s_{t+1}) | \theta^Q)]$.
%While $y_t$ is dependent on $\theta^Q$, this is typically ignored.

\subsection{Exploration/Exploitation Dilemma}

One of the challenges that arise in reinforcement learning is the trade-off between exploration and exploitation \cite{kaelbling1996reinforcement}\cite{sutton1998reinforcement}. To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has already tried in the past and found to be effective in terms of reward. In order to discover such actions, it needs to try actions that it has not selected before. So the agent has to be able to exploit actions that are already known to be effective, but it also needs to explore in order to make better action selections in the future.

Na\"{i}ve approaches use a greedy policy $\pi(s) = argmax_aQ(s, a)$ to get the action with maximum reward, as a purely exploitative model. However, an $\epsilon$-greedy policy can be considered instead to allow exploration of the environment and improve the estimation of the non-greedy action values. In that case, the agent selects a random action with probability $\epsilon$, and acts greedily with probability $1 - \epsilon$. The advantage of an $\epsilon$-greedy policy over a greedy policy is that the former continuously explores and improves the chances of recognizing actions that can lead to better rewards. Nevertheless, this is still an open issue in some reinforcement learning tasks, so research has continuously focused on trying different techniques to solve the exploration/exploitation trade-off that can lead to the discovery of better policies \cite{kaelbling1996reinforcement}.

\section{Recommender Systems and Reinforcement Learning}

Although reinforcement learning seems to have great potential for improving the recommendation problem, it has received relatively little attention and found only limited application. The first experiments on using reinforcement learning techniques focused on developing recommendation systems for web content using implicit data from server log files that stored the navigational logs of users through the site's content.

Ten Hagen et al. \cite{ten2003exploration} considered an RL approach with an exploration/exploitation trade-off for discovering unknown areas and automatically improving the recommendation policy, helping users navigate through an adaptive web site. They stated that a model-free algorithm like Q-Learning can become infeasible if the number of actions is relatively high, and showed that a greedy policy can indeed lead to a suboptimal recommendation model, as the use of such a policy function, where the ``best'' actions (e.g. those with higher probability) are taken, does not always improve the policy. They finally demonstrated that this problem is solved by starting the algorithm with a lot of exploration and gradually reducing the parameter $\epsilon$ as the policy gets closer to an optimum. Results concluded that a recommender system without exploration can potentially get trapped in a local maximum. Rojanavasu et al.
in \cite{rojanavasu2005new} presented a general framework for web recommendation that learns directly from customers' past behavior by applying an RL process based on the SARSA method and an $\epsilon$-greedy policy. The system was composed of two models: a global model to keep track of customer trends as a whole, and a local model to record each user's individual browsing history. In general, the model treated pages as states of the system and links within a page as actions. To predict the next state, it used a ranking system that is also separated into two parts: i) a global ranking system using the data from a $Q_{global}$ matrix obtained from the action-value function and an $\epsilon$-greedy policy that gives new items with few clicks, which may still match a user's interest, a chance to be ranked; and ii) a local ranking $Q_{local}$ using an inverse $\epsilon$-greedy policy. The system then finds the final total reward using $Q_{total}=Q_{local} + wQ_{global}$, where $w \in (0,1]$ is a weight hyperparameter of the model. Overall, the system provided customers the chance to explore products other than the ones they had already visited. Experimental results showed that a value of $\epsilon=0.2$ preserves the balance between exploration and exploitation; if $\epsilon < 0.2$ the system gives users little chance to explore new items, while if $\epsilon > 0.2$ it may include products that do not match the customer's interest. On the other hand, results also showed that even though the purpose of $Q_{local}$ was to discover new products, it appeared to be less effective than $Q_{global}$.

Shani et al. in \cite{shani2005mdp} argued that it is more appropriate to formulate the problem of generating recommendations as a sequential optimization problem, so they described a novel approach for a commercial web site based on MDPs together with a predictive model. The MDP-based recommender system took into account the expected utility and the long-term effect of a particular recommendation. It then suggested items whose immediate reward is lower, but which lead to more \textit{profitable} rewards in the future. However, the benefits of this model are offset by the fact that the model parameters are unknown, randomly initialized, and take considerable time to converge. So, they defined a strong initial model for collaborative filtering, which can be solved quickly and does not consume too much memory. Before implementing the recommender, they initialized a predictive model of user behavior using data extracted from the web site, and then used it to provide the initial parameters for the MDP. They proposed to use maximum-likelihood estimation with three major enhancements: skipping, clustering, and mixture modeling, so the predictive model can be thought of as a first-order Markov chain (MC) of user dynamics in which states correspond to sequences of \textit{k} events (in this case: previous selections) representing relevant information about the user's interactions. Then, the transition function described the probability that a user, whose \textit{k} recent selections were $x_1, \ldots ,x_k$, will select the item $x'$ next. The authors experimented with small values of \textit{k} ($k \in [1,5]$) in order to overcome data sparsity and MDP solution complexity issues.
Moreover, enhancements on the maximum-likelihood n-gram formulation showed to be useful to find a good estimate of the transition function for the initial predictive model without suffering the problem of data sparsity and bad performance presented on other model variations. %The proposed maximum-likelihood estimation include three major enhancements. First, a form of skipping based on the observation that the occurrence of the sequence $x_1,x_2,_x3$ lends some likelihood to the sequence $x_1,x_3$. In other words, after initializing the counts of each state transition using the observed data, for any given user sequence $x_1,x_2, ...,x_n$, they added a fractional count $1/2^{( j-(i+3))}$ to the transition from $s=\langle x_i,x_{i+1},x_{i+2}\rangle$ to $s'=\langle x_{i+1},x_{i+2},x_j\rangle$, for all $i+3 < j ?n$, which acts as a diminishing probability of skipping a large number of transactions in the sequence. Equation \ref{eq:trMCSkipping} shows the updated formulation of the (normalized )maximum-likelihood estimation, where $count(s,s')$ is the fractional count associated with the transition from s to s'. % %\begin{equation} %\label{eq:trMCSkipping} %tr_{MC}^{base}(s,s')=\frac{count(s, s')}{\sum_{s'}count(s, s')} %\end{equation} % %The second enhancement exploits the similarity of sequences in a form of clustering. The idea is that the likelihood of transition from $s$ to $s'$ can be predicted by occurrences from $t$ to $s'$, where $s$ and $t$ are similar. Equation \ref{eq:simTr} define the similarity of states $s_i$ and $s_j$ where $\delta(\dot, \dot)$ is the Kronecker delta function and $s^m_i$ is the m-th item in state $s_i$. Then, the similarity count from state $s$ to $s'$ was defined using equation \ref{eq:simCountTr}. Finally, the new transition probability from $s$ to $s'$ given by equation \ref{eq:newTr} yielded the best results during evaluation. % %\begin{equation} %\label{eq:simTr} %sim(s_i, s_j)=\sum_{m=1}^{k}\delta(s_i^m, s_j^m) \cdot (m+1) %\end{equation} %\begin{equation} %\label{eq:simCountTr} %simcount(s, s')=\sum_{s_i} sim(s, s_i) \cdot tr^{base}_{MC}(s, s') %\end{equation} %\begin{equation} %\label{eq:newTr} %tr_{MC}(s, s')=\frac{1}{2}tr^{base}_{MC}(s, s') + \frac{1}{2}\frac{simcount(s, s')}{\sum_{s''}simcount(s, s'')} %\end{equation} % %Due to larger values of k lead to states that are more informative whereas smaller values of k lead to states with more statistical meaning, they added as a third enhancement a finite mixture modeling (e.g. combine a trigram, a bigram and a unigram into a single model) in a way to balance these conflicting properties by mixing k models, where the \textit{i-th} model looks at the last i transactions. In addition, they applied mixture weights during experiments as $\pi_1 = · · · = \pi_k = 1/k$ so the generation of the models for smaller values entails little computational overhead. %An evaluation on the accuracy of the predictive model was performed using real user transactions and browsing paths (from web logs) from the commercial store web site. Besides, they compared different variations of their enhancements on the MC approach with other models like i)both sequential and non-sequential form of the \textit{Microsoft Commerce Server 2000}(Heckerman et al. (2000))[***] (local distributions are probabilistic decision trees); and ii)unordered versions of MC models. Results outlined that the MC models using skipping, clustering, and mixture modeling yielded better results than any other other variation of model \textit{i}. 
Sequence-sensitive models outperformed the accuracy of non-sequential models, while MC models were superior to the predictor models, and the skipping enhancement was only beneficial for the transactions data set. On the other hand,
%the MDP-based recommender system, models the recommendation process as a sequence of states and attempts to optimize it. Here, the predictive model played an important role in the construction of the model as it provides the probability, denoted by $Pr_{pred}(x|x_1, . . . ,x_k)$, that a user purchase a particular item x given her sequence of past purchases $x_1, . . . ,x_k$.
the components of the reinforcement learning model were defined as follows: a) the states are the \textit{k}-tuples of items purchased; b) actions correspond to the recommendation of an item; c) rewards depend only on the last item defining the current state, for which its net profit was used; and d) the transition function is the stochastic element of the model that estimates the user's actual choice.
% In brief, equation \ref{eq:trMDP} represents the transition function or the probability that a user will select item $x''$ given that item $x'$ is recommended in state $\langle x_1,x_2,x_3 \rangle$.
%\begin{equation}
%\label{eq:trMDP}
%tr^1_{MDP}(\langle x_1,x_2,x_3 \rangle,x',\langle x_2,x_3,x'' \rangle)
%\end{equation}
In order to solve the MDP problem, a policy iteration algorithm was used, as the defined state space presented certain characteristics that lead to fast convergence.
%Indeed, tests performed on real data showed that the policy iteration converges after a few iterations.
%: i) the inherent directionality that transitions showed are useful to reduce the running time of the solution algorithm; ii) the computation of an optimal policy is not sensitive to variations in the number of \textit{k} past transactions a state represent; iii) ignoring unobserved states by maintaining transition probabilities only for states where a transition actually occurred; and iv) using the independence of recommendations or Markov property, where the probability that a user buys a particular item depends only on her current state, allowing the algorithm to handle large action spaces.
The transition function was carefully initialized in order to be fairly accurate when the system was first deployed and thus avoid the cold-start problem, causing states that are infrequently observed to be updated faster than already observed states. Moreover, when the first transition to a state is observed, its probability is initialized to 0.9. In this way, the system balances the need to explore unobserved items in order to improve its model, recommending non-optimal items occasionally until their actual counts are obtained. Finally, to update the model, the system used an off-line approach which keeps track of the recommendations and the user selections and builds a new model at fixed time intervals (e.g. once a week).
%So, to re-estimate the transition function at time $t+1$ the following counts are obtained: % %\begin{equation} %\label{eq:countIn} %c^{t+1}_{in}(s, r, s \cdot r) = c^{t}_{in}(s, r, s \cdot r) + count(s, r, s \cdot r) %\end{equation} %\begin{equation} %\label{eq:countOut} %c^{t+1}_{out}(s, r, s \cdot r) = c^{t}_{out}(s, r, s \cdot r) + count(s, s \cdot r) - count(s, r, s \cdot r) %\end{equation} %\begin{equation} %\label{eq:countTotal} %c^{t+1}_{total}(s, s \cdot r) = c^{t}_{total}(s, r, s \cdot r) + count(s, s \cdot r) %\end{equation} %\begin{equation} %\label{eq:modelProb1} %tr(s, r \in R, s \cdot r) = \frac{c^{t+1}_{in}(s, r, s \cdot r)}{c^{t+1}_{total}(s, s \cdot r)} %\end{equation} %\begin{equation} %\label{eq:modelProb2} %tr(s, r \notin R, s \cdot r) = \frac{c^{t+1}_{out}(s, r, s \cdot r)}{c^{t+1}_{total}(s, s \cdot r)} %\end{equation} %\begin{equation} %\label{eq:initIn} %c^{0}_{in}(s, r, s \cdot r) = \xi_S \cdot tr(s, r, s \cdot r) %\end{equation} %\begin{equation} %\label{eq:initOut} %c^{0}_{out}(s, r, s \cdot r) = \xi_S \cdot tr(s, r, s \cdot r) %\end{equation} %\begin{equation} %\label{eq:initTotal} %c^{0}_{total}(s, s \cdot r) = \xi_S %\end{equation} %where Equation \ref{eq:countIn} corresponds to the number of times a recommendation r was accepted in state s, equation \ref{eq:countOut} is the number of times a user select item r in state s even though it was not recommended, equation \ref{eq:countTotal} is the number of times a user pick item r while being in state s, regardless of whether it was recommended or not, and finally, equations \ref{eq:modelProb1} and \ref{eq:modelProb2} represent the actual probability that a user will select item r given state s when r was recommended or not respectively. %The transition function had to be carefully initialized in order to be fairly accurate when the system was first deployed to avoid the cold-start problem, so the at time $t=0$ were calculated using equations \ref{eq:initIn} \ref{eq:initOut} \ref{eq:initTotal}, where $\xi_s= 10 \cdot count(s)$, causing states that are infrequently observed to be updated faster than already observed states. Moreover, when the first transition to a state $s \cdot r$ is observed, its probability is initialized to 0.9 the probability of the most likely next item in state s with $\xi_s =10$. In this way, the system balances the need to explore unobserved items in order to improve its model by recommending non-optimal items occasionally until getting their actual counts. Experiments demonstrated that a deployed system using three mixture components (combined using an equal weight), with history length $k \in [1,3]$ was able to generate 28\% higher average user reward compared to pure MC model, while in terms of computational costs, the MDP-based model was built quickly and provided fastest recommendations at the price of more memory use. All in all, the off-line predictive model approach provided an adequate initial performance that overcomes the cold-start problem, however, authors avoid using some form of reinforcement learning technique as they argued that at that time, its implementation requires many calls and computations by a recommender system online, which lead to slower responses and undesirable results for the web site owner. A machine learning perspective, introduced by Taghipour et al. 
in \cite{taghipour2007usage}, used an off-line reinforcement learning model to solve the recommendation problem with the Q-Learning algorithm while employing concepts and techniques commonly applied in the web usage mining domain. They argued that this method is appropriate for the nature of the web page recommendation problem, as it provides a mechanism which is constantly learning, does not need periodic updates, and can easily be adapted to changes in the website structure and to new trends in user behavior. The RL problem was formulated as a competition between different recommender systems to gather more points,
%than a 2-player game, as in the recommender system environment, self-play, a typical technique used in training RL systems[***], cannot be used to train the system so the system needed of actual web usage data for training.
together with a stochastic model with the following properties (a small illustrative sketch of the reward computation follows the list):
\begin{itemize}
\item \textit{states:} the history of pages visited by the user so far. In this case, a notion of N-grams was adopted and a sliding window of size $w$ was set to limit the page visit sequences to a constant length and avoid large state spaces.
\item \textit{actions:} a single page recommendation at each state.
\item \textit{policy:} rewards an action positively if it recommends a page that will be visited in one of the subsequent states.
\item \textit{reward function:} updated as $r(s, a) \leftarrow r(s, a) + reward(Dist(R_{s'}, p), Time(p^v_w))$, where $Dist(R_{s'}, p)$ is the distance of page $p$ from the end of the recommended pages list of state $s'$, and $Time(p^v_w)$ indicates the time the user has spent on the last page of the state. The $reward(Dist, Time)$ function itself was defined as a linear combination of both values: $reward(Dist, Time) = \alpha \times Dist + \beta \times Time$ with $\alpha + \beta = 1$.
\end{itemize}
%As the model did not have a predetermined reward function $R(s, a)$ or a transition function $\delta(s, a)$ to give the next state, the reward was estimated by considering that each state s is formed by two sequences $V_s =\langle p^V_{s,1}, p^V_{s,2},..., p^V_{s,w} \rangle$, and $R_s =\langle p^R_{s,1}, p^R_{s,2},..., p^R_{s, n} \rangle$, indicating the sequence of visited and previously recommended pages respectively, where $p^V_{s, i}$ , indicates the ith visited page in the state and $p^R_{s, i}$ indicates the ith recommended page in the state s, Additionally, they had two considerations in the implementation of the reward function: i) only the occurrence of the last page visited in the recommended pages list in state $s'$ is used to reward the action performed in the previous sate $s$, and ii) the time the user spends on a page, assuming that the more time a user spends on a page the more interested. So, the reward function was defined as follows:
%\begin{enumerate}
%\item Assume $\delta(s, a) = s'$
%\item $P_R = V_{s', w} \cap R_{s'}$
%\item if $p \neq \emptyset$
%\item For page $p$ in $P_R$
%\begin{enumerate}
%\item $r(s, a) += reward(Dist(Rs', p), Time(p^v_w))$
%\end{enumerate}
%\end{enumerate}
%where $Dist(R_i, p)$ is the distance of page p from the end of the recommended pages list and $Time(p^v_w)$ indicates the time user has spent on the last page of the state. Finally, The $reward(Dist, Time)$ function was defined as a linear combination of both values as $reward(Dist, Time) = \alpha \times dist + \beta \times Time$ with $\alpha + \beta = 1$.
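The reward computation described above can be sketched roughly as follows (an illustrative interpretation with hypothetical argument names; the exact distance and time normalization used by the authors may differ):
\begin{verbatim}
def usage_reward(dist, time_on_page, alpha=0.5, beta=0.5):
    """Linear combination of a recommendation-distance score and a viewing-time
    score, with alpha + beta = 1 as in the reward definition above."""
    assert abs(alpha + beta - 1.0) < 1e-9
    return alpha * dist + beta * time_on_page

# A recommended page that is visited soon afterwards and viewed for a long
# time receives a high reward under this scheme.
print(usage_reward(dist=0.8, time_on_page=0.9))
\end{verbatim}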
% %Subsequently, the definition of the learning algorithm also took into consideration the following characteristics: 1) the Q-Learning algorithm given by equation \ref{eq:QLearningalg} was proposed as an update rule and the structure to estimate how successful a prediction can be. Here, the decreasing value of $\alpha_n$ caused these values to gradually converge and decreases the impact of changing reward values as the training continues; 2) an $\epsilon$-greedy action selection was picked as it is important to add some exploration of the state space especially at the beginning of the training; 3) the algorithm followed a TD(0) off-policy learning[***] procedure, as the maximum Q value of the next state is considered when estimating the future reward; and 4) the computation of $r(s, a)$ suffered a slightly change by considering value of $Q(s',a)$ with a coefficient $\gamma$ in order to propagate the value of performing a specific action beyond the limits imposed by the $w$. As the authors mentioned in their work: "the basic idea is that when an action/recommendation is appropriate in state Si, indicating the recommended page is likely to occur in the following states, it should also be considered appropriate in state $S_{i-1}$ and the actions in that state that frequently lead to $S_i$". During the training phase, the described algorithm converged after a few thousand (between 3000 and 5000) visits of each episode or user session. % %\begin{equation} %\label{eq:QLearningalg} %Q_n(s, a )= [(1 - \alpha_n)Q_{n-1}(s, a) + \alpha_n[r(s, a) + \gamma \max_{a'} Q_{n-1}(\delta(s, a), a')] %\end{equation} %with %\begin{equation} %%\label{} %\alpha_n = \frac{1}{1 + visits_n(s, a)} %\end{equation} % %For the purpose of performing a set of experimental evaluations of the proposed model, simulated log files generated by a web traffic simulator were used to train and tune the rewarding functions. The metric used for each evaluation were Recommendation Accuracy, Coverage (similar to precision and recall metrics) and Shortcut Gain (measures how many page-visits users can save if they follow the recommendations). First of all, an experiment with different values of $w$ showed that fixed window size of 3 as recommendation history resulted on better accuracy and shortcut gain, so this value was set for the rest of system evaluations. Next, they experimented the impact of gradually increasing (in steps of 5\%) the coefficient $\alpha$ of parameter \textit{Dist} in the reward function, resulting on both higher accuracy and higher shortcut gain for values up to 15\%. This matched the natural consequence of adding a bounded size of window on recommendations history. Finally, a set of experiments also tested the system performance by increasing the coefficient $\gamma$ in the reward function, obtaining an increase of the accuracy until reaching an upper bound (around 0.20\%) where it began to drop, while the shortcut gain increased steadily up to a point where recommendations became so inaccurate. The proposed off-line RL method used a simplified formulation of the recommendation problem but it obtained much better results compared to association rules and item-based CF methods, displaying a lower rate in which its accuracy decreased. 
The authors concluded that the algorithm is a good candidate for solving the recommendation problem, as it does not rely on any prior assumptions about the probability distribution of visiting a page after having visited a sequence of pages, and the nature of the problem matches well with the notion of delayed reward (known as temporal difference). Additionally, they suggested that better results might be obtained with a more elaborate formulation of the reward function, such as a neural network, rather than a linear combination of factors. Later, in \cite{taghipour2008hybrid}, they built on the previous RL framework and presented a hybrid web recommendation method enriched with semantic knowledge about the usage behavior, thus obtaining a solution that generalizes better over the available usage data. The new system used the incremental Document Conceptual Clustering \cite{godoy2006modeling} algorithm to map pages to higher-level concepts, and exploited the hierarchical and conceptual document clustering to provide a semantic relationship between pages, in combination with the usage data in a user session. Therefore, the new states consist of a sequence of concepts visited by the user, while actions are recommendations of pages that belong to a specific concept. This definition resulted in a much smaller state-action space, as the size of the state space now depends on the number of distinct page clusters. The reward function now takes into account the content similarity of the recommended and visited pages along with the usage-based reward defined in the previous approach. Moreover, the modified algorithm does not make predictions based on weak usage patterns, as the new states represent a generalized view of many single visit sequences. Evaluation results finally showed the flexibility of the new RL approach in incorporating different sources of information in order to improve the quality of recommendations.
%
%Now, an action $a$ recommending a concept $c$ is rewarded if the user visits a page belonging to concept $c$ later in his browsing session. The new reward function shown in equation \ref{eq:hybridreward} takes into account the content similarity of the recommended and visited pages, where CBR represents the content-based reward of an action (which is equal to the similarity score between concepts) defined in equation \ref{}, and UBR is the usage-based reward defined in the previous approach.
%
%\begin{equation}
%\label{eq:hybridreward}
%UBR(Dist(R_{s'} , r),Time(P_{t+1})) \times CBR(r, C(P_{t+1}))
%\end{equation}
%\begin{equation}
%\label{}
%l(c) = -log \mathcal{p}(c)
%\end{equation}
%\begin{equation}
%\label{}
%Sim(c_1, c_2) = max_{a \in LCA} {l(a)}
%\end{equation}
%
%\textbf{Improving adaptation of ubiquitous recommender systems by using reinforcement learning and collaborative filtering} proposed a learning system for Context-based Recommender System (CBRS)[***] by modeling an MDP agent which combines a hybrid Q-learning algorithm (HQL), collaborative filtering and case-based reasoning techniques in order to define a \textit{contextual} recommendation process based on different context dimensions (cognitive, social, temporal, geographic).
It addressed the following problems that came out in recommender systems: a)avoid the intervention of experts as the use of the Q-learning algorithm does not need initial user?s information; b)reduce the cold start problem thanks to the ability of the Q-learning algorithm to explore the knowledge of other users in the same context by using CF; c)accelerate the learning process by mixing Q-learning with case-based reasoning techniques (reuse of cases yields to faster user satisfaction); and d)an exploration strategy allows recommendations to adapt to the user's interest evolution. % %As a result, the model was based in three main algorithms. First, a hybrid-based CF approach which combines the advantages of the memory-based (fill the missing rating values of the user-item matrix) and model-based CF (form the nearest neighbors of each item). Second, the Case based reasoning (CBR)[***] algorithm was picked as it uses knowledge of previous cases to solve new problems, by finding a similar past case and thus reusing it to solve the current situation. Finally, the Q-learning algorithm was improved by: i)reusing past cases information gathered from the CBR, and ii)giving the ability to use information from other users sharing the same interests, by extending the $\epsilon$-greedy strategy to select a random action based on the similarity of user profiles (obtained from the CF algorithm). % %The global mechanism of a context-based recommender system (CBRS) which uses the HyQL algorithm was composed mainly by the following modules: % %\textbf{sensing module:} detects time, location, cognitive and social dimensions of the context. % %\textbf{thinking module:} composed by an abstraction phase that is based on inference rules defined on temporal and space ontologies, and an aggregation phase the two dimensions. % %\textbf{reasoning module:} chooses an action based on the HyQL algorithm % %Finally, evaluations comparing the Q-learning and HyQL with respect to solving the cold start problem were performed, and consisted on testing the precision of the first 100 trials of the system starting when the user is connected to the system. In General, results showed that the precision of HyQL was greater than the precision of Q-learning, demonstrating that this approach is a good candidate to be used in the recommender system problem. However, as the system implementation depends on the context data, it needs of a hand-crafted feature creation process in order to be used on different situations. %\textbf{Exploration in Interactive Personalized Music Recommendation: A Reinforcement Learning Approach} formulated an interactive and personalized music recommendation approach as a reinforcement learning task based on the multi-armed bandit problem, where songs were treated as arms and user ratings as payoffs, and the objective function was set to maximize the cumulative reward of a targeted user over the long term. The model considered the exploration-exploitation trade-off and a Bayesian model in order to deal with both audio content and the novelty of recommendations and thus learn the user musical preferences online based on their feedback. % %The model differs from other approaches in three aspects: i)it is based on audio content; ii)the model is highly efficient as it allows easy online updates; and iii)the model is evaluated based on online real-life user interaction data and does not need a test dataset. 
Moreover, each recommendation serves two objectives: (1) satisfy the user?s current musical taste, and (2) obtain user feedback needed to improve future recommendations. Even though authors mentioned that an MDP formulation can model a broader range of problems than the multi-armed bandit, it requires much more data to train and is often more computationally expensive. % %Their choice to work with a the content-based approach rather CF was justified for the following reasons. First, due to the bayesian nature of their formulation, methods for matrix factorization (MF) are much more complicated than the proposed linear model. Second, MF often needs of large amount of data for training. Third, existing Bayesian MF methods are inefficient for online updating, so they do not fit in a bandit model which is updated once a new rating is obtained, Fourth, a content-based method does not suffer of the new song problem. Finally, content-based approaches capture causality within music content rather than pure correlation that CF methods offer. % %Subsequently, the interactive, personalized recommender system used a reinforcement learning task called the multi-armed bandit[***] which addressed both the exploration-exploitation trade-off and playlist generation with a single unified model. Whereas some RL models considered only a greedy strategy that does not actively seek fort user feedback, resulting in suboptimal recommendations over the long term, this model takes into account the uncertainty of both the mean and the variance of the rating distribution, allowing the recommender to explore user preferences actively rather than merely exploiting the available rating information. An approach better than the typical $\epsilon$-greedy method to solve of the multi-armed bandit problem, is the use of the Upper Confidence Bound (UCB)[***] algorithm, but it requires an explicit form of the confidence bound that is difficult to derive in the case of music recommendation. % %The personalized music rating model was focused on audio content and novelty. The former considers the overall preference of a song and is represented by the linear function $U_c = \mathbf{\theta'x}$ where $\mathbf{x}$ is the feature vector of the audio content and $\mathbf{\theta}$ is the user preference in different music features (it was considered constant over time). On the other hand, the novelty factor was defined after examining the song's repetition distribution of 1000 users? listening histories collected from the Last.fm dataset. Results showed that most of the songs a user listens to are repeats and that the frequency distribution approximately follows the Zipf?s law[***], where only a small set of songs are repeated most of the time. As a consequence, it was assumed that the novelty of a song decays immediately after it is listened to and then gradually recovers according to the function $U_n = 1 - e^{-t/s}$, where s is a recovery speed (unknown) parameter learned from user interactions and $e^{-t/s}$ is the well-established forgetting curve[***] that measures user's memory retention of a song. In other words, novel songs are those of which that at a certain time a user has little or no memory. It is important to emphasize that the definition of this factor could be different or not applicable to other recommendation contexts than the musical environment. 
% %Overall, the rating model, defined in equation \ref{eq:combinedRatingModel}, assumed that a rating is formed as the combination of the user?s preference of the song?s content and the dynamically changing novelty. Hence, each user was represented by a set of parameters $\Omega = \{\theta, s\}$, where $\Omega$ needs to be estimated from historical data, so uncertainty had to be taken into account. % %\begin{equation} %\label{eq:combinedRatingModel} %U = U_cU_n = \mathbf{\theta'x}(1 - e^{-t/s}) %\end{equation} % %With this in mind, the new problem formulation was solved using the Bayesian Upper Confidence Bound (Bayes-UCB)[***] algorithm. In The Bayes-UCB, the true expected payoff $U_i$ defined in equation \ref{eq:expectedRatingUCB} for arm \textit{i} is treated as a random variable and the posterior distribution $p(U_i|\mathcal{D})$ of $U_i$ given the history of payoffs $\mathcal{D_l} = \{(\mathbf{x_i}, t_i, r_i)\}^{l}_{i=1}$ is predicted using equations\ref{eq:posteriorDistrOmega} and \ref{eq:posteriorDistrU}. Finally, the Bayes-UCB recommends song $k^*$ that maximizes the quantile function $argmax_{k=1...|S|} Q(\alpha, P(U_k|D_l))$ where Q satisfies$\mathcal{P}[Uk \leq Q(\alpha, P(U_k|D_l))] = \alpha$ and $\alpha = 1 - \frac{1}{l+1}$ % %\begin{equation} %\label{eq:expectedRatingUCB} %\mathcal{E}[R_i] = U_i = \mathbf{\theta'x_i}(1 - e^{-t_i/s}) %\end{equation} %\begin{equation} %\label{eq:posteriorDistrOmega} %p(\mathbf{\Omega}|D_l) \propto p(\mathbf{\Omega}) p(\mathbf{\Omega}|D_l) %\end{equation} %\begin{equation} %\label{eq:posteriorDistrU} %p(U_k|D_l) = \int p(U_k|\mathbf{\Omega}) p(\mathbf{\Omega}|D_l) \mathbf{d\Omega} %\end{equation} % %Since equation \ref{eq:posteriorDistrOmega} does not have a closed-form solution, a Markov Chain Monte Carlo (MCMC)[***] should be used as an approximate inference algorithm. However, authors argued that its performance is very low and users could wait for up to a minute until the MC converges. Hence, they developed a Bayesian model using a piecewise-linear function approximation, as well as a variational inference algorithm to approximate posterior distribution of $\Omega$. For more details about the model definition, please refer to section 4.2 in [***](this paper). Authors mentioned that even if the model just considered audio content and novelty of music on its definition, other factors like diversity, mood or genre could be also approximated by linear functions and then added to it. All in all, the defined approximate Bayesian model was used to obtain the fixed-level quantile of $p(U_i|\mathcal{D})$ and thus estimate the expected payoff and confidence bound of the conceptual Bayes-UCB. % %Evaluations of both efficiency and effectiveness were carried out over six recommendation algorithms and models (Random, LinUCB-C (Content-based), LinUCB-CN (Content and novelty based), Bayes-UCB-CN, Bayes-UCB-CN-V (Variational Inference), and Greedy-CN) demonstrating that the proposed approaches were accurate and highly efficient. The effectiveness study used the \textit{regret}[***] metric to compare the algorithms, showing that the Bayes-UCB-based algorithms performed better than Greedy-CN due to the balance of exploration and exploitation it offers, whereas the good performance of the Bayes-UCB- CN-V indicated that the piecewise-linear approximation and variational inference were appropriately defined. Moreover, results in the efficiency study empirically proved that the developed variational inference algorithm was 100 times faster than the MCMC. 
Additionally, a user study and an overall evaluation of recommendation performance dropped good conclusions. Results showed that the bandit approach with the novelty factor addressed the cold-start problem improving the recommendation performance. % %To sum up, this work was considered by the authors as the first to balance exploration and exploitation based on reinforcement learning and the multi-armed bandit, and thus improve recommendation performance by addressing the cold-start problem in music recommendation without relying on additional (contextual information). An approximation to the rating model and the new probabilistic inference algorithms helped to achieve real-time recommendation performance that could be generalized to other recommenders and/or media types. Finally, authors suggested that this work could be extended to model the correlations between different users to further reduce the amount of exploration by using hierarchical Bayesian models. On the other hand, if people prefer to boost the performance of CF, they suggested to exploit the exploration/exploitation trade-off idea by using the latent features learned during the matrix factorization rather than the audio features and keep other parts of the proposed system unchanged. % %\textbf{ENHANCING COLLABORATIVE FILTERING MUSIC RECOMMENDATION BY BALANCING EXPLORATION AND EXPLOITATION} extended the work presented above by introducing exploration into the CF context. The approach used a Bayesian graphical model that takes into account the CF latent factors and novelty of recommendation, as well as a Bayesian inference algorithm to efficiently estimate the posterior rating distributions. % %Authors argued that their previous work, based on a content-based approach, had experienced the following drawbacks: (i) the personalized user rating model suffer of a semantic meaning between low-level audio features and high-level user preferences; (ii) it is difficult to determine which acoustic features are actually effective in the music recommendation scenario; and (iii) recommendation of songs under this approach lacks of variety due to most of them are acoustically similar. Therefore, they presented a memory-based CF approach using matrix factorization in order to improve recommendations. % %Recalling the definition of Matrix factorization in Section \ref{sec:matrixFactorization}, this method characterizes users and songs by vectors of latent factors $\mathbf{u_i}$ and $\mathbf{v_j}$ respectively with $i \in [1, m], j \in [1, n]$. In order to learn the latent feature vectors, the system used equation \ref{eq:matrixFactorization} and Alternating Least Squares (ALS)[***] to minimize the regularized cost function on the training set: % %\begin{equation} %\label{eq:matrixFactorization} %\sum_{(i,j) \in I}(r_{ij}-\mathbf{u^T_i}\mathbf{v_j})^2 + \lambda (\sum_{i=1}^{m}n_{ui}\| \mathbf{u_i} \|^2 + \sum_{j=1}^{n}n_{vi}\| \mathbf{v_j} \|^2) %\end{equation} %where I is the index set of all known ratings, $\lambda$ a regularization parameter, $n_{ui}$ the number of ratings by user i, and $n_{vj}$ the number of ratings of song j. However, the traditional CF approach often fails to take into consideration novelty and works greedily. % %As a result, a reinforcement learning approach for CF-based music recommendation based on the n-arm bandit problem was proposed. 
The user rating model considered that song?s rating is affected by two factors: \textit{CF score}, (how much a user likes the song in terms of each CF latent factor) denoted by $U_{CF}$ and defined inn terms of matrix factorization, and \textit{novelty score} (the dynamically changing novelty of the song) denoted by $U_N$. Thus, the final user rating model, given in equation \ref{eq:UratingModel}, can be defined as a combination of the two scores, where vector $\theta$ indicates the user?s preferences for different CF latent factors, $\mathbf{v}$ is the song feature vector learned by the ALS CF algorithm, t is the time elapsed since when the song was last heard, s the relative strength of the user?s memory, and $e^{?t/s}$ the well-known forgetting curve. % %\begin{equation} %\label{eq:UratingModel} %U=U_{CF}U_{N}=(\mathbf{\theta^Tv})(1-e^{-t/s}) %\end{equation} % %In the same way as in [***](previous paper), each user was associated with a pair of parameters $\ohm=(\mathbf{\theta},s)$ to be learned from the user's rating history, and their underlying ratings were considered random rather than fixed numbers (estimated using $\mathcal{E}[R_j] = U_j$). Exploration was also introduced by using both a similar Bayesian Upper Confidence Bound (UCB) sampling algorithm and Bayesian Graphical model to estimate the posterior distribution $p(U_j|\mathcal{D})$ of $U_j$ given the target user?s rating history $\mathcal{D}$, but in terms of the CF latent factors rather than audio content. Then, the song with the highest fixed-level \textit{quantile} value (equation \ref{}) of $p(U_j|\mathcal{D})$ will be recommended to the target user. % %=BEGIN=ALREADY COMMENTED %The graphical model used in [***12] and depicted in Figure \ref{}*** was adopted to estimate the posterior distribution of $U$, and which is defined as follows: % %\begin{equation} %\label{} %R|\mathbf{v},t,\mathbf{\theta},s,\sigma^2 \sim \mathcal{N}(mathbf{\theta}^T\mathbf{v}(1-e^{-t/s}), \sigma^2) \\ %\end{equation} % %\begin{equation} %\label{} %\mathbf{\theta}|\sigma^2 \sim \mathcal{N}(\mathbf{0}, a_0\sigma^2\mathbf{I}) %\end{equation} % %\begin{equation} %\label{} %s \sim \mathit{Gamma}(b_0, c_0) %\end{equation} % %\begin{equation} %\label{} %\tau=1/\sigma^2 \sim \mathit{Gamma}(d_0, c_0) %\end{equation} % %where $\mathbf{I}$ is the $\mathit{f} \times \mathit{f}$ identity matrix, $\mathcal{N}$ represents the Gaussian distribution[***] and $\textit{Gamma}$ %represents the Gamma distribution[***] with parameters shape and rate. $\mathbf{\theta}$, s, and $\tau$ are parameters, while $a_0$, $b_0$, $c_0$, $d_0$, and $e_0$ are hyperparameters of the priors % %During iteration $h+1$, the model has delivered h recommendations to the user's history $\mathcal{D}_h = \{(\mathbf{v_i},t_i,r_i)\}^h_{i=1}$ which posterior distribution can be defined using Bayes theorem as $p(\Omega|\mathcal{D}_h) \propto p(\Omega)p(\mathcal{D}_h|\Omega)$. They used their own implementation of the Gibbs sampling algorithm to speed-up convergence compared the Monte Carlo Markov Chain (MCMC)[***] algorithm to sample the user's parameters $\Omega=(\mathcal{\theta}, s)$ by sampling from a conditional distribution. Then, they used Eq. \ref{eq:expUserRatingSongj} to obtain a sample for $U_j$, and finally, the posterior Probability Density Function PDF $p(U_j,\mathcal{D})$ was approximated by the histogram of the samples of $U_j$. 
% %After estimating the posterior PDF for rating each song, they used a Bayes-UCB approach to recommend song $\mathit{j}^*$ that maximizes the quantile function: % %\begin{equation} %\label{} %j^*=\arg\max_{j=1,...,|S|} \mathcal{Q}(\alpha, p(U_j,D_h)) %\end{equation} % %where $\alpha=1-\frac{1}{h+1}$ and the quantile function $\mathcal{Q}$ returns a value \textit{x} such that $Pr(U_j \leq x|D_h)=\alpha$ %=END=ALREADY COMMENTED % %However, the efficiency of the convergence in the Bayesian inference of this approach was improved by developing a specific Gibbs sampling algorithm[***] as in this case it is simple to sample the ratings from the conditional distribution of $\mathbf{\theta}$, while, a Metropolis-Hastings (MH) algorithm[***] was used to draw samples of $s$. For a more detail explanation of the algorithms, please refer to sections 3.2 and 3.3 in [***](this paper). % %Efficiency experiments were carried out over to compare the developed sampling implementation and an MCMC algorithm developed in JAGS\footnote{\url{http://mcmc-jags.sourceforge.net/}}. The results showed that their proposed Gibbs sampling algorithm is hundreds of times faster than MCMC evidencing the suitability of the algorithm for online recommender systems. Additionally, an online user study which compared the effectiveness of the proposed Bayes-UCB-CF algorithm against the traditional greedy method and the previous implementation Bayes-UCB-Content[***](previous paper), proved that the cumulative average rating of new algorithm significantly outperformed the baselines. All in all, the Bayes-UCB-CF algorithm achieved a better balanced exploration/exploitation trade-off and significantly showed an improvement on its recommendation performance. % %In conclusion, this first attempt to remedy the greedy nature of CF approaches could enhance the performance of CF-based music recommendation significantly. Moreover, the reinforcement learning model seems to be applicable to other contexts different from the music environment, as well as it could be deployed together with the content-based RL model from [***](previous paper) in order to build a hybrid framework which combines the strengths of both approaches. % %=BEGIN=ALREADY COMMENTED %\textbf{Generating Music Playlists with Hierarchical Clustering and Q-Learning} %describes a system that uses reinforcement learning over hierarchically-clustered sets of songs to learn a user?s listening preferences for a Automatic Playlist Generation (APG), % %recommendation algorithms focus on discovery, whereas APG tends to be more concerned with coherence within a set of ordered tracks % %The approach we have taken is to use CB methods to inform an unsupervised machine learning algorithm (Q-learning), which over time can learn fromimplicit user feedback to generate playlists which are personalised to different listeners % %Features extracted from the audio are also used as part of this process %solution that does not rely on tagged audio files from any data source or any external services to achieve a completely personal and independent music player. % %The playermonitors the user?s actions continuously using reinforcement learn- %ing and updates its matrices based on user behaviour. 
Using implicit user be- haviouronly, ourplayeris able to learn user preferences providing users with a better music listening experience % % %the aim is to create playlists from individual songs and learn the proba- bilities of transitioning between them ($n^2$ transitions for a library of n songs) %the solution is to first cluster the songs, and then learn transitions between these clusters % %The solution we propose is therefore to use hierarchical clusters, in which %atree of clustersisconstructed using k-means clustering, and Q-learning (ex- plained below) is performed on nodes at multiple levels. This keeps the benefits of clustering without introducing large transition matrices or large groups of %songs to choose between. In fact, this reduces the space complexity from O(n2) to just O(n). % %**learning from user behavior % %This is known as active learning %The MDP was defined as follows: set of states S corresponds to the set of clusters, and not the individual songs %we will need one model per node in the tree, with S being only the node?s child clusters % %the model-free Q-learning was used to compute the expected utility of taking a particular action a %In the music player, the agent does actually know %S, since this is just a deterministic function in which a1 means ?transition to s1?, a2??transition to s2?, and so on. % %However, R is clearly unknown because the rewards are assigned by the user. % %Having the transition matrix P % %\begin{equation} %\label{} %P^'_{r, c} = P_{r, c} + \alpha_t \times [R_t + \gamma_a P_{c, a} - P_{r, c}] %\end{equation} % %$\apha$ and R are functions which may depend on other parameters including the current song choice % %Q-learning algorithms often iterate within ?episodes? until local convergence %is reached. This method doesn?t apply well to the music player scenario, so instead there is just one update per user action. This reflects that rather than having a single clear goal for which an optimum path must be found, we are continually trying to find good states and may continue indefinitely. So there are no absorbing states in the MDP, and its transition graph may contain loops. % %Reward calculation scheme %takes values between -1,1 based on a linear interpolation of the time the user listened the song. %The basic measure of implicit reward is listening time, %This leads us to r = ?1.0 as the greatest negative reward for a skip after 0 seconds, and r =1.0 as the greatest positive reward for not skipping at all %So a track finishing is just a special case of this general rule, and we interpolate %linearly between the two reward extremes based on how far through the track we got to before the skip %whenever a track is skipped quickly, the previous song should be treated as the current track when future rewards are calculated. % %Also, the small rewards %should only be applied for smaller playlists since % % %**conclusions %perform well in a small user study, greatly reducing the relative number of songs that a user skips. %=END=ALREADY COMMENTED A model-based RL agent for music playlist recommendation proposed by Liebman et al. \cite{liebman2015dj} modeled the preferences of both songs and song transitions as MDPs, and demonstrated to be potentially effective in the domain and particularly good to solve the cold-start problem as it is able to generate personalized song sequences within a single listening session and with no prior knowledge of the new user's preferences. 
The adaptive playlist generation problem was defined as an episodic MDP $\langle S, A, P, R, T \rangle$ with the following components:
\begin{itemize}
\item a state space $S$ composed of the ordered sequences of songs played so far, $S = \{(a_1, a_2, ..., a_i)|1 \leq i \leq k; \forall j \leq i, a_j \in \mathcal{M}\}$;
\item an action space $A$ formed by all possible candidates for the next song to play, $a_k \in A$, which means $A = \mathcal{M}$;
\item a deterministic transition function $P(s, a) = s'$ which indicates the transition from state $s$ to $s'$ when taking action $a$;
\item a utility function $R(s, a)$ derived from hearing song $a$ when in state $s$;
\item $T = \{(a_1, a_2, ..., a_k)\}$: the set of playlists of length $k$.
\end{itemize}
In order to find an optimal policy $\pi^*$ (and thus obtain the sequence of songs most pleasing to the listener), a model-based approach was chosen, arguing that even though model-free approaches learn the value of taking an action $a$ from state $s$ directly, they require a lot of data to converge, and such data is considerably scarce in the music recommendation domain (as well as in other kinds of applications). On the other hand, model-based formulations often require a lot of computation to find an approximate solution to the MDP, but this trade-off of computational expense for data efficiency makes the model-based approach a good option for this problem. Since $P$ is deterministic, only the listener's reward function $R$ needed to be modeled. Therefore, the reward function, defined as $R(s, a) = R_s(a)+R_t(s, a)$, represents the sum of two distinct components: (1) the listener's preference over songs, $R_s : A \rightarrow \mathbb{R}$, and (2) the preference over transitions from the songs played to a new song, $R_t : S \times A \rightarrow \mathbb{R}$.
% Hence, R was defined using equation \ref{eq:Rdecomposed}.
%\begin{equation}
%\label{eq:Rdecomposed}
%R(s, a) = R_s(a)+R_t(s, a)
%\end{equation}
In essence, the model represents each song as a compact vector of spectral auditory descriptors, so the system is able to leverage knowledge from only a few transition examples to plan a future sequence of songs.
%However, the framework is in principle robust and agnostic to the choice of a specific song corpus.
%First, the listener reward (linear) function over songs $R_s$ generates a binary feature vector using a sparse encoding of the song descriptors as $R_s(a) = \phi_s(u) \cdot \theta_s(a)$, where $\phi_s(u)$ represents the listener pleasure for a particular set of active features. Then, the listener reward (linear) function over transitions $R_t$ obtains a sparse binary feature vector $R_t(a_i, a_j) = \phi_t(u) \cdot \theta_t(a_i, a_j)$, where $\phi_t(u)$ is a user-dependent weight vector and $\theta_t$ is a binary feature vector, with both representing transitions between 10-percentile bins of the song descriptors they share (for learnability purposes).
An evaluation of the expressiveness of this feature representation showed that different song sequences are indeed distinguishable; the compact representation was still rich enough to capture meaningful differences in users' preferences.
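To illustrate the decomposition used here (a simplified sketch with made-up dense feature vectors rather than the authors' sparse binary encoding), the listener reward can be modeled as the sum of a song-preference term and a transition-preference term, each a dot product between learned listener weights and song or transition descriptors:
\begin{verbatim}
import numpy as np

def listener_reward(phi_s, phi_t, song_feats, prev_feats):
    """R(s, a) = R_s(a) + R_t(s, a): song preference plus transition preference."""
    r_song = float(phi_s @ song_feats)                # R_s(a)
    transition = np.abs(song_feats - prev_feats)      # crude transition descriptor
    r_trans = float(phi_t @ transition)               # R_t(s, a)
    return r_song + r_trans

d = 4                                                 # toy descriptor size
phi_s, phi_t = np.ones(d) * 0.5, np.ones(d) * 0.2     # learned listener parameters
print(listener_reward(phi_s, phi_t, np.random.rand(d), np.random.rand(d)))
\end{verbatim}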
On the other hand, the agent architecture was composed of a module for learning the listener parameters
%($\phi_s$ and $\phi_t$) which performs the
during the initialization and learning-on-the-fly processes, and an additional module for planning the sequence of songs in the playlist.
% and thus estimate the next appropriate song to play.
The initialization is divided into two parts: (1) initialization of the song preferences, which polls the listener for her $k_s$ favorite songs in the database and updates the user's preference vector
%value $\phi_s(u)$ using the feature descriptors of each of the selected items;
and (2) initialization of the transition preferences, done by presenting different possible transitions that encapsulate the variety in the dataset and directly asking which of a possible set of options the listener would prefer; updates are carried out in the same manner as the initialization of song preferences. After initialization, the agent begins to play songs for the listener, waiting for her feedback (reward) and updating the user's preferences accordingly. This learning-on-the-fly procedure can be seen as a temporal-difference update with an attenuating learning rate that balances the trust between the previous history of observations and the newly obtained signal.
%After initialization, the agent begins playing songs for the listener, waiting for her feedback, and updating $\phi_s$ and $\phi_t$ accordingly. The latter is computed by using a single unified reward signal that considers the relative contributions of the song and transition rewards to set weights (equations \ref{eq:w_s} and \ref{eq:w_t}) for credit assignment (equations \ref{eq:phis} and \ref{eq:phit}). In general, the learning of the fly procedure is perceived as a temporal-difference update with an attenuating learning rate that balances the trust between the previous history of observations and the newly obtained signal.
%
%\begin{equation}
%\label{eq:w_s}
%w_s = \frac{R_s(a_i)}{R_s(a_i) + R_t{a_{i-1}, a_i}}
%\end{equation}
%\begin{equation}
%\label{eq:w_t}
%w_t = \frac{R_t(a_{i-1}, a_i)}{R_s(a_i) + R_t(a_{i-1}, a_i)}
%\end{equation}
%\begin{equation}
%\label{eq:phis}
%\phi_s = \frac{i}{i+1} \cdot \phi_s + \frac{i}{i+1} \cdot \theta_s \cdot w_s \cdot r_{incr}
%\end{equation}
%\begin{equation}
%\label{eq:phit}
%\phi_t = \frac{i}{i+1} \cdot \phi_t + \frac{i}{i+1} \cdot \theta_t \cdot w_t \cdot r_{incr}
%\end{equation}
%After determining the MDP reward function,
Then, a Monte Carlo Tree Search (MCTS) heuristic \cite{chaslot2010monte} is used for planning. The iterative process chooses the subset of 50\% of the songs in the database with the highest $R_s$ scores, and at each point it simulates a trajectory of future songs selected at random and calculates the expected payoff of the trajectory using $R_s$ and $R_t$. The process ends when it finds the trajectory that yields the highest expected payoff, and the first item of that trajectory is selected as the next song to recommend. Additionally, to mitigate the high complexity of re-planning over large song spaces, the agent uses the canonical K-means algorithm to cluster songs into song types, drastically reducing the search complexity.
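A rough sketch of this planning step (our own simplification; the actual DJ-MC planner is more elaborate) restricts the search to the half of the corpus with the highest song reward, simulates random future trajectories, and plays the first song of the best-scoring one:
\begin{verbatim}
import random

def plan_next_song(songs, r_song, r_trans, last_song, horizon=10, n_sims=100):
    """Pick the next song by simulating random trajectories and keeping the best."""
    candidates = sorted(songs, key=r_song, reverse=True)[:max(1, len(songs) // 2)]
    best_payoff, best_first = float("-inf"), None
    for _ in range(n_sims):
        trajectory = random.choices(candidates, k=horizon)
        payoff, prev = 0.0, last_song
        for song in trajectory:
            payoff += r_song(song) + r_trans(prev, song)
            prev = song
        if payoff > best_payoff:
            best_payoff, best_first = payoff, trajectory[0]
    return best_first
\end{verbatim}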
An evaluation of the cumulative reward distribution of the proposed approach showed that overall the DJ-MC agent outperforms two baseline models (a random agent and a greedy agent), while in terms of transition preferences the new system presented a small but significant boost in performance compared to a model that reasons only about song preferences. All in all, this approach proved to be a good step forward for reinforcement learning models in music recommendation, as it enables learning from relatively few examples and provides high-quality recommendation sequences.
%\textbf{Hybrid Collaborative Filtering with Neural Networks}
\section{Deep Reinforcement Learning}
So far, the RL agents presented above have achieved some success in the recommendation domain; nevertheless, their performance has been conditioned on the quality of the hand-crafted feature engineering involved, as well as on custom definitions of value functions or policy representations. Moreover, to use RL successfully in situations approaching real-world complexity, agents usually need to derive efficient representations of the environment from high-dimensional sensory inputs and use these to generalise past experiences to new situations. Recent advances in deep learning \cite{bengio2013representation} have made this possible, but even if this approach seems to fit an RL task, its applicability to the domain presents several challenges. For instance, successful deep learning applications have required large amounts of hand-labelled training data to obtain good prediction results. Additionally, most deep learning algorithms assume the data samples to be i.i.d., while in RL an agent typically encounters non-i.i.d. sequences of highly correlated states. Furthermore, agents must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed, while the RL data distribution changes as the algorithm learns new behaviour. However, thanks to the increasing success of adapting deep learning techniques to different domains, Deep Reinforcement Learning (DRL) models have started to be proposed, obtaining promising results. Mnih et al. presented in \cite{mnih2013playing}\cite{mnih2015human} the first deep learning model which uses an end-to-end reinforcement learning approach to learn control policies directly from a high-dimensional raw input space. The model, applied to a range of Atari 2600 games, consists of a convolutional neural network and an experience replay mechanism \cite{adam2012experience} that alleviates the problems of correlated data and non-stationary distributions in typical RL problems. Moreover, the model's architecture uses only the state representation as input and has a separate output unit for each possible action, corresponding to the predicted Q-value of each individual action that can be applied to the input state. This configuration allows the Q-values of all possible actions in a given state to be computed with only a single forward pass. Finally, the resulting Deep Q-Network (DQN) is trained with a variant of Q-learning which learns from raw pixel input and the underlying environment properties, and outputs a value function estimating future rewards.
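The experience replay mechanism and the Q-learning target mentioned above can be sketched roughly as follows (a generic illustration of the idea, not the original DQN implementation; \texttt{q\_net} and \texttt{q\_target} stand for the online and target networks):
\begin{verbatim}
import random
from collections import deque

replay_memory = deque(maxlen=100_000)      # experience replay buffer

def store(s, a, r, s_next, done):
    replay_memory.append((s, a, r, s_next, done))

def td_targets(batch, q_target, gamma=0.99):
    """Compute y = r + gamma * max_a' Q_target(s', a'), or y = r on terminal steps."""
    targets = []
    for s, a, r, s_next, done in batch:
        y = r if done else r + gamma * max(q_target(s_next))
        targets.append((s, a, y))
    return targets

# One training step (schematic): sample an uncorrelated mini-batch and regress
# the online network's Q(s, a) towards y for every (s, a, y) in the batch.
# batch = random.sample(replay_memory, 32)
# update q_net using td_targets(batch, q_target)
\end{verbatim}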
On the other hand, the RL formulation models the environment $\mathcal{E}$ as an MDP composed of a state space $\mathcal{S}$, a discrete action space $\mathcal{A}=\{1, \ldots, K\}$, a scalar reward function $r(s_t, a_t)$ and a transition dynamics function $\rho(s_{t+1}|s_t, a_t)$. Then, given the agent's behaviour defined by a policy $\pi$, the action-value function is estimated by using the Bellman equation (presented in eq. \ref{eq:bellmanEq}) as an iterative update. A Q-network function approximator with weights $\theta$ is then trained by minimising the sequence of loss functions $L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho (\cdot)}[(y_i - Q(s, a; \theta_i))^2]$, where $y_i$ is the target for iteration $i$ and $\rho(s, a)$ is the behaviour probability distribution over sequences of states $s$ and actions $a$.
%\begin{equation}
%\label{eq:QnetLoss}
%L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho (\cdot)}[(y_i - Q(s, a; \theta_i))^2]
%\end{equation}
%where $y_i = \mathbb{E}_{s'\sim \mathcal{E}} [r + \gamma max_{a'} Q(s', a'; \theta_{i-1})|s, a]$ is the target for iteration i and $\rho(s, a)$ is a probability distribution over sequences of states s and actions a known as behavior.
Finally, to compute the full expectations in the gradient of the loss function, the authors considered stochastic gradient mini-batch updates of the weight parameters, and a learning algorithm which is: a) \textit{model-free}, replacing the expectations by directly using uniform samples from the behaviour distribution $\rho$; and b) \textit{off-policy}, following an $\epsilon$-greedy behaviour strategy that ensures adequate exploration of the state space. Evaluation results showed that this method is able to learn how the value function evolves for a reasonably complex sequence of events. Furthermore, the proposed DRL algorithm performed better than the SARSA and Contingency \cite{bellemare2012investigating} methods, which need to incorporate significant hand-crafted prior knowledge about the visual problem to obtain considerable performance, in contrast to the raw inputs that DRL uses. DRL also achieved better performance than an expert human player in 29 out of 49 games. However, this model is only able to handle discrete low-dimensional action spaces and cannot be directly applied to tasks in the continuous domain, as the iterative process of finding the action that maximises the action-value function becomes intractable. Additionally, a na\"{i}ve discretization of the action space would not be a good solution, as it may throw away information about the structure of the action domain. Lillicrap et al. \cite{lillicrap2015continuous} adapted the ideas presented above to the continuous action space and defined an actor-critic, model-free approach based on the Deterministic Policy Gradient (DPG) algorithm \cite{Silver2014}, which is able to learn policies end-to-end and whose performance is competitive with approaches based on a planning algorithm with access to the dynamics of the domain.
While the model's architecture uses features similar to those of DQN (network architecture, experience replay memory, and target networks), along with batch normalization \cite{ioffe2015batch} to overcome the internal covariate shift problem in the network input layers during mini-batch training, the Deep Deterministic Policy Gradient (DDPG) algorithm maintains two functions: (1) an actor function $\mu(s|\theta^{\mu})$ that represents the current policy by deterministically mapping states to specific actions; and (2) a critic function $Q(s, a)$ learned by applying the Bellman equation. During the learning process, DDPG updates the actor by applying the policy gradient of the expected return from the start distribution, $J = \mathbb{E}_{r_i, s_i \sim E, a_i \sim \pi}[R_1]$, with respect to the actor parameters. Even if convergence is no longer guaranteed due to the non-linearity of the function approximators, this produces good generalization over large state spaces.
%\begin{equation}
%\label{}
%\begin{aligned}
%\nabla_{\theta^{\mu}}J & \approx \mathbb{E}_{s_t \sim \rho^{\beta}} [\nabla_{\theta^{\mu}} Q(s, a | \theta_Q)|_{s=s_t, a=\mu(s_t|\theta_{\mu})} ] \\
%& = \mathbb{E}_{s_t \sim \rho^{\beta}} [\nabla_a Q(s, a | \theta_Q)|_{s=s_t, a=\mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s=s_t}]
%\end{aligned}
%\end{equation}
The DDPG algorithm also implements certain improvements to the RL task. First, it introduces stability in the learning process by applying soft target updates to the target networks instead of directly copying the weights: the target parameters slowly track the learned ones through an exponential moving average, $\theta' \leftarrow \tau\theta + (1 - \tau)\theta'$ with $\tau \ll 1$. Second, batch normalization allows DDPG to generalize across environments using the same hyperparameters for learning. Finally, the exploration-exploitation trade-off is managed independently from the learning algorithm by adding noise sampled from a noise process $\mathcal{N}$ to the actor policy $\mu$.
%
%A second improvement solved the difficulty of finding hyper-parameters that allow generalization across environments when the low dimensional feature vector observations have different physical units and the ranges. This issue is addressed by adapting a batch normalization technique proposed by [***], that minimizes the covariance shift during training by normalizing each dimension across samples in a mini-batch to have unit mean and variance. As a result, DDPG ensures that each layer receives whitened input in favor of learning effectively across many different tasks with differing types of units.
%
%The third improvement tries to overcome the common challenge on how to manage exploration in reinforcement learning tasks with continuous actions spaces. The exploration-explotaition trade-off was managed independently from the learning algorithm by adding noise sampled from a noise process $\mathcal{N}$ to the actor policy $\mu$ and thus create an exploration policy $\mu'(s_t) = \mu(s_t|\theta^{\mu}_t) + \mathcal{N}$. Hence, any sampling process $\mathcal{N}$ can be plugged in to suit exploration on specific environments.
To evaluate the algorithm's performance, the authors used simulated physical environments with different levels of difficulty, and an Ornstein-Uhlenbeck \cite{bibbona2008ornstein} process for adding exploration noise to the learning policy.
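The soft target update and the additive exploration noise can be sketched as follows (a schematic illustration with hypothetical parameter containers, not the authors' implementation; Gaussian noise stands in here for the Ornstein-Uhlenbeck process):
\begin{verbatim}
import numpy as np

def soft_update(target_params, online_params, tau=0.001):
    """theta' <- tau * theta + (1 - tau) * theta', applied parameter-wise."""
    for name in target_params:
        target_params[name] = (tau * online_params[name]
                               + (1.0 - tau) * target_params[name])

def exploratory_action(mu, state, noise_scale=0.1):
    """Add noise to the deterministic policy output to force exploration."""
    action = mu(state)
    return action + np.random.normal(0.0, noise_scale, size=np.shape(action))
\end{verbatim}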
Results demonstrated that, even when harder tasks yield poor Q estimates, DDPG is able to learn good policies across a variety of domains with continuous action spaces, using both low-dimensional feature vectors and high-dimensional pixel inputs. Additionally, all the experiments were solved using fewer steps of experience than the DQN algorithm, suggesting that DDPG may be able to solve even more difficult problems. However, the DDPG algorithm requires a large number of training episodes to find solutions and only handles environments with a moderate number of actions. Therefore, a more robust model-free approach is needed in order to tackle these limitations. Later, Dulac-Arnold et al. \cite{Dulac-Arnold2015} presented a new policy architecture, under the same actor-critic framework used above, that not only allows reinforcement learning methods to be applied to large-scale learning problems and to operate efficiently with a large number of actions, but also generalizes over the action set in logarithmic time. The full policy is then trained and optimized using DDPG. As a result, they obtained a more efficient algorithm that makes both learning and acting tractable in time and allows value-based policies to use the action features to reason about previously unseen actions. The so-called \textit{Wolpertinger} architecture leverages prior information about the actions and embeds them in a continuous space upon which the actor can generalize using a smooth function approximator. After the policy produces a continuous action within this space, it uses an approximate nearest neighbour search to find the set of closest discrete actions in logarithmic time and finally selects, among them, the action with the highest value according to the critic. Consequently, the algorithm avoids the heavy cost of evaluating all actions, mainly by defining an efficient action-generating actor and then using the critic to refine the actor's choices for the full policy.
%
%The action generation function $f_{\theta^{\pi}}(s) = \hat{\mathbf{a}}$ provides a proto-action in $\mathbb{R}^n$ for a given state s. $\hat{\mathbf{a}}$ is then mapped to the discrete action set $\mathcal{A}$ by applying the k-nearest-neighbor\footnote{The size of the generated action set k is task specific, and allows for an explicit trade-off between policy quality and speed} function from equation \ref{eq:knnDDPG}, which returns the k closest actions in $\mathcal{A}$ by computing the $L_2$ distance between them. Even if $g_k$ has the same complexity as the \textit{argmax} in the Q action-value function, each step of evaluation of the $L_2$ distance is faster than a full value-function evaluation as the lookup is performed in logarithmic time.
%\begin{equation}
%\label{eq:knnDDPG}
%g_k(\hat{\mathbf{a}}) = \arg\min^k_{a \in \mathcal{A}} | \mathbf{a} - \mathbf{\hat{\mathbf{a}}} |_2
%\end{equation}
%
%Following the selection of the of k closest actions to $\hat{\mathbf{a}}$, the choice of the actual action to be applied to the environment is set according to the highest-scoring action obtained according $Q_{\theta^Q}$ defined in equation \ref{eq:qDDPG}. Consequently, the full Wolpertinger policy $\pi_{\theta}$ with parameter $\theta$, representing both the parameters of the action generation element $\theta_{\pi}$ and of the critic $\theta_Q$, makes the algorithm significantly more robust to imperfections in the choice of action representation.
%
%\begin{equation}
%\label{eq:qDDPG}
%\pi_{\theta}(s) = \arg\max_{a \in g_k \circ f_{\theta^{\pi}}(s) } Q_{\theta^Q}(s, a)
%\end{equation}
Having defined the full policy as $\pi_{\theta^{\pi}} = g \circ f_{\theta^{\pi}}(s)$, where $f_{\theta^{\pi}}(s) = \hat{\mathbf{a}}$ is the proto-action generation function and $g$ is the action function that returns the set of $k$ closest actions to $\hat{\mathbf{a}}$, the final goal of this approach is to perform policy iteration by alternately carrying out policy evaluation on the current policy with Q-learning and then improving upon the current policy by applying DDPG to $f_{\theta^{\pi}}$, considering the effects of $g$ as a deterministic aspect of the environment. The Wolpertinger agent, evaluated on three environment classes with large action spaces (including a simulated recommender system), showed that it can scale to real-world MDPs with a large number of actions, since evaluating only a subset of the full action set was sufficient in many tasks to converge and provided significant speedups. Nevertheless, an exploration issue remains when the agent needs to learn from scratch.
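The resulting action-selection pipeline can be summarised by the following sketch; it is only an illustration of the idea (the actor, the critic and the matrix of action embeddings are placeholders, and a brute-force nearest-neighbour search stands in for the approximate logarithmic-time lookup used in the actual architecture):
\begin{verbatim}
import numpy as np

def wolpertinger_action(actor, critic, state, action_embeddings, k=10):
    # f: proto-action in the continuous embedding space
    proto = actor(state)
    # g_k: indices of the k embeddings closest to the proto-action (L2 distance)
    d2 = np.sum((action_embeddings - proto) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]
    # refine with the critic: return the candidate with the highest Q value
    q_vals = np.array([critic(state, action_embeddings[i]) for i in nearest])
    return int(nearest[np.argmax(q_vals)])
\end{verbatim}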
\chapter{Assignment: Data Inspection}
\label{hw:data-inspection}

\newthought{Understanding the data is crucial} for any data science task, and the best way to achieve this is with visualizations. With the \widget{Datasets} widget, load the \textit{Employee Attrition} data set, which describes employees of a company with 32 features, such as age, job role, years at company, and so on. We also know whether an employee resigned from the company (Attrition = Yes) or not (Attrition = No).
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{workflow.png}%
\caption{Workflow for the assignment.}
\label{fig:data-inspection-workflow}
\end{figure}
Inspect the data to understand it better:
\begin{enumerate}
\item Use \widget{Scatter Plot} and try \textit{Find Informative Projections}. What does the top combination look like? Is it useful? Why (not)?
\item Use \widget{Box Plot} and try \textit{Order by relevance to subgroups}. Which attribute best splits the data by \textit{Attrition}? Explain it.
\item Which widget would you use to observe the relationship between two discrete variables in relation to Attrition? Use \textit{Find informative projections} in that widget and explore the top result.
\end{enumerate}
\documentclass[german,a4paper]{article} %\pagestyle{headings} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[pdftex]{graphics} \usepackage{graphicx} \usepackage[pdftex]{thumbpdf} \usepackage{listings} \lstset{ language=[LaTeX]TeX, basicstyle=\ttfamily, breaklines=true, columns=fullflexible } \usepackage[margin=3cm]{geometry} \usepackage[ pdftex, a4paper, bookmarks, bookmarksopen=true, bookmarksnumbered=true, pdfauthor={Oliver Eickmeyer}, pdftitle={ELANMod Manual}, colorlinks, linkcolor=black, urlcolor=blue, citecolor=black ]{hyperref} \usepackage{amsmath} \usepackage{multirow} \usepackage[font=small,position=bottom,skip=0cm]{caption} \usepackage{mathpazo} \usepackage{floatflt} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Eigentliches Dokument \begin{document} %\begin{titlepage} %\vspace*{1cm} \begin{center} {\LARGE\bfseries ELANMod Manual}\\[1.0ex] \end{center} %\end{titlepage} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} ELANMod is a modification for ELAN. ELAN is an annotation tool, see \url{https://tla.mpi.nl/tools/tla-tools/elan/} for more information. With this modification it is possible to connect ELAN and VeniceHub via remote procedure call (RPC), so that the playback of VeniceHub will be synchronized with the playback of ELAN. \section{Installation} Download ELAN source code from \url{https://tla.mpi.nl/tools/tla-tools/elan/download/}. Be sure to download the version for which the modification is made for. Then copy the files of the modification just into the ELAN folder. Depending on the version there will be one or two files overwritten: ElanFrame2.java will be overwritten in every case. From version 4.8.0 on the pom.xml will also be overwritten, because with this version the developers of ELAN switched from ant to maven. Now execute the batch file \texttt{installLib.sh}. This will install one library that is unfortunately not available online, so maven can not find it by its own. The final step would be to call maven (for example with \texttt{mvn package}) and maven will build the application and download all necessary librarys. This can take some time. ELAN will be started automatically. But before the modified ELAN starts, VeniceHub should be started first (see section \ref{sec:usage}). \section{Usage} \label{sec:usage} To use the modified ELAN in conjunction with VeniceHub, first start VeniceHub. Usually that would be a command like: \begin{lstlisting} java -jar VeniceHub.jar -i Disk -f MyLogFile.xio.gz -o IIO \end{lstlisting} Then start the modified ELAN with \begin{lstlisting} mvn package \end{lstlisting} If maven is building ELAN the first time, it can take some time to download all dependencies. ELAN will be started automatically after maven finishes building. The default values of both programms are attuned to each other. For the RPC connection between modified ELAN and VeniceHub the default address is \texttt{localhost} and the default port is \texttt{4243}. See section \ref{sec:config} how to change address and port. After quitting ELAN, maven could need some time to finish. Modified ELAN should be stopped before VeniceHub is stopped. \section{Configuration} \label{sec:config} The configuration of the modified ELAN is set with the file \texttt{config.properties}. 
The relevant properties for the connection with VeniceHub are
\begin{lstlisting}
browserIP=localhost
browserPort=4243
\end{lstlisting}
To change those values for VeniceHub, run VeniceHub with the options \texttt{--rpcServerAdress} and \texttt{--rpcServerPort}.

In the configuration file for the modified ELAN there is a property \texttt{syncFreq}. It tells ELAN how often to send a seek command to VeniceHub to keep it synchronized.

\section{Troubleshooting}

If VeniceHub gets stuck in a loop and doesn't stop skipping, try the runtime command \texttt{reset}.

If you want to use a version that is not supported by the mod, see the README for how to adapt the modification to other versions of ELAN (this should be done only by experienced users).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
% This file contains the content for the Scope \cleardoublepage \numberedformat \chapter{Scope} % Do not modify section title %% Modify below this line %% This document specifies the component naming and versioning conventions associated with ACES System components. Examples are provided.
\documentclass[12pt,a4paper,final,titlepage]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{hyperref} \author{Jaiswal, Ankit Kumar} \newtheorem{dfn}{Definition} \newtheorem{thm}{Theorem} \newtheorem{pf}{Proof} \begin{document} \title{Extension of Rational to Real using Dedekind's Cuts} \date{May, 2013} \maketitle \begin{abstract} This report contains the bare bones of Dedekind's cut, with an attempt to define real numbers out of rational numbers, with a view to elaborate the reason for choosing this approach. This is in accordance with the view of the consolidated project to grasp the formation of real in its entirety. We assume familiarity with Elementary Set Theory, Natural \& Rational number constructed via Peano's Axioms and the fact that set of rational is a field. \end{abstract} \section{Introduction} We all have seen upto now, natural numbers ($\mathbb{N}$) being constructed from ``Naive Set Theory" and integers ($\mathbb{Z}$) and rational numbers ($\mathbb{Q}$) further being constructed from natural numbers with an interesting pattern, i.e. \begin{equation}\label{eq1} \mathbb{N}\:\subset \mathbb{Z}\:\subset \mathbb{Q} \end{equation} In a similar way, we will construct real numbers out of rational numbers which of course isn't obvious to see. But, all we need is a pattern among rational numbers which could be uniquely associated to every element in the set of real numbers ($\mathbb{R}$) (We do know what kinds of numbers to include; the numbers which we are using in the name of real numbers since childhood). In 1872, a German mathematician ``Richard Dedekind" discovered one such pattern and popularly his approach is known as ``Dedekind's Cuts"; and we will proceed the same way as contained in the books referred ($-$ list of reference is mentioned in the last). \textbf{NOTE:} The word ``number" would mean rational numbers unless otherwise specified. \section{Cuts} \begin{dfn}\label{def1} A set $\xi$ of rational numbers is called as a \textbf{cut}, if \begin{enumerate} \item it contains at least one rational number, but not all of them; \item every rational number of the set is smaller than every rational number not belonging to the set; \item it does not contain a greatest rational number (i.e. a number which is greater than any other number of the set). \end{enumerate} A number contained in the Cut is termed as \textbf{lower number} for $\xi$ and for numbers not belonging to the cut, \textbf{upper number} is used. \end{dfn} Also, \begin{dfn}\label{def2} Let $\xi$ and $\eta$ be two cuts, then $\xi\:=\:\eta$ ; if every lower number for $\xi$ is a lower number for $\eta$ and every lower number of $\eta$ is a lower number for $\xi$. \end{dfn} Now here comes a simple but important theorem; \begin{thm}\label{thm1} If X is an upper number for a cut $\xi$ and $X_1$ is some rational number such that $X_1 > X$ then, $X_1$ is also an upper number for $\xi$. \end{thm} \textbf{Proof:} Let's assume that $X_1$ is not an upper number for $\xi$, which from Definition \ref{def1} implies, $X_1 \in \xi$ and also $X_1 < X$ . But according to Theorem's conditions $X_1 > X$, which contradicts $X_1 \in \xi$. Hence, $X_1$ is an upper number. \medskip \begin{thm}\label{thm2} If $X$ is a lower number for $\xi$ and $X_1$ is some Rational Number such that, $X_1 < X$ then, $X_1$ is also a lower number for $\xi$. 
\end{thm} \textbf{Proof:} Let's assume that $X_1$ is not a lower number for $\xi$, which from Definition \ref{def1} implies, $X_1 \notin \xi$ and also $X_1 > X$ . But according to Theorem's conditions $X_1 < X$, which contradicts $X_1 \notin \xi$. Hence, $X_1$ is a lower number. \bigskip \bigskip Now, if we wish to show that a given set of rational number is a cut, we need to show only the following: \begin{enumerate} \item The set is not empty and there is a rational number not belonging to it. \item With every number it contains, the set also contains all numbers smaller than that number. \item with every number it contains, the set also contains a greater one. \end{enumerate} In order to substitute numbers (known till now) with ``cuts", we should be able to do all operations defined on numbers, over cuts. Some of them are ordering, addition, multiplication etc. \bigskip \subsection{Ordering} \begin{dfn}\label{def3} If $\xi$ and $\eta$ are cuts, then $\xi > \eta$; if there exist a lower number for $\xi$ which is an upper number for $\eta$. \end{dfn} \begin{dfn}\label{def4} If $\xi$ and $\eta$ are cuts, then $\xi < \eta$; if there exist an upper number for $\xi$ which is a lower number for $\eta$. \end{dfn} With this, we can show that cuts too follow the rule of trichotomy under ordering, like numbers. And the same has been shown as a theorem. \begin{thm}\label{thm3} For any given $\xi$, $\eta$, exactly one of\quad $\xi$=$\eta$, $\xi > \eta$, $\xi < \eta$\quad is the case. \end{thm} \textbf{Proof:} Let's check whether can two of them be simultaneously true. The statements $\xi = \eta$ \& $\xi < \eta$ , simultaneously are incompatible by Definition \ref{def2} and \ref{def3}. Similarly, the statements $\xi = \eta$ \& $\xi < \eta$ simultaneously are incompatible. But if we had $\xi>\eta$ \& $\xi<\eta$ then for sure we can say that there exists a lower number $X$ and an upper number $Y$ for $\xi$ such that $X$ is an upper number for $\eta$ and $Y$ is a lower number for $\eta$. Now, this implies $X<Y$ and $X>Y$ simultaneously; which is not possible. Hence we can say that two of them can't be true at the same time. Similarly, all of them can't be true simultaneously as two at a time is not possible. Therefore we can have at most one of the three cases true. If $\xi \neq \eta$ , then it's obvious that their lower numbers do not coincides, i.e. either some lower number for $\xi$ is an upper number for $\eta$, in which case, it follows that $\xi > \eta$ ; or some lower number for $\eta$ is an upper number for $\xi$, in which case, it follows that $\xi < \eta$. This implies, at least one of them is true. But among these three at most one can be true. Hence, one of them has to be true at a time. \bigskip \subsection{Addition} Before defining addition of cuts, there is one theorem which is helpful in our approach towards defining it. From the axiomatic point of view of addition of numbers, it is necessary to obtain a unique number on addition of two numbers. Following theorem fulfils this condition. \begin{thm}\label{thm4} Let $\xi$ and $\eta$ be two cuts and also let X be a lower number for $\xi$ and Y a lower number for $\eta$. \begin{enumerate} \item Then the set $\zeta$ of all rational number which are representable in the form $X + Y$, is itself a cut. \item No number of this set can be written as a sum of an upper number for $\xi$ and an upper number for $\eta$. 
\end{enumerate} \end{thm} \textbf{Proof:} Consider any lower numbers $X_1$ \&\ $X_2$ for $\xi$ and any lower numbers $Y_1$ \&\ $Y_2$ for $\eta$. Also consider any upper number $X_3$ for $\xi$ and any upper number $Y_3$ for $\eta$; with following relation. \begin{equation} X_1<X_2<X_3\qquad \&\ \qquad Y_1<Y_2<Y_3 \nonumber \end{equation} We can always find such numbers because of the properties of cuts. In order to prove $\zeta$ a cut, it is sufficient to show that it satisfies the three points of Definition \ref{def1}. \\ \begin{enumerate} \item Since $\xi$ and $\eta$ are cuts, means both are non-empty. And by given condition, sum of their elements belongs to the set $\zeta$. Also $\exists$ a number $Z=X_3+Y_3$ which doesn't belong to $\zeta$ (according to given condition). \item Now consider all numbers less than $X_1+Y_1$, i.e. $Z<X_1+Y_1$ or $\frac{Z}{X_1+Y_1} < 1$ . This statement can be used to draw the following : \begin{equation} X_1\dfrac{Z}{X_1+Y_1} < X_1\cdot1 \qquad \&\ \qquad Y_1\dfrac{Z}{X_1+Y_1} < Y_1\cdot1 \nonumber \end{equation} By Theorem \ref{thm2} and by given condition, the number \begin{equation} ( X_1\dfrac{Z}{X_1+Y_1} + Y_1\dfrac{Z}{X_1+Y_1} ) \in \zeta \nonumber \end{equation} which implies that $Z\in \zeta$. \item For every $X_1\in \xi$, $\exists$ a greater lower number $X_2$ and since it is a lower number for $\xi$, $(X_2+Y_1) \in \zeta$ even though $X_2+Y_1 > X_1+Y_1$. \end{enumerate} From 1, 2 and 3, we can conclude that the set $\zeta$ is a cut.\\ \medskip Now consider lower numbers for $\zeta$, from the first part of this theorem it could be represented as the sum of the lower numbers of both $\xi$ and $\eta$ i.e. of the form $X_1+Y_1$. And any such number can't be represented by the form $X_3+Y_3$, i.e. $X_1+Y_1 \neq X_3+Y_3$ as $X_1+Y_1 < X_3+Y_3$ (from the given relation). So, this proves the theorem. \bigskip \begin{dfn}\label{def5} The cut constructed in Theorem \ref{thm4} is denoted by $\xi+\eta$ and is called the sum of $\xi$ and $\eta$. \end{dfn} \begin{thm} (Commutative Law of Addition): $\xi+\eta = \eta+\xi$. \end{thm} \textbf{Proof:} Every X+Y is a Y+X, and vice versa; where X and Y are lower numbers for $\xi$ and $\eta$, respectively. \begin{thm} (Associative Law of Addition): $(\xi+\eta)+\zeta = \xi(\eta+\zeta)$ \end{thm} \textbf{Proof:} Every (X+Y)+Z is a X+(Y+Z), and vice versa; where X and Y are lower numbers for $\xi$ and $\eta$, respectively. There is yet another interesting property of property of cuts, which may be useful result to use. \begin{thm}\label{thm7} Given any positive Rational number A, and given a cut, then there exists a lower number X and an upper number U for the cut such that \begin{equation} U-X = A. \nonumber \end{equation} \end{thm} \textbf{Proof:} Let $X_1$ be some lower number and $Y_1$ be some upper number, and consider all rational numbers of the form $X_1+nA$ (where n is an integer) such that $X_1+nA > Y_1$. By Theorem \ref{thm1}, this number is an upper number. Consider a set S=\{$n\in\mathbb{N}$ | $X_1+nA$ is an uper number for $\xi$\} and since it is a set of natural number then it must have a least natural number, say u. This implies $X_1 + (u-1)A$ is a lower number. Therefore letting\: $U=X_1+uA$ and $X=X_1+(u-1)A$\: proves this. \medskip \begin{thm}\label{thm8} If \quad $\xi > \eta$ \quad then,\quad $\xi+\zeta > \eta+\zeta$. \end{thm} \textbf{Proof:} If $\xi>\eta$, then according to the definition \ref{def3} there exists a Rational number such that $Y\in\xi$ but $Y\notin\eta$. 
If $\xi$ is cut then, $\exists$ a rational number $X\in\xi$ such that, $X>Y$. By Theorem \ref{thm7} we can let $X-Y=U-Z$; where U \&\ Z are suitable upper and lower number for $\zeta$. The equation $X-Y=U-Z$ is equivalent to $X+Z=U+Y$ and $X+Z$ is a lower number for $(\xi+\zeta)$ and $U+Y$ is an upper number for $(\eta+\zeta)$ (by theorem \ref{thm4}). Hence, it is one such number which belongs to $(\xi+\zeta)$ but doesn't belong to $(\eta+\zeta)$. Therefore $\xi+\zeta > \eta+\zeta$. \bigskip \bigskip \bigskip \subsection{Multiplication} Before defining multiplication of cuts, there is one theorem which is helpful in our approach towards defining it. From the axiomatic point of view of multiplication of numbers, it is necessary to obtain a unique number on multiplication of two numbers. Following theorem fulfils this condition. \begin{thm}\label{thm9} Let $\xi$ and $\eta$ be cuts and also X be a lower number for $\xi$, whereas Y be a lower number for $\eta$. \begin{enumerate} \item Then the set $\zeta$ of all rational number which are representable in the form $X\cdot Y$, is itself a cut. \item No number of this set can be written as a product of an upper number for $\xi$ and an upper number for $\eta$. \end{enumerate} \end{thm} \textbf{Proof:} Consider any lower numbers $X_1$ \&\ $X_2$ for $\xi$ and any lower numbers $Y_1$ \&\ $Y_2$ for $\eta$. Also consider any upper number $X_3$ for $\xi$ and any upper number $Y_3$ for $\eta$; with following relation. \begin{equation} X_1<X_2<X_3\qquad \&\ \qquad Y_1<Y_2<Y_3 \nonumber \end{equation} We can always find such numbers because of the properties of cuts. In order to prove $\zeta$ a cut, it is sufficient to show that it satisfies the three points of Definition \ref{def1}. \\ \begin{enumerate} \item Since $\xi$ and $\eta$ are cuts, means both are non-empty. And by given condition, product of their elements belongs to the set $\zeta$. Also $\exists$ a number $Z=X_3\cdot Y_3$ which doesn't belong to $\zeta$ (according to given condition). \item Now consider all numbers less than $X_1\cdot Y_1$, i.e. $Z<X_1\cdot Y_1$ or $\frac{Z}{X_1} < Y_1$. This implies $\frac{Z}{X_1}$ is a lower number for $\eta$. Therefore its product with $X_1$ will belong to $\zeta$, which implies $Z\in \zeta$ (as $X_1\dfrac{Z}{X_1} = Z$ ). \item For every $X_1\in \xi$, $\exists$ a greater lower number $X_2$ and since it is a lower number for $\xi$, $(X_2\cdot Y_1) \in \zeta$ even though $X_2\cdot Y_1 > X_1\cdot Y_1$. \end{enumerate} From 1, 2 and 3, we can conclude that the set $\zeta$ is a cut.\\ Now consider lower numbers for $\zeta$, from the first part of this theorem it could be represented as the product of the lower numbers of both $\xi$ and $\eta$ i.e. of the form $X_1\cdot Y_1$. And any such number can't be represented by the form $X_3\cdot Y_3$, i.e. $X_1\cdot Y_1 \neq X_3\cdot Y_3$ as $X_1\cdot Y_1 < X_3\cdot Y_3$ (from the given relation). So, this proves the theorem. \bigskip \begin{dfn}\label{def6} The cut constructed in Theorem \ref{thm9} is denoted by $\xi\cdot \eta$ (however the dot is usually omitted), and is called the product of $\xi$ and $\eta$. \end{dfn} \begin{thm} (Commutative Law of Multiplication): $\xi\cdot \eta$ = $\eta\cdot \eta$ \end{thm} \textbf{Proof:} Every XY is a YX, and vice versa; where X and Y are lower numbers for $\xi$ and $\eta$, respectively. 
\begin{thm} (Associative Law of Multiplication): $(\xi\cdot \eta)\cdot \zeta$ = $\xi\cdot (\eta\cdot \zeta)$ \end{thm} \textbf{Proof:} Every (XY)Z is a X(YZ), and vice versa; where X,Y\&\ Z are lower numbers for $\xi$, $\eta$ and $\zeta$. \begin{thm} (Distributive Law of Multiplication): $\xi\cdot (\eta+\zeta)$ = $\xi\cdot \eta + \xi\cdot \zeta$ \end{thm} \textbf{Proof:}Let X,Y\&\ Z be the lower numbers for $\xi$, $\eta$ and $\zeta$.\\ From Definition \ref{def5} and \ref{def6}, lower number for $\xi\cdot (\eta+\zeta)$ will be of the form $X(Y+Z)$ which is equal to $XY + XZ$ (as Distributive Law holds over Rational number) which is the lower number for $\xi\cdot \eta + \xi\cdot \zeta$ (by Definition \ref{def5} and \ref{def6}). this implies that every lower number for $(\xi\cdot(\eta+\zeta))$ is a lower number for $(\xi\cdot \eta + \xi\cdot \zeta)$. Hence, $(\xi\cdot (\eta+\zeta)) \subset (\xi\cdot \eta + \xi\cdot \zeta)$. From Definition \ref{def5} and \ref{def6}, lower number for $(\xi\cdot \eta + \xi\cdot \zeta)$ will be of the form $XY+X_1Z$ (where $X_1\in \xi$ and may be different from $X$). Now if we let $X=min(X,X_1)$ then, $XY+XZ$ is also a lower number for $\xi$ which is equal to $X(Y+Z)$, where $X(Y+Z)$ is a lower number for $\xi\cdot(\eta+\zeta)$ (by Definition \ref{def5} and \ref{def6}). Hence, $(\xi\cdot \eta+\xi\cdot \zeta) \subset (\xi\cdot(\eta+\zeta))$ From above two paragraph we can conclude that $(\xi\cdot (\eta+\zeta)) = (\xi\cdot \eta + \xi\cdot \zeta)$. \bigskip \bigskip \bigskip \section{Rational and Integral cuts} We are almost one step far from defining real Numbers. In the previous section we have seen how to construct a cut, which just ask for a set of Rational number to satisfy three axioms of Definition \ref{def1}; but the question is ``how to associate uniquely a cut to a rational and what sort of cut will be suitable?". May be following theorem would help. \begin{thm} \label{thm10} For any given rational number $R$, the set of all rational numbers $<R$ constitutes a cut. \end{thm} \textbf{Proof:} Let the set defined above be $S$.In order to prove $\zeta$ a cut, it is sufficient to show that it satisfies the three points of Definition \ref{def1}. \begin{enumerate} \item For every R, $\exists$ a $X<R$ \&\ $X_1>R$ (where $X,X_1$ $\in$ $\mathbb{Q}$) which makes $X$ to belong in $S$ and $X_1$ not to belong in $S$. \item consider number less than any number $X\in S$; i.e. $Z<X$, but $X<R$, which implies $Z\in S$ for every $Z<X$. \item since there exists infinite rational number between any two Rational numbers, i.e. for every $X<R$, $\exists$ a $X_2\in \mathbb{Q}$ such that $X<X_2<R$ and by given condition $X_2\in S$. \end{enumerate} From 1, 2 and 3 we can conclude that $S$ is a cut. \\ Also, it's clear that this cut $S$ uniquely identifies R and hence, it is denoted by \textbf{R*}. \bigskip \begin{dfn} \label{def7} The cut constructed in Theorem \ref{thm10} is called as \textbf{rational cut} and is denoted by \textbf{R*}. And if R happens to be an integer then, the cut so constructed is called as \textbf{integral cut} and it is then denoted by \textbf{r*}. And if R is a natural number then the cut will be called as \textbf{natural cut}. \end{dfn} \subsection{Properties} Here are some interesting properties of rational cuts which are among the motivations in choosing cuts for defining all sorts of numbers (means rational) and hence are stated as theorems. 
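Before stating these properties, it is worth checking Definition \ref{def7} on the simplest possible example. For the rational number $2$, the associated rational cut is
\begin{equation}
2\text{*} = \{X \in \mathbb{Q} \mid X < 2\}, \nonumber
\end{equation}
and the three requirements of Definition \ref{def1} are immediate: $1 \in 2\text{*}$ while $3 \notin 2\text{*}$; if $X<2$ and $Z<X$, then $Z<2$; and for every $X<2$ the rational number $\frac{X+2}{2}$ satisfies $X < \frac{X+2}{2} < 2$, so $2\text{*}$ contains no greatest rational number.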
\begin{thm} \label{thm11} If $ \begin{array}{lll} X<Y \qquad or & X=Y \qquad or & X>Y \\ X\text{*}<Y\text{*} \quad or & X\text{*}=Y\text{*} \quad or & X\text{*}>Y\text{* ;} \end{array} $ vice versa. \end{thm} \textbf{Proof:} If $X<Y$ then for sure $X\in Y$* and by Definition \ref{def7} $X\notin X$*. Hence, by Definition \ref{def3} $X$*$<$ $Y$*. Similarly, result for other two cases can also be proved. \bigskip \begin{thm} \label{thm12} The following holds $ \begin{array}{rcl} (X+Y)\text{*} & = & (X\text{*} + Y\text{*}) \\ (X\cdot Y)\text{*} & = & (X\text{*}\cdot Y\text{*}) \end{array} $ \end{thm} \textbf{Proof:} Let $X_1$ and $Y_1$ be arbitrary lowers number for $X$* and $Y$*, respectively. \begin{enumerate} \item From definition \ref{def5}, lower number for ($X$* + $Y$*) will be of the form $X_1+Y_1$, where $X_1<X$ \&\ $Y_1<Y$. Because of the property of rational number, $X_1+Y_1 < X+Y$. Also by Definition \ref{def5} $(X_1+Y_1)$ is a lower number for ($X+Y$)*. This implies $(X\text{*} + Y\text{*}) \subset (X+Y)\text{*}$. \medskip \item Let $Z$ be a arbitrary lower number for ($X+Y$)* for which $Z<(X+Y)$ or in other sense $\frac{Z}{X+Y}<1$. This statement can be used to draw the following : \begin{equation} X_1\dfrac{Z}{X_1+Y_1} < X_1\cdot1 \qquad \&\ \qquad Y_1\dfrac{Z}{X_1+Y_1} < Y_1\cdot1 \nonumber \end{equation} Then obviously $X_1\dfrac{Z}{X_1+Y_1}$ \&\ $Y_1\dfrac{Z}{X_1+Y_1}$ are lower number for $X$* and $Y$*, respectively and their sum must belong to $(X\text{*}+Y\text{*})$. Since $Z = X\dfrac{Z}{X+Y}+Y\dfrac{Z}{X+Y}$, $Z \in (X\text{*}+Y\text{*})$. Hence, $(X+Y)\text{*} \subset (X\text{*} + Y\text{*})$. \medskip \item From Definition \ref{def6}, lower number for ($X$*$\cdot$ $Y$*) will be of the form $X_1\cdot Y_1$ for which $X_1\cdot Y_1 < X\cdot Y$ (as $X_1<X$ and $Y_1<Y$). By Definition \ref{def6}, $(X_1\cdot Y_1)$ is a lower number for ($X\cdot Y$)*. Hence, $(X\text{*}\cdot Y\text{*}) \subset (X\cdot Y)\text{*}$. \medskip \item Let $Z$ be a arbitrary lower number for ($X\cdot Y$)* i.e. $Z<(X\cdot Y)$. Since ($X\cdot Y$)* is a cut, therefore $\exists$ a greater lower $Z_1\in (X\cdot Y)$* for every $Z$ . This implies $Z<Z_1<(X\cdot Y)$ and this relation can be written as \begin{equation} \frac{Z}{Z_1} < 1 \quad \&\ \quad \frac{Z_1}{Y} < X \nonumber \end{equation} The number $\frac{Z_1}{Y}$ is a lower number for $X$* whereas $Y\frac{Z}{Z_1}$ is for $Y$* (as $\frac{Z}{Z_1} < 1$). This implies $Z\in (X$*$\cdot Y$*) and $(X\cdot Y)\text{*} \subset (X\text{*}\cdot Y\text{*})$. \end{enumerate} From 1 and 2, $(X+Y)\text{*} = (X\text{*} + Y\text{*})$ , and \\ from 3 and 4, $(X\cdot Y)\text{*} = (X\text{*}\cdot Y\text{*})$. \bigskip \section{Real Numbers} Now we get the intuition in our mind that cut is a good candidate to be set as a real number and following definition defines natural numbers to rational numbers, the same way. \begin{dfn} \label{def8} The symbol $X$ will now denote the cut $X$*, with following replacement. \begin{itemize} \item The term \textbf{natural numbers} ($\mathbb{N}$) will henceforth be used instead of natural cuts. \item The term \textbf{integers} ($\mathbb{Z}$) will henceforth be used instead of integral cuts. \item The term \textbf{rational number} ($\mathbb{Q}$) will henceforth be used instead of rational cuts. \end{itemize} \end{dfn} Because of such replacement, interesting results arises, some of which needed to be proved; hence, is stated as theorems. 
\begin{thm} \label{thm13} The rational numbers are those cuts for which there exists a least upper number. \end{thm} \textbf{Proof:} By Definition \ref{def8}, rational numbers are rational cuts, i.e. if the number is \textbf{X}, then \textbf{X}=$X$* , which implies for every $Z<X$, $Z\in X$* but $X \notin X$* (where $Z$ is a rational number(old one)). This implies X is the least upper number. \medskip \begin{dfn} \label{def9} Any cut which is not a rational number is called an \textbf{irrational number}. In other words, \textbf{irrational number} are those cuts for which there doesn't exists a least upper number. \end{dfn} Two irrational numbers could be added, multiplied and ordered as they are cuts. Now we are ready to define real numbers because we have already defined rational and irrational number. \begin{dfn} \label{def10} A \textbf{real number} is a cut. In other words, collection of all rational and irrational numbers is called as \textbf{real number set} ($\mathbb{R}$). \end{dfn} \bigskip There is yet another important definition which will enable us to identify cut as a number and element of a cut, i.e. number as a cut; any time. \begin{dfn} \label{def11} Let $\xi$ and X be a cut. Then X is a lower number for $\xi$ if and only if, $X<\xi$ and hence is an upper number if, $X\geq \xi$ . \end{dfn} \bigskip \bigskip \bigskip \section{Dedekind's Fundamental Theorem} \begin{thm} \label{thm14} Let there be given any division of all real numbers into two classes with the following properties: \begin{enumerate} \item There exists a number of the first class, and also one of the second class. \item Every number of the first class is less than every number of the second class. \end{enumerate} Then there exists exactly one real number $\theta$ such that every $H<\theta$ belongs to the first class and every $H>\theta$ to the second class. \end{thm} \textbf{Proof:} Consider a cut $\xi$ which consists of all the numbers of first class (except the greatest one, if any). This cut can be made as two requirements are already fulfilled according theorem's condition and the third one is fulfilled by excluding the greatest one. \\ This implies that for every such division of all real numbers we can associate a cut.\\ Also, by definition \ref{def11}, the numbers $< \xi$, $\in \xi$; and since, all lower numbers of $\xi$ are from first class implies that every real $H<\xi$ belongs to the first class.\\ Also, the numbers $> \xi$, $\notin \xi$; and since, all upper numbers of $\xi$ are from second class implies that every real $H>\xi$ belongs to the second class.\\ \bigskip \section{Revisiting Peano's Axioms} There is an interesting point still remaining towards defining real numbers and i.e. ``Do natural cuts satisfies Peano's Axioms?" . And the answer is stated as a theorem, which if true could become the reason for identifying natural cuts as natural numbers. \begin{thm} \label{thm15} The natural cut satisfy the five axiom of natural number if the role of 0 is assigned to 0* and if we set ($x$*)$'$ = $x$*+$1$* . \end{thm} \textbf{Proof:} \begin{enumerate} \item 0* is a natural number. \\ \textbf{Explanation}: given condition \item If $x$* is a natural number then $\exists$ a unique successor denoted by ($x$*)$'$, which is also a natural number.\\ \textbf{Explanation}: $x$* is a cut and ($x$*)$'$ = $x$*+$1$* = $(x+1)$*; which is a natural cut, or in other words natural number. 
\item If ($x$*)$'$ = ($y$*)$'$ , \quad then ($x$)* = ($y$)* \\ \textbf{Explanation}: if ($x$*)$'$ = ($y$*)$'$ then, $($x'$)\text{*} = ($y'$)\text{*}$ (as ($x$*)$'$ = $x$*+$1$* = $(x+1)$*); which by Theorem \ref{thm11} equivalent to the expression $x'=y'$ or more precisely, $x=y$. This implies $x\text{*}=y\text{*}$. \item ($x$*)$'$ $\neq$ 0* \qquad $\forall$ $x$* \\ \textbf{Explanation}: since $x'\neq 0$ $\forall$ $x$, therefore by Theorem \ref{thm11}, $(x')\text{*} \neq 0\text{*}$. this implies $(x\text{*})' \neq 0\text{*} \qquad \forall x\text{*}$. \item Let $\mathbb{N}$* be the set of all natural cuts and $M$* be a subset of it, s.t. 0* $\in$ $M$* \&\ ($m$*)$'$ $\in$ $M$* , whenever $m$* $\in$ $M$* ; then $M$* = $\mathbb{N}$* . \textbf{Explanation}: Consider a set M = \{\ $m$ $\mid$ $m$* $\in$ $M$*\}\ . We can say, 0 $\in$ $M$ (since 0* $\in$ $M$*) and also, $m'$ $\in$ $M$, whenever $m$ $\in$ $M$ (as ($m$*)$'$ $\in$ $M$* , whenever $m$* $\in$ $M$* ). From axioms of Natural number we can conclude that $M=\mathbb{N}$, which implies $M$* = $\mathbb{N}$*. \end{enumerate} \subsection{Conclusion} So with this, we can conclude that this so constructed real number set is an extension of rational number and the pattern mentioned as equation \ref{eq1} is restored, i.e. \begin{equation} \mathbb{N}\:\subset \mathbb{Z}\:\subset \mathbb{Q}\: \subset \mathbb{R} \end{equation} \newpage \section{Acknowledgement} The author wishes to thank the DST-Inspire Programme and the IISc Mathematics department for providing the resources that went into this study. Considerable use has been made of the J.R.D. Tata Library, for which the author is correspondingly grateful. The author especially thanks his project guide, Prof. M.K. Ghosh, for his guidance and help during the project. \newpage \begin{thebibliography}{100} \bibitem{Landau} Edmund Landau, \textit{Foundation of Analysis}, Chelsea Publishing Company New York, 1966 \bibitem{Spivak} Michael Spivak, \textit{Calculus}, Cambridge University Press, 1994 \bibitem{webpage} Wenliang Zhang, \textit{Supplementary notes on Dedekind's Cuts}, (http://www-personal.umich.edu/~wlzhang/dedekind-cuts.pdf) \bibitem{webpage} Alexander Bogomolny, \textit{On Dedekind Cuts},\\ (http://www.cut-the-knot.org/proofs/Whittaker.shtml) \end{thebibliography} \end{document}
In this lab you will practice using the command window to carry out basic calculations, learn how to use the help function, MATLAB documentation, and record your input and output using the \verb`diary` function. First, follow the Windows Intructions to create the working directory. %--------------------------------------------- \section{Arithmetics in MATLAB} %--------------------------------------------- %--------------------------------------------- \begin{enumerate}[(1)] \item In the command window enter the command \verb`diary('lab_01_output.txt')` this will create a \verb`.txt` file. This will record all input and output in the command window until you type the command \verb`diary off`. \item Type the command \verb`beep off`. This will disable the sound that plays when there is an error in your code. \item Use the \verb`help functionName` to search for the relevant function in the problems below. Consider your `answer' to this question to be the output generated by the \verb`help` command. \begin{enumerate} \item What type of logarithm does the \verb`log(x)` function calculate? \item Does the MATLAB default setting calculate trigonometric functions in radians or degrees? \end{enumerate} \item \label{enum:4}Carry out the following calculations using normal math operators: \begin{enumerate} \item \label{enum:a} $2+5$ \item \label{enum:b} $4^5$ \item \label{enum:c} $7\cdot 6$ \item \label{enum:d} $3/8$ \item \label{enum:e} $54460-2342$ \item \label{enum:f} $\cos(50^{\circ})$ % built in function commands \item \label{enum:g} $\sqrt{4}$ \item \label{enum:h} $\ln(3)$ \item \label{enum:i} $\sin\left(\frac{\pi}{2}\right)$ \item \label{enum:j} $e^{34}$ \end{enumerate} \item For questions \ref{enum:a} -- \ref{enum:e}, you must also carry out each calculation using functional notation for each operation. Use the \verb`help` command and/or search the MATLAB documentation to find what the functional notation is for each operation. \item When you complete the above tasks enter the command \verb`diary off`. This will stop recording the input and output in the command window. \end{enumerate} Follow the Overleaf Instructions to set up an account, make a copy of the template, then upload \verb`lab_01_output.txt` to the folder \verb`src` on Overleaf. Next open \verb|body.tex| under the folder \verb|LaTeX|. In the last section of the report, you will reproduce the following using \LaTeX{}. Once you finish, recompile, and submit the generated \verb|.pdf| file to WyoCourses. %--------------------------------------------- \section{Basics of \LaTeX{}} %--------------------------------------------- \subsection{Simplifying Fractions} Consider the function $$ f(x) = \frac{x^2 - 1}{x + 1}. $$ To simplify this funciton we can factor the numerator and cancel like terms \begin{align*} f(x) & = \frac{x^2 - 1}{x + 1} \\ & = \frac{(x - 1) (x + 1)}{x + 1} \\ & = x - 1. \end{align*} \subsection{Matrix} A general $3 \times 3$ matrix $A$ has the form $$ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{bmatrix}. $$ \subsection{The Millennium Prize Problems} \emph{The Millennium Prize Problems} are seven problems in mathematics that were stated by the \textbf{Clay Mathematics Institute} on May 24, 2000. The problems are \begin{enumerate}[(1)] \item Birch and Swinnerton-Dyer conjecture, \item Hodge conjecture, \item Navier–Stokes existence and smoothness, \item P versus NP problem, \item Poincaré conjecture, \item Riemann hypothesis, \item Yang–Mills existence and mass gap. 
\end{enumerate}

\textsc{The Riemann zeta function} is defined for complex $s$ with real part greater than $1$ by the absolutely convergent infinite series
$$
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots
$$
The practical uses of the Riemann hypothesis include many propositions known to be true under the Riemann hypothesis, and some that can be shown to be equivalent to the Riemann hypothesis:
\begin{itemize}
\item Distribution of prime numbers,
\item Growth of arithmetic functions,
\item Large prime gap conjecture,
\item Criteria equivalent to the Riemann hypothesis.
\end{itemize}
\chapter{Boundaries reconstruction based on the triangulation refinement}\label{chap:triangulation}
The purpose of this chapter is to provide an alternative approach to the $\alpha$-shapes methods for determining the boundaries $\partial \mbox{\set{R}{}{}}(\Pi)$ of the positive luminance regions in target PS. The idea of the method is based on the triangulation refinement of the source PS explained in Chapter \ref{chap:PS}. The boundaries $\partial \mbox{\set{R}{}{}}(\Pi)$ are approximated by connecting those vertices of the triangles that follow the same path $\Pi$. Numerical results are provided for three optical systems: the two-faceted cup, the TIR-collimator and a parabolic reflector.\\
\indent The PS method is compared to both MC and QMC ray tracing. Discussion and results are provided in the last section of this chapter.
\section{Reconstruction of the boundaries}
In Chapter \ref{chap:PS} we have seen that, using the triangulation refinement, more rays close to the boundaries are traced by selecting increasingly smaller values for the parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{p}_1}^{\textrm{min}}$. Once the algorithm stops, only the triangles that are expected to be crossed by at least one boundary $\partial \mbox{\set{R}{}{}}(\Pi)$ are taken into account for the construction of the boundary. From now on, we call these triangles the \textit{boundary triangles}. Two triangles are neighbors if they have one side in common. For each boundary triangle its neighbor is found, so that an ordered sequence of triangles is constructed. Given a path $\Pi$, the corresponding boundary $\partial\mbox{\set{R}{$1$}{}}(\Pi)$ on \set{S}{}{} is approximated by connecting the vertices of the boundary triangles which correspond to rays following path $\Pi$. The edge-ray principle is employed in order to define the corresponding boundaries $\partial \mbox{\set{R}{}{}}(\Pi)$ at the target. Thus, the boundaries $\partial \mbox{\set{R}{}{}}(\Pi)$ at the target are given by
\begin{equation}\map{M}{}{}(\partial\mbox{\set{R}{$1$}{}}(\Pi)):\partial\mbox{\set{R}{$1$}{}}(\Pi)\rightarrow\partial\mbox{{R}{}{}}(\Pi),\end{equation}
where $\map{M}{}{}$ is defined in (\ref{eq:map1}) and $\map{M}{}{}(\partial\mbox{\set{R}{$1$}{}}(\Pi))$ is the restriction of $\map{M}{}{}$ to $\partial\mbox{\set{R}{$1$}{}}(\Pi)$ for every path $\Pi$.
\\\indent In this chapter, we develop a criterion to establish the values of the parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$, $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$, $\varepsilon_{\variabile{p}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}$ that give a good approximation of $\partial \mbox{\set{R}{}{}}(\Pi)$. Similar to the selection of $\alpha$ in the $\alpha$-shapes procedure, the triangulation parameters are chosen using \'{e}tendue conservation, i.e., conservation of area in PS. The core of our approach is the following.\\
\indent The \'{e}tendue $U_1$ at the source PS, restricted to the rays that arrive at the target, is calculated. If all the rays emitted by the source are received by the target, $U_1$ can be easily determined by (\ref{eq:etenduesource1}). In case some rays do not arrive at the target but reach another detector, we use (\ref{eq:etendueintegralsource}) and (\ref{eq:etenduesumsource}). \\
\indent The \'{e}tendue $U_{\textrm{t}}$ at the target PS is calculated using (\ref{eq:etendueintegraltarget}) and (\ref{eq:etenduesumtarget}).
To do so, the triangulation refinement method is applied to the regions $\mbox{\set{R}{}{}}(\Pi)$ for a range of values of $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$ and for a fixed value of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$. The parameters along the $\variabile{p}$-axis are scaled by \begin{equation}\label{eq:scaled_parameters} \begin{aligned} & \variabile{w} = \frac{\variabile{q}_1^{\textrm{max}}-\variabile{q}_1^{\textrm{min}}}{\variabile{p}_1^{\textrm{max}}-\variabile{p}_1^{\textrm{min}}},\\ &\varepsilon_{\variabile{p}_1}^{\textrm{min}}= \frac{\varepsilon_{\variabile{q}_1}^{\textrm{min}}}{\variabile{w}}, \\ &\varepsilon_{\variabile{p}_1}^{\textrm{max}}= \frac{\varepsilon_{\variabile{q}_1}^{\textrm{max}}}{\variabile{w}}, \end{aligned} \end{equation} where $\variabile{p}_1^{\textrm{min}}$ and $\variabile{p}_1^{\textrm{max}}$ are the minimum and the maximum $\variabile{p}$-coordinate in \set{S}{}{} and, $\variabile{q}_1^{\textrm{min}}$ and $\variabile{q}_1^{\textrm{max}}$ are the minimum and the maximum $\variabile{q}$-coordinate in \set{S}{}{}. Every set of parameters gives a certain triangulation, for each of them an approximation of the boundaries $\partial\mbox{\set{R}{}{}}(\Pi)$ is obtained. Next, the intersection points $(\variabile{q}^{\variabile{i}}( \Pi, \variabile{p}),\variabile{p})_{\variabile{i} = 1, \cdots, \variabile{r}}$ between $\partial\mbox{\set{R}{}{}}(\Pi)$ and the horizontal line $\variabile{p}~=~ \const{constant}$ are calculated for every path $\Pi$, and for $\variabile{p}~\in~[-1,1]$. Ordering their $\variabile{q}$-coordinates corresponding to each direction $\variabile{p}$ in ascending order, the integral in Equation (\ref{eq:etenduetarg1}) is computed. Changing the values of the parameters, different approximations of $\partial\mbox{\set{R}{}{}}(\Pi)$ are found and, consequently, different values of $U_{\textrm{t}}$. By construction, $U_{\textrm{t}}$ is always underestimated ($U_{\textrm{t}}<U_1$) because the approximated boundaries are found joining the vertices of the \textit{boundary triangles} which are \textit{inside} the regions \set{R}{}{}$(\Pi)$. \\ \indent To use the parameters that give a good accuracy of the target photometric variables, the difference $\Delta U = U_1-U_{\textrm{t}}$ is calculated for every value of $U_{\textrm{t}}$ found. The values of the parameters that give a small $\Delta U$ provide a triangulation refinement, from which a good approximation of the target photometric variables can be computed. \\ \indent A similar method as described here is presented by Moore \cite{moore2013methods}. In Moore's method each ray leaves a point source at the same position while the angle coordinate changes. It starts considering three sampling rays and their corresponding paths are taken into account. In case the paths are equal, the rays traced are representative of all the rays inside the polygon that they describe at the target, otherwise an interpolation is required to compute the illumination pattern. This interpolation can affect the efficiency of the method. Our method employs the distribution of the rays at the target PS and avoids using any interpolation. Moreover, a criterion to stop our algorithm is provided in such a way that no more rays than necessary are traced. This makes ray tracing in PS more accurate compared to Moore's procedure. 
Furthermore, Moore method is not suitable for systems in which the size and the spatial distribution of the source is important as it consider only point sources.\\ \indent The triangulation refinement method is tested for several optical systems. The results are presented next. \section{The two-faceted cup} In this section we apply the triangulation refinement in PS to the two-faceted cup described in Chapter \ref{chap:raytracing} and depicted in Figure \ref{fig:cup}. We start tracing rays inside the system using PS ray tracing as explained in Chapter \ref{chap:PS}. To avoid rays parallel to the source and rays emitted from the endpoints, we consider their initial position $\variabile{q}_1$ and initial direction $\variabile{p}_1$ such that \begin{equation*} \begin{aligned} \variabile{p}_1&\in[-1+10^{-6},1-10^{-6}] = [\variabile{p}_1^{\textrm{min}}, \variabile{p}_1^{\textrm{max}}], \\ \variabile{q}_1&\in[-2+10^{-12}, 2-10^{-12}] = [\variabile{q}_1^{\textrm{min}}, \variabile{q}_1^{\textrm{max}}]. \end{aligned} \end{equation*} A stopping criterion for the triangulation is defined using \'{e}tendue conservation. Since the two-faceted cup is formed by only reflective lines and its target is adjacent to the left and the right reflector (it is located exactly at the top of the system), all the rays emitted by the source arrive at the target. Thus, from (\ref{eq:etenduesource1}) with $\n_1\sin(\myangle_1^{\textrm{max}})=\variabile{p}_1^\textrm{max}$ and $\variabile{a}=\variabile{q}_1^{\textrm{max}}$ follows \begin{equation}U_1 = U \approx 8. \end{equation} %To establish the number of rays needed to achieve a good accuracy of the target intensity, we compare the approximated $U_{\textrm{t}}$, obtained from a given number of rays, to the exact \'{e}tendue $U=U_1$. Ray tracing in PS is implemented by varying the parameter $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and fixing $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$ (we choose $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$), while the other two parameters are given by (\ref{eq:scaled_parameters}). Every set of parameters gives a different triangulation at the source PS. The approximated boundaries are computed for several triangulations joining the vertices of the triangles crossed by a boundary. \begin{figure}[t] \centering \begin{subfigure}{.45\textwidth} \includegraphics[width=\textwidth]{boundaries_source_triangles2} \caption{The black lines are the boundaries at \set{S}{}{}. $1500$ rays are traced using the triangulation refinement with $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.1$. } \label{fig:boundaries_s2} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \includegraphics[width=\textwidth]{boundaries_target_triangles2} \caption{The black lines are the boundaries at \set{T}{}{}. $1500$ rays are traced using the triangulation refinement with $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.1$.} % \label{fig:boundaries_t2} \end{subfigure} % \hfill \begin{subfigure}{.45\textwidth} \includegraphics[width = \textwidth]{boundaries_source_triangles1} \caption{The black lines are the boundaries at \set{S}{}{}. $7500$ rays are traced using the triangulation refinement with $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.025$.} \label{fig:boundaries_s1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \includegraphics[width=\textwidth]{boundaries_target_triangles1} \caption{The black lines are the boundaries at \set{T}{}{}. 
$7500$ rays are traced using the triangulation refinement with $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.025$.} % \label{fig:boundaries_t1} \end{subfigure} \caption{\textbf{Boundaries at \set{S}{}{} and \set{T}{}{} of the two-faceted cup.} The approximated boundaries are computed using the triangulation refinement with two different values of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$.} \label{fig:boundaries_cup} \end{figure} \\ \indent For example, if we consider $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.1$, $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$ and the corresponding parameters for the $\variabile{p}$-axis given by (\ref{eq:scaled_parameters}), a triangulation with around $1500$ rays (vertices of the triangles) is found. The boundaries $\partial$\set{R}{$1$}{}$(\Pi)$ and $\partial$\set{R}{}{}$(\Pi)$ are calculated using this triangulation refinement and are depicted in black in Figures \ref{fig:boundaries_s2} and \ref{fig:boundaries_t2}, respectively. For this set of rays we found $\Delta U \approx 0.53 $. Next, we can decrease $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ to obtain a more precise approximation of $U_{\textrm{t}}$. Choosing $\varepsilon_{\variabile{q}_1}^{\textrm{min}} = 0.025$ and $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$, a triangulation formed by around $7500$ rays is obtained. The approximated boundaries $\partial$\set{R}{$1$}{}$(\Pi)$ and $\partial$\set{R}{}{}$(\Pi)$ are depicted with black lines in Figures \ref{fig:boundaries_s1} and \ref{fig:boundaries_t1}, respectively. The approximation of the target \'{e}tendue gives $\Delta U \approx 0.13 $. Clearly, the boundary approximation obtained using $\varepsilon_{\variabile{q}_1}^{\textrm{min}} = 0.025$ is more accurate. Note that decreasing $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ increases the number of rays. \\ \indent In Figure \ref{fig:etendue_cup} we show with the blue line how the target \'{e}tendue varies as a function of the parameter $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$. The exact \'{e}tendue $U=8$ is depicted with the red line. By decreasing $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ an increase of $U_{\textrm{t}}$ is observed. %Furthermore, by construction, $U_{\textrm{t}}$ is always underestimated because the approximated boundaries are found joining the vertices of the \textit{boundaries triangles} which are \textit{inside} the regions \set{R}{}{}$(\Pi)$. \begin{figure}[t] \center \includegraphics[width= 0.7\textwidth]{etendue_cup_epsilon} \caption{\textbf{\'{E}tendue for the two-faceted cup.} The total \'{e}tendue as an area in PS is depicted with the red line. The approximated \'{e}tendue for a range of values of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ is shown with the blue line.} \label{fig:etendue_cup} \end{figure} In Figure \ref{fig:etendue_cup} we show the approximation of $U_{\textrm{t}}$ for up to around $1.2 \cdot 10^5$ rays traced using PS ray tracing with parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.8\cdot 10^{-4}$, $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$, $\varepsilon_{\variabile{p}_1}^{\textrm{min}}=\varepsilon_{\variabile{q}_1}^{\textrm{min}}/2$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}= \varepsilon_{\variabile{q}_1}^{\textrm{max}}/2$. We expect that further decreasing $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ makes the value of $U_{\textrm{t}}$ more precise.
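\\ \indent The refinement procedure with the \'{e}tendue-based stopping criterion can be summarized by the following minimal sketch, written here in Python for illustration only. The names \texttt{trace\_and\_triangulate} and \texttt{target\_etendue} are placeholders for the PS ray-tracing step and the boundary-based \'{e}tendue computation described above; they are not the names used in the actual implementation.
\begin{verbatim}
def refine_until_converged(trace_and_triangulate, target_etendue,
                           U_source, eps_q_min, eps_q_max, w, tol):
    # Halve eps_q_min until the etendue deficit Delta U drops below tol.
    # trace_and_triangulate(...) -> approximated boundaries at the target PS
    # target_etendue(boundaries) -> U_t computed from those boundaries
    # w: scaling factor between the q- and p-axis, Eq. (scaled_parameters)
    while True:
        eps_p_min, eps_p_max = eps_q_min / w, eps_q_max / w
        boundaries = trace_and_triangulate(eps_q_min, eps_q_max,
                                           eps_p_min, eps_p_max)
        U_t = target_etendue(boundaries)
        if U_source - U_t < tol:   # Delta U small enough: stop refining
            return boundaries, U_t
        eps_q_min *= 0.5           # refine: trace more rays near the boundaries
\end{verbatim}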
\\ \indent The PS averaged and normalized intensity $\hat{I}_{\textrm{PS}}$ with $1.2 \cdot 10^5$ rays is calculated from (\ref{eq:normalized_PS_intensity}) where ${I}_{\textrm{PS}}$ is given by (\ref{eta2}). The intensity profile is shown in Figure \ref{fig:intensity_cup_triangulation} with the red line. In the same graph we show the reference intensity $\hat{I}_{\textrm{ref}}$ with the dotted blue line. For the two-faceted cup the reference intensity is actually the exact intensity ($\hat{I}_{\textrm{ref}}= \hat{I}_{\textrm{exact}}$) computed as explained in Appendix \ref{app:boundariescup}. \begin{figure}[t] \center \includegraphics[width= 0.8\textwidth]{intensity_cup_triangulation} \caption{\textbf{Intensity profile at the target of the two-faceted cup.} The reference intensity is the exact intensity. The PS intensity is computed using the triangulation refinement with $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=0.8\cdot 10^{-4}$, $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$, $\varepsilon_{\variabile{p}_1}^{\textrm{min}}=\varepsilon_{\variabile{q}_1}^{\textrm{min}}/2$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}= \varepsilon_{\variabile{q}_1}^{\textrm{max}}/2$. Around $1.2 \cdot 10^5$ rays are traced.} \label{fig:intensity_cup_triangulation} \end{figure} \\ \indent Finally, we compare PS ray tracing with both MC and QMC ray tracing by computing the error between the approximated intensities $\hat{I}_{\textrm{A}} (\textrm{A}= \textrm{MC}, \textrm{QMC}, \textrm{PS})$ and the exact intensity $\hat{I}_{\textrm{ref}}$ using (\ref{eq:error}) with $\nbin = 100$. The results are shown in Figure \ref{fig:error_cup_triangulation} where the MC, QMC, and PS errors are depicted with the green, blue, and red lines, respectively. \begin{figure}[t] \center \includegraphics[width= 0.8\textwidth]{error_cup_triangulation} \caption{\textbf{Error plot for the two-faceted cup.} The errors between the approximated intensities $\hat{I}_{\textrm{A}} (\textrm{A}= \textrm{MC}, \textrm{QMC}, \textrm{PS})$ and the exact intensity $\hat{I}_{\textrm{exact}}$.} \label{fig:error_cup_triangulation} \end{figure} The graph shows that using PS ray tracing in combination with the triangulation refinement allows tracing far fewer rays than MC ray tracing. A comparison between QMC ray tracing and PS ray tracing shows that more rays are needed in PS. Indeed, although the shapes of all the regions \set{R}{}{}$(\Pi)$ are very smooth, their boundaries at the edge of the target phase space \set{T}{}{} are difficult to approximate by triangles. With the triangulation refinement, the vertical straight lines at the edge of \set{T}{}{} are always approximated by a broken line. However, the two-faceted cup is a very simple system and QMC ray tracing does not require a large number of rays to obtain the desired accuracy. \\ \indent Nevertheless, PS ray tracing has a significant advantage over QMC ray tracing. Indeed, as we have seen in Chapter \ref{chap:raytracing}, MC and QMC ray tracing are binning procedures. Therefore, the MC and QMC intensities are given by the average over every bin and the error also depends on the number of bins. % according to Equations \ref{} \ref{} PS ray tracing gives a point-wise intensity along all possible directions. In the simulations shown in this thesis we always compute the average PS intensity. This is needed to give a fair comparison of PS ray tracing versus MC and QMC ray tracing.
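\\ \indent As an illustration of this per-bin averaging (the implementation details are given later in this chapter), a minimal Python sketch could look as follows; the function name and arguments are illustrative only.
\begin{verbatim}
import numpy as np

def bin_averaged_intensity(intensity, bin_edges, n_sub=10):
    # Average a point-wise intensity I(p) over every bin [p_k, p_{k+1}]
    # with the composite trapezoidal rule on n_sub sub-intervals per bin.
    # intensity: vectorized callable returning the point-wise PS intensity
    averages = []
    for p_lo, p_hi in zip(bin_edges[:-1], bin_edges[1:]):
        p = np.linspace(p_lo, p_hi, n_sub + 1)
        averages.append(np.trapz(intensity(p), p) / (p_hi - p_lo))
    return np.array(averages)
\end{verbatim}
With \texttt{bin\_edges = np.linspace(-1, 1, 101)} this reproduces the $\nbin = 100$ bins used for the comparisons in this chapter.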
It is very important to observe that no error related to the number of bins is involved in the PS procedure. \\ \indent To investigate in more detail the performance of PS ray tracing, we test the method for more complicated systems. In the next section we present the results for a TIR-collimator. \section{A TIR-collimator} In this section we provide the results of PS ray tracing for a TIR-collimator, using the triangulation refinement to compute the boundaries $\partial$\set{R}{}{}$(\Pi)$ in target PS. In particular, we consider the TIR-collimator depicted in Figure \ref{fig:analyticlens}. Since this system contains two different media (air and glass), the law of refraction also plays a role in the ray tracing procedure. We run PS ray tracing for the TIR-collimator several times gradually increasing the number of rays, i.e., gradually decreasing the values of the parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}}, \varepsilon_{\variabile{q}_1}^{\textrm{max}}, \varepsilon_{\variabile{p}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}$ in the triangulation. In order to trace more rays close to the boundaries, we decide to vary only the values of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and $ \varepsilon_{\variabile{p}_1}^{\textrm{min}}$ while fixing the values of $ \varepsilon_{\variabile{q}_1}^{\textrm{max}}$ and $ \varepsilon_{\variabile{p}_1}^{\textrm{max}}$, as the latter two determine the number of rays inside the regions \set{R}{}{}$(\Pi)$. Every ray traced has initial position coordinate $\variabile{q}_1\in[-\variabile{a}, \variabile{a}]$ with $\variabile{a}=2$ and initial direction coordinate $\variabile{p}_1\in[-1,1]$. Therefore, the source PS of the TIR-collimator is the rectangular domain \set{S}{}{}$= [-2, 2] \times [-1, 1]$. The parameters $\varepsilon_{\variabile{p}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}$ are scaled as in Equation (\ref{eq:scaled_parameters}). \\ \indent To determine the triangulation refinement that gives a good approximation of the target intensity, we compare $U_1$ (source \'{e}tendue) to $U_{\textrm{t}}$ (target \'{e}tendue) and use \'{e}tendue conservation. In this case, not all light emitted by the source of the TIR-collimator arrives at the target. Indeed, using PS ray tracing, $\npath=7$ different paths $(\Pi_\lineai)_{\lineai=1, \cdots, \npath}$ are found but only five of them are paths from the source (line $1$) to the target (line $12$) (see also Section \ref{sec:results-Tir-alpha}). Thus, we need to remove from the total area of \set{S}{}{} those parts occupied by the rays that arrive at some other detectors and not at the target. % white areas in figure... Denoting by $A_T$ the area of each of these parts (see Section \ref{sec:results-Tir-alpha}), the source \'{e}tendue is given by: \begin{equation} U_1 = 8-2A_T\approx 7.77. \end{equation} The target \'{e}tendue $U_{\textrm{t}}$ is obtained from (\ref{eq:etenduetarg}) for a range of values of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and for $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=1$ fixed. The boundaries $\partial$\set{R}{}{}$(\Pi)$ are found for every value of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and $U_{\textrm{t}}$ is calculated for each of these boundaries. The results shown in Figure \ref{fig:etendue_tir_triangulation} give the \'{e}tendue plot as a function of $\varepsilon_{\variabile{q}_{1}}^{\textrm{min}}$.
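\\ \indent For completeness, the computation of $U_{\textrm{t}}$ from the approximated boundaries (the horizontal-cut procedure described at the beginning of this chapter) is sketched below in Python. The polygon representation of the boundaries and the function names are assumptions made only for this illustration and are not the actual implementation.
\begin{verbatim}
import numpy as np

def horizontal_intersections(poly, p):
    # q-coordinates where the closed polygon crosses the line p = const,
    # in ascending order; poly has shape (n, 2) with columns (q, p).
    # Degenerate crossings through a vertex are ignored in this sketch.
    q_cuts = []
    n = len(poly)
    for k in range(n):
        (q1, p1), (q2, p2) = poly[k], poly[(k + 1) % n]
        if (p1 - p) * (p2 - p) < 0:          # the edge crosses the line
            q_cuts.append(q1 + (p - p1) * (q2 - q1) / (p2 - p1))
    return np.sort(np.array(q_cuts))

def target_etendue(boundary_polygons, p_levels):
    # U_t = sum over the paths of the area enclosed by the approximated
    # boundary in the target phase space, obtained by integrating the
    # total width of the inside-intervals over the direction p.
    U_t = 0.0
    for poly in boundary_polygons:
        widths = []
        for p in p_levels:
            q = horizontal_intersections(poly, p)
            widths.append(np.sum(q[1::2] - q[0::2]) if len(q) % 2 == 0 else 0.0)
        U_t += np.trapz(widths, p_levels)
    return U_t
\end{verbatim}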
\begin{figure}[t] \center \includegraphics[width = 0.7\textwidth]{etendue_tir_triangulation} \caption{\textbf{\'{E}tendue of the TIR-collimator.} A comparison between $U_1$ and $U_{\textrm{t}}$ shows that by decreasing the value of $\varepsilon_{\variabile{q}_{1}}^{\textrm{min}}$, $\Delta U= U_1-U_{\textrm{t}}$ decreases.} \label{fig:etendue_tir_triangulation} \end{figure} \\ \indent The best approximation of $U_{\textrm{t}}$ shown in the previous graph is obtained using $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=1.6\cdot 10^{-3}$, tracing around $1.62 \cdot 10^5$ rays. The boundaries $\big(\partial\mbox{\set{R}{$1$}{}}(\Pi_{\lineai})\big)_{\lineai = 1, \cdots, 5}$ and $\big(\partial\mbox{\set{R}{}{}}(\Pi_{\lineai})\big)_{\lineai = 1, \cdots, 5}$ of the regions formed by these rays are shown with red lines in Figure \ref{fig:boundaries_TIR_triangulation} on the left and the right, respectively (see also Figure \ref{fig:Tir2} for a comparison of the target PS). \begin{figure}[h] \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{boundaries_triang_source_tir} \caption{Boundaries at the source PS.} \label{fig:boundaries_triang_source_tir} \end{subfigure} \hfill \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width = \textwidth]{boundaries_triang_targ_tir} \caption{Boundaries at the target PS.} \label{fig:boundaries_triang_target_tir} \end{subfigure} \caption{\textbf{Boundaries at \set{S}{}{} and \set{T}{}{} of the TIR-collimator.} The red lines show the boundaries found using $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=1.6\cdot 10^{-3}, \varepsilon_{\variabile{q}_1}^{\textrm{max}}=1, \varepsilon_{\variabile{p}_1}^{\textrm{min}}=\varepsilon_{\variabile{q}_1}^{\textrm{min}}/2$ and $\varepsilon_{\variabile{p}_1}^{\textrm{max}} = \varepsilon_{\variabile{q}_1}^{\textrm{max}}/2$.} \label{fig:boundaries_TIR_triangulation} \end{figure} \\ \indent The target PS intensity $\hat{I}_{\textrm{PS}}$ is computed and compared with a reference intensity $\hat{I}_{\textrm{ref}}$ which is given by QMC ray tracing with $10^7$ rays (as the exact intensity for the TIR-collimator is unknown). The profiles of the two intensities are given in Figure \ref{fig:intensity_tir_triangulation}. \begin{figure}[ht] \center \includegraphics[width= 0.8\textwidth]{intensity_tir_triangulation} \caption{\textbf{Target intensity for the TIR-collimator.} The PS intensity $\hat{I}_{\textrm{PS}}$ is computed using PS ray tracing with around $1.62\cdot 10^5$ rays. The reference intensity $\hat{I}_{\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.} \label{fig:intensity_tir_triangulation} \end{figure} \\ \indent To validate our method, PS ray tracing is compared to both MC and QMC ray tracing. The error between the approximated intensities $\hat{I}_{\textrm{A}} (\textrm{A}=\textrm{QMC}, \textrm{MC}, \textrm{PS})$ and the reference intensity $\hat{I}_{\textrm{ref}}$ as a function of the number of rays traced is calculated. The error plot is shown on a logarithmic scale in Figure \ref{fig:error_tir_triangulation} where the MC, QMC and PS convergences are shown with the green, the blue and the red line, respectively. The black dotted line is a line with slope $-\frac{1}{2}$, the blue dotted line has slope $-1$. The graph shows that MC ray tracing converges, for $\nrays \rightarrow \infty$, with an order of $\mathcal{O}\big(\frac{1}{\sqrt{\nrays}}\big)$, while both PS and QMC ray tracing have a speed of convergence of the order $\mathcal{O}\big(\frac{1}{\nrays}\big)$.
Note that, to obtain an error of $10^{-4}$, PS ray tracing allows tracing $10^2$ times fewer rays than MC ray tracing and almost $10$ times fewer rays than QMC ray tracing. \begin{figure}[t] \center \includegraphics[width= 0.8\textwidth]{error_nr_triangulation_tir} \caption{\textbf{Error as a function of the number of rays for the TIR-collimator.} The reference intensity $\hat{I}_{\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.} \label{fig:error_tir_triangulation} \end{figure} \begin{figure}[t] \center \includegraphics[width= 0.75\textwidth]{error_tir_triangulation_time} \caption{\textbf{Error as a function of the CPU-time for the TIR-collimator.} The reference intensity $\hat{I}_{\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.} \label{fig:error_tir_triangulation_time} \end{figure} \\ \indent Finally, in order to show the advantages of PS ray tracing in terms of the computational time, we provide an error convergence as a function of the CPU-time for all three methods (MC, QMC, and PS ray tracing) shown in Figure \ref{fig:error_tir_triangulation_time}. The choice of the colors is consistent with Figure \ref{fig:error_tir_triangulation}. We remark that the averaged normalized intensity in PS $\hat{I}_{\textrm{PS}}$ is computed over every bin using the trapezoidal rule and discretizing each bin into $10$ sub-intervals of the same length. Because of this, the CPU-time for computing the PS intensity is divided by $10$ to provide the real CPU-time of the PS method when it is not compared to a binning procedure, e.g., MC and QMC ray tracing. This is done for \textit{all} the results presented in this thesis. From Figure \ref{fig:error_tir_triangulation_time}, we observe that PS ray tracing outperforms both MC and QMC ray tracing. PS ray tracing is approximately $10^2$ times faster than MC ray tracing and $10$ times faster than QMC ray tracing. \\ \indent The results shown in Figures \ref{fig:error_tir_triangulation} and \ref{fig:error_tir_triangulation_time} are reported in Tables \ref{tab:ps_error_triangulation} and \ref{tab:mc_error_triangulation}. \begin{table}[t] \centering \caption{\bf Errors of the PS intensity for the TIR-collimator} \begin{tabular}{lllll} \hline $\varepsilon_{\variabile{q}}^{\textrm{min}} $ & $\nrays$ & \'{E}tendue & PS error & PS CPU-time (sec.) \\ \hline $0.05$ & $3\,547$ & $7.50$ & $1.75\cdot10^{-4}$ & $1.98$\\ $0.025$ & $8\,055$ & $7.61$ & $1.49\cdot 10^{-4}$ & $4.69$ \\ $0.0125$ & $17\,300$ & $7.69$ & $8.68\cdot 10^{-5}$ & $10.61$\\ $6.3 \cdot 10^{-3}$ & $38\,300$ & $7.73$ & $4.43\cdot 10^{-5}$ & $26.56$\\ $3.1 \cdot 10^{-3}$ & $79\,600$ & $7.75$ & $2.27\cdot 10^{-5}$ & $83.21$\\ $1.6 \cdot 10^{-3}$ & $162\,300$ & $7.76$ & $1.20\cdot 10^{-5}$ & $240.53$\\ \hline \end{tabular} \label{tab:ps_error_triangulation} \end{table} \begin{table}[t] \label{tab:table_tir_triangulation} \centering \caption{\bf Errors of the MC and QMC intensities for the TIR-collimator} \begin{tabular}{lllll} \hline $\nrays$ & MC error & MC CPU-time (sec.)
& QMC error & QMC CPU-time (sec.)\\ \hline $10^3$ & $2.09\cdot 10^{-3}$ & $2.73$ & $1.43\cdot10^{-3}$ & $2.63$\\ $10^4$ & $6.42\cdot 10^{-4}$ & $25.98$ & $3.03\cdot 10^{-4}$ & $25.84$\\ $10^5$ & $1.92\cdot 10^{-4}$ & $259.92$ & $5.82\cdot 10^{-5}$ & $258.28$\\ $10^6$ & $7.45\cdot 10^{-5}$ & $2585.83$ & $1.28\cdot 10^{-5}$ & $2482.67$\\ \hline \end{tabular} \label{tab:mc_error_triangulation} \end{table} %\begin{table}[t] %\centering %\caption{\bf Errors of the QMC intensity for the TIR-collimator} %\begin{tabular}{lllll} % \hline $\nrays$\; & QMC error & CPU-time (sec.)\\ % \hline % $10^3$ & $1.43\cdot10^{-3}$ & $2.63$ \\ %$10^4$ & $3.03\cdot 10^{-4}$ & $25.84$ \\ %$10^5$ & $5.82\cdot 10^{-5}$ & $258.28$ \\ % $10^6$ & $1.28\cdot 10^{-5}$ & $2482.67$ \\ % \hline % \end{tabular} % \label{tab:qmc_error_triangulation} % \end{table} %Using PS ray tracing an error equal to $10^{-4}$ is achieved tracing almost $10$ time less compared to QMC ray tracing and $100$ times less rays compared to QMC ray tracing. Next, we show the results for a system in which more than $5$ paths are possible. In particular, we present the results for an optical system for which multiple reflections of the rays at the mirrors occur. \section{A parabolic reflector} In this section we show an example of a parabolic reflector, which is depicted in Figure \ref{fig:PR}. It consists of a source \point{S} (line $1$), a target \point{T} (line $4$) parallel to \point{S}, and two reflectors (lines $2$ and $3$), which are arcs of the same parabola. The minimum of the parabola is located at the point with $\variabile{x}$-coordinate equal to $0$. $\point{S}=[-\variabile{a}, \variabile{a}]$ (with $\variabile{a}=2$) and $\point{T}=[-\variabile{b},\variabile{b}]$ (with $\variabile{b}=17$) are lines perpendicular to the optical axis (\variabile{z}-axis) and are located at $\variabile{z}=0$ and $\variabile{z}=40$, respectively. All the optical lines are located in air; therefore, the index of refraction ${n}=1$ for every line. The optical axis of the system in Figure \ref{fig:PR} corresponds to the \variabile{z}-axis. \begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{parabolic_reflector} \caption{\textbf{A parabolic reflector.} Each line of the system is labeled with a number. The source \point{S}$= [-2,2]$ (line $1$) is located on the $\variabile{x}$-axis. The target \point{T}$= [-17, 17]$ (line $4$) is parallel to the source and is located at a height $ \variabile{z}= 40$. The left and right reflectors (lines $2$ and $3$) are arcs of the same parabola.} \label{fig:PR} \end{figure} We trace rays in PS with source direction coordinates $\variabile{p}_1 \in [-1,1]$ and source position coordinates $\variabile{q}_1\in[-\variabile{a}+\varepsilon,\variabile{a}-\varepsilon]$ where $\varepsilon>0$ is a small number. In particular we take $\varepsilon = 10^{-12}$.\\ \indent As an example we show the triangulation refinement obtained for the parameters $$\varepsilon_{\variabile{q}_1}^{\textrm{min}} = 0.025, \; \varepsilon_{\variabile{q}_1}^{\textrm{max}} = 0.5, \; \varepsilon_{\variabile{p}_1}^{\textrm{min}} = \varepsilon_{\variabile{q}_1}^{\textrm{min}}/2, \mbox{ and } \varepsilon_{\variabile{p}_1}^{\textrm{max}} = \varepsilon_{\variabile{q}_1}^{\textrm{max}}/2, $$ for which around $8300$ rays are traced in PS. Their distribution at \set{S}{}{} and \set{T}{}{} is shown in Figures \ref{fig:source_triang_pr} and \ref{fig:target_triang_pr}, respectively.
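As a quick check of the factor $1/2$ used in these parameters, Equation (\ref{eq:scaled_parameters}) gives for this source phase space (neglecting the small offset $\varepsilon$) \begin{equation*} \variabile{w} = \frac{\variabile{q}_1^{\textrm{max}}-\variabile{q}_1^{\textrm{min}}}{\variabile{p}_1^{\textrm{max}}-\variabile{p}_1^{\textrm{min}}} \approx \frac{2-(-2)}{1-(-1)} = 2, \qquad \varepsilon_{\variabile{p}_1}^{\textrm{min}} = \frac{0.025}{2}, \qquad \varepsilon_{\variabile{p}_1}^{\textrm{max}} = \frac{0.5}{2}. \end{equation*}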
\begin{figure}[h] \begin{subfigure}[h]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{pr_source} \caption{Source PS \set{S}{}{} $= [-\variabile{a}+\varepsilon,\variabile{a}-\varepsilon]\times[-1,1]$} \label{fig:source_triang_pr} \end{subfigure} \hfill \begin{subfigure}[h]{0.47\textwidth} \centering \includegraphics[width = \textwidth]{pr_target} \caption{Target PS \set{T}{}{} $= [-\variabile{b},\variabile{b}]\times[-1,1]$} \label{fig:target_triang_pr} \end{subfigure} \caption{\textbf{Ray distribution at \set{S}{}{} and \set{T}{}{} of the parabolic reflector.} Around $8300$ rays are traced using PS ray tracing with parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}} = 0.025, \; \varepsilon_{\variabile{q}_1}^{\textrm{max}} = 0.5, \; \varepsilon_{\variabile{p}_1}^{\textrm{min}} = \varepsilon_{\variabile{q}_1}^{\textrm{min}}/2, \mbox{ and } \varepsilon_{\variabile{p}_1}^{\textrm{max}} = \varepsilon_{\variabile{q}_1}^{\textrm{max}}/2$. $17$ different paths are found, each of which corresponds to a certain number of reflections.} \label{fig:phase_space_pr} \end{figure} The distribution of the rays in PS gives information about the paths they follow. We note that for the parabolic reflector many paths are found. Every path corresponds to a given number of reflections. Rays can have multiple reflections at lines $2$ and $3$ before arriving at the target. The parameters used in the triangulation refinement not only establish the number of rays traced but also determine the number of detected paths. For instance, for the values of the parameters defined above, the triangulation refinement is able to detect $17$ different paths. This means that the rays undergo up to $8$ reflections at the two mirrors. Counting the number of rays that follow a given path $\Pi$, we can calculate the fraction of rays for every path. For example, tracing around $8300$ rays, the percentage of rays that undergo $8$ reflections along one of the two reflectors is around $0.13\%$. Rays that reflect many times before reaching the target do not give a significant contribution to the target intensity. Decreasing the value of the parameter $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$, more paths can be found. The more reflections occur, the smaller the corresponding area in PS. In order to find as many paths as possible, the parameter $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$ also needs to be decreased. Increasing the number of reflections considered, the corresponding regions in PS become smaller and smaller, see Figure \ref{fig:phase_space_pr}. \\ \indent As for the optical systems considered in the previous sections, a stopping criterion for the triangulation refinement is determined for the parabolic reflector. \'{E}tendue conservation is used in order to find the values of the parameters that give a good approximation of the boundaries of the positive luminance regions in PS. For the parabolic reflector in Figure \ref{fig:PR} all the rays that leave the source arrive at the target. Indeed, lines $2$ and $3$ can only reflect rays (the law of refraction is not involved), and the target \'{e}tendue is equal to the source \'{e}tendue. From Equation (\ref{eq:etenduesource1}) we obtain: \begin{equation} U = U_1 = 4(\variabile{a}-\varepsilon)\approx 8.
\end{equation} A range of values of $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$ is considered (the triangulation parameters for the $\variabile{p}$-axis depend on the $\variabile{q}$-axis parameters according to (\ref{eq:scaled_parameters})). For each pair of values, an approximation of the boundaries $\partial$\set{R}{}{}$(\Pi)$ is found for every path $\Pi$ as explained above. $U_{\textrm{t}}$ is calculated for the approximated boundaries using Equation (\ref{eq:etenduetarg}). In Table \ref{tab:etendue_pr} we show how the number of rays traced, the number of paths found, and the value of the target \'{e}tendue depend on the triangulation parameters. We observe that decreasing $\varepsilon_{\variabile{q}_1}^{\textrm{min}}$ and $\varepsilon_{\variabile{q}_1}^{\textrm{max}}$ increases both the number of rays traced and the number of paths found. A maximum of $17$ different paths are detected. Furthermore, the value of $U_{\textrm{t}}$ gets closer and closer to the exact \'{e}tendue $U$. \begin{table}[t] \centering \caption{\bf Results of the triangulation refinement.} \begin{tabular}{lllll} \hline $\varepsilon_{\variabile{q}}^{\textrm{min}} $ & $\varepsilon_{\variabile{q}}^{\textrm{max}}$ & $\nrays$ & $\npath$ & \'{E}tendue \\ \hline $0.2$ & $1$ & $643$ & $11$ & $5.71$\\ $0.1$ & $1$ & $1\,573$ & $15$ & $7.23$\\ $0.025$ & $0.5$ & $8\,357$ & $17$ & $7.65$\\ $0.025/2$ & $0.5/2$ & $18\,613$ & $17$ & $7.82$\\ $0.025/4$ & $0.5/4$ & $40\,465$ & $17$ & $7.82$\\ $0.025/8$ & $0.5/8$ & $86\,529$ & $17$ & $7.96$\\ $0.025/16$ & $0.5/16$ & $185\,581$ & $17$ & $7.98$\\ \hline \end{tabular} \label{tab:etendue_pr} \end{table} \\ \indent Using the triangulation refinement that gives the best \'{e}tendue approximation, we calculate the target averaged and normalized PS intensity $\hat{I}_{\textrm{PS}}$ from (\ref{eq:normalized_PS_intensity}). The intensity profile is shown with the red line in Figure \ref{fig:intensity_pr}. The PS intensity is compared to a reference intensity $\hat{I}_{\textrm{ref}}$, computed using QMC ray tracing with $10^7$ rays. $\hat{I}_{\textrm{ref}}$ is depicted in Figure \ref{fig:intensity_pr} with the dotted blue line. The graph shows that the two intensities coincide. \begin{figure}[t] \center \includegraphics[width = 0.7\textwidth]{intensity_pr} \caption{\textbf{Target intensity of the parabolic reflector.} For the PS intensity the parameters $\varepsilon_{\variabile{q}_1}^{\textrm{min}}=1.56\cdot 10^{-3}$, $\varepsilon_{\variabile{q}_1}^{\textrm{max}}=3.13\cdot 10^{-2}$, $\varepsilon_{\variabile{p}_1}^{\textrm{min}}=\varepsilon_{\variabile{q}_1}^{\textrm{min}}/2$, and $\varepsilon_{\variabile{p}_1}^{\textrm{max}}=\varepsilon_{\variabile{q}_1}^{\textrm{max}}/2$ are used. Around $8.15 \cdot 10^{4}$ rays are traced in PS. For the reference intensity QMC ray tracing with $10^7$ rays is implemented.} \label{fig:intensity_pr} \end{figure} \begin{table}[t] \centering \caption{\bf Errors of the PS intensity for the parabolic reflector} \begin{tabular}{lll} \hline $\nrays$ & PS error & CPU-time (sec.)
\\ \hline $643$ & $5.18\cdot10^{-4}$ & $0.24$\\ $1\,573$ & $8.99\cdot 10^{-4}$ & $0.48$\\ $8\,357$ & $2.87\cdot 10^{-4}$ & $2.40$ \\ $18\,613$ & $1.38\cdot 10^{-4}$ & $5.48$\\ $40\,465$ & $5.80\cdot 10^{-5}$ & $16.14$\\ $86\,529$ & $2.90\cdot 10^{-5}$ & $55.04$\\ $185\,581$ & $1.66\cdot 10^{-5}$ & $245.97$\\ \hline \end{tabular} \label{tab:table_pr_triangulation} \end{table} \\ \indent Now, our method is compared to both MC and QMC ray tracing. The error between the approximate intensities $\hat{I}_{\textrm{A}}(\textrm{A}=\textrm{PS}, \textrm{MC}, \textrm{QMC})$ and the reference intensity is calculated. In Figure \ref{fig:error_rays_pr} the error as a function of the number of rays traced is shown for the three methods. The green line represents the MC error, the blue line the QMC error, and the red line the PS error. The errors are shown on a logarithmic scale; the dotted black line has slope $-1/2$, and the dotted blue line has slope $-1$. As for the other systems, MC ray tracing converges proportionally to $1/\sqrt{\nrays}$. QMC convergence is proportional to $1/\nrays$ for $\nrays \rightarrow \infty$. Fewer rays are needed in PS than in MC ray tracing. In particular, to achieve an error of $10^{-4}$, around $10$ times fewer rays are needed in PS than in MC ray tracing, and slightly fewer than in QMC ray tracing. \begin{figure}[t] \center \includegraphics[width = 0.7\textwidth]{error_pr_100bin_rays} \caption{\textbf{Error as a function of the number of rays traced.} Fewer rays are needed using PS ray tracing than with either MC or QMC ray tracing.} \label{fig:error_rays_pr} \end{figure} \\ \indent Finally, the error as a function of the CPU-time is shown in Figure \ref{fig:error_time_pr}. The MC, QMC and PS errors are depicted with the green, blue and red line, respectively. To obtain an error of the order of $10^{-5}$, PS ray tracing is $10$ times faster than MC ray tracing, but about twice as slow as QMC ray tracing. The detailed results of the numerical simulations are reported in Tables \ref{tab:mc_error_pr_triangulation} and \ref{tab:QMC_error_pr_triangulation}. \begin{figure}[t] \center \includegraphics[width = 0.7\textwidth]{error_pr_100bin_time} \caption{\textbf{Error as a function of the CPU-time.} PS ray tracing has significant advantages in terms of the CPU-time compared to MC ray tracing. For the parabolic reflector the computational time is comparable with QMC ray tracing.} \label{fig:error_time_pr} \end{figure} \begin{table}[t] \centering \caption{\bf Errors of the MC intensity for the parabolic reflector} \begin{tabular}{lll} \hline $\nrays$ & MC error & CPU-time (sec.) \\ \hline $10^3$ & $1.18\cdot10^{-3}$ & $0.39$\\ $10^4$ & $5.74\cdot 10^{-4}$ & $3.43$ \\ $10^5$ & $1.73\cdot 10^{-4}$ & $33.13$\\ $10^6$ & $5.79\cdot 10^{-5}$ & $328.96$\\ $10^7$ & $1.68\cdot 10^{-5}$ & $3325.39$\\ \hline \end{tabular} \label{tab:mc_error_pr_triangulation} \end{table} \begin{table}[t] \centering \caption{\bf Errors of the QMC intensity for the parabolic reflector} \begin{tabular}{lll} \hline $\nrays$ & QMC error & CPU-time (sec.) \\ \hline $10^3$ & $1.36\cdot10^{-3}$ & $0.53$\\ $10^4$ & $2.05\cdot 10^{-4}$ & $3.44$ \\ $10^5$ & $2.89\cdot 10^{-5}$ & $31.22$\\ $10^6$ & $6.96\cdot 10^{-6}$ & $314.59$\\ \hline \end{tabular} \label{tab:QMC_error_pr_triangulation} \end{table} \\ \indent As explained above, MC and QMC errors also depend on the number of bins $\nbin$. In the simulations shown in this chapter we have always considered $\nbin = 100$.
In contrast, PS ray tracing calculates the intensity point-wise; nevertheless, we considered $\nbin = 100$ bins at the target and calculated the averaged normalized PS intensity over every bin in order to have a precise comparison with the binning procedures. Because of this, we expect that increasing the number of bins makes the average PS intensity more accurate. To verify this conjecture, we implemented MC, QMC and PS ray tracing considering a partitioning of the interval $[-1,1]$ into $\nbin = 150$ bins. The number of rays considered for the reference intensity has to be increased $(1.5)^5$ times. The averaged normalized intensities $\hat{I}_{\textrm{A}} (\textrm{A}=\textrm{MC}, \textrm{QMC}, \textrm{PS})$ found considering $\nbin=150$ bins are compared with the reference intensity $\hat{I}_{\textrm{ref}}$ (averaged and normalized) which is computed using QMC ray tracing with $7.6\cdot10^7$ rays. The error as a function of the number of rays is shown in Figure \ref{fig:error_pr_150_bin}. The results show that, using PS ray tracing, an error of the order of $10^{-5}$ is obtained tracing around $10^2$ times fewer rays than MC ray tracing and half as many rays as QMC ray tracing. We conclude that, when the number of bins is increased, PS ray tracing outperforms both MC and QMC ray tracing in terms of the error (see also Figure \ref{fig:error_rays_pr}). \begin{figure}[h] \center \includegraphics[width= 0.7\textwidth]{error_pr_150bins_ray} \caption{\textbf{Error plot for the parabolic reflector considering $\nbin=150$ bins.} Increasing the number of bins, the PS error decreases, resulting in a better convergence compared to MC and QMC ray tracing.} \label{fig:error_pr_150_bin} \end{figure} \section{Discussion and conclusions} In this chapter, we presented a method to calculate the boundaries of the positive luminance regions in PS. This method does not depend on the parameter $\alpha$ needed for the $\alpha$-shapes method presented in Chapter \ref{chap:boundaries_alpha}. Indeed, given a triangulation at the source PS, the boundaries are computed by connecting the vertices of the boundary triangles, i.e., triangles crossed by a boundary, that follow the same path. Employing \'{e}tendue conservation, a stopping criterion for the triangulation refinement was developed. We applied the method to three different optical systems: the two-faceted cup, a TIR-collimator, and a parabolic reflector. Numerical results show that PS ray tracing is faster and more accurate than MC ray tracing. Compared to QMC ray tracing we observed accuracy and speed advantages of an order of magnitude with our method for the TIR-collimator. For the two-faceted cup, PS ray tracing has a slower convergence compared to QMC ray tracing. For the parabolic reflector PS and QMC ray tracing display similar convergence. As an example, for this system we showed that increasing the number of bins decreases the errors. To conclude, we state that QMC ray tracing performs better than PS ray tracing for very simple optical systems, but the PS approach is more suitable for complicated optical systems. \\ \indent In order to further improve PS ray tracing we develop a new method which employs the PS of \textit{all} the optical lines. This method is explained in the next chapter. %We claim that PS ray tracing is also more accurate than the ray tracing procedure proposed by Moore (2013), \cite{moore2013methods}. %The novelty of our approach compared to the method used by Moore, is briefly explained below.
First, to compute the output intensity, we employ the phase space of the target. This avoids the use of any interpolation to compute the photometric variables and therefore, more accurate results are obtained. %Second, in all rays that leave the source start at the same position and only a sampling angular range is given. In our approach a rectangular source is considered thus, both the angular and spatial coordinates of each ray change. This extra variable can produce very irregular shapes of the regions at target phase space. To overcome this issue, we employ the edge-ray principle and we consider the regions at source phase space where the distribution of the rays is much more regular and the corresponding boundaries are easily computed. %As a consequence, our procedure is suitable to compute the output intensity as function of both the angular or the spatial coordinates. %Third, using the conservation of \'{e}tendue, we provided a criterion to stop the triangulation refinement. In this way we can estimate the number of rays required to obtain the desired accuracy and thus, we avoid tracing more rays than necessary. %\section{The Compound Parabolic Concentrator (CPC)}
{ "alphanum_fraction": 0.7291266795, "avg_line_length": 126.303030303, "ext": "tex", "hexsha": "4ed7c83aa1e48f012e257de3f3e6faeed6ac7b1a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "melaniafilosa/ps_raytracing", "max_forks_repo_path": "chapters/triangulation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "melaniafilosa/ps_raytracing", "max_issues_repo_path": "chapters/triangulation.tex", "max_line_length": 2163, "max_stars_count": null, "max_stars_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "melaniafilosa/ps_raytracing", "max_stars_repo_path": "chapters/triangulation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14200, "size": 45848 }
\PassOptionsToPackage{pdfpagelabels=false}{hyperref} % To shut down hyperref warnings \documentclass{sig-alternate} \setlength{\paperheight}{11in} % To shut down hyperref warnings \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage{amssymb} \usepackage{booktabs} \usepackage{hyperref} \usepackage[numbers,square,sort&compress]{natbib} \renewcommand{\refname}{References} \renewcommand{\bibsection}{\subsection*{References}} \renewcommand*{\bibfont}{\raggedright} \usepackage{mdwlist} \renewcommand{\labelenumi}{\arabic{enumi}.} \renewcommand{\labelenumii}{\arabic{enumi}.\arabic{enumii}} \newcommand{\spara}[1]{\smallskip\noindent{\bf #1}} \newcommand{\para}[1]{\noindent{\bf #1}} \makeatletter \def\@copyrightspace{\relax} \makeatother \begin{document} \numberofauthors{3} \title{Centrality Measures on Big Graphs:\\Exact, Approximated, and Distributed Algorithms} \subtitle{Proposal for a Half-day Tutorial at ACM WSDM'16} \author{ \alignauthor Francesco Bonchi\\ \affaddr{ISI Foundation}\\ \affaddr{Turin, Italy}\\ \email{[email protected]} \end{tabular}\begin{tabular}[t]{p{1.3\auwidth}}\centering Gianmarco~De~Francisci~Morales\\ \affaddr{Aalto University}\\ \affaddr{Helsinki, Finland}\\ \email{[email protected]} \alignauthor Matteo Riondato\titlenote{Main contact person}\\ \affaddr{Two Sigma Investments}\\ \affaddr{New York, NY, USA}\\ \email{[email protected]} } \date{\today} \maketitle \begin{abstract} \emph{Centrality measures} allow to measure the relative \emph{importance} of a node or an edge in a graph. %w.r.t.~other nodes or edges. Several measures of centrality are available in the literature, each capturing different aspects of the informal concept of importance, paired with several algorithms to compute them. In this tutorial, we survey the different definitions of centrality measures and the algorithms to compute them. We start from the most common measures (e.g., closeness, betweenness) and move to more complex ones, such as spanning-edge centrality. In our presentation of the algorithms, we begin from exact ones, and progress to approximation algorithms, including sampling-based ones, and to scalable MapReduce algorithms for huge graphs, both for exact computation and for keeping the measures up-to-date on dynamic graphs, where edges change over time. % where edges are inserted or deleted over time. Our goal is to show how advanced algorithmic techniques and scalable systems can be used to obtain efficient algorithms for an important graph mining tasks, and to encourage research in the area by highlighting open problems and possible directions. \end{abstract} %\category{G.2.2}{Discrete Mathematics}{Graph Theory}[Graph algorithms] %\category{H.2.8}{Database Management}{Database Applications}[Data mining] % %\keywords{Centrality, Betweenness, Closeness} \section*{Intended Audience} The tutorial is aimed at researchers interested in the theory and the applications of algorithms for graph mining and social network analysis. We do not require any specific existing knowledge. The tutorial is designed for an audience of computer scientists who have a general idea of the problems and challenges in graph analysis. We plan to present the material in such a way that any advanced undergraduate student would be able to productively follow our tutorial. %and we will actively engage with the audience and adapt our pace and style to ensure %that every attendee can benefit from our tutorial. 
We start from the basic definitions and progressively move to more advanced algorithms, including sampling-based approximation algorithms and MapReduce algorithms. Therefore, our tutorial will be of interest both to researchers new to the field and to a more experienced audience. \section*{Duration: \textrm{Half-day.}} \section*{Previous editions of the tutorial} The tutorial was not previously offered. We did not find any tutorial covering similar topics in the programs of recent WSDM conferences and of other top conferences. \section*{Instructors} This tutorial is developed by Francesco Bonchi, Gianmarco De Francisci Morales, and Matteo Riondato. All three instructors will attend the conference. \para{Francesco Bonchi} is Research Leader at the ISI Foundation, Turin, Italy, where he leads the "Algorithmic Data Analytics" group. He is also Scientific Director for Data Mining at Eurecat (Technological Center of Catalunya), Barcelona. Before he was Director of Research at Yahoo Labs in Barcelona, Spain, leading the Web Mining Research group. His recent research interests include mining query-logs, social networks, and social media, as well as the privacy issues related to mining these kinds of sensible data. %In the past he has been interested in data mining query %languages, constrained pattern mining, mining spatiotemporal and mobility data, %and privacy preserving data mining. He will be PC Chair of the 16th IEEE International Conference on Data Mining (ICDM 2016) to be held in Barcelona in December 2016. He is member of the ECML PKDD Steering Committee, Associate Editor of the newly created IEEE Transactions on Big Data (TBD), of the IEEE Transactions on Knowledge and Data Engineering (TKDE), the ACM Transactions on Intelligent Systems and Technology (TIST), Knowledge and Information Systems (KAIS), and member of the Editorial Board of Data Mining and Knowledge Discovery (DMKD). % %He has been program co-chair of the %European Conference on Machine Learning and Principles and Practice of Knowledge %Discovery in Databases (ECML PKDD 2010). Dr. Bonchi has also served as program %co-chair of the first and second ACM SIGKDD International Workshop on Privacy, %Security, and Trust in KDD (PinKDD 2007 and 2008), the 1st IEEE International %Workshop on Privacy Aspects of Data Mining (PADM 2006), and the 4th %International Workshop on Knowledge Discovery in Inductive Databases (KDID %2005). He is co-editor of the book "Privacy-Aware Knowledge Discovery: Novel Applications and New Techniques" published by Chapman \& Hall/CRC Press. %He earned his Ph.D. in computer science from the University of Pisa in December 2003. He presented a tutorial at ACM KDD'14. \para{Gianmarco De Francisci Morales} is a Visiting Scientist at Aalto University. Previously he worked as a Research Scientist at Yahoo Labs Barcelona, and as a Research Associate at ISTI-CNR in Pisa. His research focuses on scalable data mining, with an emphasis on Web mining and data-intensive scalable computing systems. %He is an active member of the open source community of the Apache Software Foundation, working on the Hadoop ecosystem, and a committer for the Apache Pig project. He is one of the lead developers of Apache SAMOA, an open-source platform for mining big data streams. He presented a tutorial on stream mining at IEEE BigData'14. \para{Matteo Riondato} is a Research Scientist in the Labs group at Two Sigma Investments. Previously he was a postdoc at Stanford and at Brown. 
His dissertation on sampling-based randomized algorithms for data and graph mining received the Best Student Poster Award at SIAM SDM'14. His research focuses on exploiting advanced theory in practical algorithms for time series analysis, pattern mining, and social network analysis. He presented tutorials at ACM KDD'15, ECML PKDD'15, and ACM CIKM'15. \pagebreak \section*{EXTENDED ABSTRACT} \subsection*{Motivation} Identifying the ``important'' nodes or edges in a graph is a fundamental task in network analysis, with many applications. Many measures, known as \emph{centrality indices}, have been proposed over the years, formalizing the concept of importance in different ways~\citep{Newman10}. Centrality measures rely on graph properties to quantify importance. For example, betweenness centrality, one of the most commonly used centrality indices, counts the number of shortest paths going through a node, while the closeness centrality of a node is the inverse of the average distance to the other nodes. Other centrality measures use eigenvectors, random walks, degrees, or more complex properties. The PageRank index of a node is also a centrality measure, and centrality measures for sets of nodes are also possible. With the proliferation of huge networks with millions of nodes and billions of edges, the importance of having scalable algorithms for computing centrality indices has become more and more evident, and a number of contributions have recently been proposed, ranging from heuristics that perform extremely well in practice to approximation algorithms offering strong probabilistic guarantees, to scalable algorithms for the MapReduce platform. Moreover, the dynamic nature of many networks, i.e., the addition and removal of nodes and/or edges over time, dictates the need to keep the computed values of centrality up-to-date as the graph changes. These challenging problems have enjoyed enormous interest from the research community, with many relevant contributions proposed recently to tackle them. Our tutorial presents, in a unified framework, some of the many measures of centrality and discusses the algorithms to compute them, both in an exact and in an approximate way, both in-memory and in a parallel/distributed fashion for the MapReduce framework of computation. This is done with an effort to ease the comparison between different measures of centrality, the different quality guarantees offered by approximation algorithms, and the different trade-offs and scalability behaviors characterizing parallel/distributed algorithms. We believe this unity of presentation is beneficial both for newcomers and for experienced researchers in the field, who will be exposed to the material from a unified point of view. The graph analyst can now choose among a huge number of centrality indices, from the well-established ones originally developed in sociology, to the ones more recently introduced to capture other aspects of ``importance''. At the same time, the original algorithms that could handle the relatively small networks for classic social science experiments have been superseded by important algorithmic contributions that exploit modern computational frameworks and/or obtain fast, high-quality approximations.
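To make the first definitions above concrete, the following minimal sketch (added here for illustration only; it is not an excerpt from the tutorial materials) computes closeness centrality in an unweighted graph with a single breadth-first search per node:
\begin{verbatim}
from collections import deque

def closeness(adj, s):
    # Inverse of the average shortest-path distance from s to the other
    # nodes; adj maps each node to an iterable of its neighbors.
    # The graph is assumed to be connected (one common convention).
    dist = {s: 0}
    queue = deque([s])
    while queue:                      # breadth-first search from s
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for node, d in dist.items() if node != s)
    return (len(adj) - 1) / total if total > 0 else 0.0
\end{verbatim}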
It is our belief that the long history of centrality measures and the ever-increasing interest from computer scientists in analyzing larger and richer graphs create the need for an all-around organization of both old and new materials, and it is the desire to satisfy this need that inspired us to develop this tutorial. \subsection*{Outline} The tutorial is structured in three main technical parts, plus a concluding part where we discuss future research directions. All three technical parts will contain both theory and experimental results. \begin{enumerate} \item {\bf Introduction: definitions and exact algorithms} \begin{enumerate} \item The axioms of centrality~\citep{BoldiV14} \item Definitions of centrality~\citep{Newman10}, including, but not limited to: betweenness, closeness, degree, eigenvector, harmonic, Katz, absorbing random-walk~\citep{MavroforakisMG15}, and spanning-edge centrality~\citep{MavroforakisGLKT15}. \item Betweenness centrality: exact algorithm~\citep{Brandes01} and heuristically-faster exact algorithms for betweenness centrality~\citep{ErdosIBT15,SaryuceSKC13}. \item Exact algorithms for betweenness centrality in a dynamic graph~\citep{LeeLPCC12,NasrePR14,PontecorviR15}. \item Exact algorithms for closeness centrality in a dynamic graph~\citep{SariyuceKSC13b}. \end{enumerate} \item {\bf Approximation algorithms} \begin{enumerate} \item Sampling-based algorithm for closeness centrality~\citep{EppsteinW04}. \item Betweenness centrality: almost-linear-time approximation algorithm~\citep{Yoshida14}, basic sampling-based algorithm~\citep{BrandesP07}, refined estimators~\citep{GeisbergerSS08}, VC-dimension bounds for betweenness centrality~\citep{RiondatoK14WSDM,RiondatoK15}. \item Approximation algorithms for betweenness centrality in dynamic graphs~\citep{KasWCC13,BergaminiMS14,BergaminiM15}. \end{enumerate} \item {\bf Highly-scalable algorithms} \begin{enumerate} \item GPU-based algorithms~\citep{SariyuceKSC13}. \item Exact parallel streaming algorithm for betweenness centrality in a dynamic graph~\citep{KourtellisMB15}. \end{enumerate} \item {\bf Challenges and directions for future research} \end{enumerate} \subsection*{Links to related resources} Previous tutorials presented by the instructors: \begin{description} \item Big Data Stream Mining (IEEE BigData'14): \url{https://sites.google.com/site/bigdatastreamminingtutorial}. \item VC-Dimension and Rademacher Averages: from Statistical Learning Theory to Sampling Algorithms (ACM KDD'15, ECML PKDD'15, ACM CIKM'15) \url{http://bigdata.cs.brown.edu/vctutorial/}. \item Correlation Clustering: from Theory to Practice (ACM KDD'14) \url{http://www.francescobonchi.com/CCtuto_kdd14.pdf}. \end{description} \subsection*{Support materials} We are developing a mini-website for the tutorial at \url{http://matteo.rionda.to/centrtutorial/}. The website will contain the abstract of the tutorial, a more detailed outline with a short description of each item of the outline, a full list of references complete with links to electronic editions, a list of links to software packages implementing the algorithms we present, and naturally the slides used in the tutorial. A preliminary version of the website will be available 15 days after the tutorial is accepted. We plan to work on it and enrich the contents continuously.
A preliminary version of the slides will be available 30 days before the conference, or in any case by any deadline given to us by the conference organizers, and the final version will be available 15 days before the conference. \bibliographystyle{abbrvnat} \bibliography{centrtutorial} \end{document}
{ "alphanum_fraction": 0.7988771232, "avg_line_length": 49.7208480565, "ext": "tex", "hexsha": "a780883fea68191b4e3ef8e4a54104c1d63e4d00", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-08-25T05:46:16.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-25T05:46:16.000Z", "max_forks_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "rionda/centrtutorial", "max_forks_repo_path": "WSDM16/centrtutorial.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "rionda/centrtutorial", "max_issues_repo_path": "WSDM16/centrtutorial.tex", "max_line_length": 164, "max_stars_count": null, "max_stars_repo_head_hexsha": "cfb9b21ce83e03f66a9249d126b166f2b906f099", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "rionda/centrtutorial", "max_stars_repo_path": "WSDM16/centrtutorial.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3407, "size": 14071 }
\documentclass[]{BasiliskReportMemo} \usepackage{AVS} \newcommand{\submiterInstitute}{Autonomous Vehicle Simulation (AVS) Laboratory,\\ University of Colorado} \newcommand{\ModuleName}{thrMomentumDumping} \newcommand{\subject}{Thrusting Firing Module to Cycle Through the RW Momentum Dumping} \newcommand{\status}{initial document draft} \newcommand{\preparer}{H. Schaub} \newcommand{\summary}{This module reads in the desired impulse that each thruster must produce to create the inertial momentum change needed to despin the RWs. The output of the module is a set of thruster firing times. Each thruster can only fire for a maximum time that matches a single control period. After this, the thrusters are off for an integer number of control periods to let the RW re-stabilize the attitude about an inertial pointing scenario. } \begin{document} \makeCover % % enter the revision documentation here % to add more lines, copy the table entry and the \hline, and paste after the current entry. % \pagestyle{empty} {\renewcommand{\arraystretch}{1.1} \noindent \begin{longtable}{|p{0.5in}|p{4.5in}|p{1.14in}|} \hline {\bfseries Rev}: & {\bfseries Change Description} & {\bfseries By} \\ \hline Draft & Initial document creation & H. Schaub \\ \hline \end{longtable} } \newpage \setcounter{page}{1} \pagestyle{fancy} \tableofcontents ~\\ \hrule ~\\ \begin{figure}[htb] \centerline{ \includegraphics[]{Figures/rwMomentumOverview} } \caption{Overview of the Modules Used to Perform Reaction Wheel Angular Momentum Dumping.} \label{fig:Fig1} \end{figure} \section{Introduction} To manage the Reaction Wheel (RW) angular momentum build-up over time, a thruster-based momentum dumping strategy is used. Figure~\ref{fig:Fig1} illustrates how the momentum dumping will occur simultaneously with an inertial pointing control solution. Assume the spacecraft contains $N_{\text{RW}}$ RWs and $M_{\text{thr}}$ thrusters. The momentum dumping management module already evaluated the required $\leftexp{B}{\Delta}\bm H$ that must be achieved in this round of momentum dumping. The spacecraft is assumed to be held inertially fixed, such as in sun-pointing. \begin{figure}[t] \centerline{ \includegraphics[]{Figures/rwMomentumDumping} } \caption{Overview of the Reaction Wheel Angular Momentum Dumping Module.} \label{fig:Fig2} \end{figure} \section{{\tt thrMomentumDumping} Module Description} This module receives the vector of thruster impulses $\Delta\bm p$ from a thruster mapping module. It first checks to see if this $\Delta\bm p$ is new. If yes, it computes the overall thruster on-times through \begin{equation} t_{i,\text{on}} = \frac{\Delta p_{i}}{F_{i,\text{max}}} \end{equation} where $F_{i,\text{max}}$ is the maximum thrust force achievable with the $i^{\text{th}}$ on-off type thruster. These on-times are copied to the output thruster on times, and the thruster cycling counter is reset. If the $\Delta\bm p$ message is not new, then the counter flag is checked. If the counter is strictly positive, the thrusters should be off to allow the RWs to stabilize the orientation. In this case the output thruster times are set to zero, the counter is decremented, and the remaining required thruster on-times are decremented by the control time period. If the counter flag reaches zero, it is time to fire the required thrusters again to implement the next chunk of the required thruster impulse. The output thrust on times are set to the remaining thruster on times, the counter is reset, and the remaining thruster on times are decremented again by the control period.
Finally, each thruster on time is checked against the minimum thruster firing time; if it is smaller, the corresponding output time is set to zero. Similarly, if the required thruster on time is larger than a control period, the thruster on time is set equal to the control period. This ensures that the thrusters will fire for a single control period duration only before returning to an off-mode. (A schematic sketch of this logic is given at the end of this note.) \section{Module Parameters} \subsection{{\tt maxCounterValue} Parameter} This parameter dictates the number of control periods where the thrusters cycle off to allow the RWs to re-stabilize the orientation. It must be set to a strictly positive value. \subsection{{\tt thrMinFireTime} Parameter} This parameter sets the minimum thruster firing time, in units of seconds.
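As an illustration of the on-time computation and the clamping step described above, a minimal sketch in Python follows. The flight software module itself is not written in Python, the names used here are illustrative only and are not the module's actual interface, and the counter-based off/on cycling is omitted.
\begin{verbatim}
def firing_times(delta_p, F_max, dt_control, t_min):
    # Per-thruster firing times for one control period.
    # delta_p: requested impulse per thruster [Ns], F_max: max thrust [N]
    # dt_control: control period [s], t_min: minimum firing time [s]
    times = []
    for dp, F in zip(delta_p, F_max):
        t_on = dp / F                 # total required on-time
        if t_on < t_min:              # too short to fire reliably
            t_on = 0.0
        times.append(min(t_on, dt_control))   # at most one control period
    return times
\end{verbatim}
\end{document}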
{ "alphanum_fraction": 0.7733967822, "avg_line_length": 47.9673913043, "ext": "tex", "hexsha": "501b9cadf770b7186875fe0eae677bcb69ca3eae", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": [ "0BSD" ], "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_path": "src/fswAlgorithms/effectorInterfaces/thrMomentumDumping/_Documentation/Basilisk-thrMomentumDumping-20160820.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_licenses": [ "0BSD" ], "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_path": "src/fswAlgorithms/effectorInterfaces/thrMomentumDumping/_Documentation/Basilisk-thrMomentumDumping-20160820.tex", "max_line_length": 576, "max_stars_count": null, "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": [ "0BSD" ], "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_path": "src/fswAlgorithms/effectorInterfaces/thrMomentumDumping/_Documentation/Basilisk-thrMomentumDumping-20160820.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1111, "size": 4413 }
% 01smoothmanifolds.tex % Fund Science! & Help Ernest finish his Physics Research! : quantum super-A-polynomials - a thesis by Ernest Yeung % % http://igg.me/at/ernestyalumni2014 % % Facebook : ernestyalumni % github : ernestyalumni % gmail : ernestyalumni % google : ernestyalumni % linkedin : ernestyalumni % tumblr : ernestyalumni % twitter : ernestyalumni % youtube : ernestyalumni % indiegogo : ernestyalumni % % Ernest Yeung was supported by Mr. and Mrs. C.W. Yeung, Prof. Robert A. Rosenstone, Michael Drown, Arvid Kingl, Mr. and Mrs. Valerie Cheng, and the Foundation for Polish Sciences, Warsaw University. \subsection*{Topological Manifolds } $M$ topological manifold of $\text{dim}{n}$, or topological $n$-manifold \begin{itemize} \item locally Euclidean, $\text{dim}{n}$ - $\forall \, p \in M$, $\exists \, $ neighborhood $U \equiv U_p$ s.t. $U_p \approx^{\text{homeo}} \text{ open } V \subset \mathbb{R}^n$ \end{itemize} \exercisehead{1.1} Recall, $M$ locally Euclidean $\dim{n}$ $\forall \, p \in M$, $\exists \,$ neighborhood homeomorphic to open subset. \\ open subset $\mathcal{O} \subseteq \mathbb{R}^n$ homeomorphic to open ball and $\mathcal{O}$ homeomorphic is $\mathbb{R}^n$ since $\mathbb{R}^n$ homeomorphic to open ball. To see this explicitly, that open ball $B_{\epsilon}(x_0) \subseteq \mathbb{R}^n $ homeomorphic to $\mathbb{R}^n$ Consider $\begin{aligned} & \quad \\ & T: B_{\epsilon}(x_0) \to \mathbb{R}^n \\ & T(B_{\epsilon}(x_0)) = B_{\epsilon}(0) \\ & T(x) = x- x_0 \end{aligned}$ \\ $T^{-1}(x) = x+x_0$. Clearly $T$ homeomorphism. Consider $\begin{aligned} & \quad \\ & \lambda : \mathbb{R}^n \to \mathbb{R}^n \\ & \lambda(x) = \lambda x \\ & \lambda^{-1}(x) = \frac{1}{ \lambda } x \end{aligned}$ for $\lambda >0$. Clearly $\lambda$ homeomorphism. Consider $B \equiv B_1(0)$. Consider $\begin{aligned} & \quad \\ & g: \mathbb{R}^n \to \mathbb{R}^n \\ & g(x) = \frac{x}{ 1 + |x| } \end{aligned}$ $g$ cont. Let $\begin{aligned} & \quad \\ & f:B \to \mathbb{R}^n \\ & f(x) = \frac{x}{ 1 - |x| } \end{aligned} $ How was $f$ guessed at? $|g(x) | = \left| \frac{x}{1 + |x| } \right| = \frac{r}{ 1 + r}$ . Note $0 \leq |g(x) | <1$ \\ So $g(\mathbb{R}^n) = B$ For $|g(x)| = |y|$, $y \in B$, $|y|(1+r) = r$, \, $r = \frac{ |y| }{ 1 - |y| }$ This is well-defined, since $0 \leq |y| < 1$ and $0 < 1 - |y| \leq 1$ \[ \begin{aligned} & gf(x) =\frac{ \frac{ x}{ 1 - |x| } }{ 1 + \frac{|x|}{ 1 - |x| } } = x \\ & fg(x) = \frac{ \frac{x}{ 1 + |x| } }{ 1 - \frac{|x| }{ 1 + |x| } } = x \end{aligned} \] $f$ homeomorphism between $B$ and $\mathbb{R}^n$. $B$ and $\mathbb{R}^n$ homeomorphic. So an open ball in $\mathbf{R}^n$ is homeomorphic to $\mathbb{R}^n$ \hrulefill In practice, both the Hausdorff and second countability properties are usually easy to check, especially for spaces that are built out of other manifolds, because both properties are inherited by subspaces and products (Lemmas A.5 and A.8). In particular, it follows easily that any open subset of a topological $n$-manifold is itself a topological $n$-manifold (with the subspace topology, of course). \subsubsection*{Coordinate Charts} chart on $M$, $(U, \varphi)$ where open $U \subset M$ and \\ homeomorphism $\varphi : U \to \mathbb{R}^n$, $\varphi(U)$ open. \subsubsection*{Examples of Topological Manifolds} Example 1.3. (Graphs of Continuous Functions) \\ Let open $U \subset \mathbb{R}^n$ \\ Let $F: U \to \mathbb{R}^k$ cont. 
\\ graph of $F$: $\Gamma(F) = \lbrace (x,y) \in \mathbb{R}^n \times \mathbb{R}^k | x \in U , y = F(x) \rbrace$ with subspace topology. \\ $\pi_1 : \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n$ projection onto first factor. \\ $\varphi_k:\Gamma(F) \to U$ restriction of $\pi_1$ to $\Gamma(F)$ \\ $\varphi_F(x,y) = x$, $(x,y) \in \Gamma(F)$ \\ \textbf{Example 1.4 (Spheres)} $S^n = \lbrace x \in \mathbb{R}^{n+1} | |x| = 1 \rbrace$ \\ Hausdorff and second countable because it's topological subspace of $\mathbb{R}^n$ \\ \textbf{Example 1.5 (Projective Spaces)} $U_i \subset \mathbb{R}^{n+1} - 0$ where $x^i \neq 0$ \\ $V_i = \pi(U_i)$ Let $a\in U_i$. \[ |x-a|^2 = (x^1 - a^1)^2 + \dots + (x^i - a^i)^2 + \dots + (x^{n+1} - a^{n+1})^2 < \frac{ (n+1) \epsilon^2 }{ n + 1 } = \epsilon^2 \] $\forall \, a^i \in \mathbb{R}$, $\exists \, x^i$, s.t. $(x^i - a^i)^2 < \frac{ \epsilon^2}{ n +1}$, by choice of $0< x^i < a^i + \frac{ \epsilon}{ \sqrt{ n+ 1} }$ with $0<x^i$ for $i=i$ index. \\ $U_i$ indeed open set, \emph{saturated open set}. open $U_i \subset \mathbb{R}^{n+1} - 0$, $x^i \neq 0$ From Lemma A.10, recall (d) restriction of $\pi$ to any saturated open or closed subset of $X$ is a quotient map. natural map $\pi: \mathbb{R}^{n+1} - 0 \to \mathbb{R}P^n$ given quotient topology. By Tu, Prop. 7.14, $\sim $ on $\mathbb{R}^{n+1} - 0 \to \mathbb{R}P^n$ open equivalence relation. \\ $\Longrightarrow \left. \pi \right|_{U_i}(U_i) = V_i$ open. \[ \begin{aligned} & \varphi_i : V_i \to \mathbb{R}^n \\ & \varphi_i [ x^1 \dots x^{n+1} ] = \left( \frac{ x^1 }{ x^i } \dots \frac{ x^{i-1}}{ x^i } , \frac{ x^{i+1}}{ x^i} \dots \frac{x^{n+1}}{ x^i} \right) \end{aligned} \] $\varphi_i$ well-defined since \[ \varphi_i[tx^1 \dots tx^{n+1}] = \left( \frac{ tx^1 }{ tx^i } \dots \frac{ t\widehat{x}^i }{ tx^i } \dots \frac{ tx^{n+1}}{ tx^i } \right)= \left( \frac{x^1}{ x^i } \dots \frac{ \widehat{x}^i }{ x^i } \dots \frac{ x^{n+1}}{ x^i } \right) = \varphi_i[x^1 \dots x^{n+1} ] \] $\varphi_i$ cont. since $\varphi_i \pi$ cont. \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em] { U_i \subset \mathbb{R}^{n+1} - 0 & \\ V_i \subset \mathbb{R}P^n & \mathbb{R}^n \\ }; \path[-stealth] (m-1-1) edge node [right] {$\varphi_i \pi$} (m-2-2) edge node [left] { $\pi$} (m-2-1) (m-2-1) edge node [below] {$\varphi$} (m-2-2); \end{tikzpicture} \[ \begin{gathered} \begin{aligned} & \varphi_i : U_i \subset \mathbb{R}P^n \to \mathbb{R}^n \\ & \varphi_i[x^1 \dots x^{n+1} ] = \left( \frac{x^1}{x^i} \dots \frac{ \widehat{x}^i }{ x^i } \dots \frac{x^{n+1}}{ x^i } \right) \\ & \varphi^{-1}_i(u^1 \dots u^n) = [ u^1 \dots u^{i-1}, 1 , u^i \dots u^n ] \end{aligned} \\ \varphi_i^{-1} \varphi_i[x^1 \dots x^{n+1} ] = \left[ \frac{x^1 }{ x^i } \dots \frac{x^{i-1}}{ x^i } , 1 , \frac{x^{i+1}}{ x^i} \dots \frac{x^{n+1}}{ x^i } \right] = [x^1 \dots x^{i-1}, x^i , x^{i+1} \dots x^{n+1} ] \\ \varphi_i \varphi^{-1}_i(u^1 \dots u^n) = (u^1 \dots u^{i-1} , u^i \dots u^n ) \end{gathered} \] cont. $\varphi_i$ bijective, $\varphi_i^{-1}$ cont. $\varphi_i$ homeomorphism. From a previous edition: \exercisehead{1.2} Let $ \begin{aligned} & \quad \\ & \phi_t : \mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace \to \mathbb{R}^{n+1} \backslash \lbrace 0 \rbrace \\ & \phi_t(x) = tx \end{aligned}$ \\ $\phi_t$ invertible, $\phi_t^{-1} = \phi_{\frac{1}{t}}$ \\ $\phi_t$, $\phi_t^{-1}$ \, $C^1$ (and $C^{\infty}$), $\phi_t$ homeomorphism. \\ Let $U$ open in $\mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace$. 
Then $\phi_t(U)$ open in $\mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace$. \\
Thus $\pi^{-1}([U]) = \bigcup_{t \in \mathbb{R}\backslash \lbrace 0 \rbrace} \phi_t(U)$ open in $\mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace$. \\
Thus $[U]$ open in $\mathbb{R}P^n$. $\sim$ open. \\
Note $\begin{aligned} & \quad \\
 & \pi : \mathbb{R}^{n+1} \backslash \lbrace 0 \rbrace \to \mathbb{R}P^n \\
 & \pi(x) = \left[ \frac{x}{ \| x\| } \right] \end{aligned}$ \\
$\mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace$ 2nd. countable, so $\mathbb{R}P^n$ 2nd. countable.

\exercisehead{1.3}

$S^n$ compact. \\
$\pi :\mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace \to \mathbb{R}P^n$ \\
$\pi(x) = [ \frac{x}{ \| x \| } ]$ \\
Let $[x] \in \mathbb{R}P^n$, $x \in \mathbb{R}^{n+1}\backslash \lbrace 0 \rbrace$ \\
\phantom{Let } $y = \frac{x}{ \| x \| } \in S^n$ and $ \left. \pi \right|_{S^n}(y) = [x]$ \\
$\left. \pi \right|_{S^n}$ surjective.

\exercisehead{1.6}

First, note that $\sim $ on $\mathbb{R}^{n+1} -0$ in the definition of $\mathbb{R}P^n$ is an open $\sim$ i.e. open equivalence relation. This is because of the following: $\forall \, U \subset \mathbb{R}^{n+1}-0$, \\
$\pi^{-1}(\pi(U)) = \bigcup_{t\in \mathbb{R}\backslash \lbrace 0 \rbrace} tU$, set of all pts. equivalent to some pt. of $U$. \\
multiplication by $t\in \mathbb{R}\backslash \lbrace 0 \rbrace$ homeomorphism of $\mathbb{R}^{n+1} - 0$, so $tU$ open $\forall \, t \in \mathbb{R}\backslash \lbrace 0 \rbrace$. \\
$\pi^{-1}(\pi(U))$ open i.e. $\pi(U)$ open (for $\pi$ is cont.).

Let $X = \mathbb{R}^{n+1} -0$. \\
Consider $R = \lbrace ( x,y) \in X \times X | x\sim y \text{ i.e. } y = tx \text{ for some } t \in \mathbb{R}\backslash \lbrace 0 \rbrace \rbrace$

$y = tx$ means $y_i = tx_i$ \, $\forall \, i = 0\dots n$. Then $\frac{ x_i}{y_i} = \frac{ x_j}{ y_j}$ \, $\forall \, i,j = 0 \dots n$ (where defined). Hence $x_i y_j - y_i x_j = 0$ \, $\forall \, i , j$.

Let $\begin{aligned} & \quad \\
 & f : X \times X \to \mathbb{R} \\
 & f(x,y) = \sum_{ i \neq j} (x_i y_j - y_i x_j)^2 \end{aligned}$
\[
\begin{aligned}
 & \frac{ \partial f}{ \partial x_k} = \sum_{j \neq k } 4 (x_k y_j - y_k x_j) y_j \\
 & \frac{ \partial f}{ \partial y_k} = \sum_{i \neq k } 4 (x_i y_k - y_i x_k) x_i
\end{aligned}
\]
These partials are continuous, so $f$ is $C^1$ (indeed $f$ is a polynomial, so $C^{\infty}$); in particular $f$ cont. So $f^{-1}(0) = R$. $\lbrace 0 \rbrace$ closed, so $f^{-1}(0) = R$ closed.

By theorem, since $\sim$ open, $\mathbb{R}P^n = \mathbb{R}^{n+1} - 0 / \sim$ Hausdorff.

cf. \url{http://math.stackexchange.com/questions/336272/the-real-projective-space-rpn-is-second-countable}

topological space is second countable if its topology has countable basis. \\
\quad $\mathbb{R}^n$ second countable since $\mathcal{B} = \lbrace B_r(q) | r \in \mathbb{Q}, \, q \in \mathbb{Q}^n \rbrace$ is a countable basis ($\forall \, x \in \mathbb{R}^n$, every neighborhood of $x$ contains some such ball). \\
If $X$ is second countable, with countable basis $\mathcal{B}$,
\begin{enumerate}
\item If $Y \subseteq X$, $Y$ also second countable with countable basis $\lbrace B \bigcap Y | B \in \mathcal{B} \rbrace$
\item If $Z = X/\sim $, $\lbrace \lbrace [x] | x \in B \rbrace | B \in \mathcal{B} \rbrace$ is a countable basis for $Z$ since $\mathcal{B}$ countable.
\\ It is a basis since \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em] { X = \bigcup_{B \in \mathcal{B}} B \\ Z = X/ \sim = \pi{ \left( \bigcup_{B \in \mathcal{B}} B \right) } = \bigcup_{B \in \mathcal{B}} \pi{ (B)} = \bigcup_{ B \in \mathcal{B}} \lbrace [x] | x \in B \rbrace \\ }; % \path[-stealth] \path[->] (m-1-1) edge node [right] {$\pi$} (m-2-1); \end{tikzpicture} \end{enumerate} Now let $Y = \mathbb{R}^n-0$ and \\ \phantom{Now let } $Z = \mathbb{R}P^n = \mathbb{R}^n-0 / \sim$ \exercisehead{1.7} $S^n$ compact so $S^n/\lbrace \pm \rbrace$ compact by Theorem, as $\pi_S(S^n) = S^n/ \lbrace \pm \rbrace$, as $\pi_S$ cont. surjective ($\forall \, [x] \in S^n/\lbrace \pm \rbrace$, $\exists \, x \in S^n$ s.t. $\pi_S(S^n) = [x]$) \\ $g$ cont. bijective as defined above so since $g(S^n/\lbrace \pm \rbrace) = \mathbb{R}P^n$, $\mathbb{R}P^n$ compact. \textbf{Example 1.8 (Product Manifolds)} \[ \begin{aligned} & M_1 = \bigcup_{\alpha \in \mathfrak{A}_1} U_{\alpha}^{(1)} \\ & M_i = \bigcup_{\alpha \in \mathfrak{A}_i} U_{\alpha}^{(i)} \end{aligned} \quad \quad \quad \, M_1 \times \dots \times M_n = \bigcup_{ \begin{aligned} & \alpha_1 \in \mathfrak{A}_1 \\ & \vdots \\ & \alpha_n \in \mathfrak{A}_n \end{aligned} } U_{\alpha_1} \times \dots \times U_{\alpha_n} \quad \quad \, (\text{by def.}) \] $\forall \, p = (p_1 \dots p_n) \in M_1 \times \dots M_n$, consider $p_i \in M_i$. Choose coordinate chart $(U_{j_i}, \varphi_{j_i})$, $\varphi_i(U_i) \subset \mathbb{R}^{n_i}$. Then, \\ Consider $\varphi : U_1 \times \dots \times U_n \to \mathbb{R}^{m_1} \times \dots \times \mathbb{R}^{m_n} = \mathbb{R}^{ m_1 + \dots + m_n}$, $(U_1 \times \dots \times U_n, \varphi_1 \times \dots \times \varphi_n)$ \\ $\varphi = \varphi_1 \times \dots \times \varphi_n$\\ $\varphi$ also a homeomorphism. $\begin{gathered} \varphi \psi^{-1} \\ (\varphi_1 \times \dots \times \varphi_n) \circ(\psi_1 \times \dots \times \psi_n)^{-1} \end{gathered}$ also a diffeomorphism, cont. bijective and $C^{\infty}$ \\ $\lbrace (U, \varphi) = (U_1 \times \dots \times U_n, \varphi_1 \times \dots \times \varphi_n) | (U_i, \varphi_i) \in \lbrace (U_i, \varphi_i) | U_i \in M_i \rbrace \rbrace$ also an atlas. \subsubsection*{Topological Properties of Manifolds} \begin{lemma}[1.10] $\forall \, $ topological $M$, $M$ has countable basis of precompact coordinate balls \end{lemma} \begin{proof} First consider $M$ can be covered by single chart. \\ Suppose $\varphi : M \to \widehat{U} \subseteq \mathbb{R}^n$ global coordinate map. \\ Let $\mathcal{B} = \lbrace B_r(x) | \text{ open } B_r(x) \subseteq \mathbb{R}^n \text{ s.t. } r\in \mathbb{Q}, \, x \in \mathbb{Q}, \, \text{ i.e. $x$ rational coordinates }, B_{r'}(x) \subseteq \widehat{U}, \text{ for some } r' > r \rbrace$ \\ Clearly, $\forall \, B_r(x)$ precompact in $\widehat{U}$ \\ \quad \, $\mathcal{B}$ countable basis for topology of $\widehat{U}$ $\varphi$ homeomorphism, it follows $\lbrace \varphi^{-1}(B) | B \in \mathcal{B} \rbrace$ countable basis for $M$ Let $M$ arbitrary, \\ \quad By def., $\forall \, p \in M$, $p \in $ domain $U$ of a chart Prop. A.16, $\forall \, $ open cover of second-countable space has countable subcover. \\ \quad $M$ covered by countably many charts $\lbrace (U_i, \varphi_i) \rbrace$ \\ $\forall \, U_i$, $U_i$ has countable basis of coordinate balls precompact in $U_i$ \\ union of all these coordinates bases is countable basis for $M$. 
\\ If $V \subseteq U_i$ one of these balls, \\ \quad then $\overline{V}$ compact in $U_i$. $M$ Hausdorff, so $\overline{V}$ closed. \\ \quad $\overline{V}$ in $M$ is same as $\overline{V}$ in $U_i$, so $V$ precompact in $M$. \end{proof} \subsubsection*{Connectivity} \subsubsection*{Local Compactness and Paracompactness} \subsubsection*{Fundamental Groups of Manifolds} \subsection*{Smooth Structures} If open $U \subset \mathbb{R}^n$, $V \subset \mathbb{R}^m$, \\ $F:U \to V$ smooth (or $C^{\infty}$) if $\forall \, $ cont. partial derivative of all orders exists. \\ $F$ diffeomorphism if $F$ \emph{smooth}, bijective, and has a \emph{smooth} inverse. \\ Diffeomorphism is a homeomorphism. \\ $(U, \varphi), (V,\psi)$ smoothly compatible if $UV = \emptyset$ or \[ \psi \varphi^{-1}: \varphi(UV) \to \psi(UV) \quad \quad \, \text{ diffeomorphism } \] atlas $\mathcal{A} \equiv \lbrace (U, \varphi ) \rbrace$ s.t. $\bigcup U = M$. Smooth atlas if $\forall \, (U,\varphi), (V, \psi) \in \mathcal{A}$, $(U,\varphi), (V,\psi)$ smoothly compatible. \\ Smooth structure on topological $n$-manifold $M$ is a maximal smooth atlas. \\ Smooth manifold $(M, \mathcal{A})$ where $M$ topological manifold, $\mathcal{A}$ smooth structure on $M$. \\ \begin{proposition}[1.17] Let $M$ topological manifold. \begin{enumerate} \item[(a)] $\forall \, $ smooth atlas for $M$ is contained in ! maximal smooth atlas. \item[(b)] 2 smooth atlases for $M$ determine the same maximal smooth atlas iff union is smooth atlas. \end{enumerate} \end{proposition} \begin{proof} Let $\mathcal{A}$ smooth atlas for $M$ \\ $\overline{ \mathcal{A} } \equiv $ set of all charts that are smoothly compatible with every chart in $\mathcal{A}$ \textbf{Want}: $\overline{ \mathcal{A}}$ smooth atlas, i.e. $\forall \, (U, \varphi), (V \psi) \in \overline{\mathcal{A}}$, $\psi \varphi^{-1} : \varphi(UV) \to \psi(UV)$ smooth. Let $x = \varphi(p) \in \varphi(UV)$ \\ $p\in M$, so $\exists \,$ some chart $(W, \theta) \in \mathcal{A}$ s.t. $p \in W$. \\ By given, $\theta \varphi^{-1}, \psi \theta^{-1}$ smooth where they're defined. \\ $p \in UVW$, so $\psi \varphi^{-1} = \psi \theta^{-1} \theta \varphi^{-1}$ smooth on $x$. \\ Thus $\psi \varphi^{-1}$ smooth in a neighborhood of each pt. in $\varphi(UV)$. Thus $\overline{\mathcal{A}}$ smooth atlas. To check maximal, \end{proof} \subsection*{Local Coordinate Representations} \begin{proposition}[1.19] $\forall \, $ smooth $M$ has countable basis of regular coordinate balls \end{proposition} \exercisehead{1.20} smooth manifold $M$ has smooth structure \\ \quad Suppose single smooth chart $\varphi$ has entire $M$ as domain \[ \varphi: M \to \widehat{U} \subseteq \mathbb{R}^n \] Let $\widehat{B} = \lbrace \widehat{B}_r(x) \subseteq \mathbb{R}^n | r \in \mathbb{Q}, \, x \in \mathbb{Q}, \, \widehat{B}_{r'}(x) \subseteq \widehat{U} \text{ for some } r' > r \rbrace$ \\ \quad $\forall \, \widehat{B}_r(x)$ precompact in $\widehat{U}$ \\ \quad $\widehat{\mathcal{B}}$ countable basis for topology of $\widehat{U}$ $\varphi$ homeomorphism, \quad Let $\begin{aligned} & \quad \\ & \varphi^{-1}( \widehat{B}_r{(0)}) = B \\ & \varphi^{-1}{(\widehat{B}_{r'}{(0)} )} = B' \end{aligned}$ $\varphi$ homeomorphism and since $\widehat{B}$ countable basis, $\lbrace B \rbrace$ countable basis of regular coordinate basis. \\ Suppose arbitrary smooth structure. \\ \quad By def., $\forall \, p \in M$, $p$ in some chart domain \\ \quad Prop. 
A.16., $\forall \, $ open cover of second countable space has countable subcover \\ \quad $M$ covered by countably many charts $\lbrace (U_i, \varphi_i) \rbrace$ \\ $\forall \, U_i , \, U_i$ has countable basis of coordinate balls precompact in $U_i$ \\ union of all these coordinate charts is countable basis for $M$. If $V \subseteq U_i$, 1 of these balls, \\ $\begin{aligned} & \quad \\ & \varphi(V) = B_r(0) \\ & \varphi{(\overline{V})} = \overline{B}_r(0) \end{aligned}$ and $\varphi(B') = B_{r'}{(0)}$, $r' > r$ for countable basis for $U_i$ \\ So $V$ regular coordinate ball. \hrulefill \subsection*{Examples of Smooth Manifolds} \subsubsection*{More Examples} \textbf{Example 1.25 (Spaces of Matrices) } Let $M(m\times n,\mathbb{R}) \equiv $ set of $m\times n$ matrices with real entries. \textbf{Example 1.26 (Open Submanifolds)} $\forall \, $ open subset $U \subseteq M$ is itself a $\text{dim}{M}$ manifold. \\ EY : \emph{ $\forall \, $ open subset $U \subseteq M$ is itself a $\text{dim}{M}$ manifold. } \textbf{Example 1.27 (The General Linear Group) } general linear group $GL(n,\mathbb{R}) = \lbrace A | \text{det}{A} \neq 0 \rbrace$ \\ \quad $\text{det}:A \to \mathbb{R}$ is cont. (by def. of $\text{det}{A} = \epsilon^{ i_1 \dots i_n}a_{1 i_1} \dots a_{n i_n}$ \\ \quad $\text{det}^{-1}{ ( \mathbb{R} - 0 )}$ is open since $\mathbb{R}-0$ open so $GL(n,\mathbb{R})$ open \\ \quad $GL(n,\mathbb{R}) \subseteq M(n,\mathbb{R})$, $M(n,\mathbb{R})$ $n^2$-dim. vector space. \\ \quad \quad so $GL(n,\mathbb{R})$ smooth $n^2$-dim. manifold. \textbf{Example 1.28 (Matrices of Full Rank)} Suppose $m <n$ \\ Let $M_n(m\times n, \mathbb{R}) \subseteq M(m\times n, \mathbb{R})$ with matrices of rank $m$ \\ if $A \in M_m(m\times n, \mathbb{R})$, \\ \quad $\text{rank}{A}=m$ means that $A$ has some nonsingular $m \times m$ submatrix. (EY 20140205 ???) \textbf{Example 1.31 (Spheres)} \[ \begin{aligned} & \varphi_i^{\pm} : S^n \to B_1^n(0) \subset \mathbb{R}^n \quad \, (B_1^n(0) \text{ disk of radius $1$}) \\ & \varphi_i^{\pm}(x_1 \dots x_{n+1}) = (x_1 \dots \widehat{x}_i \dots x_{n+1} )= (y_1 \dots y_n) \end{aligned} \] Note $x_1^2 + \dots + x_i^2 + \dots + x_{n+1}^2 = 1$. \quad \, $x_i = \pm \sqrt{ 1 - (x_1^2 + \dots + \widehat{x}_i^2 + \dots + x_{n+1}^2 ) }$ \[ \begin{aligned} & (\varphi_i^{\pm})^{-1}(y_1 \dots y_n) = (y_1 \dots \pm \sqrt{ 1 - (y_1^2 + \dots + y_n^2) } \dots y_n) = (y_1 \dots y_{i-1}, \pm \sqrt{ 1- |y|^2 }, y_i\dots y_n ) \\ & \varphi_i^{\pm} (\varphi_j^{\pm})^{-1}(y_1 \dots y_n ) = (y_1 \dots \widehat{y}_i \dots y_{j-1} , \pm \sqrt{ 1 - |y|^2 }, y_j \dots y_n) \\ & \varphi_i^{\pm} (\varphi_j^{\mp})^{-1}(y_1 \dots y_n ) = (y_1 \dots \widehat{y}_i \dots y_{j-1} , \mp \sqrt{ 1 - |y|^2 }, y_j \dots y_n) \\ \end{aligned} \] \[ \varphi_j^{\pm}(\varphi_i^{\mp})^{-1} \varphi_i^{\pm} (\varphi_j^{\pm})^{-1}(y_1 \dots y_n) = \varphi_j^{\pm}(y_1 \dots \pm y_i \dots y_{j-1}, \pm \sqrt{ 1 - |y|^2 } \dots y_n ) = (y_1 \dots \pm y_i \dots y_j \dots y_n) \] This is symmetrical in $i,j$ and so true if $i,j$ reverse. \\ So $\varphi_i^{\pm}(\varphi_j^{\pm})^{-1}$ diff. and bijective. Likewise for $\varphi_i^{\pm}(\varphi_j^{\mp})^{-1}$. \\ So $\begin{aligned} & \varphi_i^{\pm} (\varphi_j^{\pm})^{-1} \\ & \varphi_i^{\pm} (\varphi_j^{\mp})^{-1} \end{aligned}$ diffeomorphisms. \begin{lemma}[1.35] \textbf{(Smooth Manifold Chart Lemma)} Let $M$ be a set, suppose given $\lbrace U_{\alpha} | U_{\alpha} \subset M \rbrace$, given maps $\varphi_{\alpha}: U_{\alpha} \to \mathbb{R}^n$ s.t. 
\begin{enumerate} \item[(i)] $\forall \, \alpha$, $\varphi_{\alpha}$ bijection between $U_{\alpha}$ and open $\varphi_{\alpha}(U_{\alpha}) \subseteq \mathbb{R}^n$ \item[(ii)] $\forall \, \alpha, \beta$, $\varphi_{\alpha}(U_{\alpha} \bigcap U_{\beta})$, $\varphi_{\beta}(U_{\alpha} \bigcap U_{\beta})$ open in $\mathbb{R}^n$ \end{enumerate} \end{lemma} \textbf{Example 1.36 (Grassmann Manifolds)} %$G_k(V) = $ set of all $k$-dim. linear subspaces of $V$, $\text{dim}{V} = n \geq k $ %Let $P,Q$ complementary subspaces of $V$, $\begin{aligned} & \quad \\ & \text{dim}{P} = k \\ & \text{dim}{Q} = n- k \end{aligned}$ %direct sum decomposition $V = P \oplus Q$ %graph of any linear $X: P \to Q$ %\[ %\Gamma(X) = \lbrace v + X v | v \in P \rbrace \subset V %\] %$k$-dim. subspace %Now $\Gamma(X) Q = \emptyset$ since $\Gamma(A) \subset P \oplus Q$ and $P$ and $Q$ are complementary. %If $KQ = \emptyset$, then $K \subset P \oplus Q$, $P\neq 0$. $G_k(V) = \lbrace S | S \subseteq V \rbrace$ \quad \, $S$ $k$-dim. linear subspace of $V$ \\ $\text{dim}V = n$, $V$ vector space \\ if $V = \mathbb{R}^n$, $G_k(\mathbb{R}^n) \equiv G_{k,n} \equiv G(k,n)$ (notation) \quad \, $G_1(\mathbb{R}^{n+1}) = \mathbb{R}P^n$ Let $V = P \oplus Q$, \, $\begin{aligned} & \text{dim}P =k \\ & \text{dim}Q = n-k \end{aligned}$ \\ linear $X: P \to Q$ \[ \Gamma(X) = \lbrace v + Xv | v \in P \rbrace , \quad \, \Gamma(X) \subseteq V, \, \text{dim}\Gamma(X) = k \] $\Gamma(X) \bigcap Q = 0$ since $\forall \, w \in \Gamma(X)$, $w$ has a $P$ piece, and $Q$ complementary to $P$ \\ Converse: $\forall \, $ subspace $S \subseteq V$, s.t. $S\bigcap Q = 0$ \\ \quad \, let $\begin{aligned} & \quad \\ & \pi_P : V \to P \\ & \pi_Q: V \to Q \end{aligned}$ \quad \, projections by direct sum decomposition $V = P\oplus Q$ \\ $\left. \pi_P \right|_S : S \to P$ isomorphism \\ $\Longrightarrow X = ( \left. \pi_Q \right|_S ) \cdot ( \left. \pi_P \right|_S)^{-1}$, \, $X: P \to Q$ \\ Let $v\in P$. $v+ Xv = v + \left. \pi_Q \right|_S ( \left. \pi_P \right|_S)^{-1} v$. Let $v \in \left. \pi_P \right|_S (S)$ \quad \, $\Gamma(X) = S$ \\ Let $L(P;Q) = \lbrace f | \text{ linear } f : P \to Q \rbrace$, \, $L(P;Q)$ vector space \\ $U_Q \subseteq G_k(V)$, \, $U_Q = \lbrace S | \text{dim}S =k, \, S \text{ subspace }, \, S \bigcap Q = 0 \rbrace$ \\ $\begin{aligned} & \Gamma : L(P;Q) \to U_Q \\ & X \mapsto \Gamma(X) \end{aligned}$ \\ $\Gamma$ bijection by above \\ $\varphi = \Gamma^{-1} : U_Q \to L(P;Q)$ \\ By choosing bases for $P,Q$, identify $L(P;Q)$ with $M((n-k)k; \mathbb{R})$ and hence with $\mathbb{R}^{k(n-k)}$ \\ think of $(U_Q, \varphi)$ as coordinate chart. \\ $\varphi(U_Q) = L(P;Q)$ \subsection*{Problems} \problemhead{1.7} (This was Problem 1.5 in previous editions) \[ S^n = \lbrace ( x_1 \dots x_{n+1} \rbrace \in \mathbb{R}^{n+1} | \sum_{i=1}^{n+1} x_i^2 = 1 \rbrace \subset \mathbb{R}^{n+1} \] Let $\begin{aligned} & \quad \\ & N = (0 \dots 0, 1) \\ & S = (0 \dots 0, -1) \end{aligned}$ \quad \, $x\in S^n$. 
\begin{enumerate} \item[(a)] Consider $t(x-N) + N = tx + (1-t)N$ when $x_{n+1} =0$ \[ \begin{gathered} tx_{n+1} + (1-t) = 0 \text{ or } tx_{n+1} + -1 + t = 0 \\ \Longrightarrow \frac{ 1}{ 1- x_{n+1} } = t \quad \left( \text{ or } \frac{1}{ 1 + x_{n+1} } \right) \end{gathered} \] \[ \begin{aligned} & \pi_1: S^n - N \to \mathbb{R}^n \\ & \pi_1(x_1 \dots x_{n+1}) = \left( \frac{x_1}{ 1 - x_{n+1} } \dots \frac{x_n}{ 1 - x_{n+1} } , 0 \right) \\ & \pi_2 : S^n -S \to \mathbb{R}^n \\ & \pi_2(x_1 \dots x_{n+1}) = \left( \frac{x_1}{ 1 + x_{n+1}} \dots \frac{x_n}{ 1 + x_{n+1} }, 0 \right) \end{aligned} \] Note that $-\pi_2(-x) = \pi_1$ and $\pi_1 \equiv \sigma$, $\pi_2 \equiv \widetilde{\sigma}$ in Massey's notation. \item[(b)] Note, for $y_i = \frac{x_i}{ 1 - x_{n+1}}$ \[ \begin{gathered} y_1^2 + \dots +y_n^2 = |y|^2 = \frac{1- x_{n+1}^2}{ (1-x_{n+1})^2 } = \frac{1+ x_{n+1}}{ 1- x_{n+1}} \text{ or } x_{n+1} = \frac{ |y|^2 - 1 }{ |y|^2 + 1 } \\ x_i = y _i (1- x_{n+1}) = \frac{2y_i}{ 1 + |y|^2 } \end{gathered} \] \[ \begin{aligned} & \pi_1^{-1}: \mathbb{R}^n \to S^n - N \\ & \pi_1^{-1}(y_1 \dots y_n) = \left( \frac{2y_1}{ 1 + |y|^2 } \dots \frac{2y_n}{1+ |y|^2 } , \frac{ |y|^2 - 1 }{ |y|^2 + 1 } \right) \\ & \pi_2^{-1}(y_1 \dots y_n) = \left( \frac{2y_1}{ 1 + |y|^2 } \dots \frac{2y_n}{1+ |y|^2 } , \frac{ 1 - |y|^2 }{ |y|^2 + 1 } \right) \end{aligned} \] $\pi_1,\pi_2$ diff., bijective, and $(S^n - N ) \bigcup (S^n-S) = S^n$ \\ \item[(c)] Computing the transition maps for the stereographic projections. Consider $(S^n - N)(S^n-S) = S^n - N \bigcup S$ \[ \begin{aligned} & \pi_1 \pi_2^{-1}(y_1 \dots y_n) = \left( \frac{y_1 }{ |y|^2} \dots \frac{y_n}{|y|^2} , 0 \right) \\ & \pi_2 \pi_1^{-1}(y_1 \dots y_n) = \left( \frac{y_1 }{ |y|^2} \dots \frac{y_n}{|y|^2} , 0 \right) \end{aligned} \] since, for example, \[ \begin{gathered} \frac{ \frac{2y_i }{ 1 + |y|^2} }{ 1 - \frac{ 1 - |y|^2}{ 1 + |y|^2 }} = \frac{y_i }{ |y|^2 } \end{gathered} \] $\pi_1 \pi_2^{-1}$ bijective and $C^{\infty}$, $\pi_1 \pi_2^{-1}$ diffeomorphism. \\ $\lbrace (S^n-N, \pi_1), (S^n - S, \pi_2) \rbrace$ \, $C^{\infty}$ atlas or differentiable structure. \[ \begin{aligned} & \partial_j \frac{y_i}{ |y|^2} = \frac{ - 2y_i y_j }{ (y_1^2 + \dots + y_n^2 )^2 } \\ & \partial_j \frac{ y_j}{ |y|^2} = \frac{ (y_1^2 + \dots + y_n^2) - 2y_j^2 }{ |y|^4} = \frac{ y_1^2 + \dots + \widehat{y}_j^2 + \dots + y_n^2 - y_j^2 }{ |y|^4} \end{aligned} \] \[ \text{det}{ (\partial_j \pi_1 \pi_2^{-1}(y) ) } = \sum_{ \sigma \in S_n} \text{sgn}{ (\sigma)} \partial_{\sigma_1} \frac{y_1}{ |y|^2} \dots \partial_{\sigma_n} \frac{y_n}{|y|^2} = 0 \text{ only if $y=0$. 
But that's excluded } \]

\item[(d)] Consider $\begin{gathered} \lbrace ( S^n\backslash N, \pi_1), (S^n \backslash S, \pi_2) \rbrace \\
\mathcal{A} = \lbrace (U_i^{\pm}, \varphi_i^{\pm}) \rbrace \end{gathered}$ \\
Now
\[
\begin{aligned}
 & (S^n \backslash N) \bigcap U_i^+ = \begin{cases} U_i^+ & \text{ if } i \neq n + 1 \\ U_{n+1}^+ \backslash N & \text{ if } i = n+1 \end{cases} \\
 & (S^n \backslash N) \bigcap U_i^- = U_i^-
\end{aligned}
\]
\[
\pi_1(\varphi_i^{\pm})^{-1}(y_1 \dots y_n) = \pi_1(y_1 \dots y_{i-1}, \pm \sqrt{ 1 - |y|^2 }, y_i \dots y_n) = \left( \frac{y_1 }{ 1 - y_n} \dots \frac{y_{i-1} }{ 1 - y_n}, \frac{ \pm \sqrt{ 1 - |y|^2 } }{ 1 - y_n } , \frac{y_i}{ 1 - y_n} \dots \frac{y_{n-1}}{ 1 - y_n }, 0 \right)
\]
Note $-1< y_n <1$ on $\varphi_i^{\pm}((S^n\backslash N) \bigcap U_i^{\pm})$
\[
\varphi_i^{\pm} \pi_1^{-1}(y_1 \dots y_n) = \varphi_i^{\pm}\left( \frac{ 2y_1}{ 1 + |y|^2} \dots \frac{2y_n}{ 1 + |y|^2} , \frac{|y|^2 - 1 }{ |y|^2 + 1 } \right) = \left( \frac{2y_1 }{ 1 + |y|^2} \dots \frac{ 2 \widehat{y}_i }{ 1 + |y|^2 } \dots \frac{ 2y_n }{ 1 + |y|^2 }, \frac{|y|^2 -1}{ |y|^2 + 1 } \right)
\]
$\pi_{1,2}(\varphi_i^{\pm})^{-1}, \varphi_i^{\pm}\pi_{1,2}^{-1}$ are diffeomorphisms (bijective and smooth in both directions). So $\lbrace(S^n\backslash N, \pi_1), (S^n\backslash S , \pi_2) \rbrace \bigcup \mathcal{A}$ also a $C^{\infty}$ atlas. \\
So $\lbrace(S^n\backslash N, \pi_1), (S^n\backslash S , \pi_2) \rbrace$ , $\mathcal{A}$ equivalent.

\end{enumerate}
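As a quick numerical cross-check of the formulas in (a)--(c) (an added sanity check, not part of the solution; it drops the trailing $0$ coordinate and treats $\pi_1, \pi_2$ as maps into $\mathbb{R}^n$), the following sketch verifies for a random point that $\pi_1^{-1}(y) \in S^n$, that $\pi_1 \pi_1^{-1} = \text{id}$, and that $\pi_1 \pi_2^{-1}(y) = y/|y|^2$:
\begin{verbatim}
import numpy as np

def pi1(x):                     # stereographic projection from N = (0,...,0,1)
    return x[:-1] / (1.0 - x[-1])

def pi1_inv(y):
    s = np.dot(y, y)
    return np.append(2.0 * y / (1.0 + s), (s - 1.0) / (s + 1.0))

def pi2_inv(y):                 # inverse of the projection from S = (0,...,0,-1)
    s = np.dot(y, y)
    return np.append(2.0 * y / (1.0 + s), (1.0 - s) / (s + 1.0))

y = np.random.default_rng(0).normal(size=3)           # a point of R^3 (n = 3)
assert np.isclose(np.dot(pi1_inv(y), pi1_inv(y)), 1)  # pi1^{-1}(y) lies on S^3
assert np.allclose(pi1(pi1_inv(y)), y)                # pi1 o pi1^{-1} = id
assert np.allclose(pi1(pi2_inv(y)), y / np.dot(y, y)) # transition map y/|y|^2
\end{verbatim}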
{ "alphanum_fraction": 0.5696193748, "avg_line_length": 42.3838235294, "ext": "tex", "hexsha": "535f6816b7a73d3227d2b32e72ad69835ffb9d66", "lang": "TeX", "max_forks_count": 25, "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_path": "LeeJM/01smoothmanifolds.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_path": "LeeJM/01smoothmanifolds.tex", "max_line_length": 405, "max_stars_count": 50, "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_path": "LeeJM/01smoothmanifolds.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "num_tokens": 11631, "size": 28821 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{Traffic\_Sign\_Classifier} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \section{Self-Driving Car Engineer Nanodegree}\label{self-driving-car-engineer-nanodegree} \subsection{Deep Learning}\label{deep-learning} \subsection{Project 3: Build a Traffic Sign Recognition Classifier}\label{project-3-build-a-traffic-sign-recognition-classifier} Udacity Project 3 - Building a CNN with tensorflow to classify traffic signs of the size 32x32x3. 
\begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \subsection{Step 0: Load The Data}\label{step-0-load-the-data} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{c+c1}{\PYZsh{} Load pickled data} \PY{k+kn}{import} \PY{n+nn}{pickle} \PY{n}{training\PYZus{}file} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{traffic\PYZhy{}signs\PYZhy{}data/train.p}\PY{l+s+s2}{\PYZdq{}} \PY{n}{validation\PYZus{}file}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{traffic\PYZhy{}signs\PYZhy{}data/valid.p}\PY{l+s+s2}{\PYZdq{}} \PY{n}{testing\PYZus{}file} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{traffic\PYZhy{}signs\PYZhy{}data/test.p}\PY{l+s+s2}{\PYZdq{}} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{training\PYZus{}file}\PY{p}{,} \PY{n}{mode}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{rb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{train} \PY{o}{=} \PY{n}{pickle}\PY{o}{.}\PY{n}{load}\PY{p}{(}\PY{n}{f}\PY{p}{)} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{validation\PYZus{}file}\PY{p}{,} \PY{n}{mode}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{rb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{valid} \PY{o}{=} \PY{n}{pickle}\PY{o}{.}\PY{n}{load}\PY{p}{(}\PY{n}{f}\PY{p}{)} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{testing\PYZus{}file}\PY{p}{,} \PY{n}{mode}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{rb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{test} \PY{o}{=} \PY{n}{pickle}\PY{o}{.}\PY{n}{load}\PY{p}{(}\PY{n}{f}\PY{p}{)} \PY{n}{X\PYZus{}train}\PY{p}{,} \PY{n}{y\PYZus{}train} \PY{o}{=} \PY{n}{train}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{features}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{train}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{labels}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{X\PYZus{}valid}\PY{p}{,} \PY{n}{y\PYZus{}valid} \PY{o}{=} \PY{n}{valid}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{features}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{valid}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{labels}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{X\PYZus{}test}\PY{p}{,} \PY{n}{y\PYZus{}test} \PY{o}{=} \PY{n}{test}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{features}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{test}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{labels}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \end{Verbatim} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \subsection{Step 1: Dataset Summary \& Exploration}\label{step-1-dataset-summary-exploration} The pickled data is a dictionary with 4 key/value pairs: \begin{itemize} \tightlist \item \texttt{\textquotesingle{}features\textquotesingle{}} is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). \item \texttt{\textquotesingle{}labels\textquotesingle{}} is a 1D array containing the label/class id of the traffic sign. The file \texttt{signnames.csv} contains id -\textgreater{} name mappings for each id. \item \texttt{\textquotesingle{}sizes\textquotesingle{}} is a list containing tuples, (width, height) representing the original width and height the image. \item \texttt{\textquotesingle{}coords\textquotesingle{}} is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. \textbf{THESE COORDINATES ASSUME THE ORIGINAL IMAGE. 
THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES} \end{itemize} \subsubsection{Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas}\label{provide-a-basic-summary-of-the-data-set-using-python-numpy-andor-pandas} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{c+c1}{\PYZsh{} Number of training examples} \PY{n}{n\PYZus{}train} \PY{o}{=} \PY{n}{X\PYZus{}train}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{} Number of validation examples} \PY{n}{n\PYZus{}validation} \PY{o}{=} \PY{n}{X\PYZus{}valid}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{} Number of testing examples.} \PY{n}{n\PYZus{}test} \PY{o}{=} \PY{n}{X\PYZus{}test}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{} What\PYZsq{}s the shape of an traffic sign image?} \PY{n}{image\PYZus{}shape} \PY{o}{=} \PY{n}{X\PYZus{}train}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{p}{]} \PY{c+c1}{\PYZsh{} How many unique classes/labels there are in the dataset.} \PY{n}{n\PYZus{}classes} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}train}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of training examples =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{n\PYZus{}train}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of validation examples =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{n\PYZus{}validation}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of testing examples =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{n\PYZus{}test}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Image data shape =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{image\PYZus{}shape}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Number of classes =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{n\PYZus{}classes}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Number of training examples = 34799 Number of validation examples = 4410 Number of testing examples = 12630 Image data shape = (32, 32, 3) Number of classes = 43 \end{Verbatim} \subsubsection{Include an exploratory visualization of the dataset}\label{include-an-exploratory-visualization-of-the-dataset} Visualization of the German Traffic Signs Dataset using the pickled file(s). \paragraph{Data Histogram}\label{data-histogram} The data is not equally distributed between all classes. Some are more rare than others and will be therefore more difficult to train. 
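Before plotting, the imbalance can also be quantified directly with a per-class count (a small added sketch, not part of the original notebook; it assumes \texttt{numpy} is already imported as \texttt{np} and that \texttt{y\_train} and \texttt{n\_classes} are defined as above):

\begin{verbatim}
# Count how many training examples each of the n_classes classes has.
counts = np.bincount(y_train, minlength=n_classes)
print("smallest class:", counts.argmin(), "with", counts.min(), "examples")
print("largest  class:", counts.argmax(), "with", counts.max(), "examples")
\end{verbatim}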
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline \PY{c+c1}{\PYZsh{} create histograms for train, validation and test data.} \PY{n}{plt}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{y\PYZus{}train}\PY{p}{,} \PY{n}{bins}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}train}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{y\PYZus{}test}\PY{p}{,} \PY{n}{bins}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}test}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{y\PYZus{}valid}\PY{p}{,} \PY{n}{bins}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}valid}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{yscale}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{log}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_0.png} \end{center} { \hspace*{\fill} \\} \paragraph{Show all the different classes available in the dataset}\label{show-all-the-different-classes-available-in-the-dataset} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \PY{n}{signnames} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}csv}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{signnames.csv}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{o}{.}\PY{n}{SignName} \PY{n}{num\PYZus{}of\PYZus{}rows} \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}train}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{5}\PY{o}{+}\PY{o}{.}\PY{l+m+mi}{9}\PY{p}{)} \PY{c+c1}{\PYZsh{} max+1 = num of classes; divided by 5 and +.9 for rounding up} \PY{n}{f}\PY{p}{,} \PY{n}{axarr} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{n}{num\PYZus{}of\PYZus{}rows}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mi}{40}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{y\PYZus{}train}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].imshow(X\PYZus{}valid[np.where(y\PYZus{}valid == i)[0][0]]) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].set\PYZus{}title(str(i) + \PYZdq{}: \PYZdq{} + signnames[i]) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}xaxis().set\PYZus{}visible(False) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}yaxis().set\PYZus{}visible(False) \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_10_0.png} \end{center} { \hspace*{\fill} \\} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \subsection{Step 2: Design and Test a Model Architecture}\label{step-2-design-and-test-a-model-architecture} Design and implement a deep learning model that learns to recognize traffic signs. 
Train and test your model on the \href{http://benchmark.ini.rub.de/?section=gtsrb\&subsection=dataset}{German Traffic Sign Dataset}. The LeNet-5 implementation is a solid starting point. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem: \begin{itemize} \tightlist \item Neural network architecture (is the network over or underfitting?) \item Play around preprocessing techniques (normalization, rgb to grayscale, etc) \item Number of examples per label (some have more than others). \item Generate fake data. \end{itemize} Here is an example of a \href{http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf}{published baseline model on this problem}. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. \subsubsection{Pre-process the Data Set (normalization, grayscale, etc.)}\label{pre-process-the-data-set-normalization-grayscale-etc.} Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, \texttt{(pixel\ -\ 128)/\ 128} is a quick way to approximately normalize the data and can be used in this project. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}5}]:} \PY{c+c1}{\PYZsh{} Normalize all the data} \PY{k}{def} \PY{n+nf}{normalize}\PY{p}{(}\PY{n}{data}\PY{p}{)}\PY{p}{:} \PY{n}{data} \PY{o}{=} \PY{p}{(}\PY{n}{data} \PY{o}{\PYZhy{}} \PY{l+m+mf}{128.}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{128} \PY{k}{return} \PY{n}{data} \PY{k}{if} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{X\PYZus{}train}\PY{p}{)} \PY{o}{\PYZgt{}} \PY{l+m+mi}{1}\PY{p}{:} \PY{n}{X\PYZus{}train} \PY{o}{=} \PY{n}{normalize}\PY{p}{(}\PY{n}{X\PYZus{}train}\PY{p}{)} \PY{n}{X\PYZus{}valid} \PY{o}{=} \PY{n}{normalize}\PY{p}{(}\PY{n}{X\PYZus{}valid}\PY{p}{)} \PY{n}{X\PYZus{}test} \PY{o}{=} \PY{n}{normalize}\PY{p}{(}\PY{n}{X\PYZus{}test}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Normalization Successful}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Data already normalized}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Normalization Successful \end{Verbatim} \subsubsection{Model Architecture}\label{model-architecture} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}6}]:} \PY{c+c1}{\PYZsh{} CNN model architecture} \PY{k+kn}{import} \PY{n+nn}{tensorflow} \PY{k}{as} \PY{n+nn}{tf} \PY{k+kn}{from} \PY{n+nn}{sklearn}\PY{n+nn}{.}\PY{n+nn}{utils} \PY{k}{import} \PY{n}{shuffle} \PY{k+kn}{from} \PY{n+nn}{tensorflow}\PY{n+nn}{.}\PY{n+nn}{contrib}\PY{n+nn}{.}\PY{n+nn}{layers} \PY{k}{import} \PY{n}{flatten} \PY{n}{EPOCHS} \PY{o}{=} \PY{l+m+mi}{50} \PY{c+c1}{\PYZsh{} } \PY{n}{BATCH\PYZus{}SIZE} \PY{o}{=} \PY{l+m+mi}{2000} \PY{n}{learning\PYZus{}rate} \PY{o}{=} \PY{l+m+mf}{0.002} \PY{n}{dropout\PYZus{}training} \PY{o}{=} \PY{l+m+mf}{0.6} \PY{c+c1}{\PYZsh{} Dropout to avoid overfitting} \PY{k}{def} \PY{n+nf}{CNN}\PY{p}{(}\PY{n}{x}\PY{p}{)}\PY{p}{:} \PY{n}{mu} \PY{o}{=} \PY{l+m+mi}{0} \PY{n}{sigma} \PY{o}{=} \PY{l+m+mf}{0.1} \PY{c+c1}{\PYZsh{} Layer 1: Convolutional. 
Input = 32x32x3} \PY{n}{conv1\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\PYZsh{}Shape: Kernel, Kernel, Input Depth, Output Depth} \PY{n}{conv1\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{8}\PY{p}{)}\PY{p}{)} \PY{n}{conv1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{conv2d}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{conv1\PYZus{}W}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{SAME}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{+} \PY{n}{conv1\PYZus{}b} \PY{n}{conv1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{conv1}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 2: Convolutional. Input 32x32x8} \PY{n}{conv2\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{conv2\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{8}\PY{p}{)}\PY{p}{)} \PY{n}{conv2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{conv2d}\PY{p}{(}\PY{n}{conv1}\PY{p}{,} \PY{n}{conv2\PYZus{}W}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{SAME}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{+} \PY{n}{conv2\PYZus{}b} \PY{n}{conv2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{conv2}\PY{p}{)} \PY{c+c1}{\PYZsh{} Pooling. Input = 32x32x8} \PY{n}{conv2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{max\PYZus{}pool}\PY{p}{(}\PY{n}{conv2}\PY{p}{,} \PY{n}{ksize}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{VALID}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{conv2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{dropout}\PY{p}{(}\PY{n}{conv2}\PY{p}{,} \PY{n}{dropout}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 3: Convolutional. 
Input 16x16x8} \PY{n}{conv3\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{conv3\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{)} \PY{n}{conv3} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{conv2d}\PY{p}{(}\PY{n}{conv2}\PY{p}{,} \PY{n}{conv3\PYZus{}W}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{SAME}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{+} \PY{n}{conv3\PYZus{}b} \PY{n}{conv3} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{conv3}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 4: Convolutional. Input 16x16x16} \PY{n}{conv4\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{conv4\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{)} \PY{n}{conv4} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{conv2d}\PY{p}{(}\PY{n}{conv3}\PY{p}{,} \PY{n}{conv4\PYZus{}W}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{SAME}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{o}{+} \PY{n}{conv4\PYZus{}b} \PY{n}{conv4} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{conv4}\PY{p}{)} \PY{c+c1}{\PYZsh{} Pooling. Input = 16x16x16} \PY{n}{conv4} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{max\PYZus{}pool}\PY{p}{(}\PY{n}{conv4}\PY{p}{,} \PY{n}{ksize}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{strides}\PY{o}{=}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{padding}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{VALID}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{conv4} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{dropout}\PY{p}{(}\PY{n}{conv4}\PY{p}{,} \PY{n}{dropout}\PY{p}{)} \PY{n}{fc0\PYZus{}0} \PY{o}{=} \PY{n}{flatten}\PY{p}{(}\PY{n}{conv2}\PY{p}{)} \PY{c+c1}{\PYZsh{} 16x16x8 = 2048} \PY{n}{fc0\PYZus{}1} \PY{o}{=} \PY{n}{flatten}\PY{p}{(}\PY{n}{conv4}\PY{p}{)} \PY{c+c1}{\PYZsh{} 8x8x16 = 1024} \PY{n}{fc0} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{fc0\PYZus{}0}\PY{p}{,} \PY{n}{fc0\PYZus{}1}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 5: Fully Connected. 
Input = 3072} \PY{n}{fc1\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{3072}\PY{p}{,} \PY{l+m+mi}{1024}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{fc1\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{1024}\PY{p}{)}\PY{p}{)} \PY{n}{fc1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{matmul}\PY{p}{(}\PY{n}{fc0}\PY{p}{,} \PY{n}{fc1\PYZus{}W}\PY{p}{)} \PY{o}{+} \PY{n}{fc1\PYZus{}b} \PY{n}{fc1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{batch\PYZus{}normalization}\PY{p}{(}\PY{n}{fc1}\PY{p}{,} \PY{n}{mean}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{variance}\PY{o}{=}\PY{l+m+mf}{0.2}\PY{p}{,} \PY{n}{offset}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{scale}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{variance\PYZus{}epsilon}\PY{o}{=}\PY{l+m+mf}{1e\PYZhy{}5}\PY{p}{)} \PY{n}{fc1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{fc1}\PY{p}{)} \PY{n}{fc1} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{dropout}\PY{p}{(}\PY{n}{fc1}\PY{p}{,} \PY{n}{dropout}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 6: Fully Connected. Input = 1024} \PY{n}{fc2\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{1024}\PY{p}{,} \PY{l+m+mi}{256}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{fc2\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{256}\PY{p}{)}\PY{p}{)} \PY{n}{fc2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{matmul}\PY{p}{(}\PY{n}{fc1}\PY{p}{,} \PY{n}{fc2\PYZus{}W}\PY{p}{)} \PY{o}{+} \PY{n}{fc2\PYZus{}b} \PY{n}{fc2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{batch\PYZus{}normalization}\PY{p}{(}\PY{n}{fc2}\PY{p}{,} \PY{n}{mean}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{variance}\PY{o}{=}\PY{l+m+mf}{0.2}\PY{p}{,} \PY{n}{offset}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{scale}\PY{o}{=}\PY{k+kc}{None}\PY{p}{,} \PY{n}{variance\PYZus{}epsilon}\PY{o}{=}\PY{l+m+mf}{1e\PYZhy{}5}\PY{p}{)} \PY{n}{fc2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{relu}\PY{p}{(}\PY{n}{fc2}\PY{p}{)} \PY{n}{fc2} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{dropout}\PY{p}{(}\PY{n}{fc2}\PY{p}{,} \PY{n}{dropout}\PY{p}{)} \PY{c+c1}{\PYZsh{} Layer 7: Fully Connected. Input = 256. 
Output = 43.} \PY{n}{fc3\PYZus{}W} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{truncated\PYZus{}normal}\PY{p}{(}\PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{256}\PY{p}{,} \PY{l+m+mi}{43}\PY{p}{)}\PY{p}{,} \PY{n}{mean} \PY{o}{=} \PY{n}{mu}\PY{p}{,} \PY{n}{stddev} \PY{o}{=} \PY{n}{sigma}\PY{p}{)}\PY{p}{)} \PY{n}{fc3\PYZus{}b} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{Variable}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{l+m+mi}{43}\PY{p}{)}\PY{p}{)} \PY{n}{logits} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{matmul}\PY{p}{(}\PY{n}{fc2}\PY{p}{,} \PY{n}{fc3\PYZus{}W}\PY{p}{)} \PY{o}{+} \PY{n}{fc3\PYZus{}b} \PY{k}{return} \PY{n}{logits} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] C:\textbackslash{}ProgramData\textbackslash{}Anaconda3\textbackslash{}lib\textbackslash{}site-packages\textbackslash{}h5py\textbackslash{}\_\_init\_\_.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from .\_conv import register\_converters as \_register\_converters \end{Verbatim} \subsubsection{Train, Validate and Test the Model}\label{train-validate-and-test-the-model} A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}7}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} Train your model here.} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} Calculate and report the accuracy on the training and validation set.} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} Once a final model architecture is selected, } \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} the accuracy on the test set should be calculated and reported as well.} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} Feel free to use as many code cells as needed.} \PY{n}{x} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{placeholder}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{float32}\PY{p}{,} \PY{p}{(}\PY{k+kc}{None}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\PY{p}{)} \PY{n}{y} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{placeholder}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{int32}\PY{p}{,} \PY{p}{(}\PY{k+kc}{None}\PY{p}{)}\PY{p}{)} \PY{n}{dropout} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{placeholder\PYZus{}with\PYZus{}default}\PY{p}{(}\PY{l+m+mf}{1.0}\PY{p}{,} \PY{n}{shape}\PY{o}{=}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{one\PYZus{}hot\PYZus{}y} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{one\PYZus{}hot}\PY{p}{(}\PY{n}{y}\PY{p}{,} \PY{l+m+mi}{43}\PY{p}{)} \PY{n}{logits} \PY{o}{=} \PY{n}{CNN}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{n}{cross\PYZus{}entropy} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{softmax\PYZus{}cross\PYZus{}entropy\PYZus{}with\PYZus{}logits}\PY{p}{(}\PY{n}{labels}\PY{o}{=}\PY{n}{one\PYZus{}hot\PYZus{}y}\PY{p}{,} \PY{n}{logits}\PY{o}{=}\PY{n}{logits}\PY{p}{)} \PY{n}{loss\PYZus{}operation} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{reduce\PYZus{}mean}\PY{p}{(}\PY{n}{cross\PYZus{}entropy}\PY{p}{)} \PY{n}{optimizer} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{train}\PY{o}{.}\PY{n}{AdamOptimizer}\PY{p}{(}\PY{n}{learning\PYZus{}rate} \PY{o}{=} \PY{n}{learning\PYZus{}rate}\PY{p}{)} \PY{n}{training\PYZus{}operation} \PY{o}{=} \PY{n}{optimizer}\PY{o}{.}\PY{n}{minimize}\PY{p}{(}\PY{n}{loss\PYZus{}operation}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] WARNING:tensorflow:From 
<ipython-input-7-704b5ecbf537>:16: softmax\_cross\_entropy\_with\_logits (from tensorflow.python.ops.nn\_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See `tf.nn.softmax\_cross\_entropy\_with\_logits\_v2`. \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}8}]:} \PY{n}{correct\PYZus{}prediction} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{equal}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{argmax}\PY{p}{(}\PY{n}{logits}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{tf}\PY{o}{.}\PY{n}{argmax}\PY{p}{(}\PY{n}{one\PYZus{}hot\PYZus{}y}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{accuracy\PYZus{}operation} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{reduce\PYZus{}mean}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{cast}\PY{p}{(}\PY{n}{correct\PYZus{}prediction}\PY{p}{,} \PY{n}{tf}\PY{o}{.}\PY{n}{float32}\PY{p}{)}\PY{p}{)} \PY{n}{saver} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{train}\PY{o}{.}\PY{n}{Saver}\PY{p}{(}\PY{p}{)} \PY{k}{def} \PY{n+nf}{evaluate}\PY{p}{(}\PY{n}{X\PYZus{}data}\PY{p}{,} \PY{n}{y\PYZus{}data}\PY{p}{)}\PY{p}{:} \PY{n}{num\PYZus{}examples} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{X\PYZus{}data}\PY{p}{)} \PY{n}{total\PYZus{}accuracy} \PY{o}{=} \PY{l+m+mi}{0} \PY{n}{sess} \PY{o}{=} \PY{n}{tf}\PY{o}{.}\PY{n}{get\PYZus{}default\PYZus{}session}\PY{p}{(}\PY{p}{)} \PY{k}{for} \PY{n}{offset} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{num\PYZus{}examples}\PY{p}{,} \PY{n}{BATCH\PYZus{}SIZE}\PY{p}{)}\PY{p}{:} \PY{n}{batch\PYZus{}x}\PY{p}{,} \PY{n}{batch\PYZus{}y} \PY{o}{=} \PY{n}{X\PYZus{}data}\PY{p}{[}\PY{n}{offset}\PY{p}{:}\PY{n}{offset}\PY{o}{+}\PY{n}{BATCH\PYZus{}SIZE}\PY{p}{]}\PY{p}{,} \PY{n}{y\PYZus{}data}\PY{p}{[}\PY{n}{offset}\PY{p}{:}\PY{n}{offset}\PY{o}{+}\PY{n}{BATCH\PYZus{}SIZE}\PY{p}{]} \PY{n}{accuracy} \PY{o}{=} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{accuracy\PYZus{}operation}\PY{p}{,} \PY{n}{feed\PYZus{}dict}\PY{o}{=}\PY{p}{\PYZob{}}\PY{n}{x}\PY{p}{:} \PY{n}{batch\PYZus{}x}\PY{p}{,} \PY{n}{y}\PY{p}{:} \PY{n}{batch\PYZus{}y}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{n}{total\PYZus{}accuracy} \PY{o}{+}\PY{o}{=} \PY{p}{(}\PY{n}{accuracy} \PY{o}{*} \PY{n+nb}{len}\PY{p}{(}\PY{n}{batch\PYZus{}x}\PY{p}{)}\PY{p}{)} \PY{k}{return} \PY{n}{total\PYZus{}accuracy} \PY{o}{/} \PY{n}{num\PYZus{}examples} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}9}]:} \PY{k}{with} \PY{n}{tf}\PY{o}{.}\PY{n}{Session}\PY{p}{(}\PY{p}{)} \PY{k}{as} \PY{n}{sess}\PY{p}{:} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{global\PYZus{}variables\PYZus{}initializer}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{num\PYZus{}examples} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{X\PYZus{}train}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Training...}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{EPOCHS}\PY{p}{)}\PY{p}{:} \PY{n}{X\PYZus{}train}\PY{p}{,} \PY{n}{y\PYZus{}train} \PY{o}{=} \PY{n}{shuffle}\PY{p}{(}\PY{n}{X\PYZus{}train}\PY{p}{,} \PY{n}{y\PYZus{}train}\PY{p}{)} \PY{k}{for} \PY{n}{offset} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{num\PYZus{}examples}\PY{p}{,} \PY{n}{BATCH\PYZus{}SIZE}\PY{p}{)}\PY{p}{:} \PY{n}{end} \PY{o}{=} \PY{n}{offset} \PY{o}{+} \PY{n}{BATCH\PYZus{}SIZE} \PY{n}{batch\PYZus{}x}\PY{p}{,} \PY{n}{batch\PYZus{}y} \PY{o}{=} 
\PY{n}{X\PYZus{}train}\PY{p}{[}\PY{n}{offset}\PY{p}{:}\PY{n}{end}\PY{p}{]}\PY{p}{,} \PY{n}{y\PYZus{}train}\PY{p}{[}\PY{n}{offset}\PY{p}{:}\PY{n}{end}\PY{p}{]} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{training\PYZus{}operation}\PY{p}{,} \PY{n}{feed\PYZus{}dict}\PY{o}{=}\PY{p}{\PYZob{}}\PY{n}{x}\PY{p}{:} \PY{n}{batch\PYZus{}x}\PY{p}{,} \PY{n}{y}\PY{p}{:} \PY{n}{batch\PYZus{}y}\PY{p}{,} \PY{n}{dropout}\PY{p}{:} \PY{n}{dropout\PYZus{}training}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{n}{validation\PYZus{}accuracy} \PY{o}{=} \PY{n}{evaluate}\PY{p}{(}\PY{n}{X\PYZus{}valid}\PY{p}{,} \PY{n}{y\PYZus{}valid}\PY{p}{)} \PY{n}{train\PYZus{}accuracy} \PY{o}{=} \PY{n}{evaluate}\PY{p}{(}\PY{n}{X\PYZus{}train}\PY{p}{,} \PY{n}{y\PYZus{}train}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{EPOCH }\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s2}{ ...}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Validation Accuracy = }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{validation\PYZus{}accuracy}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Train Accuracy = }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{train\PYZus{}accuracy}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{p}{)} \PY{n}{saver}\PY{o}{.}\PY{n}{save}\PY{p}{(}\PY{n}{sess}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{model/c3c3p\PYZus{}c3c3p\PYZus{}fc\PYZus{}fc\PYZus{}fc.ckpt}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Model saved}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Training{\ldots} EPOCH 1 {\ldots} Validation Accuracy = 0.052 Train Accuracy = 0.079 EPOCH 2 {\ldots} Validation Accuracy = 0.193 Train Accuracy = 0.239 EPOCH 3 {\ldots} Validation Accuracy = 0.349 Train Accuracy = 0.451 EPOCH 4 {\ldots} Validation Accuracy = 0.526 Train Accuracy = 0.612 EPOCH 5 {\ldots} Validation Accuracy = 0.636 Train Accuracy = 0.713 EPOCH 6 {\ldots} Validation Accuracy = 0.751 Train Accuracy = 0.836 EPOCH 7 {\ldots} Validation Accuracy = 0.780 Train Accuracy = 0.879 EPOCH 8 {\ldots} Validation Accuracy = 0.817 Train Accuracy = 0.916 EPOCH 9 {\ldots} Validation Accuracy = 0.842 Train Accuracy = 0.935 EPOCH 10 {\ldots} Validation Accuracy = 0.859 Train Accuracy = 0.949 EPOCH 11 {\ldots} Validation Accuracy = 0.868 Train Accuracy = 0.962 EPOCH 12 {\ldots} Validation Accuracy = 0.876 Train Accuracy = 0.965 EPOCH 13 {\ldots} Validation Accuracy = 0.882 Train Accuracy = 0.973 EPOCH 14 {\ldots} Validation Accuracy = 0.886 Train Accuracy = 0.977 EPOCH 15 {\ldots} Validation Accuracy = 0.900 Train Accuracy = 0.981 EPOCH 16 {\ldots} Validation Accuracy = 0.905 Train Accuracy = 0.985 EPOCH 17 {\ldots} Validation Accuracy = 0.910 Train Accuracy = 0.987 EPOCH 18 {\ldots} Validation Accuracy = 0.911 Train Accuracy = 0.989 EPOCH 19 {\ldots} Validation Accuracy = 0.919 Train Accuracy = 0.990 EPOCH 20 {\ldots} Validation Accuracy = 0.917 Train Accuracy = 0.990 EPOCH 21 {\ldots} Validation Accuracy = 0.923 Train Accuracy = 0.993 EPOCH 22 {\ldots} Validation Accuracy = 0.923 Train Accuracy = 0.993 EPOCH 23 {\ldots} Validation Accuracy = 0.929 Train Accuracy = 0.995 EPOCH 24 {\ldots} Validation Accuracy = 0.923 Train Accuracy = 0.995 EPOCH 25 {\ldots} Validation Accuracy = 0.930 Train Accuracy = 0.996 EPOCH 26 {\ldots} Validation Accuracy = 0.928 Train 
Accuracy = 0.996 EPOCH 27 {\ldots} Validation Accuracy = 0.929 Train Accuracy = 0.997 EPOCH 28 {\ldots} Validation Accuracy = 0.928 Train Accuracy = 0.997 EPOCH 29 {\ldots} Validation Accuracy = 0.932 Train Accuracy = 0.997 EPOCH 30 {\ldots} Validation Accuracy = 0.937 Train Accuracy = 0.998 EPOCH 31 {\ldots} Validation Accuracy = 0.935 Train Accuracy = 0.998 EPOCH 32 {\ldots} Validation Accuracy = 0.932 Train Accuracy = 0.998 EPOCH 33 {\ldots} Validation Accuracy = 0.937 Train Accuracy = 0.999 EPOCH 34 {\ldots} Validation Accuracy = 0.935 Train Accuracy = 0.999 EPOCH 35 {\ldots} Validation Accuracy = 0.940 Train Accuracy = 0.999 EPOCH 36 {\ldots} Validation Accuracy = 0.936 Train Accuracy = 0.999 EPOCH 37 {\ldots} Validation Accuracy = 0.940 Train Accuracy = 0.999 EPOCH 38 {\ldots} Validation Accuracy = 0.928 Train Accuracy = 0.999 EPOCH 39 {\ldots} Validation Accuracy = 0.935 Train Accuracy = 0.999 EPOCH 40 {\ldots} Validation Accuracy = 0.937 Train Accuracy = 0.999 EPOCH 41 {\ldots} Validation Accuracy = 0.926 Train Accuracy = 0.998 EPOCH 42 {\ldots} Validation Accuracy = 0.942 Train Accuracy = 1.000 EPOCH 43 {\ldots} Validation Accuracy = 0.943 Train Accuracy = 1.000 EPOCH 44 {\ldots} Validation Accuracy = 0.946 Train Accuracy = 1.000 EPOCH 45 {\ldots} Validation Accuracy = 0.942 Train Accuracy = 1.000 EPOCH 46 {\ldots} Validation Accuracy = 0.945 Train Accuracy = 1.000 EPOCH 47 {\ldots} Validation Accuracy = 0.946 Train Accuracy = 1.000 EPOCH 48 {\ldots} Validation Accuracy = 0.944 Train Accuracy = 1.000 EPOCH 49 {\ldots} Validation Accuracy = 0.939 Train Accuracy = 1.000 EPOCH 50 {\ldots} Validation Accuracy = 0.941 Train Accuracy = 1.000 Model saved \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}10}]:} \PY{k}{with} \PY{n}{tf}\PY{o}{.}\PY{n}{Session}\PY{p}{(}\PY{p}{)} \PY{k}{as} \PY{n}{sess}\PY{p}{:} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{global\PYZus{}variables\PYZus{}initializer}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{saver}\PY{o}{.}\PY{n}{restore}\PY{p}{(}\PY{n}{sess}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{model/c3c3p\PYZus{}c3c3p\PYZus{}fc\PYZus{}fc\PYZus{}fc.ckpt}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{test\PYZus{}accuracy} \PY{o}{=} \PY{n}{evaluate}\PY{p}{(}\PY{n}{X\PYZus{}test}\PY{p}{,} \PY{n}{y\PYZus{}test}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Test Accuracy = }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{test\PYZus{}accuracy}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] INFO:tensorflow:Restoring parameters from model/c3c3p\_c3c3p\_fc\_fc\_fc.ckpt Test Accuracy = 0.944 \end{Verbatim} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \subsection{Step 3: Test a Model on New Images}\label{step-3-test-a-model-on-new-images} To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find \texttt{signnames.csv} useful as it contains mappings from the class id (integer) to the actual sign name. 
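
The cells that follow look up human-readable names as \texttt{signnames[...]}. As a minimal sketch (not part of the original notebook, and assuming the two-column \texttt{ClassId}/\texttt{SignName} layout of the file shipped with the project repository), such a lookup table can be built once with Python's \texttt{csv} module:

\begin{verbatim}
import csv

# Illustrative sketch: build a class-id -> sign-name lookup from signnames.csv.
# Assumes the file has a header row followed by "ClassId,SignName" pairs.
with open("signnames.csv") as f:
    reader = csv.reader(f)
    next(reader)                 # skip the header row
    signnames = dict((int(row[0]), row[1]) for row in reader)

print(signnames[14])             # e.g. class 14 is "Stop" in this dataset
\end{verbatim}

If the notebook already defines \texttt{signnames} in an earlier cell (for example via \texttt{pandas}), this sketch is redundant and can be skipped.
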
\subsubsection{Load and Output the Images}\label{load-and-output-the-images} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}11}]:} \PY{k+kn}{import} \PY{n+nn}{os} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{image} \PY{k}{as} \PY{n+nn}{mpimg} \PY{k+kn}{from} \PY{n+nn}{PIL} \PY{k}{import} \PY{n}{Image} \PY{n}{i}\PY{o}{=}\PY{l+m+mi}{0} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{file} \PY{o+ow}{in} \PY{n}{os}\PY{o}{.}\PY{n}{listdir}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{:} \PY{n}{image} \PY{o}{=} \PY{n}{Image}\PY{o}{.}\PY{n}{open}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures/}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{file}\PY{p}{)} \PY{n}{image}\PY{o}{.}\PY{n}{thumbnail}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{)}\PY{p}{,} \PY{n}{Image}\PY{o}{.}\PY{n}{LANCZOS}\PY{p}{)} \PY{c+c1}{\PYZsh{} resizes image in\PYZhy{}place} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{normalize}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{image}\PY{p}{)}\PY{p}{)}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{n}{y\PYZus{}test\PYZus{}own\PYZus{}images} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{7}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{22}\PY{p}{,} \PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mi}{17}\PY{p}{,} \PY{l+m+mi}{17}\PY{p}{,} \PY{l+m+mi}{15}\PY{p}{,} \PY{l+m+mi}{13}\PY{p}{,} \PY{l+m+mi}{30}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{,} \PY{l+m+mi}{34}\PY{p}{,} \PY{l+m+mi}{35}\PY{p}{,} \PY{l+m+mi}{27}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{,} \PY{l+m+mi}{14}\PY{p}{,} \PY{l+m+mi}{14}\PY{p}{]}\PY{p}{)} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images} \PY{o}{=} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)} \PY{c+c1}{\PYZsh{}tf.reset\PYZus{}default\PYZus{}graph()} \PY{k}{with} \PY{n}{tf}\PY{o}{.}\PY{n}{Session}\PY{p}{(}\PY{p}{)} \PY{k}{as} \PY{n}{sess}\PY{p}{:} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{global\PYZus{}variables\PYZus{}initializer}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{saver}\PY{o}{.}\PY{n}{restore}\PY{p}{(}\PY{n}{sess}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{model/c3c3p\PYZus{}c3c3p\PYZus{}fc\PYZus{}fc\PYZus{}fc.ckpt}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{test\PYZus{}accuracy} \PY{o}{=} \PY{n}{evaluate}\PY{p}{(}\PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{,} \PY{n}{y\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Test Accuracy = }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{test\PYZus{}accuracy}\PY{p}{)}\PY{p}{)} \PY{n}{result} \PY{o}{=} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{logits}\PY{p}{,} \PY{n}{feed\PYZus{}dict}\PY{o}{=}\PY{p}{\PYZob{}}\PY{n}{x}\PY{p}{:} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{n}{y\PYZus{}est} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{argmax}\PY{p}{(}\PY{n}{result}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{y\PYZus{}est}\PY{p}{)} 
\PY{n+nb}{print}\PY{p}{(}\PY{n}{y\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{y\PYZus{}est} \PY{o}{==} \PY{n}{y\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] INFO:tensorflow:Restoring parameters from model/c3c3p\_c3c3p\_fc\_fc\_fc.ckpt Test Accuracy = 0.700 [ 1 0 1 1 18 29 29 14 17 17 15 13 28 9 34 35 27 11 14 14] [ 7 0 1 2 18 18 22 25 17 17 15 13 30 9 34 35 27 11 14 14] [False True True False True False False False True True True True False True True True True True True True] \end{Verbatim} \subsubsection{Predict the Sign Type for Each Image}\label{predict-the-sign-type-for-each-image} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}12}]:} \PY{n}{i}\PY{o}{=}\PY{l+m+mi}{0} \PY{n}{f}\PY{p}{,} \PY{n}{axarr} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mi}{25}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{file} \PY{o+ow}{in} \PY{n}{os}\PY{o}{.}\PY{n}{listdir}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{:} \PY{n}{image} \PY{o}{=} \PY{n}{Image}\PY{o}{.}\PY{n}{open}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures/}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{file}\PY{p}{)} \PY{n}{image}\PY{o}{.}\PY{n}{thumbnail}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{)}\PY{p}{,} \PY{n}{Image}\PY{o}{.}\PY{n}{LANCZOS}\PY{p}{)} \PY{c+c1}{\PYZsh{} resizes image in\PYZhy{}place} \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].imshow(image) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].set\PYZus{}title(\PYZdq{}Label:\PYZbs{}n\PYZdq{} + signnames[y\PYZus{}test\PYZus{}own\PYZus{}images[i]] + \PYZdq{}\PYZbs{}nPredicted:\PYZbs{}n\PYZdq{} + signnames[y\PYZus{}est[i]]) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}xaxis().set\PYZus{}visible(False) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}yaxis().set\PYZus{}visible(False) \PY{n}{i} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_27_0.png} \end{center} { \hspace*{\fill} \\} \subsubsection{Output Top 5 Softmax Probabilities For Each Image Found on the Web}\label{output-top-5-softmax-probabilities-for-each-image-found-on-the-web} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}13}]:} \PY{k}{with} \PY{n}{tf}\PY{o}{.}\PY{n}{Session}\PY{p}{(}\PY{p}{)} \PY{k}{as} \PY{n}{sess}\PY{p}{:} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{global\PYZus{}variables\PYZus{}initializer}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{n}{saver}\PY{o}{.}\PY{n}{restore}\PY{p}{(}\PY{n}{sess}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{model/c3c3p\PYZus{}c3c3p\PYZus{}fc\PYZus{}fc\PYZus{}fc.ckpt}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{logits} \PY{o}{=} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{logits}\PY{p}{,} \PY{n}{feed\PYZus{}dict}\PY{o}{=}\PY{p}{\PYZob{}}\PY{n}{x}\PY{p}{:} \PY{n}{X\PYZus{}test\PYZus{}own\PYZus{}images}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{n}{result} \PY{o}{=} 
\PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{softmax}\PY{p}{(}\PY{n}{logits}\PY{o}{=}\PY{n}{logits}\PY{p}{)}\PY{p}{)} \PY{n}{top5} \PY{o}{=} \PY{n}{sess}\PY{o}{.}\PY{n}{run}\PY{p}{(}\PY{n}{tf}\PY{o}{.}\PY{n}{nn}\PY{o}{.}\PY{n}{top\PYZus{}k}\PY{p}{(}\PY{n}{result}\PY{p}{,} \PY{n}{k}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] INFO:tensorflow:Restoring parameters from model/c3c3p\_c3c3p\_fc\_fc\_fc.ckpt \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}14}]:} \PY{n}{i}\PY{o}{=}\PY{l+m+mi}{0} \PY{n}{f}\PY{p}{,} \PY{n}{axarr} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{25}\PY{p}{,} \PY{l+m+mi}{35}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{file} \PY{o+ow}{in} \PY{n}{os}\PY{o}{.}\PY{n}{listdir}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{:} \PY{n}{image} \PY{o}{=} \PY{n}{Image}\PY{o}{.}\PY{n}{open}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{webpictures/}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{file}\PY{p}{)} \PY{n}{image}\PY{o}{.}\PY{n}{thumbnail}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{32}\PY{p}{,} \PY{l+m+mi}{32}\PY{p}{)}\PY{p}{,} \PY{n}{Image}\PY{o}{.}\PY{n}{LANCZOS}\PY{p}{)} \PY{c+c1}{\PYZsh{} resizes image in\PYZhy{}place} \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].imshow(image) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].set\PYZus{}title(\PYZdq{}Label:\PYZbs{}n\PYZdq{} + signnames[y\PYZus{}test\PYZus{}own\PYZus{}images[i]] \PYZbs{} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Top1: }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{signnames}\PY{p}{[}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{]} \PYZbs{} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Top2: }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{signnames}\PY{p}{[}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]} \PYZbs{} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Top3: }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{signnames}\PY{p}{[}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{]} \PYZbs{} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Top4: 
}\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{signnames}\PY{p}{[}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{]} \PYZbs{} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{Top5: }\PY{l+s+si}{\PYZob{}:.3f\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s2}{\PYZdq{}} \PY{o}{+} \PY{n}{signnames}\PY{p}{[}\PY{n}{top5}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{]}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}xaxis().set\PYZus{}visible(False) \PY{n}{axarr}\PY{p}{[}\PY{n+nb}{int}\PY{p}{(}\PY{n}{i}\PY{o}{/}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{,} \PY{n}{i}\PY{o}{\PYZpc{}}\PY{k}{5}].get\PYZus{}yaxis().set\PYZus{}visible(False) \PY{n}{i} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_30_0.png} \end{center} { \hspace*{\fill} \\} \subsubsection{Project Writeup}\label{project-writeup} Once you have completed the code implementation, document your results in a project writeup using this \href{https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md}{template} as a guide. The writeup can be a Markdown or PDF file. \begin{quote} \textbf{Note}: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the IPython Notebook as an HTML document. You can do this by using the menu above and navigating to \textbf{File -\textgreater{} Download as -\textgreater{} HTML (.html)}. Include the finished document along with this notebook as your submission. \end{quote}
% Add a bibliography block to the postdoc
\end{document}
{ "alphanum_fraction": 0.5673652249, "avg_line_length": 63.2041392286, "ext": "tex", "hexsha": "8784683ebd225ca9ddb38c0eb0e698f609725c74", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74569d33339ed4f9f3ce5e62a0630695b48bd429", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "FinnHendricks/UdacitySDC_Project3_TrafficSignClassification", "max_forks_repo_path": "notebook.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74569d33339ed4f9f3ce5e62a0630695b48bd429", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "FinnHendricks/UdacitySDC_Project3_TrafficSignClassification", "max_issues_repo_path": "notebook.tex", "max_line_length": 619, "max_stars_count": null, "max_stars_repo_head_hexsha": "74569d33339ed4f9f3ce5e62a0630695b48bd429", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "FinnHendricks/UdacitySDC_Project3_TrafficSignClassification", "max_stars_repo_path": "notebook.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 29010, "size": 67186 }
\documentclass{book} \usepackage[backend=bibtex,style=alphabetic]{biblatex} \addbibresource{custom.bib} \begin{document} \tableofcontents \newpage \chapter{First Chapter} \section{First Section} Here is some text. \cite{One} \printbibliography \end{document} \endinput
{ "alphanum_fraction": 0.7474048443, "avg_line_length": 13.7619047619, "ext": "tex", "hexsha": "c49244bb1adbe68229074f65eb663d26a69c171c", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-04-22T10:39:36.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-27T13:15:22.000Z", "max_forks_repo_head_hexsha": "4245e8b6de1430a8205553ebbe99b4a36c90df49", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "reallyinsane/mathan-latex-maven-plugin", "max_forks_repo_path": "mathan-latex-it/src/test/resources/configuration/resources/src/main/tex/main.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "4245e8b6de1430a8205553ebbe99b4a36c90df49", "max_issues_repo_issues_event_max_datetime": "2021-11-07T11:34:01.000Z", "max_issues_repo_issues_event_min_datetime": "2019-04-09T19:47:47.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "reallyinsane/mathan-latex-maven-plugin", "max_issues_repo_path": "mathan-latex-it/src/test/resources/configuration/resources/src/main/tex/main.tex", "max_line_length": 54, "max_stars_count": 7, "max_stars_repo_head_hexsha": "4245e8b6de1430a8205553ebbe99b4a36c90df49", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "reallyinsane/mathan-latex-maven-plugin", "max_stars_repo_path": "mathan-latex-it/src/test/resources/configuration/resources/src/main/tex/main.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-19T18:52:14.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-05T10:53:01.000Z", "num_tokens": 89, "size": 289 }
% \iffalse meta-comment % % Copyright 2019-2020 % The LaTeX Project and any individual authors listed elsewhere % in this file. % % This file is part of the LaTeX base system. % ------------------------------------------- % % It may be distributed and/or modified under the % conditions of the LaTeX Project Public License, either version 1.3c % of this license or (at your option) any later version. % The latest version of this license is in % https://www.latex-project.org/lppl.txt % and version 1.3c or later is part of all distributions of LaTeX % version 2008 or later. % % This file has the LPPL maintenance status "maintained". % % The list of all files belonging to the LaTeX base distribution is % given in the file `manifest.txt'. See also `legal.txt' for additional % information. % % The list of derived (unpacked) files belonging to the distribution % and covered by LPPL is defined by the unpacking scripts (with % extension .ins) which are part of the distribution. % % \fi % Filename: ltnews32.tex % % This is issue 32 of LaTeX News. \NeedsTeXFormat{LaTeX2e}[2020-02-02] \documentclass{ltnews} \usepackage[T1]{fontenc} \usepackage{lmodern,url,hologo} \usepackage{csquotes} \usepackage{multicol} \providecommand\meta[1]{$\langle$\textrm{\itshape#1}$\rangle$} \providecommand\option[1]{\texttt{#1}} \providecommand\env[1]{\texttt{#1}} \providecommand\file[1]{\texttt{#1}} \providecommand\Arg[1]{\texttt\{\meta{#1}\texttt\}} \providecommand\eTeX{\hologo{eTeX}} \providecommand\XeTeX{\hologo{XeTeX}} \providecommand\LuaTeX{\hologo{LuaTeX}} \providecommand\pdfTeX{\hologo{pdfTeX}} \providecommand\MiKTeX{\hologo{MiKTeX}} \providecommand\CTAN{\textsc{ctan}} \providecommand\TL{\TeX\,Live} \providecommand\githubissue[2][]{\ifhmode\unskip\fi \quad\penalty500\strut\nobreak\hfill \mbox{\small\slshape(% \href{https://github.com/latex3/latex2e/issues/\getfirstgithubissue#2 \relax}% {github issue#1 #2}% )}% \par\smallskip} % simple solution right now (just link to the first issue if there are more) \def\getfirstgithubissue#1 #2\relax{#1} \providecommand\sxissue[1]{\ifhmode\unskip\fi \quad\penalty500\strut\nobreak\hfill \mbox{\small\slshape(\url{https://tex.stackexchange.com/#1})}\par} \providecommand\gnatsissue[2]{\ifhmode\unskip\fi \quad\penalty500\strut\nobreak\hfill \mbox{\small\slshape(% \href{https://www.latex-project.org/cgi-bin/ltxbugs2html?pr=#1\%2F#2}% {gnats issue #1/#2}% )}% \par} \let\cls\pkg \providecommand\env[1]{\texttt{#1}} \vbadness=1400 % accept slightly empty columns %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \providecommand\tubcommand[1]{} \tubcommand{\input{tubltmac}} \publicationmonth{October} \publicationyear{2020} \publicationissue{32} \begin{document} \tubcommand{\addtolength\textheight{4.2pc}} % only for TUB \maketitle {\hyphenpenalty=10000 \spaceskip=3.33pt \hbadness=10000 \tableofcontents} \setlength\rightskip{0pt plus 3em} \medskip \section{Introduction} The 2020-10-01 release of \LaTeX{} shows that work on improving \LaTeX{} has again intensified. The two most important new features are the kernel support for \pkg{xparse} and the introduction of the new hook management system for \LaTeX{}, but as you can see there are many smaller enhancements and bug fixes added to the kernel and various packages. \section{Providing \pkg{xparse} in the format} The official interface in the \LaTeXe{} kernel for creating document-level commands has always been \cs{newcommand}. This was a big step forward from \LaTeX~2.09. 
However, it was still very limited in the types of command it can create: those taking at most one optional argument in square brackets, then zero or more mandatory arguments. Richer syntaxes required use of the \TeX{} \cs{def} primitive along with appropriate low-level macro programming. The \LaTeX{} team started work on a comprehensive document-command parser, \pkg{xparse}, in the late 1990s. In the past decade, the experimental ideas it provides have been carefully worked through and moved to a stable footing. As such, \pkg{xparse} is now used to define a very large number of document and package commands. It does this by providing a rich and self-consistent syntax to describe a wide range of interfaces seen in \LaTeX{} packages. The ideas developed in \pkg{xparse} are now sufficiently well tested that the majority can be transferred into the \LaTeX{} kernel. Thus the following commands have been added \begin{itemize} \item \cs{NewDocumentCommand}, \cs{RenewDocumentCommand}, \cs{ProvideDocumentCommand}, \cs{DeclareDocumentCommand} \item \cs{NewExpandableDocumentCommand}, \cs{RenewExpandableDocumentCommand}, \cs{ProvideExpandableDocumentCommand}, \cs{DeclareExpandableDocumentCommand} \item \cs{NewDocumentEnvironment}, \cs{RenewDocumentEnvironment}, \cs{ProvideDocumentEnvironment}, \cs{DeclareDocumentEnvironment} \item \cs{BooleanTrue} \cs{BooleanFalse} \item \cs{IfBooleanTF}, \cs{IfBooleanT}, \cs{IfBooleanF} \item \cs{IfNoValueTF}, \cs{IfNoValueT}, \cs{IfNoValueF} \item \cs{IfValueTF}, \cs{IfValueT}, \cs{IfValueF} \item \cs{SplitArgument}, \cs{SplitList}, \cs{TrimSpaces}, \cs{ProcessList}, \cs{ReverseBoolean} \item \cs{GetDocumentCommandArgSpec} \cs{GetDocumentEnvironmentArgSpec} \end{itemize} Most, but not all, of the argument types defined by \pkg{xparse} are now supported at the kernel level. In particular, the types \texttt{g}/\texttt{G}, \texttt{l} and \texttt{u} are \emph{not} provided by the kernel code; these are deprecated but still available by explicitly loading \pkg{xparse}. All other argument types \emph{are} now available directly within the \LaTeXe{} kernel. \section{A hook management system for \LaTeX{}} With the fall 2020 release of \LaTeX{} we provide a general hook management system for the kernel and for packages. This will allow packages to safely add code to various kernel and package hooks and if necessary define rules to reorder the code in the hooks to resolve typical package loading order issues. This hook system is written in the L3 programming layer and thus forms the first larger application within the kernel that makes use of the \LaTeX3 functionality now available (if we discount \pkg{xparse} which has already been available for a long time as a separate package). The file \file{lthooks.dtx} holds the core management code for hooks and defines basic hooks for environments (as previously offered by \pkg{etoolbox}), \file{ltshipout.dtx} provides kernel hooks into the shipout process (making packages like \pkg{atbegshi}, etc., unnecessary) and the file \file{ltfilehook.dtx} holds redefinitions for commands like \cs{input} or \cs{usepackage} so that they offer hooks in a similar fashion to what is provided by the \pkg{filehook} package. At the moment the integration is lightweight, overwriting definitions made earlier during format generation (though this will change after more thorough testing). 
For that reason the documentation isn't in its final form either and you have to read through three different documents: \begin{description} \item[\file{lthooks-doc.pdf}] Core management interface and basic hooks for environments provided by the kernel. \item[\file{ltshipout-doc.pdf}] Hooks accessible while a page is being shipped out. \item[\file{ltfilehook-doc.pdf}] Hooks used when reading a file. \end{description} For those who wish to also study the code, replace \texttt{-doc} with \texttt{-code}, e.g., \file{lthooks-code.pdf}. All documents should be accessible via \texttt{texdoc}, e.g., \begin{verbatim} texdoc lthooks-doc \end{verbatim} should open the core documentation for you. \section{Other changes to the \LaTeX{} kernel} \subsection{\cs{symbol} in math mode for large Unicode values} The \LaTeXe{} kernel defines the command \cs{symbol}, which allows characters to be typeset by entering their `slot number'. With the \LuaTeX{} and \XeTeX{} engines, these slot numbers can extend to very large values to accommodate Unicode characters in the upper Unicode planes (e.g., bold mathematical capital A is slot number \texttt{"1D400} in hex or \texttt{119808} in decimal). The \XeTeX{} engine did not allow \cs{symbol} in math mode for values above $2^{16}$; this limitation has now been lifted. % \githubissue{124} \subsection{Correct Unicode value of \cs{=y} (\=y)} The Unicode slot for \=y was incorrectly pointing to the slot for \=Y. This has been corrected. % \githubissue{326} \subsection{Add support for Unicode soft hyphens} For a long time, the UTF-8 option for \pkg{inputenc} made the Unicode soft hyphen character (U+00AD) an alias for the \LaTeX\ soft hyphen \cs{-}. The Unicode engines \XeTeX{} and \LuaTeX{} behaved differently though: They either ignored U+00AD or interpreted it as an unconditional hyphen. This inconsistency is fixed now and \LaTeX{} always treats U+00AD as \cs{-}. % \githubissue{323} \subsection{Fix capital accents in Unicode engines} In Unicode engines the capital accents such as \cs{capitalcedilla}, etc., have been implemented as trivial shorthands for the normal accents (because other than Computer Modern virtually no fonts support them), but that failed when \pkg{hyperref} got loaded. This has been corrected. % \githubissue{332} \subsection{Support \pkg{calc} in various kernel commands} The \cs{hspace}, \cs{vspace}, \cs{addvspace}, \cs{\textbackslash} and other commands simply passed their argument to a \TeX{} primitive to produce the necessary space. As a result it was impossible to specify anything other than a simple dimension value in such arguments. This has been changed, so that now \pkg{calc} syntax is also supported with these commands. % \githubissue{152} \subsection{Support \eTeX\ length expressions in \env{picture} coordinates} Picture mode coordinates specified with \texttt{(\_,\_)} previously accepted multiples of \cs{unitlength}. They now also allow \eTeX\ length expressions (as used by the \cs{glueexpr} primitive although all uses in \env{picture} mode are non-stretchy). So, valid uses include \verb|\put(2,2)| as previously, but now also uses such as\tubcommand\\ \verb|\put(\textwidth-5cm,0.4\textheight)|. Note that you can only use expressions with lengths; \verb|\put(1+2,0)| is not supported. \subsection{Spaces in filenames of included files} File names containing spaces lead to unexpected results when used in the commands \cs{include} and \cs{includeonly}. 
This has now been fixed and the argument to \cs{include} can contain a file name containing spaces. Leading or trailing spaces will be stripped off but spaces within the file name are kept. The argument to \cs{includeonly}, which is a comma-separated list of files to process, can also contain spaces with any leading and trailing spaces stripped from the individual filenames while spaces \emph{in} the file names will remain intact. % \githubissue[s]{217 and 218} \subsection{Avoid extra line in \cs{centering}, \cs{raggedleft} or \cs{raggedright}} If we aren't justifying paragraphs then a very long word (longer than a line) could result in an unnecessary extra line in order to prevent a hyphen in the second-last line of the paragraph. This is now avoided by setting \cs{finalhyphendemerits} to zero in unjustified settings. % \githubissue{274} \subsection{Set a non-zero \cs{baselineskip} in text scripts} As \cs{textsuperscript} and \cs{textsubscript} usually contain only a few characters on a single line the \cs{baselineskip} was set to zero. However, \pkg{hyperref} uses that value to determine the height of a link box which consequently came out far too small. This has been adjusted. % \githubissue{249} \subsection{Spacing issues when using \cs{linethickness}} In some circumstances the use of \cs{linethickness} introduced a spurious space that shifted objects in a \env{picture} environment to the right. This has been corrected. % \githubissue{274} \subsection{Better support for the legacy series default interface} In the initial implementation of \LaTeX's font selection scheme (NFSS) changes to any default were carried out by redefining some commands, e.g., \cs{seriesdefault}. In 2019 we introduced various extensions and with it new methods of customizing certain parts of NFSS, e.g., the recommended way for changing the series default(s) is now through \cs{DeclareFontSeriesDefault}~\cite{32:ltnews31}. In this release we improved the support for legacy documents using the old method to cover additional edge cases. % \githubissue[s]{306 and 315} \subsection{Support for uncommon font series defaults} If a font family was set up with fairly unusual font series defaults, e.g., \begin{verbatim} \renewcommand\ttdefault{lmvtt} \DeclareFontSeriesDefault[tt]{md}{lm} \DeclareFontSeriesDefault[tt]{bf}{bm} \end{verbatim} then a switch between the main document families, e.g., \verb=\ttfamily...\rmfamily= did not always correctly continue typesetting in medium or bold series if that involved adjusting the values used by \verb=\mdseries= or \verb=\bfseries=. This has now been corrected. % \githubissue{291} \tubcommand{\vspace*{-6pt}\break} \subsection{Checking the current font series context} Sometimes it is necessary to define commands that act differently when used in bold context (e.g., inside \cs{textbf}). Now that it is possible in \LaTeX{} to specify different \enquote{\texttt{bf}} defaults based for each of the three meta families (\texttt{rm}, \texttt{sf} and \texttt{tt}) via \cs{DeclareFontSeriesDefault}, it is no longer easy to answer the question \enquote{am I typesetting in a bold context?}. 
To help with this problem a new command was provided: \begin{quote} \cs{IfFontSeriesContextTF}\Arg{context}\\ \hspace*{4em} \Arg{true code}\Arg{false code} \end{quote} The \meta{context} can be either \texttt{bf} (bold) or \texttt{md} (medium) and depending on whether or not the current font is recognized as being selected through \cs{bfseries} or \cs{mdseries} the \meta{true code} or \meta{false code} is executed. As an example \begin{verbatim} \usepackage{bm} % (bold math) \newcommand\vbeta{\IfFontSeriesContextTF{bf}% {\ensuremath{\bm{\beta}}}% {\ensuremath{\beta}}} \end{verbatim} This way you can write \cs{vbeta}\texttt{-isotopes} and if used in a heading it comes out in a bolder version. % \githubissue{336} \subsection{Avoid spurious package option warning} When a package is loaded with a number of options, say \texttt{X}, \texttt{Y} and \texttt{Z}, and then later another loading attempt was made with a subset of the options or no options, it was possible to get an error message that option \texttt{X} is not known to the package. This obviously incorrect error was due to a timing issue where the list of available options got lost prematurely. This has now been fixed. % \githubissue{22} \subsection{Adjusting \option{fleqn}} In \pkg{amsmath} the \cs{mathindent} parameter used with the \option{fleqn} design is a rubber length parameter allowing for setting it to a value such as \texttt{1em minus 1em}, i.e., so that the normal indentation can be reduced in case of very wide math displays. This is now also supported by the \LaTeX{} standard classes. In addition a compressible space between formula and equation number in the \env{equation} environment got added when the \option{fleqn} option is used so that a very wide formula doesn't bump into the equation number. % \githubissue{252} \subsection{Provide \cs{clap}} \LaTeX{} has inherited \cs{llap} and \cs{rlap} from plain \TeX{} (zero-sized boxes whose content sticks out to the left or right, respectively) but there isn't a corresponding \cs{clap} command that centers the material. This missing command was added by several packages, e.g., \pkg{mathtools}, and has now been added to the kernel. \subsection{Fix to legacy math alphabet interface} When using the \LaTeX{}~2.09 legacy math alphabet interface, e.g., \verb=$\sf -1$= instead of \verb=$\mathsf{-1}$=, an extra math Ord atom was added to the formula in case the math alphabet was used for the first time. In some cases this math atom would change the spacing, e.g., change the unary minus sign into a binary minus in the above example. This has finally been fixed. % \gnatsissue{latex}{3357} \subsection{Added tests for format, package and class dates} To implement compatibility code or to ensure that certain features are available it is helpful and often necessary to check the date of the format or that of a package or class and execute different code based on the result. For that, \LaTeX\ previously had only internal commands (\cs{@ifpackagelater} and \cs{@ifclasslater}) for testing package or class names, but nothing reasonable for testing the format date. For the latter one had to resort to some obscure command \cs{@ifl@t@r} that, given its cryptic name, was clearly never intended for use even in package or class code. Furthermore, even the existing interface commands were defective as they are testing for \enquote{equal or later} and not for \enquote{later} as their names indicate. 
We have therefore introduced three new CamelCase commands as the official interface for such tests \begin{quote} \cs{IfFormatAtLeastTF}\Arg{date}\\ \hspace*{4em} \Arg{true code}\Arg{false code} \end{quote} and for package and class tests \begin{quote} \cs{IfClassAtLeastTF}\Arg{class name}\Arg{date}\\ \hspace*{4em} \Arg{true code}\Arg{false code} \\ \cs{IfPackageAtLeastTF}\Arg{package~name}\Arg{date}\\ \hspace*{4em} \Arg{true code}\Arg{false code} \end{quote} For compatibility reasons the legacy commands remain available, but we suggest to replace them over time and use the new interfaces in new code. % \githubissue{186} \subsection{Avoid problematic spaces after \cs{verb}} If a user typed \verb*=\verb !~! foo= instead of \verb*=\verb!~! foo= by mistake, then surprisingly the result was ``\verb=!~!=foo'' without any warning or error. % What happened was that the \verb*= = became the argument delimiter due to the rather complex processing done by \cs{verb} to render verbatim. This has been fixed and spaces directly following the command \cs{verb} or \cs{verb*} are now ignored as elsewhere. % \githubissue{327} \subsection{Provide a way to copy robust commands\ldots} With the previous \LaTeXe{} release, several user-level commands were made robust, so the need for a way to create copies of these commands (often to redefine them) increased, and the \LaTeXe{} kernel didn't have a way to do so. Previously this functionality was provided in part by Heiko Oberdiek's \pkg{letltxmacro} package, which allows a robust command \verb=\foo= to be copied to \verb=\bar= with \verb=\LetLtxMacro\bar\foo=. From this release onwards, the \LaTeXe{} kernel provides \cs{NewCommandCopy} (and \verb=\Renew...= and \verb=\Declare...= variants) which functions almost like \verb=\LetLtxMacro=. To the end user, both should work the same way, and one shouldn't need to worry about the definition of the command: \cs{NewCommandCopy} should do the hard work. \cs{NewCommandCopy} knows about the different types of definitions from the \LaTeXe{} kernel, and also from other packages, such as \pkg{xparse}'s command declarations like \cs{NewDocumentCommand}, and \pkg{etoolbox}'s \cs{newrobustcmd}, and it can be extended to cover further packages. % \githubissue{239} \subsection{\ldots\ and a way to \cs{show} them} It is sometimes necessary to look up the definition of a command, and often one not only doesn't know where that command is defined, but doesn't know if it gets redefined by some package, so often enough looking at the source doesn't help. The typical way around this problem is to use \TeX's \cs{show} primitive to look at the definition of a command, which works fine until the command being \cs{show}n is robust. With \verb=\show\frac= one sees \begin{verbatim} >> \frac=macro: ->>\protect \frac . \end{verbatim} which is not very helpful. To show the actual command the user needed to notice that the real definition of \cs{frac} is in the \verb*=\frac = %* macro and do \verb=\expandafter\show\csname frac\space\endcsname=. But with the machinery for copying robust commands in place it is already possible to examine a command and detect (as far as a macro expansion language allows) how it was defined. \cs{ShowCommand} knows that and with \verb=\ShowCommand\frac= the terminal will show \begin{verbatim} >> \frac=robust macro: ->>\protect \frac . >> \frac =\long macro: #1#2->>{\begingroup #1\endgroup \over #2}. 
\end{verbatim} % \githubissue{373} \subsection{Merge \pkg{l3docstrip} into \pkg{docstrip}} The file \pkg{l3docstrip.tex} offered a small extension over the original \pkg{docstrip.tex} file supporting the \texttt{\%\string<@@=\meta{module}\string>} syntax of \pkg{expl3}. This has been merged into \pkg{docstrip} so that it can now be used for both traditional \texttt{.dtx} files and those containing code written in the L3 programming layer language. % \githubissue{337} \subsection{Support vertical typesetting with \pkg{doc}} The \env{macrocode} environment uses a \env{trivlist} internally and as part of this sets up the \cs{@labels} box to contain some horizontal skips, but that box is never used. As a result this generates an issue in some circumstances if the typesetting direction is vertical. This has now been corrected to support such use cases as well. % \githubissue{344} \subsection{Record the counter name stepped by \cs{refstepcounter}} \cs{refstepcounter} now stores the name of the counter in \cs{@currentcounter}. This allows packages like \pkg{zref} and \pkg{hyperref} to store the name without having to patch \cs{refstepcounter}. % \githubissue{300} \subsection{Native Lua\TeX\ behavior for \cs{-}} \LaTeX\ changes \cs{-} to add a discretionary hyphen even if \cs{hyphenchar} is set to $-1$. This change is not necessary under Lua\TeX\ because there \cs{-} is not affected by \cs{hyphenchar} in the first place. Therefore this behavior has been changed to ensure that Lua\TeX's (language specific) hyphenation characters are respected by \cs{-}. \subsection{Allow \cs{par} commands inside \cs{typeout}} \cs{typeout} used to choke when seeing an empty line or a \cs{par} command in its argument. However, sometimes it is used to display arbitrary user input or code (wrapped, for example, in \cs{unexpanded}) which may contain explicit \cs{par} commands. This is now allowed. % \githubissue{335} \subsection{Spacing commands moved from \pkg{amsmath} to the kernel} Originally \LaTeX{} only provided a small set of spacing commands for use in text and math; some of the commands like \cs{;} were only supported in math mode. \pkg{amsmath} normalized and provided all of them in text and math. This code has now been moved to the kernel so that it is generally available. \begin{center} \begin{tabular}{lll} command name(s) & math & text\\\hline \cs{,} \cs{thinspace} & $x\,x$ & x\,x\\ \cs{!} \cs{negthinspace} \; & $x\!x$ & x\!x\\ \cs{:} \cs{>} \cs{medspace} & $x\:x$ & x\:x\\ \cs{negmedspace} & $x\negmedspace x$ & x\negmedspace x\\ \cs{;} \cs{thickspace} & $x\;x$ & x\;x\\ \cs{negthickspace} & $x\negthickspace x$ & x\negthickspace x\\ \end{tabular} \end{center} % \githubissue{303} \subsection{Access raw glyphs in \LuaTeX\ without reloading fonts} \LaTeX's definitions for \cs{textquotesingle},\tubcommand\\ \cs{textasciigrave}, and \cs{textquotedbl} for the TU encoding in \LuaTeX\ need special handling to stop the shaper from replacing these characters with curly quotes. This used to be done by reloading the current font without the \texttt{tlig} feature, but that came with multiple disadvantages: It behaves differently than the corresponding \XeTeX\ code and it is not very efficient. This code has now been replaced with an implementation which injects a protected glyph node which is not affected by font shaping. 
% \githubissue{165} \subsection{Added a fourth empty argument to \cs{contentsline}} \LaTeX's \cs{addcontentsline} writes a \cs{contentsline} command with three arguments to the \texttt{.toc} and similar files. \pkg{hyperref} redefines \cs{addcontentsline} to write a fourth argument. The change unifies the number of arguments by writing an additional empty brace group. % \githubissue{370} \subsection{Lua\TeX\ callback \texttt{new\_graf} made \texttt{exclusive}} Corrected an incorrect callback type which caused return values from the \texttt{new\_graf} callback to be ignored and paragraph indentation to be suppressed. In the new version, only one \texttt{new\_graf} callback handler can be active at a time, which allows this handler to take full control of paragraph indentation. % \githubissue{188} \section{Changes to packages in the \pkg{graphics} category} \subsection{Generate a warning if existing color definition is changed} If a color is defined twice using \cs{DefineNamedColor}, no info text \texttt{Redefining color ...\ in named color model ...}\ was written to the log file, because of a typo in the check. This has been corrected. % \gnatsissue{graphics}{3635} \subsection{Specifying viewport in the \pkg{graphics} package} Specifying a BoundingBox does not really have meaning when including non-EPS graphics in \pdfTeX\ and \LuaTeX. For some years the \pkg{graphicx} package \texttt{bb} key has been interpreted (with a warning) as a \texttt{viewport} key. This feature has been added to the two-argument form of \verb|\includegraphics|, which is mostly used in the \pkg{\mbox{graphics}} package. \verb|\includegraphics[1,2][3,4]{file}| will now be interpreted in \pdfTeX\ and \LuaTeX\ in the same way as \pkg{\mbox{graphicx}}'s\tubcommand\\ \verb|\includegraphics[viewport=1 2 3 4]{file}|. \subsection{Normalizing \cs{endlinechar}} If \cs{endlinechar} is set to $-1$ so that ends of lines are ignored in special contexts, then a low level \TeX\ error would be generated by code parsing BoundingBox comments. The package now locally sets \cs{endlinechar} to its standard value while reading files. % \githubissue{286} \subsection{Files with multiple parts} Sometimes one has a graphics file, say, \file{file.svg}, and converts it to another format to include it in \LaTeX{} and ends up with a file named \file{file.svg.png}. In previous releases, if the user did \verb|\includegraphics{file.svg}|, an error would be raised and the graphics inclusion would fail due to the unknown \verb|.svg| extension. The \pkg{graphics} package now checks if the given extension is known, and if it doesn't, it tries appending the known extensions until it finds a graphics file with a valid extension, otherwise it falls back to the file as requested. % \githubissue{355} \section{Changes to packages in the \pkg{tools} category} \subsection{\pkg{array}: Support stretchable glue in \texttt{w}-columns} If stretchable glue, e.g., \cs{dotfill}, is used in \env{tabular} columns made with the \pkg{array} package, it stretches as it would in normal paragraph text. The one exception was \texttt{w}-columns (but not \texttt{W}-columns) where it got forced to its nominal width (which in case of \cs{hfill} or \cs{dotfill} is 0\,pt). This has been corrected and now \texttt{w}-columns behave like all other column types in this respect. 
% \githubissue{270} \subsection{\pkg{array}: Use math mode for \texttt{w} and \texttt{W}-cells in \env{array}} The \texttt{w} and \texttt{W}-columns are LR-columns very similar to \texttt{l}, \texttt{c} and \texttt{r}. It is therefore natural to expect their cell content to be typeset in math mode instead of text mode if they are used in an \env{array} environment. This has now been adjusted. Note that this is a breaking change in version v2.5! If you have used \texttt{w} or \texttt{W}-columns in older documents either add \texttt{\detokenize{>{$}...<{$}}} for such columns or remove the \texttt{\$} signs in the cells. Alternatively, you can roll back to the old version by loading \pkg{array} with \begin{verbatim} \usepackage{array}[=v2.4] \end{verbatim} in such documents. % \githubissue{297} \subsection{\pkg{array}: Fix for \cs{firsthline} and \cs{lasthline}} Replacing \cs{hline} with \cs{firsthline} or \cs{lasthline} could lead in some cases to an increase of the tabular width. This has now been corrected. % \githubissue{322} \subsection{\pkg{varioref}: Support Japanese as a language option} The package now recognizes \option{japanese} as a language option. The extra complication is that for grammatical reasons \cs{vref}, \cs{Vref}, \cs{vrefrange} and \cs{fullref} need a structure different from all other languages currently supported. To accommodate this, \cs{vrefformat}, \cs{Vrefformat}, \cs{vrefrangeformat}, and \cs{fullrefformat} have been added to all languages. % \githubissue{352} \subsection{\pkg{xr}: Support for spaces in filenames} The command \cs{externaldocument}, provided by \pkg{xr}, now also supports filenames with spaces, just like \cs{include} and \cs{includeonly}. % \githubissue{223} \section{Changes to packages in the \pkg{amsmath} category} \subsection{Placement corrections for two accent commands} The accent commands \cs{dddot} and \cs{ddddot} (producing triple and quadruple dot accents) moved the base character vertically in certain situations if it was a single glyph, e.g., \verb=$Q \dddot{Q}$= were not at the same baseline. This has been corrected. % \githubissue{126} \subsection{Fixes to \env{aligned} and \env{gathered}} The environments \env{aligned} and \env{gathered} have a trailing optional argument to specify the vertical position of the environment with respect to the rest of the line. Allowed values are \texttt{t}, \texttt{b} and \texttt{c} but the code only tested for \texttt{b} and \texttt{t} and assumed anything else must be \texttt{c}. As a result, a formula starting with a bracket group would get mangled without warning---the group being dropped and interpreted as a request for centering. After more than 25 years this has now been corrected. If such a group is found a warning is given and the data is processed as part of the formula. % \githubissue{5} \subsection{Detect Unicode engines when setting \cs{std@minus} and \cs{std@equal}} \pkg{amsmath} now detects the Unicode engines and uses their extended commands to define \cs{std@minus} and \tubcommand{\parfillskip=0pt\par\newpage\noindent}% \cs{std@equal}. This avoids a package like \pkg{unicode-math} having to patch the code in the begin document hook to change the commands. \subsection{Use Lua\TeX{} primitives where applicable} For a number of years \pkg{lualatex-math} patched \cs{frac}, \cs{genfrac} and the \env{subarray} environment to make use of new lua\TeX{} primitives. This code has now been integrated into \pkg{amsmath}. 
\section{Changes to the \pkg{babel} package} Multilingual typesetting has evolved greatly in recent years, and \pkg{babel}, like \LaTeX{} itself, has followed the footsteps of Unicode and the W3C consortia to produce proper output in many languages. Furthermore, the traditional model to define and select languages (which can be called \enquote{vertical}), based on closed files, while still the preferred one in monolingual documents, is being extended with a new model (which can be called \enquote{horizontal}) based on \emph{services} provided by \pkg{babel}, which allows defining and redefining locales with the help of simple \texttt{ini} files based on key\slash value pairs. The \pkg{babel} package provides about 250 of these files, which have been generated with the help of the Unicode Common Language Data Repository. Thanks to the recent advances in \texttt{lualatex} and \pkg{luaotfload}, \pkg{babel} currently provides \emph{services} for bidi typesetting, line breaking for Southeast Asian and CJK scripts, nonstandard hyphenation (like ff to ff-f), alphabetic and additive counters, automatic selection of fonts and languages based on the script, etc. This means \pkg{babel} can be used to typeset a wide variety of languages, such as Russian, Arabic, Hindi, Thai, Japanese, Bangla, Amharic, Greek, and many others. In addition, since these \texttt{ini} files are easily parsable, they can serve as a source for other packages. For further details take a look at the \pkg{babel} package documentation~\cite{32:babel}. \medskip \begin{thebibliography}{9} \fontsize{9.3}{11.3}\selectfont \bibitem{32:ltnews31} \LaTeX{} Project Team: \emph{\LaTeXe{} news 31}.\\ \url{https://latex-project.org/news/latex2e-news/ltnews31.pdf} \bibitem{32:site-doc} \emph{\LaTeX{} documentation on the \LaTeX{} Project Website}.\\ \url{https://latex-project.org/help/documentation/} \bibitem{32:issue-tracker} \emph{\LaTeX{} issue tracker}. \url{https://github.com/latex3/latex2e/issues/} \bibitem{32:babel} Javier Bezos and Johannes Braams. \emph{Babel---Localization and internationalization}.\\ \url{https://www.ctan.org/pkg/babel} \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7571527423, "avg_line_length": 38.1181192661, "ext": "tex", "hexsha": "01bc515db50e1dac3c2865d3194d24101caaa06d", "lang": "TeX", "max_forks_count": 235, "max_forks_repo_forks_event_max_datetime": "2022-03-30T06:23:57.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-16T22:02:49.000Z", "max_forks_repo_head_hexsha": "42a8fa1872e68a53b9551031292c6f633a524d4f", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "seanpm2001/latex2e", "max_forks_repo_path": "base/doc/ltnews32.tex", "max_issues_count": 698, "max_issues_repo_head_hexsha": "42a8fa1872e68a53b9551031292c6f633a524d4f", "max_issues_repo_issues_event_max_datetime": "2022-03-19T01:55:27.000Z", "max_issues_repo_issues_event_min_datetime": "2017-11-11T09:06:31.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "seanpm2001/latex2e", "max_issues_repo_path": "base/doc/ltnews32.tex", "max_line_length": 94, "max_stars_count": 1265, "max_stars_repo_head_hexsha": "ddcf4a17440e4c8868eee27c61fb81be5754b004", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "kesslermaximilian/latex2e", "max_stars_repo_path": "base/doc/ltnews32.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T08:20:59.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-02T13:11:48.000Z", "num_tokens": 9018, "size": 33239 }
\chapter{Unit testing} \label{chap:unit_testing} Unit testing is a standard software engineering practice to aid in ensuring a quality product. A good suite of unit tests provides confidence in refactoring and changing code, furnishes some documentation on how classes and functions are used, and can drive a more decoupled design. If unit tests do not already exist for a section of code, you are encouraged to add them when modifying that section of code. New code additions should also include unit tests. When possible, fixes for specific bugs should also include a unit test that would have caught the bug. \section {Unit testing framework} The Catch framework is used for unit testing. See the project site for a tutorial and documentation: \url{https://github.com/philsquared/Catch}. Catch consists solely of header files. It is distributed as a single include file about 400 KB in size. In QMCPACK, it is stored in \ishell{external\_codes/catch}. \section{Unit test organization} \begin{sloppypar} The source for the unit tests is located in the \ishell{tests} directory under each directory in \ishell{src} (e.g., \ishell{src/QMCWavefunctions/tests}). All of the tests in each \ishell{tests} directory get compiled into an executable. After building the project, the individual unit test executables can be found in \ishell{build/tests/bin}. For example, the tests in \ishell{src/QMCWavefunctions/tests} are compiled into \ishell{build/tests/bin/test\_wavefunction}. \end{sloppypar} All the unit test executables are collected under ctest with the \ishell{unit} label. When checking the whole code, it is useful to run through CMake (\ishell{cmake -L unit}). When working on an individual directory, it is useful to run the individual executable. Some of the tests reference input files. The unit test CMake setup places those input files in particular locations under the \ishell{tests} directory (e.g., \ishell{tests/xml\_test}). The individual test needs to be run from that directory to find the expected input files. Command line options are available on the unit test executables. Some of the more useful ones are \begin{itemize} \item{List command line options.} \item{List all the tests in the executable.} \end{itemize} A test name can be given on the command line to execute just that test. This is useful when iterating on a particular test or when running in the debugger. Test names often contain spaces, so most command line environments require enclosing the test name in single or double quotes. \section{Example} The first example is one test from \ishell{src/Numerics/tests/test\_grid\_functor.cpp}. \begin{minipage}{\linewidth} \begin{lstlisting}[language=C++,caption={Unit test example using Catch.},label=CatchExample,basicstyle=\ttfamily] TEST_CASE("double_1d_grid_functor", "[numerics]") { LinearGrid<double> grid; OneDimGridFunctor<double> f(&grid); grid.set(0.0, 1.0, 3); REQUIRE(grid.size() == 3); REQUIRE(grid.rmin() == 0.0); REQUIRE(grid.rmax() == 1.0); REQUIRE(grid.dh() == Approx(0.5)); REQUIRE(grid.dr(1) == Approx(0.5)); } \end{lstlisting} \end{minipage}\\ The test function declaration is \ishell{TEST\_CASE("double\_1d\_grid\_functor","[numerics]")}. The first argument is the test name, and it must be unique in the test suite. The second argument is an optional list of tags. Each tag is a name surrounded by brackets (\ishell{"[tag1][tag2]"}). It can also be the empty string. 
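As a further illustration, a test that belongs to two tag groups could be declared as in the following sketch; the test name, tags, and assertions here are purely hypothetical.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily]
// illustrative only: a test registered under two tags
TEST_CASE("hypothetical_spline_check", "[numerics][estimators]")
{
  double x = 2.0;
  REQUIRE(x * x == Approx(4.0));
}
\end{lstlisting}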
The \ishell{REQUIRE} macro accepts expressions with C++ comparison operators and records an error if the value of the expression is false. Floating point numbers may have small differences due to roundoff, etc. The \ishell{Approx} class adds some tolerance to the comparison. Place it on either side of the comparison (e.g., \ishell{Approx(a) == 0.3} or \ishell{a = Approx(0.3)}). To adjust the tolerance, use the \ishell{epsilon} and \ishell{scale} methods to \ishell{Approx} (\ishell{REQUIRE(Approx(a).epsilon(0.001) = 0.3);}. \subsection{Expected output} When running the test executables individually, the output of a run with no failures should look like \begin{shade} =============================================================================== All tests passed (26 assertions in 4 test cases) \end{shade} A test with failures will look like \begin{minipage}{\linewidth} \begin{shade} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ test_numerics is a Catch v1.4.0 host application. Run with -? for options ------------------------------------------------------------------------------- double_1d_grid_functor ------------------------------------------------------------------------------- /home/user/qmcpack/src/Numerics/tests/test_grid_functor.cpp:29 ............................................................................... /home/user/qmcpack/src/Numerics/tests/test_grid_functor.cpp:39: FAILED: REQUIRE( grid.dh() == Approx(0.6) ) with expansion: 0.5 == Approx( 0.6 ) =============================================================================== test cases: 4 | 3 passed | 1 failed assertions: 25 | 24 passed | 1 failed \end{shade} \end{minipage} \section{Adding tests} Three scenarios are covered here: adding a new test in an existing file, adding a new test file, and adding a new \ishell{test} directory. \subsection{Adding a test to existing file} Copy an existing test or from the example shown here. Be sure to change the test name. \subsection{Adding a test file} When adding a new test file, create a file in the test directory, or copy from an existing file. Add the file name to the \ishell{ADD\_EXECUTABLE} in the \ishell{CMakeLists.txt} file in that directory. One (and only one) file must define the \ishell{main} function for the test executable by defining \ishell{CATCH\_CONFIG\_MAIN} before including the Catch header. If more than one file defines this value, there will be linking errors about multiply defined values. Some of the tests need to shut down MPI properly to avoid extraneous error messages. Those tests include \ishell{Message/catch\_mpi\_main.hpp} instead of defining \ishell{CATCH\_CONFIG\_MAIN}. \subsection{Adding a test directory} Copy the \ishell{CMakeLists.txt} file from an existing \ishell{tests} directory. Change the \ishell{SRC\_DIR} name and the files in the \ishell{ADD\_EXECUTABLES} line. The libraries to link in \ishell{TARGET\_LINK\_LIBRARIES} may need to be updated. Add the new test directory to \ishell{src/CMakeLists.txt} in the \ishell{BUILD\_UNIT\_TESTS} section near the end. \section{Testing with random numbers} Many algorithms and parts of the code depend on random numbers, which makes validating the results difficult. One solution is to verify that certain properties hold for any random number. This approach is valuable at some levels of testing, but is unsatisfying at the unit test level. 
The \ishell{Utilities} directory contains a ``fake'' random number generator that can be used for deterministic tests of these parts of the code. Currently it outputs a single, fixed value every time it is called, but it could be expanded to produce more varied, but still deterministic, sequences. See \ishell{src/QMCDrivers/test\_vmc.cpp} for an example of using the fake random number generator.
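Putting the pieces from this chapter together, a minimal new test file could look like the following sketch. The contents are purely illustrative, and the include path for the Catch header depends on how the enclosing \ishell{CMakeLists.txt} is configured; remember that only one file per test executable may define \ishell{CATCH\_CONFIG\_MAIN}.
\begin{lstlisting}[language=C++,basicstyle=\ttfamily]
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// illustrative assertions only; real tests exercise QMCPACK classes
TEST_CASE("hypothetical_example_test", "[example]")
{
  int n = 3;
  REQUIRE(n + n == 6);
  REQUIRE(1.0 / n == Approx(0.3333).epsilon(0.001));
}
\end{lstlisting}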
{ "alphanum_fraction": 0.7214549433, "avg_line_length": 53.7720588235, "ext": "tex", "hexsha": "4e8146f1693f009a3233e2562d8b4ccd7bf24189", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "280f67e638bae280448b47fa618f05b848c530d2", "max_forks_repo_licenses": [ "NCSA" ], "max_forks_repo_name": "djstaros/qmcpack", "max_forks_repo_path": "legacy_manual/unit_testing.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "280f67e638bae280448b47fa618f05b848c530d2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "NCSA" ], "max_issues_repo_name": "djstaros/qmcpack", "max_issues_repo_path": "legacy_manual/unit_testing.tex", "max_line_length": 394, "max_stars_count": null, "max_stars_repo_head_hexsha": "280f67e638bae280448b47fa618f05b848c530d2", "max_stars_repo_licenses": [ "NCSA" ], "max_stars_repo_name": "djstaros/qmcpack", "max_stars_repo_path": "legacy_manual/unit_testing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1756, "size": 7313 }
\documentclass[12pt]{amsart} \usepackage[margin=1in]{geometry} \usepackage{amsmath,amssymb,amsthm} \usepackage{enumerate} \usepackage{todonotes} \usepackage{hyperref} \newtheorem{thm}{Theorem} %\newtheorem{rmk}[thm]{Remark} \newenvironment{sol}{\paragraph{{\bf Solution:}}}{\hfill$\square$} \newenvironment{act}{\paragraph{{\bf Activity:}}}{\hfill$\square$} \let\biconditional\leftrightarrow \theoremstyle{definition} \newtheorem*{defn}{Definition} \newtheorem*{ex}{Example} \newtheorem*{nonex}{Non-example} \newtheorem*{rmk}{Remark} \newtheorem*{prop}{Proposition} \newtheorem*{cor}{Corollary} \newtheorem*{conj}{Conjecture} \newtheorem*{thmno}{Theorem} \newtheorem*{exer}{Exercise} \usepackage{tikz} \usepackage{multicol} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \title{ Final Exam Review} \begin{document} \maketitle \noindent\emph{Disclaimer: These practice problems are based on what the class requested to review for the first two exams plus the most recent material and what my impression of what the class struggled with. You should not assume the final will be exactly like this, but these are good practice problems to try.}\\ \noindent\textbf{Ways to study:} \begin{itemize} \item Go through the ``self quiz" questions on Blackboard and make sure you can answer them all. \item For any section you struggled with do the practice problems from that section (on Blackboard). \item The problems with ** are the potential proofs for the exam. You should definitely do these. \item Make sure you know how to do the questions on previous exams. \end{itemize} Note for all of the above you should be \emph{doing} the mathematics. Reading over worksheet solutions has little to no value - studies have shown this will not improve your performance on the exam. %\section{Some Example Proofs} % %\begin{enumerate} %\item For $x,y\in \mathbb{Z}$ define $x\sim y$ if and only if $x+y$ is even. Then $\sim$ is an equivalence relation.\\ % %\item For each $n\in \mathbb{N}$, $7^n$ is odd.\\ % %\item Let $U$ be a set and $A, B,$ and $C$ subsets of some universal set. Then $(A\cap B) - C = (A-C)\cap (B-C)$.\\ \section{Induction} \begin{enumerate} \item ** For each $n\in\mathbb{N}$, $5^n$ is odd.\\ \item For all $n\in\mathbb{N}$, $\displaystyle{\sum_{i=1}^n (2i-1) = n^2}$. (This is ``sigma notation" which you may have seen in Calculus 2. We could rewrite this statement as ``For all $n\in\N$, $1+3+5+\cdots + (2n-1) = n^2$.") \\ \item For all natural numbers $n$ with $n\geq 3$, $n^2\geq 2n+3$.\\ \item For any natural number $n$, $3\mid n^3+2n$.\\ \section{Using Cases in Proofs} \item ** Let $U$ be a set and $A, B,$ and $C$ subsets of some universal set. Then $(A\cup B) - C = (A-C)\cup (B-C)$.\\ \item ** Prove that for each integer $n$, if $n$ is odd then $8\mid n^2-1$.\\ \item For all integers $a$, $b$, and $d$ with $d\neq 0$, if $d \mid a$ or $d\mid b$ then $d\mid ab$. \section{Functions} \item Define $f: \Z \to \Z$ by $f(x) = 5x+3$. State the domain, codomain and range. Is $f$ an injection? surjection? Prove/disprove.\\ \item ** Define $g: \R\times\R \to\R$ by $g(a,b) = a$. State the domain, codomain and range. Is $g$ an injection? Surjection? Prove/disprove.\\ \item ** Let $A,B,$ and $C$ be nonempty sets and $f:A\to B$ and $g: B\to C$. Prove that if $f$ and $g$ are bijections then $g\circ f$ is a bijection. \\ \section{Equivalence Relations and Equivalence Classes} \item ** For $x,y\in \mathbb{Z}$ define $x\sim y$ if and only if $x+y$ is even. 
Prove that $\sim$ is an equivalence relation and find the equivalence class of $2$ (denoted $[2]_\sim$).\\ %\item Let $U$ be some universal set. For $A,B \in\mathcal{P}(U)$ define $A\sim B$ if and only if $A\subseteq B$. Is $A$ an equivalence relation?\\ \item ** For $(w,x)$ and $(y,z)$ in $\mathbb{Z}\times \mathbb{Z}$ define $(w,x)\sim (y,z)$ if and only if $w\leq y$ and $x\leq z$. Is $\sim$ an equivalence relation?\\ \item Define a function $f:\mathbb{R} \to\mathbb{R}$ by $f(x)=x^2+1$. For $a,b\in \mathbb{R}$, say $a\sim b$ if and only if $f(a) = f(b)$. Show this is an equivalence relation and then find $[3]_{\sim}$.\\ \item Write the addition and multiplication tables for $\Z_5$ and $\Z_6$. Somewhere you will have written $[0]$ in each table. Does this mean something different in $\Z_5$ and $\Z_6$ or is it the same? %\section{Proving Set Relationships} % %\item Let $A,B,$ and $C$ be subsets of some universal set $U$. Prove or disprove: If $A\subset C$ and $B\subset C$ then $A\cup B \subset C$.\\ % %\item Let $A,B,$ and $C$ be subsets of some universal set $U$. Then $A\times (B\cup C) = (A\times B) \cup (A\times C)$\\ % %\item Let $A$ and $B$ be subsets of some universal set $U$. Then $A-(A\cap B^c)\subseteq A\cap B$. (These sets are actually equal, for an extra challenge, show they are equal.) \end{enumerate} \end{document}
{ "alphanum_fraction": 0.6831401261, "avg_line_length": 39.0238095238, "ext": "tex", "hexsha": "8c20a76e74e1bfd1909c18b7b26355368a0f3632", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "mkjanssen/discrete", "max_forks_repo_path": "from LDK/1-ClassWorksheets/FinalExamReview/FinalExamReview.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "mkjanssen/discrete", "max_issues_repo_path": "from LDK/1-ClassWorksheets/FinalExamReview/FinalExamReview.tex", "max_line_length": 316, "max_stars_count": null, "max_stars_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "mkjanssen/discrete", "max_stars_repo_path": "from LDK/1-ClassWorksheets/FinalExamReview/FinalExamReview.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1646, "size": 4917 }
\par \section{Prototypes and descriptions of {\tt MPI} methods} \label{section:MPI:proto} \par This section contains brief descriptions including prototypes of all methods found in the {\tt MPI} source directory. \par \subsection{Split and redistribution methods} \label{subsection:MPI:proto:split} \par %======================================================================= In a distributed environment, data must be distributed, and sometimes during a computation, data must be re-distributed. These methods split and redistribute four data objects. \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void DenseMtx_MPI_splitByRows ( DenseMtx *mtx, IV *mapIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{DenseMtx_MPI_splitByRows@{\tt DenseMtx\_MPI\_splitByRows()}} This method splits and redistributes the {\tt DenseMtx} object based on the {\tt mapIV} object that maps rows to processes. The messages that will be sent require {\tt nproc} consecutive tags --- the first is the parameter {\tt firsttag}. On return, the {\tt stats[]} vector contains the following information. \par \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} \par Note, the values in {\tt stats[]} are {\it incremented}, i.e., the {\tt stats[]} vector is not zeroed at the start of the method, and so can be used to accumulated information with multiple calls. \par \noindent {\it Error checking:} If {\tt mtx} or {\tt rowmapIV} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + nproc} is larger than the largest available tag, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} DenseMtx * DenseMtx_MPI_splitFromGlobalByRows ( DenseMtx *Xglobal, DenseMtx *Xlocal, IV *rowmapIV, int root, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{DenseMtx_MPI_splitFromGlobalByRows@{\tt DenseMtx\_MPI\_splitFromGlobalByRows()}} This method is used when the {\tt Xglobal} {\tt DenseMtx} matrix object is owned by processor {\tt root} and redistributed to the other processors. \par {\tt Xglobal} is pertinent only to processor {\tt root}. If the local matrix {\tt Xlocal} is {\tt NULL}, and if the local matrix will be nonempty, then it is created. If the local matrix is not {\tt NULL}, then it will be returned. The remaining input arguments are the same as for the {\tt DenseMtx\_MPI\_splitByRows()} method. \par \noindent {\it Error checking:} Processor {\tt root} does a fair amount of error checking --- it ensures that {\tt Xglobal} is valid, that {\tt firsttag} is valid, and that the {\tt rowmapIV} object is valid. The return code is broadcast to the other processors. If an error is found, the processors call {\tt MPI\_Finalize()} and exit. 
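\par
As an illustration of the calling conventions above, the following sketch distributes a right hand side matrix that initially lives on processor zero. The wrapper name is invented, the {\tt Bglobal} and {\tt rowmapIV} objects are assumed to have been created already, and {\tt stats[]} is zeroed explicitly because the method only increments its entries.
\begin{verbatim}
/* hypothetical wrapper: Bglobal is non-NULL only on processor 0,
   rowmapIV maps each row to its owning processor                */
DenseMtx *
scatterRhsFromRoot ( DenseMtx *Bglobal, IV *rowmapIV, int msglvl,
                     FILE *msgFile, MPI_Comm comm ) {
DenseMtx   *Blocal = NULL ;
int        firsttag = 0, root = 0 ;
int        stats[4] = { 0, 0, 0, 0 } ;

Blocal = DenseMtx_MPI_splitFromGlobalByRows(Bglobal, NULL, rowmapIV,
                   root, stats, msglvl, msgFile, firsttag, comm) ;
if ( msglvl > 0 ) {
   fprintf(msgFile, "\n split: %d messages and %d bytes received",
           stats[2], stats[3]) ;
}
return(Blocal) ;
}
\end{verbatim}
After the solve, the local pieces can be assembled back on processor zero with {\tt DenseMtx\_MPI\_mergeToGlobalByRows()}, which is described next.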
%----------------------------------------------------------------------- \item \begin{verbatim} DenseMtx * DenseMtx_MPI_mergeToGlobalByRows ( DenseMtx *Xglobal, DenseMtx *Xlocal, IV *rowmapIV, int root, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{DenseMtx_MPI_mergeToGlobalByRows@{\tt DenseMtx\_MPI\_mergeToGlobalByRows()}} This method is used when the processors own a partitioned {\tt DenseMtx} object and it must be assembled onto the {\tt root} processor. Each processor owns a {\tt Xlocal} matrix (which may be {\tt NULL}). The global matrix will be accumulated in the {\tt Xglobal} object. \par {\tt Xglobal} is pertinent only to processor {\tt root}. If the global matrix {\tt Xglobal} is {\tt NULL}, and if the global matrix will be nonempty, then it is created. If the global matrix is not {\tt NULL}, then it will be returned. The remaining input arguments are the same as for the {\tt DenseMtx\_MPI\_splitByRows()} method. \par \noindent {\it Error checking:} Each processor does a fair amount of error checking --- they ensure that {\tt firsttag} is valid, that the types of the local matrices are identical, and that the number of columns of the local matrices are identical, If there is any error detected by any of the processors, they call {\tt MPI\_Finalize()} and exit. %----------------------------------------------------------------------- \item \begin{verbatim} void InpMtx_MPI_split ( InpMtx *inpmtx, IV *mapIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{InpMtx_MPI_split@{\tt InpMtx\_MPI\_split()}} This method splits and redistributes the {\tt InpMtx} object based on the {\tt mapIV} object that maps the {\tt InpMtx} object's vectors (rows, columns or chevrons) to processes. The the vectors are defined by the first coordinate of the {\tt InpMtx} object. For the distributed $LU$, $U^TDU$ and $U^HDU$ factorizations, we use the chevron coordinate type to store the matrix entries. This method will redistribute a matrix by rows if the coordinate type is {\tt 1} (for rows) and {\tt mapIV} is a row map. Similarly, this method will redistribute a matrix by columns if the coordinate type is {\tt 2} (for columns) and {\tt mapIV} is a column map. See the {\tt InpMtx} object for details. The messages that will be sent require {\tt nproc} consecutive tags --- the first is the parameter {\tt firsttag}. On return, the {\tt stats[]} vector contains the following information. \par \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} \par Note, the values in {\tt stats[]} are {\it incremented}, i.e., the {\tt stats[]} vector is not zeroed at the start of the method, and so can be used to accumulated information with multiple calls. \par \noindent {\it Error checking:} If {\tt firsttag < 0} or {\tt firsttag + nproc} is larger than the largest available tag, an error message is printed and the program exits. 
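\par
A corresponding hypothetical sketch for the coefficient matrix follows; the map is assumed to be consistent with the coordinate type of the {\tt InpMtx} object (for the factorizations, a map from chevrons to processes), and the caller's {\tt stats[]} vector is reused so that message counts accumulate across several redistribution calls.
\begin{verbatim}
/* hypothetical wrapper: inpmtx holds this process's entries,
   mapIV maps the object's vectors to their owning processes  */
void
splitCoefficientMatrix ( InpMtx *inpmtx, IV *mapIV, int stats[],
                         int msglvl, FILE *msgFile, MPI_Comm comm ) {
int   firsttag = 0 ;

InpMtx_MPI_split(inpmtx, mapIV, stats, msglvl, msgFile,
                 firsttag, comm) ;
if ( msglvl > 0 ) {
   fprintf(msgFile, "\n InpMtx split: %d messages sent, %d received",
           stats[0], stats[2]) ;
}
}
\end{verbatim}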
%----------------------------------------------------------------------- \item \begin{verbatim} InpMtx * InpMtx_MPI_splitFromGlobal ( InpMtx *Aglobal, InpMtx *Alocal, IV *mapIV, int root, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{InpMtx_MPI_splitFromGlobal@{\tt InpMtx\_MPI\_splitFromGlobal()}} This method is used when the {\tt Aglobal} {\tt InpMtx} matrix object is owned by processor {\tt root} and redistributed to the other processors. \par {\tt Aglobal} is pertinent only to processor {\tt root}. If the local matrix {\tt Alocal} is {\tt NULL}, and if the local matrix will be nonempty, then it is created. If the local matrix is not {\tt NULL}, then it will be returned. The remaining input arguments are the same as for the {\tt InpMtx\_MPI\_split()} method. \par \noindent {\it Error checking:} Processor {\tt root} does a fair amount of error checking --- it ensures that {\tt Aglobal} is valid, that {\tt firsttag} is valid, and that the {\tt mapIV} object is valid. The return code is broadcast to the other processors. If an error is found, the processors call {\tt MPI\_Finalize()} and exit. %----------------------------------------------------------------------- \item \begin{verbatim} void Pencil_MPI_split ( Pencil *pencil, IV *mapIV, int tag, int stats[], int msglvl, FILE *msgFile, MPI_Comm comm ) ; \end{verbatim} \index{Pencil_MPI_split@{\tt Pencil\_MPI\_split()}} This method splits and redistributes the matrix pencil based on the {\tt mapIV} object that maps rows and columns to processes. This is a simple wrapper around the {\tt InpMtx\_MPI\_split()} method. The messages that will be sent require {\tt 2*nproc} consecutive tags --- the first is the parameter {\tt firsttag}. On return, the {\tt stats[]} vector contains the following information. \par \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} \par Note, the values in {\tt stats[]} are {\it incremented}, i.e., the {\tt stats[]} vector is not zeroed at the start of the method, and so can be used to accumulated information with multiple calls. \par \noindent {\it Error checking:} If {\tt firsttag < 0} or {\tt firsttag + 2*nproc} is larger than the largest available tag, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void FrontMtx_MPI_split ( FrontMtx *frontmtx, SolveMap *solvemap, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{FrontMtx_MPI_split@{\tt FrontMtx\_MPI\_split()}} Used after the factorization, this method is used instead of the {\tt FrontMtx\_splitUpperMatrices()} and {\tt FrontMtx\_splitLowerMatrices()} methods. The method splits and redistributes the {\tt FrontMtx} object based on the {\tt solvemap} object that maps submatrices to processes. The {\tt firsttag} is the first tag that will be used for all messages. Unfortunately, the number of different tags that are necessary is not known prior to entering this method. On return, the {\tt stats[]} vector contains the following information. 
\par \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} \par Note, the values in {\tt stats[]} are {\it incremented}, i.e., the {\tt stats[]} vector is not zeroed at the start of the method, and so can be used to accumulated information with multiple calls. \par \noindent {\it Error checking:} If {\tt mtx} or {\tt rowmapIV} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt firsttag < 0} is larger than the largest available tag, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Gather and scatter methods} \label{subsection:MPI:proto:gather-scatter} \par %======================================================================= These method gather and scatter/add rows of {\tt DenseMtx} objects. These operations are performed during the distributed matrix-matrix multiply. The gather operation $X_{supp}^q \leftarrow X$ is performed by {\tt DenseMtx\_MPI\_gatherRows()}, while the scatter/add operation $Y^q := Y^q + \sum_r Y_{supp}^r$ is performed by {\tt DenseMtx\_MPI\_scatterAddRows()}. \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void DenseMtx_MPI_gatherRows ( DenseMtx *Y, DenseMtx *X, IVL *sendIVL, IVL *recvIVL, int stats[], int msglvl, FILE *msgFile, int tag, MPI_Comm comm) ; \end{verbatim} \index{DenseMtx_MPI_gatherRows@{\tt DenseMtx\_MPI\_gatherRows()}} This method is used to gather rows of {\tt X}, a globally distributed matrix, into {\tt Y}, a local matrix. List $q$ of {\tt sendIVL} contains the local row ids of the local part of {\tt X} that will be sent to processor $q$. List $q$ of {\tt recvIVL} contains the local row ids of {\tt Y} that will be received from processor $q$. \par This method uses tags in the range {\tt [tag,tag+nproc*nproc)}. On return, the following statistics will have been added. \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} This method is {\it safe} in the sense that it uses only non-blocking sends and receives, {\tt MPI\_Isend()} and {\tt MPI\_Irecv()}. \par \noindent {\it Error checking:} If {\tt Y}, {\tt X}, {\tt sendIVL} or {\tt recvIVL} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag + nproc*nproc} is larger than the largest available tag, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void DenseMtx_MPI_scatterAddRows ( DenseMtx *Y, DenseMtx *X, IVL *sendIVL, IVL *recvIVL, int stats[], int msglvl, FILE *msgFile, int tag, MPI_Comm comm) ; \end{verbatim} \index{DenseMtx_MPI_scatterAddRows@{\tt DenseMtx\_MPI\_scatterAddRows()}} This method is used to scatter/add rows of {\tt X}, a globally distributed matrix, into {\tt Y}, a local matrix. List $q$ of {\tt sendIVL} contains the local row ids of the local part of {\tt X} that will be sent to processor $q$. List $q$ of {\tt recvIVL} contains the local row ids of {\tt Y} that will be received from processor $q$. 
\par This method uses tags in the range {\tt [tag,tag+nproc*nproc)}. On return, the following statistics will have been added. \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} This method is {\it safe} in the sense that it uses only non-blocking sends and receives, {\tt MPI\_Isend()} and {\tt MPI\_Irecv()}. \par \noindent {\it Error checking:} If {\tt Y}, {\tt X}, {\tt sendIVL} or {\tt recvIVL} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag + nproc*nproc} is larger than the largest available tag, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Symbolic Factorization methods} \label{subsection:MPI:proto:symbfac} \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} IVL * SymbFac_MPI_initFromInpMtx ( ETree *etree, IV *frontOwnersIV, InpMtx *inpmtx, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; IVL * SymbFac_MPI_initFromPencil ( ETree *etree, IV *frontOwnersIV, Pencil *pencil, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{SymbFac_MPI_initFromInpMtx@{\tt SymbFac\_MPI\_initFromInpMtx()}} \index{SymbFac_MPI_initFromPencil@{\tt SymbFac\_MPI\_initFromPencil()}} These methods are used in place of the {\tt Symbfac\_initFrom\{InpMtx,Pencil\}()} methods to compute the symbolic factorization. The {\tt ETree} object is assumed to be replicated over the processes. The {\tt InpMtx} and {\tt Pencil} objects are partitioned among the processes. Therefore, to compute the {\tt IVL} object that contains the symbolic factorization is a distributed, cooperative process. At the end of the symbolic factorization, each process will own a portion of the {\tt IVL} object. The {\tt IVL} object is neither replicated nor partitioned (except in trivial cases), but the {\tt IVL} object on each process contains just a portion, usually not much more than what it needs to know for its part of the factorization and solves. \par This method uses tags in the range {\tt [tag,tag+nfront)}. On return, the following statistics will have been added. \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} This method is {\it safe} in the sense that it uses only non-blocking sends and receives, {\tt MPI\_Isend()} and {\tt MPI\_Irecv()}. \par \noindent {\it Error checking:} If {\tt etree}, {\tt inpmtx}, {\tt pencil} or {\tt frontOwnersIV} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag + nfront} is larger than the largest available tag, an error message is printed and the program exits. 
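\par
A minimal sketch of the cooperative symbolic factorization call follows; the wrapper name is invented, the {\tt ETree} and front owners objects are assumed to be available on every process, and {\tt inpmtx} holds this process's portion of the matrix.
\begin{verbatim}
/* hypothetical wrapper for the distributed symbolic factorization */
IVL *
computeSymbolicFactorization ( ETree *frontETree, IV *frontOwnersIV,
                               InpMtx *inpmtx, int msglvl,
                               FILE *msgFile, MPI_Comm comm ) {
IVL   *symbfacIVL ;
int   firsttag = 0 ;
int   stats[4] = { 0, 0, 0, 0 } ;

symbfacIVL = SymbFac_MPI_initFromInpMtx(frontETree, frontOwnersIV,
              inpmtx, stats, msglvl, msgFile, firsttag, comm) ;
/* each process now holds only its portion of the
   symbolic factorization                          */
return(symbfacIVL) ;
}
\end{verbatim}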
%----------------------------------------------------------------------- \end{enumerate} \par \subsection{Numeric Factorization methods} \label{subsection:MPI:proto:factor} \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} Chv * FrontMtx_MPI_factorPencil ( FrontMtx *frontmtx, Pencil *pencil, double tau, double droptol, ChvManager *chvmanager, IV *frontOwnersIV, int lookahead, int *perror, double cpus[], int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; Chv * FrontMtx_MPI_factorInpMtx ( FrontMtx *frontmtx, InpMtx *inpmtx, double tau, double droptol, ChvManager *chvmanager, IV *frontOwnersIV, int lookahead, int *perror, double cpus[], int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{FrontMtx_MPI_factorPencil@{\tt FrontMtx\_MPI\_factorPencil()}} \index{FrontMtx_MPI_factorInpMtx@{\tt FrontMtx\_MPI\_factorInpMtx()}} These methods are used to compute the numeric factorization and are very similar to the multithreaded {\tt FrontMtx\_MT\_factorPencil()} and {\tt FrontMtx\_MT\_factorInpMtx()} methods. All that has been added is the code to send and receive the {\tt Chv} messages. The input {\tt firsttag} parameter is used to tag the messages during the factorization. This method uses tags in the range {\tt [firsttag, firsttag + 3*nfront + 3)}. \par On return, {\tt *perror} holds an error flag. If the factorization completed without any error detected, {\tt *perror} will be negative. Otherwise it holds the id of a front where the factorization failed. Currently, this can happen only if pivoting is not enabled and a zero pivot was detected. \par The return value is a pointer to a list of {\tt Chv} objects that hold entries of the matrix that could not be factored. This value should be {\tt NULL} in all cases. We have left this return behavior as a hook for future implementation of a multi-stage factorization. \par On return, the {\tt cpus[]} vector has the following information. \begin{center} \begin{tabular}{ccl} {\tt cpus[0]} & -- & initialize fronts \\ {\tt cpus[1]} & -- & load original entries \\ {\tt cpus[2]} & -- & update fronts \\ {\tt cpus[3]} & -- & insert aggregate data \\ {\tt cpus[4]} & -- & assemble aggregate data \\ {\tt cpus[5]} & -- & assemble postponed data \\ {\tt cpus[6]} & -- & factor fronts \end{tabular} \begin{tabular}{ccl} {\tt cpus[7]} & -- & extract postponed data \\ {\tt cpus[8]} & -- & store factor entries \\ {\tt cpus[9]} & -- & post initial receives \\ {\tt cpus[10]} & -- & check for received messages \\ {\tt cpus[11]} & -- & post initial sends \\ {\tt cpus[12]} & -- & check for sent messages \end{tabular} \end{center} On return, the {\tt stats[]} vector has the following information. 
\begin{center} \begin{tabular}{ccl} {\tt stats[0]} & --- & \# of pivots \\ {\tt stats[1]} & --- & \# of pivot tests \\ {\tt stats[2]} & --- & \# of delayed rows and columns \\ {\tt stats[3]} & --- & \# of entries in D \\ {\tt stats[4]} & --- & \# of entries in L \\ {\tt stats[5]} & --- & \# of entries in U \\ {\tt stats[6]} & --- & \# of aggregate messages sent \\ {\tt stats[7]} & --- & \# of bytes sent in aggregate messages \\ {\tt stats[8]} & --- & \# of aggregate messages received \\ {\tt stats[9]} & --- & \# of bytes received in aggregate messages \\ {\tt stats[10]} & --- & \# of postponed messages sent \\ {\tt stats[11]} & --- & \# of bytes sent in postponed messages \\ {\tt stats[12]} & --- & \# of postponed messages received \\ {\tt stats[13]} & --- & \# of bytes received in postponed messages \\ {\tt stats[14]} & --- & \# of active {\tt Chv} objects (working storage) \\ {\tt stats[15]} & --- & \# of active bytes in working storage \\ {\tt stats[16]} & --- & \# of requested bytes for working storage \end{tabular} \end{center} \par \noindent {\it Error checking:} If {\tt frontmtx}, {\tt pencil}, {\tt frontOwnersIV}, {\tt cpus} or {\tt stats} is {\tt NULL}, or if {\tt tau < 1.0} or {\tt droptol < 0.0}, or if {\tt firsttag < 0} or {\tt firsttag + 3*nfront + 2} is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Post-processing methods} \label{subsection:MPI:proto:postprocess} \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void FrontMtx_MPI_postProcess ( FrontMtx *frontmtx, IV *frontOwnersIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{FrontMtx_MPI_postProcess@{\tt FrontMtx\_MPI\_postProcess()}} After the factorization is complete, the factor matrices are split into submatrices. This method replaces the serial {\tt FrontMtx\_postProcess()} method. The messages that will be sent require at most {\tt 5*nproc} consecutive tags --- the first is the parameter {\tt firsttag}. \par \noindent {\it Error checking:} If {\tt frontmtx}, {\tt frontOwnersIV} or {\tt stats} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + 5*nproc}, is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void FrontMtx_MPI_permuteUpperAdj ( FrontMtx *frontmtx, IV *frontOwnersIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; void FrontMtx_MPI_permuteLowerAdj ( FrontMtx *frontmtx, IV *frontOwnersIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{FrontMtx_MPI_permuteUpperAdj@{\tt FrontMtx\_MPI\_permuteUpperAdj()}} \index{FrontMtx_MPI_permuteLowerAdj@{\tt FrontMtx\_MPI\_permuteLowerAdj()}} If pivoting takes place during the factorization, the off diagonal blocks of the factor matrices must be permuted prior to being split into submatrices. To do this, the final rows and columns of the factor matrix must be made known to the different processors. The messages that will be sent require at most {\tt nproc} consecutive tags --- the first is the parameter {\tt firsttag}. 
\par \noindent {\it Error checking:} If {\tt frontmtx}, {\tt frontOwnersIV} or {\tt stats} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + nproc}, is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void IV_MPI_allgather ( IV *iv, IV *ownersIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{IV_MPI_allgather@{\tt IV\_MPI\_allgather()}} After a factorization with pivoting, the {\tt frontsizesIV} object needs to be made global on each processor. This methods takes the individual entries of an {\tt IV} object whose owners are specified by the {\tt ownersIV} object, and communicates the entries around the processors until the global {\tt IV} object is present on each. The messages that will be sent require at most {\tt nproc} consecutive tags --- the first is the parameter {\tt firsttag}. \par \noindent {\it Error checking:} If {\tt iv}, {\tt ownersIV} or {\tt stats} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + nproc}, is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void IVL_MPI_allgather ( IVL *ivl, IV *ownersIV, int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{IVL_MPI_allgather@{\tt IVL\_MPI\_allgather()}} When the {\tt FrontMtx} object is split into submatrices, each processor accumulates the structure of the block matrix for the fronts its owns. This structure must be global to all processors before the submatrix map can be computed. This method takes a {\it partitioned} {\tt IVL} object and communicates the entries among the processors until the global {\tt IVL} object is present on each. Which processor owns what lists of the {\tt IVL} object is given by the {\tt ownersIV} object. The messages that will be sent require at most {\tt nproc} consecutive tags --- the first is the parameter {\tt firsttag}. \par \noindent {\it Error checking:} If {\tt ivl}, {\tt ownersIV} or {\tt stats} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + nproc}, is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Numeric Solve methods} \label{subsection:MPI:proto:solve} \par \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void FrontMtx_MPI_solve ( FrontMtx *frontmtx, DenseMtx *mtxX, DenseMtx *mtxB, SubMtxManager *mtxmanager, SolveMap *solvemap, double cpus[], int stats[], int msglvl, FILE *msgFile, int firsttag, MPI_Comm comm ) ; \end{verbatim} \index{FrontMtx_MPI_solve@{\tt FrontMtx\_MPI\_solve()}} This method is used to compute the forward and backsolves. Its structure is very, very similar to the multithreaded {\tt FrontMtx\_MT\_solve()} method. All that has been added is the code to send and receive the {\tt SubMtx} messages. The method uses tags in the range {\tt [firsttag, firsttag + 2*nfront)}. On return, the {\tt cpus[]} vector has the following information. 
\begin{center} \begin{tabular}{ccl} {\tt cpus[0]} & --- & setup the solves \\ {\tt cpus[1]} & --- & load rhs and store solution \\ {\tt cpus[2]} & --- & forward solve \end{tabular} \begin{tabular}{ccl} {\tt cpus[3]} & --- & diagonal solve \\ {\tt cpus[4]} & --- & backward solve \\ {\tt cpus[5]} & --- & miscellaneous \end{tabular} \end{center} On return, the following statistics will have been added. \begin{center} \begin{tabular}{ccl} {\tt stats[0]} & --- & \# of solution messages sent \\ {\tt stats[1]} & --- & \# of aggregate messages sent \\ {\tt stats[2]} & --- & \# of solution bytes sent \\ {\tt stats[3]} & --- & \# of aggregate bytes sent \\ {\tt stats[4]} & --- & \# of solution messages received \\ {\tt stats[5]} & --- & \# of aggregate messages received \\ {\tt stats[6]} & --- & \# of solution bytes received \\ {\tt stats[7]} & --- & \# of aggregate bytes received \end{tabular} \end{center} \par \noindent {\it Error checking:} If {\tt frontmtx}, {\tt mtxX}, {\tt mtxB}, {\tt mtxmanager}, {\tt solvemap}, {\tt cpus} or {\tt stats} is {\tt NULL}, or if {\tt firsttag < 0} or {\tt firsttag + 2*nfront} is larger than the largest available tag, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Matrix-matrix multiply methods} \label{subsection:MPI:proto:mmm} \par The usual sequence of events is as follows. \begin{itemize} \item Set up the data structure via a call to {\tt MatMul\_MPI\_setup()}. \item Convert the local $A^q$ matrix to local indices via a call to {\tt MatMul\_setLocalIndices()}. \item Compute the matrix-matrix multiply with a call to {\tt MatMul\_MPI\_mmm()}. Inside this method, the MPI methods {\tt DenseMtx\_MPI\_gatherRows()} and {\tt DenseMtx\_MPI\_scatterAddRows()} are called, along with a serial {\tt InpMtx} matrix-matrix multiply method. \item Clean up and free data structures via a call to {\tt MatMul\_cleanup().} \item Convert the local $A^q$ matrix to global indices via a call to {\tt MatMul\_setGlobalIndices()}. \end{itemize} \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} MatMulInfo * MatMul_MPI_setup ( InpMtx *A, int symflag, int opflag, IV *XownersIV, IV *YownersIV int stats[], int msglvl, FILE *msgFile, MPI_Comm comm) ; \end{verbatim} \index{MatMul_MPI_setup@{\tt MatMul\_MPI\_setup()}} This method is used to set up and return the {\tt MatMulInfo} data structure that stores the information for the distributed matrix-matrix multiply. The {\tt symflag} parameter specifies the symmetry of the matrix. \begin{itemize} \item 0 ({\tt SPOOLES\_SYMMETRIC}) \item 1 ({\tt SPOOLES\_HERMITIAN}) \item 2 ({\tt SPOOLES\_NONSYMMETRIC}) \end{itemize} The {\tt opflag} parameter specifies what type of operation will be performed. \begin{itemize} \item 0 ({\tt MMM\_WITH\_A}) --- $Y := Y + \alpha A X$ \item 1 ({\tt MMM\_WITH\_AT}) --- $Y := Y + \alpha A^T X$ \item 2 ({\tt MMM\_WITH\_AH}) --- $Y := Y + \alpha A^H X$ \end{itemize} The {\tt XownersIV} object is the map from the rows of $X$ to their owning processors. The {\tt YownersIV} object is the map from the rows of $Y$ to their owning processors. \par On return, the following statistics will have been added. 
\begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} This method calls {\tt makeSendRecvIVLs()}. \par \noindent {\it Error checking:} If {\tt A}, {\tt XownersIV}, {\tt YownersIV} or {\tt stats} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void MatMul_setLocalIndices ( MatMulInfo *info, InpMtx *A ) ; void MatMul_setGlobalIndices ( MatMulInfo *info, InpMtx *A ) ; \end{verbatim} \index{MatMul_setLocalIndices@{\tt MatMul\_setLocalIndices()}} \index{MatMul_setGlobalIndices@{\tt MatMul\_setGlobalIndices()}} The first method maps the indices of {\tt A} (which are assumed to be global) into local indices. The second method maps the indices of {\tt A} (which are assumed to be local) back into global indices. It uses the {\tt XmapIV}, {\tt XsupIV} {\tt YmapIV} and {\tt YsupIV} objects that are contained in the {\tt info} object. These are serial methods, performed independently on each processor. \par \noindent {\it Error checking:} If {\tt info} or {\tt A} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void MatMul_MPI_mmm ( MatMulInfo *info, DenseMtx *Yloc, double alpha[], InpMtx *A, DenseMtx *Xloc, int stats[], int msglvl, FILE *msgFile, MPI_Comm comm) ; \end{verbatim} \index{MatMul_MPI_mmm@{\tt MatMul\_MPI\_mmm()}} This method computes a distributed matrix-matrix multiply $Y := Y + \alpha A X$, $Y := Y + \alpha A^T X$ or $Y := Y + \alpha A^H X$, depending on how the {\tt info} object was set up. NOTE: {\tt A} must have local indices, use {\tt MatMul\_setLocalIndices()} to convert from global to local indices. {\tt Xloc} and {\tt Yloc} contain the owned rows of $X$ and $Y$, respectively. \par On return, the following statistics will have been added. \begin{center} \begin{tabular}{cclcccl} {\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\ {\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received \end{tabular} \end{center} This method calls {\tt makeSendRecvIVLs()}. \par \noindent {\it Error checking:} If {\tt info}, {\tt Yloc}, {\tt alpha}, {\tt A}, {\tt Xloc} or {\tt stats} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void MatMul_cleanup ( MatMulInfo *info ) ; \end{verbatim} \index{MatMul_cleanup@{\tt MatMul\_cleanup()}} This method free's the data structures owned by the {\tt info} object, and then free's the object. processor. \par \noindent {\it Error checking:} If {\tt info} is {\tt NULL}, an error message is printed and the program exits. 
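\par
The sketch below strings the above calls together for a real symmetric matrix; the driver name and the assumption that {\tt A}, {\tt Xloc}, {\tt Yloc} and the two owners maps already exist are illustrative. Note that the indices of {\tt A} are mapped back to global form {\it before} the call to {\tt MatMul\_cleanup()}, since the cleanup call frees the {\tt info} object that {\tt MatMul\_setGlobalIndices()} needs.
\begin{verbatim}
/* hypothetical driver: A uses global indices on entry and on exit,
   Xloc and Yloc hold the owned rows of X and Y                    */
void
distributedMatMul ( InpMtx *A, DenseMtx *Yloc, DenseMtx *Xloc,
                    IV *XownersIV, IV *YownersIV, int msglvl,
                    FILE *msgFile, MPI_Comm comm ) {
MatMulInfo   *info ;
double       alpha[2] = { 1.0, 0.0 } ; /* scalar, assumed stored
                                          as {real, imaginary}   */
int          stats[4] = { 0, 0, 0, 0 } ;

info = MatMul_MPI_setup(A, SPOOLES_SYMMETRIC, MMM_WITH_A,
                        XownersIV, YownersIV, stats, msglvl,
                        msgFile, comm) ;
MatMul_setLocalIndices(info, A) ;    /* global -> local indices */
MatMul_MPI_mmm(info, Yloc, alpha, A, Xloc, stats, msglvl,
               msgFile, comm) ;      /* Y := Y + alpha*A*X      */
MatMul_setGlobalIndices(info, A) ;   /* local -> global indices */
MatMul_cleanup(info) ;
}
\end{verbatim}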
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Broadcast methods}
\label{subsection:MPI:proto:broadcast}
\par
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
ETree * ETree_MPI_Bcast ( ETree *etree, int root,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{ETree_MPI_Bcast@{\tt ETree\_MPI\_Bcast()}}
This method is a broadcast method for an {\tt ETree} object.
The {\tt root} processor broadcasts its {\tt ETree} object to the other nodes and returns a pointer to its {\tt ETree} object.
A node other than {\tt root} frees its {\tt ETree} object (if not {\tt NULL}), receives {\tt root}'s {\tt ETree} object, and returns a pointer to it.
\par \noindent
{\it Error checking:}
None presently.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
Graph * Graph_MPI_Bcast ( Graph *graph, int root,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{Graph_MPI_Bcast@{\tt Graph\_MPI\_Bcast()}}
This method is a broadcast method for a {\tt Graph} object.
The {\tt root} processor broadcasts its {\tt Graph} object to the other nodes and returns a pointer to its {\tt Graph} object.
A node other than {\tt root} clears the data in its {\tt Graph} object, receives the {\tt Graph} object from the root, and returns a pointer to it.
\par \noindent
{\it Error checking:}
None presently.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
IVL * IVL_MPI_Bcast ( IVL *obj, int root,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{IVL_MPI_Bcast@{\tt IVL\_MPI\_Bcast()}}
This method is a broadcast method for an {\tt IVL} object.
The {\tt root} processor broadcasts its {\tt IVL} object to the other nodes and returns a pointer to its {\tt IVL} object.
A node other than {\tt root} clears the data in its {\tt IVL} object, receives the {\tt IVL} object from the root, and returns a pointer to it.
\par \noindent
{\it Error checking:}
None presently.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
IV * IV_MPI_Bcast ( IV *obj, int root,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{IV_MPI_Bcast@{\tt IV\_MPI\_Bcast()}}
This method is a broadcast method for an {\tt IV} object.
The {\tt root} processor broadcasts its {\tt IV} object to the other nodes and returns a pointer to its {\tt IV} object.
A node other than {\tt root} clears the data in its {\tt IV} object, receives the {\tt IV} object from the root, and returns a pointer to it.
\par \noindent
{\it Error checking:}
None presently.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Utility methods}
\label{subsection:MPI:proto:utility}
\par
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
IVL * InpMtx_MPI_fullAdjacency ( InpMtx *inpmtx, int stats[],
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
IVL * Pencil_MPI_fullAdjacency ( Pencil *pencil, int stats[],
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{InpMtx_MPI_fullAdjacency@{\tt InpMtx\_MPI\_fullAdjacency()}}
\index{Pencil_MPI_fullAdjacency@{\tt Pencil\_MPI\_fullAdjacency()}}
These methods are used to return an {\tt IVL} object that contains the full adjacency structure of the graph of the matrix or matrix pencil.
The matrix or matrix pencil is distributed among the processes; each process has a {\it local} portion of the matrix or matrix pencil.
The returned {\tt IVL} object contains the structure of the global graph.
The {\tt stats[]} vector must have at least four fields.
On return, the following statistics will have been added.
\begin{center}
\begin{tabular}{cclcccl}
{\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\
{\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received
\end{tabular}
\end{center}
\par \noindent
{\it Error checking:}
If {\tt inpmtx}, {\tt pencil} or {\tt stats} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
ChvList * FrontMtx_MPI_aggregateList ( FrontMtx *frontmtx,
   IV *frontOwnersIV, int stats[], int msglvl, FILE *msgFile,
   int tag, MPI_Comm comm ) ;
\end{verbatim}
\index{FrontMtx_MPI_aggregateList@{\tt FrontMtx\_MPI\_aggregateList()}}
This method is used in place of the {\tt FrontMtx\_aggregateList()} method to initialize the aggregate list object.
Since the symbolic factorization data is distributed among the processes, the number of incoming aggregates for a front and the number of different processes contributing to a front --- information necessary to initialize the list object --- must be computed cooperatively.
This method uses {\tt tag} as the message tag for all messages communicated during this method.
The {\tt stats[]} vector must have at least four fields.
On return, the following statistics will have been added.
\begin{center}
\begin{tabular}{cclcccl}
{\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\
{\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received
\end{tabular}
\end{center}
\par \noindent
{\it Error checking:}
If {\tt frontmtx} or {\tt frontOwnersIV} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag} is larger than the largest available tag, an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
IV * FrontMtx_MPI_colmapIV ( FrontMtx *frontmtx, IV *frontOwnersIV,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
IV * FrontMtx_MPI_rowmapIV ( FrontMtx *frontmtx, IV *frontOwnersIV,
   int msglvl, FILE *msgFile, MPI_Comm comm ) ;
\end{verbatim}
\index{FrontMtx_MPI_colmapIV@{\tt FrontMtx\_MPI\_colmapIV()}}
\index{FrontMtx_MPI_rowmapIV@{\tt FrontMtx\_MPI\_rowmapIV()}}
For a factorization with pivoting, the elimination of some rows and columns may be delayed from the front that initially contains them to an ancestor front.
The solution and right hand side entries would therefore need to be redistributed.
To do so requires new row and column maps, i.e., maps from each row or column to the processor that owns it.
These two methods construct those maps.
The routines use the {\tt MPI\_Allgather()} and {\tt MPI\_Bcast()} methods, so no unique tag values are needed.
\par \noindent
{\it Error checking:}
None at present.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
IVL * IVL_MPI_alltoall ( IVL *sendIVL, IVL *recvIVL,
   int stats[], int msglvl, FILE *msgFile, int firsttag,
   MPI_Comm comm ) ;
\end{verbatim}
\index{IVL_MPI_alltoall@{\tt IVL\_MPI\_alltoall()}}
This method is used during the setup for matrix-vector multiplies.
Each processor has computed the vertices it needs from other processors; these lists are contained in {\tt sendIVL}.
On return, {\tt recvIVL} contains the lists of vertices this processor must send to all others.
\par
This method uses tags in the range {\tt [tag,tag+nproc-1)}.
On return, the following statistics will have been added.
\begin{center}
\begin{tabular}{cclcccl}
{\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\
{\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received
\end{tabular}
\end{center}
This method is {\it safe} in the sense that it uses only {\tt MPI\_Sendrecv()}.
\par \noindent
{\it Error checking:}
If {\tt sendIVL} or {\tt stats} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag + nproc} is larger than the largest available tag, an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void * makeSendRecvIVLs ( IV *supportedIV, IV *globalmapIV,
   IVL *sendIVL, IVL *recvIVL, int stats[], int msglvl,
   FILE *msgFile, int firsttag, MPI_Comm comm ) ;
\end{verbatim}
\index{makeSendRecvIVLs@{\tt makeSendRecvIVLs()}}
\par
The purpose of this method is to analyze and organize communication.
It was written in support of a distributed matrix-vector multiply but can be used for other applications.
\par
Each processor has a list of items it ``supports'' or needs, found in the {\tt supportedIV} object.
The {\tt globalmapIV} object contains the map from items to owning processors.
We need to determine which items this processor will send to and receive from each other processor.
This information is found in the {\tt sendIVL} and {\tt recvIVL} objects.
\par
On return, list {\tt jproc} of {\tt sendIVL} contains the items owned by this processor and needed by {\tt jproc}.
On return, list {\tt jproc} of {\tt recvIVL} contains the items needed by this processor and owned by {\tt jproc}.
\par
This method initializes the {\tt recvIVL} object, and then calls {\tt IVL\_MPI\_alltoall()} to construct the {\tt sendIVL} object.
This method uses tags in the range {\tt [tag,tag+nproc*nproc)}.
On return, the following statistics will have been added.
\begin{center}
\begin{tabular}{cclcccl}
{\tt stats[0]} & --- & \# of messages sent & & {\tt stats[1]} & --- & \# of bytes sent \\
{\tt stats[2]} & --- & \# of messages received & & {\tt stats[3]} & --- & \# of bytes received
\end{tabular}
\end{center}
This method is {\it safe} in the sense that it uses only {\tt MPI\_Sendrecv()}.
\par \noindent
{\it Error checking:}
If {\tt sendIVL} or {\tt stats} is {\tt NULL}, or if {\tt msglvl > 0} and {\tt msgFile} is {\tt NULL}, or if {\tt tag < 0} or {\tt tag + nproc} is larger than the largest available tag, an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int maxTagMPI ( MPI_Comm comm ) ;
\end{verbatim}
\index{maxTagMPI@{\tt maxTagMPI()}}
This method returns the maximum tag value for the communicator {\tt comm}.
\par \noindent
{\it Error checking:}
None at present.
%-----------------------------------------------------------------------
\end{enumerate}
\par
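\par \noindent
For concreteness, the following fragment sketches the calling sequence of Section~\ref{subsection:MPI:proto:mmm} from user code.
It is illustrative only: the creation of {\tt A}, {\tt Xloc}, {\tt Yloc} and the two owner maps is assumed to be done elsewhere, the two-entry {\tt alpha[]} convention (real and imaginary part) is an assumption, and the local indices are converted back to global indices {\it before} {\tt MatMul\_cleanup()} is called, since the cleanup method frees the {\tt MatMulInfo} object.
\begin{verbatim}
/* illustrative driver for the distributed matrix-matrix multiply,
   computing Y := Y + alpha*A*X for a nonsymmetric matrix.
   requires the usual SPOOLES and MPI headers.                      */
void do_mmm ( InpMtx *A, DenseMtx *Xloc, DenseMtx *Yloc,
              IV *XownersIV, IV *YownersIV,
              int msglvl, FILE *msgFile, MPI_Comm comm ) {
   MatMulInfo *info ;
   double     alpha[2] = { 1.0, 0.0 } ; /* assumed {real, imag}     */
   int        stats[4] = { 0, 0, 0, 0 } ;

   /* (1) set up the MatMulInfo data structure                      */
   info = MatMul_MPI_setup(A, SPOOLES_NONSYMMETRIC, MMM_WITH_A,
                           XownersIV, YownersIV, stats,
                           msglvl, msgFile, comm) ;
   /* (2) map the indices of the local matrix into local indices    */
   MatMul_setLocalIndices(info, A) ;
   /* (3) perform the distributed multiply                          */
   MatMul_MPI_mmm(info, Yloc, alpha, A, Xloc, stats,
                  msglvl, msgFile, comm) ;
   /* (4) restore global indices, then free the info object         */
   MatMul_setGlobalIndices(info, A) ;
   MatMul_cleanup(info) ;
}
\end{verbatim}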
{ "alphanum_fraction": 0.646263587, "avg_line_length": 42.5433526012, "ext": "tex", "hexsha": "b9495425fd99804a3c4c1dcd38007a0a90184f17", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/MPI/doc/proto.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/MPI/doc/proto.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/MPI/doc/proto.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11828, "size": 44160 }
\chapter{\plink: Discovering and Exploiting Datacenter Network Locality for Efficient Cloud-based Distributed Training} So far we have focused on accelerating distributed learning at a rack scale, by customizing hardware configuration, software stack, and network interconnect. While highly effective, not all of the optimizations are universally applicable, for example, in a public cloud environment. Further, even if all optimizations are applied, they do little to address cloud specific bottlenecks. \section{Specific Challenges in Public Clouds} Two major challenges exist in cloud-based training. \noindent\textbf{Non-uniform link bandwidth}. Host-to-host bandwidth in the cloud is non-uniform due to the hierarchical structure of the datacenter~(\textsection \ref{sec:datacenternetwork}). %, as VM nodes can be spread across different physical hosts, racks, rows or even clusters. %Practically, VMs can spread in multiple physical machines, racks, rows, or even clusters. If the network core utilization is high, or the oversubscription ratio is large, available bandwidth for a pair of VM flow will be smaller when compared to intra-rack traffic (section \ref{sec:datacenternetwork}). Achieving consistent high performance requires minimized cross rack traffic. Figure \ref{fig:dcnetworkcondition} shows a pairwise bandwidth probe of 32 VM nodes in \ectwo and \azure, in the same availability zone/datacenter. In both cases, faster pairs can deliver more than 2x the throughput of slower pairs. % during \code{iperf} probes.%, when running point to point iperf probe. %The nodes appear to form two clusters with intra-cluster communication delivers almost 2x the performance of inter-cluster communication. \noindent\textbf{Volatile traffic}. The performance variability in the public cloud is well known~\cite{cloudVariance1, Iosup:2011:PVP:2007336.2007402,perfVariance}. Although mechanisms for performance isolation have been proposed~\cite{Shieh:2010:SPI:1863103.1863104,Shieh:2011:SDC:1972457.1972489}, we still observe interference from other workloads, leading to volatile latency and aggregation performance (Figure~\ref{fig:perfFluctuation}). %\arvind{we don't provide experimental evidence; so reword to make it clear. maybe say something like "we observed ...".} %Although multiple previous work~\cite{Shieh:2010:SPI:1863103.1863104,Shieh:2011:SDC:1972457.1972489} has purposed mechanisms for performance isolation, empirically we can still observe interference by other workloads. Figure \ref{fig:dcnetworkcondition} (Right) traces the 10-second bandwidth average of two VMs for 90 minutes on Azure US-West region. The bandwidth can vary from 10 to 20Gbps. As a result, the significant changes in link condition infers that the best aggregation route is not static. \begin{figure}[t!] \centering \includegraphics[width=.7\linewidth, trim=3 3 3 3,clip]{Figures/dcnetworkcondition.pdf} \caption{Pairwise bandwidth probes with 32 \ectwo C5 and \azure F instances show non-uniform link bandwidth.} \label{fig:dcnetworkcondition} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.7\linewidth, trim=3 3 3 3,clip]{Figures/perfFluctuation.pdf} \caption{Left: 8 hour latency (us) tracing (1 minute average) between two VMs on both clouds show steep latency fluctuations due to volatile cloud traffic. Right: Wide performance distribution of the same, periodically launched Gloo aggregation task on both clouds.} \label{fig:perfFluctuation} \end{figure} %\noindent\textbf{One route does not fit all}. 
The sizes of different layers in a neural network can vary significantly (e.g., the largest layers of the popular ResNet-50 model are multiple megabytes in Caffe2, while small layers are only hundreds of bytes).
This means the transmission time of some layers is latency-bound, while that of others is bandwidth-bound.
Since the forward and backward passes process layers in a fixed order, a delay in the update of layer L stalls the processing of layer L+1 in the forward pass or layer L-1 in the backward pass.
Therefore, optimally hiding communication latency requires different aggregation routes for different layers.
\subsection{Inefficiencies in Existing Approaches}
\label{sec:differentReductionAlgorithms}
We motivate our design by analyzing why some existing approaches do not perform optimally: they rely on assumptions that are not typically valid in the datacenter setting.
%We first define good locality as localizing communication within a single rack. Thus, the more cross rack communication, the worse locality.
\begin{figure}[t!]
 \centering
 \includegraphics[width=.5\linewidth, trim=3 3 3 3,clip]{Figures/poorlocalitywithexistingapproach.pdf}
 \caption{Existing aggregation approaches can suffer from poor locality if they do not take the physical network topology into account.}
 %Some steps omitted for clarity.}
 \label{fig:poorlocalitywithexistingapproach}
\end{figure}
Figure \ref{fig:poorlocalitywithexistingapproach} shows a theoretical analysis of widely used communication patterns.
A PS (a) and popular choices of CA such as halving-doubling (b) and ring (c) are shown in a setup where nodes (0-3, enclosed in a circle) are spread equally across two clusters (purple and gold).
The left side of the figure shows patterns that achieve optimal locality in this setting by exchanging data among nodes with high locality (high-performance links in green) while minimizing transfers over the bottleneck links (slow links in red).
The right side shows alternative reduction routes with poor locality.
All patterns achieve the same result, but with different efficiency.
Poor locality arises when the communication pattern of the algorithm is not aligned with the physical topology.
Mapping logical ranks to physical hosts in a locality-preserving way is contingent on awareness of the physical network structure.
Hence, \textit{topology-awareness} is crucial for efficient aggregation in a datacenter network.
%For example, in the case of the bad recursive halving-doubling communication highlighted in Figure \ref{fig:poorlocalitywithexistingapproach}(b-right), it would be much better if the IDs of node 1 and 2 are swapped, creating the pattern on the left.
%
%Being able to do so
Even with careful mapping, not all algorithms work optimally in the datacenter environment.
Table \ref{table:algoCharacterization} summarizes the network characteristics of these algorithms, using a simplified, flattened datacenter network topology model where nodes are simply placed in different racks.
Centralized PSs are known to suffer from \textit{incast} congestion and do not scale to a large number of workers~\cite{firecaffe,Geng:2018:HHP:3229543.3229544}.
Sharded PSs incur high cross-rack traffic.
CAs usually trade lower per-link traffic on the wire for more rounds of communication, which is not suitable when latency is high.
Tree reduction inherits the problems of both PS and CA: a high fan-out causes incast problems; a low fan-out adds more rounds.
Ultimately, we need an algorithm that bounds communication steps, takes advantage of fast links, and localizes traffic to avoid interference from competing traffic. \begin{table}[ht] \centering \footnotesize \begin{tabular}{|c|c|c|c|} \hline Name & Rounds/Hops & Bytes on Wire & XR Bytes \\ \hline PS (fully sharded) & $2$ & $2(NC-1)S$ & $2N(C-1)S$ \\ \hline Halving doubling & $2log_2NC$ & $2NCS$ & $2C\frac{N-1}{N}S$ \\ \hline Ring-Chunked & $2NC-1$ & $(2NC-1)S$ & $(2C-1)S$ \\ \hline Tree (fan out = C) & $2(log_CN + 1)$ & $\approx2C\frac{CN-1}{C-1}S $ & $\approx2C\frac{(N-1)}{C-1}S$ \\ %n is a power of k. each k link starting from the 2nd layer, at least k-1/k links are cross machine link \hline \hline 2-level hierarchical & $4$ & $2S(NC-1)$ & $2(C-1)S$ \\ \hline \end{tabular} \caption{Network characteristics of various algorithms featuring rounds of communication (Rounds), minimum total traffic (Bytes on Wire), and minimum cross rack traffic (Min XR Bytes, corresponding to red arrows in Figure~\ref{fig:poorlocalitywithexistingapproach}) to allreduce with $NC$ nodes on $C$ racks, each with $N$ nodes. Each node has a buffer of size $S$ bytes. PS and aggregators in HA are colocated and sharded.} \label{table:algoCharacterization} \end{table} \subsection{2-level Hierarchical Aggregation (2LHA)} HA is not new, but most applications of \mlha are in contexts where the network topology is known. HA does not reduce the total amount of data transferred on the wire, but it can create more \textit{localized} traffic and avoid slow links. %trades more \textit{localized} traffic with higher \textit{latency}, because a piece of data needs to traverse multiple hops. %The potential benefit comes from traffic localization, which is related to the tree-like topology: with each hop the message takes, exponentially more machines can generate adversarial traffic that affects this message. One important parameter in HA is the number of levels. Similar to~\cite{cool}, we used a 2-level HA based on our domain knowledge of datacenter networks, that oversubscription mostly hurts at the rack level. Thus, by separating inter- and intra-rack aggregation, we can best capture the static aspect of locality and minimize latency. The use of more levels ($>2$) suffers from higher latency and volatile performance, as messages need to traverse multiple links with unpredictable latency, but provides no benefit compared to PS if links don't have enough non-uniformity (e.g., in the same cluster). %Therefore, we focus on a 2-level hierarchical (2LHA) aggregation scheme because it strikes a balance between smaller latency (requiring fewer rounds) and more aggressive localization (requiring more rounds). %, and empirically can capture the network locality well~(\textsection\ref{sec:eval}). \label{sec:2lhaOverview} \mlha partitions nodes into different groups (clusters) based on their affinity. \mlha starts by chunking the buffer across members in the same group. For each chunk, a node is designated as the local master (LM) for that group for aggregating locally. One of the LMs across all groups is chosen as the global master (GM) for global aggregation. Visually, the reduction trees of all chunks form a 2-level forest. Communication for \mlha is done in the following steps: \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item Each group member sends all chunks to their respective LMs (intra-group traffic only). 
\item LMs in all groups send per-group aggregated chunk to the GM for global aggregation (inter-group traffic only). \item The GM aggregates the chunk, then uses the reversed routes for propagating the globally-aggregated chunk back to the LMs. \item The LMs fan out the globally aggregated chunk to all group members. \end{enumerate} %\arvind{say something about sharding and load balancing of chunks?} %\begin{figure} % \centering % \includegraphics[width=.9\linewidth, trim=1 3 3 3,clip]{Figures/2LHA.pdf} % \caption{The first two steps in performing a \mlha for chunk 1 labelled. Gradients are aggregated locally before synchronized globally, reducing data transferred on inter-group links.} % \label{fig:2lha} %\end{figure} \mlha is described here as a two-phase process for simplicity, but the intra- and inter-group aggregation can overlap. Effective \mlha also requires load-balanced LM and GM assignments within and across groups. Later we provide an implementation that satisfies these. Table \ref{table:algoCharacterization} shows the desirable properties of \mlha. Compared to CAs, the number of rounds in \mlha does not increase with the number of nodes and, compared to PSs, it requires significantly less cross-rack bandwidth. \section{Design and Implementation of \plink} We now describe \plink, an optimized, topology-aware, and dynamic system that leverages HA for efficient cloud-based training. To optimally utilize datacenter networks, \plink must address the major challenges highlighted previously. \plink uses three components to achieve this. \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=1em] \item \marcopolo: a network probing and clustering approach to capture physical locality in the datacenter network. \marcopolo groups nodes based on their physical affinity, so intra-group links have better communication performance than inter-group links. %\arvind{say "better communication performance" rather than "lower latency"?} \item \ha: a high-performance implementation of \mlha that is codesigned to take advantage of deep learning properties. %\arvind{what do you mean by training workload?} \ha uses clustering information to distribute the aggregation workload efficiently and execute the aggregation schedule. \item \autoplink: a mechanism that tracks training performance and adjusts the current GM and LM assignments to adapt to changes in the network conditions. \end{itemize} \subsection{Capturing Network Locality with \marcopolo} \label{sec:marcopolo} For accurate network topology discovery, \marcopolo must probe quickly and should not rely on knowledge of a particular datacenter. \marcopolo: (1) probes communication links between nodes to measure pairwise node distances, (2) denoises probed distances, and (3) clusters nodes. \paragraph{Running \marcopolo probes} \marcopolo starts by issuing measurements to identify communication locality and determine pairwise node distances. Distance is defined using universal networking concepts, like latency or inverse bandwidth. \marcopolo uses two different probes: an inhouse DPDK-based~\cite{HomeDPDK74:online} probe to provide near bare-metal latency measurements for supported VMs on \azure and \ectwo, and iPerf~\cite{iPerfThe0:online}. \marcopolo runs these networking probes one-to-one. $O(N^2)$ time would be required to probe $N$ nodes if run sequentially. 
To accelerate this process, \marcopolo picks as many pairs (up to $\frac{N}{2}$) as possible in each round without having a node appear twice, to avoid interference from concurrent tests. This allows \marcopolo to probe in $O(N)$ rounds. %\noindent\textbf{All-to-all probes} are one-shot tests that all nodes participate in. \marcopolo use these tests to assess distances of nodes in a more dynamic fashion under loaded conditions. %, as all to all probe resembles the communication pattern of distributed training. \marcopolo derives pairwise distances with probe results (in case of bandwidth measurements, bandwidth are converted to distance by taking the inverse~\cite{affinity2Distance}). \marcopolo then proceeds to \textit{denoise} the collected data. %, before clustering nodes into different groups based on their physical affinity. %\arvind{be consistent with terminology. Earlier it is "augment/denoising", now it is "augment", and next it is "refining". I think "denoising" is better.} \paragraph{Denoising probe data with embedding} \label{sec:embedding} \marcopolo embeds nodes in a Euclidean coordinate space, obtaining a set of coordinates whose distances agree with the probed distances. This works to: \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item Denoise measurements by leveraging Euclidean space to approximate the physical location of nodes. %\arvind{I think we can say "which we use to approximate the physical location of nodes inside a datacenter" rather than "resembles a network topology".} %Distances in the Euclidean space end up being more reflective of network topology, likely due to denoising effects. \item Obtain a set of ``virtual coordinates'' for a clustering algorithm to identify groups. \end{enumerate} To embed nodes, we identify node coordinates ($\mathbf{v_i}$ and optional $h_i$) that minimize the following objective: \begin{equation} \sum_{i=1}^n \sum_{j=1}^{i-1} \left( (d_{i,j})^\alpha + \mathbbm{1}_h\left[h_i + h_j \right] - p_{i,j}\right)^2 \end{equation} where $n$ is the number of nodes, $d_{i,j} = \left\lVert \mathbf{v_i} - \mathbf{v_j} \right\rVert_2$ is the Euclidean distance between embedded node coordinates for nodes $i$ and $j$, $p_{i,j}$ is their probed distance, parameter $\alpha$ takes a value between 1 and 2, $h_i$ is a non-negative startup cost parameter for node i, and $\mathbbm{1}_h$ is a switch for $h_i$. We use the Adam algorithm~\cite{kingma2014adam} to optimize this. %$x$ is a tunable parameter between 1 and 2 that intuitively governs how much emphasis is put on long probed distances in the embedding process. %is an indicator denoting whether or not we are using these latency parameters. % We will describe the use of $h_i$ later in this section, but first consider the case when $\mathbbm{1}_h = 0$. \marcopolo embeds VM nodes in a coordinate space that preserves the probed distances between VM nodes. The denoising effect of the embedding process stems from its tendency to keep mutually close nodes together, which enforces our domain knowledge that VM nodes that are close to one particular reference VM node are probably close to each other as well in the datacenter. Thus, this effect has a correcting influence when the mutual-closeness property is violated by a particular observation but is observed in a majority of nodes. A lower number of embedding dimensions strengthens this effect. %Considering only the left size of equation 1 (ignoring $h$), there are 2 cases, one when $x=1$ and one when $x=2$ (the two allowed values). 
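To make the objective concrete, the sketch below evaluates the loss for one candidate embedding; the flat-array layout and names are ours, and the actual system minimizes this quantity with Adam rather than merely evaluating it.
\begin{verbatim}
#include <math.h>

/* Evaluate the embedding objective for n nodes embedded in dim
 * dimensions.  coords[i*dim .. i*dim+dim-1] holds v_i, h[i] the
 * optional per-node startup cost (use_h = 0 disables it), and
 * probed[i*n + j] the probed distance p_{i,j}.  alpha is 1 or 2.  */
double embedding_loss(int n, int dim, const double *coords,
                      const double *h, int use_h,
                      const double *probed, double alpha)
{
    double loss = 0.0;
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++) {
            /* Euclidean distance between embedded v_i and v_j */
            double d2 = 0.0;
            for (int k = 0; k < dim; k++) {
                double diff = coords[i * dim + k] - coords[j * dim + k];
                d2 += diff * diff;
            }
            /* residual (d_ij)^alpha [+ h_i + h_j] - p_ij */
            double r = pow(sqrt(d2), alpha) - probed[i * n + j];
            if (use_h)
                r += h[i] + h[j];
            loss += r * r;   /* sum of squared residuals */
        }
    }
    return loss;
}
\end{verbatim}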
These share a common intuition for why topology might be well captured. In both, we are embedding our nodes in a coordinate space where larger Euclidean distance should correspond to larger probe distance. In such a space, we have the property that, for nodes A, B, and C, if A and B are both close to C by Euclidean distance, they must also be relatively close to each other. This fits our domain knowledge, that if two nodes are close to a mutual third in a datacenter, they should also be relatively close to one another. %We have observed noise in the probe distance data that violates this mutual-closeness property, and we believe this is part of why our method is successful. Fitting Euclidean distance enforces this concept of closeness, working to remove that noise, and giving us refined distance with an improved agreement with true topology. $\alpha$ tunes how longer distances are treated in the Euclidean space: $\alpha=1$ fits the embedded distances to probe distances exactly. For $\alpha>1$, we can achieve increased compaction of distance while maintaining relative distance order. This effect is desired because small physical distances can be magnified disproportionately in the probe due to competing traffic. Setting $\alpha=2$ causes the long probed values to be ``compacted'' more than the smaller ones (but it never changes the relative order of distances). Figure~\ref{fig:legacy_why} suggests empirically, a larger $\alpha$ pushes VM nodes with short distances even closer on the embedded plane, leading to more consistent clusters with higher adjusted mutual information~\cite{vinh2010information} (0.59 vs 0.76) across 100 runs. % with a larger probe distance being represented by a moderately larger embedding distance, allowing \marcopolo to generate more consistent clusters empirically (higher Adjusted Mutual Information~\cite{vinh2010information} score across many runs). %It is important to capture long distances as accurately as possible because they are the bottleneck in \mlha. Setting $\alpha > 1$ lets \marcopolo{} focus more on the representation of long distances in the embedding process. Figure~\ref{fig:legacy_why} (right) shows that in the $\alpha=1$ case, the error for deviating from the optimal value of embedding distance ($d$) is independent of probe value $p$, but with $\alpha > 1$, the same deviation for a longer probed distance results in a larger error in the optimization objective, forcing the optimizer to focus on it. %Setting $\alpha > 1$ does exactly this for longer probe distances, which we expect to be more important. % we're effectively discounting long probe distance in the embedding: only 2X euclidean distance is required to represent a 4X probe distance. Preventing long distances in the embedded space enforces the reality in the datacenter that no distances should be significantly larger than the average distance, so disallowing long distances to be embedded exceedingly far from other nodes in face of a long probed distance (likely due to interference) tends to better capture physical affinity. %$x$ also controls whether additional focus is given to long distances when minimizing the embedding error: a larger $x$ forces the embedding space to reflect long distances most precisely, which is desirable because \mlha{}'s performance is bottlenecked by the longest distance in each hierarchy. Capturing this is crucial in later generation of the actual clusters. 
Figure~\ref{fig:legacy_why} gives a visual summary of the effect of $x$: long probed distances are more ''compressed``, and more ''carefully treated`` than short probe distances, resulting in tighter-packed embedded space that resembles datacenter, especially bottlenecks. Inspired by~\cite{Dabek:2004:VDN:1030194.1015471}, \marcopolo includes an optional parameter $h_i$, the node-specific, network-agnostic, fixed latency of sending a packet (e.g. traversing the operating system network stack). Multiple clusters are generated at once, and \marcopolo favors clusters with smaller diversity in the sizes of groups. If there is a tie, \marcopolo makes a random choice. %\arvind{more comments on the above text: (1) need to state up front what is being solved by the optimization problem; I believe it is both vi as well as hi. (2) $1_h$ seems to be a function of the type of measurement probe that is used (i.e., latency as opposed to bandwidth); if so, say it at the beginning. (3) even then, it seems like you want to have the hi factors along with the coords (i.e., $v + h - d$) as opposed to being outside. (4) what happens if you have multiple types of measurements?} %A standard approach to such a problem would just set $x=1$, fitting Euclidean distance to probe distance directly by a sum-of-squares error. In testing a variety of optimization objectives, we found the standard approach ($x=1$) often performed well, but in many cases squaring the Euclidean distance term ($x=2$) gave superior performance. This is the setting we typically use in our experiments on \marcopolo. We will attempt to gives some intuition about why this might be more effective. %First, we have useful domain knowledge, that no distances should be significantly larger than the average distance. Nodes are tightly packed in the datacenter, and so we expect their topology to be described by coordinates with no distant outliers. We do however observe such outliers in the data. Using $x=2$ allows nodes with very large probe distances to not be embedded exceedingly far from all other nodes. This is because, for instance, only 2X euclidean distance is required to represent a 4X probe distance. \begin{figure} \centering \includegraphics[width=.7\linewidth, trim= 4 4 4 4,clip]{Figures/clusters.pdf} \caption{Effect of $\alpha$: $\alpha = 2$ (right) generates more consistent clusters (higher AMI across 100 generated clusters) compared to $\alpha=1$ (left)} % This effectively means the resulting embedding space is more sensitive long distances than short ones, where it is equally sensitive when $x=1$} \label{fig:legacy_why} \end{figure} %Second, consider the fact that bottlenecks will likely come from longer distances. For this reason, it can be more important to capture long distances precisely. Figure 4 demonstrates why the $x=2$ case achieves this. If we consider an error margin in capturing probe distance (which is the error we optimize here, by sum-of-squares) we can notice a key difference between $x=1$ and $x=2$. In the $x=1$ case, the error margin in euclidean space required satisfy the probe distance margin does not depend on probe distance. In the $x=2$ case however, a larger probe distance gives us a tighter error margin in Euclidean space, effectively forcing the coordinates to capture larger probe distances more precisely. %Inspired by the Vivaldi system of \cite{dabek2004vivaldi}, we allow for "height" parameters $h_i$ for each node i. 
When this feature is enabled ($\mathbbm{1}_h = 1$), $h_i$ corresponds to an added, node-specific distance applied to any distance the node is involved in.
We found this useful for improving the system's ability to capture the ground-truth topology for some probe types (e.g., ping-style latency probes) but not for others, which is why we allow it to be disabled.
\paragraph{Grouping Nodes for \mlha}
We now outline how \marcopolo partitions VM nodes into groups for \mlha.
The goal is to generate groups that are (1) balanced, so no single group becomes a bottleneck, and (2) cohesive, so VMs in the same group have good locality.
We first compute the number of groups to generate, then determine the members of each group.
GMs are more likely to be the bottleneck during \mlha, as they must receive and aggregate messages at both levels.
For a uniform key distribution, the following term approximates the bytes-on-wire sent or received by a GM:
\begin{equation}
b = O\left(\frac{n}{k} + k\right)
\end{equation}
with $n$ total VMs and $k$ groups.
The GM exchanges a message with each node within its group ($\frac{n}{k}-1$ nodes) for local aggregation, and with every other group ($k-1$ groups) for global aggregation.
This expression achieves its minimum at $k = \sqrt{n}$, giving a natural choice for the group count.
%., and it is empirically effective with and without them.
Once the group count is selected, we use a constrained k-means clustering algorithm with k-means++ initialization~\cite{bradley2000constrained, arthur2007k} to generate balanced, locality-preserving groups.
The algorithm accepts a minimum cluster size and the number of groups to generate as input.
%\arvind{and number of groups also as input?}
For perfect balance, both parameters are set to $\sqrt{n}$.
%, where $k = \sqrt{n}$ in general.
%, we use $k = \sqrt{n}$ (as justified in \strongrandom).
Enforcing perfect balance is not optimal when VMs naturally form almost-balanced but distant clusters: a group may then contain a distant member that, at the cost of a slight imbalance, could instead be assigned to a much more cohesive group, and keeping it forces an onerous bottleneck on the local aggregation step.
Thus, we include a parameter, \textit{balance elasticity}, which permits a slight imbalance among clusters.
We empirically found the best results with values between 1.0 (perfectly balanced) and 2.0 (each group has at least $\sqrt{n}/2$ nodes).
%\arvind{define the parameter -- largest/average or largest/smallest?}
%We include optional parameter $h_i$ from equation 1 which can improve the quality of embedding, but will not affect the clustering objective and can be safely ignored.
In order to evaluate the performance of \marcopolo, we also define \strongrandom, a grouping method for \mlha that operates without considering probe distances and simply produces $\sqrt{n}$ groups of size $\sqrt{n}$ uniformly at random.
This is used later as a baseline.
\subsection{Efficient HA with \ha}
\label{sec:haimpl}
\ha transforms the grouping information from \marcopolo into a hierarchical reduction plan and efficiently executes it.
\ha takes the core design of \phub and applies it to a TCP context to accommodate cloud environments where InfiniBand is not available.
\ha supports various communication backends, including TCP, RoCE~\cite{softroce}, iWarp~\cite{softiwarp}, and InfiniBand (supported through \phub).
To avoid repetition, we only highlight the differences between \ha and \phub.
%, to maintain compatibility with all environments.
We now briefly describe \ha in the context of TCP.
%On clouds with Ethernet-only support such as EC2, \ha can still support RDMA with SoftiWarp~\cite{softiwarp} and SoftRoCE~\cite{softroce}, although we didn't find any performance benefit of doing so compared to using a native TCP backend. \paragraph{Generating an Aggregation Plan} \label{sec:generatingAggregationPlan} \ha chunks buffers into 64KB segments for better load-balancing across processor cores and overlapping of the transmission of gradients with aggregation. % but at the cost of potentially more packets. \ha uses chunk size of 64KB. % for TCP and 8KB for RDMA. %identifying the buffers and their sizes to aggregate. \ha first creates chunks of buffers. Chunks are important if the supplied buffers have various sizes and are hard to balance the load. \ha prefers smaller buffer chunks because they allow for more fine-grained overlapping of data transmission and aggregation, as each chunk can be aggregated independently to take advantage of the fact that aggregation is an element-wise operation. However, a chunk size too small is harmful as they may not saturate the network and requires excessive computation overhead: generally, we find chunk size between the network MTU size (generally 1.5-9KB) and a few hundreds of KBs to be optimal depending on the workload and environment. %Generally, once \ha has a list of chunks with roughly the same size, it finds each chunk a \textit{local master (LM)} for each clustered group of nodes. LMs take care of leaf-level aggregation of a chunk using central aggregation. \ha then assign one of the LMs as \textit{global master (GM)}, so that LMs only send per-group aggregated chunk to GM, by going through the four steps described in \ref{sec:2lhaOverview}. %Small buffers are treated differently, however: if buffers are small, the time to traverse an additional two hops (with \mlha) may not be compensated by the reduced cross rack traffic. It is better to ignore the cluster topology and treats the network as flat. \ha uses a flat topology for buffer $b$ if $lat(GM(b),LM(b)) \approx \frac{SIZE(b)}{bw(GM(b), LM(b))}$, which means the propagation latency is comparable to transmission delay. Note we use the full bandwidth as an overestimation for this purpose, because in reality the bandwidth is shared by multiple flows. This rule is usually applied to buffers of sizes of tens or hundreds of bytes. Since \ha is bottlenecked by the slowest inter-group transfer, which in turn is bottlenecked by the slowest intra-group transfer, \ha assigns chunk GMs and LMs such that each group (or node) has a number of GMs (or LMs) proportional to its cardinality. \ha uses an approximation set partition algorithm to achieve this. %Assignment of LM placements in each group follows a similar fashion. %as the aggregate available bandwidth scales with the size of a group. % Now consider the min-cut of the bandwidth for each cross rack link in the inferred cluster. The value of the bandwidth min-cut is proportional to the cardinality of the cluster.%., until it reaches the bandwidth of a physical uplink ($B_{upward}$) from ToR to the network core. % This value is currently a changeable parameter ($B_{upward}$) in \ha. %Since aggregation latency is determined by the slowest group, \ha need to balance GM assignments cross racks: more GMs should be placed in larger groups. %but up to a point. %assign GLMs to different clusters based on the min-cut value. The bandwidth min-cut for each group is estimated as the less of $B_{upward}$ and $|C| * B_{vm}$. 
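As a simplified illustration of this proportional assignment, the sketch below splits the chunk GM roles across groups with a largest-remainder allocation; it is our own stand-in for the approximate set-partition step, not \ha's actual code, and the same idea applies to LM placement within a group.
\begin{verbatim}
/* Give each group a number of GM roles proportional to its size.
 * sizes[g] is the number of nodes in group g; gm_count[g] receives
 * the number of chunks whose GM is placed in group g.             */
void assign_gm_counts(int nchunks, int ngroups,
                      const int *sizes, int *gm_count)
{
    long total = 0, rem[ngroups];
    int  assigned = 0;
    for (int g = 0; g < ngroups; g++) total += sizes[g];

    /* integer share first ...                                      */
    for (int g = 0; g < ngroups; g++) {
        gm_count[g] = (int)((long)nchunks * sizes[g] / total);
        rem[g]      = (long)nchunks * sizes[g] % total;
        assigned   += gm_count[g];
    }
    /* ... then hand leftover chunks to the largest remainders      */
    while (assigned < nchunks) {
        int best = 0;
        for (int g = 1; g < ngroups; g++)
            if (rem[g] > rem[best]) best = g;
        gm_count[best]++;
        rem[best] = -1;
        assigned++;
    }
}
\end{verbatim}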
%The same argument goes for RLM placements in each rack. \ha uses a similar mechanism to assign RLMs. This is simpler than assigning GLMs because the min-cut bandwidth of each node is equal to the bandwidth provisioned by the cloud, usually 10Gbps. Since a single TCP connection may not be able to saturate a 10Gbps link (as some cloud limits the bandwidth of a single flow~\cite{Placemen48:online}), during communication, \ha establishes multiple connections to each peer and the total transferred data is load-balanced across all connections. \ha then generates a schedule that executes the steps in \textsection \ref{sec:2lhaOverview} for each chunk. A schedule consists of a set of chunk-action pairs, where action is one of the following\footnote{With these action primitives, \ha can support arbitrary reduction graphs.}: \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item \textbf{SendTo(nids)}: send the content in the current \textit{merge buffer} to the list of nodes specified in \textit{nids}. SendTo is a non-blocking operation, and its status is inferred by whether subsequently anticipated data is received. %\item \textbf{ReceiveFrom(nids)}: receive the content in the merge buffer of \textit{nids}. ReceiveFrom is a blocking operation, and it finishes when all updates are received. \item \textbf{ReceiveFrom(nids)}: block until the chunks from \textit{nids} are received and aggregated into the merge buffer. \item \textbf{Fetch}: notifies \ha that a framework-supplied buffer is ready to be processed.%a fetch action is taken when the calling framework notifies \ha that a buffer can be read. %Depending on the framework, \ha may create a copy of this buffer in case it is modified during aggregation. \item \textbf{Deliver}: writes the content in the merge buffer back to the framework-supplied buffer. %\arvind{Deliver instead of Delivery?} \end{itemize} A schedule is represented as a DAG where dependencies are edges and nodes are primitives, avoiding false dependencies between local and global aggregation. \paragraph{Executing an Aggregation Schedule} \label{sec:2lhae} %When a job starts, \ha creates two merge buffers for each chunk (details below) and one receive buffer for each peer. %Merge buffers are padded to the nearest cache-line size for efficient reduction using SSE or AVX. \ha first performs rendezvous with an out-of-band mechanism (e.g., Redis), establishing multiple connections per pair of VM as cloud providers can restrict per-stream bandwidth~\cite{Placemen48:online}. \ha preserves intra-node locality by maintaining a load-balanced map from a chunk to a connection, and then by further associating the connection to a particular processor core~\cite{Pesterev:2012:INC:2168836.2168870}. %\ha then connects the mesh prescribed by the schedule. For TCP connections, \ha maintains two flows per network device per remote node to saturate bandwidth; for RDMA-based backends, \ha maintains one connection per network device per remote node (\ha can always materialize additional virtual interfaces as needed). The rendezvous of addresses (IP ports and QP numbers) and memory regions are performed using a central redis server. 
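Before turning to execution, it may help to picture a schedule concretely.
The sketch below shows one plausible in-memory representation of the chunk-action pairs and their dependencies; the type and field names are illustrative, not \ha's actual data structures.
\begin{verbatim}
/* One step of an aggregation schedule: a chunk paired with one of
 * the four primitives, plus the steps it depends on.  A schedule
 * is a DAG of such steps, built per chunk.                        */
enum StepKind { FETCH, SEND_TO, RECEIVE_FROM, DELIVER };

struct Step {
    enum StepKind       kind;
    int                 chunk;   /* chunk this step operates on     */
    const int          *nids;    /* peers for SendTo / ReceiveFrom  */
    int                 nnids;
    const struct Step **deps;    /* steps that must complete first  */
    int                 ndeps;
};
\end{verbatim}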
%\begin{figure} % \centering % \includegraphics[width=\linewidth, trim=0.5 0 0 3,clip]{Figures/2lhae.pdf} % \caption{\ha preserves locality by associating a chunk with a particular core and a connection.} % This effectively means the resulting embedding space is more sensitive long distances than short ones, where it is equally sensitive when $x=1$} % \label{fig:2lhae} %\end{figure} %(Figure~\ref{fig:2lhae}). %\ha groups connections by underlying network device. Each device is then associated with one poll descriptor. Each poll descriptor is further tied to a particular processor core, promoting locality~\cite{Pesterev:2012:INC:2168836.2168870}. The buffers that need communication are load-balanced across network devices first, then load-balanced again across connections. %\ha spawns worker threads to execute a routine event loop. The event loop pulls data in/pushes data out from/to connection, and polls data from \textit{ready queue}, and \textit{defer queue}. We now focus on how \ha efficiently supports the four actions, hiding communication latency and avoiding excessive synchronization. When a framework calls \code{reduce(chunk)}, \ha retrieves the thread ID, suspends it, and enqueues \code{chunk} to the ready queue. Its worker threads poll the ready queue to retrieve \code{chunk} and transition the buffer into the Fetch state, copying gradients from the supplied address, then set the state of buffer to SendTo. %If multiple buffers are available, \ha prioritizes the buffer that corresponds to the later layers of the neural network, because they are required first during backward propagation. When fetch of a buffer is done, \ha transfers the buffer state to the next step specified by the schedule, SendTo. SendTo is an asynchronous operation that simply enqueues data to a send FIFO queue. A cursor is used for each TCP connection if a send operation cannot finish. \ha enforces that the order of bytes on the wire corresponds exactly to the order in which SendTos are issued. This saves metadata overhead as only a 4B integer per flow that encodes the chunk ID is required. % Avoiding mixed transfers of different chunks for a single network flow lets \ha minimize metadata overhead, as only a 4B integer that encodes the chunk ID is required. %For RDMA connections, we use immediate field of a \code{ibv\_wr} to encode metadata and choose \code{IBV\_WR\_RDMA\_WRITE\_WITH\_IMM}. SendTo is followed by ReceiveFrom. \ha allows streaming aggregation for each buffer in ReceiveFrom state to a \textit{merge buffer}. A counter is incremented when a chunk is fully received from a peer, and when the counter reaches the target, this step concludes. \ha transitions current buffer state to SendTo or Deliver based on schedule. %the \ha worker thread aggregates that chunk to the current merge buffer (aggregation can start at a single floating point granularity, but \ha uses SSE or AVX to aggregate whenever possible). A counter associated with a merge buffer is then incremented. %When the counter reaches the expected number, ReceiveFrom for that buffer is finished, and the schedule transitions into the next step, which can be another SendTo, or Delivery. The last step of a schedule is Deliver, where \ha copies the final value to the framework-supplied address. \ha then wakes up the thread that called \code{reduce}. \ha alternates between two merge buffers per chunk for synchronous training to overlap local computation on a chunk with transfers of that chunk to peer nodes. 
%Since \ha cannot immediately reuse the merge buffer from last iteration, as the transfer to peer nodes may not finish. \ha alternates two merge buffers per chunk for synchronous training because the peers lag behind at most one iteration. %The flipping step is required because SendTo is asynchronous and can finish before data is on the wire. %If the training framework calls \code{reduce} again, the newly-fetched data will overwrite the original merge buffer. %Note that the framework does not call another \code{reduce} with the same \code{bufferID} when one is active. Since it is certain when iteration $i+1$ finishes, all merge buffers for all keys of iteration $i$ would have already been sent out (otherwise if remote peers did not receive buffers for iteration $i$, they would not participate in finishing iteration $i+1$ in the first place!). By taking advantage of training is synchronous, we can use two merge buffers to avoid waiting for a SendTo completion signal. %Not all nodes proceed at the same speed as slower nodes may see incoming data from faster nodes without having the buffer state in ReceiveFrom. In this case, \ha eagerly accepts the data in the receive buffer, without touching the merge buffer because it may be overwritten by a late \code{reduce} call. \ha bookmarks the incoming chunk ID to a deferred queue, which is polled frequently to check if any progress can be made. %Saving data to the receive buffer when not in ReceiveFrom state is safe, because each peer only sends one copy of data for each chunk. %because \ha is sure that a buffer is sent exactly once per iteration. For bookkeeping, \ha maintains a deferred queue, which is queried later. %\subsubsection{Changing an Aggregation Schedule} %\label{sec:changingPlan} %\ha dynamically swaps in new aggregation plans to react to network changes by reassigning LMs and GMs. This can be implemented efficiently as coherence is optional in DL~\cite{recht2011hogwild, Wei:2015:MCC:2806777.2806778,BSP,SSP,Litz,orpheus}.% shows it is not required to achieve high accuracy. %\ha can get away by taking advantage that learning itself is a stochastic process: by maintaining a keeping a strongly coherent view of model during normal operation and a bounded-staled, relaxed view of model during change of aggregation schedule, which is known to not affect accuracy %Schedule changes start by blocking subsequent calls to \code{reduce}. \ha then attempts to bring its polling threads to a safe point, %so that aggregation schedule for all buffers can be reset to Fetch stage. A safe point is %which is reached by first draining the send queue then transitioning the buffer state to Fetch. %, avoiding transferring old data when a new schedule is installed.\ %Each buffer is then given a 5s timeout to do so, and after which \ha signals its peers that it is ready to switch to a new schedule for that chunk. \ha recovers chunks that are affected by requeueing them to the ready queue. %\ha supports a transparent process for switching to a new aggregation schedule, with the end result being that chunks whose reductions are interrupted by schedule changes may hold values from the previous iteration, while chunks whose reductions are uninterrupted reflect the new values of the current iteration. \begin{table}[t] \centering \footnotesize %\footnotesize \begin{tabular}{|c|c|} \hline Property & \ha Optimization \\ %\hline %Wide range of buffer sizes & Size-based agg. route selection. \\ \hline Fixed comm. 
pattern & No explicit acknowledgement \\ \hline Fixed buffer size & Minimal metadata \\ %\hline %Later layer depended on first & Prioritize later layers \\ \hline One reduction per layer & Only 2 merge buffers per layer; \\ per iteration & Eagerly accepts chunks \\ \hline Training is stochastic & Switching plans quickly and cheaply \\ \hline \end{tabular} \caption{Codesigning \ha with training workload.} \label{table:codesign2lha} \end{table} Table \ref{table:codesign2lha} summarizes how \ha is designed to take advantage of properties in the distributed training workload to lower its overhead. \subsection{Reacting to Network Changes with \autoplink} \label{sec:autoplinkimpl} \autoplink collects performance information from \ha and watches for sudden changes in link conditions, reflected by the current training speed. %Instead of running \marcopolo in the background, which itself is resource intensive, \plink needs to take a lightweight approach that directly pinpoints the link bottleneck of training. The goal of \autoplink is to dynamically compensate for link changes by redistributing LMs and GMs to VMs, so the time to finish an iteration is similar at both local and global levels. %that slower VM nodes are serving less as GM or LM of chunks. %, so that nodes with less bandwidth are assigned fewer chunks. A perfect initial LM and GM assignment is hard, even if we have bandwidth probe measurements. Consider an aggregation plan $S$, where the \textit{effective bandwidth} of node $i$ to $j$ while running aggregation $S$ is $BW(i,j)$. Clearly, finding the best $S$ analytically relies on $BW(i,j)$ to be precisely measured or modeled, but $BW(i,j)$ has a circular dependency on $S$ itself. \autoplink thus makes approximations when optimizing the assignments. %, because it depends on the actual transfers dictated by $S$. %Meanwhile, $BW(i,j)$ might change due to competing traffic. To counteract this effect, the number of bytes transferred from $i$ to $j$ according to $S$, denoted by $D(i,j)$, should also reflect the change. Moreover, the time to compute model updates in different VM nodes may also differ, creating the straggler effect and adding to the complexity. %\autoplink cannot simply minimizes each link's individual transfer time, because they all share the bandwidth of a given node, nor can it minimize the maximum transfer time, because $D(i,j)$ constitutes the total transfers for all different stages $p$ in 2LHA, and thus may not overlap with other transfers. At a high level, \autoplink works in two phases: (1) a \textit{Quicktune} phase where a one-shot, global adjustment of GM and LM assignments is done to adapt to the network change immediately; and (2) a \textit{Finetune} phase where \autoplink uses a performance model to find the current performance bottleneck in the system, and moves GMs and LMs away from it in a stepwise, increasingly aggressive manner. \paragraph{Quicktune} Quicktune aims to minimize the maximum transfer time of each node. Quicktune can be best summarized formally as follows: let $GM(i,c) \in \{0,1\}$ and $LM(i,c) \in \{0,1\}$ be the boolean variables to be solved, which indicate whether node $i$ is the GM or LM of chunk $c$. Let $G(i)$ be the group of node $i$, $|G|$ the number of groups and $S(c)$ the size of $c$ in bytes. 
Our goal is to: minimize \begin{align*} \small \label{eq:quicktuneobj} \quad max(t_i = \frac{\sum_{c}S(c)(LM(i,c)|G(i)| + GM(i,c)|G|)}{\sum_{n}BW(i,n)}) \end{align*} subject to \begin{align*} \small \forall_{i,c} \quad GM(i,c) = 1 \implies LM(i,c) = 1 \\ \forall_{c} \quad \sum_{i}GM(i, c) = 1 \\ \forall_{c,g \in G} \quad \sum_{i \in G(g)}LM(i,c) = 1 \end{align*} Quicktune solves this with an approximation: it first distributes GMs to different groups, with the number of GMs assigned to each group proportional to the group aggregate bandwidth $\sum_{m \in G(i)}\sum_{n \notin G(g)}BW(m,n)$, then distributes LMs inside each group to different members in a similar fashion, using aggregate per node bandwidth. %This is effective because groups are balanced in \plink \mlha schedules, and there are many more chunks than there are nodes, practically making every node a GM of a certain chunk, so the effect of being a GM in each group can be ignored. %To retrieve $BW(i,j)$, \autoplink queries per connection stats directly from the OS~\cite{NetlinkW13:online, mathis2003web100}.% Since this bandwidth is an average reading, \autoplink filters by the \code{app\_limited} field: bandwidth measured only when the outbound throughput is not throttled by the sending application is recorded. To further reduce averaging effects, \autoplink queries this after each successful \code{write} call. \paragraph{Finetune} Quicktune is limited as it assumes constant effective bandwidth across different schedules and ignores node balance. Finetune, however, amends this by gradually \textit{evolving the current schedule}, using both the currently measured $D(i,j)$ and $B(i,j)$ to pinpoint the current bottleneck node in the system, and then moves away its load while maintaining balance based on \textit{blame}. Blame for node $i$ ($B(i)$) has two major weighted parts: \textit{time} $t(i)$ and \textit{imbalance} $l(i)$. \autoplink collects per connection stats including link $RTT(i,j)$, bandwidth $BW(i,j)$ through the OS~\cite{NetlinkW13:online, mathis2003web100}, and computes $t(i)=max_{j}RTT(i,j) + \frac{\sum_{j}D(i,j)}{\sum_jBW(i,j)}$ and normalized $\hat{t(i)} = \frac{t(i)}{mean_n t(n)}$. %Note that we use a first order model to predict reduction time, instead of measuring it directly, because actual reduction time includes overheads from other nodes and we shouldn't blame those on node $i$.% Further, we let $l(i) = \sum_j{D(i,j)}$ and weighted $\hat{l(i)} = \frac{l(i) - min_nl(n)}{max_nl(n) - min_nl(n)}$. Finally, blame is defined as $\beta \hat{l(i)} + \gamma \hat{t(i)}$.%(empirically, $\beta=1,\gamma=5$). %Note how $D(i,j)$ is simply read from the previous schedule in Finetune, a core difference to Quicktune. With blame for each node available, Finetune attempts a move of a single GM (or if not available, an LM) from the node with highest blame to the lowest, if $\frac{max_iB(i)}{min_iB(i)} >\epsilon$ (a configuration parameter). If Finetune repeatedly identifies the same bottleneck node, it moves exponentially more LMs and GMs in each step. %The number to move is reset to 1 if the bottleneck changes. %Finetune's metrics are directly inspired by those used in profiling concurrent systems, notably~\cite{Anderson:1990:QTT:98460.98518,AUDIT}, which distributes blames to active tasks (root cause) sharing the resource, and focuses on first order effects~\cite{80132,211299} (most blamed nodes first). 
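To make the blame metric concrete, the sketch below computes $B(i)$ for every node from the measured per-pair transfer sizes, bandwidths, and round-trip times; the flat-array layout and names are ours.
\begin{verbatim}
/* Finetune-style blame: D, BW and RTT are n-by-n arrays indexed as
 * [i*n + j]; beta and gamma weight load imbalance and predicted
 * reduction time, respectively.  blame[i] receives B(i).          */
void compute_blame(int n, const double *D, const double *BW,
                   const double *RTT, double beta, double gamma,
                   double *blame)
{
    double t[n], l[n];
    double t_sum = 0.0, l_min = 0.0, l_max = 0.0;

    for (int i = 0; i < n; i++) {
        double rtt_max = 0.0, d_sum = 0.0, bw_sum = 0.0;
        for (int j = 0; j < n; j++) {
            if (RTT[i * n + j] > rtt_max) rtt_max = RTT[i * n + j];
            d_sum  += D[i * n + j];
            bw_sum += BW[i * n + j];
        }
        t[i] = rtt_max + d_sum / bw_sum;  /* predicted time t(i)    */
        l[i] = d_sum;                     /* load l(i), bytes sent  */
        t_sum += t[i];
        if (i == 0 || l[i] < l_min) l_min = l[i];
        if (i == 0 || l[i] > l_max) l_max = l[i];
    }
    for (int i = 0; i < n; i++) {
        double t_hat = t[i] / (t_sum / n);
        double l_hat = (l_max > l_min)
                     ? (l[i] - l_min) / (l_max - l_min) : 0.0;
        blame[i] = beta * l_hat + gamma * t_hat;
    }
}
\end{verbatim}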
\autoplink uses a central daemon to collect performance metrics and generate new schedules, and is triggered by a sudden change (e.g., larger than 20\%) in training performance. \autoplink signals \ha to install the new schedule. %, which then proceeds with the steps in \textsection\ref{sec:changingPlan}. % to start using the new schedule. %\arvind{some comments: (1) this overall sounds very heuristicy and will likely make the reviewers complain - the less heuristicy it is, it would be better; (2) not clear we need to describe both quicktune and finetune - might be simpler to just describe finetune; that might help it sound less heuristicy; (3) try to be as precise as possible about the algo.} %\subsection{Integration with Training Frameworks} %\label{sec:integration} %Integrating \plink with a training framework is straightforward. \plink{}'s initialization requires only a list of nodes and buffer sizes/addresses from the training framework. %Reduction is done by calling \code{reduce} in appropriate places. %Here we demonstrate how \plink can be integrated with systems with training frameworks that have different design decisions. %Caffe2/Pytorch uses blocking reduction. \plink integration is done by extending Gloo~\cite{facebook35:online}'s \code{algorithm} object. Caffe2/Pytorch, by default, uses a concurrency limit to control the number of established connections per peer and simultaneous \code{allreduce} calls. \plink instead uses a single \ha instance to multiplex all requests. %MxNet uses an asynchronous callback pattern, but \plink{}'s \code{reduce} is blocking. A separate thread is launched for invoking callbacks through \code{kv\_apps}'s infrastructure when a chunk has finished. Support for MxNet is enabled by extending PSLite~\cite{dmlcpsli50:online}'s \code{van} object. %Initialization of \plink is done by hooking into the \code{init} call of \code{kv\_dist\_server}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Simple Sectioned Essay Template % LaTeX Template % % This template has been downloaded from: % http://www.latextemplates.com % % Note: % The \lipsum[#] commands throughout this template generate dummy text % to fill the template out. These commands should all be removed when % writing essay content. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[12pt]{article} % Default font size is 12pt, it can be changed here \usepackage{geometry} % Required to change the page size to A4 \geometry{a4paper} % Set the page size to be A4 as opposed to the default US Letter \usepackage{graphicx} % Required for including pictures \usepackage{float} % Allows putting an [H] in \begin{figure} to specify the exact location of the figure \usepackage{wrapfig} % Allows in-line images such as the example fish picture \usepackage{tabularx} \usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template \linespread{1.2} % Line spacing %\setlength\parindent{0pt} % Uncomment to remove all indentation from paragraphs \graphicspath{{./Pictures/}} % Specifies the directory where pictures are stored \begin{document} %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \begin{titlepage} \newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here \center % Center everything on the page \textsc{\LARGE CERN IT-SDC}\\[1.5cm] % Name of your university/college %\textsc{\Large Major Heading}\\[0.5cm] % Major heading such as course name %\textsc{\large Minor Heading}\\[0.5cm] % Minor heading such as course title \HRule \\[0.4cm] {\huge \bfseries Realtime monitoring for DMLite}\\[0.4cm] % Title of your document \HRule \\[1.5cm] \begin{minipage}{0.4\textwidth} \begin{flushleft} \large \emph{Author:}\\ Fabrizio \textsc{Furano} \\ % Your name\\ \emph{Author:}\\ Martin \textsc{Hellmich} \\ % Your name\\ \end{flushleft} \end{minipage} \\[1cm] %~ %\begin{minipage}{0.4\textwidth} %\begin{flushright} \large %\emph{Supervisor:} \\ %Dr. James \textsc{Smith} % Supervisor's Name %\end{flushright} %\end{minipage}\\[4cm] {\large \today}\\[3cm] % Date, change the \today to a set date if you want to be precise %\includegraphics{Logo}\\[1cm] % Include a department/university logo - this will require the graphicx package \vfill % Fill the rest of the page with whitespace \end{titlepage} %---------------------------------------------------------------------------------------- % TABLE OF CONTENTS %---------------------------------------------------------------------------------------- \tableofcontents % Include a table of contents \newpage % Begins the essay on a new page instead of on the same page as the table of contents \begin{abstract} It's time for dmlite and the dpm frontends to report realtime monitoring information. This document explains the main choices that have been made to achieve the best compromise between cost of development, cost of maintenance and extendability. This is a preliminary investigation, which describes the main direction and technical choices. 
Some of the technical details will have to be understood on the way, following an exploratory approach.
\end{abstract}
\section{Introduction}
Ok? Understood? Good. Let's go.
\section{The goal and the difficulties}
The purpose of the project is to contribute to the conspiracy theories. We will be happy to start contributing to the HAARP project for mind control, throwing chemical trails, fixing our coffee-based nuclear fusion device, and will wear a black tie and sunglasses forever while sleeping.\\
The secondary goal is to be able to report to external systems about the activity of a DPM site. So far we envision two kinds of external systems:
\begin{itemize}
\item an instance of the \textit{Gled Collector}, i.e. the software component that is used in the FAX and AAA deployments of the xrootd framework
\item a generic messaging broker, like the ones that feed the Dashboard application, yet not limited to it.
\end{itemize}
In the former case, the activity would have to be reported in the format supported by Gled, which is the monitoring format used by the xrootd framework. In the latter case the format will be some STOMP-based message, or possibly ActiveMQ.\\
Among the goals we cite the convenience of making the system sufficiently modular, so that the system administrator can choose which systems his instance of DPM should report to, and be informed of the consequences of his choices.\\
Either the activity information can be aggregated on the DPM site and sent to the monitoring system in bulk, or single events can be sent to the monitoring site where they are aggregated, or any level of partial aggregation in between. The choice influences the computational load and memory requirements on the DPM and monitoring sites as well as the amount of network traffic between them; it is also limited by the capabilities of both sites to perform the aggregations. We present the rationale that leads to our preferred solution in the following.
\subsection{Decorating the DMLite IO plugins}
The DMLite architecture is able to load stacks of plugins that, together, override functionalities or pass them through to another plugin. Since this project is about monitoring, we assume that we want to monitor the IO metrics generated by the service, like bytes read and written, number of files accessed, etc.\\
The feasibility of several of those measures is subject to the possibility of loading pass-through plugins in the IO stack of dmlite. Since the DMLite IO stack apparently cannot have the so-called decorated plugins, a necessary effort is to modify the DMLite API in order to support them.
\subsection{Gathering aggregated information across forks}
A DPM site provides multiple storage access protocols, which are exposed through different frontends. The different frontends are typically standard software components that are linked to DPM through the dmlite libraries. The goal of this project is to allow these protocols to send information about their activity. The difficulty is given by the fact that \textit{the architecture of dmlite is subject to the forking decisions of the frontend that is being used}. In the case of Apache for example, an instance of dmlite cannot know if there are (in the same machine) other similar instances that have been forked, according to the configuration of the Apache frontend. In this case, determining e.g.
how many bytes have been read in total in the last time interval becomes a difficult exercise, as a process will not know about the activity of others.\\
The case of the GridFTP frontend is similar to the one of Apache. Whatever internal monitoring schema we may choose, it must also work in the case of a purely forking frontend, i.e. a frontend that spawns a process in order to handle even a single request.\\
This constraint severely limits the kind of information that can be sent in realtime. For example, a report on the last file read operation can be sent easily. Instead, a report on the average number of read bytes per second is very difficult to compute. We are not aware of solutions to this problem that do not negatively impact the latency of the various operations. This is what we call \textit{aggregated} data, i.e. information that can only be computed by comparing the behaviour of all the other clients in the system.
Another consequence of not controlling forks is that a single instance of DPM will have to instantiate multiple information senders, typically one per process per machine. This aspect may be a challenge for external systems willing to collect and perform aggregations on this information.
\subsection{\label{feeding}Feeding different monitoring systems}
In the project's intentions, the system must in principle support feeding any external monitoring system; thus it has to be plugin-based. To be able to feed an external realtime monitoring system, there has to be a suitable plugin, able to talk to that system.\\
The idea is to implement monitoring data producers in the form of DMLite pass-through plugins, in a way that is similar to the (currently never used) ``dmlite profiler'' plugin.\\
These new DMLite plugins will implement both the DMLite activity collection and the specifics of sending data to an external system. Feeders for different external monitoring systems will compute different numbers and send them in different ways. This is consistent with the idea of implementing them in different DMLite plugins.\\
An interesting consequence of this is that it will be possible to feed more than one external system at the same time, by loading more than one pass-through plugin into the DMLite stack. As mentioned before, sending information that is specific to the current process (e.g. the filename being opened) is not problematic; it is more difficult to report aggregated values (e.g. the average reading speed in the last minute). The importance of sending a particular counter has to be considered on a case-by-case basis.\\
\subsection{The Open semantics}
Frameworks like xrootd or gridftp are based on the POSIX-like concept of \textit{open-activity-close} for the lifetime of a file. DMLite does not have this behavior; it is closer to the idea of the HTTP GET request, where all requests are independent, in the sense that they cannot easily be recognized as part of a ``session'' like the one represented by a POSIX open() call.\\
The consequence of this is that there is no Open in DMLite, hence reporting strictly about Open calls is not possible. What DMLite can do is report on single GET requests, which are likely translated individually by Apache into an \textit{open-read-close} sequence.
\section{Where to send realtime monitoring information}
DPM already sends Xrdmon packets to an existing Xrootd monitoring infrastructure. We assume that the DMLite configuration used for frontends that are not xrootd has to be able to feed the same infrastructure if needed.
We also assume that DMLite should be able to send information about its realtime activity to other kinds of infrastructures as well, through suitable plugins.
\subsection{Message broker}
One possibility could be to send the realtime activity reports to an ActiveMQ broker, possibly in a format that could be understood by the Dashboard applications, which (to the best of our knowledge) are the only messaging consumers of monitoring data so far. Feeding a broker requires aggregations, hence it is much more difficult; we chose to postpone this option.
\subsection{XrdMon UDP packets}
The other monitoring implementation that we consider is the one of the xrootd framework, called XrdMon, plus the infrastructure that has been built out of the GLED Collector. The GLED infrastructure is able to collect monitoring information from a very large number of endpoints and either write it locally into ROOT trees, or aggregate it and send it to an external ActiveMQ broker. Although it is actually used only in the xrootd framework, XrdMon represents quite an agnostic, POSIX-style implementation of a monitoring/reporting framework.
DPM can already send XrdMon packets, but only for the traffic that is done through its Xrootd frontend. A possibility is then to send UDP packets that have the same content as the ones sent by XrdMon, in order to utilize an already working production infrastructure.\\
The choice of using UDP has several technical strong points:
\begin{itemize}
\item Sending a UDP packet is quite an efficient operation that does not need many system resources. This is very important in a server that will likely be heavily loaded.
\item The architectural choice of having external collectors as independent services is also scalable, and can feed more complex aggregated information to other services.
\end{itemize}
The maintainer of the GLED collector agrees to give support and is available to work on the GLED collector so that it handles the corner cases that DPM may generate in its usage of the XrdMon protocol. Some of these corner cases may be related to the absence of an Open() primitive in some protocols like HTTP.
\subsubsection{Application-global state}
The XrdMon monitoring schema relies on multiple globally-synchronized values. They are used to identify related events, keep a (loose) ordering of the packets, and distinguish services. A list of the different types can be found below.
In Xrootd, the definition is clear: there is one Xrootd process running, and global means process-global, i.e.\ state shared across all threads; Xrootd and Cmsd report as different entities. For dmlite we can have multiple processes running for each frontend, so we can decide between three possible definitions of global. Global state can be process-global as in Xrootd (then e.g.\ each httpd process would advertise itself as another dpm server), machine-global (then both httpd and gridftp share information), or frontend-global (then all httpd processes advertise as belonging to the same dpm server).
The first option results in finer differentiation than Xrootd and is problematic. Since we don't control the forking behaviour of the frontends, there could be many hundreds of processes running. This happens with httpd started in fork mode. The second option is problematic as well.
While the Xrootd monitoring schema allows including information differentiating program versions, custom names, or ports to discriminate the frontends, sharing certain information is semantically difficult (e.g.\ the gridftp service is likely to be restarted separately from the httpd service, so sharing the server startup time makes no sense).
The third option, using frontend-global values, is different from Xrootd implementation-wise, but very similar semantically. It is necessary to implement cross-process communication, but not across frontends. If a common system for storing state or passing messages is used, we must make sure to keep different namespaces for the frontends.
\begin{description}
\item[server startup time] allows identifying unique IDs across server restarts
\item[server ID] the server machine
\item[service ID] the running service (the process)
\item[packet sequence] sequence number in [0, 255] to allow packet reordering on the collector
\end{description}
\section{Development priority}
The first objective is to build a prototype that sends out XrdMon-compatible messages. It is the simplest option from an implementation point of view: the message constructs are simple and do not require including additional libraries, the amount of aggregation we have to do on the storage side is limited, and we can choose to support only a part of all available messages.
Most of the work needed to support the XrdMon protocol is a prerequisite to support sending stomp messages. Sending stomp messages will also require us to implement an infrastructure to create messages with information from different sources and to schedule and send them, but just in another serialization format, so the work done for XrdMon will not be lost.
The XrdMon design is also already successfully used to send monitoring information from Xrootd servers in large-scale scenarios (AAA, FAX and USCMS), so it is already proven that it works with an acceptable overhead for storage systems. Sending stomp messages directly from a storage system has not been tried yet.
The first implementation will support file access and redirection messages (as described in the table below). These are the most important message types and give an important view on the usage of the system. They also serve as good candidates for checking whether we can implement the XrdMon protocol fully, as we need to implement both single-message packet sending as well as message streams.
%we do the gled/xrdmon, leaving stomp and broker for the future, as the result would be more uncertain
\section{The system}
We chose to send telekinetic information, through Dark Matter brokers. After it's unveiled, Aliens will come to stop us, so we can welcome their invasion.
recall that we can't do aggregations
stack instance dictid
dmlite funcs to instrument.
IOHandler likely + where to read/write for the redir \subsection{Mapping of dmlite API function to XrdMon Messages} \begin{tabularx}{\textwidth}{p{1.5cm}|l|l| >{\setlength{\baselineskip}{0.75\baselineskip}}X} XrdMsg code & XrdMon message & type dmlite function & notes \\ \hline \hline = & server id & GetServIDorCreateNew() & {\footnotesize synchronizes the creation of the new id when the first primitive is called after the configuration} \\ d & dictid user/path & TBDecided & {\footnotesize this ID will give a unique value that is valid on this host, but across different processes and process runs} \\ f & f-stream & \\ & isOpen & XrootdMonitor::XrootdMonitor \\ & isClose & XrootdMonitor::~XrootdMonitor \\ & isDisc & \\ & isTime & \\ & isXFR & \\ i & dictid user/info & \\ p & file purge & \\ r & redirect & \\ u & dictid user/auth & \\ x & file transfer & \\ \end{tabularx} \subsection{Buffering} Packets are not sent immediately. They are rather buffered in sequences that are managed by a global static object. The sequence is flushed when:\\ \begin{itemize} \item the buffer is full \item when the corresponding Profiler Factory is destroyed \item at regular time intervals, on the order of 5-60 seconds \end{itemize} \section{Typical deployment} Any deployment where dmlite is involved will be a possible place for DMLiteMonitor. This includes the DPM deployments making use of the following frontends: \begin{itemize} \item HTTP/WebDAV \item GridFTP v2 \item DPM-xrootd \end{itemize} Also the Dynamic Federations project will benefit from DMLiteMonitor, and will be able to report about global redirection decisions. Beware, the xrootd frontend will report by itself. In the case of the deployment of DPM with the xrootd frontend, the DMLite monitoring for the stack that is associated to DPM-xrootd must be kept disabled, otherwise every action will be notified twice.\\ \section{Configuration parameters reference} TBD \begin{thebibliography}{9} \bibitem{contact} Contact, Carl Sagan ISBN: 978-0671004101 \bibitem{alice} Alice in Wonderland, Lewis Carroll ISBN: 978-1619490222 \bibitem{xrd} The xrootd.org homepage \verb"http://www.xrootd.org" \bibitem{gled} Gled is a Generic Lightweight Environment for Distributed computing \verb"http://www.gled.org/" \bibitem{gledwhitepaper} Gled - an Implementation of a Hierarchic Server-Client Model Matev\v{z} Tadel \verb"http://www.gled.org/docs/gled/gled-ihsm.pdf" \bibitem{activemq} Apache ActiveMQ The Apache Foundation \verb"http://activemq.apache.org/" \end{thebibliography} \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \emph{DRAFT}\\ \end{document}
\documentclass{article} \usepackage{tocloft} \include{common_symbols_and_format} \renewcommand{\cfttoctitlefont}{\Large\bfseries} \begin{document} \logo \rulename{Exponential Trading Rule - Three factor} %Argument is name of rule \tblofcontents \ruledescription{These representations have the position dependent on an exponentially weighted average of the prior values of research and price. The lookback length, $\lookbacklength$, determines how far back the rule looks. The decay rates, $\decaycoefficientone$, $\decaycoefficienttwo$ and $\decaycoefficientthree$, and amplitudes, $\amplitudecoefficientone$, $\amplitudecoefficienttwo$ and $\amplitudecoefficientthree$ coefficients determine the contribution from each exponential. The expression is normalised to be dimensionless by division by the weighting factors and current price.} \ruleparameters {Window size}{10}{This is the number of time steps over which exponential contributions are sourced.}{$\lookbacklength$} {Exponential decay rate for price m, \break m $\in \{1,2,3\}$}{0.0}{These are the decay factors that reduce older contributions from the price series.}{$\decaycoefficient_m^{\price}$} {Exponential decay rate for research m, \break m $\in \{1,2,3\}$}{0.0}{These are the decay factors that reduce older contributions from the research series.}{$\decaycoefficient_m^{\research}$} {Amplitude of price contribution m, \break $m \in \{1,2,3\}$}{-0.1}{These factors scale the overall contribution from past values of price.}{$\amplitudecoefficient_m^{\price}$} {Amplitude of research contribution m, \break $m \in \{1,2,3\}$}{-0.1}{These factors scale the overall contribution from past values of price.}{$\amplitudecoefficient_m^{\research}$} \stoptable \section{Equation} \begin{equation} \contribution^{\price}(\dummyiterator,\dummyiteratortwo) = \sum_{\dummyiteratorthree = 1}^{\dummyiteratortwo} e^{- \frac{ \dummyiterator}{e^{-\decaycoefficient_\dummyiteratorthree^\price}}} \end{equation}\\ \begin{equation} \contribution^{\research}(\dummyiterator,\dummyiteratortwo) = \sum_{\dummyiteratorthree = 1}^{\dummyiteratortwo} e^{- \frac{ \dummyiterator}{e^{-\decaycoefficient_\dummyiteratorthree^\research}}} \end{equation}\\ \begin{equation} \bigcontribution_{\dummyiteratortwo \price} = \amplitudecoefficient_m^\price \frac{\sum_{\dummyiterator=0}^{\lookbacklength} \price_{\currenttime - \dummyiterator} \contribution^{\price}(\dummyiterator, \dummyiteratortwo)}{\sum_{\dummyiterator = 0}^{\lookbacklength} \contribution^{\price}(\dummyiterator, \dummyiteratortwo)} \end{equation}\\ \begin{equation} \bigcontribution_{\dummyiteratortwo \research} = \amplitudecoefficient_m^\research \frac{\sum_{\dummyiterator=0}^{\lookbacklength} \research_{\currenttime - \dummyiterator} \contribution^{\research}(\dummyiterator, \dummyiteratortwo)}{\sum_{\dummyiterator = 0}^{\lookbacklength} \contribution^{\research}(\dummyiterator, \dummyiteratortwo)} \end{equation}\\ \begin{equation} \position_{\currenttime} = \frac{({\bigcontributionone}_{\price} + {\bigcontributionone}_{\research} + {\bigcontributiontwo}_{\price} + {\bigcontributiontwo}_{\research} + {\bigcontributionthree}_{\price} + {\bigcontributionthree}_{\research})}{\price_{\currenttime}} \end{equation} \\ where $\price_\currenttime$ is the price at time $\currenttime$, $\research$ is the value of the research series, $\amplitudecoefficientone$, $\amplitudecoefficienttwo$ and $\amplitudecoefficientthree$ are the amplitude coefficients, $\decaycoefficientone$, $\decaycoefficienttwo$ and $\decaycoefficientthree$ are the 
decay rate coefficients and $\position$ is the resultant fractional portfolio investment.\\ Intuitively, the $\contribution$ are the exponential weightings for each historical value in the lookback period, with $\bigcontribution$ the normalised sum of all the contributions, scaled by the amplitude coefficient. The total contributions are then scaled by the current price to give a dimensionless fraction of the portfolio to invest. \hspace{200mm} \hspace{200mm} \keyterms \furtherlinks \end{document}
\documentclass[12pt, titlepage]{article} \usepackage{booktabs} \usepackage{tabularx} \usepackage{hyperref} \hypersetup{ colorlinks, citecolor=black, filecolor=blue, linkcolor=red, urlcolor=blue } \usepackage[round]{natbib} \input{../Comments} \begin{document} \title{Runge-Kutta (RK) Generator} \author{Alexander Schaap} \date{\today} \maketitle \pagenumbering{roman} \section{Revision History} The latest version can be found at \url{https://github.com/aschaap/cas741}.\\ \noindent \begin{tabularx}{\textwidth}{p{3cm}p{2cm}X} \toprule {\bf Date} & {\bf Version} & {\bf Notes}\\ \midrule October 16 & 1.0 & Initial draft for presentation\\ October 25 & 1.1 & Processed feedback and completed report\\ December 18 & 1.2 & Final version, processed issues and new feedback\\ \bottomrule \end{tabularx} ~\newpage \section{Symbols, Abbreviations and Acronyms} Also see the \href{../SRS/CA.pdf#ssec:symbols}{Table of Symbols} in the SRS at \url{https://github.com/aschaap/cas741}.\\ \noindent \renewcommand{\arraystretch}{1.2} \begin{tabular}{l l} \toprule \textbf{symbol} & \textbf{description}\\ \midrule ODE & Ordinary Differential Equation\\ RK & Runge-Kutta\\ T & Test\\ \bottomrule \end{tabular}\\ %\wss{symbols, abbreviations or acronyms -- you can reference the SRS tables if %needed} \newpage \tableofcontents %\listoftables %\listoffigures \wss{If you don't have tables or figures, you can comment out these headings.} \als{I've commented them out} \newpage \pagenumbering{arabic} %This document ... \section{General Information} This document is a test plan for the RK Generator family of ODE solvers. \subsection{Purpose} This document specifies the (black-box) verification and validation tests for the RK Generator. The intended audience are testers aiming to test the system and developers maintaining the software. This document is expected to be updated when development of the system proceeds. \subsection{Scope} The scope of the testing of the RK Generator is restricted to correctness, in some cases within an expected margin of error. Performance, while implied, should also be measured. The implementation could be compared to an unstaged version of the same code (plain OCaml code rather than MetaOCaml; no generation). Alternatively, a comparison against \textsc{\textsc{Matlab}} would give an indication of performance compared to the competition. This is a secondary goal though. The tests in this document follow from the \href{../SRS/CA.pdf}{SRS} available at \url{https://github.com/aschaap/cas741}. \wss{An explicit web-link to your GitHub repo would be nice.} \als{I've added one to the revision history as well as here.} \wss{Reference your SRS document}\als{Done.} \subsection{Overview of Document} The structure of this documents is based on \cite{Smith2006}. The next section will outline the software to test, the testing team and the proposed approach. Subsequently, system tests are covered, and ultimately unit tests will be added in the section after that. \section{Plan} This section begins with a brief description of the software being tested, followed by a succinct description of the test team before discussing the automated approach proposed, the tools expected to accomplish this, and the manual testing necessary. %Testing will occur in various ways: automated unit testing, manual code %walkthroughs, and performance comparison. \subsection{Software Description} The RK Generator is a metaprogramming approach to a library of ODE solvers. 
Given a choice of RK method, an ODE, an interval, a step size and an initial condition, it generates a function that will provide values for input on the given interval. This function will be unique to the inputs provided. During the creation of this function, metaprogramming is also employed for performance reasons. The parent program that called the generator can subsequently use the generated code to solve specific points within the interval. \subsection{Test Team} %\wss{Probably just you. :-)} The test ``team'' will comprise Alexander Schaap. \subsection{Automated Testing Approach} Automated tests exist primarily in the form of unit tests, though some ``system'' tests will do black-box verification. \subsection{Verification Tools} %OUnit %Make %Continuous integration %Code coverage - bisect_ppx %\wss{Thoughts on what tools to use, such as the following: unit testing % framework, valgrind, static analyzer, make, continuous integration, test % coverage tool, etc.} For unit tests, OUnit seems an obvious choice. For automating compilation and running unit tests as a primitive form of continuous integration, Make is sufficient. Unit tests give us regression testing for free. A tool exists to measure code coverage. It is called bisect\_ppx and appears actively maintained. It remains to be seen how useful it is for MetaOCaml code. While code coverage has its weaknesses, it should suffice because there is little branching in this software. % \subsection{Testing Schedule} % See Gantt Chart at the following url ... \subsection{Non-Testing Based Verification} %\wss{List any approaches like code inspection, code walkthrough, symbolic % execution etc. Enter not applicable if that is the case.} Ideally, unit tests capture all the desired behaviour. However, manual inspection of the generated code by the developer will be required. The manual assessment should determine whether modifications to the generator are beneficial to either making the code more readable or improving the results in terms of accuracy. While readability isn't strictly a requirement, simple (and therefore more readable) code is often a decent indicator of correctness in metaprogramming. A common approach used for metaprogramming is outputting generated code to text and comparing it to a previous version. Any differences are subject to manual inspection. Note that the correctness of the previous code is not guaranteed; this is also subject to manual inspection. \wss{In your case having a build infrastructure where you can do a diff between the generated code and some ``golden'' version of the generated code, would be nice. This won't tell you if your generated code is correct, but it will tell you if you generator starts creating different code. This either means you have introduced an error, or you have a new ``golden'' version to put in place. This is the approach used for ggk and for Drasil.} \als{I've added a summary of this approach.} \section{System Test Description} Also known as black-box tests, system tests verify the software as a whole. This is done without assumed knowledge of the internal workings. This section covers the system tests along with non-functional tests. \subsection{Tests for Functional Requirements} The intent is to test using problems with known solutions to get around oracle problem. Results of other ODE solvers such as \textsc{Matlab} or Octave can be used for comparison purposes in case of unknown solutions. Currently, all tests have known solutions. 
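To illustrate how a comparison against a known solution can be scripted, the following sketch computes the classical RK4 approximation for the first test case below ($x'(t) = 5x - 3$, $x(2) = 1$, $h = 0.1$) and tabulates the relative error against the exact solution $x(t) = \frac{2}{5}e^{5(t-2)} + \frac{3}{5}$. It is only a reference computation written in Python for clarity; it is not the generated MetaOCaml code nor the OUnit test harness.
\begin{verbatim}
# Reference computation for the first solution-correctness test:
#   x'(t) = 5x - 3,  x(2) = 1,  2 <= t <= 3,  h = 0.1
# Exact solution: x(t) = (2/5) e^{5(t-2)} + 3/5.
import math

def f(t, x):
    return 5.0 * x - 3.0

def exact(t):
    return 0.4 * math.exp(5.0 * (t - 2.0)) + 0.6

def rk4(f, t0, x0, t_end, h):
    t, x = t0, x0
    values = [(t, x)]
    while t < t_end - 1e-12:
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        values.append((t, x))
    return values

if __name__ == "__main__":
    for t, x in rk4(f, 2.0, 1.0, 3.0, 0.1):
        rel_err = abs(x - exact(t)) / abs(exact(t))
        print(f"t = {t:4.1f}   rk4 = {x:14.6f}   rel. error = {rel_err:.2e}")
\end{verbatim}
The printed relative errors form exactly the kind of per-step error table that the test descriptions below ask the user to inspect.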
\wss{I would like to see some tests where you compare to \textsc{Matlab} or Octave as the pseudo oracle. This will provide a greater variety of tests.}
\als{I'm afraid I'll have to move that to phase 2; I've updated the corresponding section to reflect this.}

It is assumed that these tests test both the generator and the generated code. The inputs listed below are therefore for the generator, but inputs are also needed for the generated code. Since the generator creates a function that takes one input, the input for the generated code is provided one value at a time (making the generated code run multiple times, which should be the typical use case).

\subsubsection{RK4 \& RK2}

These tests are to be repeated for each method. The expected results for each method are described separately.

\paragraph{Solution Correctness Tests}

\begin{enumerate}

\item{Low-order-ODE\footnote{Adapted from: \url{http://mathinsight.org/ordinary_differential_equation_introduction_examples}}\\}

Type: Functional, Dynamic

Initial State: N/A

Input (to generator): $x'(t) = 5x - 3, \quad x(2) = 1, \quad 2 \leq t \leq 3, \quad h = 0.1$

Input (to generated code): $t$ = 2, 3, 0, -1024, 2.2, 2.5, 23

\wss{What are these numbers? The final value of $t$, or something else?}
\als{I've added $t$ to make this unambiguous.}

Output (RK4): printed values for the user to compare to the exact solution $x(t) = \frac{2}{5} e^{5(t-2)} + \frac{3}{5}$ for the same inputs to see whether they are acceptably close.

Output (RK2): similar to RK4, but with a higher margin of error in most cases (should be within $O(h^3)$).

How test will be performed: as part of the automated tests executed by the Makefile.

\wss{Rather than make the vague statement that you are taking floating point errors into account, I think your goal should be to summarize the relative error for each test case. You can make a table of errors. Rather than specify a priori what the error should be, the job of your testing is to simply describe the results. Judging the acceptability of the errors is something that can be done by potential users of the software.}
\als{I've changed the test cases to reflect this change.}

\item{High-order-ODE\footnote{Adapted from: \url{http://mathinsight.org/ordinary_differential_equation_introduction_examples}}\\}

Type: Functional, Dynamic

Initial State: N/A

Input (to generator): $x'(t) = 7 x^2 t^3, \quad x(2) = 3, \quad 2 \leq t \leq 4, \quad h = 0.1$

\wss{Why not just write $x'(t) = 7x^5$? (product rule for exponents). Is there supposed to be a $t$ on the right hand side of the equation? Looking into this further, I believe you mean $7 x^2 t^3$.}
\als{Yes, my mistake. Thank you for pointing this out, I've corrected it.}

Input (to generated code): $t$ = 2, 3, 0, 2.5, -89, 8192

Output (RK4): printed values close to the exact solution $x(t) = \frac{-1}{\frac{7}{4}t^4 - \frac{85}{3}}$; the error should be within $O(h^5)$. The user can then decide whether these results are acceptable.

Output (RK2): similar to that of RK4, but with the larger margin of error (of $O(h^3)$).

How test will be performed: as part of the automated tests executed by the Makefile.

\item{Stiff-ODE\\}

Type: Functional, Dynamic

Initial State: N/A

Input (to generator): $y'(t) = -15y(t),\quad y(0) = 1, \quad 0 \leq t \leq 1, \quad h = 0.1$.
Input (to generated code): $t$ = -1, 0, 0.4, 1, 1.5 Output (RK4 \& RK2): A printed inaccurate numeric result that deviates significantly from the exact solution of $y(t) = e^{-15t}$ with $y(t) \rightarrow 0$ as $t \rightarrow \infty$ (or even as $t \rightarrow 1$), also for the user to manually inspect. How test will be performed: as part of the automated tests executed by the Makefile. %\item{test-id2\\} % %Type: Functional, Dynamic, Manual, Static etc. % %Initial State: % %Input: % %Output: % %How test will be performed: \end{enumerate} \wss{I agree with this. You should mention it under non-testing based verification. You should also give more detail on how it is going to be done, who is going to do it, etc. Ideally, there will be a specific approach for the verification, including a pointer to the specific algorithm that the final code should resemble.} \als{I've moved these sentences and expanded on the topic under non-testing based verification.} \wss{Does the ... mean something? Are more details coming?} \als{No, I removed them.} \subsection{Tests for Nonfunctional Requirements} The implied nonfunctional requirement to be tested is performance. There is no ``wrong'' result though. The goal is to provide insight into the performance in comparison to obvious alternatives. \subsubsection{Performance Tests} This section discusses tests that compare the performance of the implementation to that of existing implementations. These tests are part of phase two, which will commence in January 2018, due to performance being a secondary goal and the limited time in which everything must be done. \paragraph{Calculation Speed Tests} \begin{enumerate} \item{Comparison against unstaged (plain OCaml) code\\} Type: Non-Functional, Dynamic Initial State: N/A Input/Condition: Input to any one of the functional tests will be provided to both versions of the code, but the objective is to compare running times Output/Result: identical outputs that correspond to the expected output of the respective test chosen, but the staged (MetaOCaml) code should be faster How test will be performed: via Make and OUnit \item{Comparison against \textsc{\textsc{Matlab}}\\} Type: Non-Functional, Dynamic Initial State: N/A Input: Input to any one of the functional tests above will be provided to both \textsc{Matlab} and the generator (note that \textsc{Matlab} could require a different input format) Output: Similar output but with the expectation that \textsc{Matlab} may be more accurate. \textsc{Matlab} will also likely be faster (excluding start-up time). How test will be performed: via Make \wss{More detail needs to be provided. How are the results going to be summarized? What equation is going to be used to compare the generated code to the ``competition''? Percent relative change seems like a good summary statistic to me. Will there be a bar graph showing the results, or some other approach?} \wss{For many of your tests they might be too simple to show much of a difference in performance. You will need a long integration time for differences to be observable.} %\item{Comparison against \textsc{Matlab}\\} % %Type: Functional, Dynamic, Manual, Static etc. % %Initial State: % %Input: % %Output: % %How test will be performed: \end{enumerate} %\subsubsection{Area of Testing2} % ... \subsection{Traceability Between Test Cases and Requirements} N/A - No requirements, just a commonality analysis (which implies requirements). 
% \section{Tests for Proof of Concept} % \subsection{Area of Testing1} % \paragraph{Title for Test} % \begin{enumerate} % \item{test-id1\\} % Type: Functional, Dynamic, Manual, Static etc. % Initial State: % Input: % Output: % How test will be performed: % \item{test-id2\\} % Type: Functional, Dynamic, Manual, Static etc. % Initial State: % Input: % Output: % How test will be performed: % \end{enumerate} % \subsection{Area of Testing2} % ... \section{Unit Testing Plan} Unit tests can be divided into two distinct parts. The first set verifies portions of the generator, while the second part has to verify the generated code. One could easily assume that verifying a generator is similar to verifying other programs, and that tools can be reused for this purpose. However, a common approach at the moment is to write generated code to a file and use ``diff'' to compare the two. By comparing the generator's output to previous generator output for identical inputs the tester can manually determine by careful examination whether any differences are appropriate. These tests will have to be defined after implementation. \bibliographystyle{plainnat} \bibliography{../References,../ReferencesA} \newpage %\section{Appendix} % %This is where you can place additional information. % %\subsection{Symbolic Parameters} % %The definition of the test cases will call for SYMBOLIC\_CONSTANTS. %Their values are defined in this section for easy maintenance. % %\subsection{Usability Survey Questions?} % %This is a section that would be appropriate for some teams. \end{document}
\documentclass[]{article} \usepackage{graphicx} \usepackage{longtable} \usepackage{hyperref} \usepackage{color} \usepackage{soul} \usepackage{amsmath} \DeclareRobustCommand{\hlcyan}[1]{{\sethlcolor{cyan}\hl{#1}}} \DeclareRobustCommand{\hlgreen}[1]{{\sethlcolor{green}\hl{#1}}} \DeclareRobustCommand{\hlred}[1]{{\sethlcolor{red}\hl{#1}}} \DeclareRobustCommand{\hlyellow}[1]{{\sethlcolor{yellow}\hl{#1}}} \DeclareRobustCommand{\hlorange}[1]{{\sethlcolor{orange}\hl{#1}}} %opening \title{Session 2} \author{Fakhir} \begin{document} \maketitle \section*{Solutions} Break the function into two pieces, namely for $x>0$ and $x<0$. \[ f'(x)= \begin{cases} 1, & \text{if } x>0 \\ -1,& \text{if } x<0 \\ undefined, & \text{if } x=0 \end{cases} \] The reason it is undefined at $x=0$ is because $\Delta x$ can have one end on one piece of the function and the other end on different piece of the function. This implies that even if $\Delta x$ has same magnitude in different cases but its slope can drastically vary depending on how we choose the end points, as shown in the following figure: \begin{center} \includegraphics[scale=1]{images/section1_1} \end{center} \hlred{The above explanation may not be correct.} \\~\\ The arguments given in the MiT's solution appear to be much more convincing. \end{document}
\appendix \chapter{Supplementary} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_1:1:1-1000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 1000} \label{fig_all3-time-series_1:1:1-1000} \end{subfigure} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_8:1:1-1000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 1000} \label{fig_all3-time-series_8:1:1-1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_1:1:1-2000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 2000} \label{fig_all3-time-series_1:1:1-2000} \end{subfigure} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_8:1:1-2000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 2000} \label{fig_all3-time-series_8:1:1-2000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_1:1:1-4000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 4000} \label{fig_all3-time-series_1:1:1-4000} \end{subfigure} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_8:1:1-4000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 4000} \label{fig_all3-time-series_8:1:1-4000} \end{subfigure} \caption[Time-series of all 3 cell types under different limitations]{Time-series of all 3 cell types under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions, initial total seeding (subfigures).} \label{fig_all3-time-series} \end{figure} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{All3_efficiency-mixed_1:1:1} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1 } \label{fig_all3-mixed_1:1:1} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{All3_efficiency-mixed_8:1:1} \caption{High $T^p$ seeding- 8:1:1 :: $T^p:T^+:T^-$} \label{fig_all3-mixed_8:1:1} \end{subfigure} \caption[Final ratio of all 3 cell types runs under different limitations for each cell type]{Final ratio of all 3 cell types runs under different oxygen limitations of $T^-$ (columns), oxygen limitations of $T^+$, $T^p$ (rows) and initial seeding proportions (subfigures). $T^p$ has moderate testosterone limitation.} \label{fig_all3-mixed} \end{figure} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{figures/All3_therapy-standardization-total} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{figures/All3_therapy-standardization-total-sw} \end{subfigure} \caption[Standardization of threshold considering all 3 cells for adaptive therapy]{Standardization of threshold considering all three cells for adaptive therapy, Columns: On-Off threshold, Rows: $T^p:T^+:T^-$ Seeding} \label{fig_therapy-AT_standardization-Total} \end{figure}
% Diese Zeile bitte -nicht- aendern. \documentclass[course=eragp]{aspdoc} \input{commands.tex} \input{listing_styles/riscv_lstlisting_style.tex} \usepackage{placeins} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% TODO: Ersetzen Sie in den folgenden Zeilen die entsprechenden -Texte- % mit den richtigen Werten. \newcommand{\theGroup}{IhreGruppennummer} % Beispiel: 42 \newcommand{\theNumber}{IhreProjektnummer} % Beispiel: A123 % Authors, sorted by last name \author{Lukas Döllerer \and Jonathan Hettwer \and Johannes Maier \and Tobias Schwarz \and Felix Solcher} \date{Summer semester 2021} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Diese Zeile bitte -nicht- aendern. \title{Gruppe \theGroup{} -- Abgabe zu Aufgabe \theNumber} \title{Static binary translation from RISC-V to x86\_64} \begin{document} \maketitle \tableofcontents \pagebreak \section{Motivation and Problem Statement} % Bullet points: % - why to translate binaries? % - to use a program on another platform (e.g. source code lost / not available) % - (academic purposes :-)) % - How to translate binaries? % - DBT (dynamic binary translation): translate sequence of instructions and execute them % - SBT (static binary translation): translate binary to target ISA, then execute it (pro: higher % execution time) % - comparable to difference between interpreted and compiled programs % - our problem: SBT from RISC-V to x86-64 % - more academic problem, because most programs are available on x86-64 % TODO: Ahead Of Time In order to natively execute machine code, a processor which runs this machine code is needed. But there are many different processor families with different instruction set architectures (ISA) for which programs are developed. To run a program which is written for an ISA one does not have access to a system running this ISA, one can translate it and execute it on another system. There are three main approaches to do this: \par The first option for executing programs written for a different ISA is to use an emulator. It emulates a different execution environment by interpreting the program's instructions just-in-time~\cite{binary_translation}. The development is simple, but it is a rather slow method. \par The second option is to use dynamic binary translation (DBT). It works like an emulator but uses caching to store already translated instruction sequences for future use. This increases the execution speed compared to an emulator because often used instructions do not need to be translated multiple times.~\cite{binary_translation} \par As the third option, one can statically translate the full binary. The program is rewritten for the target system's ISA. Translation and execution are separated phases, which means the binary only has to be translated once for running it multiple times. This increases execution speed compared to DBT. This approach is called static binary translation (SBT).~\cite{binary_translation} \par The current rise of RISC-V processors makes it probable that we will see a need for RISC-V to x86\_64 binary translation in the near future. Even though x86\_64 is still a more widely used ISA than RISC-V, a lot of companies invest in a shift towards the RISC-V ISA. This revolution makes ISA incompatibilities a future problem we want to solve today.~\cite{riscv_rises} Another important use case for the SBT is translating binaries where the source code was lost. Programs without source code can not be compiled for another target ISA, but they can be translated using SBT. 
\par Therefore we developed a static binary translator from RISC-V to x86\_64 which uses an intermediate representation (IR). The instruction decoding and code generation are separated by using the IR as an abstract data structure. Our aim is to be faster than the popular dynamic binary translator QEMU.
\par The following chapters will provide some background information for the RISC-V architecture, the relevant differences to the x86\_64 architecture (Section~\ref{sec:background}) and our general approach to static binary translation (Section~\ref{sec:approach}). Afterwards, the distinct parts of the translator, namely our intermediate representation (Section~\ref{sec:ir}), the RISC-V lifter (Section~\ref{sec:riscv_lifter}), the optimizer (Section~\ref{sec:optimizations}), and the x86\_64 code generator (Section~\ref{sec:generator}) are further explained. Section~\ref{sec:floating_point} describes our approach to supporting floating point RISC-V instructions and the difficulties it presents. The last section (Section~\ref{sec:evaluation}) evaluates the quality and performance of our implementation using benchmark tests.
\section{Background}\label{sec:background}
This chapter provides important background information, which is required for our approach. This includes an overview of the RISC-V instructions, standard calling convention~\footnote{When referring to the \textit{standard calling convention}, the RVG calling convention is meant.} and registers, as well as relevant differences between the RISC-V and the x86\_64 architecture.
\subsection{Short Overview of RISC-V}
According to the RISC-V specification, RISC-V is an open and freely accessible ISA, which consists of a base ISA for integer computations. It also offers optional extensions that support general-purpose software development. There is also the option to implement custom extensions. The RISC-V ISA is available for building 32-bit and 64-bit based applications and operating system kernels. Because of its minimalistic nature, it is well suited for custom hardware implementations.~\cite{rvspec}
\par Currently, there are standard extensions for multiplication and division (M), for atomic instructions (A), for floating point arithmetic (F, D), for compressed instructions to reduce the code size (C) and for control and status registers (Zicsr). Together with the RV64 Base ISA, they are called \texttt{rv64imafdc} or, for short, \texttt{rv64gc}. More standard extensions are planned.~\cite{rvspec}
\par RV64I provides 31 64-bit wide integer registers (\texttt{x1}--\texttt{x31}) and a hard-wired zero register (\texttt{x0}). The floating point extensions add instructions for 32 floating point registers (\texttt{f0}--\texttt{f31}). They also add a floating point control and status register (\texttt{fcsr}). The instruction words have a fixed width of 32 bits, with the C extension introducing instructions with 16-bit wide instruction words. The standard calling convention defines \texttt{x1} as the link register and \texttt{x5} as the alternative link register. Link registers store the return address of jumps which target a subroutine and intend to return to the parent routine.~\cite{rvspec}
\subsection{Differences between RISC-V and x86\_64}
When comparing the RISC-V and the x86\_64 ISAs, most of the differences can be derived from their different design ideologies. The x86\_64 ISA is built to describe a CISC, a \emph{Complex Instruction Set Computer}.
This decision results in processors that have a limited number of registers, richer instruction sets, and variable-length instructions. Because of the complex instruction functions and formats, the logic is often implemented as a mix of microcode and hard-wired logic, executing instructions over a number of CPU clock cycles.~\cite{RISCvCISC}
\par As the name suggests, the RISC-V ISA describes a \emph{Reduced Instruction Set Computer}. This kind of microprocessor is characterized by a larger number of registers, its load/store architecture, fixed-length instruction words and a generally limited number of instructions and instruction formats.~\cite{RISCvCISC}
\par These major design differences play an important role during the binary translation. The following paragraphs describe two relevant differences in greater detail.
\par The RISC-V ISA defines a load/store architecture, meaning that a single instruction can either access the memory or perform an operation that does not access the memory. The x86\_64 ISA does not have this restriction and defines arithmetic and logical instructions which operate directly on memory. Also, RISC-V base instructions are currently always 32 bits wide (16 bits for compressed instructions), which makes loading wide immediates difficult. These are either assembled by two or more separate immediate-loading operations or loaded relative to the current instruction pointer.~\cite{rvspec} In comparison, x86\_64 instructions can contain immediate values which are up to 64 bits wide because of their variable instruction length~\cite{intel2017man}.
\par Assemblers make this difference less complicated for developers by introducing so-called \textit{pseudoinstructions}. These are instructions which perform simple tasks like loading an immediate, jumping to an address, or calling a subroutine. Most of them cannot be expressed directly as a single RISC-V instruction and are thus replaced with a common instruction pattern. This gives a translator the opportunity to detect these patterns and efficiently convert them to a more compact x86\_64 representation.
\par Knowing these differences is necessary for the translation and gives the opportunity to generate efficient, optimized code. For example, some RISC-V instruction sequences can be merged into a single x86\_64 instruction.
\subsection{Inspiration for the Intermediate Representation}
Languages for intermediate representations (IR) are often used in compilers, which means that highly advanced concepts already exist. The LLVM assembly language is an established IR, known for its deep integration into the LLVM ecosystem with existing compiler and debugger backends and its well-defined language reference. These advantages, however, come at a price: the LLVM assembly language is a complex IR with a lot of features which are not required for the purposes of this project. Our IR is inspired by the LLVM assembly language but tailored to the use case of lifting binaries. The parallels between our IR and the LLVM assembly language are the static typing, single static assignment (SSA) variables and machine-instruction-like operations.
\section{Approach}\label{sec:approach}
The translation process is divided into three main parts: instruction lifting, optimization, and code generation. This partition is made to structure the translator and to simplify the implementation of optimizations. This allows supporting other source or target ISAs in the future.
Only some modular components need to be extended or replaced to extend the ISA support of this translator.
\par The IR is tailored to support efficient binary translation. It is the data structure which contains the logic of the input binary in an architecture-neutral form. Generic optimization passes can be applied to this structure. They reduce the number or complexity of operations without altering the program's behavior.
\par The lifter reads the instruction bytes from the binary file and decodes them using \texttt{frvdec}~\cite{frvdec}. The resulting instruction objects are used to disassemble the RISC-V binary into basic blocks, operations, and variables in the IR. The code generator creates x86\_64 GNU assembler assembly based on the optimized program in the form of the IR.
\par A helper library is used to implement platform-specific or runtime-based functionality which is used by the translated binary. It includes, among other functions, an interpreter which can be used to translate instructions at runtime. The helper library is designed for specific source and target ISAs; system calls and the interpreter work in an architecture-specific way.
\par To create the translated, executable binary, the GNU assembler (AS)~\cite{gnu_binutils} and the GNU linker (LD)~\cite{gnu_binutils} are used to assemble the generated assembly and link it to the helper library.
\par Figure~\ref{fig:program_overview} depicts a schematic of the basic structure of the translator. It shows the operating parts of the translator and the processed data fragments, represented by arrows and squares respectively.
\begin{figure}[H]
\centering
\ProgramScheme{0.43}{13}
\caption{A schematic representation of the translator's internal structure as well as its input and output binaries.}\label{fig:program_overview}
\end{figure}
\par Our implementation only supports 64-bit RISC-V ELF\footnote{Executable and Linking Format} binaries which are statically linked and in little-endian format. Another requirement is that the binary conforms to the System V ABI\footnote{Application Binary Interface}. This ABI defines, among other things, the system call IDs and the calling convention. These prerequisites are checked before translating the file, using the metadata contained in the ELF file. This is required for debugging and assertion purposes.
\section{Intermediate Representation}\label{sec:ir}
As explained in Section~\ref{sec:approach}, an IR is used to create a platform-independent representation of the translated program's logic. This is not only useful to support different input and output languages in the future, but also to be able to optimize the more abstract representation. By parsing control flow and program structures, optimizations like \nameref{dead_code_elimination} and \nameref{constant_folding} can be applied to the IR code.
\begin{figure}[H]
\begin{center}
\begin{tabular}{|c | c | c|}
\hline
Integers & Floats & Special \\ [0.5ex]
\hline\hline
i64 & f64 & mt \\
\hline
i32 & f32 & \\
\hline
i16 & & \\
\hline
i8 & & \\
\hline
\end{tabular}
\caption{Types defined for the IR variables. The numbers describe the width of each type in bits. The memory token (\textit{mt}) is a purely virtual type and thus does not have a specified width.}\label{ir:types:figure}
\end{center}
\end{figure}
The IR consists of variables which are statically typed. The types are shown in Figure~\ref{ir:types:figure}. The variables follow a single static assignment form, which makes them immutable after they are initialized.
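\par To make the variable and type model more concrete, the following minimal C++ sketch shows one possible way to represent such typed SSA variables. It is an illustration only; the names (\texttt{Type}, \texttt{Variable}, \texttt{Operation}) and the exact layout are assumptions and do not necessarily match our implementation.
\begin{lstlisting}[language=C++]
#include <cstddef>
#include <cstdint>
#include <variant>

// Hypothetical sketch of the IR types from the table above.
enum class Type { i64, i32, i16, i8, f64, f32, mt /* memory token */ };

struct Operation; // produced-by link, defined elsewhere in a full implementation

// An SSA variable: assigned exactly once; its origin is either an immediate
// value, the index of a static mapper, or the operation that produced it.
struct Variable {
    Type type;
    std::variant<std::int64_t,      // immediate value
                 std::size_t,       // static mapper index
                 const Operation*>  // producing operation
        origin;
};

int main() {
    Variable counter{Type::i64, std::int64_t{42}}; // an immediate i64 variable
    (void)counter;
}
\end{lstlisting}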
Operations take variables as inputs and assign values to new output variables. Every operation performs one of the available instructions. These are abstract, assembly-like instructions defining mostly arithmetic and logical operations. A full list of all the supported instructions can be found in the Figure~\ref{figure:ir_instructions}. Variables and their operations are grouped into basic blocks. These are blocks of code which do not contain a control flow changing operation. These special operations are jumps and system calls. Subroutine calls which are defined by many ISAs are also jump instructions with additional memory accessing operations. The control flow changing operations act as connectors between basic blocks. Figure~\ref{figure:cfops} shows a list of all defined control flow changing operations. \begin{figure} \begin{center} \def\arraystretch{1.5} \begin{tabular}{r | p{0.8\linewidth}} \hline IR Operation & Description \\ [0.5ex] \hline jump & Direct jump to a known target address. \\ ijump & Indirect jump to a possibly unknown target address. \\ cjump & Conditional jump to a known target address. \\ call & Subroutine call. Also encodes a continuation address to which the subroutine is expected to return to. \\ icall & Same behavior as a normal \textit{call}, but with unknown subroutine target address. \\ return & Exit subroutine execution and continue at the continuation address of the \textit{call} instruction which lead to the subroutine execution. \\ unreachable & This operation should never be reached. On reach, it terminates the program's execution. \\ syscall & A syscall in respect to the System V ABI. \\ \hline \end{tabular} \caption{Control flow changing operations defined by the IR.}\label{figure:cfops} \end{center} \end{figure} \par Each basic block stores a list of its predecessors and successors. Predecessors are the basic blocks which jump to the current basic block, and successors are the basic blocks that are jumped to by the current basic block. This ``jump'' can be any control flow changing operation, including syscalls. \par A basic block also stores a list of input parameters. These basic block parameters are used for transferring data between basic blocks. Every variable which is linked to a static mapper (\ref{statics}) is added to the basic block parameters. These have to be filled with the correct values with every jump to a new basic block. \subsection{Single Static Assignment Form}\label{ssa} First described in 1988 for optimizing an intermediate program representation\cite{ssa_proposal}, the \emph{single static assignment form} (SSA) is used for creating and optimizing explicit control flow graphs. It follows one basic rule: ``[E]ach variable is assigned to exactly once in the program text.''~\cite{ssa_proposal} In our IR, each variable stores its source operation (if it is the result of an operation). This enables us to efficiently backtrack variable origins and e.g.\ replace variables which can be evaluated statically with constant immediates (Section~\ref{constant_folding}) or eliminate unused variables (Section~\ref{dead_code_elimination}). We introduce a \emph{memory token}, a required input for operations which read from memory and the result of operations which write to memory. It creates a dependency between operations that store and operations that load data. This should be used to keep the memory access operations in their correct order when reordering operations inside basic blocks and functions. 
A writing operation acts as a separator of reading operations. Because of the SSA variables used in the IR, the correct memory access order is automatically achieved as long as no variable is referenced before its assignment. This ensures a consistent memory access ordering and therefore the correctness of the program.
\subsection{Static Mappers}\label{statics}
To keep track of register contents between basic blocks, we introduced a concept called a \textit{static mapper}, or a \textit{static} for short. Variables which link to them can be used as inputs in basic blocks. They act as a signal to code generators that, if the control flow is transferred to the basic block from an unknown predecessor, the variable can be expected in this static. For control flow operations with an unknown target, a mapping from the outputs of the basic block to the statics at which they should be stored is then created.
\par The IR itself does not enforce a fixed set of statics but instead leaves it to the lifter to create as many statics as it needs to lift the source ISA. For RISC-V, these are 31 general-purpose registers, one memory token, 32 floating-point registers and one control and status flag register. The register \texttt{x0} is always zero and thus not assigned to a static mapper.
\section{RISC-V Lifter}\label{sec:riscv_lifter}
This chapter describes how the RISC-V instructions are decoded and parsed, how they are lifted into the intermediate representation and which optimizations can be made during lifting. This includes the subroutine call optimization, indirect jump target backtracking and jump table detection.
\subsection{General Method}
% Bullet Points:
% - naming: lifter -> elevator: lift to higher code level
% - runs sequentially through the text section
% - storing variables assigned to registers in so called mapping (+ memory token)
% - for each instruction an instruction sequence in the ir is created
% - when discovering a cfop:
% - finish basic block (adding cfop)
% - address known:
% - already scanned address: split basic block or set jump target
% - not scanned: set as start of a basic block
% - if unknown:
% - use backtracking to find possible addresses (controllable through flag)
% - else target = dummy, let runtime handle
% - last parts of lifting:
% - add entry block with setup stack
% - fixup jump targets / predecessors / successors / binary relative immediates
% executable sections do not have to contain only instructions; data is possible as well (von Neumann: data and instructions share the same memory)
The lifter creates the IR based on the RISC-V binary. This process is called lifting because the machine code is lifted up to a more abstract representation at a higher level, the IR.
\subsubsection{Program Decoding}\label{sec:program_decoding}
The translator starts with the ELF input binary file which is supplied by the user. After checking the validity of the binary file and whether it is suited for being translated by the program, the RISC-V lifter decodes all instructions.
\par ELF binaries store information about what data is stored where in the binary using \textit{program headers} and \textit{section headers}. Both header types store information about the virtual address spaces, allocation details and access flags. The ELF-file specification does not guarantee that every ELF binary defines \textit{sections}~\cite{elf_spec}. The lifter therefore supports instruction detection using either \textit{program headers} or \textit{section headers}.
However, \textit{section headers} hold finer and more detailed information about program sections than the \textit{program headers} and are therefore preferred for instruction decoding. \par The ELF-file loader parses instructions from the sections of the binary which are marked as executable. In case there are no sections in the ELF file\footnote{e.g.\ memory dumps.}, we use the program loader information, stored in the \emph{program headers}. They specify which bytes of the underlying binary are actually loaded to the final program.\footnote{Program header type \texttt{PT\_LOAD}.} We only parse instructions from the program headers which are marked as \emph{readable} and \emph{executable}. \par Due to the memory architecture of modern computers, instructions and data are mostly stored in the same memory. This concept, which was first introduced by John von Neumann in 1945\cite{vna}, makes it hard to statically distinguish between instructions and data. That is why we rely on the additional information stored in the section- and program headers which identifies certain parts of the binary as executable instructions. This however means we rely on the compiler and the programmer to use executable sections only for storing instructions. \subsubsection{IR Generation} After loading the data form the binary, the lifter runs sequentially through the discovered instructions which are decoded using \texttt{frvdec}\cite{frvdec}. The resulting instructions are objects, filled with the parsed content of the binary instruction code. This is a more generic and easier processable form with direct access to involved register numbers and immediate values. For each of the instructions, a variable amount of operations and variables is added to the IR to replicate the original program's behavior. \begin{figure} \begin{subfigure}{\textwidth} \centering \begin{lstlisting}[language={[RISC-V]Assembler}] start: [...] loop: 0xb0: addi a0, a0, -1 0xb4: bne a0, zero, loop continuation: 0xb8: sll a1, a1, a2 [...] \end{lstlisting} \caption{RISC-V assembly sequence used as an example for the lifter. Decrements \texttt{a0} by one in a loop until it is equal to zero, then shifts \texttt{a1} to the left by the amount stored in \texttt{a2}.}\label{fig:lifting_example_riscv} \vspace{1em} \end{subfigure} \begin{subfigure}{\textwidth} \centering \begin{lstlisting} [...] block b1(/*input list */) <= [/* predecessors */] { // loop [...] i64 v9 <- @10 [...] imm v33 <- immediate -1 i64 v35 <- add i64 v9, imm v33 imm v36 <- immediate 0 imm v37 <- immediate 0xb0 imm v38 <- immediate 0xb8 } => [(cjump, [b1, /* target inputs */], eq, v35, v36, v37), (jump, [b2, /* target inputs */], v38)] block b2(/* input list */) <= [b1, /* other predecessors */] { // continuation [...] i64 v10 <- @11 i64 v11 <- @12 [...] i64 v33 <- shl i64 v10, i64 v11 } => [/* control flow op */] [...] \end{lstlisting} \caption{The IR created by the lifter from the RISC-V assembly in Figure~\ref{fig:lifting_example_riscv}. \\ Some duplicated or unnecessary parts, e.g. predecessors list, variables from statics, target inputs and control flow operations are omitted.}\label{fig:lifting_example_ir} \end{subfigure} \caption{Exemplary lifting of a RISC-V assembly program.} \end{figure} \par Figure \ref{fig:lifting_example_riscv} shows an example RISC-V assembly sequence and Figure \ref{fig:lifting_example_ir} shows the resulting IR parts. 
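\par To illustrate the per-instruction lifting step in code, the following C++ sketch shows how a single decoded \texttt{ADDI} instruction could be turned into IR operations. The decoded-instruction layout, the \texttt{Mapping} structure and the IR builder functions are hypothetical stand-ins for the real interfaces; for the sketch they only print the operations they would create.
\begin{lstlisting}[language=C++]
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical decoded-instruction layout (a stand-in for the structure
// filled in by the decoder).
struct DecodedInst { int rd, rs1; int64_t imm; };

// Hypothetical register-to-variable mapping kept while lifting a block:
// for every RISC-V register it stores the index of the IR variable that
// currently holds that register's value (simplified to start at 0 here).
struct Mapping { std::array<int, 32> reg_to_var{}; };

static int next_var = 0; // simplistic SSA variable counter for the sketch

// Hypothetical IR builders: they only print what they would append to the IR.
static int append_immediate(int64_t value) {
    int v = next_var++;
    std::printf("imm v%d <- immediate %lld\n", v, (long long)value);
    return v;
}
static int append_add(int lhs_var, int rhs_var) {
    int v = next_var++;
    std::printf("i64 v%d <- add i64 v%d, imm v%d\n", v, lhs_var, rhs_var);
    return v;
}

// Lift "addi rd, rs1, imm" into two IR variables: the immediate and the add.
static void lift_addi(const DecodedInst& inst, Mapping& map) {
    if (inst.rd == 0) return;            // writes to x0 are ignored
    int imm = append_immediate(inst.imm);
    int src = map.reg_to_var[inst.rs1];
    map.reg_to_var[inst.rd] = append_add(src, imm);
}

int main() {
    Mapping map;
    lift_addi({/*rd=*/10, /*rs1=*/10, /*imm=*/-1}, map); // addi a0, a0, -1
}
\end{lstlisting}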
\par As explained in the IR chapter (Section~\ref{sec:ir}), our IR is subdivided into basic blocks which contain consecutive instructions without a change in control flow. As soon as a control flow changing instruction is discovered, it is associated with the basic block. A basic block can be associated with one or more closing control flow changing instructions, e.g.\ RISC-V branches, which consist of a conditional jump and a direct jump to the next instruction (in the next block) if the branch is not taken. The current basic block is finished and a new one is created. If the jump address can be inferred from the instruction using backtracking (Section~\ref{backtracking}) or if it is directly encoded, like in branch instructions, this address is used to register the start of a new basic block.
\par If the instructions at the virtual target address have not been parsed by the lifter yet\footnote{This is the case for forward jumps.}, the basic block entry address is stored. Otherwise, if the lifter already parsed the instruction at the target address without starting a new basic block, the block containing the jump address must be split up at the jump address. This is required because it is only possible to jump to the start of a basic block.
\par Splitting basic blocks on backwards jumps is required because of the way the lifter consumes the input binary. It runs over all instructions twice: once for decoding them and a second time for lifting them into the IR. The second run includes detecting indirect jumps and parsing basic blocks at the same time. This causes every backwards jump which would have changed the basic block parsing to be detected too late. It then has to change the already existing basic blocks by splitting them at the jump's target address.
\par When lifting instructions, the lifter stores their original virtual address. This is used for splitting basic blocks. During splitting, all instructions with an original address lower than the split address are sorted into the first block, the other ones into the second. Both blocks are connected using a direct jump. The predecessor and successor lists as well as some operation inputs need to be adjusted.
\par If the jump address cannot be extracted from the instruction, backtracking (Section~\ref{backtracking}) is used to infer possible addresses, which are then processed with the same procedure as described before. Using a command line flag, the user can control whether all possible addresses should be found or whether only the first one should be used. If no jump address can be inferred, a dummy basic block, which indicates an unresolved jump target, is set as the target block. The jump address then needs to be evaluated at runtime.
\par If a jump target address points into the middle of a decoded instruction, the further behavior of the program is undefined.
\par While lifting the instructions, a register file, called \emph{mapping}, keeps track of the variables which represent the contents of all registers, the memory token, and the control and status registers (csr). This is required to determine the inputs of the operations based on the input registers of the instructions. For example, when lifting a RISC-V instruction which has the registers \texttt{x13} and \texttt{x14} as inputs, the input variables are taken from the mapping at the corresponding indices.
\par At the start of a basic block, the mapping is initialized with IR variables from statics (Section~\ref{statics}).
The register with the index zero is hard-wired to zero in RISC-V and therefore not initialized with a variable. Furthermore, each read from the mapping at index zero is handled by adding an immediate variable with the value zero. Each write to the zero register is ignored, as specified for the \texttt{x0} register.~\cite{rvspec}
\subsubsection{Postprocessing}
After all available instructions have been lifted, a post-processing step is required to connect the basic blocks with each other. Because we lift the file by reading the instructions consecutively, a basic block does not know its successors during lifting. Vice versa, a new basic block does not know its full list of predecessors during lifting. During post-processing, we iterate over all basic blocks, assign them their jump target basic blocks and adjust their predecessor and successor lists.
\par The last step in lifting is to add an entry basic block that sets up the RISC-V stack and jumps to the basic block at the ELF-file's entry address. This virtual address is stored in the ELF header and specifies where the execution of the binary starts.~\cite{elf_spec}
\subsection{Atomic Instructions}
To support programs which use atomic instructions from the ``A'' standard extension, the lifter converts every atomic instruction into a set of non-atomic instructions which emulate the original instruction's behavior. This does not change the logic or validity of the program as long as the instructions are executed consecutively, without the need for atomic resource access.
\subsection{Backtracking}\label{backtracking}
Although indirect jump addresses can be evaluated at runtime, we can only jump to the beginning of detected basic blocks. This is the case because we can optimize (Section~\ref{dead_code_elimination}) and fold instructions (Section~\ref{constant_folding}) inside basic blocks without altering the control flow or logic of the program. That is why there might often not be a one-to-one correspondence between RISC-V and x86\_64 instructions. Creating a fast and runnable program thus means detecting most basic blocks with their corresponding entry points (including their exit points). Although determining all possible indirect jump targets statically is not possible (e.g.\ virtual function calls),
% or not feasible (e.g. jump targets could be influenced by user input)
backtracking register contents during lifting can provide us with an approximation of the indirect jump target set.
\par To statically determine indirect jump target addresses, we need to evaluate the possible contents of the jump's input register operand. Each variable stores its origin, which is either the result of an operation, an immediate value, or a value from a static. This creates a dependency tree. To backtrack the value of a variable, this tree has to be traversed until a variable which cannot be backtracked is reached or until all possible values have been evaluated.
\par Registers keep their values between jumps and calls. We model this behavior using static mappers which bind variables to their registers in between basic blocks. For backtracking, this means a variable which is bound to a static mapper can have any value that the variable mapped to the same static mapper in its basic block's predecessors has. This opens up a set of variables which influence the jump target, growing with the number of predecessors and operation inputs.
\par Figure~\ref{backtracking_graph} shows an example of such a dependency tree.
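\par The following C++ sketch outlines how such a dependency tree could be traversed to collect candidate jump target addresses. The \texttt{Variable} layout, the handled operation kinds and the recursion limit are simplified assumptions; the actual implementation handles more operations and follows statics through all predecessor blocks.
\begin{lstlisting}[language=C++]
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical, simplified variable node of the dependency tree.
struct Variable {
    enum class Kind { Immediate, Add, Shl, Static } kind;
    int64_t imm = 0;                                 // valid for Kind::Immediate
    std::vector<const Variable*> inputs;             // operation inputs
    std::vector<const Variable*> static_candidates;  // values the static may hold
};

// Recursively evaluate all values a variable can take; an empty result means
// the variable could not be backtracked (e.g. it depends on a memory load).
static void backtrack(const Variable& v, std::set<int64_t>& out, int depth = 0) {
    if (depth > 16) return; // hypothetical recursion limit
    switch (v.kind) {
    case Variable::Kind::Immediate:
        out.insert(v.imm);
        break;
    case Variable::Kind::Add:
    case Variable::Kind::Shl: {
        std::set<int64_t> lhs, rhs;
        backtrack(*v.inputs[0], lhs, depth + 1);
        backtrack(*v.inputs[1], rhs, depth + 1);
        for (int64_t a : lhs)
            for (int64_t b : rhs)
                out.insert(v.kind == Variable::Kind::Add ? a + b : a << b);
        break;
    }
    case Variable::Kind::Static:
        // A static can hold any value that its predecessors assigned to it.
        for (const Variable* cand : v.static_candidates)
            backtrack(*cand, out, depth + 1);
        break;
    }
}

int main() {
    Variable imm2{Variable::Kind::Immediate, -2, {}, {}};
    Variable imm8{Variable::Kind::Immediate, 8, {}, {}};
    Variable imm3{Variable::Kind::Immediate, 3, {}, {}};
    Variable shl{Variable::Kind::Shl, 0, {&imm8, &imm3}, {}};
    Variable add{Variable::Kind::Add, 0, {&imm2, &shl}, {}};
    std::set<int64_t> targets;
    backtrack(add, targets); // yields { 62 }
}
\end{lstlisting}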
\begin{figure} \centering \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=BB] (0) at (-3.75, 0) {bb24}; \node [style=VAR] (1) at (-2.25, 0) {v37}; \node [style=OP] (2) at (-0.75, 0) {ADD}; \node [style=VAR] (3) at (0.5, 1.5) {v36}; \node [style=VAR] (4) at (0.5, -1.5) {v34}; \node [style=OP] (5) at (2, -1.5) {SHL}; \node [style=VAR] (6) at (3.5, 0.25) {v15}; \node [style=VAR] (7) at (3.5, -1.5) {v22}; \node [style=STATIC] (8) at (5, 0.25) {x14}; \node [style=STATIC] (9) at (5, -1.5) {x21}; \node [style=BB] (10) at (6.5, 1) {bb42}; \node [style=BB] (11) at (6.5, -1) {bb23}; \node [style=BB] (12) at (6.5, 2) {bb8}; \node [style=BB] (13) at (6.5, 0) {bb17}; \node [style=BB] (14) at (6.5, -2.25) {bb127}; \node [style=VALUE] (15) at (2, 1.5) {-2}; \node [style=VAR] (16) at (7.75, 2) {}; \node [style=VAR] (18) at (7.75, 0) {}; \node [style=VAR] (19) at (7.75, -1) {}; \node [style=BB] (23) at (9.75, 1) {}; \node [style=BB] (24) at (9.75, -2.25) {}; \node [style=OP] (25) at (8.75, 0) {}; \node [style=OP] (26) at (8.75, -1) {}; \node [style=OP] (28) at (8.75, 2) {}; \node [style=VAR] (29) at (7.75, 1) {}; \node [style=VAR] (31) at (7.75, -2.25) {}; \node [style=STATIC] (32) at (8.75, 1) {}; \node [style=STATIC] (33) at (8.75, -2.25) {}; \node [style=none] (34) at (9.75, 2) {}; \node [style=none] (35) at (9.75, 1.5) {}; \node [style=none] (36) at (9.75, 0.25) {}; \node [style=none] (37) at (9.75, -0.25) {}; \node [style=none] (38) at (9.75, -0.75) {}; \node [style=none] (39) at (9.75, -1.25) {}; \node [style=none] (40) at (9.75, 2.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=LINK] (0) to (1); \draw [style=LINK] (1) to (2); \draw [style=LINK] (2) to (3); \draw [style=LINK] (2) to (4); \draw [style=LINK] (3) to (15.center); \draw [style=LINK] (4) to (5); \draw [style=LINK] (5) to (7); \draw [style=LINK] (5) to (6); \draw [style=LINK] (6) to (8); \draw [style=LINK] (7) to (9); \draw [style=LINK] (9) to (14); \draw [style=LINK] (9) to (11); \draw [style=LINK] (8) to (13); \draw [style=LINK] (8) to (10); \draw [style=LINK] (8) to (12); \draw [style=LINK] (12) to (16); \draw [style=LINK] (13) to (18); \draw [style=LINK] (11) to (19); \draw [style=LINK] (16) to (28); \draw [style=LINK] (10) to (29); \draw [style=LINK] (29) to (32); \draw [style=LINK] (18) to (25); \draw [style=LINK] (14) to (31); \draw [style=LINK] (31) to (33); \draw [style=LINK] (33) to (24); \draw [style=LINK] (19) to (26); \draw [style=LINK] (32) to (23); \draw [style=DASHED] (28) to (34.center); \draw [style=DASHED] (28) to (35.center); \draw [style=DASHED] (25) to (36.center); \draw [style=DASHED] (25) to (37.center); \draw [style=DASHED] (26) to (38.center); \draw [style=DASHED] (26) to (39.center); \draw [style=DASHED] (28) to (40.center); \end{pgfonlayer} \end{tikzpicture} \caption{ Schematic dependency tree of an indirect jump target address variable (\textbf{v37}). The basic block \textbf{bb24} ends with an indirect jump, jumping to the address stored in \texttt{v37}. \texttt{ADD} and \texttt{SHL} are operations, names starting with \textbf{x} are names of static mappers.\\ }\label{backtracking_graph} \end{figure} % TODO: divide this into two parts, one lifter one generator? since there are two distinct parts to it \subsection{Jump Table Detection} As described in\ \cite{jump_table_paper}, n-conditional case statements, like the C programming language's switch-case statement, are often implemented by compilers using jump tables. 
A jump table is a pattern which enables efficient\footnote{Because the integer condition is used as the jump address selector. This is only efficient for switch-statements with small entry ranges (e.g.\ enum switching).} branching based on a stored set of addresses which are possible jump targets. This means we need to detect jump tables to find all possible basic block entry points. \par The jump table patterns observed in generated assembly always consist of a particular set of RISC-V instructions. At the beginning, a bounds check is executed. A branching instruction is used to either jump to the default branch or the code after the switch if the switch condition (an integer) is above the highest case value. \par If this is not the case, the switch condition is multiplied by a factor of 4. Using a combination of a \texttt{LUI} and an \texttt{ADDI} instruction, it is possible to load a full 32-bit constant\cite{rvspec} representing the start address of the jump table. This address is added to the integer condition. The resulting value is the memory address at which the desired jump address is stored. A 32-bit load is executed to load the value into a register. An indirect jump to this register's content is the final instruction, executing the case label jump. \par For the jump table detection, the previously described pattern is backtracked for every encountered indirect jump. If the jump turns out to be part of a jump table configuration, the jump target addresses can be extracted and used as entry points for new basic blocks. This also works for switch-statements which are compiled into a combination of jump tables and branch-based search trees. \subsection{Call and Return Optimization} The RISC-V architecture does not include special operations for handling subroutines. A function call is assembled as a jump which places the address of the next instruction into a \emph{link-}register \texttt{x1} or \texttt{x5} (for the standard calling convention).\cite{rvspec} Returning from a subroutine is done by jumping to the address which is contained in either of those registers. This behavior is further optimized with the compressed instructions \texttt{C.JAL} and \texttt{C.JALR}. These operations implicitly store the address of the next instruction to the link register \texttt{x1}.\cite{rvspec} \par Subroutine calls which jump to a dynamic address are so called \emph{icalls}. They act similar to indirect jumps and are backtracked in the same way. \section{Optimizations}\label{sec:optimizations} The IR produced by the lifter is not always optimal. For example, at the time of lifting, the lifter is not always able to know whether a variable will be used elsewhere, or whether some operation can be directly calculated or combined with another operation that follows later. For these purposes, separate optimization passes are performed once the full IR is available. \subsection{Dead Code Elimination}\label{dead_code_elimination} There are cases in compiled programs, where a value is written to a register, but never read again. When lifted, these register writes are found in the IR as unused variables, which can be removed through \emph{Dead Code Elimination} (DCE). For this purpose, each variable has a reference count assigned to it. This is a number, which is increased each time the variable is referenced in another operation or control flow operation. If the reference count of a variable is zero, it is not used anywhere and can be removed. 
By visiting the variables in reverse order, \textit{cascading} deletes are taken care of, as the reference count for the inputs of an operation is decreased once the operation is deleted. Another purpose of the Dead Code Elimination pass is to remove unused inputs from basic blocks. For this, the reference count does not suffice, since there might be circular control flow (like in loops). Instead, DCE marks each variable associated with side effects, i.e.\ variables used in \texttt{store}s, \texttt{syscall}s and jumps to unknown targets like \texttt{ijump}s. This marking is then propagated through the IR by visiting operation inputs and predecessors, until no more variables are reachable. Then, all unmarked variables and inputs can be deleted. This technique can be compared to a graph search algorithm which detects connected components containing an operation with side effects.
\begin{figure}[H]
\centering
\begin{lstlisting}[]
block b1(i64 v0, i64 v1) <= [/* predecessors */] {
i64 v0 <- @0 (1)
i64 v1 <- @1 (3)
imm v2 <- immediate -1 (1)
i64 v3 <- add i64 v0, imm v2 (3)
imm v4 <- immediate 0 (2)
i64 v5 <- add i64 v1, imm v4 (0)
} => [(cjump, [b2, v3, v1], eq, v3, v4),
      (jump, [b1, v3, v1])]

block b2(i64 v0, i64 v1) <= [b1] {
/* content omitted for simplicity */
i64 v0 <- @0 (1)
i64 v1 <- @1 (0)
} => [/* control flow operations */];
\end{lstlisting}
\caption{Example IR before DCE: the value in parentheses after each variable is its reference count.}\label{fig:dce_example_before}
\end{figure}
\begin{figure}[H]
\centering
\begin{lstlisting}[]
block b1(i64 v0) <= [/* predecessors */] {
i64 v0 <- @0 (1)
imm v1 <- immediate -1 (1)
i64 v2 <- add i64 v0, imm v1 (3)
imm v3 <- immediate 0 (1)
} => [(cjump, [b2, v2], eq, v2, v3),
      (jump, [b1, v2])]

block b2(i64 v0) <= [b1] {
/* content omitted for simplicity */
i64 v0 <- @0 (1)
} => [/* control flow operations */];
\end{lstlisting}
\caption{Example IR after DCE.}\label{fig:dce_example_after}
\end{figure}
Figure~\ref{fig:dce_example_before} shows an example IR which has not been optimized. Figure~\ref{fig:dce_example_after} shows a potential result of performing DCE on this example. To better visualize the operation of DCE, the variables have been annotated with their corresponding reference counts. The simplest change is variable \texttt{v5}, which is not referenced by any other variable or control flow operation and can therefore be eliminated. One can also see that \texttt{v1} is not used, but since it is still passed as a target input to other basic blocks, its reference count is not zero. In this case, the marking process finds that \texttt{v1} is not involved in any side effect and consequently removes it. Since it is also a static, the corresponding input is removed from the basic block.
\subsection{Constant Folding and Constant Propagation}\label{constant_folding}
Constant Folding and Constant Propagation are closely related optimizations, where operations with known input values (i.e.\ immediate values) are computed and immediate variables are propagated throughout basic blocks. In our implementation, Constant Folding, Constant Propagation, and operation simplifications are grouped into the same pass. As such, they are collectively referred to as Constant Folding in the following. This optimization pass works by visiting each operation in a basic block in order, making the pass run in linear time. If the operation is an integer arithmetic, logic, or conditional set operation, one of several actions can be performed. In the simplest case, all inputs are immediates.
The result of the operation can be computed, and the variable is replaced with the resulting immediate value. One exception to this are binary-relative immediates --- immediates which are offset at runtime by the base address of the binary. Hence, only addition and subtraction with one binary-relative and one non-binary-relative immediate operand are evaluable, and evaluating subtraction is only sensible if the binary-relative immediate is the first operand. Another exception are instructions with side effects like store and load instructions, since these still need to be performed at runtime. The next case is one immediate input and one non-immediate (static or operation) input. Generally, this case allows for simplification of identity operations, like addition with zero. If the non-immediate operand is an operation with an immediate operand itself, the two immediates can be evaluated under the rules of associativity and commutativity (Exceptions for binary-relative immediates still apply). This optimization is greatly simplified by the use of the SSA form in the IR, since variables can not be reassigned. This means that the value of a variable is known just by looking at the declaration of the variable. \begin{figure}[H] \centering \begin{lstlisting}[] block b1(i64 v0) <= [/* predecessors */] { i64 v0 <- @0 imm v1 <- immediate 73 i64 v2 <- add i64 v0, imm v1 imm v3 <- immediate 64 i64 v4 <- add i64 v2, imm v3 imm v5 <- immediate 95 imm v6 <- immediate 31 i64 v7 <- sub imm v5, imm v6 imm v8 <- immediate 0 i64 v9 <- add i64 v7, imm v8 } => [/* control flow operations */] \end{lstlisting} \caption{Example IR before Constant Folding.}\label{fig:const_folding_example_before} \end{figure} \begin{figure}[H] \centering \begin{lstlisting}[] block b1(i64 v0) <= [/* predecessors */] { i64 v0 <- @0 imm v3 <- immediate 137 i64 v4 <- add i64 v0, imm v3 i64 v7 <- immediate 64 } => [/* control flow operations */] \end{lstlisting} \caption{Example IR after Constant Folding.}\label{fig:const_folding_example_after} \end{figure} Figures~\ref{fig:const_folding_example_before} and \ref{fig:const_folding_example_after} show an example IR before and after applying Constant Folding. First, variables \texttt{v2} and \texttt{v4} are collapsed into one variable by evaluating the add with immediates (since addition is associative), making \texttt{v1} and \texttt{v2} dead code. Next, \texttt{v7} is directly evaluated, since both inputs are immediates. Last, since \texttt{v9} is an add with zero and thus an identity operation, it is removed, and references to it are replaced with \texttt{v7}. \subsection{Common Subexpression Elimination} Common Subexpression Elimination, or more specifically, \textit{(Local) Value numbering}~\cite{ssa_proposal}, is an optimization that removes redundant code. In principle, each variable is given a symbolic value depending on its inputs and operation, variables with the same value are redundant. These redundant variables can thus be all replaced by the first occurrence. In our implementation, a hash set is employed (similar to how this optimization is described in the SSA proposal), where relevant properties, i.e. operation type, input variables, rounding mode, or in the case of immediate variables the value itself, are considered and compared. Common subexpression elimination is mostly useful for eliminating duplicate immediates, which can also be produced by the Constant folding pass. Since input binaries are often already optimized, duplicate operations are rather uncommon. 
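\par A minimal C++ sketch of such a hash-based value numbering is shown below. The \texttt{OpKey} layout and the hash combination are illustrative assumptions; the inputs are assumed to already be canonicalized (i.e.\ replaced by their representative variables) before an operation is looked up.
\begin{lstlisting}[language=C++]
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical, simplified key describing an operation for value numbering.
struct OpKey {
    std::string op;           // e.g. "add", "immediate"
    std::vector<int> inputs;  // indices of the (already canonicalized) inputs
    int64_t imm;              // value for immediate variables, 0 otherwise
    bool operator==(const OpKey& o) const {
        return op == o.op && inputs == o.inputs && imm == o.imm;
    }
};

struct OpKeyHash {
    size_t operator()(const OpKey& k) const {
        size_t h = std::hash<std::string>{}(k.op) ^ std::hash<int64_t>{}(k.imm);
        for (int v : k.inputs) h = h * 31 + std::hash<int>{}(v);
        return h;
    }
};

// For each variable, return the index of the first variable computing the
// same value; duplicates are thereby replaced by their first occurrence.
std::vector<int> value_numbering(const std::vector<OpKey>& ops) {
    std::unordered_map<OpKey, int, OpKeyHash> seen;
    std::vector<int> replacement(ops.size());
    for (int i = 0; i < (int)ops.size(); ++i) {
        auto [it, inserted] = seen.emplace(ops[i], i);
        replacement[i] = inserted ? i : it->second; // duplicate -> first occurrence
    }
    return replacement;
}

int main() {
    std::vector<OpKey> ops = {
        {"immediate", {}, 64}, {"add", {0}, 0}, {"immediate", {}, 64}
    };
    auto repl = value_numbering(ops); // repl == {0, 1, 0}
    (void)repl;
}
\end{lstlisting}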
\begin{figure}[H] \centering \begin{lstlisting}[] block b1(i64 v0, i64 v1) <= [/* predecessors */] { i64 v0 <- @0 i64 v1 <- @1 imm v2 <- immediate 64 i64 v3 <- add i64 v0, imm v2 imm v4 <- immediate 64 i64 v5 <- add i64 v0, imm v4 i64 v6 <- add i64 v1, i64 v5 } => [/* control flow operations */] \end{lstlisting} \caption{Example IR before Common Subexpression Elimination.}\label{fig:cse_example_before} \end{figure} \begin{figure}[H] \centering \begin{lstlisting}[] block b1(i64 v0, i64 v1) <= [/* predecessors */] { i64 v0 <- @0 i64 v1 <- @1 imm v2 <- immediate 64 i64 v3 <- add i64 v0, imm v2 i64 v6 <- add i64 v1, i64 v3 } => [/* control flow operations */] \end{lstlisting} \caption{Example IR after Common Subexpression Elimination.}\label{fig:cse_example_after} \end{figure} \section{Code Generator}\label{sec:generator} The code generator is used to produce assembly code for all basic blocks. This means assembling all variables with their associated operations, preparing the inputs for the next basic block, and creating the control flow operations for each block. Additionally, it needs to provide for a runtime environment. This runtime environment consists of the original binary, which makes relevant data for execution (e.g. the original \texttt{.data} section) accessible. Furthermore, a stack is part of the runtime environment, which is used as a replacement for the original stack. This is required, because the x86\_64-Stack is reserved for storing temporary variables when assembling a basic block and return addresses for the Call-Return-Optimization. % Furthermore, a resolver for indirect jumps is included. An interpreter which acts as a fallback if % an indirect jump to an address at which no basic block starts is encountered is also available at runtime. % Architecture-specific functions like a routine to create an initial stack as required by the % original binariy's system convention as well as a routine to translate syscalls are included. % The architecture-independent parts are emitted by the generator in the assembly with the rest being part of the Helper Library which is linked against the generated assembly. \par When generating assembly for the basic blocks, a user can choose between a simple approach that generates unoptimized assembly, and a more sophisticated approach that generates optimized assembly. \subsection{Simple Code Generation}\label{naive_generator} The simple implementation is meant to ease debugging and to better evaluate gains from optimizations in the IR, as it very closely recreates the IR operations. Since the assembly it generates is not optimized, it should only be used for development purposes. \par The simple code generator creates assembly for each basic block independently. It creates stack frames which hold space for every variable in the basic block. Then, it iterates all basic block operations and retrieves its sources, either from a static or from the stack frame. The generator applies the operation and saves the results back in the stack frame. At the end of a block, it iterates over all control flow operations and evaluates their condition if necessary. It then writes out the inputs for the next basic block from the stack frame and either calls a helper routine or directly jumps to the next block. \par To show an example of the code produced by the naive code generator the IR shown in Figure~\ref{fig:example_ir} is used. This simple IR, which decrements a counter until it is 0, produces the assembly shown in Figure~\ref{fig:naive_gen_asm}. 
\begin{figure} \centering \begin{lstlisting}[] block b1(i64 v0) <= [/* predecessors */] { i64 v0 <- @0 imm v1 <- immediate -1 i64 v2 <- add i64 v0, imm v1 imm v3 <- immediate 0 } => [(cjump, [b2, v2], v2, v3, eq), (jump, [b1, v2])] \end{lstlisting} \caption{Example IR used to show the difference between the simple code generator and the register allocator.}\label{fig:example_ir} \end{figure} \begin{figure}[h] \centering \begin{lstlisting}[language={[x86masm]Assembler}] b1: sub rsp, 32 ; create stack frame, 4 variables * 8 bytes for each mov rax, [s0] ; calculate the val of v0 mov [rsp], rax ; mov v0 to the stack frame mov [rsp + 8], -1 ; immediates are loaded directly into the stack frame mov rax, [rsp] ; operands are loaded from the stack again mov rbx, [rsp + 8] add rax, rbx ; v2 mov [rsp + 16], rax mov [rsp + 24], 0 ; v3 is a 0 immediate mov rax, [rsp + 16] mov rbx, [rsp + 24] cmp rax, rbx ; test cjump condition jne b1_cf_1 ; jump to next cfop if not fulfilled mov rax, [rsp + 16] mov [s0], rax ; Store input for the next basic block add rsp, 32 ; Destroy stack frame jmp b2 b1_cf_1: mov rax, [rsp + 16] mov [s0], rax add rsp, 32 jmp b1 \end{lstlisting} \caption{Assembly produced by the simple code generator from Figure~\ref{fig:example_ir}.}\label{fig:naive_gen_asm} \end{figure} \subsection{Advanced Code Generation} To achieve good performance and code density, a more sophisticated approach to generating assembly from the IR needs to be employed. This is done by using register allocation to keep frequently used variables in registers and on the stack without spilling them into the statics when transferring control flow to the next basic block. In addition, certain operation sequences in the IR are merged into single x86\_64-Instructions to achieve better code density and performance. \subsubsection{Register Allocation}\label{reg_alloc} Basic blocks which closely represent functions of the original binary are grouped and compiled together. The register and stack state (i.e. the location of variables) is maintained between blocks. %As such, often used variables are kept in registers to prevent spilling of variables to statics at the end of each basic block. To achieve this, all blocks which are targets of a call control flow changing operation must first be marked. The register allocator then iterates over all basic blocks, checks if they are either the target of a call operation or have no known predecessors. It then starts allocating registers for the detected group of these basic blocks. \par The allocator compiles the starting block by allocating registers and stack slots for the variables. Every basic block that sits at the beginning of a detected function is called a top-level block. The allocator evaluates the control flow changing operations and checks whether the target of such an operation is part of the current group. This basically means that it is not a target of a call operation. If the target is part of the current group, it creates an input mapping for the target which specifies where each input variable is located. However, if the block has already been compiled, the allocator emits a sequence of instructions which move the variables to the position at which the target block expects them. After this, the handling of the current block is finished and if any of the targets of the block's control flow changing operations are part of the current group they are recursively compiled too. 
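\par The grouping of basic blocks into function-like compilation units can be sketched as follows. \texttt{Block}, \texttt{compile\_block} and the traversal are simplified assumptions rather than the actual interfaces; in particular, the real allocator emits moves that match an already compiled target's input mapping instead of simply skipping it.
\begin{lstlisting}[language=C++]
#include <vector>

// Hypothetical, simplified basic block for the grouping sketch.
struct Block {
    bool is_call_target = false;        // marked while scanning call operations
    bool has_known_predecessor = false;
    bool compiled = false;
    std::vector<Block*> targets;        // targets of the control flow operations
};

// Stub: would emit the instructions of one block with the current register state.
void compile_block(Block&) {}

// Recursively compile a block and every target that belongs to the same group,
// i.e. every target that is not itself the entry of another group.
void compile_group(Block& block) {
    if (block.compiled) return;
    block.compiled = true;
    compile_block(block);
    for (Block* target : block.targets)
        if (!target->is_call_target)
            compile_group(*target);
}

// Every call target and every block without known predecessors starts a group.
void compile_all(std::vector<Block>& blocks) {
    for (Block& block : blocks)
        if (block.is_call_target || !block.has_known_predecessor)
            compile_group(block);
}

int main() {
    std::vector<Block> blocks(3);
    blocks[0].targets = {&blocks[1]};  // fall-through inside the first group
    blocks[1].targets = {&blocks[2]};
    blocks[1].has_known_predecessor = true;
    blocks[2].has_known_predecessor = true;
    blocks[2].is_call_target = true;   // entry of a second group
    compile_all(blocks);
}
\end{lstlisting}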
\par The process of generating instructions for the basic block itself is also relatively straight forward. First, a rough estimate of the time-of-uses of each variable is created. This is required because control flow changing operations might need reshuffling. Afterwards, each operation in the basic block is then iterated over and its inputs are loaded into registers if they are located in a static or on the stack frame. A destination register is chosen too. If possible, a destination register other than the input registers is used if the inputs are still required later. Otherwise, one of the input registers is overwritten. When choosing a register, empty or disused registers are preferred. In case all registers contain variables which are still needed, the variable which is the farthest away from being used again is chosen to be evicted to the stack. This is evaluated using the stored time of next use. \par A possible improvement here is to create a preference for a register for each variable if it is required at a fixed location. This could be the case if it is used as an input for an already compiled basic block. This preference could then be taken into account when choosing a new register for storing a variable. \par As an example of how this works, we assume that only \texttt{rax} and \texttt{rbx} are available for allocation in the following example. We also assume that an \texttt{add} of a variable located in \texttt{rax} and an immediate 10 is compiled. It is also assumed that the variable in rax is accessed later in the same block again. If \texttt{rbx} contains no variable which is needed anymore the following code is compiled: \begin{lstlisting}[language={[x86masm]Assembler},float=h] lea rbx, [rax + 10] ; rbx is chosen as the destination since it is empty \end{lstlisting} \FloatBarrier If \texttt{rbx} contains a variable and it is used in the operation following the \texttt{add}, \texttt{rax} is chosen as the destination, because it is used after rbx: \begin{lstlisting}[language={[x86masm]Assembler},float=h] mov [rsp], rax ; rax is saved to the stack frame to be retrievable later on add rax, 10 ; then overwrite it \end{lstlisting} \FloatBarrier If \texttt{rbx} contains a variable but it is only used after the variable contained in \texttt{rax}, it is instead chosen as the destination: \begin{lstlisting}[language={[x86masm]Assembler},float=h] mov [rsp], rbx ; rbx is saved to the stack frame to be retrievable later on lea rbx, [rax + 10] ; then overwrite it \end{lstlisting} \FloatBarrier \par After all blocks in a group are compiled and the final stack-frame size is known, the control-flow operations for each block can be assembled. The difference to the simple generator is that the inputs for the next block might not be written to statics but instead need to be placed in certain register or stack slots while not overwriting values that are currently stored in them and might be needed later. This is why the initial time-calculation might be wrong for these cases as it is hard to predict when such conflicts might occur. The current approach taken is to place the input into statics first, then the stack slots, then the registers while storing values which may be overwritten in temporary stack slots. Also, when compiling the control flow operations and the target is a block not part of the current group the stack frame size for it might be different and a such it needs to be adjusted as well to prevent stack over- or underflows. 
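\par Returning to the register-selection heuristic described above, the following C++ sketch shows how an eviction candidate could be chosen by the farthest next use. The register-file layout is a simplified assumption; ``empty'' registers here also cover registers whose variables are no longer needed.
\begin{lstlisting}[language=C++]
#include <array>
#include <cstddef>
#include <limits>

// Hypothetical per-register state of the allocator.
struct RegState {
    bool empty = true;   // no live variable in this register
    int next_use = 0;    // instruction index of the held variable's next use
};

// Pick a destination register: prefer empty registers, otherwise evict the
// variable whose next use is farthest away (it is spilled to the stack frame).
template <std::size_t N>
std::size_t choose_register(const std::array<RegState, N>& regs) {
    std::size_t best = 0;
    int best_next_use = std::numeric_limits<int>::min();
    for (std::size_t i = 0; i < N; ++i) {
        if (regs[i].empty) return i;             // empty register: use it directly
        if (regs[i].next_use > best_next_use) {  // otherwise: farthest next use wins
            best_next_use = regs[i].next_use;
            best = i;
        }
    }
    return best;
}

int main() {
    std::array<RegState, 2> regs{{{false, 7}, {false, 3}}};
    std::size_t victim = choose_register(regs); // picks index 0 (next use at 7)
    (void)victim;
}
\end{lstlisting}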
\par As an example of the input relocating, we again assume that only \texttt{rax} and \texttt{rbx} are available and that both hold variables used as inputs for the next blocks. We assume we are compiling the control flow operations for a block which conditionally jumps to one block that expects its inputs in \texttt{rbx} and in the second stack slot, respectively. The other block that is jumped to expects the variables in \texttt{rax} and \texttt{rbx}.
\begin{lstlisting}[language={[x86masm]Assembler},float=h]
cmp rax, 0 ; evaluate cjump condition
jne b1_cf_1 ; cf 0, adjust rax and rbx to rbx and the second stack slot
mov [rsp + 8 * 2], rbx ; variables needed in the stack frame are written out first
mov rbx, rax ; now rbx is safe to be overwritten
jmp b2
b1_cf_1: ; no need for adjustment as the locations for the input match
jmp b3
\end{lstlisting}
\FloatBarrier
To show an example of the generated assembly, the same IR as in Figure~\ref{fig:example_ir} is used. The produced assembly is shown in Figure~\ref{fig:reg_gen_asm}.
\begin{figure}[h]
\centering
\begin{lstlisting}[language={[x86masm]Assembler}]
b1:
mov rax, [s0] ; Input is in s0 so load it from there
add rax, -1 ; the immediate operand can be encoded directly into the instruction
cmp rax, 0 ; evaluate the eq-condition of the cjump control flow operation
je b1_cf_1
mov [s0], rax ; move the modified var to the location the block expects it at
jmp b1
b1_cf_1: ; b2 expects the input in rax so no need to move it
jmp b2
\end{lstlisting}
\caption{Assembly produced by register allocation from Figure~\ref{fig:example_ir}.}\label{fig:reg_gen_asm}
\end{figure}
\subsubsection{Translation Blocks}\label{translation_blocks}
One problem that is encountered when employing register allocation on groups of basic blocks is that the program cannot simply jump to them using indirect jumps anymore. If the basic block is not a top-level block, it might expect variables to be located outside of the statics and the stack frame, which is set up at the first block of a group. These would be missing if the basic block is reached via a jump.
\par To still make it possible for indirect jumps to target basic blocks which are register-allocated, translation blocks need to be generated. These translation blocks set up the stack frame and move the inputs for the basic block to the registers or stack slots at which they are expected. Afterwards, they jump to the register-allocated block.
\par A problem with these blocks is that they need to be generated for a lot of basic blocks and take up a lot of space in the finished binary.\footnote{In some extreme cases, this required space was more than 30\% of the \texttt{.text} section.} They are also rarely used, since most indirect jumps will target the entry basic block of detected functions. The solution is to only generate translation blocks for targets of function returns. This way, the binary size can be drastically reduced while also having a negligible impact on performance.
\subsubsection{Merging Operations}
Because the x86\_64 architecture is a CISC architecture, it offers more sophisticated instructions than RISC-V or our IR. As such, multiple operations in the IR can be merged into single x86\_64 instructions if their intermediate results are not required later. Mergeable sequences include:
\begin{itemize}
\item \texttt{add}, (\texttt{cast}), \texttt{store}: Can be merged into a single store instruction with the \texttt{add} embedded in the memory operand and the \texttt{cast} expressed as the store width.
\item \texttt{add}, \texttt{load}, \texttt{sign-} / \texttt{zero-extend}: Can be merged into a single \texttt{movsx} / \texttt{movzx} instruction with the \texttt{add} embedded in the memory operand \item \texttt{and} with 0x1f / 0x3f, (\texttt{cast}), \texttt{shr} / \texttt{shl} / \texttt{sar}: Can be merged into a single shift instruction since they mask the count operand themselves \end{itemize} % Merging these operations not only reduces the binary size, since fewer instructions need to be encoded, but also increases performance significantly. % TODO: test the performance diff for merging ops \subsection{Call and Return Optimization} To support the IR's \texttt{call} and \texttt{return} instructions, the x86\_64-Stack is used. When the code generator encounters a \texttt{call}, it first destroys the stack frame of the current basic block, leaving only the return addresses on the stack. It then pushes the return address from the original binary, which is encoded in the IR-\texttt{call} as the continuation address. Afterwards, a x86\_64 \texttt{call} instruction to the target is emitted. \par A return is assembled to do a comparison of the actual return address computed and the previously pushed, expected return address. This expected return address is pushed by a previously executed \texttt{call} instruction, as explained in the paragraph above. If both return addresses match, an x86\_64 \texttt{ret} instruction is executed. \par If they do not match, the stack pointer is reset to the original stack pointer before execution of any basic block and the indirect jump handler is invoked. To prevent underflow, 16 bytes of zeroes are pushed onto the stack before the first basic block is executed. This will cause the compare at a \texttt{return} instruction without a previous \texttt{call} instruction to fail and the indirect jump handler to be used instead. To prevent overflow, the stack pointer is compared to a maximum allowed size every time a call control flow operation is executed and the stack is reset if it exceeds this size. \subsection{Helper Library}\label{helper} The ABI that is exposed by Linux at runtime is different on x86\_64 compared to RISC-V. Some aspects are specified by the System V ABI, like the ELF file format and x86\_64 calling convention, but the system call ABI used by Linux differs from platform to platform, mostly due to backwards compatibility or architectural differences~\cite{man_syscalls}. Because of these incompatibilities, some system calls need a thin wrapper, which is defined in the Helper Library. In most cases, the only difference is the ID used to perform the system call, but in some cases, arguments need to be modified before the system call can be performed or to be handled specially. However, system calls related to signals were stubbed to do nothing, since signal support was not necessary for running the benchmarks and implementing these requires invasive changes. % TODO: cite something that shows that system calls are slow anyway. => slow helper library does not matter very much At startup, Linux puts information on the x86\_64 stack, for example the program arguments and auxiliary vectors. % TODO: define what auxiliary vectors are and why it matters ? Before executing the translated code, these values need to be copied to the emulated Stack, and in some cases transformed. For example, the auxiliary vectors contain information on where to locate the Linux vDSO~\cite{man_vdso}. 
This is an optional ELF object that provides user-space wrappers for some system calls to make them faster (e.g.\ by not invoking the rather expensive kernel system call procedure), and it is automatically linked by Linux (if present). The application can use the auxiliary vectors to locate this object and call the provided functions; however, since the ABI is very different, we opted to remove any reference to the vDSO from the auxiliary vectors. In the future, a vDSO wrapper could be implemented to provide the same speed benefits. Additionally, the helper library houses further components required for execution, like helpers for the indirect jump lookup (Section~\ref{ijump_resolution}) and the interpreter (Section~\ref{interpreter}).
% bullet points:
% * why necessary (requirements): not everything can be translated ahead of time, sort of done
% * system v abi at startup
% * ignoring the linux vdso
% * translate arguments / environment to risc-v stack
% * translate syscalls at startup
% * often pass through
% * id mismatch
% * argument layout mismatch
% * helper functions for ijump hash table lookup -> reference to other chapter
% * contains the interpreter
% * evaluate how well the requirements where reached, what is missing, problems encountered.
\subsection{Indirect Jump Resolution}\label{ijump_resolution}
When encountering an indirect jump, the jump's input variable contains the jump target address in the original RISC-V binary's address space. The lifter tries to statically identify most of the indirect jump target addresses during lifting and is thus able to start new basic blocks at such addresses. This, however, means that the translated binary needs a method to translate the original binary's addresses to basic block start addresses in the translated binary at runtime. This is the case because address calculations still refer to the original binary's addresses. This method also needs a way to identify indirect jump target addresses which are not the start address of any lifted basic block. We can only jump to the start of a (translation) basic block, because this is the only reference point where the emulated RISC-V registers are stored at a known location (in the form of statics). In case we encounter such an unknown jump target address, we switch to the interpreter (Section~\ref{interpreter}) to interpret the original RISC-V code just-in-time until the next jump-able start of a basic block is reached.
\par Mapping the RISC-V indirect jump target addresses to translated basic blocks requires a dictionary-like data structure. The implementation has to have fast access times, as we need to access entries with every indirect jump and with every interpreted control flow changing operation in the interpreter. It also has to be able to identify invalid keys in order to switch to the interpreter if the jump target address is not a valid basic block start address. We developed two data structures and compared their strengths and weaknesses:
\begin{enumerate}
\item For the simple implementation, we store a specific 64-bit wide number for each lifted RISC-V address in the \texttt{.ro\_data} section. This recreates part of the original binary's address space with a known offset. The numbers are either 0, if no basic block with the given start address exists, or the address of the basic block which starts with instructions from this RISC-V address. The latter is done by using the basic block labels, with the assembler calculating the actual address numbers.
\par
This method has the advantage of having a constant access time with barely any calculation overhead.
It means, however, that we need to store additional information in the output assembly file, in the size of the lifted instruction section of the original binary.
This could be part of the reason why the assembler AS requires a lot of memory for assembling the output assembly.
\item To reduce this size overhead, we implemented a method for creating a perfect hash table with a process called ``Hash, displace, and compress''~\cite{CHD}.
The process consists of a two-stage hash function which first sorts the items into buckets and later generates a hash function for each bucket.
The hashed values are stored in one common hash table.
The calculation starts with a load factor ($\alpha$) of 100\% and an average bucket size ($\lambda$) of 10 elements per bucket.
These values are lowered incrementally by 10\% and by 1 element, respectively, if no valid bucket-level hash functions can be found for the set constraints.
While a lower average bucket size decreases the translation time (because the hash functions hash fewer items, which generally results in fewer collisions), a higher average bucket size decreases the translated binary's size (because every bucket needs to store its own hash function).
The bucket-level hash functions only need a 16-bit index to be reconstructed and can thus be stored memory-efficiently.
These bucket hash function indices are stored in a separate table in the \texttt{.ro\_data} section, together with the hash table.
\par
For the hash function generation, we did not use truly random hash functions.
The authors of the ``Hash, displace, and compress'' perfect hashing method suggest using a ``heuristic hash function''~\cite{CHD}, one of Bob Jenkins' first published hash functions~\cite{jenkins_hash_1}.
For our implementation, we use one of his later hash functions, which is supposed to be faster than his first proposals~\cite{jenkins_hash_2}.
Because we are only interested in hashing 64-bit addresses, we only need part of his hash function, which generates hashes for byte arrays of variable sizes.
\par
The hashing algorithm is used in the generator for building the hash table and finding the required hash functions.
It is also implemented in the helper library (Section~\ref{helper}) to provide hashes for addresses which only become available at runtime.
\par
With the generated hash table and hash functions, every key is assigned an index in the hash table, regardless of whether it is a valid key that was used to build the hash table or not.
That is why we store both the key and the value, which in this case is the target basic block label, in the hash table.
For key validation, we first load the key from the calculated index and compare it to the input key.
If they do not match, we know that no basic block exists for the supplied RISC-V address.
Otherwise, we load the value from the table, which is placed just 64 bits after the key.
A sketch of this lookup procedure is given after this list.
\par
The hashing helps reduce the required storage space to the size of the hash table plus the size of the hash function index table (Figure~\ref{hashing_figure}).
This storage size optimization means, however, that every access additionally takes the time of calculating the hash table index, although its complexity is still constant.
\end{enumerate}
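The following C++ sketch illustrates how such a lookup can be performed at an indirect jump; the identifiers and the exact hashing details are illustrative assumptions and do not correspond to names used in the actual code base.
\begin{verbatim}
#include <cstdint>

// Hypothetical layout of the tables emitted into the .ro_data section.
struct Entry { uint64_t key; uint64_t value; };   // value: address of the block

extern const Entry    hash_table[];      // perfect hash table (key/value pairs)
extern const uint16_t bucket_function[]; // 16-bit hash function index per bucket
extern const uint64_t num_buckets;
extern const uint64_t table_size;

// Jenkins-style mixing function, parameterized by a small function index.
uint64_t mix(uint16_t function_index, uint64_t key);

// Returns the address of the translated block starting at `addr`,
// or 0 if no lifted basic block starts there (interpreter fallback).
uint64_t resolve_indirect_jump(uint64_t addr) {
    uint64_t bucket = mix(0, addr) % num_buckets;   // first-level hash
    uint16_t fi     = bucket_function[bucket];      // reconstructs the bucket's hash
    uint64_t slot   = mix(fi, addr) % table_size;   // second-level hash
    const Entry &e  = hash_table[slot];
    return (e.key == addr) ? e.value : 0;           // key check validates the address
}
\end{verbatim}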
\subsection{Interpreter}\label{interpreter}
An emulator is the slowest method of translating a binary~\cite{binary_translation}.
Our interpreter is a non-optimized emulator.\footnote{Our emulator is called \textit{interpreter} in the implementation and therefore we use this name here too.}
This means that the interpreter is comparatively slow, as it decodes the instructions at runtime using frvdec~\cite{frvdec}.
In order to access the original RISC-V instructions at runtime, the original binary has to be stored as data in the resulting binary.
\par
We did not optimize the interpreter much, because it is only a fallback system for the translated binary and not the focus of this project.
Additionally, the simple implementation reduces the interpreter code's size and therefore the translated binary's size.
\par
The interpreter is called if an indirect jump or indirect call target could not be resolved, so that the generator does not know where to jump.
The interpreter is used to fill this gap and to jump back to the translated instructions when possible.
This can be done when the start of a top-level basic block or the address of a translation block is reached.
If the assembly is generated using the simple code generator, every basic block can be considered to be a top-level basic block.
\par
When the interpreter is entered and left, the RISC-V register values need to be held in the statics.
This is important, as the generator stores the variable values in registers or in the stack frame, whereas the interpreter works on the statics and needs a known location where the variable values are stored.
Therefore, the values are transferred to the statics before calling the interpreter.
\par
This is also a reason why we only leave the interpreter at the beginning of a new basic block.
The naive generator (Section~\ref{naive_generator}) takes the basic block's inputs from the statics at this point.
When using the register allocator (Section~\ref{reg_alloc}), the translation blocks (Section~\ref{translation_blocks}) move the values to the correct registers and stack slots and are therefore --- as well as the top-level basic blocks --- the addresses where we can leave the interpreter.
These addresses are stored in the indirect jump lookup table (Section~\ref{ijump_resolution}).
\par
Usually, a translated basic block will be found after a few interpreted instructions, resulting in an almost negligible slowdown from entering the interpreter.
However, in some extreme cases where no return basic block is found, practically the entire program is emulated, resulting in extreme slowdowns.
This can be observed in rare cases if no translation blocks are created, resulting in fewer possible return targets.
\par
The interpreter fully emulates \texttt{rv64imacfd} and therefore supports the same standard extensions as the translator.
The translator can be instructed to translate no basic blocks, so that all code is interpreted.
This can be used to verify the correctness of the interpreter using the benchmark tests.
This method is, however, as explained before, rather slow and therefore not suitable for interpreting entire binaries.
\par
The interpreter theoretically supports the execution of RISC-V JIT compilers.
Using the interpreter alone, it should also be possible to execute self-modifying code.
Such programs were not tested, though, so this remains a theoretical feature.
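The overall control flow of the interpreter can be sketched as follows; the helper functions are hypothetical stand-ins that only illustrate the interplay between interpretation and the indirect jump lookup table (Section~\ref{ijump_resolution}), not the actual implementation.
\begin{verbatim}
#include <cstdint>

// Hypothetical helpers:
// returns the host address of a translated block starting at the given
// RISC-V address, or 0 if no such block exists
uint64_t lookup_translated_block(uint64_t riscv_addr);
// decodes (e.g. via frvdec) and executes one instruction on the statics,
// returning the address of the next instruction
uint64_t interpret_one(uint64_t pc, uint64_t *statics);

// Interpret starting at `pc` until an address is reached at which translated
// code may be re-entered; the caller then jumps to the returned host address.
uint64_t interpret(uint64_t pc, uint64_t *statics) {
    for (;;) {
        if (uint64_t host = lookup_translated_block(pc))
            return host;   // register values are already held in the statics
        pc = interpret_one(pc, statics);
    }
}
\end{verbatim}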
\par
At runtime, the CPUID~\cite{intel2017man} is checked for support of the \textit{FMA3} extension.
The interpreter uses it, if it is present, for faster floating point emulation.
Otherwise, the fused multiply add instructions are assembled using separate multiplication and addition or subtraction instructions.
\par

\section{Floating Point Support}\label{sec:floating_point}
Floating point arithmetic is special, as it is affected by rounding and therefore depends on the implementation of the architecture.
We decided to use the x86\_64 SSE instructions where possible and to emulate the remaining operations, because a bitwise correct implementation is even slower.
Due to rounding, the results of the translated floating point operations may not be the same as if they were run on a RISC-V architecture.

\subsection{Lifting and Generating}
% TODO: Move footnote content into normal text?
To support the \texttt{F} and \texttt{D}\footnote{The Q standard extension is not supported due to its complexity and lack of usage.} standard extensions, the mapping as well as the number of registered statics has to be extended to include the floating point registers and the floating point control and status register (fcsr).
To lift the RISC-V floating point instructions, some instructions are reused, e.g. \emph{add} and \emph{sub}, and some floating point specific instructions are defined, e.g. \emph{fmul} and \emph{fsqrt}.
The floating point variables have the types \emph{f32} and \emph{f64}, which represent single and double precision numbers, respectively.
\par
The code generator (Section~\ref{sec:generator}) uses the x86\_64 floating point instructions provided by the \textit{Streaming SIMD Extensions} (SSE) and \textit{Streaming SIMD Extensions 2} (SSE2), which operate on the \texttt{xmm} registers.
As SSE2 is part of x86\_64, it can be assumed that SSE2 instructions are usable.
However, the usage of other x86\_64 extensions like SSE4 or \textit{Fused Multiply Add 3} (FMA3) in the translated binary has to be enabled using a command line flag for the translator.
\par
For performance reasons, the occurring x86\_64 floating point exceptions are not transferred to the register / variable representing the \emph{fcsr}, the register that holds the floating point exceptions on RISC-V processors.
Checking after each operation which exceptions were raised and setting the corresponding flags would be too slow.
A read from the \emph{fcsr} value could be redirected to a read from the \textit{MXCSR} register, but this would also incur an implementation and performance overhead.
However, this is not a big problem, because only very few programs depend on floating point exceptions.
\par
The floating point support can be deactivated using a command line flag for the translator in order to prevent the lifter from adding the additional statics.
The advantage is a decreased memory usage, because fewer static variables have to be stored.
The lifter then ignores all floating point instructions instead of throwing an error.
This is required because the ELF parser and instruction decoder are not able to distinguish instructions from other program data, as emphasized in Section~\ref{sec:program_decoding}.
Data that is wrongly decoded as a floating point instruction then does not cause the translation to abort.

\subsection{Rounding and Accuracy Problems}
A known problem when dealing with different floating point implementations is that rounding effects in particular lead to slightly different results.
As we want to use the \textit{SSE2} floating point instructions to provide an acceptable performance, some compromises have to be made.
\par
Firstly, for each RISC-V floating point instruction a rounding mode can be specified, which is used if the number cannot be expressed in the given number format.
Because x86\_64 does not support such a feature, it is quite hard to implement this in a fast way.
With the x86\_64 floating point instructions, the results are already rounded, and it is therefore not possible to round the result of each operation afterwards.
\par
Furthermore, a bitwise correct implementation, as provided by the dynamic translator \texttt{QEMU-user}, is challenging to implement and comparatively slow.
We decided to ignore the rounding mode when lifting instructions.
This does not apply to conversions between integers and floating point numbers or between single and double precision floating point numbers.
We made this exception because the rounding mode makes a great difference when rounding to whole integer numbers: rounding \emph{25.5} towards positive infinity yields \emph{26}, whereas rounding it towards negative infinity yields \emph{25}.
Furthermore, the used benchmarks (Section~\ref{sec:evaluation}) require the translator to respect the rounding mode of conversion instructions.
\par
There are two ways of implementing this per-instruction specific rounding.
\begin{enumerate}
\item The first one is to use the \texttt{roundss} instruction (respectively \texttt{roundsd} for double precision) with the selected rounding mode as an immediate input.
However, these instructions are part of \textit{SSE4}, which is not supported by all currently used x86\_64-based CPUs~\cite{intel2017man}.
Therefore, the usage of \textit{SSE4} instructions must be explicitly enabled for this method.
\item The alternative is to change the rounding mode using the \emph{MXCSR} register.
The x86\_64 \emph{convert} instructions round according to the state of this register.
The disadvantage of this method is that the only way to access the MXCSR register is via the \texttt{STMXCSR} and \texttt{LDMXCSR} instructions, which only operate on memory operands.
This makes it slower than the first method.
Additionally, this method uses more instructions to change the rounding mode in our implementation~\cite{intel2017man}.
\end{enumerate}
\par
A second problem is that there can be differences in rounding when assembling some instructions.
For example, the fused multiply add instructions can be translated using the x86\_64 FMA3 instructions.
However, as this extension is not supported by every x86\_64 CPU, it needs to be enabled explicitly.
The alternative is to emulate the instruction using multiplication and addition or subtraction.\footnote{For some fused multiply adds, negating a floating point value is also required.}
This can lead to a different result due to rounding errors.

\subsection{Implementation Problems and Solutions}
Some RISC-V instructions do not have an x86\_64 equivalent, especially the unsigned conversions, sign injection and the floating point classification instruction \texttt{FCLASS}.
These instructions are emulated using sequences of x86\_64 operations.
\par
The x86\_64 ISA does not define an unsigned conversion instruction.
Although the \textit{Advanced Vector Extensions 512} (AVX512) provide such instructions, they are relatively new and implemented only in a few CPUs.
Therefore, the translator does not provide an optimization flag for using these instructions.
\par
The RISC-V unsigned conversion instruction's behavior is emulated using a combination of the signed x86\_64 conversion instructions and logical operations for correcting the results.
\par
For the conversion of unsigned integers to floating point numbers, the corrections are applied based on the signed representation of the unsigned integers.
32-bit wide integer values can always be zero-extended to 64 bits, which makes their signed representation non-negative.
If the signed representation is greater than or equal to zero, the 64-bit signed conversion can be applied directly.
Otherwise, the input and output values of the signed conversion instruction need to be adjusted.
This is done using bit arithmetic, which is also used by \textit{gcc}.
The RISC-V floating point conversion to unsigned integers is emulated by converting to a signed integer.
If the input floating point value is negative, the result is set to zero, as specified by the RISC-V manual~\cite{rvspec}.
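As an illustration of the bit arithmetic involved, the following C++ fragment shows one common form of the correction sequence for the unsigned 64-bit integer to double conversion; it is a sketch of the trick, not a transcript of the instruction sequence emitted by the generator.
\begin{verbatim}
#include <cstdint>

// Convert an unsigned 64-bit integer to double using only a signed
// 64-bit conversion (the sequence commonly emitted by compilers such as gcc).
double u64_to_f64(uint64_t x) {
    if (static_cast<int64_t>(x) >= 0)   // signed representation is non-negative
        return static_cast<double>(static_cast<int64_t>(x));
    // Halve the value while keeping a sticky bit so that rounding is
    // preserved, convert the now non-negative value, then double the result.
    uint64_t half = (x >> 1) | (x & 1);
    double d = static_cast<double>(static_cast<int64_t>(half));
    return d + d;
}
\end{verbatim}
The 32-bit unsigned case does not need such a correction, since the zero-extended value always fits into the signed 64-bit conversion, as described above.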
\par
The sign injection instruction is emulated by arithmetically manipulating the sign bit of the floating point numbers.
The floating point classification instructions categorize the input floating point number; possible categories include sNaN, qNaN, negative infinity, negative normal number, and more.
The x86\_64 architecture does not feature a similar instruction; therefore, the classification is done using custom bit arithmetic.

\section{Evaluation}\label{sec:evaluation}
To measure the performance, we used the SPEC CPU\textregistered 2017~\cite{spec_cpu_2017} benchmark package.
We only ran the integer suite and not the floating point suite, because we did not optimize floating point operations.
However, the integer suite includes tests that not only cover integer instructions but also many floating point operations, allowing us to verify our implementation.
\par
The integer benchmark suite covers many actual use cases, including compilers, video compression, artificial intelligence and more.
This allows us to stress test the performance under conditions similar to a real production environment.
\par
The benchmark results were compared to those obtained with QEMU, a popular dynamic binary translator which supports many different architectures.
They were also compared to RIA-JIT~\cite{ria_jit_repo}, another dynamic binary translator, which only supports RISC-V to x86\_64 translation.
This translator also claims to be faster than QEMU in its project description paper~\cite{ria_jit_paper}.
\par
The effectiveness and drawbacks of the implemented optimizations were also measured, with special attention to the accuracy of floating point operations.

\subsection{Benchmark Setup}
All benchmarks were performed on the \texttt{time-x} server provided by the Chair of Computer Architecture and Parallel Systems.\footnote{\url{https://www.in.tum.de/caps/hw/caps-cloud/}}
% FIXME: better citation
The server has an \texttt{AMD Ryzen Threadripper PRO 3955WX 16-Cores} CPU and \texttt{32 GB} of memory.
It is running \texttt{Ubuntu 20.04.2 LTS} with kernel release \texttt{5.10.0-1044-oem} and version \texttt{\#46-Ubuntu SMP Wed Aug 11 09:50:57 UTC 2021}.
% TODO: mention vulnerability mitigation ? (lscpu)
The benchmarks were executed on an \texttt{NFS} filesystem.
For the native benchmarks, the GNU compiler provided by the operating system was used: gcc version \texttt{9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)} with glibc version \texttt{2.31-0ubuntu9.2}.
For the RISC-V benchmarks, the GNU toolchain provided by the RISC-V Foundation was used in version 2021.09.21.\footnote{Repository \url{https://github.com/riscv-collab/riscv-gnu-toolchain}, commit hash b39e36160aa0649ba0dfb9aa314d375900d610fb}
It was configured with \texttt{--with-arch=rv64imafdc --with-abi=lp64d}.
\par
Our own translator (SBT) was built in release mode from the master branch that was the latest at the time.\footnote{Commit hash 3c341ea7298555b7c34572bd355ca4765d7650df}
The QEMU version used was \texttt{1:4.2-3ubuntu6.17}.
For RIA-JIT, a modified version by Alexis Engelke was used, adding support for RV64C over the original implementation.
It was further modified to fix a bug with compressed instructions and to implement an additional system call required by the benchmarks.\footnote{Commit hash cff2c15e4b646dca8d839f4a5ee9c3ec89231ac6}
\par
Due to the lack of development tools on the benchmark system, the toolchain, RIA-JIT and SBT were compiled on another host in a container similar to \texttt{time-x} and uploaded.
% FIXME: remove because not useful ?
Version 1.1.0 of the SPEC CPU\textregistered2017 benchmarks was used.
The example configuration of the benchmark suite was modified to make it compatible with the newer \texttt{gcc} version used; \texttt{OpenMP} support was disabled, and all test binaries were statically linked.
All benchmarks were run with two iterations, and the slower result was selected.
For our static binary translation, only the execution speed of the resulting binary was measured, not the translation time.

\subsection{Correctness}
To verify that our translator produces correct binaries, we used the test suites of the SPEC CPU 2017 benchmarks.
Especially the tests for the \texttt{600.perlbench} benchmark are ideal for this purpose, as they test the Perl interpreter and some popular Perl packages.
As a result, they cover a wide range of operations, including many different system calls, floating point and string operations.
Other benchmark tests cover compilation and compression.
Altogether, the tests cover many real use cases.
Any binary passing the benchmarks' test suites can be assumed to be correct in almost all cases.
\par
Furthermore, the code base was tested for implementation and regression faults using unit tests and integration testing.
The unit tests were written using the test framework \textit{Google Test}.
Additional integration tests were added, for example a test in which the translator translates a simple zip utility once with full optimizations and once with only the interpreter enabled.
The resulting translated zip tools are used to compress and decompress sample files.
The output of each operation is then compared to the output of a natively compiled version of the same zip utility.
\par
Using this methodology, we tested the interpreter for correctness.
The translator also passes all tests when compiled without floating point support.
However, with floating point support, additional optimizations\footnote{\texttt{fma3} and \texttt{sse4} need to be enabled.} are required to achieve correct floating point results.
% TODO: either say they need fixing or that this is due to wrong rounding (backref to floating chapter)

\subsection{Comparison to QEMU and RIA-JIT}
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\begin{axis}[
width=0.9\textwidth,
height=0.45\textheight,
ybar,
bar width = 4pt,
ylabel={Execution time normalised to native (less is better)},
symbolic x coords = {
600.perlbench,
602.gcc,
605.mcf,
620.omnetpp,
623.xalancbmk,
625.x264,
631.deepsjeng,
641.leela,
648.exchange2,
657.xz,
average
},
xticklabel style = {
anchor = east,
rotate = 45 + 15
},
xtick = data, % make a tick at every symbolic x coordinate
ytick distance = 1, % make a tick every 1
ymajorgrids = true, % display horizontal guide lines
legend cell align = left,
legend pos = north east,
]
\addplot[fill=green] table[x=benchmark, y expr=\thisrow{native} /\thisrow{native}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=orange] table[x=benchmark, y expr=\thisrow{qemu} /\thisrow{native}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=yellow] table[x=benchmark, y expr=\thisrow{ria} /\thisrow{native}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=blue] table[x=benchmark, y expr=\thisrow{sbt1} /\thisrow{native}] from {./tikz_src/benchmarks_results1.txt};
\legend{ native, QEMU, RIA-JIT, SBT}
\end{axis}
\end{tikzpicture}
\caption{
SPECSpeed\textregistered2017 Integer benchmark suite results compared.
SBT was executed with all optimizations except the removal of translation blocks and the hash table.
RIA-JIT encountered a floating point error in the benchmark 625.x264, so this single benchmark was repeated with a toolchain using \texttt{--with-arch=rv64iamc --with-abi=lp64}.
% TODO: redo benchmark for x264 for all targets with the softfp toolchain
}\label{benchmark_results1}
\end{centering}
\end{figure}
We used the SPECSpeed\textregistered2017 Integer benchmark suite to compare the performance of our SBT, QEMU and RIA-JIT.
Because the individual benchmarks' total execution times vary widely, all results are normalized to a native benchmark run.
This clearly shows the overhead caused by the binary translation and can be used to compare SBT, QEMU and RIA-JIT.
Our own translator (SBT) was run with all optimizations except the removal of \emph{Translation Blocks}, as it proved to be highly detrimental to execution speed.
This will be further discussed in Section~\ref{sec:translation_blocks}.
% TODO: forward reference.
QEMU and RIA-JIT were run without any additional configuration.
\par
In Figure \ref{benchmark_results1} the results are visualized.
% 1. compare to QEMU
It can be seen that we consistently outperformed QEMU in every benchmark.
% - Show the best case, and the worst case
On average, SBT was 18\% faster than QEMU; in some cases, namely \emph{657.xz}, the difference was very small, but in others, such as the \emph{602.gcc} benchmark, it was as much as 41\%.
\par
% 2. compare to RIA-JIT
SBT is on average faster than RIA-JIT, but when looking at individual benchmarks the results are mixed.
In some benchmarks SBT is faster by a large margin, in others it performs similarly, and in a few cases it is slower.
Because of a bug in the floating point operations, a toolchain without hard floating point support had to be used for the \emph{625.x264} benchmark for RIA-JIT; in practice, this could mean worse performance.
\par
Comparing our results for QEMU and RIA-JIT to those presented in the RIA-JIT paper~\cite{ria_jit_paper}, QEMU has become significantly faster,
% TODO: verify numbers
from roughly 3.2$\times$ to roughly 2.5$\times$ native speed, while RIA-JIT has become slightly slower.
% TODO: we need a hard break here to show that we go from presenting the results to interpretering their meaning.
% - Point out that x264 is very slow compared to native
% - on the other hand with omnetpp and xz we are better than 2x native
% explain why binary translation does so exceptionally bad at x264 (video), maybe vectorization, cross reference ria-jit paper ?
% - Compare results to the results of the ria-jit paper => QEMU appears to be faster (relatively speaking)

\subsection{Effectiveness of different Optimizations}
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\begin{axis}[
width=0.9\textwidth,
height=0.45\textheight,
ybar,
bar width = 4pt,
ylabel={Performance increase compared to no optimizations (more is better)},
symbolic x coords = {
600.perlbench,
602.gcc,
605.mcf,
620.omnetpp,
623.xalancbmk,
625.x264,
631.deepsjeng,
641.leela,
648.exchange2,
657.xz,
average
},
xticklabel style = {
anchor = east,
rotate = 45 + 15
},
xtick = data, % make a tick at every symbolic x coordinate
% ytick distance = 1, % make a tick every 1
ymajorgrids = true, % display horizontal guide lines
legend pos = outer north east,
%legend cell align = left,
legend style = { at={(0.0, -0.35)}}
% legend style = { at={(0.95, 0.95)}, anchor=north east}, % place the graph legend at the top-right
]
\addlegendimage{empty legend} % used to add a title to the legend
\addplot[fill=green] table[x=benchmark, y expr=\thisrow{reg_alloc} / \thisrow{reg_alloc}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=orange] table[x=benchmark, y expr=\thisrow{reg_alloc} / \thisrow{reg_alloc_plus_ir}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=yellow] table[x=benchmark, y expr=\thisrow{reg_alloc} / \thisrow{reg_alloc_plus_gen}] from {./tikz_src/benchmarks_results1.txt};
\addplot[fill=blue] table[x=benchmark, y expr=\thisrow{reg_alloc} / \thisrow{reg_alloc_plus_lifter}] from {./tikz_src/benchmarks_results1.txt};
\legend{
\textbf{Additional Optimization},
None,
{Const Folding, DCE, Common Subexpression Elimination},
{Merging Operations, Better Shifting},
Call and Return
%SBT
}
\end{axis}
\end{tikzpicture}
\caption{
% TODO: adjust
SPECSpeed\textregistered2017 Integer benchmark suite results of different optimizations, compared using the register allocator with FMA3 and SSE4 enabled.
For IR, the Constant Folding, Dead Code Elimination and Variable Deduplication optimizations were enabled; for Generator, the Merging Operations optimization and the BMI2 instruction set extension.
% TODO: expand on what IR, Lifter, Generator means
%SBT the result of all optimizations except the removal of translation blocks.
}\label{benchmark_results2}
\end{centering}
\end{figure}
To better understand the performance of our translator, we compared the effects of the implemented optimizations on the runtime performance.
In Figure \ref{benchmark_results2}, the performance of the optimizations applying to the \texttt{IR}, of those applying to the generator, and of the \texttt{Call and Return} optimization is compared to a baseline without additional optimizations.
%We also evaluted how effective various optimizations are at different stages of the translation.
%For this we compared the results of benchmarks shown in Figure \ref{benchmark_results2}.
\par
On average, every optimization improves the performance; however, in some cases individual optimizations can result in a performance loss.
\par
On average, the biggest performance increase is achieved by the optimizations targeting the IR, most importantly \texttt{Constant Folding}.
One reason is that immediates on RISC-V are much narrower than on x86\_64, so large constants have to be built with multi-instruction sequences in the original binary; constant folding can collapse such sequences into a single constant, which can then be encoded in one x86\_64 instruction.
It also shows that static binary translation can perform a more extensive analysis of the target program than dynamic binary translation, resulting in a larger speedup at runtime.
\par
In benchmarks with many instructions following the load-modify-store pattern, the \texttt{Merging Operations} optimization yields the biggest gains; in some edge cases, it can cause a minor slowdown.
\par
In most cases, the \texttt{Call and Return} optimization improves performance, but in some cases the additional cost incurred by mispredictions causes a slowdown, here in the \texttt{648.exchange2} and \texttt{657.xz} benchmarks.

\subsection{Performance of Translation Blocks}\label{sec:translation_blocks}
The translation blocks introduced in Section \ref{translation_blocks} are responsible for a large part of the resulting binary.
To save space in the translated executable, we added an option to remove them.
\par
The removal of these blocks also means that any indirect jump that cannot be resolved needs to enter the interpreter, which is inherently slow.
The performance impact is made worse by the lack of points where the interpreter can jump back to compiled code.
For example, the \texttt{600.perlbench} benchmark was heavily affected by this, because not all indirect jump targets were detected during translation, resulting in misses at runtime.
\par
Because of the large impact on performance, we ran our benchmarks with translation blocks available.
In the future, this could be mitigated by detecting more indirect jump targets and keeping translation blocks in these cases.

\subsection{Basic Block Start Address Hashing}
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\begin{axis}[
width=0.9\textwidth,
height=0.5\textwidth,
tick align=inside,
unbounded coords=jump,
xmin=0, xmax=110,
ymin=-1, ymax=22,
point meta min=90,
point meta max=108,
colorbar horizontal,
colorbar style={
xlabel={Assembly file size in $\frac{1}{100}\cdot \textit{Binary size without hashing}$},
},
colormap={reverse viridis}{
indices of colormap={\pgfplotscolormaplastindexof{viridis},...,0 of viridis}
},
xlabel={Load factor in percent},
ylabel={Average bucket size in $\frac{1}{\text{Bucket}}$},
]
\addplot[
mark=square*,
only marks,
scatter,
scatter src=explicit,
mark size=3,
] table[meta=z] from {./tikz_src/hashing_assembly_size.txt};
\end{axis}
\end{tikzpicture}
\caption{Effects of average bucket size and hash table load factor on final assembly file size.}\label{hashing_figure}
\end{centering}
\end{figure}
In Figure~\ref{hashing_figure}, we show the effects of the two major parameters of the used hashing algorithm.
Missing data points mean that the hash function algorithm was not able to find all valid bucket-level hash functions within the 16-bit per-bucket storage limit.
The load factor is the fraction (in percent) of the final hash table which is filled with hashed entries.
The average bucket size is used to calculate the number of buckets and therefore directly affects the number of bucket-level hash functions required.
\par
The test was conducted on the translated binary of a simple zip program.
The sources are located in the \texttt{tests} folder of the project.
The test was only performed once, because the hashing algorithm and the translator itself are both strictly deterministic.
\par
The analysis shows that the amount of storage required for the bucket-level hash functions does not have a noticeable impact on the final assembly size.
It also shows that finding all valid bucket-level hash functions is more likely with a lower average number of elements per bucket.
This leads to the conclusion that using smaller bucket sizes can improve the bucket-level hash function generation times while not noticeably increasing the output assembly size.
\par
However, a hash table is noticeably slower than a direct lookup table, which caused a significant slowdown in some of the benchmarks.
Even with the \texttt{Call and Return Optimization} reducing the number of lookups required, the performance impact was too big.
Indirect jump hashing was therefore disabled for performance testing, but kept for use cases that require a smaller translated binary; the user can enable this optimization manually using a command line flag.

\subsection{Floating point accuracy}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{images/mandelbrot_differences/translated_diff.png}
\caption{Differences in the results of the Mandelbrot set visualizer~\cite{mandelbrot_program} when comparing the translator to QEMU. Light grey = Mandelbrot set, dark grey = remainder, red = differences.}\label{fig:mandelbrot_diff_translator}
\end{figure}
To depict the occurring rounding differences, a simple Mandelbrot set visualizer~\cite{mandelbrot_program} was used.
Figure~\ref{fig:mandelbrot_diff_translator} shows that the translator without any optimizations produces different results than the RISC-V binary emulated with QEMU.
The comparison of the emulated RISC-V program's result with the picture calculated by the program natively compiled on x86\_64 shows the same differences.
\par
Another problem can be observed when activating the usage of \textit{FMA3}.
It allows translating the RISC-V fused multiply add instructions to their x86\_64 equivalents rather than emulating them.
This again has an impact on the accuracy, because the intermediate result of fused multiply add instructions is not rounded~\cite{intel2017man}.
The Mandelbrot test showed that using the \textit{FMA3} instructions makes the result more similar to the RISC-V result; they should therefore be used if available.

\section{Summary}
The static binary translation program (SBT) developed in this project statically translates binaries compiled for the RISC-V ISA to x86\_64 binaries.
It is able to translate every benchmark from the SPECSpeed\textregistered2017 integer benchmark suite.
It fully supports the \textit{RV64I} and \textit{RV32I} RISC-V base integer instruction sets.
It also supports the \textit{Zifencei}, \textit{Zicsr}, \textit{M}, \textit{F}, \textit{D}, \textit{A} and \textit{C} standard extensions for single-threaded execution.
\par
The SBT is written in a modular manner to support repurposing parts of the project for, e.g., translating other source or target ISAs.
It consists of a RISC-V lifter, an abstract intermediate representation and an x86\_64 code generator.
The translator additionally supplies a runtime environment for translated binaries.
This is used for translating OS-specific features and for performing a fallback just-in-time interpretation of the original RISC-V code.
\par
As shown in the evaluation section, the SBT reached its goal of being consistently faster than QEMU.
It is able to outperform the dynamic binary translator in every executed benchmark.
The SBT is, on average, approximately 18\% faster than QEMU.
The SBT was also compared to a dynamic binary translator specifically optimized for translating RISC-V binaries.
The static translator was able to outperform the dynamic translator by 16\% on average over all executed benchmarks.
\par
In order to increase the execution speed even further, more optimizations to the translated binary are possible.
Some of them have already been introduced in previous chapters; they are not yet implemented due to the limited time span of this project.
\par
Further improvements of the translator include instruction reordering beyond basic block borders, support for dynamically linked binaries and support for more RISC-V ISA extensions.
Another milestone would be to utilize the modularity of the project and reuse the IR and either the RISC-V lifter or the x86\_64 code generator to introduce support for different source or target ISAs.
\clearpage
\bibliographystyle{plain}
\bibliography{Ausarbeitung}{}
\appendix
\section{Appendix}
\begin{center}
\begin{longtable}{p{0.35\linewidth} | p{0.6\linewidth}}
\hline
Instruction & Description \\ [0.5ex]
\hline
store & Stores a value at the supplied address. Consumes a memory token and generates a new one. \\
\hline
load & Loads the value at the supplied address and returns it. Reads as many bits from memory as the output variable is wide. \\
\hline
add & Addition. \\
\hline
sub & Subtraction. \\
\hline
mul\_l & Multiplication. Returns the lower half of the result. \\
\hline
ssmul\_h & \textbf{Signed} multiplication. Returns the upper half of the result. \\
\hline
uumul\_h & \textbf{Unsigned} multiplication. Returns the upper half of the result. \\
\hline
sumul\_h & Multiplication of one \textbf{signed} and one \textbf{unsigned} value. Returns the upper half of the result. \\
\hline
div & \textbf{Signed} division. Returns both the result and the remainder. \\
\hline
udiv & \textbf{Unsigned} division. Returns both the result and the remainder. \\
\hline
shl & Logical shift left. \\
\hline
shr & Logical shift right. \\
\hline
sar & Arithmetic shift right. \\
\hline
or & Logical \textit{or}. \\
\hline
and & Logical \textit{and}. \\
\hline
not & Logical \textit{not}. \\
\hline
xor & Logical \textit{xor}. \\
\hline
cast & C-style typecast. Defined for shrinking variables and for bit-wise casts from integer to floating point numbers. \\
\hline
slt & Ternary operator, set if less than:\newline IR: \texttt{dst $\leftarrow$ slt v1, v2, v3, v4} \newline Pseudocode: \texttt{dst = (v1 < v2) ? v3 : v4} \\
\hline
sltu & Ternary operator, set if less than (\textbf{unsigned}):\newline IR: \texttt{dst $\leftarrow$ sltu v1, v2, v3, v4} \newline Pseudocode: \texttt{dst = (v1 <\textbf{u} v2) ? v3 : v4} \\
\hline
sle & Ternary operator, set if less or equal:\newline IR: \texttt{dst $\leftarrow$ sle v1, v2, v3, v4} \newline Pseudocode: \texttt{dst = (v1 <= v2) ? v3 : v4} \\
\hline
seq & Ternary operator, set if equal:\newline IR: \texttt{dst $\leftarrow$ seq v1, v2, v3, v4} \newline Pseudocode: \texttt{dst = (v1 == v2) ? v3 : v4} \\
\hline
sign\_extend & Enlarges a value to a bigger type using sign extension. \\
\hline
zero\_extend & Enlarges a value to a bigger type using zero extension. \\
\hline
setup\_stack & Meta instruction used for stack setup at program start.
\\
\hline
umax & Returns the larger of two values, \textbf{unsigned} comparison. \\
\hline
umin & Returns the smaller of two values, \textbf{unsigned} comparison. \\
\hline
max & Returns the larger of two values, \textbf{signed} comparison. \\
\hline
min & Returns the smaller of two values, \textbf{signed} comparison. \\
\hline
fmul & \textbf{Signed} floating point number multiplication. \\
\hline
fdiv & \textbf{Signed} floating point number division, only returns the result. \\
\hline
fsqrt & Takes the square root of a floating point value. \\
\hline
fmadd & Fused multiply add of three floating point values:\newline IR: \texttt{dst $\leftarrow$ fmadd v1, v2, v3} \newline Pseudocode: \texttt{dst = v1 * v2 + v3} \\
\hline
fmsub & Fused multiply sub of three floating point values:\newline IR: \texttt{dst $\leftarrow$ fmsub v1, v2, v3} \newline Pseudocode: \texttt{dst = v1 * v2 - v3} \\
\hline
fnmadd & Fused negate multiply add of three floating point values:\newline IR: \texttt{dst $\leftarrow$ fnmadd v1, v2, v3} \newline Pseudocode: \texttt{dst = -(v1 * v2) + v3} \\
\hline
fnmsub & Fused negate multiply sub of three floating point values:\newline IR: \texttt{dst $\leftarrow$ fnmsub v1, v2, v3} \newline Pseudocode: \texttt{dst = -(v1 * v2) - v3} \\
\hline
convert & Conversion between integer and floating point numbers or between single and double precision floating point numbers. \\
\hline
uconvert & Unsigned conversion between integer and floating point numbers or between single and double precision floating point numbers. \\
\hline
\end{longtable}
\captionof{figure}{Defined IR instructions.}\label{figure:ir_instructions}
\end{center}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Deedy - One Page Two Column Resume % LaTeX Template % Version 1.2 (16/9/2014) % % Original author: % Debarghya Das (http://debarghyadas.com) % % Original repository: % https://github.com/deedydas/Deedy-Resume % % IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 1. Integrate biber/bibtex for article citation under publications. % 2. Figure out a smoother way for the document to flow onto the next page. % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % 5. Merge OpenFont and MacFonts as a single sty with options. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % CHANGELOG: % v1.1: % 1. Fixed several compilation bugs with \renewcommand % 2. Got Open-source fonts (Windows/Linux support) % 3. Added Last Updated % 4. Move Title styling into .sty % 5. Commented .sty file. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{resume} \usepackage{fontawesome} \usepackage{fancyhdr} \usepackage{graphicx} \pagestyle{fancy} \fancyhf{} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % LAST UPDATED DATE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TITLE NAME % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{Gianluca}{Scarpellini}{ \urlstyle{same}\href{http://blog.scarpellini.dev}{blog.scarpellini.dev} | \href{mailto:[email protected]}{[email protected]} | \href{mailto:[email protected]}{[email protected]} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN ONE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{minipage}[t]{0.33\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EDUCATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Education} \subsection{University of Milano-Bicocca} \descript{MS in Computer Science} \location{Oct 2020 | Milan, Italy} \location{GPA: 110L / 110} \descript{BS in Computer Science} \location{Jul 2018 | Milan, Italy} \location{GPA: 110L / 110} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LINKS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Links} \faGithub \hspace{3} \href{https://github.com/gianscarpe}{\bf gianscarpe} \\ \faLinkedin \hspace{3} \href{https://www.linkedin.com/in/gianlucascarpellini}{\bf gianlucascarpellini} \\ \faTwitter \hspace{3} \href{https://twitter.com/gianscarpellini}{\bf gianscarpellini} \\ \faMedium \hspace{3} \href{https://twitter.com/gianscarpellini}{\bf gianscarpellini} \\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % COURSEWORK %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Coursework} \subsection{University} Advanced Machine Learning \\ Computer Vision \\ Robotics \\ Signal processing \\ Probabilistic models \\ Machine Learning\\ \sectionsep \subsection{Online} Deep Reinforcement Learning (Udacity) \\ Probabilistic Machine Learning (Coursera) \\ \sectionsep \subsection{Summer schools} Eastern European Machine Learning 2021\\ Mediterranean Machine Learning 2021\\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SKILLS 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Skills}
\subsection{Programming languages}
\location{Over 5000 lines:}
Python \textbullet{} Node.js \textbullet{} C\# \textbullet{} Matlab \textbullet{} C++ \\
\location{Proficiency with:}
OpenCV \textbullet{} PyTorch \textbullet{} PyTorch Lightning \\
Keras \textbullet{} TensorFlow \textbullet{} Scikit-learn
\sectionsep
\sectionsep
\hspace{8cm}
\includegraphics[width=5cm]{qr.png}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%     COLUMN TWO
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{minipage}
\hfill
\begin{minipage}[t]{0.66\textwidth}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     RESEARCH
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Who am I}
My name is Gianluca and I'm a first-year PhD student at the Italian Institute of Technology.
I want to help energy-intensive industries reduce their carbon footprint with machine learning and computer vision.
Get in touch if you want to know more!

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     EXPERIENCE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experience}
\runsubsection{Argo Vision}
\descript{| ML Research Internship }
\location{May 2019 – Jan 2020 | Milan, Italy}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Trained state-of-the-art models for aerial semantic segmentation with IOU > 80\% (Keras, Hyperopt)
\item Trained models for car model and maker classification (200 classes) with accuracy > 80\% (Keras)
\item Adapted a plate detection \& OCR pipeline for Raspberry Pi (Keras, Darknet, TensorFlow Lite)
\end{tightemize}
\sectionsep
\runsubsection{Flash Delivery}
\descript{| Back-end Engineer }
\location{Jun 2017 – Apr 2019 | Bergamo, Italy}
\begin{tightemize}
\item Developed and deployed the back-end infrastructure (C\#, Node.js)
\item Maintained databases and performed data analysis (SQLServer, MongoDB)
\end{tightemize}
\sectionsep
\runsubsection{Bergamopost}
\descript{| Freelance journalist }
\location{Sep 2016 – Sep 2017 | Bergamo, Italy}
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     RESEARCH
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Research}
\runsubsection{Italian Institute of Technology}\\
\descript{Ph.D. Student}
\location{Nov 2020 – Present | Genova, Italy}
\descript{Visiting Scientist}
\location{Mar 2020 – Oct 2020 | Genova, Italy}
\begin{tightemize}
\item Developed the first pipeline for solving human pose estimation with a single event camera. Published at CVPR2021 [\href{https://blog.scarpellini.dev/tags/event-cameras/}{blog posts}]
\item Research on uncertainty estimation and object detection for robotics applications
\end{tightemize}
\sectionsep
\runsubsection{Imaging and Vision Lab}
\descript{| Undergraduate Researcher}
\location{Mar 2018 – Jul 2018 | Milan, Italy}
I conducted research for my thesis as an undergraduate student.
My work focused on car, pedestrian, and cyclist detection from RGB and LIDAR data in road scenarios.
Experience with the KITTI Vision Benchmark Suite.
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     AWARDS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Open-source contributions}
\begin{tabular}{rll}
2021 & Test refactoring and multi-GPU utils & PyTorch Lightning [\href{https://github.com/belerico/insertion-matcher}{code}] \\
2020 & NLP pipeline for advertising matching & Open-source [\href{https://github.com/belerico/insertion-matcher}{code}] \\
2018 & Classical CV pipeline for chessboard detection & Open-source [\href{https://github.com/gianscarpe/chess_detection}{code}] \\
\end{tabular}
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     PUBLICATIONS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Publications}
\renewcommand\refname{\vskip -1.5em} % Couldn't get this working from the .cls file
\bibliographystyle{abbrv}
\bibliography{publications}
\nocite{*}
\end{minipage}
\end{document}
The mathematical laws describing the deformation of a solid body are summarized in this section. First, the kinematic laws governing the motion of each material point belonging to a solid are considered. Then \textit{strain} measures are associated with internal forces through the thermodynamic framework so that \textit{constitutive equations} are derived. For a more exhaustive review of governing equations, see for instance \cite{Belytschko,Foundation_of_elasticity,Truesdell,Simo}. \subsection{Kinematic laws -- Strain measures} Consider a three-dimensional solid with volume denoted by $\Omega \subset \Rbb^3$ bounded by the surface $\partial \Omega$. This body undergoes external forces that can either be localized on a part of the external surface of the body (\textit{i.e. surface forces}) or act in the whole solid domain (\textit{i.e. volume forces}). Due to the presence of such loads, the domain may change within the time interval $\tau = \[0,T\]$ and is hence written as a function of time $\Omega(t)$ ($t\in \tau$). The state of the solid at time $t=0$, corresponding to a non-deformed state with volume $\Omega(t=0)=\Omega_0$, is referred to as the \textit{initial configuration}. Some problems require the use of a \textit{reference configuration} that can be deformed and to which equations are referred. In what follows, the reference and initial configurations are identical. At a given time $t>0$, the volume is $\Omega(t)=\Omega_t$ and the state of the solid corresponds to the \textit{current configuration}. These configurations are depicted in figure \ref{fig:deformationFunction}. \begin{figure}[h] \centering \input{chapter2/pgfFigures/deformationFunction} \caption{Deformation of a solid body between a reference state $\Omega_0$ to a subsequent state $\Omega_t$.} \label{fig:deformationFunction} \end{figure} The points of $\Rbb^3$ are located with \textit{Eulerian} or \textit{spatial coordinates} $\vect{x}=x_i\vect{e}_i$ while material particles in the reference configuration are located with \textit{Lagrangian coordinates} $\vect{X}=X_\alpha \vect{E}_\alpha$. %Two different bases $(\vect{E}_1,\vect{E}_2,\vect{E}_3)$ and $(\vect{e}_1,\vect{e}_2,\vect{e}_3)$, that may be different are not, allow the use of \textit{Lagrangian coordinates} and $\vect{E}_\alpha$ or \textit{Eulerian coordinates} $\vect{x}=x_i\vect{e}_i$ respectively. %Lagrangian coordinates give the location of material particles in the reference configuration while Eulerian coordinates are used to locate the point of space at which a particle may be at a given time. %In the reference configuration, all material particles are located by their position vectors: $\vect{X}=X_\alpha \vect{E}_\alpha$. At time $t$, the particle initially located at $\vect{X}$ may have moved to a different position given by the smooth mapping $\vect{\phi}(\vect{X},t)=\phi_i(\vect{X},t)\vect{e}_i$, providing the path of every particle of the solid during the deformation. Thus, the current volume of the solid is defined by means of Eulerian coordinates and the \textit{deformation function} $\vect{\phi}(\vect{X},t)$ as: $\Omega(t)=\{\vect{x}\in \Rbb^3: \vect{x}=\vect{\phi}(\vect{X},t),\: \vect{X},t\in\Omega_0\times \tau\}$. % The restriction of Eulerian coordinates to the current volume, namely $\vect{x}\in\Omega(t)$, allows to write the bijective relation $\vect{x}=\vect{\phi}(\vect{X},t)$. %Thus, the mapping $\vect{\phi}$ provides the path of every particle of the solid during the deformation. 
% In the Lagrangian coordinates system, particles are tracked during the deformation while the \textit{Eulerian coordinates}, denoted by $\vect{x}=x_i\vect{e}_i$, correspond to a \textit{spatial description}.
Note that in the above definitions, Greek indices are used for quantities evaluated in the reference configuration whereas Latin ones refer to quantities defined in the current configuration.
The \textit{displacement} and \textit{velocity} vectors of a particle at time $t$ are respectively:
\begin{align}
  &\vect{u}(\vect{X},t)=\vect{\phi}(\vect{X},t) - \vect{X} \qquad \forall\:\: \vect{X},t \in \Omega_0\times \tau \label{eq:displacement}\\
  &\vect{v}(\vect{X},t)=\drond{\vect{\phi}}{t}(\vect{X},t) = \vect{\dot{\phi}}(\vect{X},t) \qquad \forall\: \: \vect{X},t \in \Omega_0\times \tau \label{eq:velocity}
\end{align}
where the superposed dot denotes the material time derivative.
The second-order two-point \textit{deformation gradient} tensor is defined as:
\begin{equation}
  \label{eq:F_phi}
  \tens{F}=\nablat_0 \vect{\phi} (\vect{X},t)
\end{equation}
where $\nablat_0 (\bullet)$ is the gradient operator in the reference configuration.
This tensor can also be written by using equation \eqref{eq:displacement}:
\begin{equation}
  \tens{F}= \nablat_0 \vect{u}(\vect{X},t) + \tens{I} \label{eq:F_displacement}
\end{equation}
with $\tens{I}$, the second-order identity tensor.
The deformation gradient tensor characterizes the variations of lengths, angles, areas and volumes.
Indeed, the infinitesimal vector, oriented surface and volume elements in the reference configuration, respectively denoted by $\vect{dX}$, $\vect{dS}$ and $dV$, transform to:
\begin{equation}
  \label{eq:transport_equations}
  \begin{aligned}
    & dx_i=F_{i\alpha}dX_\alpha \\
    & ds_i=J F_{\alpha i}^{-1}dS_{\alpha} \\
    & dv=JdV
  \end{aligned}
\end{equation}
in the current configuration.
These transport equations \eqref{eq:transport_equations} involve the determinant of the deformation gradient $J=\det(\tens{F})>0$, also called the \textit{Jacobian of the deformation}.
Since it accounts for changes in lengths and angles (\textit{i.e. the change of shape of a body}), the deformation gradient is one strain measure among others.
For instance, one also defines the \textit{right Cauchy-Green} and the \textit{Green-Lagrange} tensors as:
\begin{equation*}
  \tens{C}=\tens{F}^T\tens{F} \quad ; \quad \tens{E}=\frac{1}{2}(\tens{C}-\tens{I})
\end{equation*}
respectively.
Making use of equation \eqref{eq:F_displacement}, the Green-Lagrange tensor reads:
\begin{equation*}
  \tens{E}=\frac{1}{2}(\nablat_0 \vect{u} + \nablat_0 \vect{u}^T + \nablat_0 \vect{u}^T \nablat_0 \vect{u})
\end{equation*}
In particular, when a deformation involves displacement vectors such that $\norm{\nablat_0 \vect{u}} \ll 1$, the last term of the previous definition can be neglected, leading to:
\begin{equation}
  \label{eq:epsilon}
  \tens{E} \approx \frac{1}{2}(\nablat_0 \vect{u} + \nablat_0 \vect{u}^T) = \tens{\eps}
\end{equation}
with $\tens{\eps}$ the symmetric \textit{linearized strain tensor}.
Such deformations fall within the \textit{small strains} framework and are characterized by small strains but possibly large displacements.
Furthermore, when the deformation leads to a displacement vector such that $\frac{\norm{\vect{u}}}{L} \ll 1$, where $L$ is a characteristic length of the domain, the reference and current configurations are considered as identical within the equations of the Initial Boundary Value Problem (IBVP).
The aforementioned situations correspond to the \textit{linearized geometrical} framework or \textit{infinitesimal theory}.
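As a simple illustration of these strain measures, consider the homogeneous uniaxial stretch of magnitude $\lambda>0$ along the first axis, $\vect{\phi}(\vect{X},t)=(\lambda X_1, X_2, X_3)$, for which the previous definitions yield:
\begin{equation*}
  \tens{F}=\lambda\,\vect{e}_1\otimes\vect{E}_1 + \vect{e}_2\otimes\vect{E}_2 + \vect{e}_3\otimes\vect{E}_3 \quad ; \quad J=\lambda \quad ; \quad \tens{E}=\frac{\lambda^2-1}{2}\,\vect{E}_1\otimes\vect{E}_1
\end{equation*}
while the only non-zero component of the linearized strain tensor $\tens{\eps}$ is $\lambda-1$.
For $\lambda=1+\delta$ with $|\delta|\ll 1$, the axial component of $\tens{E}$ equals $\delta+\delta^2/2\approx\delta$, so that both strain measures coincide at first order, as expected within the small strains framework.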
\subsection{Balance equations}
% In this section a solid domain $\Omega(t)$ undergoing a deformation is still considered within the time interval $\tau = \[0,T\]$.
The time derivatives of equations \eqref{eq:F_phi} and \eqref{eq:epsilon}, combined with the definition of the velocity field \eqref{eq:velocity}, yield respectively:
\begin{equation}
  \begin{aligned}
    & \tens{\dot{F}} - \nablat_0\vect{v} = \tens{0} \\
    & \tens{\dot{\eps}} - \nablat^s\vect{v} = \tens{0}
  \end{aligned}
  \label{eq:geometrical_conservation}
\end{equation}
where $\nablat^s(\bullet)$ denotes the symmetric gradient operator.
The gradient operators can be rewritten as:
\begin{align}
  & \nablat_0 \vect{v} = \nablav_0 \cdot \(\vect{v}\otimes\tens{I}\) \\
  & \nablat^s \vect{v} = \frac{1}{2}\nablav \cdot \(\vect{v}\otimes\tens{I}+\tens{I}\otimes \vect{v}\)
\end{align}
with $\nablav_0 \cdot \(\bullet\)$ and $\nablav \cdot\(\bullet\)$ the right divergence operators in the reference and current configurations, respectively.
With these forms of the gradient operators, the geometrical relations \eqref{eq:geometrical_conservation} can be written as kinematic or geometrical balance laws \cite{Plohr,Haider_FVM,Gil_HE,Gil_SPH_HE}:
\begin{align}
  & \tens{\dot{F}} - \nablav_0 \cdot \(\vect{v}\otimes\tens{I}\) = \tens{0} \label{eq:HE_kinematic} \\
  & \tens{\dot{\eps}} - \frac{1}{2}\nablav \cdot \(\vect{v}\otimes\tens{I}+\tens{I}\otimes \vect{v}\) = \tens{0} \label{eq:HPP_kinematic}
\end{align}
Then, assuming that the mass of some amount of matter remains constant during the deformation, one writes the conservation of mass in integral form:
\begin{equation*}
  \int_\Omega \rho dv = \int_{\Omega_0} \rho_0 dV \qquad \forall \: t \in \tau,\: \forall \:\Omega_0
\end{equation*}
which, with the third transport formula of \eqref{eq:transport_equations}, reads:
\begin{equation}
  \label{eq:mass_conservation_law}
  \int_{\Omega_0} \(J\rho - \rho_0\) dV = 0 \qquad \forall \:\Omega_0
\end{equation}
Since equation \eqref{eq:mass_conservation_law} holds regardless of the volume $\Omega_0$, the integrand must vanish, so that the local conservation of mass is written:
\begin{equation}
  \label{eq:mass_balance}
  \rho\(\vect{\phi}(\vect{X},t),t\) = \frac{\rho_0\(\vect{X}\)}{J} \qquad \forall \: \vect{X},t \: \in \Omega_0\times \tau
\end{equation}
Furthermore, \textit{Newton's second law} states the equilibrium between the inertia and the external forces undergone by a solid $\Omega$.
In the current configuration, this conservation law consists of the \textit{translational} and \textit{rotational} balances, also known as the \textit{linear momentum} and \textit{angular momentum} balance equations, which are respectively:
\begin{subequations}
  \begin{alignat}{1}
    \label{eq:linear_momentum}
    & \ddroit{}{t}\int_\Omega \rho \vect{v} dv = \int_{\partial \Omega} \vect{t} ds + \int_{\Omega} \rho\vect{b}dv \qquad \forall \: t \in \tau,\: \forall \:\Omega \\
    \label{eq:angular_momentum}
    & \ddroit{}{t}\int_\Omega \rho \vect{x} \times \vect{v} dv = \int_{\partial \Omega} \vect{x} \times\vect{t} ds + \int_{\Omega} \rho \vect{x} \times\vect{b}dv \qquad \forall \: t \in \tau,\: \forall \:\Omega
  \end{alignat}
\end{subequations}
where $\vect{t}$ and $\vect{b}$ denote surface and volume forces and the cross operator denotes the vector product.
The second-order \textit{Cauchy stress tensor} $\tens{\sigma}$ is then introduced by using Cauchy's theorem $\vect{t}=\tens{\sigma}\cdot \vect{n}$ where $\vect{n}$ is the outward normal vector to the surface element $ds$. \begin{theorem}[Ostrogradski] The \textbf{divergence theorem} relates the flow of a quantity through a closed surface $\partial\Omega$ to the divergence of this quantity inside the volume $\Omega$ delimited by $\partial \Omega$: \begin{equation} \label{eq:Ostrogradski_th} \int_{\partial \Omega} (\bullet)\cdot \vect{n}\:ds=\int_\Omega \nablav \cdot (\bullet) \: dv \qquad \forall\:\Omega \end{equation} \end{theorem} % \begin{theorem}[Reynolds] % The \textbf{Reynolds transport theorem} relates the flow of a quantity through a closed surface $\partial\Omega$ to the divergence of this quantity inside the volume $\Omega$ delimited by $\partial \Omega$: % \begin{equation} % \label{eq:Ostrogradski_th} % \int_{\partial \Omega} (\bullet)\cdot \vect{n}\:ds=\int_\Omega \nablav \cdot (\bullet) \: dv \qquad \forall\:\Omega % \end{equation} % \end{theorem} \begin{definition} \label{def:Piola_transform} The Piola transform $\tens{T}^P$ of a second-order tensor $\tens{T}$ is defined as: \begin{equation*} \tens{T}^P=J\tens{T}\cdot\tens{F}^{-T} \end{equation*} and satisfies: \begin{equation*} \nablav_0\cdot \tens{T}^P = J \nablav \cdot \tens{T} \end{equation*} \end{definition} The conservation of linear momentum \eqref{eq:linear_momentum}, combined with the volume transport theorem \eqref{eq:transport_equations}, reads in the reference configuration: \begin{equation} \label{eq:1} \ddroit{}{t}\int_{\Omega_0} \rho_0 \vect{v} \: dV = \int_{\Omega_0} J\nablav \cdot\tens{\sigma} \: dV + \int_{\Omega_0} \rho_0\vect{b}\:dV \qquad \forall \: t \in \tau,\: \forall \:\Omega_0 \end{equation} % Thus, by using the divergence theorem \eqref{eq:Ostrogradski_th} in the second law of Newton combined with the conservation of mass and the transport theorem, one gets: % \begin{equation} % \label{eq:Linear_momentum_conservation_eulerian} % \int_{\Omega} \( \rho \vect{\dot{v}} - \nablav \cdot \tens{\sigma} - \rho\vect{b} \) dV = \vect{0} \qquad \forall \:t \in \tau,\: \forall \:\Omega % \end{equation} % The transport formula of volume elements \eqref{eq:transport_equations} are then combined to the mass balance equation \eqref{eq:mass_balance} in order to write the equation \eqref{eq:Linear_momentum_conservation_eulerian} in the reference configuration: % \begin{equation} % \int_{\Omega_0} \( \rho_0 \vect{\dot{v}} - J \nablav \cdot \tens{\sigma} - \rho_0\vect{b} \) dV = \vect{0} \qquad \forall \: t \in\tau,\: \forall \:\Omega_0 % \end{equation} or, by using definition \ref{def:Piola_transform}: \begin{equation} \label{eq:Linear_momentum_conservation} \int_{\Omega_0} \( \rho_0 \vect{\dot{v}} - \nablav_0 \cdot \tens{\Pi} - \rho_0\vect{b} \) dV = \vect{0} \qquad \forall \: t \in\tau,\: \forall \:\Omega_0 \end{equation} where the \textit{first Piola-Kirchhoff stress tensor} (PK1) $\tens{\Pi}=J\tens{\sigma}\cdot\tens{F}^{-T}$ is the Piola transform of Cauchy stress tensor. 
Thus, the vanishing of the integrand in equation \eqref{eq:Linear_momentum_conservation} yields the balance equation of the \textit{Lagrangian linear momentum}: \begin{equation} \label{eq:Lagrangian_linear_momentum} \rho_0 \vect{\dot{v}} - \nablav_0 \cdot \tens{\Pi} = \rho_0 \vect{b} \qquad \forall \: \: \vect{X},t \in \Omega_0 \times \tau \end{equation} or equivalently for the current configuration: \begin{equation} \label{eq:HPP_linear_momentum} \rho \vect{\dot{v}} - \nablav \cdot \tens{\sigma} = \rho \vect{b} \qquad \forall \: \: \vect{x},t \in \Omega \times \tau \end{equation} On the other hand, the conservation of angular momentum \eqref{eq:angular_momentum} leads to the symmetry of Cauchy stress tensor $\tens{\sigma}=\tens{\sigma}^T$, or equivalently from the definition of PK1 tensor, $\tens{\Pi}\cdot \tens{F}^T=\tens{\Pi}^T\cdot \tens{F}$ \cite{Foundation_of_elasticity}. We complete the set of balance laws by considering the \textit{first law of thermodynamics}. This law is a balance between the rates of change of \textit{kinetic} and \textit{internal} energies, the power of external forces, and the amount of heat entering the system as \textit{volume} or \textit{surface heat sources}. \begin{equation*} \ddroit{}{t}\int_{\Omega} \(\frac{1}{2}\rho \vect{v}\cdot\vect{v} + \rho e\) dv = \int_{\partial \Omega} \(\tens{\sigma}\cdot\vect{n}\)\cdot\vect{v} \: ds + \int_{\Omega} \rho\vect{b}\cdot\vect{v} \: dv + \int_{\Omega} \rho r \:dv - \int_{\partial \Omega} \vect{q}\cdot\vect{n} \: ds \qquad \forall \: t \in \tau ,\: \forall \:\Omega \end{equation*} where $\vect{q}$ is the outward heat flux vector, $r$ is a volume heat source and $e$ is the internal energy density. The divergence theorem \eqref{eq:Ostrogradski_th} yields: \begin{equation*} \ddroit{}{t}\int_{\Omega} \(\frac{1}{2}\rho \vect{v}\cdot\vect{v} + \rho e\) dv = \int_{\Omega} \(\nablav\cdot(\tens{\sigma}\cdot\vect{v}) + \rho\vect{b}\cdot\vect{v} \) dv + \int_{\Omega} \rho r \: dv - \int_{\partial \Omega} \vect{q}\cdot\vect{n} \: ds \qquad \forall \: t \in \tau ,\: \forall \:\Omega \end{equation*} The transport of this relation in the reference configuration and introduction of the Lagrangian linear momentum \eqref{eq:Lagrangian_linear_momentum} and of kinetic conservation laws \eqref{eq:HE_kinematic} lead to: \begin{equation} \label{eq:conservation_law_energy} \int_{\Omega_0} \rho_0 \dot{e} dV = \int_{\Omega_0} \tens{\Pi}:\tens{\dot{F}}\: dV + \int_{\Omega_0} \(\rho_0 r - \nablav_0 \cdot \vect{Q}\) dV \qquad \forall \: t \in \tau \end{equation} where $\vect{Q}=J\vect{q}\cdot \tens{F}^{-1}$ is the Lagrangian heat flux vector. One thus deduces the balance equation of internal energy in the reference configuration: \begin{equation} \label{eq:energy_balance} \rho_0 \dot{e} - \tens{\Pi}:\tens{\dot{F}} + \nablav_0 \cdot \vect{Q} = \rho_0 r \qquad \forall \: \: \vect{X},t \in \Omega_0 \times \tau \end{equation} Finally, the small strain version of equation \eqref{eq:energy_balance} is: \begin{equation} \label{eq:energy_balance_euler} \rho \dot{e} - \tens{\sigma}:\tens{\dot{\eps}} + \nablav \cdot \vect{q} = \rho r \qquad \forall \: \: \vect{x},t \in \Omega \times \tau \end{equation} Strain and stress are then conjugate fields through an energy function. The former are referred to as \textit{state variables} describing the evolution of the thermodynamic system while the latter are \textit{thermodynamic forces} governed by \textit{constitutive equations}. In what follows, such constitutive equations are derived. 
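Before turning to constitutive models, the kinematic measures introduced above can be illustrated with a short numerical check. The following sketch is an added illustration rather than part of the original development: it assumes only NumPy, and the simple-shear example and function name are chosen for convenience. It evaluates $\tens{F}$, $J$, $\tens{E}$ and $\tens{\eps}$ for a homogeneous simple shear and shows that $\tens{E}$ and $\tens{\eps}$ coincide up to second order in the shear amplitude, in agreement with equation \eqref{eq:epsilon}.
\begin{verbatim}
# Minimal illustrative check (not part of the derivation): kinematic
# quantities for a homogeneous simple shear x = X + gamma*X_2*e_1.
import numpy as np

def kinematics(gamma):
    grad_u = np.zeros((3, 3))
    grad_u[0, 1] = gamma                 # displacement gradient (nabla_0 u)
    F = np.eye(3) + grad_u               # deformation gradient F = I + nabla_0 u
    J = np.linalg.det(F)                 # Jacobian of the deformation
    C = F.T @ F                          # right Cauchy-Green tensor
    E = 0.5 * (C - np.eye(3))            # Green-Lagrange tensor
    eps = 0.5 * (grad_u + grad_u.T)      # linearized strain tensor
    return F, J, E, eps

for gamma in (1e-3, 0.5):
    F, J, E, eps = kinematics(gamma)
    # E - eps is of order gamma^2: negligible for small shear, not for large.
    print(gamma, J, np.max(np.abs(E - eps)))
\end{verbatim}
For $\gamma=10^{-3}$ the difference between $\tens{E}$ and $\tens{\eps}$ is of order $10^{-7}$, whereas for $\gamma=0.5$ it is no longer negligible, which is precisely the boundary between the finite and infinitesimal strain descriptions discussed above.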
\subsection{Constitutive equations -- Thermodynamics} \label{sec:constitutive-equations} The closure of the continuum equations is given by constitutive equations for the stress. Once and for all, we consider here constitutive models within the \textit{Generalized Standard Materials} (GSM) framework \cite{GSM}. \subsubsection*{The general hyperelasticity formulation} First, the \textit{Clausius-Duhem} inequality resulting from combination of first and second laws of thermodynamics, reads: \begin{equation} \label{eq:Clausius-Duhem} \underbrace{\phantom{\frac{1}{\theta}} \tens{\Pi}:\tens{\dot{F}} + \rho_0 \(\theta \dot{\eta} -\dot{e}\)}_{\Dscr^{int}} \: \underbrace{-\:\frac{1}{\theta} \vect{q} \cdot \nablav_0 \theta}_{\Dscr^{th}} \geq 0 \qquad \forall \: \: \vect{X},t \in \Omega_0 \times \tau \end{equation} where $\theta$ and $\eta$ denote the temperature and the entropy, and $\Dscr^{int}$ and $\Dscr^{th}$ are respectively the mechanical and thermal dissipations. The relation \eqref{eq:Clausius-Duhem} becomes an equality for \textit{reversible} processes and a strict inequality for \textit{irreversible} ones. Furthermore, a widely used assumption consists in considering that mechanical and thermal dissipations simultaneously satisfy non-negativeness. Note that the \textit{Fourier's law} of conduction is based on the non-negativeness of the thermal dissipation and leads to the following definition of the heat flux vector in order to ensure the positiveness of the thermal dissipation: \begin{equation*} \label{eq:Fourier_law} \vect{q}=-\tens{k}\cdot\nablav_0 \theta \end{equation*} where $\tens{k}$ is a positive-definite second-order tensor. We assume that the internal energy density is a function of strain, entropy and additional internal variables $\Vc_p \: (1\leq p \leq N)$, describing irreversible processes. The Helmholtz free energy density on the other hand, defined as the \textit{Legendre transform} of internal energy, is a function of temperature and not of entropy: $\psi\(\tens{F},\theta,\Vcb\)=e\(\tens{F},\eta,\Vcb\)-\theta \eta$. The free energy density is supposed \textit{objective} or \textit{frame indifferent} \cite[p.255]{Simo}, concave with respect to temperature and convex with respect to other variables. The mechanical dissipation thus reads: \begin{equation*} \Dscr^{int} = \tens{\Pi}:\tens{\dot{F}} - \rho_0 \(\dot{\psi} +\eta \dot{\theta}\) \end{equation*} or, by introducing the time derivative of the Helmholtz free energy density $\dot{\psi} = \drond{\psi}{\tens{F}}:\tens{\dot{F}} + \drond{\psi}{\theta}\dot{\theta} + \drond{\psi}{\Vcb}\dot{\Vcb}$ \begin{equation} \label{eq:Dint_psi_factor} \Dscr^{int} = \(\tens{\Pi}- \rho_0 \drond{\psi}{\tens{F}} \):\tens{\dot{F}} - \rho_0 \(\drond{\psi}{\theta} +\eta\) \dot{\theta} - \rho_0\drond{\psi}{\Vcb}\dot{\Vcb} \end{equation} Since the mechanical dissipation must be non-negative regardless of the nature of the transformation, it must in particular vanish for a reversible isothermal process (\textit{i.e. $\theta=const$}) for which every additional internal variable is constant (\textit{i.e. $\dot{\Vc}=0$}). 
With these considerations, we are left with the relation: \begin{equation*} \( \tens{\Pi} - \rho_0\drond{\psi}{\tens{F}} \): \tens{\dot{F}} = 0 \end{equation*} holding regardless of the deformation, and hence: \begin{equation} \label{eq:PK1_definition} \rho_0\drond{\psi}{\tens{F}} = \tens{\Pi} \end{equation} A material is said \textit{hyperelastic} if there exists a \textit{stored energy density function} $\rho_0\psi$ from which can be derived the first Piola-Kirchhoff stress tensor \cite[p.8]{Foundation_of_elasticity}. Similar considerations lead to the state laws for entropy and are assumed for additional thermodynamic forces associated with internal variables $\Vcb$: \begin{equation} \label{eq:thermodynamic_forces} \drond{\psi}{\theta} = - \eta \quad ; \quad \rho_0\drond{\psi}{\Vcb}=-\Acb \end{equation} \begin{remark} \label{rq:isothermal_deformation} Temperature has been introduced as a state variable and requires the first principle of thermodynamics, rewritten as the heat equation, in order to close the system: \begin{equation*} \rho_0 C \dot{\theta} = \rho_0 r - \nablav_0 \cdot \vect{Q} - \rho_0 \drond{\psi}{\Vcb}\dot{\Vcb} + \theta \(\drond{\tens{\Pi}}{\theta}:\tens{\dot{F}} - \drond{\Acb}{\theta}\dot{\Vcb} \) \end{equation*} Nevertheless, we will restrict our attention in the following to isothermal deformations so that temperature can be omitted and internal energy balance equations \eqref{eq:energy_balance} or \eqref{eq:energy_balance_euler} are not considered. % automatically satisfied. Indeed, for isothermal processes the heat equation leads to: % \begin{equation*} % \rho_0 r - \nablav_0 \cdot \vect{Q} = \rho_0 \drond{\psi}{\Vcb}\dot{\Vcb} % \end{equation*} % which, once introduced in the energy balance \eqref{eq:energy_balance} yields: % \begin{equation*} % \rho_0 \dot{e} - \tens{\Pi}:\tens{\dot{F}} = \rho_0 \drond{\psi}{\Vcb}\dot{\Vcb} % \end{equation*} % By using the time derivative of free energy density and noting that $\eta=-\drond{\psi}{\theta} =0$, we get that each side of the previous equation simplify. \end{remark} For isothermal reversible deformations in hyperelastic solids, the time derivative of equation \eqref{eq:PK1_definition} leads to: \begin{equation} \label{eq:HE_tangent} \tens{\dot{\Pi}} = \rho_0\ddrond{\psi}{\tens{F}}{\tens{F}}:\tens{\dot{F}} = \Hbb:\tens{\dot{F}} \end{equation} where $\Hbb$ is the fourth-order \textit{tangent modulus} tensor (major symmetric). %\subsubsection*{Examples of hyperelastic constitutive laws} The above discussion is now specified to constitutive models that will be used in the remainder of the manuscript. \begin{example}[Nearly incompressible Neo-Hookean] The nearly incompressible neo-Hookean hyperelastic model is well-suited to describe rubber-like materials and is based on the polyconvex stored energy function (\textit{i.e. convex with respect to all its arguments}): \begin{equation} \label{eq:neo-hook_energy} \rho_0 \psi(J,\tens{F})= \frac{\kappa}{2}(J-1)^2 + \frac{\mu}{2}\[J^{-2/3} (\tens{F}:\tens{F})-3 \] \end{equation} where $\kappa$ is the bulk modulus and $\mu$ the Lam\'e shear modulus. 
The first Piola-Kirchhoff stress and the acoustic tensor $A_{ij}=N_\alpha H_{i\alpha j\beta} N_\beta$ are for this model \cite{Haider_FVM}: \begin{equation} \label{eq:PK1_neo-hook} \tens{\Pi} = \mu J^{-2/3} \[\tens{F}- \frac{1}{3}\(\tens{F}:\tens{F}\) \tens{F}^{-T}\] + \kappa \(J-1\)\tens{H} \end{equation} and \begin{equation} \label{eq:neo-hook_acoustic} \begin{split} \tens{A}=\[\frac{5}{9}\mu J^{-8/3}\(\tens{F}:\tens{F}\) + \kappa \]&(\tens{H}\cdot\vect{N})\otimes\(\tens{H}\cdot\vect{N} \) + \mu J^{-2/3} \tens{I} \\ &-\frac{2}{3}\mu J^{-5/3}\[(\tens{H}\cdot\vect{N})\otimes\(\tens{F}\cdot\vect{N} \) + (\tens{F}\cdot\vect{N})\otimes\(\tens{H}\cdot\vect{N} \)\] \end{split} \end{equation} where $\tens{H}=J\tens{F}^{-T}$ is the \textit{adjoint tensor} of the deformation gradient. The polyconvexity of this model ensures the positive definiteness of the acoustic tensor \eqref{eq:neo-hook_acoustic} \cite{Kluth}. \end{example} \begin{example}[Saint-Venant-Kirchhoff] The Saint-Venant-Kirchhoff hyperelastic model is based on the stored energy function: \begin{equation} \label{eq:SVK_energy} \rho_0\psi=\frac{1}{8}\(\tens{F}^T\tens{F}- \tens{I}\):\Cbb:\(\tens{F}^T\tens{F}- \tens{I}\) \end{equation} where $\Cbb$ is the fourth-order elasticity tensor defined as: $C_{i\alpha j\beta}= \lambda \delta_{i\alpha}\delta_{j\beta} + \mu \(\delta_{ij}\delta_{\alpha\beta}+\delta_{i\beta}\delta_{j\alpha}\)$, with Lamé parameters $(\lambda,\mu)$. By differentiating the stored energy function \eqref{eq:SVK_energy} with respect to the deformation gradient, the PK1 stress is: \begin{equation} \label{eq:SVK_PK1} \tens{\Pi} = \frac{1}{2}\lambda \(\tens{F}:\tens{F}- 3\) \tens{F} + \mu\tens{F}\(\tens{F}^T\tens{F}- \tens{I}\) \end{equation} whose derivative with respect to $\tens{F}$ yields the tangent modulus: \begin{equation} \label{eq:SVK_tangent} \Hbb = \lambda \[\tens{F}\otimes\tens{F} + \frac{1}{2} (\tens{F}:\tens{F}- 3)\Ibb\]+ \mu\(\Bbb- \Ibb \) \end{equation} with $B_{i\alpha j \beta}=F_{k\alpha}F_{k\beta}\delta_{ij} + F_{j\alpha}F_{i\beta} +F_{i\mu} F_{j\mu}\delta_{\alpha\beta}$ and the fourth order identity tensor $I_{i\alpha j \beta}=\delta_{ij}\delta_{\alpha \beta}$. The previous tangent modulus finally leads to the acoustic tensor: \begin{equation} \label{eq:SVK_acoustic} \begin{split} \tens{A} = \lambda \[\vphantom{\frac{1}{2}} (\tens{F}\cdot\vect{N})\otimes(\tens{F}\cdot\vect{N}) \right.+ &\left.\frac{1}{2} (\tens{F}:\tens{F}- 3)\tens{I}\] \\ + & \mu\[(\tens{F}\cdot\vect{N})\cdot(\tens{F}\cdot\vect{N})\tens{I} + (\tens{F}\cdot\vect{N})\otimes(\tens{F}\cdot\vect{N}) + \tens{F}\cdot\tens{F}^T- \tens{I} \] \end{split} \end{equation} Even though this model can lead to non-physical solutions, as we shall see in section \ref{sec:SVK_solution}, it will be used for a one-dimensional strain problem for it enables the development of an exact solution. \end{example} %\subsubsection*{History-dependent models in small strain} \subsubsection*{The infinitesimal theory formulation} %Analogously to the hyperelastic case, constitutive equations within the small strain framework are derived from the Eulerian Clausius-Duhem inequality. 
The linearized geometrical framework leads, by assuming the existence of a Helmholtz free energy density $\psi$ that depends on the temperature $\theta$, the infinitesimal strain tensor $\tens{\eps}$ and additional internal variables $\Vcb$, to the following relation \cite[Ch.2]{Simo}:
\begin{equation}
  \label{eq:Cauchy_definition}
  \rho \drond{\psi}{\tens{\eps}} = \tens{\sigma}
\end{equation}
% which time derivative yields:
% \begin{equation}
%   \label{eq:HPP_tangent}
%   \tens{\dot{\sigma}} = \rho\ddrond{\psi}{\tens{\eps}}{\tens{\eps}}:\tens{\dot{\eps}} = \Cbb:\tens{\dot{\eps}}
% \end{equation}
The infinitesimal strain tensor is further assumed to be additively decomposed into an elastic and a plastic part: $\tens{\eps} = \tens{\eps}^e + \tens{\eps}^p$. Then, with irreversible deformations due to plastic strains, the mechanical dissipation reads:
\begin{equation}
  \label{eq:HPP_dissipation}
  \Dscr^{int}=\tens{\sigma}:\tens{\dot{\eps}}^p -\rho \drond{\psi}{\Vcb}\dot{\Vcb} \geq 0
\end{equation}
A \textit{yield condition} is defined by means of a function $f(\tens{\sigma},\Acb)$ so that the elastic domain $\Ebb$ in the forces space $(\tens{\sigma},\Acb)$ corresponds to:
\begin{equation}
  \label{eq:elastic_convex}
  \Ebb = \{ (\tens{\sigma},\Acb)\: | \: f(\tens{\sigma},\Acb) \leq 0\}
\end{equation}
According to the GSM framework \cite{GSM}, we assume the existence of a dissipation pseudo-potential $\Phi(\tens{\sigma},\Vcb)$, convex with respect to thermodynamic forces and vanishing at the origin of the $(\tens{\sigma},\Vcb)$ space. This pseudo-potential enables the derivation of the plastic \textit{flow} and \textit{hardening} rules:
\begin{subequations}
  \begin{alignat}{2}
    \label{eq:flow_rule_plast}
    \tens{\dot{\eps}}^p&=&\drond{\Phi}{f}\drond{f}{\tens{\sigma}}\\
    \label{eq:hardening_rule_plast}
    \dot{\Vcb}& = -&\drond{\Phi}{f}\drond{f}{\Acb}
  \end{alignat}
\end{subequations}
where $\drond{\Phi}{f}=\dot{p}$ is the equivalent plastic strain rate. An example of a model used for metals is the \textit{plastic $J_2$ flow theory}, in which the elastic domain is described by a yield function that depends on the deviatoric part of the Cauchy stress, $\tens{s}=\tens{\sigma}-\frac{1}{3}\tr \tens{\sigma}\tens{I}$, through its second invariant $J_2(\tens{s})=\frac{1}{2}\tens{s}:\tens{s}$.
%The elastic domain is here described by a yield function that depends on the deviatoric part of the Cauchy stress, $\tens{s}=\tens{\sigma}-\frac{1}{3}\tr \tens{\sigma}\tens{I}$, through its second invariant $J_2(\tens{s})=\frac{1}{2}\tens{s}:\tens{s}$ according to the \textit{plastic $J_2$ flow theory} generally used to model metals.
In addition, a set of internal variables and associated forces describing the plastic hardening of the material, $\{\Vcb,\Acb\}=\{\[\tens{\eps}^p,p\],\[\tens{s}-\tens{Y},-R(r)\]\}$, is used in order to define the \textit{von-Mises yield surface}:
\begin{equation}
  \label{eq:von-Mises_yield}
  f\(\tens{\sigma},\Acb \)= \sqrt{\frac{3}{2}}\norm{\tens{s}-\tens{Y}} - \(R(r)+\sigma^y\) \equiv 0
\end{equation}
where $\sigma^y$ is the tensile yield stress and $r$ is to be defined. In deviatoric stress space, the von-Mises yield surface is a circle whose center and radius are $\tens{Y}$ and $R(r)$.
These thermodynamic forces hence respectively describe the displacement of the elastic domain center due to \textit{kinematic hardening}, and the evolution of its radius due to \textit{isotropic hardening}. Setting $R(r)$ (\textit{resp.} $\tens{Y}$) to zero amounts to specializing the yield surface \eqref{eq:von-Mises_yield} to kinematic (\textit{resp. isotropic}) hardening. %The specialization of the yield surface \eqref{eq:von-Mises_yield} to kinematic or isotropic hardenings is made by respectively setting $R(r)=0$ or $\tens{Y}=\tens{0}$. % In the following, linear hardening will be considered here by setting either $\tens{Y}=\tens{0}$ for isotropic or $R(r)=0$ for kinematic hardening. Then, flow rules \eqref{eq:flow_rule_plast} and \eqref{eq:hardening_rule_plast} applied to the yield function \eqref{eq:von-Mises_yield} lead to: \begin{subequations} \begin{alignat}{1} \label{eq:plastic_strain_rate} &\tens{\dot{\eps}}^p=\dot{p}\sqrt{\frac{3}{2}}\frac{\tens{s}-\tens{Y}}{\norm{\tens{s}-\tens{Y}}}=\dot{p}\:\sqrt{\frac{3}{2}}\tens{m} \\ \label{eq:equiv_plastic_strain_rate} &\dot{r}=\dot{p} \end{alignat} \end{subequations} where $r$ is identified from \eqref{eq:equiv_plastic_strain_rate} as the equivalent plastic strain and $\tens{m}$ is referred to as the \textit{plastic flow direction}. Next, assuming a Prager-Ziegler linear kinematic hardening \cite[p.91]{Simo}, the Helmholtz free energy density takes the form: %By considering internal variables and associated forces as: $\{\Vcb,\Acb\}=\{\[\tens{\eps}^p,p\],\[\tens{s}-\tens{Y},-R(p)\]\}$, one defines the following Helmholtz free energy density: \begin{equation} \label{eq:EP_helmoltz} \rho \psi = \frac{1}{2}\tens{\eps}^e:\Cbb:\tens{\eps}^e + \frac{2}{3}C\tens{\eps}^p:\tens{\eps}^p + H(p) \end{equation} where $H(p)$ is defined so that $H''(p)=C$ is the \textit{hardening modulus}, and $\Cbb$ is the fourth-order \textit{elastic stiffness} tensor (major and minor symmetric) defined for isotropic materials as $C_{ijkl}=\lambda \delta_{ij}\delta_{kl} + \mu \(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\)$. Thermodynamic forces are finally related to internal variables by means of equation \eqref{eq:thermodynamic_forces}, that is: \begin{subequations} \begin{alignat}{1} \label{eq:eta} \tens{Y}&=\frac{2}{3}C\tens{\eps}^p \\ \label{eq:isotropic_hardening} R &=H'(p) \end{alignat} \end{subequations} We consider in what follows isothermal deformations of isotropic solids, that may be irreversible by specifying the above developments to some well-known small strain constitutive models. \begin{example}[Linear elasticity] The simplest case that is considered hereinafter does not involve irreversible deformations, and hence additional internal variables (\textit{i.e. $\tens{\eps}^p\equiv \tens{0}$}), and is referred to as linear elasticity. 
The combination of equations \eqref{eq:Cauchy_definition} and \eqref{eq:EP_helmoltz} then leads to \textit{Hooke's law}:
\begin{equation}
  \label{eq:Hooke}
  \tens{\sigma}=\Cbb:\tens{\eps}
\end{equation}
or in rate form:
\begin{equation}
  \label{eq:elastic_law}
  \tens{\dot{\sigma}}=\Cbb:\tens{\dot{\eps}}
\end{equation}
The elastic acoustic tensor is further defined as:
\begin{equation}
  \label{eq:elasticity_acoustic}
  A^{\text{elast}}_{ij}= n_k C_{kijl}n_l= \lambda n_in_j + \mu \(n_in_j + \delta_{ij}\)
\end{equation}
\end{example}
\begin{example}[Elastoplasticity]
Rate-independent plasticity or elastoplasticity is based on the assumption that admissible thermodynamic forces lie within or on the boundary of the elastic domain \eqref{eq:elastic_convex}. The equivalent plastic strain rate becomes a Lagrange multiplier in order to ensure $f(\tens{\sigma},\Acb)\leq 0$ and must obey the \textit{Kuhn-Tucker compatibility conditions}:
\begin{equation}
  \label{eq:Kuhn_Tucker}
  \dot{p} \geq 0 \quad ; \quad f \leq 0 \quad ; \quad \dot{p}f =0
\end{equation}
The equivalent plastic strain rate is determined by the \textit{consistency condition} $\dot{f}=\drond{f}{(\tens{s}-\tens{Y})}:(\tens{\dot{s}}-\tens{\dot{Y}}) + \drond{f}{R}\dot{R}=0$, which leads to:
\begin{equation}
  \sqrt{\frac{3}{2}}\tens{m}:\dot{\tens{Y}} + \dot{R} =\sqrt{\frac{3}{2}}\tens{m}:\dot{\tens{\sigma}}
\end{equation}
Then, the combination of the above equation with the elastic law $\tens{\dot{\sigma}}=\Cbb:\(\tens{\dot{\eps}}-\tens{\dot{\eps}}^p\)$ and equation \eqref{eq:plastic_strain_rate} yields:
\begin{equation}
  \label{eq:p_evolution}
  \dot{p}=\sqrt{\frac{3}{2}}\frac{2\mu}{3\mu+(C+R')}\tens{m}:\tens{\dot{\eps}}
\end{equation}
Finally, equations \eqref{eq:plastic_strain_rate} and \eqref{eq:p_evolution} can be successively introduced in the elastic law so that one gets \cite[eq (2.2.22)]{Simo}:
\begin{equation}
  \label{eq:elastoplastic_tangent}
  \tens{\dot{\sigma}}=\(\Cbb - \frac{6\mu^2}{3\mu +(C+R')}\tens{m}\otimes\tens{m} \):\tens{\dot{\eps}} = \Cbb^{ep}:\tens{\dot{\eps}}
\end{equation}
with $\Cbb^{ep}$ the \textit{elastoplastic tangent modulus}. The \textit{elastoplastic acoustic tensor} is defined as:
\begin{equation}
  \label{eq:EP_acoustic}
  A_{ij}^{ep}= n_k C^{ep}_{ikjl}n_l = A_{ij}^{elast} - \frac{6\mu^2}{3\mu +(C+R')} (n_k m_{ik})(m_{jl}n_l)
\end{equation}
which is positive-definite for positive linear hardening ($C>0 \:;\: R'>0$).
\end{example}
\begin{example}[Elasto-viscoplasticity]
Viscoplasticity or rate-dependent plasticity can be seen as a regularization of rate-independent plasticity that relaxes the condition $f(\tens{\sigma},\Acb)\leq 0$ and thus leads to admissible thermodynamic forces lying outside the elastic domain \cite[p.58]{Simo}.
%%
Viscoplasticity provides, on the other hand, an explicit definition of the equivalent plastic strain rate; for example, the Perzyna or Sokolowskii-Malvern model \cite{Perzyna} is governed by:
\begin{equation}
  \label{eq:EVP_creep_law}
  \dot{p}=\left\langle \frac{f}{\gamma}\right\rangle^n
\end{equation}
where $\left\langle\bullet\right\rangle=\frac{\bullet + \abs{\bullet}}{2}$ is the positive part function, and $n$ and $\gamma$ are parameters.
% arising from the relaxation of the condition $f\leq 0$.
%%
Hence, the plastic fluxes $\tens{\dot{\eps}}^p, \dot{\Vcb}$ are completely determined by \eqref{eq:flow_rule_plast} and \eqref{eq:hardening_rule_plast}.
%% It then comes out that rate-dependent plasticity is driven by the elastic law: \begin{equation} %\label{eq:elastic_law} \tens{\dot{\sigma}}=\Cbb:\(\tens{\dot{\eps}}-\tens{\dot{\eps}}^p\) \end{equation} in which $\tens{\dot{\eps}^p}$ is given by the combination of equations \eqref{eq:flow_rule_plast} and \eqref{eq:EVP_creep_law}, namely: \begin{equation} \label{eq:EVP_plastic_strain_rate} \tens{\dot{\eps}}^p=\left\langle \frac{f}{\gamma}\right\rangle^n\:\sqrt{\frac{3}{2}}\tens{m} \end{equation} \end{example} \subsection{The general formulation} \label{sec:general-formulation} Balance and constitutive equations obtained previously are now summarized for various classes of materials and regimes of deformation. Recall that the deformations are assumed isothermal and that history effects are considered within the infinitesimal theory only. \subsubsection*{Hyperelasticity} %The non-linear constitutive equations of elastoplasticity and hyperelasticity prevent the writing of a conservative form with the previous approach. The system of conservation laws for problems involving hyperelastic solids is composed of kinematic laws \eqref{eq:HE_kinematic} and the balance equation of Lagrangian linear momentum \eqref{eq:Lagrangian_linear_momentum}, repeated here for convenience: \begin{align} & \tens{\dot{F}} - \nablav_0 \cdot \(\vect{v}\otimes\tens{I}\) = \tens{0}\\ & \rho_0 \vect{\dot{v}} - \nablav_0 \cdot \tens{\Pi} = \rho_0 \vect{b} \end{align} Assuming a Cartesian coordinate system, this system can be written in \textit{conservative form}: \begin{equation} \label{eq:general_conservative_HE} \Ucb_t + \sum_{\alpha=1}^D \drond{\Fcb\cdot \vect{E}_\alpha}{X_\alpha} = \Scb \end{equation} where the vector of conserved quantities $\Ucb$, flux vectors $\Fcb\cdot \vect{E}_\alpha$ and the source term $\Scb$ are: \begin{equation} \label{eq:vectors_hyperelasticity} \Ucb =\matrice{\rho_0\vect{v} \\ \tens{F}} \quad ; \quad \Fcb\cdot\vect{E}_\alpha = \matrice{-\tens{\Pi}\cdot\vect{E}_\alpha\\-\vect{v}\otimes\vect{E}_\alpha } \quad; \quad \Scb = \matrice{ \rho_0\vect{b} \\ \tens{0}} \end{equation} A quasi-linear system may then be built by introducing an auxiliary vector $\Qcb=\matrice{\vect{v}\\ \tens{\Pi}}$ and using the chain rule according to \cite{Trangenstein91}: \begin{equation} \label{eq:quasi-linear_Trangenstein} \drond{\Qcb}{t} + \(\drond{\Ucb}{\Qcb}\)^{-1}\drond{\Fcb\cdot\vect{E}_\alpha}{\Qcb} \drond{\Qcb}{X_\alpha} = \(\drond{\Ucb}{\Qcb}\)^{-1}\Scb = \tilde{\Scb} \end{equation} In the quasi-linear form, the derivative of the vector of conserved quantities with respect to the auxiliary vector leads to the diagonal matrices: \begin{equation*} \drond{\Ucb}{\Qcb}=\matrice{\tens{I} & \tens{0}^3 \\ \tens{0}^3 & \drond{\tens{F}}{\tens{\Pi}}} \Rightarrow \(\drond{\Ucb}{\Qcb}\)^{-1}=\matrice{\tens{I} & \tens{0}^3 \\ \tens{0}^3 & \drond{\tens{\Pi}}{\tens{F}}} \end{equation*} where the tangent modulus $\drond{\tens{\Pi}}{\tens{F}}=\Hbb$ arises and $\tens{0}^p$ is a $p$th-order zero tensor. Moreover, the derivative of the flux vectors with respect to the auxiliary vector reads: \begin{equation} \drond{\Fcb\cdot\vect{E}_\alpha}{\Qcb}=-\matrice{\tens{0}^2 & \frac{1}{\rho_0}\tens{I}\otimes\vect{E}_\alpha \\ \tens{I}\boxtimes \vect{E}_\alpha & \tens{0}^4} \end{equation} in which the operator $\tens{I}\boxtimes\vect{E}_\alpha$ is the transpose on second and third indices of the classical tensor product, namely: $\tens{I}\boxtimes\vect{E}_\alpha=\delta_{jk} \vect{e}_j\otimes \vect{E}_\alpha\otimes\vect{e}_k$. 
Finally, the quasi-linear form associated with hyperelastic problems is: \begin{equation} \Qcb_t + \Absf^\alpha \drond{\Qcb}{X_\alpha} = \Scb \qquad \text{with: }\Absf^\alpha = -\matrice{ \tens{0}^2 & \frac{1}{\rho_0}\tens{I}\otimes\vect{E}_\alpha \\ \Hbb\cdot\vect{E}_\alpha & \tens{0}^4} \label{eq:HE_quasilinear} \end{equation} where the dependence on $\Qcb$ of matrices $\Absf^\alpha(\Qcb)$ has been omitted for simplicity. \subsubsection*{Linear elasticity and elasto-viscoplasticity} The governing equations of elasticity and elasto-viscoplasticity within the linearized geometrical framework consist of the kinematic law \eqref{eq:HPP_kinematic}, the balance equation of linear momentum \eqref{eq:HPP_linear_momentum} and the elastic law \eqref{eq:elastic_law}: \begin{align*} & \tens{\dot{\eps}}-\nablav \cdot \(\frac{\vect{v}\otimes\tens{I}+\tens{I}\otimes \vect{v}}{2}\) =\tens{0} \\ & \rho \vect{\dot{v}} - \nablav \cdot \tens{\sigma} = \rho \vect{b} \\ & \tens{\dot{\sigma}} - \Cbb :\(\tens{\dot{\eps}}-\tens{\dot{\eps}}^p\) =\tens{0} \end{align*} Combining kinematic and elastic laws and considering again a Cartesian coordinates system yields, for a homogeneous media (\textit{i.e. $\nablav \rho=\vect{0}$}), the following \textit{conservative form}: %Hence, introduction of kinematic laws in the elastic law yields, by assuming a Cartesian coordinates system and homogeneous media (\textit{i.e. $\nablav \rho=\vect{0}$}), the following \textit{conservative form}: \begin{equation} \label{eq:general_conservative} \Qcb_t + \sum_{i=1}^D \drond{\Fcb\cdot \vect{e}_i}{x_i} = \Scb \end{equation} with conserved quantities, flux and source term vectors respectively defined as: \begin{equation} \label{eq:vectors_elasticity} \Qcb =\matrice{\vect{v} \\ \tens{\sigma}} \quad ; \quad \Fcb\cdot\vect{e}_i = \matrice{-\frac{1}{\rho}\tens{\sigma}\cdot\vect{e}_i\\-\Cbb:\frac{\vect{v}\otimes\vect{e}_i +\vect{e}_i \otimes\vect{v} }{2} } \quad ; \quad \Scb = \matrice{ \vect{b} \\ -\Cbb:\tens{\dot{\eps}}^p} \end{equation} Note that here, the direct writing of the conservative form in terms of $\vect{v}$ and $\tens{\sigma}$ is made possible by the linearity of the elasticity tensor, avoiding thus the introduction of an auxiliary vector. The quasi-linear form of equation \eqref{eq:general_conservative} is derived by means of the chain rule: \begin{equation} \label{eq:HPP_quasi-linear} \Qcb_t + \Absf^i \drond{\Qcb}{x_i} = \Scb \end{equation} where: \begin{equation*} \Absf^i=\drond{\Fcb\cdot \vect{e}_i}{\Qcb}=-\matrice{\tens{0}^3 & \frac{1}{\rho}\tens{I}\otimes\vect{e}_i\\\Cbb\cdot\vect{e}_i & \tens{0}^4} \end{equation*} in which symmetries of the elastic stiffness tensor have been used. Since the elastic stiffness tensor is constant, system \eqref{eq:HPP_quasi-linear} is linear for elasticity and semi or non-linear for elasto-viscoplasticity depending on the flow rule \eqref{eq:flow_rule_plast}. Moreover, the source term arising due to the viscoplastic flow rule \eqref{eq:EVP_plastic_strain_rate} can be written in terms of a relaxation term $\bar{\Scb}$ and a relaxation time $\tau=(\gamma/\sigma^y)^n$ as $\Scb=\bar{\Scb}/\tau$ \cite{Thomas_EVP}, so that the system \eqref{eq:HPP_quasi-linear} can be identified to a \textit{relaxation system} \cite{Relaxation_syst}. In the asymptotic limit $\tau \rightarrow 0$ or in the \textit{vanishing viscosity limit}, system \eqref{eq:HPP_quasi-linear} tends to the \textit{equilibrium system} corresponding to elastoplasticity \cite{Thomas_EVP}. 
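To make the structure of these quasi-linear operators concrete, the sketch below is an added illustration (it is not part of the original text, and the steel-like material values are assumed): it assembles the matrix $\Absf$ of the linear elastic system in one-dimensional strain and recovers the characteristic speeds $\pm\sqrt{(\lambda+2\mu)/\rho}$ as its eigenvalues.
\begin{verbatim}
# Illustrative sketch (assumed steel-like values): Jacobian matrix of the
# one-dimensional linear elastic quasi-linear system and its wave speeds.
import numpy as np

rho = 7800.0                  # mass density [kg/m^3]
lam, mu = 1.15e11, 7.7e10     # Lame parameters [Pa]

# Quasi-linear form Q_t + A dQ/dx = S with Q = [v, sigma]:
#   dv/dt     - (1/rho)      dsigma/dx = b
#   dsigma/dt - (lam + 2*mu) dv/dx     = 0
A = -np.array([[0.0, 1.0 / rho],
               [lam + 2.0 * mu, 0.0]])

eigenvalues = np.linalg.eigvals(A)
# The eigenvalues are +/- sqrt((lam + 2*mu)/rho), the elastic pressure
# wave speeds, as expected from the acoustic tensor analysis.
print(sorted(eigenvalues.real), np.sqrt((lam + 2.0 * mu) / rho))
\end{verbatim}
The same construction applies to the hyperelastic system \eqref{eq:HE_quasilinear}, with the tangent modulus $\Hbb$ evaluated at the current state in place of the constant elastic stiffness.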
\subsubsection*{Elastoplasticity} The writing of a conservative form for elastoplasticity is similar to what was done for hyperelastic solids. Indeed, the system composed of kinematic laws \eqref{eq:HPP_kinematic} and the balance equation of linear momentum \eqref{eq:HPP_linear_momentum}: %The system of conservation laws for non-linear problems are based on kinematic laws \eqref{eq:HPP_kinematic} so that a conservative form \eqref{eq:general_conservative} is also written: \begin{align*} & \tens{\dot{\eps}}-\nablav \cdot \(\frac{\vect{v}\otimes\tens{I}+\tens{I}\otimes \vect{v}}{2}\) =\tens{0} \\ & \rho \vect{\dot{v}} - \nablav \cdot \tens{\sigma} = \rho \vect{b} \end{align*} can be written as: \begin{equation} \label{eq:general_conservative_EP} \Ucb_t + \sum_{i=1}^D \drond{\Fcb\cdot \vect{e}_i}{x_i} = \Scb \end{equation} where the conserved quantities, flux and source term vectors are: \begin{equation} \label{eq:vectors_plasticity} \Ucb =\matrice{\vect{v} \\ \tens{\eps}} \quad ; \quad \Fcb\cdot\vect{e}_i = \matrice{-\frac{1}{\rho}\tens{\sigma}\cdot\vect{e}_i\\-\frac{\vect{v}\otimes\vect{e}_i +\vect{e}_i \otimes\vect{v} }{2} } \quad ; \quad \Scb = \matrice{ \vect{b} \\\tens{0}} \end{equation} Analogously to hyperelasticity, a quasi-linear form involving the elastoplastic tangent modulus \eqref{eq:elastoplastic_tangent} is derived by means of the auxiliary vector $\Qcb=\matrice{\vect{v}\\ \tens{\sigma}}$ and the chain rule: \begin{equation} \Qcb_t + \Absf^i \drond{\Qcb}{x_i} = \Scb \qquad \text{with: }\Absf^i = -\matrice{\tens{0}^2 & \frac{1}{\rho}\tens{I}\otimes\vect{e}_i\\ \Cbb^{ep}\cdot \vect{e}_i & \tens{0}^4} \label{eq:EP_quasilinear} \end{equation} %%% Local Variables: %%% mode: latex %%% ispell-local-dictionary: "american" %%% TeX-master: "../mainManuscript" %%% End:
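As a complement, the elastoplastic tangent modulus \eqref{eq:elastoplastic_tangent} and the associated acoustic tensor \eqref{eq:EP_acoustic} can also be evaluated numerically. The sketch below is an added illustration and not part of the original text: the Lam\'e and hardening moduli are assumed values, and the flow direction is an arbitrary example.
\begin{verbatim}
# Hedged sketch (assumed material values): elastoplastic tangent modulus
# C_ep = C - 6*mu^2/(3*mu + C_kin + R') m x m and its acoustic tensor.
import numpy as np

lam, mu = 1.15e11, 7.7e10        # Lame parameters [Pa]
C_kin, R_prime = 1.0e9, 1.0e9    # linear kinematic/isotropic hardening moduli

I = np.eye(3)
# Elastic stiffness C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
C_el = (lam * np.einsum('ij,kl->ijkl', I, I)
        + mu * (np.einsum('ik,jl->ijkl', I, I)
                + np.einsum('il,jk->ijkl', I, I)))

# Unit deviatoric plastic flow direction m = (s - Y)/||s - Y|| (example)
m = np.diag([2.0, -1.0, -1.0])
m /= np.linalg.norm(m)

C_ep = C_el - 6.0 * mu**2 / (3.0 * mu + C_kin + R_prime) \
        * np.einsum('ij,kl->ijkl', m, m)

# Acoustic tensor A_ij = n_k C_ep_ikjl n_l for a propagation direction n
n = np.array([1.0, 0.0, 0.0])
A = np.einsum('k,ikjl,l->ij', n, C_ep, n)
print(np.linalg.eigvalsh(A))     # strictly positive for positive hardening
\end{verbatim}
For positive hardening moduli the three eigenvalues remain strictly positive, which is the positive definiteness property stated after equation \eqref{eq:EP_acoustic}; letting the hardening moduli tend to zero shows how plastic flow reduces one of the characteristic wave speeds.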
{ "alphanum_fraction": 0.7178474234, "avg_line_length": 81.2192513369, "ext": "tex", "hexsha": "c5aa0d704c3d9fe5f416307badedf836a412e4fd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adRenaud/research", "max_forks_repo_path": "manuscript/chapter2/conservationLaws.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_issues_repo_issues_event_max_datetime": "2019-01-07T13:11:11.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-07T13:11:11.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adRenaud/research", "max_issues_repo_path": "manuscript/chapter2/conservationLaws.tex", "max_line_length": 1074, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adRenaud/research", "max_stars_repo_path": "manuscript/chapter2/conservationLaws.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-18T14:52:03.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-18T14:52:03.000Z", "num_tokens": 15040, "size": 45564 }
\documentclass[9pt]{extarticle} \usepackage{amsmath} \usepackage{booktabs} \usepackage{bytefield} \usepackage{caption} \usepackage{endnotes} \usepackage{fancyvrb} \usepackage{float} \usepackage{longtable} \usepackage{minibox} \usepackage{register} \usepackage{standalone} \usepackage{swiftnav} \usepackage{tabularx} \usepackage{tocloft} \usepackage{setspace} \usepackage{pbox} \usepackage{soul} \usepackage{hyperref} \usepackage{ltxtable} \usepackage{caption} \hypersetup{bookmarks,bookmarksopen,bookmarksdepth=4} \setlength{\regWidth}{0.4\textwidth} \floatstyle{plain} \newfloat{field}{h}{fld} \floatname{field}{Field} \numberwithin{table}{subsection} \numberwithin{field}{subsection} \renewcommand{\version}{(((version)))} \renewcommand{\thesubsubsection}{\hspace{-0.45cm}} \newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \renewcommand{\regLabelFamily}{} \cftsetindents{section}{0.5in}{0.5in} \cftsetindents{subsection}{0.5in}{0.5in} %%\setlength\cftparskip{-1.2pt} \setlength{\cftbeforetoctitleskip}{-1em} \setlength\cftbeforesecskip{1.3pt} \setlength\cftaftertoctitleskip{2pt} \renewcommand{\cftsecafterpnum}{\hspace*{7.5em}} \renewcommand{\cftsubsecafterpnum}{\hspace*{7.5em}} \renewcommand\tableofcontents{\@starttoc{toc}} \newcolumntype{a}{>{\hsize=.2\hsize}X} \newcolumntype{b}{>{\hsize=.22\hsize}X} \newcolumntype{c}{>{\hsize=.3\hsize}X} \newcolumntype{d}{>{\hsize=.7\hsize}X} \newcolumntype{e}{>{\hsize=.13\hsize}X} \newcolumntype{f}{>{\hsize=.16\hsize}X} \newcolumntype{g}{>{\hsize=.77\hsize}X} \newcolumntype{h}{>{\hsize=.6\hsize}X} % Shell out to git to get the most recent tag and pass it to the LateX % job name. Hopefully this doesn't screw with things. \immediate\write18{git describe --abbrev=0 --tags | cut -c 2-5 > \jobname.info } \title{Swift Navigation Binary Protocol} \author{Swift Navigation} \mysubtitle{Protocol Specification \version} \date{\today} \begin{document} \maketitle \begin{normalsize} \setcounter{tocdepth}{2} \begin{centering} \tableofcontents \end{centering} \end{normalsize} \thispagestyle{firstpage} \bigskip \bigskip \begin{large} \section{Overview} \label{sec:Overview} The Swift Navigation Binary Protocol (SBP) is a fast, simple, and minimal binary protocol for communicating with Swift devices. It is the native binary protocol used by the Piksi GPS receiver to transmit solutions, observations, status, and debugging messages, as well as receive messages from the host operating system, such as differential corrections and the almanac. As such, it is an important interface with your Piksi receiver and the primary integration method with other systems. This document provides a specification of SBP framing and the payload structures of the messages currently used with Swift devices. SBP client libraries in a variety of programming languages are available at~\url{https://github.com/swift-nav/libsbp} and support information for sbp is available at~\url{https://support.swiftnav.com/customer/en/portal/articles/2492810-swift-binary-protocol}. \end{large} \newpage \section{Message Framing Structure} \label{sec:Message} \begin{large} SBP consists of two pieces: \begin{itemize} \item an over-the-wire message framing format \item structured payload definitions \end{itemize} As of Version~\version, the frame consists of a 6-byte binary header section, a variable-sized payload field, and a 16-bit CRC value. All multibyte values are ordered in \textbf{little-endian} format. 
SBP uses the CCITT CRC16 (XMODEM implementation) for error detection\footnote{CCITT 16-bit CRC Implementation uses parameters used by XMODEM, i.e. the polynomial: $x^{16} + x^{12} + x^5 + 1$. For more details, please see the implementation at~\url{https://github.com/swift-nav/libsbp/blob/master/c/src/edc.c\#L59}. See also \emph{A Painless Guide to CRC Error Detection Algorithms} at~\url{http://www.ross.net/crc/download/crc_v3.txt}}. \end{large} \begin{table}[h] \centering \begin{tabularx}{\textwidth}{aaaX} \toprule Offset (bytes) & Size (bytes) & Name & Description \\ \midrule $\mathtt{0}$ & $\mathtt{1}$ & {Preamble} & Denotes the start of frame transmission. Always 0x55. \\ $\mathtt{1}$ & $\mathtt{2}$ & {Message Type} & Identifies the payload contents. \\ $\mathtt{3}$ & $\mathtt{2}$ & {Sender} & \hangindent=0.5em{A unique identifier of the sender. On the Piksi, this is set to the 2 least significant bytes of the device serial number. A stream of SBP messages may also include sender IDs for forwarded messages. By default, clients of `libsbp` use a sender id value of `0x42`. Sender id '0x42' is used to represent device controllers such as the Piksi Console.} \\ $\mathtt{5}$ & $\mathtt{1}$ & {Length} & Length (bytes) of the {Payload} field. \\ $\mathtt{6}$ & $\mathtt{N}$ & {Payload} & Binary message contents. \\ $\mathtt{N+6}$ & $\mathtt{2}$ & {CRC} & \hangindent=0.5em{Cyclic Redundancy Check of the frame's binary data from the Message Type up to the end of Payload (does not include the Preamble).} \\ \midrule & $\mathtt{N+8}$ & &Total Frame Length \\ \bottomrule \end{tabularx} \caption{Swift Binary Protocol message structure. $\mathtt{N}$ denotes a variable-length size.} \label{tab:message} \end{table} \section{NMEA-0183} \label{sec:NMEA} \begin{large} Swift devices, such as the Piksi, also have limited support for the standard NMEA-0183 protocol. Note that NMEA-0183 doesn't define standardized message string equivalents for many important SBP messages such as observations, baselines and ephemerides. For this reason it is strongly recommended to use SBP for new development. NMEA-0183 output is provided primarily to support legacy devices. \end{large} \newpage \section{Basic Formats and Payload Structure} \label{sec:Payload} \begin{large} The binary payload of an SBP message decodes into structured data based on the message type defined in the header. SBP uses several primitive numerical and collection types for defining payload contents. \end{large} \begin{table}[h] \centering \begin{tabularx}{\textwidth}{aaX} \toprule Name & Size (bytes) & Description \\ \midrule ((*- for t in prims *)) (((t.identifier))) & (((t.identifier | getsize))) & \hangindent=0.5em{(((t.desc)))} \\ ((*- endfor *)) \bottomrule \end{tabularx} \caption{SBP primitive types} \label{tab:types} \end{table} \hspace{-5em} \subsubsection*{Example Message} \begin{large} \par As an example, consider this framed series of bytes read from a serial port: \begin{verbatim} 55 0b 02 cc 04 14 70 3d d0 18 cf ef ff ff ef e8 ff ff f0 18 00 00 00 00 05 00 15 dc \end{verbatim} This byte array decodes into a \texttt{MSG\_BASELINE\_ECEF} (see pg.~\pageref{sec:MSG_BASELINE_ECEF}), which reports the baseline position solution of the rover receiver relative to the base station receiver in Earth Centered Earth Fixed (ECEF) coordinates. 
The segments of this byte array and its contents break down as follows:
\end{large}
\begin{table}[h]
  \centering
  \begin{tabular}{llrl}
    \toprule
    Field Name & Type & Value & Bytestring Segment\\
    \midrule
    {Preamble} & u8 & 0x55 & \verb!55! \\
    {Message Type}& u16 & \texttt{MSG\_BASELINE\_ECEF} & \verb!0b 02! \\
    {Sender}& u16 & 1228 & \verb!cc 04! \\
    {Length}& u8 & 20 & \verb!14! \\
    {Payload}& & --- & \verb!70 3d d0 18 cf ef ff ff ef e8 ff ff! \\
    & & & \verb!f0 18 00 00 00 00 05 00! \\
    \quad~\texttt{MSG\_BASELINE\_ECEF} & & & \\
    \quad~.tow & u32 & $416300400~\textrm{msec}$ & \verb!70 3d d0 18! \\
    \quad~.x & s32 & $-4145~\textrm{mm}$ & \verb!cf ef ff ff! \\
    \quad~.y & s32 & $-5905~\textrm{mm}$ & \verb!ef e8 ff ff! \\
    \quad~.z & s32 & $6384~\textrm{mm}$ & \verb!f0 18 00 00! \\
    \quad~.accuracy & u16 & 0 & \verb!00 00! \\
    \quad~.nsats & u8 & 5 & \verb!05! \\
    \quad~.flags & u8 & 0 & \verb!00! \\
    {CRC} & u16 & 0xdc15 & \verb!15 dc! \\
    \bottomrule
  \end{tabular}
  \caption{SBP breakdown for \texttt{MSG\_BASELINE\_ECEF}}
  \label{tab:example_breakdown}
\end{table}
((* block messages_table *))
((* endblock *))
((* block messages_desc *))
((* endblock *))
\end{document}
{ "alphanum_fraction": 0.7165143205, "avg_line_length": 35.0641025641, "ext": "tex", "hexsha": "2ad30217eb23988ad355a13e57423d76b82bf291", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "33f82210ff1262f8d6c180215277a0bb5eb3b65c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adammacudzinski/libsbp", "max_forks_repo_path": "generator/sbpg/targets/resources/sbp_base.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33f82210ff1262f8d6c180215277a0bb5eb3b65c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adammacudzinski/libsbp", "max_issues_repo_path": "generator/sbpg/targets/resources/sbp_base.tex", "max_line_length": 416, "max_stars_count": null, "max_stars_repo_head_hexsha": "33f82210ff1262f8d6c180215277a0bb5eb3b65c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adammacudzinski/libsbp", "max_stars_repo_path": "generator/sbpg/targets/resources/sbp_base.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2676, "size": 8205 }
\documentclass[a4paper,10pt]{report} \usepackage{color} \usepackage{graphicx} \usepackage{amsmath} \usepackage{makeidx} \usepackage{nomencl} \usepackage[margin=10pt,font=small,labelfont=bf,labelsep=newline,singlelinecheck=false]{caption} \usepackage[german,polutonikogreek,english]{babel} \usepackage{german} \usepackage{ifthen} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\TITLE}{The ``complete'' I18n4Java manual} \newcommand{\SUBTITLE}{Users, Administrators and Developers Guide to I18n4Java} \newcommand{\AUTHOR}{Rick-Rainer Ludwig} \newcommand{\DATE}{\today} \newcommand{\VERSION}{1.0.0} \newcommand{\COMPANY}{PureSol Technologies} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newenvironment{definition}[2] { \index{#1} \vspace{1ex} Definition \textbf{#1}: \begin{flushright} \begin{minipage}{0.9\textwidth} #2 \end{minipage} \end{flushright} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \bibliographystyle{unsrt} %\renewcommand{\nomname}{\chapter{Glossary and Register}} \makenomenclature %\renewcommand{\indexname}{\chapter{Index}} \makeindex \newenvironment{story}[1]{ { \sffamily\small \list{} { \leftmargin1cm \rightmargin1cm } \item\relax\textit{#1} \endlist } } \newenvironment{person}[1] {\textsc{#1}\index{#1}} \newenvironment{term}[1] {\textit{#1}\index{#1}} \renewenvironment{quote}[3] { \index{#2} \begin{flushright} \begin{minipage}{0.75\textwidth} \sffamily \small \begin{flushright} \textit{``#1''}\\-- \person{#2} \ifthenelse{\equal{#3}{}} {} {#3} \end{flushright} \end{minipage} \end{flushright} } %%%%%%%%%%%%%%%%% %%% Farben... %%% \definecolor{red}{rgb}{1.0,0.0,0.0} \definecolor{blue}{rgb}{0.0,0.0,1.0} \definecolor{green}{rgb}{0.0,1.0,0.0} %%%%%%%%%%%%%%%%% \begin{document} \pagestyle{headings} \newcommand{\bs}{\textbackslash} \begin{titlepage} { \centering \textbf{\Huge \TITLE}\\[1cm] \textbf{\Large \SUBTITLE}\\[2cm] \textbf{\AUTHOR}\\ \textbf{(c) by \COMPANY}\\[1cm] \textbf{version: \VERSION}\\ \textbf{date: \DATE}\\[1cm] } This is the ``official'' manual for KickStart. \end{titlepage} \tableofcontents \listoffigures \listoftables \chapter{About I18n4Java} I18n4Java is a new approach to do internationalization (in short i18n) in Java. The standard approach by using property files to store strings in specified keys and to change the files for different language output has a serious flaws: \textbf{The original message is not shown.} It's wise to see the string in the source code. With the whole string in sight, one knows the context, the size and the message format and the needed parameters and their types. I18n4Java tries to make this right. All original string in the original development language are written within the source. The Java MessageFormat facilities are used to parameterize the string with localization already included. Translators get the whole freedom with this approach to translate all string within the right context and with the possibility to reorder the parameters to get a grammatically correct translation in any language. The MessageFormat approach could be used before with the properties file approach, too, but the string were not cleanly located within the source code and it was hard and time consuming to get all information about the string to be displayed to know the size of the output, the number and format of all parameters and also the position and the context within the application. I18n4Java solves these issues. 
\chapter{Installation} The library \texttt{i18n4java.jar} is just copied to an appropriate position and the path is added to \texttt{CLASSPATH}. Afterwards I18n4Java can be used. \chapter{Configuration} A file \texttt{i18n4java.properties} is put into the root directory of the source project. Within the files some keys are defined to specify the behavior of all I18N4Java related processing tools. \chapter{Use in Source Code} As soon as the library of I18n4Java is included in \texttt{CLASSPATH} a translator can be get by a simple call of \texttt{private static final Translator translator =} \texttt{Translator.getTranslator(<class>.class);} Translations can be retreived by \texttt{translator.i18n(``<message with parameters>'',} \texttt{<param1>, <param2>...);} That's almost all of it! The rest can be done graphically with I18nLinguist, which is the tool to scan the sources, to do the translations and to release the translations into language files. The Translator automatically translates into the language which is administrated in the underlaying OS. With {\center \texttt{Translator.setDefault(<locale>);} } the current language can be changed. % GLOSSARY \nomenclature{\textbf{ASCII}\index{ASCII}}{American Standard Code for Information Interchange - Code table for a number to character mapping} \nomenclature{\textbf{Design Patterns}\index{Design Patterns}}{Design Patterns (in german: Entwurfsmuster) are generalized approaches to repeating programming issues. For the most issues there is a fitting approach to solve the issue with a generalized design. Design patterns are well known with generalized names and software which implements and documents these patterns are very easy mainanable. Other programmers have a good chance to understand the code and the design of the software in short time. (see: \cite{book:head_first_design_patterns,book:design_patterns})} \nomenclature{\textbf{Refactoring}\textbf{Refactoring}}{Refactoring is the process of permanent code improvement even after implementing functionality. This process is essential to keep the software code in a clean state for later use. Without Refactoring the code will get out of shape and the implementation process for new functionallity will become more expensive. Therefore, Refactoring is a way to keep code maintanable. (see: \cite{book:the_pragmatic_programmer,book:refactoring})} \printnomenclature % <-- new version since 4.1! \bibliography{refs/books} \printindex \end{document}
{ "alphanum_fraction": 0.7313262815, "avg_line_length": 35.726744186, "ext": "tex", "hexsha": "5ebc83053113294b24cb4b03260598797bf8a5d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b5eb1b22147f36e007776101a27322e29b344379", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "PureSolTechnologies/i18n4java", "max_forks_repo_path": "man/manual.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b5eb1b22147f36e007776101a27322e29b344379", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "PureSolTechnologies/i18n4java", "max_issues_repo_path": "man/manual.tex", "max_line_length": 849, "max_stars_count": null, "max_stars_repo_head_hexsha": "b5eb1b22147f36e007776101a27322e29b344379", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "PureSolTechnologies/i18n4java", "max_stars_repo_path": "man/manual.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1619, "size": 6145 }
\section{Methodologies}
\label{Metho}
This section presents the methodologies used to accomplish the project goals. Further details on the Sentiment Analysis techniques can be found in \cite{Aggarwal}.
\subsection{Data collection and preprocessing}
First of all, the datasets were loaded from CSV format into a Python Notebook. Since articles and comments from different months were stored in different files, they were merged into two Pandas {\tt DataFrames}. After that, all the features considered unnecessary for the project were removed.
The next step was to preprocess the comment bodies in order to prepare them for sentiment analysis. The preprocessing tasks included punctuation removal, case lowering, stop-word removal and stemming. The latter two were performed using methods from the NLTK package \cite{NLTK}. In particular, stemming was performed via a Porter Stemmer. Furthermore, all articles without any headline, section name and desk name were removed.
\subsection{Sentiment Calculation}
Once the comments were ready to be processed, a lexicon-based method was used to calculate the polarity of each word in the comments. In particular, the AFINN lexicon \cite{AFINN} was used. This lexicon consists of a list of more than 3000 words, each associated with an integer score between -5 and 5. The score for each comment $c$, from now on labelled as CS, was calculated with the following formula \cite{Guardian}:
\begin{equation}
    CS(c) = \frac{\sum_{w} S(w)}{\sqrt{N+1}}
\end{equation}
where $S(w)$ are the single-word scores, and $N$ is the total number of words in a comment. The definitive score for a comment, named \textit{popularity}, was the sum of CS and a normalised count of the recommendations each comment received. Furthermore, the score was doubled if the variable {\tt editorsSelection} was set to 1, meaning that the comment had been appreciated by the New York Times editors:
\begin{equation}
\label{Popc}
 Pop (c) = (1 + \mbox{editorsSelection} (c) ) \left( CS(c) + \frac{\mbox{Recomm} (c)}{\max_c{\mbox{Recomm}(c)}} \right)
\end{equation}
The score for each article (AS) was subsequently computed, after grouping comments by the article $a$ they belonged to, through the formula:
\begin{equation}
\label{ASeq}
    AS(a) = \frac{\sum_{c\in a} Pop(c)}{\sqrt{M(a)+1}}
\end{equation}
where $Pop(c)$ is the comment popularity defined in equation \ref{Popc} and $M(a)$ is the number of comments related to a given article $a$.
\subsection{Evaluation strategies}
The index of controversy for each article needed to take two factors into account: the popularity of the comments, and whether there was any form of debate between comments from different users. The former could be estimated naively as the number of comments under each article; the latter was evaluated as the interquartile range (IQR) of the $Pop(c)$ distribution for each article. This choice was made because the IQR is an indicator of statistical dispersion. Thus, an article with opposing opinions can be expected to present two populated regions at the extremes of the distribution.
The controversy index was hence defined as
\begin{equation}
\label{Str1}
    \mbox{Contr}_1(a) = \mbox{IQR}(a) \cdot M(a)
\end{equation}
The results obtained with this strategy (see Section \ref{Resul}) suggested implementing a second strategy, applying the natural logarithm to the number of comments, to obtain
\begin{equation}
\label{Str2}
    \mbox{Contr}_2(a) = \mbox{IQR}(a) \cdot \log{M(a)}
\end{equation}
After defining and calculating the index of controversy, articles were grouped by different categories, such as section name, new desk and author, in order to see whether some categories were more controversial than others. All the results from these groupings are reported in Sec. \ref{Resul}.
Lastly, the keywords from comments of the most controversial articles were extracted by means of a TF-IDF vectorizer and a $\chi^2$ selector. Among the best features obtained, only those present in the AFINN dictionary were kept.
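The pipeline described above was implemented in Python with Pandas and NLTK. The following sketch is a simplified illustration of the scoring and controversy formulas only: the column names, the toy comments and the tiny stand-in for the AFINN lexicon are assumptions made for the example and do not reproduce the project's actual code.
\begin{verbatim}
# Hedged sketch of the scoring pipeline (toy data, assumed column names).
import numpy as np
import pandas as pd

lexicon = {"good": 3, "bad": -3, "awful": -4, "great": 3}   # stand-in for AFINN

def comment_score(text):                      # CS(c)
    words = text.lower().split()
    return sum(lexicon.get(w, 0) for w in words) / np.sqrt(len(words) + 1)

comments = pd.DataFrame({
    "articleID":        ["a1", "a1", "a1", "a2"],
    "commentBody":      ["great article", "awful take really bad",
                         "good point", "bad"],
    "recommendations":  [10, 2, 0, 5],
    "editorsSelection": [0, 1, 0, 0],
})

cs = comments["commentBody"].apply(comment_score)
rec = comments["recommendations"] / comments["recommendations"].max()
comments["pop"] = (1 + comments["editorsSelection"]) * (cs + rec)   # Pop(c)

def article_stats(group):
    m = len(group)
    iqr = group["pop"].quantile(0.75) - group["pop"].quantile(0.25)
    return pd.Series({"AS": group["pop"].sum() / np.sqrt(m + 1),    # AS(a)
                      "Contr1": iqr * m,                            # IQR * M
                      "Contr2": iqr * np.log(m)})                   # IQR * log M

print(comments.groupby("articleID").apply(article_stats))
\end{verbatim}
Note that the stemming and stop-word removal steps are omitted here; in the actual pipeline they are applied to the comment bodies before the lexicon lookup.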
{ "alphanum_fraction": 0.7803087826, "avg_line_length": 82.3125, "ext": "tex", "hexsha": "4f778b71d5a6291d05c88a75be241d68e22b6c19", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3e9eb3da22f815d3179f10ff98b9887a2a4f1360", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "andreasala98/NYMines", "max_forks_repo_path": "Report/Meth.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3e9eb3da22f815d3179f10ff98b9887a2a4f1360", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "andreasala98/NYMines", "max_issues_repo_path": "Report/Meth.tex", "max_line_length": 582, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e9eb3da22f815d3179f10ff98b9887a2a4f1360", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "andreasala98/NYMines", "max_stars_repo_path": "Report/Meth.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 945, "size": 3951 }
\subsection{Object Detection} \subsubsection{Helmet Detection} \noindent \textbf{1. Architecture} Helmet-Off Detection system contains the cameras and AI edge nodes connected to the edge-cloud, which do not require equipment rooms. There is at least one edge node for each of the five sites C1, C2, C3, S1, and S2. Cameras in each site are all connected to the edge stations via WiFi under a star network topology. Different from edge nodes and the edge cloud, cameras do not have computation power and are merely used to collect data. The simulation will be conducted according to the above setting to better simulate the real-world helmet-detection applications on the edge cloud. \noindent \textbf{2. Dataset and Related Settings} Among all the cameras, we take 49 cameras from all five sites as the training set. The rest 8 cameras are used as the test set, i.e., two cameras (S1-1, S1-2) from S1 and six cameras (S2-1, S2-2, S2-3, S2-4, S2-5, S2-6) from S2. That is, the ratio between the on-cloud training set and the on-edge test set is 9:1. \noindent \textbf{3. Scheme-wise Metrics} Current the helmet detection scenario is supported by metrics of Edge-cloud synergy inference, and lifelong learning. \noindent \textbf{4. Contextual Metrics} In the current helmet detection task, we focus on the detection capability of the helmet-off object. Thus, we measure the Recall, Precision, and F1-score to determine the Helmet-off detection capability of the model. \begin{equation} \label{equ:recall} Recall = \frac{TP}{TP + FN}, \end{equation} \begin{equation} \label{equ:precision} Precision = \frac{TP}{TP + FP}, \end{equation} \begin{equation} \label{equ:f1} F1-score = \frac{ 2 } {\frac{1}{Recall} + \frac{1}{Precision}}, \end{equation} where $TP$, $FN$ and $FP$ denote the number of true positive, false negative and false positive results in the whole dataset. $Precision$ is implied as to the measure of the correctly identified positive cases from all the predicted positive cases. Thus, this metric is useful when the costs of \textit{False Positives} are high. $Recall$ is the measure of the correctly identified positive cases from all the actual positive cases. This metric is important when the cost of \textit{False Negatives} is high. $F1-score$ is the harmonic mean of Precision and Recall and gives a better measure of the incorrectly classified cases than the Accuracy Metric. \noindent \textbf{5. Features} Helmet detection data are of unstructured data. Features are extracted with Yolov3. See base models below. \noindent \textbf{6. Baselines and Hyperparameters} We leverage two models, including the deeper model of Darknet53-YOLOv3 and the shallower model of Resnet18-YOLOv3. \noindent \textbf{7. Hardware} Edge nodes are all equipped with a 1.6GHz CPU with 4GB RAM and 8TOPS @FP16 NPU with 8GB Memory. A public edge-cloud instance is adopted with a Pascal P100 GPU. \subsection{Classification} \subsubsection{Thermal Comfort Prediction} \noindent \textbf{1. Architecture} There are 99 cities of 28 countries, each of which are equipped with an edge node. Edge nodes in each site are all connected to the cloud under a star network topology. \noindent \textbf{2. Dataset and Related Settings} ASHRAE RP-884 database (ATC I) was originally collected to develop De Dear's adaptive comfort model. 25,248 observation sets from 160 buildings in 24 cities of nine countries in different times of a day covering seasons in the whole year, involving more than 70 variables~\cite{de97,arp18}. 
Recognizing the value of open-source research databases in advancing the art and science of HVAC, in 2014 the ASHRAE Global Thermal Comfort Database II (ATCII) project was launched under the leadership of University of California and The University of Sydney~\cite{licina18}. The exercise began with a systematic collection and harmonization of raw data from the last two decades of thermal comfort field studies around the world. The final database is comprised of field studies conducted between 1995 and 2015 from around the world, with contributors releasing their raw data to the project for wider dissemination to the thermal comfort research community. After the quality-assurance process, there was a total of 81,846 rows of data of paired subjective comfort votes and objective instrumental measurements of thermal comfort parameters. An additional 25,617 rows of data from the original ATCI database are included, bringing the total number of entries to 107,463.~\footnote{http://www.comfortdatabase.com/} \noindent \textbf{3. Scheme-wise Metrics} Thermal comfort supports metrics of lifelong learning, federated learning. \noindent \textbf{Federated Learning} \begin{itemize} \item Objective: Metric \ref{equ:aoaoc}, \ref{equ:voaoc}, \item Constaint: Metric \ref{equ:cv} \end{itemize} \noindent \textbf{4. Contextual Metrics} To capture the predictive capabilities of our approach, we use the following metrics: Accuracy for evaluation. \noindent \begin{equation*} \begin{split} &ACC = TP / N; \\ \end{split} \end{equation*} where $TP$ denotes the number of true positive and $N$ denotes the number of testing data. \noindent \textbf{5. Features} We categorize the data with the variables that can be used as raw features and the variables that can be used for prediction. For raw features, we have basic identifiers, e.g., age and sex. These are personalized data, yet it is easier to collect these data. There is information on the physical environment, which can be collected by building automation systems. In this paper, we conduct prediction on direct comfort metrics of thermal sensation and preference which are for the general thermal comfort process, while the unselected ones are for adaptive thermal comfort process and we leave the related investigation as future work. The important feature items include age, sex, taav, trav, velav, rh, et, set, tsens, pcc\_ag, day15\_ta, day06\_ta, and dayav\_et. \noindent \textbf{6. Baselines and Hyperparameters} The basic model is XGBoost. \noindent \textbf{7. Hardware} A cloud instance is used to run the benchmark, which has 12 cores, each with 3.20GHz, and a total memory of 32G. \vspace{0.2cm} \noindent \subsubsection{Defect Detection} \noindent \textbf{1. Architecture} One edge node connects to the cloud. \noindent \textbf{2. Dataset and Related Settings} In the evaluation, we chronologically order our three-month dataset and use a hold-out evaluation with a ratio of 10:1:1, i.e., the first 10-week data for training, 1-week data for evaluation and the remaining 1-week for testing. \noindent \textbf{3. Scheme-wise Metrics} Support metrics of Lifelong Learning. 
\vspace{0.2cm} \noindent \textbf{Common} \begin{itemize} \item Objective: - \item Constaint: Metric \ref{equ:ed} \begin{itemize} \item 500 fps, i.e., 500 images per second \end{itemize} \end{itemize} \vspace{0.2cm} \noindent \textbf{Lifelong Learning} \begin{itemize} \item Objective: Metric \ref{equ:aoma} \ref{equ:aombt} \ref{equ:aombt} \item Constaint: Metric \ref{equ:mseff}, \ref{equ:ssse} \end{itemize} \noindent \textbf{4. Contextual Metrics} To capture the predictive capabilities of our approach, we use the following metrics: Accuracy for evaluation. \noindent \begin{equation*} \begin{split} &ACC = TP / N; \\ \end{split} \end{equation*} where $TP$ denotes the number of true positive and $N$ denotes the number of testing data. \noindent \textbf{5. Features} Defect detection data are of unstructured data. Features are extracted with CNN. See base models below. \noindent \textbf{6. Baselines and Hyperparameters} The base model is CNN. \noindent \textbf{7. Hardware} All our experiments are conducted in a private cloud with 16 cores of 2.6GHz CPU and 64G memory. \subsection{Regression} \subsubsection{Coke Quality Prediction} \noindent \textbf{1. Architecture} One edge node connects to the cloud. \noindent \textbf{2. Dataset and Related Settings} Coke Production Dataset (CPD) is used to predict coke quality based on certain coal blending proportions. The dataset used in this study is collected from a factory for 6 months. With 108 samples of different coal blending combinations in total, the ratio between the training and testing is set as 8:2 and 22 samples are set as testing samples. In the training part for task forest, 25$\%$ of the training data are used for validation. \noindent \textbf{3. Scheme-wise Metrics} Coke prediction supports metrics of lifelong learning, federated learning. \vspace{0.2cm} \noindent \textbf{Common} \begin{itemize} \item Objective: - \item Constaint: Metric \ref{equ:ed} \begin{itemize} \item 6ms per sample \item 100 samples in one batch, i.e., 0.6s per batch \end{itemize} \end{itemize} \vspace{0.2cm} \noindent \textbf{Federated Learning} \begin{itemize} \item Objective: Metric \ref{equ:aoaoc}, \ref{equ:voaoc} \item Constaint: Metric \ref{equ:cv} \end{itemize} \noindent \textbf{4. Contextual Metrics} Predicting the continuous, scalar value of CSR is basically a regression problem, directly leading to our selection of the metrics of Mean Absolute Error (MAE) and Mean Squared Error (MSE). Also, the error rate metric is used to measure the relative bias of predictions. To avoid moderate predictions, matching the prediction vectors with the ground-truth vectors will end up with better predictions. So to capture the trend tracking capability of the proposed approach, the Pearson Correlation Coefficient (PCCS), which is a common coefficient for measuring the covariance of two variables is used. 
\begin{equation} \label{equ:mae} MAE = \frac{1}{J} \sum^J_{j=1} \frac{\sum_{i=1}^N |\hat{y}^{j}_{i} - y^{j}_{i}|}{N_j}, \end{equation} \begin{equation} \label{equ:mse} MSE = \frac{1}{J} \sum^J_{j=1} \frac{\sum_{i=1}^N (\hat{y}^{j}_{i} - y^{j}_{i})^2}{N_j}, \end{equation} \begin{equation} \label{equ:er} MAPE = \frac{1}{J} \sum^J_{j=1} \frac{\sum_{n=1}^{N_j} |\hat{y}^j_n - y^j_n|}{\sum_{n=1}^{N_j} y^j_n}, \end{equation} \begin{equation} \label{equ:pccs} PCCS = \frac{1}{J} \sum^J_{j=1} \frac{\sum_{i=1}^{N_j} (\hat{y}^{j}_{i} - \hat{\mu}^j)(y^{j}_{i} - \mu^j)}{\sqrt{\sum_{i=1}^{N_j} (\hat{y}^{j}_{i} - \hat{\mu}^j)^2}\sqrt{\sum_{i=1}^{N_j}(y^{j}_{i} - \mu^j)^2}}, \end{equation} where $J$ denotes the number of testing tasks; $N_j$ is the number of testing data on task $j$; $\hat{y}_n$ and $y_n$ are the estimation and the ground truth of the $n^\text{th}$ sample; $\hat{\mu}^j$ and $\mu^j$ are the mean of $\hat{y}_n$ and $y_n$, respectively. \noindent \textbf{5. Features} To be more specific, for coal, data of ratios for different types of coals and properties for each type in the form of a 4-dimension vector showing values of Ash (Ad), Volatiles (Vdaf), Sulfur (St,d), and Caking Index (G) are available. By taking the ratio-weighted average value of each of the four properties, each coal blending sample is represented by a 4-dimension feature. For coke, data of Ad, Vdaf, Std, and G value for each sample, and the corresponding CSR value after coking the coals are available. The preprocessed 4-dimension feature for several coal blending combinations are used as input data, and the CSR value is used as corresponding labels. \noindent \textbf{6. Baselines and Hyperparameters} The base model could be of SVM, CNN, linear regression. \noindent \textbf{7. Hardware} A cloud instance is used to run the benchmark, which has 12 cores, each with 3.20GHz, and a total memory of 32G. %1) average accuracy metric; %2) generalized - specifier real-world samples; %3) optimization samples: delay, result (feasible; cost reduction); \subsection{Re-identification} \subsubsection{Person Re-identification} \noindent \textbf{1. Architecture} We can simulate two different architectures of federated learning: edge-cloud architecture and device-edge-cloud architecture. Firstly, since each dataset is collected from multiple camera views, we can split each dataset by camera views. Each camera is defined as an individual client to directly communicate with the server to conduct the federated learning process. Under this scenario, keeping images in clients significantly reduces the risk of privacy leakage. However, this scenario exerts high requirements on the computation ability of cameras to train deep models, which makes practical deployment harder. Secondly, we can employ device-edge-cloud architecture, where clients are defined as edge servers. The edge servers construct datasets from multiple cameras and then collaboratively conduct federated learning with the central server. Under this scenario, each client contains one person ReID dataset. \noindent \textbf{2. Dataset and Related Settings} We select nine different datasets with significant variances in image amounts, identity numbers, scenes (indoor or outdoor), and the number of camera views. These properties lead to huge domain gaps among each other, simulating the statistical heterogeneity in reality. 
These nine datasets are MSMT17 \cite{Wei2017Msmt}, DukeMTMC-reID \cite{zheng2017dukemtmc-reid}, Market-1501 \cite{Zheng2015Market1501}, CUHK03-NP \cite{Li2014CUHK03}, PRID2011 \cite{prid2011}, CUHK01 \cite{li2012cuhk01}, VIPeR \cite{Gray2008ViewpointIP}, 3DPeS \cite{3dpes}, and iLIDS-VID \cite{iLIDS-VID}. \noindent \textbf{3. Scheme-wise Metrics} Support metrics of Federated Learning. \noindent \textbf{4. Contextual Metrics} To be defined. \noindent \textbf{5. Features} To be defined. \noindent \textbf{6. Baselines and Hyperparameters} We can adopt implementation and reference related settings in \cite{zhuang2020fedreid, zhuang2021easyfl}. \noindent \textbf{7. Hardware} To be defined.
{ "alphanum_fraction": 0.7617948348, "avg_line_length": 44.8608414239, "ext": "tex", "hexsha": "dd7b3a15b24c0478a26fde4beedfc84d5bb6113b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c52b0cd7a5290f16e079f96df8f8c00b0bff9965", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "MooreZheng/bench", "max_forks_repo_path": "Edge Intelligence Benchmark/5. testCaseDesign.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c52b0cd7a5290f16e079f96df8f8c00b0bff9965", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "MooreZheng/bench", "max_issues_repo_path": "Edge Intelligence Benchmark/5. testCaseDesign.tex", "max_line_length": 1015, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c52b0cd7a5290f16e079f96df8f8c00b0bff9965", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "MooreZheng/bench", "max_stars_repo_path": "Edge Intelligence Benchmark/5. testCaseDesign.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-07T09:07:34.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-18T01:16:31.000Z", "num_tokens": 3760, "size": 13862 }
\chapter{Introduction} \label{sec:intro} At the heart of programming languages lies the question of how to represent and manipulate programs. Nearly all aspects of programming language work---including theorem proving, optimizing compilers, and program synthesis---% depend on these fundamental notions, since they dictate how (and how efficiently!) a tool stores and works with programs. The choice of how to represent and manipulate programs largely determines the the efficacy of a tool or technique. This thesis takes a fresh look at a data structure called the \textit{\egraph}~\cite{nelson} and a technique called \textit{\eqsat}~\cite{eqsat} for representing and manipulating programs, respectively. Together, they offer great promise over the status quo of syntax trees and term rewriting, Unfortunately, poor scalability and flexibility has limited the approach's reach. % Unfortunately, % \egraphs (as described in 1980) % are not specifically tailored to the needs of % equality saturation (as described in 2009). % This mismatch between data structure and algorithm % has led to poor scalability and flexibility, % limiting the approach's applicability. % have led \egraphs to % remain niche data structures (outside of theorem prover internals) % and limited \eqsat's applicability. To bring \egraphs and \eqsat into the programming language practitioner's toolbox, % as a fast, flexible way to work with programs in % various domains, this thesis makes the following claim: \begin{quote} \it\Thesisstmt \end{quote} The rest of this document supports that claim up in three ways: \begin{enumerate} \item Our original research contributions presented in \autoref{sec:rebuild} and \autoref{sec:extensions} advance the state-of-the-art around \egraphs and \eqsat, enhancing both so that together they form a compelling alternative to conventional term rewriting. \item These theoretical advances are realized in \egg, a first-of-its-kind tool that has brought \egraphs and \eqsat to a wider range of users and domains than ever before. \autoref{sec:egg} documents how \egg combines novel techniques with myriad practical niceties to make a generic, high-performance implementation of \egraphs and \eqsat. \item \autoref{sec:case-studies} makes an empirical case for the thesis statement in the form of published projects that rely on \egg. These case studies demonstrate that \egraphs and \eqsat can now be used to achieve state-of-the-art results in various domains. \end{enumerate} \section{The World Before \egg} Abstract syntax trees and directed term rewriting are (and will likely remain) the most popular approached for program representation and manipulation. We defer discussion of issues with this approach to \autoref{sec:background}. Here I would like to set the stage a little and describe the setting in which this thesis's contributions were made. Equality graphs (\egraphs) were developed in late 1970s to efficiently represent congruence relations in automated theorem provers (ATPs). 
At a high level, \egraphs~\cite{nelson, pp-congr} extend union-find~\cite{unionfind} to compactly represent equivalence classes of expressions while maintaining a key invariant: the equivalence relation is closed under congruence.\footnote{ Intuitively, congruence simply means that $a \equiv b$ implies $f(a) \equiv f(b)$.} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{eqsat} \caption{ \Eqsat optimizes a program $p$ by storing it in an \egraph, growing the \egraph into a large set of equivalent terms by applying rewrites, and finally selecting the best program that is equivalent to $p$. } \label{fig:eqsat-flow} \end{figure} In the 2000s, work like Denali~\cite{denali} and the first equality saturation papers~\cite{eqsat, eqsat-llvm} began to repurpose \egraphs as the basis for program optimization. % Over the past decade, several projects have repurposed \egraphs % to implement state-of-the-art, rewrite-driven % compiler optimizations and program synthesizers % using a technique known as \textit{equality saturation}~\cite{ % denali, eqsat, eqsat-llvm, szalinski, yogo-pldi20, spores, herbie}. Given an input program $p$, equality saturation constructs an \egraph $E$ that represents a large set of programs equivalent to $p$, and then extracts the ``best'' program from $E$. The \egraph is grown by repeatedly applying pattern-based rewrites $\ell \to r$. Each rewrite $\ell \to r$ includes a pattern $\ell$ to match and a pattern $r$ to instantiate and merge with the matched subterm. Critically, these rewrites only add information\footnote{ As opposed to traditional term rewriting which only considers a single term at a time. \autoref{sec:rewriting} covers this in detail. } to the \egraph, eliminating the need for careful ordering. Upon reaching a fixed point (\textit{saturation}), $E$ will represent \textit{all equivalent ways} to express $p$ with respect to the given rewrites. After saturation (or timeout), a final extraction procedure analyzes $E$ and selects the optimal program according to a user-provided cost function. Ideally, a user could simply provide a language grammar and rewrites, and equality saturation would produce a effective optimizer. Three challenges blocked this ideal: \begin{enumerate} \item Maintaining congruence can become expensive as $E$ grows. In part, this is because \egraphs from the conventional ATP setting remained unspecialized to the distinct \textit{equality saturation workload}. Early applications based on \eqsat had to limit their searches, which can impact the quality of results. \item Many applications critically depend on \textit{domain-specific analyses}, but integrating them required ad~hoc extensions to the \egraph. The lack of a general extension mechanism forced researchers to re-implement equality saturation from scratch several times~\cite{herbie, eqsat, wu_siga19}. \item Even outside of the context of domain-specific analyses, the lack of a generic implementation of \egraphs and \eqsat made the ``just bring your grammar and rewrites'' vision impossible. Users had to start from scratch and reimplement state-of-the-art techniques for their own domain. \end{enumerate} \begin{wrapfigure}{r}{6cm} \centering \vspace{-22mm} \includegraphics[width=3cm]{egg.png} \caption{The egg logo.} \label{fig:egg-logo} \vspace{-1em} \end{wrapfigure} \section{The \egg Era} The work presented in this thesis addresses all of the concerns raised above. 
Our theoretical contributions make \egraphs and \eqsat faster and more flexible, and \egg makes it all usable in real-world applications. \paragraph{Equality Saturation Workload} ATPs frequently query and modify \egraphs and additionally require the ability to undo modifications (e.g., in DPLL(T)~\cite{dpll}). This forces conventional \egraph designs to maintain the congruence invariant after every operation. In contrast, the equality saturation workload can be factored into distinct phases of (1) querying the \egraph to simultaneously find all rewrite matches and (2) modifying the \egraph to merge in equivalences for all matched terms. We present a new amortized algorithm called \textit{rebuilding} (\autoref{sec:rebuilding}) that defers \egraph invariant maintenance to equality saturation phase boundaries without compromising soundness. Empirically, rebuilding provides asymptotic speedups over conventional approaches. \paragraph{Domain-specific Analyses} Equality saturation is primarily driven by syntactic rewriting, but many applications require additional interpreted reasoning to bring domain knowledge into the \egraph. Past implementations have resorted to ad~hoc \egraph manipulations to integrate what would otherwise be simple program analyses like constant folding. To flexibly incorporate such reasoning, we introduce a new, general mechanism called \textit{\eclass analyses} (\autoref{sec:extensions}). An \eclass analysis annotates each \eclass (an equivalence class of terms) with facts drawn from a semilattice domain. % resembling an abstract interpretation lifted to \egraphs. As the \egraph grows, facts are introduced, propagated, and joined to satisfy the \textit{\eclass analysis invariant}, which relates analysis facts to the terms represented in the \egraph. Rewrites cooperate with \eclass analyses by depending on analysis facts and adding equivalences that in turn establish additional facts. The examples and case studies demonstrate \eclass analyses like constant folding and free variable analysis which required bespoke customization in previous equality saturation implementations. \paragraph{A generic, high-performance implementation} We implement rebuilding and \eclass analyses in an open-source\footnote{ \begin{tabular}[t]{ll} web: & \url{https://egraphs-good.github.io}\\ source: & \url{https://github.com/egraphs-good/egg}\\ documentation: & \url{https://docs.rs/egg} \end{tabular} } library called \egg (\textbf{e}-\textbf{g}raphs \textbf{g}ood). \Egg specifically targets equality saturation, taking advantage of its workload characteristics and supporting easy extension mechanisms to provide \egraphs specialized for program synthesis and optimization. \Egg also addresses more prosaic challenges, e.g., parameterizing over user-defined languages, rewrites, and cost functions while still providing an optimized implementation. Our case studies in \autoref{sec:case-studies} demonstrate how \egg's features constitute a general, reusable \egraph library that can support equality saturation across diverse domains. In summary, I believe that \eqsat has a big role in the future of programming languages. When so many of the challenges to building programming tools revolve around \textit{choice}, \eqsat's ability to operate over many terms \textit{simultaneously} is hard to ignore. % offers an alternative to heuristics or backtracking Even in situations where equality saturation is not the best approach, it may be the easiest, since it offloads work from the developer to the computer. 
More complex applications may require making a particular choice at some points and taking all possible paths in others; \egg's practical, generic implementation is a compatible with this approach as well. Fast and flexible \eqsat provides the power to \textit{not} choose how to manipulate programs, which can be a boon to both power users and domain experts who may not have expertise in building programming language tools. I hope this thesis and \egg are steps toward having \egraphs and \eqsat in the toolbox of anyone working with programs, regardless of their expertise, the size and scale of the problem, or the application domain. % In summary, the contributions of this paper include: % \begin{itemize} % \item Rebuilding (\autoref{sec:rebuilding}), % a technique that restores key correctness and performance invariants % only at select points in the equality saturation algorithm. % Our evaluation demonstrates that rebuilding is faster than % existing techniques in practice. % \item \Eclass analysis (\autoref{sec:extensions}), % a technique for integrating domain-specific analyses % that cannot be expressed as purely syntactic rewrites. % The \eclass analysis invariant provides the guarantees % that enable cooperation between rewrites and analyses. % \item A fast, extensible implementation of % \egraphs in a library dubbed \egg (\autoref{sec:impl}). % \item Case studies of real-world, published tools that use \egg % for deductive synthesis and program optimization across domains such as % floating point accuracy, % linear algebra optimization, % and CAD program synthesis % (\autoref{sec:case-studies}). % Where previous implementations existed, % \egg is orders of magnitude faster and offers more features. % \end{itemize} %%% Local Variables: %%% TeX-master: "../thesis" %%% End:
{ "alphanum_fraction": 0.7752394989, "avg_line_length": 37.1215805471, "ext": "tex", "hexsha": "481aba5a899a0d7d449e1c5c58abdb8063457b24", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1706ea16107f60d8c43caff13ecdb57950896872", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "mwillsey/thesis", "max_forks_repo_path": "chapters/01-intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1706ea16107f60d8c43caff13ecdb57950896872", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "mwillsey/thesis", "max_issues_repo_path": "chapters/01-intro.tex", "max_line_length": 81, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1706ea16107f60d8c43caff13ecdb57950896872", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "mwillsey/thesis", "max_stars_repo_path": "chapters/01-intro.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-17T01:00:20.000Z", "max_stars_repo_stars_event_min_datetime": "2021-10-17T01:00:20.000Z", "num_tokens": 2847, "size": 12213 }
\section{QMCPACK Coding Standards} The following document presents best practices for the code at the current time. New development should follow these guidelines and contributors are expected to adhere to them for the sake of consistency in the code base. A well prepared PR should follow these standards before inclusion in the mainline. A Clang format file can be found in ? that should be run to fix and check the code style whenever possible. Keep reformatting, feature, and bug commits separate whenever possible. \section{Files} Each file should start with the header \lstset{language=C++,style=C++} \begin{lstlisting} ////////////////////////////////////////////////////////////////////////////////////// // This file is distributed under the University of Illinois/NCSA Open Source License. // See LICENSE file in top directory for details. // // Copyright (c) 2018 QMCPACK developers // // File developed by: Name, email, affiliation // // File created by: Name, email, affiliation ////////////////////////////////////////////////////////////////////////////////////// \end{lstlisting} If you make significant changes to an existing file, add yourself to the list of developed by authors. \subsection{File organization} Header files should be placed in the same directory as their implementations. Unit tests should be written for all new functionality. These tests should be placed in a \inlinecode{tests} subdirectory below the implementations. \subsection{File Names} Each class should be defined in a separate file whose name is the same as the class name. Use separate \inlinecode{.cpp} implementation files whenever possible to aid in incremental compilation. The filenames of tests are composed by the filename of the object tested and the prefix \inlinecode{test_}. The filenames of \emph{fake} and \emph{mock} objects used in tests are composed by the prefixes \inlinecode{fake_} and \inlinecode{mock_}, respectively, and the filename of the object that is imitated. \subsection{Header files} All header files should be self-contained i.e. it is not dependent on following any other header when it is included. Nor should they include files that are not necessary for their use, i.e. headers only needed by the implementation. Implementation files should not include files only for the benefit of files they include. There are many header files that violate this currently. Each header must use \inlinecode{\#define} guards to prevent multiple inclusion. The symbol name of the \inlinecode{\#define} guards should be \inlinecode{NAMESPACE(s)_CLASSNAME_H}. \subsection{Includes} Header files should be included with the full path based on the \verb|src| directory. For example, the file \verb|qmcpack/src/QMCWaveFunctions/SPOSet.h| should be included as \begin{lstlisting} #include "QMCWaveFunctions/SPOSet.h" \end{lstlisting} Even if the included file is located in the same directory as the including file this rule should be obeyed. Header files from external projects and standard libraries should be includes using \inlinecode{<iostream>} convention, while headers that are part of the QMCPACK project should be included using the \verb|"our_header.h"| convention. For readability it is suggested to use the standard order of includes: \begin{enumerate} \item related header \item std C library headers \item std C++ library headers \item Other libraries' headers \item QMCPACK headers \end{enumerate} In each section the included files should be sorted in alphabetical order. 
\section{Naming} The balance between description and ease of implementation should be balanced such that the code remains self documenting within a single terminal window. If an extremely short variable name is used its scope must be shorter than $\sim 40$ lines. An exception is made for template parameters which must be in all CAPS. \subsection{Namespace Names} Namespace names should be one word, lowercase. \subsection{Type and Class Names} Type and class names should start with a capital letter and have a capital letter for each new word. Underscores \inlinecode{_} are not allowed. \subsection{Variable Names} Variable names should not begin with a capital letter this is reserved for type and class names. Underscores (\inlinecode{_}) should be used to separate words. \subsection{Class Data Members} Class private/protected data members names should follow the convention of variable names with a trailing underscore \inlinecode{_}. \subsection{(Member) Function Names} Function names should start with a lowercase character and have a capital letter for each new word. \subsection{Lambda Expressions} Named lambda expressions follow the naming convention for functions: \begin{lstlisting}[showspaces=false] auto myWhatever = [](int i) { return i + 4; }; \end{lstlisting} \subsection{Macro Names} Macro names should be all uppercase and can include underscores (\inlinecode{_}). The underscore is not allowed as first or last character. \subsection{Test Case and Test Names} Test code files should be named \begin{lstlisting}[showspaces=false] class DiracMatrix; //leads to test_dirac_matrix.cpp //which contains test cases named TEST_CASE("DiracMatrix_update_row","[wavefunction][fermion]") \end{lstlisting} Where the test case covers the \inlinecode{updateRow} and \inlinecode{[wavefunction][fermion]} indicates this test belongs to the fermion wavefunction functionality. \section{Comments} \subsection{Comment Style} Use the \inlinecode{// Comment} syntax for actual comments. Use: \begin{lstlisting} /** base class for Single-particle orbital sets * * SPOSet stands for S(ingle)P(article)O(rbital)Set which contains * a number of single-particle orbitals with capabilities of * evaluating \f$ \psi_j({\bf r}_i)\f$ */ \end{lstlisting} or \begin{lstlisting} ///index in the builder list of sposets int builder_index; \end{lstlisting} \subsection{Documentation} Doxygen will be used for source documentation. Doxygen commands should be used when appropriate guidance on this TBD. \subsubsection{File Docs} Do not put the file name after the \verb|\file| doxygen command. Doxygen will fill it in for the file the tag appears in. \begin{lstlisting} /** \file * File level documentation */ \end{lstlisting} \subsubsection{Class Docs} Every class should have a short description (in the header of the file) of what it is and what is does. Comments for public class member functions follow the same rules as general function comments. Comments for private members are allowed, but not mandatory. \subsubsection{Function Docs} For function parameters whose type is non-const reference or pointer to non-const memory, it should be specified if they are input (In:), output (Out:) or input-output parameters (InOut:). Example: \begin{lstlisting} /** Updates foo and computes bar using in_1 .. in_5. 
* \param[in] in_3 * \param[in] in_5 * \param[in,out] foo * \param[out] bar */ //This is probably not what our clang-format would do void computeFooBar(Type in_1, const Type& in_2, Type& in_3, const Type* in_4, Type* in_5, Type& foo, Type& bar); \end{lstlisting} \subsubsection{Variable Documentation} Name should be self-descriptive. If you need documentation consider renaming first. \subsubsection{Golden Rule of Comments} If you modify a piece of code, also adapt the comments that belong to it if necessary. \section{Formatting and ``Style''} Use the provided clang-format style in \inlinecode{src/.clang-format} to format \verb|.h|, \verb|.hpp|, \verb|.cu| and \verb|.cpp| files. Many of the rules below will be applied to the code by clang-format, which should allow you to ignore most of them if you always run it on your modified code. You should use clang-format support and the .clangformat file with your editor, use a git precommit hook to run clang-format or run clang-format manually on every file you modify. However if you see numerous formatting updates outside of the code you have modified first commit the formatting changes in a separate PR. \subsection{Indentation} Indentation consists of 2 spaces. Do not use tabs in the code. \subsection{Line Length} The length of each line of your code should be at most \emph{120} characters. \subsection{Horizontal Spacing} No trailing whitespaces should be added to any line. Use no space before a comma (\inlinecode{,}) and a semicolon (\inlinecode{;}) and add a space after them if they are not at the end of a line. \subsection{Preprocessor Directives} The preprocessor directives are not indented. The hash is the first character of the line. \subsection{Binary Operators} The assignment operators should always have spaces around them. \subsection{Unary Operators} Do not put any space between an unary operator and their argument. \subsection{Types} The \inlinecode{using} syntax is preferred to \inlinecode{typedef} for type aliases. If the actual type is not excessively long or complex simply just use it, renaming simple types makes code less understandable. \subsection{Pointers and References} Pointer or Reference operators should go with the type. But understand the compiler reads them right to left. \begin{lstlisting} Type* var; Type& var; //Understand this is incompatible with multiple declarations Type* var1, var2; // var1 is a pointer to Type but var2 is a Type. \end{lstlisting} \subsection{Templates} The angle brackets of templates should not have any external and internal padding. \begin{lstlisting} template<class C> class Class1; Class1<Class2<type1>> object; \end{lstlisting} \subsection{Vertical Spacing} Use empty lines when it helps to improve the readability of the code, but do not use too many. Do not use empty lines after a brace which opens a scope, or before a brace which closes a scope. Each file should contain an empty line at the end of the file. Some editors add an empty line automatically, some do not. 
\subsection{Variable Declarations and Definitions} \begin{itemize} \item Avoid declaring multiple variables in the same declaration, especially if they are not fundamental types: \begin{lstlisting}[showspaces=false] int x, y; // Not recommended Matrix a("my-matrix"), b(size); // Not allowed // Preferred int x; int y; Matrix a("my-matrix"); Matrix b(10); \end{lstlisting} \item Use the following order for keywords and modifiers in variable declarations: \begin{lstlisting}[showspaces=false] // General type [static] [const/constexpr] Type variable_name; // Pointer [static] [const] Type* [const] variable_name; // Integer // the int is not optional not all platforms support long, etc. [static] [const/constexpr] [signedness] [size] int variable_name; // Examples: static const Matrix a(10); const double* const d(3.14); constexpr unsigned long l(42); \end{lstlisting} \end{itemize} \subsection{Function Declarations and Definitions} The return type should be on the same line as the function name. Parameters should be on the same line, too, unless they do not fit on it then one parameter per line aligned with the first parameter should be used. Include the parameter names also in the declaration of a function, i.e. \begin{lstlisting} // calculates a*b+c double function(double a, double b, double c); // avoid double function(double, double, double); //dont do this double function(BigTemplatedSomething<double> a, BigTemplatedSomething<double> b, BigTemplatedSomething<double> c); //do this double function(BigTemplatedSomething<double> a, BigTemplatedSomething<double> b, BigTemplatedSomething<double> c); \end{lstlisting} \subsection{Conditionals} Examples: \begin{lstlisting} if (condition) statement; else statement; if (condition) { statement; } else if (condition2) { statement; } else { statement; } \end{lstlisting} \subsection{Switch statement} Switch statements should always have a default case. Example: \begin{lstlisting} switch (var) { case 0: statement1; statement2; break; case 1: statement1; statement2; break; default: statement1; statement2; } \end{lstlisting} \subsection{Loops} Examples: \begin{lstlisting} for (statement; condition; statement) statement; for (statement; condition; statement) { statement1; statement2; } while (condition) statement; while (condition) { statement1; statement2; } do { statement; } while (condition); \end{lstlisting} \subsection{Class Format} \label{subsec:class_format} \inlinecode{public}, \inlinecode{protected} and \inlinecode{private} keywords are not indented. Example: \begin{lstlisting} class Foo : public Bar { public: Foo(); explicit Foo(int var); void function(); void emptyFunction() {} void setVar(const int var) { var_ = var; } int getVar() const { return var_; } private: bool privateFunction(); int var_; int var2_; }; \end{lstlisting} \subsubsection{Constructor Initializer Lists} Examples: \begin{lstlisting} // When everything fits on one line: Foo::Foo(int var) : var_(var) { statement; } // If the signature and the initializer list do not // fit on one line, the colon is indented by 4 spaces: Foo::Foo(int var) : var_(var), var2_(var + 1) { statement; } // If the initializer list occupies more lines, // they are aligned in the following way: Foo::Foo(int var) : some_var_(var), some_other_var_(var + 1) { statement; } // No statements: Foo::Foo(int var) : some_var_(var) {} \end{lstlisting} \subsection{Namespace Formatting} The content of namespaces is not indented. A comment should indicate when a namespace is closed. 
(clang-format will add these if absent) If nested namespaces are used, a comment with the full namespace is required after opening a set of namespaces or an inner namespace. Examples: \begin{lstlisting} namespace ns { void foo(); } // ns \end{lstlisting} \begin{lstlisting} namespace ns1 { namespace ns2 { // ns1::ns2:: void foo(); namespace ns3 { // ns1::ns2::ns3:: void bar(); } // ns3 } // ns2 namespace ns4 { namespace ns5 { // ns1::ns4::ns5:: void foo(); } // ns5 } // ns4 } // ns1 \end{lstlisting} \section{QMCPack C++ Guidance} The guidance here like any advice on how to program should not be treated as a set of rules but rather the hardwon wisdom of many hours of develop suffering. In the past many were ignored and the absolute worst results of that will affect whatever code you need to work with. Your PR should go much smoother if you don't ignore them. \subsection{Encapsulation} A class is not just a naming scheme for a set of variables and functions. It should provide a logical set of methods, could contain the state of a logical object and might allow access to object data through a well defined interface related variables. while preserving maximally ability to change internal implementation of the class. Do not use \inlinecode{struct} as a way to avoid controlling access to the class. Only in rare cases where a class is a fully public data structure \inlinecode{struct} is this appropriate. Ignore (or fix one) the many examples of this in QMCPACK. Do not use inheritance primarily as a means to break encapsulation. If your class could aggregate or compose another class do that and access it solely through its public interface. You will reduce dependencies. \subsection{Casting} In C++ source avoid C style casts, they are difficult to search for and imprecise in function. An exception is made for controlling implicit conversion of simple numerical types. Explicit C++ style casts make it clear what the safety of the cast is and what sort of conversion is expected to be possible. \begin{lstlisting} int c = 2; int d = 3; double a; a = (double)c / d; // Ok const class1 c1; class2* c2; c2 = (class2*)&c1; // NO SPOSetAdvanced* spo_advanced = new SPOSetAdvanced(); SPOSet* spo = (SPOSet*)spo_advanced; // NO SPOSet* spo = static_cast<SPOSet*>(spo_advanced); // OK if upcast, dangerous if downcast \end{lstlisting} \subsection{Pre-increment and pre-decrement} Use the pre-increment (pre-decrement) operator when a variable is incremented (decremented) and the value of the expression is not used. In particular, use the pre-increment (pre-decrement) operator for loop counters where i is not used: \begin{lstlisting}[showspaces=false] for (int i = 0; i < N; ++i) { doSomething(); } for (int i = 0; i < N; i++) { doSomething(i); } \end{lstlisting} The post-increment and post-decrement operators create an unnecessary copy, that the compiler cannot optimize away in the case of iterators or other classes with overloaded increment and decrement operators. \subsection{Alternative Operator Representations} Alternative representations of operators and other tokens such as \inlinecode{and}, \inlinecode{or}, and \inlinecode{not} instead of \inlinecode{&&}, \inlinecode{||}, and \inlinecode{!} are not allowed. For the reason of consistency, the far more common primary tokens should always be used. \subsection{Use of const} \begin{itemize} \item Add the \inlinecode{const} qualifier to all function parameters that are not modified in the function body. 
\item For parameters passed by value, only add the keyword in the function definition. \item Member functions should be specified const whenever possible. \begin{lstlisting}[showspaces=false] // Declaration int computeFoo(int bar, const Matrix& m) // Definition int computeFoo(const int bar, const Matrix& m) { int foo = 42; // Compute foo without changing bar or m. // ... return foo; } class MyClass { int count_ ... int getCount() const { return count_;} } \end{lstlisting} \end{itemize}
{ "alphanum_fraction": 0.7543909188, "avg_line_length": 32.6697416974, "ext": "tex", "hexsha": "3280c0acba426c1fd39d1b37d0f3e0a04c1e0fb6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_forks_repo_licenses": [ "NCSA" ], "max_forks_repo_name": "bwvdg/qmcpack", "max_forks_repo_path": "manual/coding_standards.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_issues_repo_issues_event_max_datetime": "2020-04-10T15:35:59.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-10T15:33:28.000Z", "max_issues_repo_licenses": [ "NCSA" ], "max_issues_repo_name": "bwvdg/qmcpack", "max_issues_repo_path": "manual/coding_standards.tex", "max_line_length": 412, "max_stars_count": null, "max_stars_repo_head_hexsha": "cd09fc54b36de2579c9802f5e64b7ec15506f3c3", "max_stars_repo_licenses": [ "NCSA" ], "max_stars_repo_name": "bwvdg/qmcpack", "max_stars_repo_path": "manual/coding_standards.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4154, "size": 17707 }
% Preamble. \documentclass[12pt]{article} \usepackage[margin=1.25in]{geometry} \usepackage[fleqn]{amsmath} \usepackage{textcomp} \usepackage{gensymb} \usepackage{amsfonts} \usepackage{enumitem} %\usepackage{tikz} % Include for figures. %\usepackage{subfiles} % Include for subfiles. %% Title macros. \newcommand{\HOMEWORKNUM}{14} \newcommand{\NAME}{D. Choi} \newcommand{\DATE}{2020-06-15} \title{\vspace{-2\baselineskip}MATH 225 - Homework \#\HOMEWORKNUM} \author{\NAME} \date{\DATE} %% Formatting options. %\pagenumbering{gobble} % Include for single-page document. % Macros. %% Norm. \newcommand{\norm}[1]{\left\lVert#1\right\rVert} % Document. \begin{document} \maketitle \section*{1.} \textit{For a generic orthographic projection $A$ in $\mathbb{R}^3$, show that $\text{det}(A) = 0$.} \\[\baselineskip] Let $\vec{n}$ be the normal vector so that the orthographic projection $A$ of a vector $\vec{v}$ is equivalent to the orthographic projection onto the plane \begin{equation*} \vec{n} \cdot \vec{v} = \begin{pmatrix} a \\ b \\ c \end{pmatrix} \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0 . \end{equation*} Such a projection is also equivalent to the vector rejection of $\vec{v}$ from $\vec{n}$. Thus, \begin{equation*} A\vec{v} = \vec{v} - \text{proj}_{\vec{n}}\vec{v} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} - \frac{ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \cdot \begin{pmatrix} a \\ b \\ c \end{pmatrix} }{ \norm{\begin{pmatrix} a \\ b \\ c \end{pmatrix}}^2 } \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} - \frac{ax + by + cz}{a^2 + b^2 + c^2} \begin{pmatrix} a \\ b \\ c \end{pmatrix} . \end{equation*} The columns of $A$ can be found by computing the vector rejections of the standard basis unit vectors from $\vec{n}$: \begin{gather*} A \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} - \frac{a}{a^2 + b^2 + c^2} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} b^2 + c^2 \\ -ab \\ -ac \end{pmatrix} \\ A \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} - \frac{b}{a^2 + b^2 + c^2} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} -ab \\ a^2 + c^2 \\ -bc \end{pmatrix} \\ A \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} - \frac{c}{a^2 + b^2 + c^2} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} -ac \\ -bc \\ a^2 + b^2 \end{pmatrix} . \end{gather*} Thus, \begin{equation*} A = \frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{pmatrix} . \end{equation*} For the sake of convenience, let \begin{equation*} B = \begin{pmatrix} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{pmatrix} \end{equation*} so that \begin{equation*} A = \frac{B}{a^2 + b^2 + c^2}, \end{equation*} which allows computing the determinant of $A$ as \begin{equation*} \text{det}(A) = \left( \frac{1}{a^2 + b^2 + c^2} \right)^3 \text{det}(B) = s \cdot \text{det}(B) . \end{equation*} The row reduction of $B$ results in \begin{equation*} \text{rref}(B) = \begin{pmatrix} 1 & 0 & -\frac{a}{c} \\ 0 & 1 & -\frac{b}{c} \\ 0 & 0 & 0 \end{pmatrix} . \end{equation*} Because $\text{rref}(B)$ contains a row of zeros, the determinant of $B$ must be $0$. \\ Thus, \begin{equation*} \text{det}(A) = s \cdot \text{det}(B) = s \cdot 0 = \boxed{0} . \end{equation*} \end{document}
{ "alphanum_fraction": 0.5955362003, "avg_line_length": 20.5251396648, "ext": "tex", "hexsha": "6c211b9fb34df6deb48db560ebbb37c0f8ae1ddb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1567, "size": 3674 }
\documentclass{beamer} \usetheme{CambridgeUS} \title[Process]{The Koopman operator identification algorithm} \subtitle{So far} \institute[Polimi]{Politecnico di Milano} \author{Sergio Vanegas} \date{\today} \usepackage{listings} \usepackage[framed,numbered,autolinebreaks,useliterate]{mcode/mcode} \usepackage{caption} \usepackage{subcaption} \usepackage{siunitx} \begin{document} \begin{frame}[plain,noframenumbering] \maketitle \end{frame} \begin{frame}{Table of Contents} \tableofcontents \end{frame} \section{Pre-requisites} \begin{frame}[fragile]{The forced Van Der Pol oscillator} \begin{equation} \begin{cases} \dot{x}_1 = 2*x_2 \\ \dot{x}_2 = -0.8*x_1 + 2*x_2 + 10*x_1^2*x_2 + u \end{cases} \end{equation} \begin{lstlisting}[language=Matlab] function dxdt = VanDerPol(t,x,u_t,u) % Interpolation just for the sake of indexing u = interp1(u_t,u,t); dxdt = [2*x(2); -0.8*x(1)+2*x(2)-10*(x(1)^2)*x(2) + u]; end \end{lstlisting} \end{frame} \begin{frame}{Data generation - Parameters} \begin{itemize} \item $T_s = \SI{1}{\milli \second}$ \item $T = \SI{20}{\second}$ \item $N_\text{Sim} = \num{100}$ \item $\textbf{u}$: random Gaussian signal with the same sampling rate and length as the simulation \item $\sigma = 1$: std deviation of the Gaussian noise \item $\mu$: average of the Gaussian noise (randomly selected per trajectory) \end{itemize} \end{frame} \begin{frame}[fragile]{Data generation - Noise as input} \begin{lstlisting}[language=Matlab,basicstyle=\tiny] Data_Source = "~/Documents/Thesis/VanDerPol_Unsteady_Input/"; ts = 1e-3; T = 20; N_Sim = 40; u_t = 0:ts:T; sigma = 1; % 0 for constant input mu = 0; system("mkdir -p "+Data_Source); delete(Data_Source+"Data_*.mat"); parfor f=1:N_Sim mu = randn(1); % 0 for zero-mean noise u = sigma^2*randn(size(u_t)) + mu; z0 = 4*rand(2,1) - 2; [~,z] = ode113(@(t,z) VanDerPol(t,z,u_t,u), u_t, z0); z = z'; L = length(z)-1; parsave(sprintf(Data_Source+'Data_%i.mat',f),L,z,u,T,ts); end \end{lstlisting} \end{frame} \begin{frame}[fragile]{Polynomial observables} \begin{lstlisting}[language=Matlab,basicstyle=\tiny] function [g,n] = Poly_Obs(z,P,Nx,Nu) % Number of monomials with degree lesser than P n = factorial(Nx+P)/(factorial(Nx)*factorial(P)); g = ones(n+Nu,size(z,2)); exponents = zeros(n,Nx); current = zeros(1,Nx); [exponents,~] = Recursive_Monomial(1,1,exponents,current,P); for i=1:n for j=1:Nx g(i,:) = g(i,:).*(z(j,:).^exponents(i,j)); end end % Inputs returned as additional observables g(n+1:end,:) = z(Nx+1:end,:); end \end{lstlisting} \end{frame} \begin{frame}[fragile]{Recursive exponent generation} \begin{lstlisting}[language=Matlab,basicstyle=\tiny] function [exponents,i] = Recursive_Monomial(i,j,exponents,current,P) while sum(current) <= P % Decide wether or not to register the exponent combination only on the deepest level of recursion if j==size(exponents,2) exponents(i,:) = current; i = i+1; else [exponents,i] = Recursive_Monomial(i,j+1,exponents,current,P); end % Reset exponent count current(j) = current(j)+1; end current(j) = 0; end \end{lstlisting} \end{frame} \section{First step: data observables} \begin{frame}[fragile]{Data reading \& Polynomial degree selection} Up to fourth order polynomials used for the presentation (lower order more stable but just as inaccurate). 
\begin{lstlisting}[language=Matlab, basicstyle=\tiny] data = dir(Data_Source+"Data_*.mat"); load(Data_Source+data(1).name); K = 0; for f=1:length(data) load(Data_Source+data(f).name,"L"); K = K+L; end P = 12; [g_t,n] = Poly_Obs(z,P); Px = zeros(n,K); Py = Px; Z = zeros(size(z,1),K); U = zeros(size(u,1),K); \end{lstlisting} \end{frame} \begin{frame}[fragile]{Data concatenation} \begin{lstlisting}[language=Matlab] Px(:,1:L) = g_t(:,1:end-1); Py(:,1:L) = g_t(:,2:end); Z(:,1:L) = z(:,2:end); U(:,1:L) = u(:,1:end-1); idx = L; for f=2:length(data) load(Data_Source+data(f).name); [g_t,~] = Poly_Obs(z,P); Px(:,idx+1:idx+L) = g_t(:,1:end-1); Py(:,idx+1:idx+L) = g_t(:,2:end); Z(:,idx+1:idx+L) = z(:,2:end); U(:,idx+1:idx+L) = u(:,1:end-1); idx = idx+L; end \end{lstlisting} \end{frame} \section{Second step: operator matrix} \begin{frame}{Matrix calculation - Notation} \begin{itemize} \item $\mathbf{u}\left[k\right]$: input vector at sample $k \in \left\{1,\dots,K+1\right\}$. \item $\left(\mathbf{x}\left[k\right],\mathbf{y}\left[k\right]\right)$: state pairs such that, for a generic causal DT system $\mathbf{x}\left[k+1\right] = \mathbf{T}(\mathbf{x}\left[k\right] , \mathbf{u}\left[k\right])$, $\mathbf{y}\left[k\right] = \mathbf{T}(\mathbf{x}\left[k\right], \mathbf{u}\left[k\right])$. States are of dimension $N_x$, and the input is of dimension $N_u$. \item $P_N$: Projection over the N-Dimensional truncated space of observables, where $\tilde{N} = \frac{\left(N_x + P\right)!}{N_x! P!}$ and $N = \tilde{N} + N_u$. \item $p_j$: Monomial observing the original states, with $j \in {1,\dots,\tilde{N}}$. \item $\mathbf{P_x},\mathbf{P_y},\mathbf{Z}$: State data collections of the form \begin{align*} \mathbf{P_x} &= \begin{pmatrix} \mathbf{p}\left(\mathbf{x}\left[1\right]\right) & \cdots & \mathbf{p}\left(\mathbf{x}\left[K\right]\right) \end{pmatrix} \\ \mathbf{P_y} &= \begin{pmatrix} \mathbf{p}\left(\mathbf{y}\left[1\right]\right) & \cdots & \mathbf{p}\left(\mathbf{y}\left[K\right]\right) \end{pmatrix} \\ \mathbf{Z} &= \begin{pmatrix} \mathbf{x}\left[1\right] & \cdots & \mathbf{x}\left[K\right] \end{pmatrix} \end{align*} \item $\mathbf{U}$: Input data collection of the form $\begin{pmatrix} \mathbf{u}\left[1\right] & \cdots & \mathbf{x}\left[K\right] \end{pmatrix}$. 
    \end{itemize}
\end{frame}

\begin{frame}{Matrix calculation - I}
    We denote the associated Koopman operator with sampling time $T_s$ by $\mathcal{K}^{T_s}$, derived from the solution to the following minimization problem:
    \begin{align}
        P_N & g = \text{arg min}_{\tilde{g} \in \text{span}\left\{p_1 , \dots , p_N , u_1 , \dots , u_m\right\}} \sum_{k=1}^K \left|\tilde{g}\left(\mathbf{x}_k\right) - g\left(\mathbf{x}_k\right)\right|^2 \\
        &\implies P_N g = \begin{pmatrix} g\left(\mathbf{x}_1\right) & \cdots & g\left(\mathbf{x}_K\right) \end{pmatrix} \begin{pmatrix} \mathbf{P}_x \\ \mathbf{U} \end{pmatrix}^\dagger \begin{pmatrix} \mathbf{p} \\ \mathbf{u} \end{pmatrix}\\
        & \implies P_N \left(\mathcal{K}^{T_s} p_j\right) = \mathbf{p}^T \mathbf{P_x}^\dagger \begin{pmatrix} \mathcal{K}^{T_s} p_j\left(\mathbf{x}_1\right) \\ \vdots \\ \mathcal{K}^{T_s} p_j\left(\mathbf{x}_K\right) \end{pmatrix} \approx \begin{pmatrix} p_j\left(\mathbf{y}_1\right) \\ \vdots \\ p_j\left(\mathbf{y}_K\right) \end{pmatrix} \begin{pmatrix} \mathbf{P}_x \\ \mathbf{U} \end{pmatrix}^\dagger \begin{pmatrix} \mathbf{p} \\ \mathbf{u} \end{pmatrix}
    \end{align}
\end{frame}

\begin{frame}[fragile]{Matrix calculation - II}
    We get each row of the truncation (and, by consequence, the whole matrix) as below, where $A$ and $B$ follow the notation from a classical state-space representation:
    \begin{equation} \label{eq:Matrix_Operator}
        \overline{\mathcal{K}}_{N,j} \approx \mathbf{P}_{y,j} \begin{pmatrix} \mathbf{P}_x \\ \mathbf{U} \end{pmatrix}^\dagger \implies \overline{\mathcal{K}}_N = \begin{bmatrix} A & B \end{bmatrix} \approx \mathbf{P}_y \begin{pmatrix} \mathbf{P}_x \\ \mathbf{U} \end{pmatrix}^\dagger
    \end{equation}
    We implement Equation~\ref{eq:Matrix_Operator} as follows:
    \begin{lstlisting}
[A,B] = Koopman(Px,Py,U);

% Generic way to recover original states
C = Unobserver(Px,Z);
% Recovery of original states independent from input
D = zeros(size(Z,1),size(U,1));

save(sprintf(Data_Source+'Operator_P_%i.mat',P), ...
    "A","B","C","D","ts");
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{Matrix calculation - III}
    The function \texttt{Koopman} called above is defined as follows:
    \begin{lstlisting}[language=Matlab,basicstyle=\tiny]
function [A,B] = Koopman(X,Y,U)
    % Following "Practical Considerations" from Korda-Mezic
    G = [X;U]*[X;U]';
    V = Y*[X;U]';

    A = zeros(size(Y,1),size(X,1));
    B = zeros(size(Y,1),size(U,1));

    % Matrix division segmented because of memory limits
    % M0 = V/G;
    % A = M0(:,1:size(X,1));
    % B = M0(:,size(X,1)+1:end);
    for i=1:size(Y,1)
        m0 = V(i,:)/G;
        A(i,:) = m0(1:size(X,1));
        B(i,:) = m0(size(X,1)+1:end);
    end
end
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{Spectral analysis - Code}
    Originally done for DMD purposes, but it remains useful for stability analysis.
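    Since the identified predictor is the discrete-time linear system $\mathbf{g}\left[k+1\right] = A\,\mathbf{g}\left[k\right] + B\,\mathbf{u}\left[k\right]$, an eigenvalue $\lambda$ of $A$ corresponds to a decaying mode whenever $\left|\lambda\right| < 1$, i.e.\ whenever it falls inside the unit circle drawn by the code below; if needed, the corresponding continuous-time exponent can be recovered as $\mu = \log\left(\lambda\right)/T_s$.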
    \begin{lstlisting}[language=Matlab]
Lambda = eig(A);

figure(1);
scatter(real(Lambda),imag(Lambda));
hold on;
rectangle('Position', [-1 -1 2 2], 'Curvature', 1);
hold off;
    \end{lstlisting}
\end{frame}

\section{Third step: trajectory prediction}

\begin{frame}[fragile]{Training data - Visualization}
    \begin{lstlisting}[language=Matlab]
load(Data_Source+data(1).name);
L = length(0:ts:T);
markerDecay = 0.001;

figure(2);
scatter(z(1,:),z(2,:),36*exp(-markerDecay*(0:L-1)));
hold on;
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{Training data - Trajectory prediction}
    \begin{lstlisting}[language=Matlab]
g_p = zeros(size(g_t,1),L);
[g_p(:,1),~] = Poly_Obs(z(:,1),P);

for i=1:L-1
    g_p(:,i+1) = A*g_p(:,i) + B*u(:,i);
end
z_p = C*g_p;

scatter(z_p(1,:),z_p(2,:),36*exp(-markerDecay*(0:L-1)));
legend("Original Data","Trajectory Prediction");
hold off;
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{New signal - Data generation}
    \begin{lstlisting}[language=Matlab]
T = 20;
z0 = 4*rand(2,1) - 2;
sigma = randn(1);

u_t = 0:ts:T;
u = sigma * cos(u_t);

[t,z] = ode113(@(t,z) VanDerPol(t,z,u_t,u), u_t, z0);
z = z';

L = length(0:ts:T);
markerDecay = 0.001;

figure(1);
scatter(z(1,:),z(2,:),36*exp(-markerDecay*(0:L-1)));
hold on;
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{New signal - Trajectory prediction}
    \begin{lstlisting}
g_p = zeros(size(g_t,1),L);
[g_p(:,1),~] = Poly_Obs(z(:,1),P);

for i=1:L-1
    g_p(:,i+1) = A*g_p(:,i) + B*u(:,i);
end
z_p = C*g_p;

scatter(z_p(1,:),z_p(2,:),36*exp(-markerDecay*(0:L-1)));
legend("New Simulation","Trajectory Prediction");
hold off;
    \end{lstlisting}
\end{frame}

\section{Alternative Observables}

\begin{frame}[fragile]{Thin-Plate Spline-Radial functions I}
    Instead of using polynomials, we can use $\psi_j \left(\mathbf{x}\right) = \left|\left|\mathbf{x}-\mathbf{x}_j\right|\right|^2 \log{\left(\left|\left|\mathbf{x}-\mathbf{x}_j\right|\right|\right)}$ as basis functions, where $\mathbf{x}_j$ is a bounded random vector of the same dimension as the original states. Since this collection of reference vectors has to be constant once it has been selected, we implement the observables as follows:
    \begin{lstlisting}[language=Matlab,basicstyle=\tiny]
function [g,n] = Spline_Radial_Obs(z,X0)
    L = size(z,2);
    n = size(X0,1);
    N = size(X0,2);

    % Preservation of original states
    g = [z;zeros(N,L)];

    for i=1:N
        x0 = X0(:,i);
        g(i+n,:) = vecnorm(z-x0).^2 .* log(vecnorm(z-x0));
    end

    n = n+N;
end
    \end{lstlisting}
\end{frame}

\begin{frame}[fragile]{Thin-Plate Spline-Radial functions II}
    The following modifications to the code had to be made in order to accommodate the new observables:
    \begin{lstlisting}[language=Matlab,basicstyle=\tiny]
% P = 12;
% [g_t,n] = Poly_Obs(z,P);
M = 100;
X0 = 2*rand(size(z,1),M)-1;
[g_t,n] = Spline_Radial_Obs(z,X0);
...
% [g_t,~] = Poly_Obs(z,P);
[g_t,~] = Spline_Radial_Obs(z,X0);
...
% save(sprintf(Data_Source+'Operator_P_%i.mat',P), ...
%     "A","B","C","D","ts");
save(sprintf(Data_Source+'Operator_M_%i.mat',M), ...
    "A","B","C","D","ts","X0");
...
% [g_p(:,1),~] = Poly_Obs(z(:,1),P);
[g_p(:,1),~] = Spline_Radial_Obs(z(:,1),X0);
    \end{lstlisting}
\end{frame}

\section{Results}

\begin{frame}{Spectral analysis - Results}
    \begin{figure}
        \centering
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Eigen_Poly.png}
            \caption{Polynomial observables (P=13) - Operator eigenvalues}
            \label{fig:eigen_poly}
        \end{subfigure}
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Eigen_Radial.png}
            \caption{Radial observables (M=100) - Operator eigenvalues}
            \label{fig:eigen_radial}
        \end{subfigure}
        \caption{Operator eigenvalues (highest calculated order) - Observable to observable}
    \end{figure}
\end{frame}

\begin{frame}{Noisy input prediction - Results}
    \begin{figure}
        \centering
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Training_Poly.png}
            \caption{Polynomial observables (P=13) - Predicted trajectory}
            \label{fig:training_poly}
        \end{subfigure}
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Training_Radial.png}
            \caption{Radial observables (M=100) - Predicted trajectory}
            \label{fig:training_radial}
        \end{subfigure}
        \caption{Reference and predicted trajectories (highest calculated order) - Training data}
    \end{figure}
\end{frame}

\begin{frame}{Deterministic - Results}
    \begin{figure}
        \centering
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Verification_Poly.png}
            \caption{Polynomial observables - Predicted trajectories}
            \label{fig:verification_poly}
        \end{subfigure}
        \hfill
        \begin{subfigure}[b]{0.45\textwidth}
            \centering
            \includegraphics[width=\textwidth]{Verification_Radial.png}
            \caption{Radial observables - Predicted trajectories}
            \label{fig:verification_radial}
        \end{subfigure}
        \caption{Reference and predicted trajectories - Deterministic input}
    \end{figure}
\end{frame}

\section{Conclusions}

\begin{frame}{Conclusions}
    \begin{itemize}
        \item Shorter sampling periods have been shown to yield a longer prediction horizon, since in practice the horizon is bounded by the variations in shape between different trajectories, which grow wider as physical time advances.
        \item A classical optimization approach was implemented as part of the development process; nevertheless, it did not produce results different from those of the pseudo-inverse approach, and it was not able to keep up with the increase in dimension of the observable space. As a consequence, it was removed.
        \item Spline-radial functions yielded an overall better prediction horizon than the polynomial observables, which opens the question of whether or not better observable structures should be researched.
        \item Finally, higher-degree observables yielded better approximations of the original system, but required a larger data library than theoretically expected in order to converge when applying the pseudo-inverse.
    \end{itemize}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.6132798249, "avg_line_length": 31.6269230769, "ext": "tex", "hexsha": "9108db8897ebe635ba0e88b5c9364a3ff99da0f6", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-11-21T14:26:02.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-21T14:26:02.000Z", "max_forks_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "sergiovaneg/LaTex_Documents", "max_forks_repo_path": "Thesis_Progress/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "sergiovaneg/LaTex_Documents", "max_issues_repo_path": "Thesis_Progress/main.tex", "max_line_length": 443, "max_stars_count": null, "max_stars_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "sergiovaneg/LaTex_Documents", "max_stars_repo_path": "Thesis_Progress/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5141, "size": 16446 }
\subsection{Chapter 5}
\begin{p}{Show that any 2-form $F$ on $\R\times S$ can be uniquely expressed as $B+E\wedge dt$ in such a way that for any local coordinates $x^i$ on $S$ we have $E=E_i dx^i$ and $B=\frac{1}{2}B_{ij} dx^i\wedge dx^j$.} \end{p}
For some patch $U\subset S$, we can use the local coordinates $x^i$ along with $t\in\R$ to define an element of $\R^{n+1}$. Any 2-form $F$ can be written as $F=F_{\alpha\beta} dx^\alpha\wedge dx^\beta$ where the summation runs over $0\dots n$ and $x^0=t$. We can also express this as $F=F_{0i}dt\wedge dx^i+F_{i0}dx^i\wedge dt+F_{ij}dx^i\wedge dx^j$. But then by antisymmetry of the wedge product, we might as well have $F_{i0}=-F_{0i}$. Then defining $E_i=2F_{0i}$ we have $F=E_i dx^i\wedge dt+F_{ij}dx^i\wedge dx^j$. With $B_{ij}=2F_{ij}$ we obtain $F=E\wedge dt+B$. This decomposition is unique because if it were true using different functions $E'_i$ and $B'_{ij}$, then their difference would be zero.
\begin{p}{Show that for any form $\om$ on $\R\times S$ there is a unique way to write $d\om=dt\wedge\partial_t\om+d_S\om$ such that for any local coordinates $x^i$ on $S$, writing $t=x^0$, we have $d_S\om=\partial_i \om_I dx^i\wedge dx^I$ and $dt\wedge \partial_t\om=\partial_0\om_I dx^0\wedge dx^I$.} \end{p}
First write $\om=\om_I dx^I$ for $I$ a multi-index of length $p$ (i.e. a $p$-tuple whose entries range over $0\dots n$, where $n$ is the dimension of $S$). Then $d_S\om=\partial_i \om_I dx^i\wedge dx^I$ and $\partial_t\om=\partial_0\om_I$. Again, uniqueness follows from linearity: if there were two possibilities, their difference would be zero by the linearity of $d$, implying their equality.
\begin{p}{Use the nondegeneracy of the metric to show that the map from $V$ to $V^*$ given by $v\mapsto g(v,\cdot)$ is an isomorphism, that is, 1-to-1 and onto.}\end{p}
Nondegeneracy of the metric means that if $g(v,w)=0$ for all $w$, then $v=0$. Now, for the map to be an isomorphism, we have to check the two conditions. First 1-to-1: different inputs result in different outputs. So we check if the difference of the outputs is zero: $g(v,\cdot)-g(w,\cdot)=g(v-w,\cdot)\neq 0$ unless $v=w$. Next is the onto condition: is every element of $V^*$ the image of some $v\in V$? Yes, consider $\om\in V^*$ and consider the action on a basis: $\om_\mu=\om(e_\mu)$. Setting $\om_\mu=g(v,e_\mu)$, we have to solve for $v$. Expanding $v=v^\mu e_\mu$ we obtain $\om_\mu=v^\nu g(e_\nu,e_\mu)=v^\nu g_{\nu,\mu}$. But the nondegeneracy of $g$ implies that $g_{\nu,\mu}$ is invertible. Suppose we have $g(v,w)=g_{\mu,\nu}v^\mu w^\nu=0$ for all $w$. Then $g_{\mu,\nu}v^\mu=0$ for all $\nu$. But this only occurs when $v^\mu=0$ for all $\mu$, meaning that $g_{\mu,\nu}$ never annihilates an input vector; it has full rank. Thus, it's invertible. So we can solve the equation for $v$.\\
There's a more direct proof for the vector spaces we are considering here: by exercise 28, we know the dual space has the same dimension as the original space. Thus, nothing in the dual can be outside the image of the original space, since if it were, it would constitute a new basis element.
\begin{p}{Let $v=v^\mu e_\mu$ be a vector field on a chart. Show that the corresponding 1-form $g(v,\cdot)$ is equal to $v_\nu f^\nu$, where $f^\nu$ is the dual basis of 1-forms and $v_\nu=g_{\mu,\nu}v^\mu$.}\end{p}
We can follow the lines of the previous exercise to derive the form, or simply show that the equation is correct.
Choosing an arbitrary vector field $w$, $g(v,w)=g_{\mu,\nu}v^\mu w^\nu$ which is equal to $v_\nu f^\nu(w)=v_\nu w^\nu$. \begin{p}{Let $\om=\om_\mu f^\mu$ be a 1-form on a chart. Show that the corresponding vector field is equal to $\om^\nu e_\nu$ where $\om^\nu=g^{\mu,\nu}\om_\mu$.}\end{p} For arbitrary $v$, $\om(v)=\om_\mu f^\mu(v)=\om_\mu v^\mu$. But there should be a $w$ such that $g(w,v)=\om_\mu v^\mu$. Using the series expression for $v$ and $w$, we obtain $\om_\nu v^\nu=g_{\mu,\nu}w^\mu v^\nu$. This should hold for arbitrary $v$, so $\om_\nu=g_{\mu,\nu}w^\mu$, or $w^\mu=g^{\mu,\nu}\om_\nu$. \begin{p}{Let $\eta$ be the Minkowski metric on $\R^4$. Show that its components in the standard basis are $\eta_{\mu,\nu}=\delta_{\mu,\nu}(1-2\delta_{\mu,0})$.}\end{p} The Minkowski metric is defined for two vectors $v$ and $w$ as $\eta(v,w)=-v^0w^0+v^1w^1+v^2w^2+v^3w^3$. The form of $\eta_{\mu,\nu}$ follows immediately. \begin{p}{Show that $g^\mu_\nu=\delta^\mu_\nu$.}\end{p} If we start with $g_{\lambda,\nu}$ we can raise the first index with the metric, obtaining $g^\mu_\nu=g^{\mu,\lambda}g_{\lambda,\nu}$. But $g^{\lambda,\nu}$ is the matrix inverse of $g_{\lambda,\nu}$, so the result of the product is the Kronecker delta. \begin{p}{Show that the inner product of $p$-forms is nondegenerate by supposing that $(e^1,\dots,e^n)$ is any orthonormal basis of 1-forms in some chart, with $g(e^i,e^i)=\epsilon(i)$ for $\epsilon(i)=\pm 1$. Show the $p$-fold wedge products $e^{i_1}\wedge\dots\wedge e^{i_p}$ form an orthonormal basis of $p$-forms with $\langle e^{i_1} \wedge\dots\wedge e^{i_p},e^{i_1}\wedge\dots\wedge e^{i_p}\rangle=\epsilon(i_1)\cdots\epsilon(i_p)$.}\end{p} From exercise 52 we already know that the nondegeneracy of $g$ is equivalent to the invertibility of the matrix of $g$ applied to basis elements. The same will be true here. First recall that the terms in the $p$-fold wedge products must be distinct so that the $p$-form doesn't vanish. Suppose we have two basis $p$-forms in which $m$ basis 1-forms appear in both. Reordering so that these come first, the matrix of $g(e^{i_j},e^{k_\ell})$ will be block diagonal with the first $m\times m$ block equal to the identity matrix, and zero everywhere else. This matrix obviously has zero determinant, thus the basis $p$-forms with any distinct elements are orthogonal. Otherwise, the determinant yields the product of the individual normalizations, establishing the last equation posed in the problem. This in turn means that the metric matrix for the basis $p$-forms is diagonal with entries $\pm 1$, whence it has full rank and is therefore nondegenerate. \begin{p}{Let $E=E_j dx^j$ be a 1-form on $\R^3$ with its Euclidean metric. Show that $\langle E,E\rangle=E_x^2+E_y^2+E_z^2$. Similarly, let $B=B_x dy\wedge dz+B_y dz\wedge dx+B_z dx\wedge dy$ be a 2-form. Show that $\langle B,B\rangle = B_x^2+B_y^2+B_z^2$.}\end{p} $\langle E,E\rangle=\delta^{i,j}E_i E_j=E_x^2+E_y^2+E_z^2$. $\langle B,B\rangle=\langle B_x dy\wedge dz+B_y dz\wedge dx+B_z dx\wedge dy, B_x dy\wedge dz+B_y dz\wedge dx+B_z dx\wedge dy\rangle= B_x^2+B_y^2+B_z^2$ since the different wedge products are orthonormal.\\ \begin{p}{In $\R^4$ let $F$ be the 2-form given by $F=B+E\wedge dt$, where $E$ and $B$ are given by the formulas in the previous exercise. 
Using the Minkowski metric, calculate $-\frac{1}{2}\langle F,F\rangle$.}\end{p} $\langle F,F\rangle=\langle B+E\wedge dt,B+E\wedge dt\rangle=\langle B,B\rangle+\langle B,E\wedge dt\rangle+\langle E\wedge dt,B\rangle +\langle E\wedge dt,E\wedge dt\rangle$. The middle two terms are zero by orthogonality of different wedge products, leaving only the last term to deal with. $\langle E\wedge dt,E\wedge dt\rangle={\rm det}\left(\begin{array}{cc}\langle E,E\rangle & \langle E,dt\rangle\\ \langle dt,E\rangle & \langle dt,dt\rangle\end{array}\right)={\rm det}\left(\begin{array}{cc}\langle E,E\rangle & 0\\ 0 &-1\end{array}\right)=-\langle E,E\rangle$. Hence $-\frac{1}{2}\langle F,F\rangle=\frac{1}{2}(\langle E,E\rangle-\langle B,B\rangle)$, which is the Lagrangian of the source(charge)-free field.\\ \begin{p}{Show that any even permutation of a given basis has the same orientation, while any odd permutation has the opposite orientation.} \end{p} To check this we must examine the determinant. Since the determinant of a product is the product of determinants, we can break up the permutation of basis elements into a product of pairwise swaps, whose individual determinant is minus one. Thus, if the permutation has an even number of swaps, the new basis has the same orientation, while an odd number of swaps implies the determinant will equal minus one. \begin{p}{Let $M$ be an oriented manifold. Show that we can cover $M$ with oriented charts $\phi_\alpha:U_\alpha\rightarrow\rn$, that is, charts such that the basis $dx^\mu$ of cotangent vectors on $\rn$, pulled back to $U^\alpha$ by $\phi_\alpha$, is positively oriented.} \end{p} The point here is that the volume form on $\rn$ can be pulled back to $U_\alpha$ by $\phi_\alpha$. Since $M$ is orientable, we can define a volume form $\om$ on all of $M$ to specify the standard orientation. In the chart $U_\alpha$ we can also define a volume form by pulling back the standard volume form $\bigwedge_\mu dx^\mu$ from $\rn$. If this specifies a different orientation, just reorder the basis of $\rn$ so as to have the same orientation as given by $\om$. Since $\om$ exists in all charts $U_\alpha$, we can always orient these appropriately. \begin{p}{Given a diffeomorphism $\phi:M\rightarrow N$ from one oriented manifold to another, we say that $\phi$ is orientation-preserving if the pullback of any right-handed basis (standard orientation) of a cotangent space in $N$ is a right-handed basis of a cotangent space in $M$. Show that if we can cover $M$ with charts such that the transition functions $\varphi_\alpha\circ \varphi_\beta^{-1}$ are orientation-preserving, we can make $M$ into an oriented manifold by using the charts to transfer the standard orientation on $\rn$ to an orientation on $M$.} \end{p} In each chart pull the standard volume form on $\rn$ back to $M$. Since any two charts are connected by an orientation-preserving transition function, the orientation is the same in any chart and the manifold is oriented everywhere.\\ \begin{p}{Let $M$ be an oriented $n$-dimensional semi-Riemannian manifold and let $\{e_\mu\}$ be an oriented orthonormal basis of cotangent vectors at some point $p\in M$. Show that $e_1\wedge\dots\wedge e_n={\rm vol}_p$ where \emph{vol} is the volume form associated to the metric on $M$, and \emph{vol}$_p$ is its value at $p$.} \end{p} The oriented basis in question can be generated from the standard basis by applying a transformation $T$. 
Since the input and output are orthonormal bases, $T$ is an orthogonal matrix, implying det$T=\pm 1$. Because it's orientation preserving, det$T=1$. Since in transforming $e_1\wedge\dots\wedge e_n$ into the standard basis at $p$, we pick up a factor equal to the determinant of $T$, $e_1\wedge\dots\wedge e_n={\rm vol}_p$.
\begin{p}{Show that if we define the Hodge star operator in a chart using the formula $\star(e^{i_1}\wedge \cdots \wedge e^{i_p})=\pm e^{i_{p+1}}\wedge \cdots \wedge e^{i_n}$, where the second set of indices consists of those not contained in the first and the $\pm$ is given by ${\rm sign}(i_1,\dots,i_n)\epsilon(i_1)\cdots \epsilon(i_p)$, it satisfies the property $\om\wedge\star\mu= \langle\om,\mu\rangle{\rm vol}$.} \end{p}
First expand $\om$ and $\mu$: $\om=\om_\alpha e^{\alpha_1}\wedge\cdots\wedge e^{\alpha_p}$, $\mu=\mu_\beta e^{\beta_1}\wedge\cdots\wedge e^{\beta_p}$. Now compute their inner product: $\langle \om,\mu\rangle=\om_\alpha\mu_\beta\langle e^{\alpha_1}\wedge\cdots\wedge e^{\alpha_p}, e^{\beta_1}\wedge\cdots\wedge e^{\beta_p}\rangle=\om_\alpha\mu_\beta\delta^{\alpha,\beta}\epsilon(\alpha_1)\cdots\epsilon(\alpha_p)$. Now $\star\mu=\pm\mu_\alpha e^{\alpha_{p+1}}\wedge\cdots\wedge e^{\alpha_n}$, and taking the wedge product with $\om$, one obtains $\om\wedge\star\mu=\pm\om_\alpha\mu_\beta\delta^{\alpha,\beta} e^{\alpha_1}\wedge\cdots e^{\alpha_n}$. Finally, $e^{\alpha_1}\wedge\cdots e^{\alpha_n}={\rm sign}(\alpha_1,\dots,\alpha_n){\rm vol}$, so we obtain the desired result.
\begin{p}{Calculate $\star d\om$ when $\om$ is a 1-form on $\R^3$.}\end{p}
$\star d\om$ is essentially the curl of $\om$. First expand $\om$ in an orthonormal basis $e^j$: $\om=\om_je^j$. $d\om=\partial_k\om_j dx^k\wedge dx^j$. Finally, $\star d\om= \partial_j\om_k\, {\rm sign}(j,k,\ell)dx^\ell=\partial_j\om_k \epsilon^{jk}_\ell dx^\ell.$ Note that $\epsilon(i)=1$ for all $i$ in this case.
\begin{p}{Calculate $\star d\,{\star}\om$ when $\om$ is a 1-form on $\R^3$.}\end{p}
This gives the divergence of $\om$. Expanding $\om$ in a basis as above, we obtain $\star\om=\om_j\epsilon^j_{k\ell}dx^k\wedge dx^\ell$. Applying $d$ gives $d\star\om=\partial_m\om_j\epsilon^j_{k\ell}dx^m\wedge dx^k\wedge dx^\ell$. Another $\star$ yields the final result: $\star d\star\om=\partial_m\om_j\epsilon^j_{k\ell}\epsilon^{mk\ell}$ and since $\epsilon^j_{k\ell}\epsilon^{mk\ell}=\delta^{jm}$, we have $\star d\star\om=\partial_j\om_k\delta^{jk}$, the divergence.
\begin{p}{Give $\R^4$ the Minkowski metric and the orientation in which $(dt,dx,dy,dz)$ is positively oriented. Calculate the Hodge star operator on all wedge products of $dx^\mu$'s. Show that on $p$-forms $\star^2=(-1)^{p(4{-}p)+1}$.} \end{p}
First we have the definition vol$=dt\wedge dx\wedge dy\wedge dz$. Using this we can make a table of the various starred wedge products.
\begin{tabular}{c|c}
$\omega$ & $\star\omega$\\\hline
$dt$ & $-dx\wedge dy\wedge dz$\\
$dx$ & $-dt\wedge dy\wedge dz$ \\
$dy$ & $dt\wedge dx\wedge dz$\\
$dz$ & $-dt\wedge dx\wedge dy$\\
$dt\wedge dx$ & $-dy\wedge dz$\\
$dt\wedge dy$ & $dx\wedge dz$ \\
$dt\wedge dz$ &$-dx\wedge dy$\\
$dx \wedge dy$ & $dt\wedge dz$ \\
$dx \wedge dz$ & $-dt\wedge dy$\\
$dy \wedge dz$ & $dt\wedge dx$\\
$dt\wedge dx\wedge dy$ & $-dz$ \\
$dt\wedge dx\wedge dz$ & $dy$\\
$dt\wedge dy\wedge dz$ & $-dx$\\
$dx\wedge dy\wedge dz$ & $-dt$
\end{tabular}\\
From the table we immediately see that the given condition is fulfilled.
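For example, reading the table twice gives
\[
\star\star dt=\star(-dx\wedge dy\wedge dz)=-(-dt)=(-1)^{1(4-1)+1}dt,
\qquad
\star\star(dt\wedge dx)=\star(-dy\wedge dz)=-dt\wedge dx=(-1)^{2(4-2)+1}dt\wedge dx,
\]
in agreement with the stated sign rule.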
\begin{p}{Let $M$ be an oriented semi-Riemannian manifold of dimension $n$ and signature $(s,n{-}s)$. Show that on $p$-forms $\star^2=(-1)^{p(n-p)+s}$.} \end{p}
$\om\wedge \star \om=\langle \om,\om\rangle {\rm vol}$ and $\star\om \wedge \star^2\om=\langle \star\om, \star\om\rangle{\rm vol}$, so $\star\om \wedge \star^2\om=(-1)^{p(n{-}p)}\star^2\!\om\wedge \star\om= \frac{\langle \star\om, \star\om\rangle}{\langle \om,\om\rangle}\,\om\wedge\star\om$. Hence $\star^2=(-1)^{p(n{-}p)}\frac{\langle \star\om, \star\om\rangle}{\langle \om,\om\rangle}$. To evaluate this expression, let $\om$ be a basis $p$-form $e^{i_1}\wedge\dots\wedge e^{i_p}$. Setting $\epsilon(\mu)=\langle e^\mu,e^\mu\rangle$, we have $\langle\om,\om\rangle=\prod_{j=1}^p \epsilon(i_j)$. On the other hand, $\star\om=\pm e^{i_{p+1}}\wedge\dots\wedge e^{i_n}$, so $\langle \star\om,\star\om\rangle= \prod_{j=p+1}^n \epsilon(i_j)$. Since all the terms in the denominator are $\pm 1$, we might as well multiply by $\langle\om,\om\rangle$ instead of dividing. We then obtain $\star^2=(-1)^{p(n{-}p)}\prod_{j=1}^n \epsilon(i_j)$. The latter term equals $(-1)^s$ as $s$ is the number of basis elements with norm $-1$.
\begin{p}{Let $M$ be an oriented semi-Riemannian manifold of dimension $n$ and signature $(s,n{-}s)$. Let $e^\mu$ be an orthonormal basis of 1-forms on some chart. Define the Levi-Civita symbol for $1\leq i_j\leq n$ by
\[ \epsilon_{i_1\dots i_n}=\left\{ \begin{array}{ll} {\rm sign}(i_1\dots i_n) & \textrm{all $i_j$ distinct}\\0 &{\rm otherwise} \end{array} \right. \]
Show that for any $p$-form $\om=\frac{1}{p!}\om_{i_1\dots i_p}e^{i_1}\wedge\dots\wedge e^{i_p}$ we have $(\star\om)_{j_1\dots j_{n{-}p}}=\frac{1}{p!}\epsilon^{i_1\dots i_p}_{j_1\dots j_{n{-}p}}\om_{i_1\dots i_p}$.} \end{p}
Using the results of exercise 64, we have $\star(e^{i_1}\wedge \cdots \wedge e^{i_p})=\epsilon(i_1)\cdots \epsilon(i_p)\epsilon^{i_1\dots i_p}_{i_{p+1}\dots i_{n}}e^{i_{p+1}}\wedge \cdots \wedge e^{i_n}$ (no sum). Now $(\star\om)_{j_1\dots j_{n{-}p}}=\epsilon(j_1)\cdots \epsilon(j_{n{-}p})\langle e^{j_1}\wedge\dots \wedge e^{j_{n{-}p}},\star \om\rangle$, so $(\star\om)_{j_1\dots j_{n{-}p}}=\frac{(-1)^s}{p!}\epsilon^{i_1\dots i_p}_{j_1\dots j_{n{-}p}}\om_{i_1\dots i_p}$.
\begin{p} {Show that the Maxwell equations $\nabla\cdot \vec{E}=\rho, \nabla\times \vec{B}-\frac{\partial \vec{E}}{\partial t}=\vec{j}$ can be rewritten as $\star_S\, d_S \star_S\! E=\rho$ and $-\partial_t E+\star_S \,d_S\star_S\! B=j$.} \end{p}
Start from $E=E_j dx^j$. For the remainder of the exercise, $\star$ means $\star_S$. So $\star E=\frac{1}{2}E_j \epsilon^{j}_{k\ell}dx^k\wedge dx^\ell$. Then $d\star\!E=\frac{1}{2}\epsilon^j_{k\ell}\partial_m E_j dx^m\wedge dx^k\wedge dx^\ell$, and finally $\star d\star\! E=\frac{1}{2}\epsilon^{j}_{k\ell}\epsilon^{mk\ell}\partial_m E_j=\partial_j E_j=\nabla\cdot\vec{E}$. Meanwhile $B=\frac{1}{2}\epsilon^j_{k\ell} B_j dx^k\wedge dx^\ell$. Thus $\star B=\frac{1}{2}\epsilon^j_{k\ell}\epsilon^{k\ell}_m B_j dx^m=B_j dx^j$.
Then $d\star\!B=\partial_k B_j dx^k\wedge dx^j$ and $\star d\star\!B=\epsilon^{kj}_\ell \partial_k B_j dx^\ell$, the components of which are simply $\nabla\times \vec{B}$.\\
\begin{p} {Show that on a semi-Riemannian manifold $M$ which can be decomposed into $\R\times S$, where $S$ is space, the general Maxwell equations $dF=0$ and $\star d\star F=J$ can be transformed into their ``usual'' appearance $d_S B=0, \partial_t B+d_SE=0, \star_Sd_S\star_SE=\rho,$ and $\star_Sd_S\star_SB-\partial_tE=j$.} \end{p}
Given the decomposition of $M$, we write $F=B+E\wedge dt$ where $B$ is the portion of $F$ defined only on $S$. Similarly $J=j-\rho dt$. The first equation then reads $dB+dE\wedge dt=0$. Observe that $dB=d_SB+\partial_t B\wedge dt$ while $dE\wedge dt=\partial_t E dt\wedge dt+d_S E\wedge dt$. Thus we have $d_S B=0$ and $\partial_t B+d_S E=0$; the first equation deals with three forms only on space, while the second involves three forms on space and time.
Assume further that the metric can be decomposed into $g=-dt^2+{}^3g$ {\small (can this always be done when the underlying space is a product? The tangent space can be decomposed, it seems clear, from the picture of vectors as arrows pointing tangentially to the surface. In the case of a product space, there are arrows for each manifold, and we take their formal product to get the tangent to the total manifold. But from the point of view of vectors as derivatives? I think it might be possible by showing that the value of the derivative is equal to making changes on each submanifold separately and then adding them. In other words, we have a curve $\gamma(t)$ on $M$, but since $M=\R\times S$, we can write this as $(\gamma_1(t),\gamma_2(t))$. Now $\gamma'(t)[f]=\frac{\rm d}{{\rm d}t}f(\gamma(t))=\frac{\rm d}{{\rm d}t}f(\gamma_1(t),\gamma_2(t))=\frac{\partial f}{\partial \gamma_1}\frac{\partial\gamma_1}{\partial t}+\frac{\partial f}{\partial \gamma_2}\frac{\partial\gamma_2}{\partial t}=\gamma_1'(t)[f]|_{\gamma_2(t)}+\gamma_2'[f]|_{\gamma_1(t)} \equiv(\gamma_1'(t),\gamma_2'(t))[f].$ That's what we do in $\R^n$, after all. I guess it's obvious once you map open sets of the manifold to $\R^n$. But this doesn't mean the metric has to be block diagonal and respect the split; after all that \emph{doesn't} always happen in $\R^n$.)}
Now let $\star_S$ be the Hodge dual on (differential forms on) $S$. Since $E$ is a one form on $S$, $\star E\wedge dt=\star_S E$ and similarly since $B$ is a two form on $S$, $\star B=-\star_S B\wedge dt$ (using the chart from 67). So $\star F=\star_S E-\star_S B\wedge dt$ and then $d\star F=d_S\star_S E+\star_S(\partial_t E)\wedge dt-d_S\star_S B\wedge dt$. Note that the first term is a three form on space, so $\star d_S\star_S E$ must be a one form on time, $\star d_S\star_S E=-(\star_S d_S\star_S E) dt$, where the minus sign can be determined again by the chart from 67. The second term is a three form on space and time, and due to the ordering we'll have $\star(\star_S(\partial_t E)\wedge dt)=-\partial_t E$. The last term is also a three form on space and time and we end up with $\star(d_S\star_S B\wedge dt)=-\star_S d_S\star_S B$. Thus $\star_S d_S\star_S E=\rho$ and $-\partial_t E+\star_S d_S\star_SB=j$.
\begin{p}{Show that in a Riemannian 4-dimensional manifold any 2-form $F$ can be written as a sum of self-dual and anti-self-dual parts, $F=F_++F_-$ for $\star F_\pm=\pm F_\pm$, if we take $F_\pm=(F\pm\star F)/2$.} \end{p}
Clearly $F=F_++F_-$.
And $\star F_\pm=(\star F\pm\star^2 F)/2=(\pm F+\star F)/2=\pm(F\pm\star F)/2=\pm F_\pm$.
\begin{p}{Show that in a Lorentzian 4-manifold any 2-form $F$ can be written as a sum of self-dual and anti-self-dual parts, $F=F_++F_-$ for $\star F_\pm=\pm i F_\pm$.} \end{p}
Take $F_\pm=(F\mp \star i F)/2$. Then $\star F_\pm=(\star F\mp\star^2 i F)/2=(\pm i F+\star F)/2=\pm i(F\mp \star i F)/2=\pm i F_\pm$.
\begin{p}{Show that the equations $\star_S E=iB$ and $\star_S B=-i E$ are equivalent and that they both hold if at every time $t$ we have $E=E_i dx^i$ and $B=-(i/2)\varepsilon^j_{\phantom{j}k\ell}E_j dx^k\wedge dx^\ell$.} \end{p}
Applying $\star_S$ turns one equation into the other, so they are equivalent. From exercise 69 we know that $\star_S E=(1/2)\varepsilon^j_{\phantom{j}k\ell}E_j dx^k\wedge dx^\ell$ which is clearly equal to $iB$.
\begin{p}{Show that the second Maxwell equation $\partial_t B+d_SE=0$ leads to ${}^3k\wedge E=k_0 B$.} \end{p}
Start with $d_S E=d_S(e^{ik_\mu x^\mu}E_j dx^j)=\partial_k(e^{ik_\mu x^\mu}E_j)dx^j\wedge dx^k=i k_k E_jdx^j\wedge dx^k=-i{}^3k\wedge E$. Meanwhile $\partial_t B=ik_0 B$, so $\partial_t B+d_SE=i(k_0 B-{}^3k\wedge E)=0$, the desired result.
\begin{p}%76
{Show that ${}^3k\wedge E=-ik_0 \star_s E$ implies $k_\mu k^\mu=0$.} \end{p}
Start from $\langle {}^3k\wedge E,{}^3k\wedge E\rangle=k_0^2\langle\star_s E,\star_s E\rangle$. Note that the inner product should be antilinear in one of its inputs. Using the coordinate expressions for $E$ and ${}^3k$ we have $\langle {}^3k\wedge E,{}^3k\wedge E\rangle=k_\ell k_{\ell'}^*E_j E_{j'}^*\langle dx^\ell\wedge dx^j,dx^{\ell'}\wedge dx^{j'}\rangle=k_\ell k_{\ell'}^*E_j E_{j'}^*(\delta^{j j'}\delta^{\ell\ell'}-\delta^{j'\ell}\delta^{j\ell'})=\langle {}^3k,{}^3k\rangle\,\langle E,E\rangle-|\langle {}^3k,E\rangle|^2=\langle {}^3k,{}^3k\rangle\,\langle E,E\rangle$, since $E$ is orthogonal to ${}^3k$. On the other hand, $k_0^2\langle\star_s E,\star_s E\rangle=\frac{k_0^2}{4}\varepsilon^j_{k\ell}\varepsilon^{j'}_{k'\ell'}E_jE_{j'}\langle dx^k\wedge dx^\ell,dx^{k'}\wedge dx^{\ell'}\rangle=\frac{k_0^2}{4}\varepsilon^j_{k\ell}\varepsilon^{j'}_{k'\ell'}E_jE_{j'}(\delta^{kk'}\delta^{\ell \ell'}-\delta^{k\ell'}\delta^{k'\ell})=k_0^2\langle E,E\rangle$. Thus, $\langle {}^3k,{}^3k\rangle=k_0^2$. Using the metric, we have $k_0^2=-k_0k^0$, and the other components are unchanged, so $k_\mu k^\mu=0$.
\begin{p}%77
{Show $\vec{E}=(0,e^{i(t-x)},-ie^{i(t-x)})$, $\vec{B}=i\vec{E}$ satisfy the vacuum Maxwell equations. [corrected from the book]} \end{p}
Since $\vec{\nabla}\cdot \vec{E}=0$ (by inspection), $\vec{\nabla}\cdot \vec{B}=0$, too. Now $\vec{\nabla}\times \vec{E}=\vec{E}$ and $\partial_t \vec{B}=i\partial_t \vec{E}=-\vec{E}$, so $\vec{\nabla}\times \vec{E}+\partial_t \vec{B}=0$. Finally, $ \vec{\nabla}\times \vec{B}=i\vec{E}$, so $\vec{\nabla}\times \vec{B}-\partial_t \vec{E}=0$.
\begin{p}%{78}
{Prove that all self-dual and anti-self-dual plane wave solutions are left and right circularly polarized, respectively.} \end{p}
Without loss of generality we can take ${}^3k$ to point in the $x$ direction. Then the self-dual case is worked out in detail in the book. Left circular polarization can be recognized from the form of $E$, namely that the polarization vector is proportional to $(1,-i)$; right circular polarization is given by $(1,i)$. In the anti-self-dual case $\star_S E=-iB$ or, equivalently, $\star_SB=iE$. The first Maxwell equation $d_SB=0$ reads as before, $B\wedge {}^3k=0$, so $\langle E,{}^3k\rangle=0$ still holds.
The second equation, $\partial_t B+d_SE=0$, leads to ${}^3k\wedge E=k_0B$ as before. Rewriting this in terms of $E$ we obtain ${}^3k\wedge E=ik_0\star_SE$, which again leads to $\langle {}^3k\wedge E,{}^3k\wedge E\rangle=k_0^2\langle\star_s E,\star_s E\rangle$, as in exercise 76. Thus we know that $k$ must be lightlike, so we can assume without loss of generality that $k=dt-dx$. $E$ must be orthogonal to ${}^3k$, so $E=a dy+b dz$. We now use the second Maxwell equation to determine $a$ and $b$. ${}^3k\wedge E=-(a dx\wedge dy+b dx\wedge dz)$, while $ik_0\star_SE=i(a dz\wedge dx+b dx\wedge dy)$, so $b=ia$, which is the condition for right circular polarization.
\begin{p}%{79}
{Let $P:\R^4\rightarrow\R^4$ be the parity transformation $P(t,x,y,z)=(t,-x,-y,-z)$. Show that if $F$ is a self-dual solution of Maxwell's equations, the pullback $P^*F$ is an anti-self-dual solution, and vice versa.} \end{p}
We don't really need to show both, since the vice versa part follows from $P^*P^*F=F$. To see that $P^*F$ takes a self-dual solution to an anti-self-dual solution, note that $P^*E=-E$, while $P^*B=B$. (This is why $B$ is called an axial vector sometimes.) If $F=E\wedge dt+B$ was self-dual to begin with, meaning $\star_SE=iB$, the new $F'=E'\wedge dt'+B'=-E\wedge dt+B$ is anti-self-dual: $\star_SE'=-\star_SE=-iB=-iB'$.
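To spell out the sign flip in components: since $P^*dt=dt$ while $P^*dx^i=-dx^i$ for the spatial coordinates,
\[
P^*(E_i\,dx^i)=(E_i\circ P)(-dx^i),
\qquad
P^*\left(\frac{1}{2}B_{ij}\,dx^i\wedge dx^j\right)=\frac{1}{2}(B_{ij}\circ P)(-dx^i)\wedge(-dx^j)=\frac{1}{2}(B_{ij}\circ P)\,dx^i\wedge dx^j,
\]
so the electric part changes sign under parity while the magnetic part does not.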
{ "alphanum_fraction": 0.6892872724, "avg_line_length": 73.5072886297, "ext": "tex", "hexsha": "921ffcae5e9b240052726d8c01e8cbf55bdb0ac9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "joerenes/Baez-Muniain-solutions", "max_forks_repo_path": "src/I5.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_issues_repo_issues_event_max_datetime": "2017-04-13T20:19:44.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-13T12:15:30.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "joerenes/Baez-Muniain-solutions", "max_issues_repo_path": "src/I5.tex", "max_line_length": 571, "max_stars_count": 3, "max_stars_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "joerenes/Baez-Muniain-solutions", "max_stars_repo_path": "src/I5.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-19T18:18:34.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-13T12:10:03.000Z", "num_tokens": 9011, "size": 25213 }
\documentclass[12pt,letterpaper,oneside,final]{memoir}

% ---------- Options -----------------
%
% NYU GSAS Requirements
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{array}
\usepackage{bbm}
\usepackage{booktabs}
\usepackage{caption}
\usepackage{color, colortbl}
\usepackage{dsfont}
\usepackage{enumitem}
\usepackage{epstopdf}
\usepackage[gen]{eurosym}
\usepackage{float}
\usepackage[left=1in,right=1in,top=1in,bottom=1in,includefoot,heightrounded,footskip=.75in]{geometry}
\usepackage[colorlinks=true, citecolor=blue, urlcolor=blue, linkcolor=blue, backref=page]{hyperref}
\usepackage{kpfonts}
\usepackage{lipsum}
\usepackage{longtable}
\usepackage{makeidx}
\usepackage{mathpazo}
\usepackage{mathtools}
\usepackage{morefloats}
\usepackage{multirow}
\usepackage[round]{natbib}
\usepackage{pifont}
\usepackage{placeins}
\usepackage{rotating}
\usepackage{setspace}
\usepackage{subcaption}
\usepackage{tabularx}
\usepackage{titling}
\usepackage{tkz-tab}
\usepackage{units}
\usepackage{verbatim}
\usepackage{wasysym}

% Misc packages
\usepackage{afterpage}
\usepackage{amsfonts}
\usepackage[english]{babel}
\usepackage{bigdelim}
\usepackage{bookmark}
\usepackage{epsfig}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage{latexsym}
\usepackage{listings}
\usepackage{pdflscape}
\usepackage{pgf}
\usepackage{tgtermes}
\usepackage{threeparttable}
\usepackage{titling}
\usepackage{tikz}
\usepackage{vmargin}

% New commands
\newcommand{\cmark}{\ding{51}} % Nice checkmark for tables
\newcommand{\xmark}{\ding{55}} % Nice X mark for tables

% Theorems etc
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}{Definition}
\newtheorem{result}{Result}

% Misc new commands
\def\citeapos#1{\citeauthor{#1}'s (\citeyear{#1})}
\newcommand{\var}{\mbox{\textit Var\/}}
\newcommand{\wh}{\widehat}
\newcommand{\wt}{\widetilde}
\newcommand{\ti}{\tilde}
\newcommand{\TODO}[1]{ {\color{red} TODO: #1} }
\newcommand{\NOTE}[2]{ {\color{blue} NOTE (author: #1): #2} }

% Other personalizations
\setitemize{noitemsep}
\setenumerate{noitemsep}

% Footnote without marker
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}

% Fix position for page numbers
\addtolength{\textheight}{-60pt}
\pagestyle{plain}

% Needed to create list of appendices
% disable everything after the POST hook, but restore after BIB
\cftinsertcode{POST}{ \setcounter{tocdepth}{-1} }
\cftinsertcode{BIB}{ \setcounter{tocdepth}{0} }

\newcommand\tableofcontentsapps{
\begingroup
% disable first part
\cftinsertcode{PRE}{ \setcounter{tocdepth}{-10} }
% enable down to subsection within appendices
\cftinsertcode{POST}{ \setcounter{tocdepth}{1} }
% disable bibliography
\cftinsertcode{BIB}{ \setcounter{tocdepth}{-10} }
\renewcommand\contentsname{List of Appendices}
\tableofcontents
\endgroup
}

% OPTIONS END, DOCUMENT STARTS HERE
\begin{document}

\cftinserthook{toc}{PRE}

\fussy
\vbadness=10000 % badness above which bad vboxes are shown. (Default = 10000?)

\frontmatter

\hyphenation{}

\DoubleSpacing

\begin{center}
% ----------
% Title Page
% ----------
\thispagestyle{empty}
\vspace*{15mm}
Essays in Macro and Labor Economics
\vspace{20mm}

by\\
\vspace{10mm}
Chase G.
Coleman \\
\vspace{10mm}
A dissertation submitted in partial fulfillment\\
of the requirements for the degree of\\
Doctor of Philosophy\\
Department of Economics\\
Stern School of Business\\
New York University\\
May, 2019
\end{center}
\vspace{20mm}
\begin{flushright}
{\rule[0pt]{60mm}{0.1mm}}\\
%rule[raise-height]{width}{height} * raise-height specifies how high to raise the rule (optional) * width specifies the length of the rule (mandatory) * height specifies the height of the rule (mandatory)
Laura L. Veldkamp\\
\vspace{7.5mm}
% {\rule[0pt]{60mm}{0.1mm}}\\
\end{flushright}
\newpage

% ---------
% COPYRIGHT
% ---------
%\thispagestyle{empty}
%
\begin{center}
Copyright © Chase G. Coleman \\
% All Rights Reserved, 2019
\end{center}
%\newpage
%
%\thispagestyle{empty}
%
% ----------
% DEDICATION
% ----------
% \chapter{Dedication}
%
% \vspace*{\fill}
% \begin{center}
% \begin{minipage}{.6\textwidth}
% I dedicate this dissertation to
% \end{minipage}
% \end{center}
% \vfill
\newpage

% ---------------
% ACKNOWLEDGMENTS
% ---------------
\chapter{Acknowledgments}

First and foremost, I am grateful for a truly exceptional committee consisting of my chair, Laura Veldkamp, and committee members Thomas Sargent, Michael Waugh, and Stan Zin. They have each played a unique role in my development as an economist by helping me learn to direct my curiosity, teaching me the tools needed to do economic research, and by providing encouragement when I wasn't sure which steps were next. My life is greatly improved because of the time they were willing to share with me. Dave Backus, though no longer with us, also played a vital role in my development as an economist and as a person --- I often miss him and frequently reflect on the lessons I learned from him.

I am grateful for the collaborative environment that I was able to work in while here at NYU and am lucky to have learned an incredible number of things from various students and professors, including, but not limited to, Felipe Alvarez, Katka Borovickova, Tom Cooley, Richard Davies, Fiona Feng, Axelle Ferriere, Sonia Gilbukh, Victoria Gregory, Timothy Hills, Malika Krishna, Elliot Lipnowski, Roxana Mihet, Simon Mongey, Jonathan Payne, Alberto Polo, Kim Ruhl, Maher Said, Daniel Stackman, Balint Szoke, Venky Venkateswaran, Larry White, and Peifan Wu. The Sargent Reading Group has served as an effective tool for me to learn about a wide range of topics in economics --- I am grateful to those who have helped organize it while I attended (Joseba Martinez and Balint Szoke) and to each of the participants who took it seriously by coming prepared with interesting papers and asking insightful, though sometimes difficult, questions. Outside of NYU, I have also benefitted from a number of wonderful colleagues, including, but not limited to, Rick Evans, Lilia Maliar, Serguei Maliar, Matt McKay, Kerk Phillips, Jim Savage, Simon Scheidegger, John Stachurski, and Natasha Watkins.

This work is dedicated to my wife, Jessica Coleman, and my parents, Gregory and Stephanie Coleman. Each of these three individuals has encouraged me in my education, driven and motivated my curiosity, and supported the paths that these pursuits have led me to take, often at the price of their own convenience.

\newpage

% --------
% ABSTRACT
% --------
\chapter{Abstract}
\DoubleSpacing

This dissertation consists of three chapters. Chapter 1, `Pareto weights as wedges in two-country models', was written with Dave Backus, Axelle Ferriere, and Spencer Lyon.
In this chapter we study how, in models with recursive preferences, endogenous variation in Pareto weights would be interpreted as wedges from the perspective of a frictionless model with additive preferences. We describe the behavior of the relative Pareto weight in a two-country world and explore its interaction with consumption and the real exchange rate.

Chapter 2, `Global Solution Methods for Macroeconomic Models', was written jointly with Spencer Lyon, Lilia Maliar, and Serguei Maliar. This chapter first discusses 7 global solution methods in the context of a simple neoclassical growth model and then introduces a new 8th global solution method in the context of a non-trivial New-Keynesian model. We first highlight how algorithm choice, even more than programming language, plays an important role in the speed at which a model can be solved. The new method introduced in this paper is capable of solving a New-Keynesian model with an 8-dimensional state space in mere seconds, which makes it possible to perform model calibration.

In Chapter 3, `The Cost of Income-Driven Repayment for Student Loans,' I investigate a form of student loan repayment known as income-driven repayment. This type of loan repayment has received a significant amount of political support recently in the U.S., and there are new proposals that would make it so that all future student loans would be repaid under income-driven plans. I use a detailed model that incorporates pre-, intra-, and post-college decisions to explore how income-driven repayment might change enrollment and debt accumulation decisions, and how these changes might affect the required government subsidy to support the student loan program. I find that income-driven repayment would increase the cost of running the student loan program by 15\% and that a significant portion of this cost comes from debt accumulated by those who are unlikely to graduate from college.
\newpage

% -----------------
% TABLE OF CONTENTS
% -----------------
\renewcommand*{\cftappendixname}{Appendix\space}
\renewcommand*{\contentsname}{Table of Contents}
\setcounter{tocdepth}{1}% chapters and above
%this removes TOC listing from TOC
\begin{KeepFromToc}
\tableofcontents*
\end{KeepFromToc}
\clearpage
\newpage
\clearpage

% ---------------
% LIST OF FIGURES
% ---------------
\listoffigures
\newpage
\clearpage

% --------------
% LIST OF TABLES
% --------------
\listoftables
\newpage
\clearpage

% ------------------
% LIST OF APPENDICES
% ------------------
\tableofcontentsapps

\DoubleSpacing

\mainmatter

\clearpage
\cleardoublepage
\phantomsection
%\addcontentsline{toc}{chapter}{Introduction}

% ------------
% Introduction
% ------------
% \chapter*[Introduction]{Introduction} \label{ch-intro}

\setcounter{tocdepth}{1}% chapters and above

% CHAPTER 1
\chapter{Pareto weights as wedges in two-country models}
\input{sections/BCFL/intro}
\input{sections/BCFL/two_countries}
\input{sections/BCFL/calibration}
\input{sections/BCFL/solution}
\input{sections/BCFL/equilibrium}
\input{sections/BCFL/conclusion}
\newpage
\clearpage

% CHAPTER 2
\chapter{Global Solution Methods for Macroeconomic Models}
\input{sections/CLMM/intro}
\input{sections/CLMM/languageintro}
\input{sections/CLMM/growthmodel}
\input{sections/CLMM/nkmodel}
\input{sections/CLMM/conclusion}

% CHAPTER 3
\chapter{Student Loans}
\input{sections/StudentLoans/intro}
\input{sections/StudentLoans/background}
\input{sections/StudentLoans/model}
\input{sections/StudentLoans/calibration}
\input{sections/StudentLoans/results}
\input{sections/StudentLoans/conclusion}
\newpage
\clearpage

%%% APPENDICES
\begin{appendices}
\appendix
\addcontentsline{toc}{chapter}{Appendices}
\cftinserthook{toc}{POST}

% APPENDIX 1
\chapter{Supplementary Material for Chapter 1} \label{sec:Appendix1}
\input{sections/BCFL/appendix.tex}
\input{sections/BCFL/tables_and_figs.tex}

% APPENDIX 2
\chapter{Supplementary Material for Chapter 2} \label{sec:Appendix2}
\input{sections/CLMM/appendix.tex}
\input{sections/CLMM/tables_and_figs.tex}
\newpage
\clearpage

% APPENDIX 3
\chapter{Supplementary Material for Chapter 3} \label{sec:Appendix3}
\input{sections/StudentLoans/appendix.tex}
\input{sections/StudentLoans/tables_and_figs.tex}

\end{appendices}
\newpage
\clearpage

\backmatter

%%% BIBLIOGRAPHY
\cftinserthook{toc}{BIB}
\begin{OnehalfSpace}
\bibliographystyle{ecta}
\bibliography{combined}
\end{OnehalfSpace}

\end{document}
{ "alphanum_fraction": 0.7291876469, "avg_line_length": 28.5071770335, "ext": "tex", "hexsha": "475083cea797718e6fc60f85be0bdd26197e1db5", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-05-03T18:48:22.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-31T22:54:14.000Z", "max_forks_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cc7768/Dissertation", "max_forks_repo_path": "ms/dissertation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cc7768/Dissertation", "max_issues_repo_path": "ms/dissertation.tex", "max_line_length": 232, "max_stars_count": null, "max_stars_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cc7768/Dissertation", "max_stars_repo_path": "ms/dissertation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3362, "size": 11916 }
\documentclass[output=paper]{langsci/langscibook}
\author{Louis de Saussure\affiliation{Université de Neuchâtel}}
\title{The theory of meaning in René de Saussure's works}
\ChapterDOI{10.5281/zenodo.1306472} %will be filled in at production
\abstract{\noabstract}
\maketitle
\begin{document}
\label{sec:semantics}

\lettrine[loversize=0.1, nindent=0em]{F}{or a contemporary reader,} the works of René de Saussure (henceforth simply ``René'') presented in the present volume are strongly suggestive of certain trends in modern syntactic theorizing. Even morphologically simple words are resolutely decomposed into structured combinations, reminiscent of the approach of Generative Semantics in the 1970s and some more recent Minimalist work. The positing of component atoms of meaning with no content beyond that of the basic lexical categories suggests a parallel with the ``little \emph{v, n, a}'' elements of some current syntactic analyses. Pursuing these similarities, though, would be rather anachronistic, since René de Saussure's views on word structure are not articulated against the background of a theory of syntax above the word level. In the present commentary, therefore, the focus will be more narrowly on the relation between words (simple and complex) and their meanings.
In short, René de Saussure’s theory of lexical meaning is compositional in a radical reductionist sense on which all words are reducible to a composition of minimal ``atoms''. These atoms are themselves realizations of three overarching categories signifying \emph{being}, \emph{property} (quality) and \emph{action}. These basic notions are grammatical ones, as they map onto the three categories of noun, adjective and verb; but they are ultimately semantic in essence, and are labelled grammatical ``ideas''. This assumption that grammatical categories are fundamentally meaningful in themselves, and not mere abstract unsaturated slots to be filled with content, is a central claim of the theory and quite an original one. Elements of different types combine together to form new meanings: therefore, a word may be about a substance but nonetheless incorporate a verbal notion, as in \emph{couronnement} `coronation’ which is a noun but still embeds a notion of action indicated by a particular morph or \emph{atom} (`e’) which is verbal in essence. More precise semantic contents are various add-ons which build upon the results of the combination of basic semantic categories.
René’s approach can be linked to more classical views on language tracing back to the Stoics and is opposed in many ways to his brother Ferdinand’s systemic view. However it is also reminiscent of structuralist approaches to such semantics as \posscitet{hjelmslev43:prolegomena} functional conception. The first text \citep{r.desaussure11:formation} is the most developed but the second one \citep{r.desaussure19:structure.logique} adds very valuable elements of precision, elaboration and sometimes amendments to the original theory.
René’s conception of language in general is oriented towards reductionism and universality. Although his assertions in the 1911 book are not claimed to be universally valid (he admits implicitly that his work might not apply to all languages), it has an overall universalist orientation, and therefore we expect him to assume some universalism in semantics in particular.
In the 1919 text, he clarifies his ambitions to adopt the ``international'' point of view on natural languages in order to find generalities (later on to be put to work in the construction of artificial languages like Esperanto) \citep[5]{r.desaussure19:structure.logique} and he actually concludes with a strong claim about the universality of the few basic overarching semantic-grammatical categories that he identifies \citep[26]{r.desaussure19:structure.logique}.

\section{The theory of meaning in the scientific context}
\label{sec:theory-of-meaning}

René claims at the beginning of his book: “the logical principles of the formation of words are thus the same for all languages” but further adds a nuance: ``\ldots or at least for all [languages] that start from the same primitive elements” \citep[4]{r.desaussure11:formation}. Unfortunately, he does not elaborate any further on the distinction between types of languages in this respect before looking at some (scarce) crosslinguistic data in the 1919 text where he strengthens his universalist claim.
This is a conception that departs seriously from the more structuralist and relativist views that were soon to emerge in the structuralist trend based on some interpretations of his brother Ferdinand's (\citeyear{saussure16:cours-original}) \emph{Course in General Linguistics}. This has implications at the level of semantics and places René on the side of the ``naturality of meaning'' hypotheses which relate categorial language to categorial thought without assuming the prevalence of language but rather that of mind (thought) as bearing universal patterns, while Ferdinand’s theory is rather that languages have irreconcilable semantic systems --- all built, however, upon a similar and thus natural mechanism, the ``faculty'' of language.
It is clear in the 1911 book that René chooses to ignore linguistic variation as much as possible, and this attitude is also palpable in his conception of meaning. His general approach is one of simplification: he aims at reducing the principles in play to the strictest possible minimum (in striking similarity with Ferdinand, who goes so far as to posit only one overarching axiom, ‘\emph{langue} is a system of signs’). It is noteworthy that Ferdinand’s views also disregard linguistic variation, but only at the level of dialectal differences. René chooses to overlook differences across languages in general as much as possible. However, in the 1919 text, some contrasts are made and he takes Ferdinand’s own examples (such as the famous \emph{mouton} / \emph{sheep} / \emph{mutton}) in order to refine some of the claims of the 1911 text. Doing so, he explains that general principles have to be understood precisely as general, providing the big picture, while all fine-grained differences in the empirical reality (“in practice”: \citealt[15]{r.desaussure19:structure.logique}) are to be purposely ignored.
In a number of respects, his theory reminds one of the Port Royal grammar \citep{arnauld-lancelot1660:port-royal} and more generally of the formal, logicist, view of semantics that begins with the Stoics and which his brother Ferdinand precisely opposed. As a reminder: for the classical view, generally speaking, one ``sign'' (word or other morphological unit) means unambiguously one idea, and one grammatical category expresses one and only one particular notion --- a view generally labelled that of ‘logic-grammar parallelism’.
This conception traces back at least to Augustine’s very influential works on language (in particular \textsl{De Dialectica}\footnote{\protect\citealt{augustine:de-dialectica}.} and \textsl{De Doctrina Christiana}\footnote{\protect\citealt{augustine:de-doctrina-christiana}.}) which themselves refine the Stoics' semiotic theory. In short, according to the classical approach initiated notably by Augustine, language is a set of public signs which give a mental access to otherwise private ``thoughts''. The Stoics' semiotic theory already assumed that a sign in general is a perceivable element that triggers an inference of a non-perceivable one. On that view, the inference from the perceivable to the non-perceivable is allowed either by causal rules, as when one infers fire from smoke, or by conventions, as when one infers a thought on the basis of a perceivable linguistic string. These conventions are based on general principles that play out on each level of language compositionality. To linguistic sentences correspond logical propositions, to ``words'' -- not only words in the conceptual lexicon but also non-inflectional affixes -- correspond concepts (``conceptions'') of various kinds depending on the grammatical category. Finally, functional categories and in particular ``grammatical words'', such as conjunctions, correspond to types of mental operations.
With regard to words proper, the logical-grammatical parallelism approach classifies concepts, or ``conceptions'', mainly as substances (objects), signified by ``substantive nouns'' (nouns) and as accidents, or qualities, i.e. properties predicated about the substance (adjectives). In this tradition, another important dimension is the reductionist conception of meaning, where each meaningful combination is actually a synthesis from more general but also more developed notions. For example, a verb such as \emph{briller} `to shine' is reducible to an ontological hidden verb of being, and therefore carries existentiality: \emph{briller} is a linguistic reduction of a notion of \emph{être brillant} `be shining', actually `be [sun-shining]', so that \emph{Le soleil brille} ‘The sun shines’ actually means ‘The sun is being sun-shining’.
René, knowingly or not, adopts a similar approach in several respects and would probably be happy to be labelled a Stoic, which may not be very surprising given that he was a mathematician. His brother Ferdinand, on the contrary, fiercely rejected this logical tradition, at least in the formulation of the \textsl{Course in General Linguistics}. There he famously says that the classical conception, which “supposes ready-made thoughts pre-existing to words”, is a “simplistic view” \citep[97]{saussure:cours}. Even though it is usually assumed that Ferdinand de Saussure had no real semantic theory of his own, he had at least a theory about the source of lexical meaning: each word or ``sign'' is a mental pairing of a concept with an \emph{image acoustique}, an imprint of sounds in memory, and the meaning of that sign results from the complex relations of oppositions and associations it enters into within the whole system of interdependent signs -- to be later described as a ``structure'': the \emph{langue}.\largerpage
For Ferdinand, meaning exists only as it results from ``value'', that is, from differentiation within the system.
This postulate derives from his thorough knowledge of the history of languages, which he views as involving breakings off of previous, more general signs, so that new signs begin to exist through oppositive relations with one another. On that view, whatever the degree of morphological compositionality eventually involved (as in the famous example of \emph{poirier} as composed of \emph{poire} `pear' and -\emph{ier} `tree'; cf. \sectref{sec:rene-vs-ferdinand} above), the whole lexical meaning stems from this dynamic of oppositive relationships. For René, on the contrary, compositionality is central to the meaning of words, in line with the former idea that words evoke \emph{ex-ante} ideas and notions. Where Ferdinand insists on the contrasts between languages, connecting with more cultural and conventionalist views about meaning and reference, René insists on the similarities between words across languages or at least across the languages that he compares (mostly French and German, but also English and a few others in the 1919 text). Furthermore, René radically opposes his brother’s view that meaning depends on the linguistic environment, whether real (``syntagmatic'' relations in Ferdinand’s terminology) or abstract (``paradigmatic'', associative relations). The following quote, which uses some of his brother’s own terminology, seems precisely directed against Ferdinand’s view: “The logical analysis of words is only possible if the symbols with which we work are invariant elements; thus the sense, the value of an atom, must depend only on itself and not at all on the sense or the value of the atoms that surround it” \citep[8]{r.desaussure11:formation}. While Ferdinand has a top-down view of meaning, imposed by the idea that the ``value'' of any element itself involves the whole ``system of signs'' within a given language, which then differs radically from those of other languages, René has a bottom-up conception of meaning where blocks with fixed, invariant and \emph{a priori} meanings are combined together in the making-up of more complex meanings. He has a clearly analytic methodology which consists in explaining “compound words by simple words, and simple words by the basic words” \citep[21]{r.desaussure19:structure.logique}. \section{The elements of meaning} \label{sec:elements-meaning} His notion of ``atoms'' of meaning which combine together in order to generate more complex ``molecules'' is interestingly comparable to what Louis \citet{hjelmslev43:prolegomena} would later propose as a theory of structural functional semiotics, although this was presented as a continuation of Ferdinand de Saussure’s pre-structuralism. The technical notions of structural semantics such as \textsc{semes} (atoms of meaning or \emph{semantic traits}), \textsc{sememes} (aggregations of semes corresponding to a lexical item), \textsc{archisememes} (those particular semes that have overarching value in a semantic paradigm), seem also to resonate with René’s approach of ``molecular'' meaning, even though René’s notion of atom is about morphs whereas the structuralist notion of semantic traits is about abstract concepts, not linguistic realizations. However, whereas structural semantics assumes that semantic elements combine together within a system of oppositions, mirroring structural phonology, René does not hold that ``atoms'' should exist because of some system of oppositions relative to one particular language. 
What René advocates is rather the idea that ``molecules'' are like sets of ideas, each of which is realized linguistically by an ``atom''. When none of these atoms is sufficient to fully express a particular idea, a morphological composition of the basic atoms is needed. This, interestingly, is again comparable to some extent to what mainstream structural semantics would propose. For René, there are two kinds of words: simple words and compound words, the latter being either compounds of autonomous words or morphological constructions involving affixes. Since simple words and affixes fall within the same general notion of bearers of simple meanings --- that is, both affixes and words have a single, atomic meaning --- the two morphological categories reduce to the same general semantic type. This approach is a typical reductionist conception of language which ignores cross-linguistic disparities as much as possible. The only difference between simple words and affixes is that, according to René, affixes tend to represent more general ideas than lexical roots, since, as he ventures to suggest, the more general an idea, the more frequent it is in discourse, and the more frequent a meaning, the more likely it is to aggregate in an affix. This is of course a debatable claim, but it does in a way prefigure some approaches within the contemporary conceptions of grammaticalisation and lexicalisation, which suggest a general pattern for the production by a language of grammatical morphemes on the basis of archaic conceptual lexical words. These atoms, in any case, are considered by René the ‘basic material for word formation’ \citep[5]{r.desaussure11:formation} and are assumed to be stable across contexts and also across compositions. Their individual content is supposedly known and identified. René insists in a footnote that this principle of invariability of atoms is not about their form but about their meaning, but his comments on the formal variation of morphemes show substantial ignorance both of the nature of allomorphic variation and of linguistic diversity \citep[6]{r.desaussure11:formation}. Just as for the Port-Royal grammarians and for most of the formal tradition in semantics, meaning according to René is fundamentally compositional, not inferential. He even dismisses the need for ‘rules of derivation’ \citep[8]{r.desaussure11:formation} (but the picture is somewhat different in the 1919 text, see below). However, intellectually speaking, his approach is organized in a slightly different way than what existed on the market at his time. Notably, his view of the way in which meanings combine is an original one. This is most of all because words, according to him, involve not only hierarchical meanings (i.e. the meaning of a word includes the meaning of the superordinate, as \emph{horse} means ‘horse’ but also ‘animal’ and so on\footnote{That René takes the example of \emph{horse} ironically echoes Ferdinand's mention \citep[97]{saussure:cours} of the word \emph{horse} when he criticizes the classical tradition.}). Furthermore, the top-most superordinate meanings are actually those of abstract grammatical categories \emph{themselves}, breaking into pieces the traditional distinction that opposes conceptual-declarative and grammatical-functional meaning. More precisely, for René de Saussure, the meaning of a word has several layers: an ``explicit'' meaning (the particular idea) and a series of implicit meanings (the more general ideas) \citep[10]{r.desaussure11:formation}.
Interestingly, the most general, the most abstract idea is a \emph{grammatical} one. ‘Horse’ involves not only ‘mammal’, ‘vertebrate’, ‘animal’ (which all have to be substantive, \citealt[13]{r.desaussure11:formation}) but even the ``substantive idea'' itself. There are three such fundamental, overarching grammatical elements of meaning: the nominal idea (ontologies), the adjective idea (qualities, properties) and the verbal idea (states of affairs and (other) actions). The ``nominal idea'' corresponds to the abstract notion of ``being'' itself \citep[10]{r.desaussure11:formation}; this resonates again with the philosophical tradition of Port-Royal. Through this abstract notion of being, which is semantically attached to the grammatical category as a whole and to all its members, a word like \emph{horse} therefore entails a meaning of ‘some being’. This ``nominal idea'' is implied not only by concrete real beings such as ‘horse’ but also by abstractions like ``theory'' or ``type''. Therefore, it is not only the case that substantives indicate one class of concepts (substances) and adjectives another one (accidents or qualities), as in traditional grammar, but also, and much more interestingly, that there is a grammatical notion that semantically encodes the very notions of being (substance) and of quality, as top-most overarching semantic elements. And of course, the same holds for the ``verbal idea''. He explains: “The nominal idea, the adjectival idea and the verbal idea are thus ideas completely like other ideas: they are simply those of our ideas that are the most general and as a consequence, the most abstract. […] I will call them grammatical ideas” \citep[12]{r.desaussure11:formation}. These grammatical ideas are represented in language by a number of linguistic items, the basic grammatical atoms, which bear only the function of evoking the grammatical idea. There may be various linguistic markers of one grammatical idea, but still they all encode that particular grammatical idea. An adjectival suffix, for example, encodes the grammatical idea of quality (‘hum-\emph{an}’), since all morphological indicators of a grammatical category encode the semantic notion behind it. In some cases a particular grammatical item operates a shift of categorization, as when \emph{riche} ‘rich’ becomes \emph{un riche} ‘a rich [person]' \citep[19]{r.desaussure11:formation}. In such cases, as with \emph{ce} and \emph{le} (see \citealt[28]{r.desaussure11:formation}), he explains that the determiner is the ``nominalizing atom'', the maker of substantivity, which might recall a very contemporary view (introduced in \citealt{abney:thesis}) on which noun phrases are actually headed grammatically by the determiner, not the noun (in René's theory the definite determiner is more abstract than the indefinite: cf. \citealt[27--28]{r.desaussure11:formation}). \pagebreak The ``adjective idea'' can be found not only in grammatical adjectives (typically because of a variety of affixes which themselves bear the ``adjective idea''), but also, curiously, in some prepositions, which René treats as ``dissociated suffixes'', such as French \emph{de} ‘of’ in \emph{d’homme} ‘of man’ where it ``plays the same role'' as \emph{ain} in \emph{humain} (i.e. as ‘an’ in ‘hum\emph{an}’). The same property is attributed to some pronouns, such as \emph{qui} ‘that, which' in \emph{qui diffère} ‘that differs’ as an equivalent to \emph{différent} ‘different’.
Needless to say, the latter claim recalls Port-Royal once again, but again with differences: ultimately, in René’s approach, \emph{qui diffère} and \emph{différent} have the same meaning (but then what happens to the verbal idea in such a composition is unclear). René acknowledges the existence of other types of ``atoms'' which do not belong to the nominal, adjectival or verbal type: adverbs, prepositions, etc. However, René considers any attempt to classify them pointless because they do not, according to him, contain any general ideas of their own, but only particular ones (e.g. time or place etc. when considering adverbs) \citep[117]{r.desaussure11:formation}.

\section{Meaning, composition of meaning, and grammar}
\label{sec:meaning-grammar}

These fundamental grammatical ideas have compositional properties. René considers that ``the noun cannot function alone any more than a body without limbs'' \citep[75]{r.desaussure11:formation} and thus needs verbal and adjectival atoms in order to construct not only sentences but even ``molecules''. This suggests that René, unlike Frege, does not have a referential conception of meaning, since nouns are unlike verbs in that they can refer without need of anything else, whereas verbs need arguments to refer to actual eventualities.\largerpage[1.5] The relationship between grammar and semantics in René’s work is somewhat ambivalent. On the one hand, all conceptual meanings are subordinate to some grammatical notion, but, on the other hand, since all grammatical notions are ideas, the grammar is ultimately a matter of meaning, and therefore bears a semantic essence. This brings the two brothers close: Ferdinand, looking at the ``system of signs'', does not really discriminate between the world of the conceptual and that of the functional. All these elements are signs with values. For example, according to Ferdinand, systems with masculine and feminine genders only differ from those that have a neuter gender because of the mutual oppositions of these genders inside the system of gender that they form together. However, contrary to the views of Ferdinand, for whom \emph{langue} resides in the community of speakers,\footnote{However, Ferdinand’s views are more complicated. \emph{Langue} as specific to one particular language exists in the corresponding community of speakers and thus is perhaps ``external'', but \emph{langue} as a principle of organisation is true of all languages and is thus internal, as a cognitive feature of the brain.} René clearly holds an internalist conception, even though unfortunately he does not elaborate on the ontology of language. Language, for René, is neither a mere indicator of thoughts (with reference to external objects) as in Port-Royal grammar, nor a thought-maker as it is in Ferdinand’s account at the other end of the spectrum. Language for René is rather an apparatus which imposes its principles on the referential lexicon and thus belongs itself to the domain of thought, since the grammar is, ultimately, conceptual. When René discusses the notion of the ``verbal idea'', he faces a problem of categorization, since it is not intuitive to unite stative and dynamic notions in a single general type \citep[17]{r.desaussure11:formation}. At first, he represents not one but two quite distinct verbal ideas: ``ag'' for ``active verbs'', i.e. verbs indicating dynamic aspect, and ``sta'' for stative verbs.
But further on, René elaborates a unifying solution which has the merit of being already aspectual, years before the seminal works by Otto Jespersen on that notion outside the Slavic domain in the twenties (e.g. \citealt{jespersen24:philosophy-of-grammar}). René's knowledge of physics is clearly recruited to discuss the status of verbality and of eventualities. He claims that states are a subtype of dynamic events: “the static idea is however only a special case of the dynamic idea. The latter implies forces in activity: if the forces are not in equilibrium, we have the proper dynamic idea (action), which is the general case; if the forces are in equilibrium, we have the static idea (state). And indeed as soon as the state changes, we fall back into action” \citep[32--33]{r.desaussure11:formation}.\footnote{This view, which René imports from physics into linguistics, is problematic when it comes to the way language represents facts: that states are particular instances of non-states does not look right linguistically, and René seems to be lacking some more precise knowledge of sub-categories. What is more, the complements of an atelic verb may turn it into a telic verb phrase, a problem that could hinder René’s ``principle of invariability of atoms'' \citep[21]{r.desaussure11:formation}.} An issue with this conception of the ``verbal idea'' could be that ``state'' looks pretty much like a ``nominal idea'' since it also involves a notion of being. However, whereas the nominal idea concerns the ontology of substances, the verbal idea concerns some particular property that a substance may bear at some particular time.\largerpage[1.5] In the 1911 text, René seems to leave aside the possibility that words do gain some semantic autonomy when they are incorporated into the lexicon as units; he mentions the movement from compounding to units in the 1919 book but only to suggest that this does not impact the analysis because it is a historical, evolutive, problem \citep[5]{r.desaussure19:structure.logique}. In a footnote \citep[37]{r.desaussure11:formation} he acknowledges a suggestion by Charles Bally to consider the autonomy of lexical items (Bally was the successor of Ferdinand de Saussure at the University of Geneva and editor of the \textsl{Cours}), but René abstains from a conclusion here, saying that he only aims at developing the logical point of view. Yet in the 1919 text, he actually moves on and recognizes that each word, despite being possibly compositional, “is in itself its own structure” \citep[27]{r.desaussure19:structure.logique}. However, René does not always avoid such discussions, as when he compares \emph{doux} ‘sweet’ and \emph{doucereux} ‘smooth, unctuous’, saying that the meaning from the logical point of view is the same but ``superfluous atoms'' of meaning are added on top of the logical meaning of \emph{doucereux} to express a nuance \citep[61--62]{r.desaussure11:formation}. The 1919 text provides an interesting philosophical development of these primitive categories of meaning (``grammatical ideas''). In just a few words, which, however, prefigure later conceptions of the relation between language and cognition (``thought''), René relates natural language to the nature of the human mind much more deeply than what one finds in the canonical version of Ferdinand’s \textsl{Cours}.
René even places it in the context of human evolution: ``\,“grammatical” categories correspond to “logical” categories, and since the former are the same for all languages, language must therefore be the expression of a certain philosophical conception of the world, the popular conception if you will, but one that must have very deep roots, because it emanates so to speak from within natural evolution.” \citep[26]{r.desaussure19:structure.logique}. Leaving aside the \textsl{Cours}, which does not do full justice to Ferdinand’s actual thoughts on the matter, it is likely that such considerations have their roots in the debates and discussions the two brothers certainly used to have about language. There may even be a trace here of Ferdinand’s own influence, since he is reported to have clearly affirmed, notably during his third course of General Linguistics in Geneva, that language is an ``instinct'' (a notion that echoes that of ``faculty'' found in the \emph{Cours}) and that it has to do with the ``folders of the brains'' \citep[80]{saussure93:troisieme-cours}. René asserts two major principles with regard to the ‘synthesis of words’, that is, the construction of words composed of more than one atom of meaning \citep[99]{r.desaussure11:formation}; these principles are very interesting inasmuch as they suggest an economical view of meaning construction which goes well with many scholars’ positions across various linguistic domains of investigation: for example, the Prague school in phonology, but also Martinet’s view on linguistic change, and of course the well-known principles of economy at play in pragmatics according to most authors in the Gricean tradition. On the one hand, René posits a \emph{principle of necessity} which requires that a certain quantity of information is provided by the word in order that clarity and completeness of meaning are ensured. Conversely, a \emph{principle of sufficiency} requires that an idea is expressed only once in a word, and that no idea foreign to the meaning of the word is incorporated. These two principles limit each other: the principle of sufficiency imposes a restriction on the degree of linguistic precision of an item, avoiding superfluous meanings or redundancy, whereas the principle of necessity imposes the requirement to go as far as possible in the linguistic marking of meaning. The equilibrium between these principles results in the fact that “the meaning of a word depends only on its own content, and on all of its content and not on the manner in which one supposes this word to be derived from another” \citep[99]{r.desaussure11:formation}. In passing, we note once again René’s opposition to derivational theories, even though he acknowledges that derivations take place in the history of languages \citep[120]{r.desaussure11:formation}. In this respect, the theory is perhaps made clearer in the 1919 text, as René (after noticing the behavioural similarity between verbs and adjectives) considers that there exist constructions like adjectivalizations, which he recognizes as derivations. But still, this does not change the whole picture inasmuch as this type of derivation is (probably) to be understood as historically and not synchronically relevant. In some cases, the two principles cannot both be met adequately. This typically happens when the idea becomes too precise for expression as the result only of the composition of more basic items. 
Two options are possible in theory: \emph{excess}, if some notion is needed but its linguistic marking would extend to additional extraneous meanings, and \emph{rounding down}, if some necessary idea is however omitted from expression for lack of an appropriate atom; René claims that languages always choose rounding down because it is economical since it avoids more confusion than excess. \emph{Couronner} ‘to crown’ is an example of a process by rounding down: in this word, there is a notion of an object (the crown) and of an undetermined action, but \emph{couronner} involves more than just any action performable with a crown: a notion of a patient and of a particular positioning of the crown \citep[109]{r.desaussure11:formation}.\largerpage[-2] ``Rounding down'' cases force René to open the theory of meaning to some degree of contextual determination and of semantic underspecification; unfortunately he does not elaborate on this important aspect of meaning.\footnote{Also, René uses only his own lay semantic intuition, and fails sometimes to pursue further semantic investigation. The example of ‘coronation’ could for instance be challenged: first, isn’t the notion of an individual’s head already ``contained'' in some way in the object itself? In other words, isn’t the function of an object part of its meaning? After all, a crown is made in such a shape that it ought to fit specifically on a head. There could be room for a notion of typicality here: a typical action performed with a crown is to put it on one’s head in the way it is made to be employed. As a result, it is very hard to imagine different usages of ‘to crown’ than this one (except of course non-literal usages). If this is correct, then there is no need for a notion of usage or context here and René seems to have trapped himself into problematic consequences of his claims about ``rounding down'', at least with unambiguous words like the one he chooses to illustrate it.} Some further elaborations of the theory concern ``synonyms'' (with interesting ideas such as that the linguistic variation observable for some meanings has the function of selecting the proper grammatical idea for the former), ambiguous atoms (which he names ‘double-sense atoms’ and views as phonetic accidents irrelevant for the logical analysis), etc. The 1919 text adds a few elements to the picture elaborated in 1911, notably some nuances. His brother Ferdinand is now universally acclaimed and the tone of the text is closer to reconciliation with his recently deceased brother than the 1911 text was. This is true even though the 1919 book seems to continue in some way the discussions with Ferdinand: the \textsl{Cours} is abundantly cited and René attempts to bridge a few elements of his approach to what he finds there. For example, he tries to relate Ferdinand’s principle of the arbitrariness of the sign \emph{vs}. motivation to the distinction between simple and compound words in his own theory \citep[6]{r.desaussure19:structure.logique}. Although some very interesting clarifications, elaborations and adaptations are provided by the 1919 text (for example about the ``law of reversal'' which establishes the relationship between morphology and syntax, \citealt[11]{r.desaussure19:structure.logique}), the essence of the 1911 theory remains unchanged and still opposed to Ferdinand’s systemic view. René's compositional theory achieves a considerable degree of descriptive adequacy.
Some of the examples which he analyses show that many subtleties of meaning composition can be accounted for within the theory, as when he compares \emph{brossier} (someone who sells brushes, in a somewhat archaic French) and \emph{brosseur} (someone who brushes) \citep[112]{r.desaussure11:formation}: only the second case contains the verbal idea of action. The assumptions driving this analysis may push theorizing further: while we intuitively tend to define the word \emph{brossier} as someone who \emph{sells} brushes, thus using a verbal notion in the definition, it might be more accurate to define it as someone who \emph{has brushes} (for sale). Similarly, his approach to word composition accounts for the fact that verbs constructed on the basis of nouns (e.g. \emph{couronner}) are semantically of a different type than ``simple verbs'' such as \emph{frapper} ‘to beat’; however such compositions as \emph{couronner} are not, he insists, derivations from a noun into a verbal derivative but compositions of atoms of different kinds, even though the anti-derivational stance is attenuated in some way in the 1919 text, where he describes some words (like \emph{musique}, ‘music’) as “giving rise to new adjectives” such as \emph{musical}, “where the root \emph{music} plays the role of a simple element” \citep[5]{r.desaussure19:structure.logique}.

\section{Conclusion}
\label{sec:semantics-conclusion}

All in all, the most important feature of René de Saussure’s theory of meaning is the assumption that grammatical categories as such are not only semantic types but are also meaningful, corresponding to the overarching concepts (of being, of qualities, or of actions) that serve as the foundation of all possible particular lexical meanings. This is perhaps a consequence of his particular interest in morphology and its role in the constitution of meaning. In this perspective, syntax, then, should perhaps be considered as a formal apparatus that governs the arrangements of meanings, not of pure abstract labels.

\sloppy
\printbibliography[heading=subbibliography,notkeyword=this]

\end{document}
Deep generative models are currently the leading method for AMC \cite{yang2019deep}. However, a major problem with this method is controlling the generation process towards a given compositional goal. For example, controlling a model trained on classical piano pieces to compose a tense piece for a horror movie scene. Controlling emotions in AMC has been actively investigated in the field of \textit{affective algorithmic composition} \cite{williams2015investigating}. Therefore, this chapter presents a literature review of this field, including models to represent emotion and methods to control emotion in symbolic AMC systems. Given the similarity between text and music generation tasks, this chapter also covers NLP methods to control neural natural language models. % Besides controlling emotions, the AMC research community has also been investigating how to control structural attributes (e.g. chord progression, melody contour, orchestration) of music with deep generative models. These methods are also discussed in this chapter. % Once again, it is important to highlight that only music generation methods are discussed in this chapter. \section{Affective Algorithmic Composition} \textit{Affective algorithmic composition} (AAC) is a research field concerned about controlling AMC methods to compose music that makes listeners perceive or feel a given target emotion \cite{williams2015investigating}. AAC is essential to a variety of applications, ranging from soundtrack generation \cite{williams2015dynamic} to sonification \cite{Chen2015} and music therapy \cite{miranda2011brain}. The AAC community has explored various methods to compose music with a target emotion algorithmically: expert systems \cite{williams2015dynamic}, evolutionary algorithms \cite{kim2004composing}, Markov chains \cite{monteith2010automatic}, deep learning \cite{madhok2018sentimozart}, among others. These methods have used different \textit{models of emotion}, but most of them are concerned with controlling \textit{perceived emotions} instead of \textit{felt emotions}. Therefore, they are normally evaluated with a listening test (i.e., a user study) or a qualitative analysis of generated samples. This section introduces the most common models of emotion used in the AAC literature and the different approaches to control emotion in AMC. It also highlights the different methodologies used in the AAC literature to conduct the qualitative analysis and the listening tests. \subsection{Models of Emotion} The study of affective phenomena has a very dense literature with multiple alternative theories \cite{ekkekakis2013measurement}. This section does not aim to discuss all of them but to provide a definition of emotion that is useful for designing neural LMs for AAC. According to \citet{williams2015investigating}, the literature concerning the affective response to musical stimuli defines emotion as a short-lived episode, usually evoked by an identifiable stimulus event that can further influence or direct perception and action. Williams et al. \cite{williams2015investigating} also differentiate emotion from affect and mood, which are longer experiences commonly caused by emotions. There are two main types of models used to represent emotions: \emph{categorical} and \emph{dimensional} \cite{eerola2011comparison}. Categorical models use discrete labels to classify emotions. For example, Ekman's model \cite{ekman1992argument} divides human emotions into six basic categories: anger, disgust, fear, happiness, sadness, and surprise. 
This model builds on the assumption that an independent neural system subserves every discrete basic emotion \cite{eerola2011comparison}. Therefore, any other secondary emotion (e.g. rage, frustration, and grief) can be derived from the basic ones. \citet{parrott2001emotions} proposed a similar model, but deriving a hierarchical structure with the following six basic emotions: love, joy, surprise, anger, sadness, and fear. These basic emotions are expanded into secondary emotions (e.g. passion, pleasure, and envy), which in turn derive ternary emotions (e.g. compassion, frustration, and guilt). Some emotions are more present in the music domain and are evoked more easily than others. For example, it is more common for a person to feel happiness instead of disgust while listening to music. The Geneva Emotion Music Scale (GEMS) \cite{zentner2008emotions} is a categorical model specifically created to capture the emotions that are evoked by music. GEMS divides the space of musical emotions into nine categories: wonder, transcendence, tenderness, nostalgia, peacefulness, energy, joyful activation, tension, and sadness. These nine emotions group a total of 45 specific labels. Dimensional models represent emotions as a set of coordinates in a low-dimensional space. For example, \citet{russell1980circumplex} proposed a general-purpose dimensional model called \textit{circumplex}, which describes emotions using two orthogonal dimensions: valence and arousal. Instead of an independent neural system for every basic emotion, the circumplex model assumes that all emotions arise from two independent neurophysiological systems dedicated to the processing of valence (positive--negative) and arousal (mild--intense). Figure \ref{fig:circumplex} shows a graphical representation of the circumplex model. The \textit{vector model} \cite{bradley1992remembering} is another two-dimensional model based on valence and arousal. It represents emotions as vectors, where valence is a binary dimension (positive or negative) that defines the vector's direction and arousal is a continuous dimension that defines the vector's magnitude. \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{imgs/related_work/circumplex.png} \caption{The circumplex model of emotion. The horizontal and vertical axes represent valence and arousal, respectively \cite{russell1980circumplex}.} \label{fig:circumplex} \end{figure} Both categorical and dimensional models have been widely used in AAC systems \cite{williams2015investigating}. The choice of model is especially important for data-driven methods (e.g., neural networks) since they rely on datasets of music annotated by listeners according to the selected model. Given that listeners typically use words to describe the emotions they perceive in music, categorical models might be easier than dimensional models for this annotation task. On the other hand, dimensional models allow a more precise description of the perceived emotion. Such precision is particularly helpful to describe emotions in ambiguous pieces, where listeners can select intermediate values between the different basic emotions (e.g., happy and calm) perceived in the piece. Moreover, dimensional models can be easily mapped to categorical models. The VGMIDI dataset, created as part of this dissertation, used the circumplex model (dimensional) to exploit this flexibility of the dimensional methods, making the dataset more general and thus reusable for other future AAC systems. 
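To make the relationship between the two families of models concrete, the following minimal Python sketch (added here purely for illustration; it is not taken from any of the systems or datasets discussed in this chapter) maps a valence-arousal annotation from the circumplex model to one of four categorical quadrant labels. The label names are an assumption chosen for readability.

\begin{verbatim}
# Illustrative sketch: mapping circumplex (valence-arousal) annotations
# to four categorical quadrant labels. The label names are assumptions
# chosen for illustration, not part of any dataset cited here.

def quadrant(valence, arousal):
    # valence and arousal are assumed to lie in [-1, 1]
    if valence >= 0 and arousal >= 0:
        return "happy"     # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "tense"     # negative valence, high arousal
    if valence < 0 and arousal < 0:
        return "sad"       # negative valence, low arousal
    return "peaceful"      # positive valence, low arousal

print(quadrant(0.7, 0.4))    # -> "happy"
print(quadrant(-0.2, -0.8))  # -> "sad"
\end{verbatim}

The converse mapping, from a category to a representative point in the valence-arousal plane, is equally simple to define, which is one reason why a dimensional annotation tends to be the more reusable choice.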
Independent of the model, emotions can also be classified as \textit{perceived} or \textit{felt} \cite{gabrielsson2001emotion}. A person \textit{perceives} an emotion when they objectively recognize that emotion from their surroundings. For example, one can usually recognize someone else's emotion using expressed cues, including facial expression, tone of voice, and gestures. The same can happen when one listens to music, in that they recognize the music as happy or sad using cues such as key, tempo, or volume. A person \textit{feels} an emotion when they actually experience that emotion themselves. For example, one typically experiences fear in response to someone else's anger. This example shows that a given perceived emotion can trigger a different felt emotion. This dissertation focuses on generating symbolic music that listeners perceive to have a target emotion. Ideally, the listeners should also experience this emotion in response to their perception. However, emotions are less frequently felt in response to music than perceived as expressive properties of the music \cite{zentner2008emotions}. Moreover, symbolic music (e.g., MIDI) has limited expressivity compared to traditional recorded human performances (audio representation), making it harder for symbolic music to make listeners experience the target emotions. Focusing on perceived emotions facilitates evaluating the different methods proposed in this dissertation and still allows one to apply these methods to a wide range of problems.

\subsection{Expert Systems}

Expert systems are one of the most common methods of AAC \cite{williams2015investigating}. They encode knowledge from music composers to map musical features into a given (categorical or dimensional) emotion. For example, \citet{williams2015dynamic} proposed a system to generate soundtracks for video games from a scene graph\footnote{A graph defining all the possible branching of scenes in the game.} annotated according to twelve categorical emotions derived from the circumplex model. First, a second-order Markov chain learns to generate melodies from a symbolic music dataset. Then, an expert system transforms the melodies generated via the Markov chain to match the annotated emotions in the graph. The transformations are performed by mapping each of the twelve emotions to a different configuration of six music parameters: rhythmic density, tempo, modality (major or minor), articulation, mean pitch range, and mean spectral range. For example, a melody generated for a happy scene would be transformed to have high density, medium tempo, major mode, staccato articulation, medium mean pitch range, and high spectral range (clear timbre). Williams et al. \cite{williams2015dynamic} evaluated their system by qualitatively examining a few examples of generated pieces. TransProse \cite{davis2014generating} is an expert system that composes piano melodies for novels. It splits a given novel into sections and uses a lexicon-based approach to assign an emotion label to each section. TransProse composes a melody for each section by controlling the scale, tempo, octave, and notes of the melodies with pre-defined rules based on music theory. For example, the scale of a melody is determined by the sentiment of the section: positive sections are assigned a major scale, and negative sections are assigned a minor scale. TransProse was evaluated with a qualitative analysis of nine music pieces generated by the system for nine respective novels.
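The rule-based mappings used by these expert systems can be pictured with a small Python sketch. The parameter names follow the ones listed above for \citet{williams2015dynamic}, but the concrete values and emotion labels are invented here for illustration and do not reproduce any published rule set.

\begin{verbatim}
# Illustrative expert-system mapping: each target emotion selects a
# configuration of musical parameters that is applied to a melody
# produced by some base generator. Values are invented for illustration.

RULES = {
    "happy": {"modality": "major", "tempo": "medium",
              "articulation": "staccato", "rhythmic_density": "high"},
    "sad":   {"modality": "minor", "tempo": "slow",
              "articulation": "legato", "rhythmic_density": "low"},
    "tense": {"modality": "minor", "tempo": "fast",
              "articulation": "staccato", "rhythmic_density": "high"},
}

def apply_rules(melody, emotion):
    # melody is a list of MIDI pitches; the function simply annotates
    # it with the parameter configuration the rules would impose.
    return {"notes": list(melody), "transform": RULES[emotion]}

piece = apply_rules([60, 62, 64, 65], "sad")
print(piece["transform"]["modality"])  # -> "minor"
\end{verbatim}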
\citet{scirea2017affective} presented a framework called MetaCompose designed to create background music for games in real-time. MetaCompose generates music by (i) randomly creating a chord sequence from a pre-defined chord progression graph, (ii) evolving a melody for this chord sequence with a genetic algorithm and (iii) producing an accompaniment, adding rhythm and arpeggio, for the melody/harmony combination. Finally, MetaCompose uses an expert system called \textit{Real-Time Affective Music Composer} to transform the final composition to match a given emotion in the circumplex model. This expert system controls four musical attributes: volume, timbre, rhythm, and dissonance. For example, low arousal pieces were controlled to have lower volume. Scirea et al. \cite{scirea2017affective} evaluated each component of MetaCompose with a pairwise listening test. The components of the system were systematically (one-by-one) switched off and replaced with random generation, generating different ``broken'' versions of the framework. These broken versions were paired with the complete framework and evaluated by human subjects according to four criteria: pleasantness, randomness, harmoniousness, and interestingness. For each criterion, the participants were asked to prefer one of two pieces, also having the options of \textit{neither} and \textit{both equally}. Expert systems are a great approach for evaluating mappings between a small set of music features and emotions. However, given the multidimensional (melody, harmony, rhythm, orchestration, dynamics, etc.) nature of music, it is hard to create a large set of rules that consider all the dimensions. The challenge lies not only in the number of rules one needs to write, but also in the fact that these rules might contradict each other when combined. The neural language models explored in this dissertation don't have these problems because neural networks can learn these rules from music data.

\subsection{Evolutionary Algorithms}

To control emotions in music generated with evolutionary algorithms (EAs), one has to define a fitness function that guides individuals encoding symbolic music towards a given emotion. It is very challenging to design a function that formally evaluates subjective aspects of music emotion. Thus, most EAs for AAC use interactive evaluation functions, where human subjects judge whether or not the generated pieces match a target emotion. For example, \citet{kim2004composing} proposed an interactive genetic algorithm (IGA) to compose polyrhythms\footnote{A polyrhythm is the concurrent playing of two or more different rhythms.} for four percussion instruments. It starts with a random population of polyrhythms and evolves them towards relaxing or disquieting emotions. A polyrhythm is encoded with four 16-bit strings, one for each instrument. A single bit in the string represents a beat division where 1 means that a (unpitched) note is played in that division and 0 means silence. The fitness of a polyrhythm is given by a human subject who judges it as relaxing, neutral, or disquieting. The selection strategy keeps the four most relaxing and four most disquieting individuals for reproduction with one-point crossover and mutation. Results showed that the genetic algorithm generated relaxing polyrhythms after 20 generations while it took only 10 for it to generate disquieting ones.
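A minimal sketch of this kind of interactive genetic algorithm is given below, using the 16-bit-per-instrument encoding described above. It is only an illustration: the interactive fitness is replaced by a random stub standing in for the human listener, and selection is simplified to a single target emotion rather than the two targets evolved by \citet{kim2004composing}.

\begin{verbatim}
# Sketch of an interactive GA over polyrhythms: four 16-bit rows, one
# per percussion instrument (1 = hit, 0 = silence). The fitness below
# is a stub; in the interactive setting a listener rates each
# polyrhythm as relaxing (+1), neutral (0) or disquieting (-1).

import random

def random_polyrhythm():
    return [[random.randint(0, 1) for _ in range(16)] for _ in range(4)]

def listener_fitness(polyrhythm):
    return random.choice([-1, 0, 1])   # stand-in for the human rating

def crossover(a, b):
    point = random.randint(1, 15)      # one-point crossover per row
    return [ra[:point] + rb[point:] for ra, rb in zip(a, b)]

def mutate(p, rate=0.05):
    return [[bit ^ 1 if random.random() < rate else bit for bit in row]
            for row in p]

population = [random_polyrhythm() for _ in range(16)]
for generation in range(20):
    ranked = sorted(population, key=listener_fitness, reverse=True)
    parents = ranked[:4]               # keep the most "relaxing" ones
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children
\end{verbatim}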
\citet{zhu2008emotional} presented an IGA based on the KTH rule system \cite{friberg2006overview} to create affective performances of pre-composed music pieces. The KTH rules model performance principles within the realm of Western classical, jazz and popular music. These rules control different music performance parameters (e.g. phrasing, articulation, and tonal tension) with weights called \textit{k values} that represent the magnitude of each rule. Zhu et al. \cite{zhu2008emotional} encoded the individuals as a set of k values used to create MIDI performances of pre-composed pieces according to the KTH rules. The genetic algorithm evolves a population to find optimal k values that yield performances that are either happy or sad. The fitness of the performances is given by human subjects on a seven-point Likert scale. \citet{nomura2018music} designed a distributed IGA to generate four-bar piano melodies with controllable ``brightness''. Multiple human evaluators evolve independent populations of melodies in parallel. In some generations, the genetic algorithm exchanges individuals between the independent populations. With the exchange, evaluators are affected by each other and the solutions are expected to agree with everyone's evaluations. Each individual in a population represents a melody with a sequence of sixteen pitch numbers (as defined by the MIDI protocol). Each element in the sequence is mapped into a quarter note with the pitch defined by the element. Evaluators give the fitness of an individual based on a seven-point Likert scale, where 1 means ``extremely dark'', 4 means ``neither'', and 7 means ``extremely bright''. An experiment with ten parallel evaluators showed that after seventeen generations, the independent populations converged to similar melodies. The benefit of interactive EAs is that when the population converges towards a target emotion, no further evaluation is needed to check if the generated pieces indeed match that emotion. These approaches are well suited for applications that require the human to be in the loop of the generative process (e.g., assisted composition tools). On the other hand, interactive EAs are model-free approaches, i.e., they only provide a set of solutions at the end of the evolutionary process. Every time one wants to generate a new set of pieces, the slow interactive evolutionary process must be restarted. The NNs explored in this dissertation don't have this problem because they are trained as language models which can generate as many music pieces as needed. Neural networks are well suited for real-time applications where the generated music is constantly changing (e.g., generative soundtracks).

\subsection{Markov Chains}

AAC systems based on Markov chains are typically hybrid systems where Markov chains compose an underlying piece of music that is transformed by an expert system. The work of \citet{williams2015dynamic} described earlier in this chapter is an example of such a system. Ramanto and Maulidevi \cite{ramanto2017markov} proposed a similar system in which the Markov chain is designed manually instead of derived from a corpus. There is very little research on training Markov chains directly from datasets of symbolic music labeled according to a model of emotion. One of the few examples is the work of Monteith et al. \cite{monteith2010automatic, chan2008automatic}, which uses different Markov models to generate polyphonic music from a MIDI corpus labeled according to the Parrott model of emotion \cite{parrott2001emotions}. The corpus is composed of 45 movie soundtracks labeled by 6 researchers using the 6 basic Parrott emotions: love, joy, surprise, anger, sadness, and fear.
The system is divided into four main components: a rhythm generator, a pitch generator, a chord generator, and an instrumentation planner. First, the rhythm generator randomly selects and transforms a rhythmic pattern from the subset of pieces with the target emotion. Second, the pitch generator assigns pitch values to the notes in the rhythm by sampling from a Markov chain trained with the pieces from the target emotion. Third, the chord generator generates an underlying harmony for the provided melody with a hidden Markov model. Finally, the instrumentation planner probabilistically selects the instruments for melodic and harmonic accompaniment based on the frequency of various melody and harmony instruments in the corpus. The system uses single-layer feedforward networks, trained with the same MIDI corpus, to classify the outputs of the rhythm and melody generators as having the target emotion or not. The system only accepts rhythms and melodies classified by the neural networks as having the target emotion. Monteith et al. \cite{monteith2010automatic} evaluated their system with a listening test in which 13 human subjects selected the emotion that they perceived in the pieces. Moreover, the subjects used two 10-point Likert scales to evaluate how human-like and unique the pieces are. Markov chains are similar to the neural language models used in this dissertation as they both learn mappings from music features to emotions. However, as discussed in Chapter \ref{ch:amc}, low order Markov chains typically generate unmusical compositions that wander aimlessly, while high order ones tend to repeat segments from the corpus and are very expensive to train. On the other hand, the neural language models explored in this dissertation are currently the leading methods for algorithmic music composition \cite{yang2019deep}. % Chung and Vercoe \cite{chung2006affective} proposed a method to derive a Markov chain from physiological, physical, and self-reported data from listeners. They first composed original music pieces with different styles: jazz, jazz-funk, rock, electronic and dance music. Four audio clips were assembled for each of the five musical styles, providing 20 clips in total, each with a different level of vertical layering\footnote{The amount of musical layers stacked in a piece.} and horizontal complexity \footnote{The rhythmic complexity of a piece}. \subsection{Deep Learning} To train deep neural networks that can generate music with a target emotion, one needs a relatively large dataset of symbolic music labeled according to a model of emotion. Such datasets started to be created only recently, and this dissertation is part of the first wave of research in this area. The remainder of this section presents prominent deep learning approaches proposed to date. All of them use different datasets and have been developed concurrently with this dissertation. For example, SentiMozart \cite{madhok2018sentimozart} is a framework that generates piano pieces that match the emotion of a given facial expression. It uses a convolutional NN to classify the emotion of a facial expression and an LSTM to generate a music piece corresponding to the identified emotion. The convolutional NN was trained with the FER-2013 \cite{goodfellow2013challenges} dataset, which has 35,887 images of facial expressions labeled with the following emotions: angry, disgust, fear, happy, sad, surprise, and neutral. 
To train the LSTM, \citet{madhok2018sentimozart} created a dataset with 200 MIDI piano pieces, each labeled by 15 annotators according to the classes happy, sad, and neutral. Three LSTMs were trained independently, one for each emotion. To unify the models of emotion between the two datasets, \citet{madhok2018sentimozart} merged the categories sad, fear, angry, and disgust from the FER-2013 dataset into the sad emotion. The categories happy and surprise were mapped into the happy emotion. Thus, the output $e \in \{\textit{happy}, \textit{sad}, \textit{neutral}\}$ of the convolutional NN selects the respective LSTM that, in turn, composes a piece conveying the emotion $e$. SentiMozart was evaluated with a listening test where 30 human subjects judged 30 randomly chosen images (10 of each class) and their corresponding generated pieces. The participants used an 11-point Likert scale in which 0 means sad, 5 means neutral, and 10 means happy. \citet{tan2020automated} proposed a deep NN for a similar problem: generating music that matches an emotion expressed by visual artworks (e.g. paintings, illustrations, and collages). They paired images of paintings with MIDI piano pieces labeled with the same emotion and used an encoder-decoder network as a generative model. The encoder is a pre-trained convolutional NN called ResNet \cite{he2016deep}. \citet{tan2020automated} compared two decoders: an LSTM and a (decoder) transformer. The music pieces were encoded as sequences of tokens with the MIDI-based method proposed by \citet{oore2017learning}. \citet{tan2020automated} trained their model by pairing emotion-labeled images from \citet{you2016building} (17,349 images) with emotion-labeled music pieces from \citet{panda2013multi} (196 MIDI files). Since these two datasets use different categorical models of emotion, \citet{tan2020automated} mapped the emotion labels of the music dataset to the labels of the image dataset. The trained model was evaluated with a listening test and a machine classification test. In the listening test, six human subjects were asked to evaluate the sentiment of the music pieces and the images with a 10-point Likert scale without knowing which music piece was related to which image. The machine classification test consisted of training an emotional correspondence classifier to predict if a given pair of images and pieces express the same sentiment. Zhao et al. \cite{zhao2019emotional} presented a conditional Biaxial LSTM \cite{johnson2017generating} to generate piano pieces with controllable emotion. They labeled the \textit{Piano midi.de} dataset according to a categorical model of emotion: happy, sad, peaceful, and tense. They used a piano roll representation for the input music pieces, which are processed by the LSTM together with an emotion signal associated with each time step of the input. This signal is encoded by an extra embedding layer and then added to the embedding of the input. Zhao et al. \cite{zhao2019emotional} evaluated the quality of the generated pieces according to four metrics of music structure: polyphony, scale consistency, 3-tone repetitions, and tone span. Moreover, they performed a listening test with 30 human subjects to evaluate if the subjects agree with the emotions intended by the model.
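The conditioning scheme of Zhao et al.\ can be illustrated with a short PyTorch sketch. The sketch simplifies the input to a sequence of token ids rather than the piano roll used by the Biaxial LSTM, and all sizes, names, and the four emotion classes are placeholders; it is not the authors' implementation.

\begin{verbatim}
# Sketch of per-time-step emotion conditioning: the emotion id is
# embedded and added to the embedding of the input at every step
# before the recurrent layer. Sizes and names are placeholders.

import torch
import torch.nn as nn

class EmotionConditionedLSTM(nn.Module):
    def __init__(self, vocab_size=128, n_emotions=4, d_model=256):
        super().__init__()
        self.note_emb = nn.Embedding(vocab_size, d_model)
        self.emotion_emb = nn.Embedding(n_emotions, d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, notes, emotions):
        # notes, emotions: (batch, time) integer tensors
        x = self.note_emb(notes) + self.emotion_emb(emotions)
        h, _ = self.lstm(x)
        return self.out(h)   # next-token logits at every time step

model = EmotionConditionedLSTM()
notes = torch.randint(0, 128, (2, 32))
emotions = torch.full((2, 32), 3, dtype=torch.long)  # e.g. "tense"
logits = model(notes, emotions)                      # (2, 32, 128)
\end{verbatim}

Because the emotion signal is provided at every time step, a model conditioned in this way could in principle change the target emotion in the middle of a piece, which makes this kind of conditioning attractive for adaptive soundtracks.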
Music FaderNets \cite{tan2020music} is a framework based on a \textit{Gaussian mixture variational autoencoder} (GM-VAE) that can learn high-level music feature representations (such as emotion) by modeling corresponding low-level structural music features. Based on \citet{yang2019deep}, Music FaderNets learns the low-level features of rhythm $z_r$, note density $z_d$, and key $z_k$ with separate RNN encoders $e_r$, $e_d$, and $e_k$, respectively. The high-level features are then inferred from the low-level representations ($z_r$, $z_d$, and $z_k$) via semisupervised clustering. Music FaderNets was trained to reconstruct pieces encoded as a sequence of tokens \cite{oore2017learning} extracted from the Yamaha Piano-e-Competition dataset \cite{yamahaEPiano} and the VGMIDI dataset\footnote{The VGMIDI dataset is a contribution of this dissertation described in Chapter \ref{ch:ismir19}.} \cite{ferreira_2019}. Results showed that Music FaderNets successfully learns the intrinsic relationship between arousal (high-level feature) and its corresponding low-level attributes of rhythm and note density.

% However, it is considerably expensive to create such a dataset because different people can perceive different emotions in the same piece. Thus, there is no single ground truth emotion label for a given piece. This subjectivity requires one to collect independent perceptions from different annotators in order to define a democratic ground truth label. The lack of labeled data is one of the reasons why deep learning approaches started being explored for AAC only recently.

The main benefit of the neural LMs investigated in this dissertation over other AAC methods is that such LMs can learn the subjective mappings between music features and emotions from data. This makes them flexible and, in turn, applicable to different AAC problems. One just needs to create a dataset for the problem at hand. Neural LMs can also be used to generate a wide range of music pieces with the same trained model. Moreover, deep generative models are currently the leading method for AMC \cite{yang2019deep}, which ensures that neural AAC systems can produce high-quality (i.e., similar to human-composed pieces) music.

% \section{Conditional Generative Models}

% Besides controlling emotion, the AMC community has also been investigating how to control general structural features in music. These methods typically are unsupervised approaches to learn disentangled representations of specific music features such as melody and rhythm. Such disentangled representations allows one to directly manipulate the output of the model. These approaches are related to this dissertation because one could use them to learn disentangled mappings between music structure and emotions.

For example, EC2-VAE \cite{yang2019deep} is a VAE that learns disentangled representations of symbolic melodies, allowing the generation of melodies with controlled pitch contour and rhythm patterns. The encoder is a bidirectional GRU $E$ that maps the input melodies into a latent representation $z$. The decoder splits the latent representation $z$ to create two independent representations $z_p$ and $z_r$ for the pitch and the rhythm features of the melody, respectively. The rhythm representation $z_r$ is fed into a bidirectional GRU $D_r$ to reconstruct the rhythm of the melody. The output of $D_r$ is concatenated with $z_p$ and fed into another GRU $D_m$ to reconstruct the entire melody. Moreover, the EC2-VAE uses chords as a condition for both the encoder and decoder. EC2-VAE was trained with the Nottingham Dataset \cite{} to reconstruct entire melodies. The two independent decoder modules allow it to learn disentangled representations of pitch and rhythm.
Therefore, one can swap or interpolate the latent representations of different source melodies to compose new melodies that combine pitch and rhythm features from the source material. % % Music Sketchnet \cite{chen2020music} is a neural network \textit{music inpainting}\footnote{Music inpainting is the task of filling in missing or lost information in a piece of music.} framework that allows users to specify partial musical ideas guiding automatic music generation. Music Sketchnet is composed of three NN modules: SketchVAE, SketchInpainter, and SketchConnector. Similar to EC2-VAE \cite{yang2019deep}, SketchVAE \cite{chen2020music} is a VAE that creates independent representations of rhythm and pitch contour. SketchInpainter combines GRUs to perform the element-level inpainting prediction from the latent variables given by SketchVAE. SketchConnector uses a transformer NN to combined sketches of pitch and rhythm from users with the prediction from SketchInpainter and finalize the generation. Sketchnet was trained to reconstruct melodies from the Irish and Scottish monophonic music dataset \cite{}. \section{Controllable Neural Language Models} \label{sec:related_nlp} This dissertation is also related to methods that control natural LMs to generate text with given characteristics (e.g. topic, style, and sentiment). For example, \citet{radford_2017} showed that fine-tuning a neural LM with an extra classification head to perform sentiment analysis exposes the neurons in the LM that carry sentiment signal. Particularly, when pre-training a character-level LSTM LM on Amazon product reviews \cite{He2016} and fine-tuning it with L1 regularization on the \textit{Stanford sentiment treebank} \cite{socher2013recursive}, most of the sentiment signal in the classification head comes from a single neuron in the LSTM. Therefore, this neuron can be manually adjusted to control the LM to generate new product reviews with a target sentiment. CTRL \cite{keskar2019ctrl} is a transformer LM trained to generate text conditioned on special tokens, called \textit{control codes}, that inform the LM about the characteristics (e.g. style) of the text to be generated. These control codes are derived automatically from the text source (e.g. Wikipedia), which means no manual annotation is needed. During training, every example sentence $x$ is processed together with a set of control codes. At generation time, CTRL can produce text with a particular style $s$, for example, by conditioning the prior input $x$ with a signal representing $s$. % This dissertation differs from CTRL because we control the LM with a search procedure and not with an extra input to the LM. Conditioning the LM requires a large amount of labeled data, which is expensive in the affective music domain. \citet{holtzman2018learning} proposed a decoding strategy that uses a set of discriminators (NNs) to steer the probability distribution of a LM at generation time. At each decoding step, the probabilities of the next words given by the LM are multiplied by weighted scores of four different discriminators: repetition, entailment, relevance, and lexical diversity. The discriminators are trained independently with a dataset of pairs $(x, y)$, where $x$ is a prior context sentence, and $y$ is the completion of that sentence. For each discriminator, the loss function measures the difference between the scores assigned to the truth continuation $y$ and the scores assigned to the continuation generated by the LM. 
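The decoding-time combination described above can be summarized with a small sketch. One common way to realize it is to add weighted discriminator scores to the log-probabilities of the language model and renormalize; the function names and the toy discriminator below are placeholders, not the authors' code.

\begin{verbatim}
# Sketch of discriminator-weighted decoding: next-token probabilities
# from the LM are re-weighted by auxiliary scorers at every step.
# The discriminators and probabilities below are toy placeholders.

import math

def rescore(lm_probs, context, discriminators, weights):
    # lm_probs: dict token -> LM probability; each discriminator is a
    # function scoring the candidate continuation (context + token).
    adjusted = {}
    for token, p in lm_probs.items():
        bonus = sum(w * d(context + [token])
                    for w, d in zip(weights, discriminators))
        adjusted[token] = math.log(p + 1e-12) + bonus
    m = max(adjusted.values())
    exp = {t: math.exp(s - m) for t, s in adjusted.items()}
    z = sum(exp.values())
    return {t: v / z for t, v in exp.items()}

lm_probs = {"C4": 0.6, "E4": 0.3, "G4": 0.1}
repetition = lambda seq: -0.5 * (len(seq) >= 2 and seq[-1] == seq[-2])
probs = rescore(lm_probs, ["C4"], [repetition], weights=[1.0])
print(max(probs, key=probs.get))
\end{verbatim}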
Once all the discriminators are trained, Holtzman et al. \cite{holtzman2018learning} optimize the weights used to combine the score of each discriminator. The Plug and Play LM \cite{dathathri2019plug} combines a pre-trained LM with an attribute (e.g. sentiment) classifier $C$ to guide text generation by fine-tuning the LM hidden layers at decoding time. At each generative time step, the LM hidden layers are shifted in the direction of the sum of two gradients: one towards a higher log-likelihood of the target attribute (as given by $C$) and the other towards a higher log-likelihood of the unmodified LM. The shifting process occurs in three steps: (1) a forward pass using $C$ to compute the likelihood of the target attribute, (2) a backward pass to update the LM hidden states with gradients from the attribute classifier $C$, and (3) another forward pass to update the distribution over the vocabulary from the updated LM hidden layers. With this approach, multiple attribute classifiers can be combined at generation time with customized weights. % Vijayakumar et al. \cite{vijayakumar2018diverse} and Kool et al. \cite{Kool2019SBS} proposed variations of beam search to solve the problem of generating repetitive sentences. Vijayakumar et al. \cite{vijayakumar2018diverse} decode diverse lists by dividing candidate solutions into groups and enforcing diversity between groups. Kool et al. \cite{Kool2019SBS} apply the ``Gumbel-Top-k'' trick to sample without replacement with beam search, mitigating the repetition problem. Our work differs from both these works because our variation of beam search optimizes for two independent objectives. % Ziegler et al. \cite{ziegler2019fine} used reinforcement learning to fine-tune a transformer LM for generating text with a given sentiment by using a reward model trained on human preferences on text continuations. They fine-tuned the model for two different tasks: generating text with sentiment and text summarization. Given a music dataset labeled according to a model of emotion, one can apply the methods discussed in this section to control music LMs to generate pieces with a target emotion. The next chapter presents a method based on \citet{radford_2017} to control the sentiment of piano pieces generated by an LSTM. The chapter after that discusses a variation of beam search inspired by \citet{holtzman2018learning} to control a transformer to generate piano pieces with a target emotion.
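Before moving on, a concrete (if simplified) illustration of this kind of decoding-time control is sketched below. It is my own sketch in the spirit of the weighted decoding of \citet{holtzman2018learning}, not the method of any of the cited works: the \texttt{lm} and \texttt{emotion\_clf} interfaces are assumptions, not an existing API, and the reweighting rule is illustrative only.
\begin{verbatim}
# Sketch of classifier-weighted sampling from a music LM (illustration
# only; `lm` and `emotion_clf` are assumed trained models that return
# logits with the shapes used below).
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_with_emotion(lm, emotion_clf, prompt_ids, target_emotion,
                          max_len=512, k=32, weight=2.0):
    ids = list(prompt_ids)
    for _ in range(max_len):
        logits = lm(torch.tensor([ids]))[0, -1]    # next-token logits
        probs = F.softmax(logits, dim=-1)
        top_p, top_i = probs.topk(k)               # rescore only top-k tokens
        scores = []
        for p, i in zip(top_p, top_i):
            candidate = torch.tensor([ids + [int(i)]])
            # Probability the classifier assigns to the target emotion
            # if this token is appended to the piece generated so far.
            e = F.softmax(emotion_clf(candidate), dim=-1)[0, target_emotion]
            scores.append(p * e ** weight)
        scores = torch.stack(scores)
        next_pos = torch.multinomial(scores / scores.sum(), 1)
        ids.append(int(top_i[next_pos]))
    return ids
\end{verbatim}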
\section{solution2} \label{sec:solution2} Paragraph 1: Explain why you propose solution2. What points does solution2 improve compared with solution1? The rest is similar to what you have done in the solution1 section.
\mychapter{20}{Lesson 20} %181205 \section{Random Oracle Model (ROM)} The Random Oracle Model treats a given hash function $H$ as a truly random function. As a reminder: a truly random function $R$ is defined to have a specific evaluation behaviour. In fact, such functions act as truth tables\footnotemark: \begin{itemize} \item if the argument hasn't been submitted to the function beforehand, then a value is chosen \uar{} from the codomain, and assigned as the image of said argument in the function; \item otherwise, the function will return the image as assigned in the corresponding previous evaluation. \end{itemize} \footnotetext{Such tables are also aptly called \emph{rainbow tables}.} \subsection{Full domain hashing} Let $(f, f^{-1}, \Gen)$ be a \tdp{} scheme over some domain $\mathcal{X}_{\pk}$. Take \rsa: \begin{itemize} \item $(n, \pk, \sk) \pickUAR \Gen\rsa(1^\lambda)$ \item $f(\pk, x) = x^{\pk} \mod n$ \item $f^{-1}(\sk, y) = y^{\sk} \mod n$ \end{itemize} From this, build an asymmetric-authentication scheme as follows: \begin{itemize} \item $(n, \pk, \sk) \pickUAR \Gen\rsa(1^\lambda)$ \item $Sign_{\sk, H}(m): \sigma = f^{-1}(\sk, H(m))$ \item $Verify_{\pk, H}(m, \sigma): H(m) = f(\pk, \sigma)$ \end{itemize} \begin{exercise} Show \rsa-sign is not secure without $H$. (Hint: the scheme becomes malleable.) \end{exercise} \begin{theorem} If the above scheme (full-domain hash) uses a \tdp{} for $f, f^{-1}$, then it is (asymmetric)-\ufcma under the random oracle model. \end{theorem} \begin{proof} Idea: reduce to the \tdp{} and program the random oracle. \begin{cryptoredux} {fdhufcma} {} {tdp} {ufcma(a)} \receive{\shortstack[l]{ $(pk, sk) \pickUAR \Gen(1^\lambda)$ \\ $x \pickUAR \mathcal{X}_{pk}$ \\ $y = f(pk, x)$ }}{$pk, y$}{} \invoke{$*$}{$pk$}{} \cseqdelay \cseqbeginloop \return{}{$m$}{} \invoke{}{$\sigma$}{} \cseqendloop \cseqdelay \return{}{$(m^*, \sigma^*)$}{} \send{}{$\sigma^*$}{} \end{cryptoredux} Notes: this is a loose reduction. Some assumptions are made: \begin{itemize} \item The adversary makes the same number of RO queries as the number of signing queries done by the distinguisher (without loss of generality) \item The RO query must be done \emph{before} the corresponding sign query, otherwise the adversary cannot sign the messages, as specified by the scheme \end{itemize} Answering the RO queries mirrors the definition of a random function and constitutes the \emph{programming} step of the oracle itself; if a signing query does not correspond to any previous RO query, the game is aborted. \end{proof} \section{ID Scheme} \subsubsection{``Sigma'' protocol} \subsection{Fiat-Shamir scheme} \subsubsection{Honest Verifier Zero-Knowledge (HVZK)} \subsubsection{Special Soundness (SS)}
\chapter{RTLIL Text Representation} \label{chapter:textrtlil} % Stolen from stackexchange: calculate indent based on given keyword, % with some nice hrules added in. \newlength{\myl} \newenvironment{indentgrammar}[1] {\vspace{0.5cm}\hrule \setlength{\myl}{\widthof{#1}+2em} \grammarindent\the\myl \begin{grammar}} {\end{grammar} \hrule} This appendix documents the text representation of RTLIL in extended Backus-Naur form (EBNF). The grammar is not meant to represent semantic limitations. That is, the grammar is ``permissive'', and later stages of processing perform more rigorous checks. The grammar is also not meant to represent the exact grammar used in the RTLIL frontend, since that grammar is specific to processing by lex and yacc, is even more permissive, and is somewhat less understandable than simple EBNF notation. Finally, note that all statements (rules ending in \texttt{-stmt}) terminate in an end-of-line. Because of this, a statement cannot be broken into multiple lines. \section{Lexical elements} \subsection{Characters} An RTLIL file is a stream of bytes. Strictly speaking, a ``character'' in an RTLIL file is a single byte. The lexer treats multi-byte encoded characters as consecutive single-byte characters. While other encodings \textit{may} work, UTF-8 is known to be safe to use. Byte order marks at the beginning of the file will cause an error. ASCII spaces (32) and tabs (9) separate lexer tokens. A \texttt{nonws} character, used in identifiers, is any character whose encoding consists solely of bytes above ASCII space (32). An \texttt{eol} is one or more consecutive ASCII newlines (10) and carriage returns (13). \subsection{Identifiers} There are two types of identifiers in RTLIL: \begin{itemize} \item Publically visible identifiers \item Auto-generated identifiers \end{itemize} \begin{indentgrammar}{<autogen-id>} <id> ::= <public-id> | <autogen-id> <public-id> ::= "\textbackslash" <nonws>$+$ <autogen-id> ::= "\textdollar" <nonws>$+$ \end{indentgrammar} \subsection{Values} A \textit{value} consists of a width in bits and a bit representation, most significant bit first. Bits may be any of: \begin{itemize} \item \texttt{0}: A logic zero value \item \texttt{1}: A logic one value \item \texttt{x}: An unknown logic value (or don't care in case patterns) \item \texttt{z}: A high-impedance value (or don't care in case patterns) \item \texttt{m}: A marked bit (internal use only) \item \texttt{-}: A don't care value \end{itemize} An \textit{integer} is simply a signed integer value in decimal format. \textbf{Warning:} Integer constants are limited to 32 bits. That is, they may only be in the range $[-2147483648, 2147483648)$. Integers outside this range will result in an error. \begin{indentgrammar}{<binary-digit>} <value> ::= <decimal-digit>$+$ \texttt{\textbf{'}} <binary-digit>$*$ <decimal-digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" <binary-digit> ::= "0" | "1" | "x" | "z" | "m" | "-" <integer> ::= "-"$?$ <decimal-digit>$+$ \end{indentgrammar} \subsection{Strings} A string is a series of characters delimited by double-quote characters. Within a string, any character except ASCII NUL (0) may be used. In addition, certain escapes can be used: \begin{itemize} \item \texttt{\textbackslash n}: A newline \item \texttt{\textbackslash t}: A tab \item \texttt{\textbackslash \textit{ooo}}: A character specified as a one, two, or three digit octal value \end{itemize} All other characters may be escaped by a backslash, and become the following character. 
Thus: \begin{itemize} \item \texttt{\textbackslash \textbackslash}: A backslash \item \texttt{\textbackslash ''}: A double-quote \item \texttt{\textbackslash r}: An 'r' character \end{itemize} \subsection{Comments} A comment starts with a \texttt{\textbf{\#}} character and proceeds to the end of the line. All comments are ignored. \section{File} A file consists of an optional autoindex statement followed by zero or more modules. \begin{indentgrammar}{<design>} <file> ::= <autoidx-stmt>$?$ <module>* \end{indentgrammar} \subsection{Autoindex statements} The autoindex statement sets the global autoindex value used by Yosys when it needs to generate a unique name, e.g. \texttt{\textdollar{}flatten\textdollar{}N}. The N part is filled with the value of the global autoindex value, which is subsequently incremented. This global has to be dumped into RTLIL, otherwise e.g. dumping and running a pass would have different properties than just running a pass on a warm design. \begin{indentgrammar}{<autoidx-stmt>} <autoidx-stmt> ::= "autoidx" <integer> <eol> \end{indentgrammar} \subsection{Modules} Declares a module, with zero or more attributes, consisting of zero or more wires, memories, cells, processes, and connections. \begin{indentgrammar}{<module-body-stmt>} <module> ::= <attr-stmt>$*$ <module-stmt> <module-body> <module-end-stmt> <module-stmt> ::= "module" <id> <eol> <module-body> ::= (<param-stmt> \alt <wire> \alt <memory> \alt <cell> \alt <process> \alt <conn-stmt>)$*$ <param-stmt> ::= "parameter" <id> <constant>$?$ <eol> <constant> ::= <value> | <integer> | <string> <module-end-stmt> ::= "end" <eol> \end{indentgrammar} \subsection{Attribute statements} Declares an attribute with the given identifier and value. \begin{indentgrammar}{<attr-stmt>} <attr-stmt> ::= "attribute" <id> <constant> <eol> \end{indentgrammar} \subsection{Signal specifications} A signal is anything that can be applied to a cell port, i.e. a constant value, all bits or a selection of bits from a wire, or concatenations of those. \textbf{Warning:} When an integer constant is a sigspec, it is always 32 bits wide, 2's complement. For example, a constant of $-1$ is the same as \texttt{32'11111111111111111111111111111111}, while a constant of $1$ is the same as \texttt{32'1}. See Sec.~\ref{sec:rtlil_sigspec} for an overview of signal specifications. \begin{indentgrammar}{<sigspec>} <sigspec> ::= <constant> \alt <wire-id> \alt <sigspec> "[" <integer> (":" <integer>)$?$ "]" \alt "\{" <sigspec>$*$ "\}" \end{indentgrammar} \subsection{Connections} Declares a connection between the given signals. \begin{indentgrammar}{<conn-stmt>} <conn-stmt> ::= "connect" <sigspec> <sigspec> <eol> \end{indentgrammar} \subsection{Wires} Declares a wire, with zero or more attributes, with the given identifier and options in the enclosing module. See Sec.~\ref{sec:rtlil_cell_wire} for an overview of wires. \begin{indentgrammar}{<wire-option>} <wire> ::= <attr-stmt>$*$ <wire-stmt> <wire-stmt> ::= "wire" <wire-option>$*$ <wire-id> <eol> <wire-id> ::= <id> <wire-option> ::= "width" <integer> \alt "offset" <integer> \alt "input" <integer> \alt "output" <integer> \alt "inout" <integer> \alt "upto" \alt "signed" \end{indentgrammar} \subsection{Memories} Declares a memory, with zero or more attributes, with the given identifier and options in the enclosing module. See Sec.~\ref{sec:rtlil_memory} for an overview of memory cells, and Sec.~\ref{sec:memcells} for details about memory cell types. 
\begin{indentgrammar}{<memory-option>} <memory> ::= <attr-stmt>$*$ <memory-stmt> <memory-stmt> ::= "memory" <memory-option>$*$ <id> <eol> <memory-option> ::= "width" <integer> \alt "size" <integer> \alt "offset" <integer> \end{indentgrammar} \subsection{Cells} Declares a cell, with zero or more attributes, with the given identifier and type in the enclosing module. Cells perform functions on input signals. See Chap.~\ref{chapter:celllib} for a detailed list of cell types. \begin{indentgrammar}{<cell-body-stmt>} <cell> ::= <attr-stmt>$*$ <cell-stmt> <cell-body-stmt>$*$ <cell-end-stmt> <cell-stmt> ::= "cell" <cell-id> <cell-type> <eol> <cell-id> ::= <id> <cell-type> ::= <id> <cell-body-stmt> ::= "parameter" ("signed" | "real")$?$ <id> <constant> <eol> \alt "connect" <id> <sigspec> <eol> <cell-end-stmt> ::= "end" <eol> \end{indentgrammar} \subsection{Processes} Declares a process, with zero or more attributes, with the given identifier in the enclosing module. The body of a process consists of zero or more assignments, exactly one switch, and zero or more syncs. See Sec.~\ref{sec:rtlil_process} for an overview of processes. \begin{indentgrammar}{<switch-end-stmt>} <process> ::= <attr-stmt>$*$ <proc-stmt> <process-body> <proc-end-stmt> <proc-stmt> ::= "process" <id> <eol> <process-body> ::= <assign-stmt>$*$ <switch> <assign-stmt>$*$ <sync>$*$ <assign-stmt> ::= "assign" <dest-sigspec> <src-sigspec> <eol> <dest-sigspec> ::= <sigspec> <src-sigspec> ::= <sigspec> <proc-end-stmt> ::= "end" <eol> \end{indentgrammar} \subsection{Switches} Switches test a signal for equality against a list of cases. Each case specifies a comma-separated list of signals to check against. If there are no signals in the list, then the case is the default case. The body of a case consists of zero or more switches and assignments. Both switches and cases may have zero or more attributes. \begin{indentgrammar}{<switch-end-stmt>} <switch> ::= <switch-stmt> <case>$*$ <switch-end-stmt> <switch-stmt> ::= <attr-stmt>$*$ "switch" <sigspec> <eol> <case> ::= <attr-stmt>$*$ <case-stmt> <case-body> <case-stmt> ::= "case" <compare>$?$ <eol> <compare> ::= <sigspec> ("," <sigspec>)$*$ <case-body> ::= (<switch> | <assign-stmt>)$*$ <switch-end-stmt> ::= "end" <eol> \end{indentgrammar} \subsection{Syncs} Syncs update signals with other signals when an event happens. Such an event may be: \begin{itemize} \item An edge or level on a signal \item Global clock ticks \item Initialization \item Always \end{itemize} \begin{indentgrammar}{<dest-sigspec>} <sync> ::= <sync-stmt> <update-stmt>$*$ <sync-stmt> ::= "sync" <sync-type> <sigspec> <eol> \alt "sync" "global" <eol> \alt "sync" "init" <eol> \alt "sync" "always" <eol> <sync-type> ::= "low" | "high" | "posedge" | "negedge" | "edge" <update-stmt> ::= "update" <dest-sigspec> <src-sigspec> <eol> \end{indentgrammar}
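Putting these pieces together, the short hand-written example below illustrates the overall shape of an RTLIL file consistent with the grammar above. It is an illustration only: the identifiers are made up, and a real Yosys dump will typically contain additional attributes and auto-generated names.
\begin{verbatim}
autoidx 1
module \top
  wire width 8 input 1 \a
  wire width 8 output 2 \y
  connect \y \a
end
\end{verbatim}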
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Material %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \section{Project and materials} \label{sec:materials} In this section I will describe the project plan, starting with the existing systems that I had to integrate with. Next, I will describe the data that was available to be used for the project. Lastly, I will describe the project schedule. \subsection{Existing systems} The project was to be integrated on top of existing diagnostics systems to save time and to improve the overall user experience by leveraging the system they were already familiar with. The store has an internal monitoring tool called \textbf{Jury} for the store engineers and other users to visualize details and relationships of the products and publishers in the store. Jury implements a user interface to query the production database for products, publishers, and events. Jury also handles authentication, authorization, and permissions. This is needed to preserve the confidentiality of private information. Jury includes a textual query language to filter and search the database. The query language allows the users to search for the information that they are interested in. The queries can be saved for later use. The saved queries can be used as shortcuts or ``dashboards'' for the users. For example, a user can save a query that searches for new products published within the last 24 hours. This query can then be used by the user to periodically check the new products. A major goal for this project was to make use of the existing query system as much as possible. This way the benefits, like user-made views and integration with the saved queries, could be re-used. The statistics and any real-time visualizations should use the query system to retrieve the events. When retrieving the events, the user's permissions need to be taken into account to preserve confidentiality. \subsubsection{Azure ML} \textbf{Azure ML} (publicly available at \cite{azuremlpub}) is a cloud service that offers machine learning capabilities on demand. It offers ready-made machine learning models in the cloud for anyone to use. It also allows the user to provide their own code or models. One of the major features Azure ML offers is a visual interface for machine learning called the ``Machine Learning Studio'' \cite{azureml}. This interface allows the user to set up the data flow of their machine learning solution visually with a ``drag and drop'' model and then execute it in the cloud. I chose Azure ML for my solution because of the accessibility, quick setup, and the multitude of ``off the shelf'' models offered. With Azure ML I was able to take the data I had and start training models quickly without setting up anything on my local machine. \subsection{Data} % description of event hub The data used by the project is a list of events. The events are the log entries generated by the distributed processes of the store ingestion/publishing system. Each store system creates events and sends them through an \textit{Azure Event Hub}. For example, there could be a ``Screenshot Upload'' event notifying that the application screenshots were successfully uploaded to a server. Another example could be a ``Complete Publish'' event that signals that the application has finished the publishing process completely.
Azure Event Hubs are a shared event streaming service that works as a platform-independent ``front door'' that all the events go through before being read \cite{eventhubs}. The event hub is divided into several separate partitions to improve availability \cite{eventhubavail}. Furthermore, separate partitions allow for concurrent reads of events to improve throughput. All the processors of the store systems broadcast events into a common Event Hub. The events are then read and processed by Jury and stored in a SQL database. My project did not concern the event hub; instead, I only read the events as they appeared in the database. % specific description of events and how they are stored Each event consists of a \textbf{timestamp} and some metadata. The event also has a \textbf{submission identifier} which identifies the trace that the event belongs to. All events with the same submission identifier belong to a single trace. The event timestamp describes the exact time the event happened. There are also two reference fields (foreign keys) referencing the \textbf{product} and the \textbf{publisher} that the event belongs to. Since these references are not used in this thesis, they can be seen simply as integer-type identifier fields. In addition to these, the three other fields relevant in this research are textual (string-type) fields \textbf{Source}, \textbf{Subsource}, and \textbf{Status}. The event type is described in table \ref{tab:event}. \begin{table}[htb] \begin{center} \begin{tabular}{| c |} \hline \textbf{Event} \\ \hline SubmissionId : long integer\\ ProductId : long integer\\ PublisherId : long integer\\ Timestamp : DateTime\\ Source : string\\ Subsource : string\\ Status : string\\ \hline \end{tabular} \end{center} \caption{Event and its fields} \label{tab:event} \end{table} The Source field identifies the event source system, whereas the Subsource identifies the exact processor step within that system. For example, for an event related to uploading images to a server during the publishing phase, the Source field could have the value ``Publishing'' and the Subsource a value of ``Upload Images''. Thus, the Source--Subsource pair is enough to identify the exact step of the workflow which generated the event. The Status field contains a string describing the status of the step, such as ``Completed'', ``In progress'' or ``Failed''. This field can be used to reason about the duration of the events and to determine whether the event is \emph{final}. A final-status event means that the processor has completed the activity and will not be sending further events for the activity in question. For example, events marked with ``Completed'' or ``Failed'' status are final. In contrast, events with an ``In progress'' status are \emph{non-final}. These events are fired to show that a long-running activity has been started, is still running, and hasn't failed. An ``In progress'' event will eventually be followed by a final event such as ``Completed'' or ``Failed'' with the same Source and Subsource. The common statuses are listed in table \ref{tab:statuses}. % methods explaining splitting and grouping of traces An arbitrary set of events can be grouped into traces by the submission identifier field. Within the trace each individual step can then be identified by the Source--Subsource pair. In a valid trace there should only be a single final event for each distinct Source--Subsource pair.
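To make the grouping step concrete, the sketch below (my own simplification, not the production implementation; events are represented as dictionaries whose keys follow table \ref{tab:event}) splits a flat list of events into traces and orders each trace by timestamp:
\begin{verbatim}
# Sketch: split a flat event list into traces and order each trace by time.
# The validity check is a simplified reading of "one final event per
# Source--Subsource pair"; the real system needs more elaborate rules.
from collections import defaultdict

FINAL_STATUSES = {"Completed", "Skipped", "Failed"}

def group_into_traces(events):
    traces = defaultdict(list)
    for e in events:
        traces[e["SubmissionId"]].append(e)
    for trace in traces.values():
        trace.sort(key=lambda e: e["Timestamp"])
    return dict(traces)

def is_valid_trace(trace):
    finals = [(e["Source"], e["Subsource"]) for e in trace
              if e["Status"] in FINAL_STATUSES]
    return len(finals) == len(set(finals))  # at most one final per step
\end{verbatim}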
\begin{table}[htb] \begin{center} \begin{tabularx}{\linewidth}{| l | c | X | } \hline \textbf{Status} & \textbf{Final} & \textbf{Meaning} \\ \hline Completed & Yes & The activity was completed successfully. \\ \hline Skipped & Yes & The activity was not needed for this submission and was skipped. \\ \hline Failed & Yes & The activity was completed with an error. The activity may be retried or the whole workflow may be stopped. \\ \hline In progress & No & The activity has been started, is running, and will finish later. \\ \hline \end{tabularx} \end{center} \caption{The common values of the status field of events} \label{tab:statuses} \end{table} \subsubsection{Challenges found in the production data} \label{sec:datachallenges} In section \ref{sec:eventtheory}, I considered the event log to be a totally ordered flat log. In this theoretical log, each event would have a clear case identifier and its location within the flat log would correspond to the real-world ordering of the events. In theory, this case identifier and the log order can be used to split the log into traces and further order the events in each trace. However, the production data proved to be more complex than that. % event hub ordering As mentioned earlier, the event hub processors work in several partitions to improve availability. Within a partition, the order of events is guaranteed to be stable with regard to events arriving at the event hub. However, there is no guarantee on the interleaving of the parallel processing of the partitions. When the events are processed by Jury, the order between the partitions may be different compared to the event arrival order. Thus, the event order (the row number) in the SQL database should not be fully trusted. Furthermore, network issues or other delays can affect event ordering. To combat this, each event is marked with a timestamp by the sender. With the timestamp the estimated ordering of events can be rebuilt when they are read from the SQL database. % clock skew between different systems However, the events are generated by separate systems in parallel. Each of these systems depends on its own hardware clock, and the clock synchronization between the systems is not guaranteed to be exact. This means that even the timestamps may include an unknown error of up to several minutes. Furthermore, some events may be lost because of bugs, unhandled exceptions, or outages in the system. This means that ordering the events by timestamp includes random noise, which needs to be considered. % Incomplete traces in the chosen ''slice'' (events missing from beginning/end) Another aspect to note is that when the event log data is used, the data being analyzed is always a ``slice'' of the full event log. This means that it contains events from a time slice starting at some time instance $t_s$ and ending at some later time $t_e$, as illustrated in figure \ref{fig:timeslice}. All events between these times are considered. A common use case would be to inspect the past 24 hours of events. In this case $t_e$ is the current time, and $t_s$ is 24 hours earlier than the current time. This results in the time slice containing a large number of events that belong to a trace that started before $t_s$ (case \textit{b} in figure \ref{fig:timeslice}). The result also contains traces that have yet to be completed (cases \textit{c} and \textit{d} in figure \ref{fig:timeslice}). These traces should be removed, since the beginning or the end of the trace is missing.
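Continuing the sketch shown earlier, one simple (and admittedly heuristic) way to drop traces cut off by the slice boundaries $t_s$ and $t_e$ is to require that a trace both starts and ends well inside the slice; the margin value is a made-up parameter for illustration, not something prescribed by the actual system:
\begin{verbatim}
# Sketch: keep only traces that fall comfortably inside the time slice;
# `margin` is an illustrative threshold, and the real criterion would also
# check for the known first and final workflow steps of a trace.
def traces_within_slice(traces, t_start, t_end, margin):
    kept = {}
    for submission_id, trace in traces.items():
        first = trace[0]["Timestamp"]
        last = trace[-1]["Timestamp"]
        if first >= t_start + margin and last <= t_end - margin:
            kept[submission_id] = trace
    return kept
\end{verbatim}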
Classifying a trace as anomalous like this requires an \emph{appropriate model} \cite{bezerra2009anomaly}, meaning some prior understanding is needed on how to discover whether a trace is invalid. \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{gfx/figures/timeslice.pdf} \caption{Time slice and validity of traces} \label{fig:timeslice} \end{figure} % resubmissions When talking about the process mining theory (see section \ref{sec:eventtheory}) it was assumed that each trace can be uniquely identified. This means splitting the traces from the flat event log and identifying the individual cases. For this specific system, each submission of a product has a unique \emph{submission identifier} (submission ID). This ID can be used to uniquely identify a trace. However, in practice this proved more complicated. Sometimes the application submission can be \textit{re-submitted} into the store. This is commonly done if some part of the workflow fails to complete. The events in such a re-submission will have the same submission ID as the original submission. This leads to a situation where the trace seems to restart from the beginning or from a different step and continue. These cases should be identified as anomalous. In the case of real-time data, only the latest submission for each ID should be considered, since the earlier submissions are rarely relevant to the user. Another big challenge comes from the real-time characteristics of the data. Because of agile practices and continuous delivery, the store systems are under constant change. This means that the underlying process model can change any time. The process model generated a day before may not reflect the current state anymore. The system should be flexible and should adapt to any changes in the underlying models. % invalid states Furthermore, the production data will also include traces that have failures or phases that are in progress. A trace with failed or erroneous steps should be considered anomalous, since a workflow is not guaranteed to progress normally after an error. Similarly, the traces include events that do not describe a final state, but only broadcast status. The most common type of event like this is an event with an ``In progress'' status. If an event is non-final, it means that the same process is still expected to fire a final event afterwards. These events should be handled differently in the modeling phase. All these characteristics require that the system uses some kind of pre-processing method that takes the slice of the real-world data and outputs an ordered and grouped set of traces with the anomalous traces removed. \subsection{Project timeline} \label{sec:timeline} % description of research timeline % weekly meetings with manager, PM I started preparing for the project in late 2016. The project started in Redmond, Washington, USA in early January 2017. While the main goal of the project was established early, the details needed some research. For this reason, an iterative process was set up. I held weekly meetings with my manager, as well as weekly project meetings together with my advisors and the team program manager. In addition, at the midway point I held three presentations to people from the different user groups to find more requirements and to get feedback. Because of this, the project was divided into two iterations. The weekly meetings were carried out through both iterations. The complete timeline of the project can be seen in figure \ref{fig:projecttimeline}.
\begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{gfx/figures/projecttimeline.pdf} \caption{Project timeline} \label{fig:projecttimeline} \end{figure} % - first iteration: supporting work, backend, graphing, initial UI % - explorative prototypes % - two UI proposals, graph and timeline I started the first iteration by building a prototype using a small test dataset of events exported from the database. The purpose of the prototype was to explore the graph generation algorithms and to build a proof of concept. After the prototype was validated by the team, I integrated it with the Jury system. I built two user interface proposals based on different types of presentation. The first one showed a directed graph describing the workflow model. The second interface showed the events on a scrollable timeline. See section \ref{sec:userinterface} for the detailed descriptions of these views. % - user meetings (brownbag, 3pp) After the first iteration I held three presentations, two of which included a feedback session with exploration of use cases. The first presentation was an open one to the higher management and the other teams. The second presentation was with the release managers to get feedback from them and to find more requirements. The third was a ``brown bag'' type of open meeting with other team members who interact with the Jury system. In these meetings I validated my prototype designs and collected feedback about the user interface. In the meetings, the timeline view was seen as more useful for most user groups. The only exception was the engineers, who found the graph view more useful for debugging. % - second iteration: user interface work, features to fit needs % - machine learning exploration % - graphing accuracy exploration (alt graph) In the second iteration, I spent more effort on the user interface, based on the meetings. I experimented with machine learning models and the process discovery algorithm. I also worked with the team to integrate the graphs with the email notification system in Jury so they could be used to send emails about delays observed in the system. By the end of the second iteration the project was fully integrated with Jury and was delivered to the users.
\documentclass[12pt]{article} \usepackage[hyphens,spaces,obeyspaces]{url} \usepackage{graphicx} \graphicspath{ {./images/} } \title{CMPUT 499 Literature Review} \author{Monica Bui} \begin{document} \begin{titlepage} \centering \large \vspace{1cm} CMPUT 499: Mining Software Repositories \\ \vspace{1cm} Literature Review \\ \vspace{1cm} Monica Bui \end{titlepage} % What is the issue of paper? Main goals and motivation behind paper + questions? Methods of research? Results? Conclusion with personal opinion (good/bad/limitations). % Literature review discusses published information in subject area % Focus on summarizing source and introducing my own understandings of topic \newpage \section{Introduction} % Give idea of the topic of the literature review, patterns + theme \paragraph{} Third-party software libraries and Application Programming Interfaces (APIs) offer a way for developers to use existing features and integrate them within their projects without having to re-invent the wheel. There are many such libraries to use for different programming languages and for different purposes, e.g., encryption, text communication, etc. Although there is a lot of flexibility when it comes to choosing the appropriate library, it is also difficult to figure out which one is best for your needs. There are a variety of factors involved, such as security and developer support, that will affect your project's overall functionality. In this review, we look at several different articles to help analyze various properties of libraries and to determine a reliable method of comparison between them. \section{Article Review} % Format by theme % Recommenders, Badges, API Comparisons by Metrics, Usage, etc. \subsection{Library Recommenders} \paragraph{} One problem is researching new analogical libraries to use that have similar functionalities to the ones you already know of. Chen et al.'s \cite{analogical} paper discusses their library recommender that consumes as input a list of libraries from community resources such as blogs and Q\&A sites like StackOverflow \cite{stackoverflow} and outputs a list of recommended libraries in a different language of choice that offer similar features to the input language argument. This is implemented by first mining tags from questions posted online on StackOverflow \cite{stackoverflow}. These tags are split into two knowledge bases: relational and categorical. Relational knowledge captures how pairs of tags are correlated to each other, e.g., Java and JUnit, while categorical knowledge consists of how tags are grouped into categories such as language, operating system, concept, or library. Both these bases are analyzed with NLP to ensure that data is extracted efficiently and sub-categorized correctly. Having these separate tag categories and relationships between tags often mentioned together provides a simpler way to recommend a new library. With the database in place, users can then search for recommendations through the resulting web application called \textit{SimilarTech} \cite{similartech}. While trying out the application myself, I found that searches would only yield results for libraries mentioned in specific contexts. Axios \cite{axios}, a popular Javascript HTTP API, should output expected recommendations like the Requests \cite{requests} module for Python, but the actual output printed was empty. Axios \cite{axios} itself is mentioned in many situations like REST, APIs, and HTTP requests, making it difficult to determine in what circumstance it should be recommended.
While the number of languages it can suggest libraries for is limited to 5, the precision metric is impressive, with 1 language at 81\% and with 5 at 67\%, demonstrating its potential to grow in the future. \paragraph{} Going back to the key problem of deciding on the right library to use, Uddin et al.'s \cite{opinerarticle} article highlights their approach to this by looking at personal developer opinions on different resources and how they affect the reader's decision. The sentiment behind these opinions can be used to indicate if it is a positive, questionable, or negative API to use. The tool created, Opiner \cite{opiner}, is a summarization engine that examines these opinions and evaluates their sentiment to see if the API should be recommended, displaying ratings on API traits in addition to organizing top opinions for both positive and negative sides. The backend of the application crawls through a Q\&A site, StackOverflow \cite{stackoverflow}, to extract answer information surrounding an API topic. This data is used to help discover when an API is referenced and what kinds of opinions are associated with an API by ``finding its nearest API mention'' \cite{opinerarticle} through an API name, link, or code said in the same sentence. This combination of data is then used to summarize common aspects developers care about, such as usability and performance, which were identified by surveying developers' interests. Opiner \cite{opiner} was evaluated through a small research study on 2 separate groups of participants to see if StackOverflow \cite{stackoverflow} alone would work for selecting an API or if using both StackOverflow and Opiner would be better. We see that developers are more confident in their selection with both tools versus just StackOverflow alone. For correctness, Opiner gives an overall 90\% precision on library recommendations, which demonstrates that this data is indeed reliable. A weakness is that although the evaluation of the study proves useful, having 100\% as a research study result for using both StackOverflow and Opiner is difficult to generalize to a larger population. \subsection{Github Badges} \paragraph{} With so much open source software to use, it is difficult to pinpoint which is worth your time to contribute to and/or integrate with your project. Trockman et al. \cite{githubbadges} look at the in-depth quality of a package by analyzing repository badges, e.g., Figure \ref{phpbadge}, that maintainers display on their README file. Trockman not only suggests that badges are signals that make internal software aspects more transparent but also that they ``may impact users' and contributors' decision making'' \cite{githubbadges}. Game-like elements known as \textit{Gamification} are not explicitly embedded into these badges but in reality motivate users to contribute higher quality code to increase the signal power displayed by these visuals. The key questions here are, \textit{What are the most common badges and what does displaying them intend to signal?} and \textit{To what degree do badges correlate with qualities that developers expect?} \cite{githubbadges}. To examine the impact of badges, Node Package Modules (npm) \cite{npm} is used as the research repository as it is the largest online collection of Javascript packages.
\begin{figure} \centering \includegraphics[width=\textwidth,height=6cm,keepaspectratio=true]{gnrephpbadges} \caption{ Example badges in an open source PHP repository \protect\cite{badgeimage} } \label{phpbadge} \end{figure} % Research method analysis Data is collected by mining all npm \cite{npm} packages and then keeping those with metadata and a public GitHub \cite{github} repository. They extract the badges through the Git history associated with the README file by matching the regular expression for a badge insertion, then further classifying them into specific categories such as quality assurance, popularity, dependencies, etc., which each have different signaling intentions. With their developer and maintainer survey insights, they develop sub-questions to validate whether the badges signal the intended qualities. For each type of signal (dependencies, popularity, test suite, and quality pull requests), they look at impact before and after badge adoption, with 8 months on both sides, through longitudinal analysis, statistical regressions to see how it correlates with their key questions, and any underlying indicators that the badge may not overtly express at first glance. % Results and conclusion The results suggest that displaying badges correlates highly with better code practices, specifically higher test coverage, updated dependencies, and an increase in code quality. However, overwhelming your repository with badges loses the intended signaling effect and thus turns away users, leading to a decrease in downloads, with 5 being the best number of badges to show off. Dynamic assessment signals that are more costly to produce are more reliable than static conventional badges that display trivial information easily found on the page. % My opinions With their survey-based research method that collected software metrics from contributors and maintainers, the low response rate of 15.3\% makes it challenging to generalize that these results will stay reliable in future studies. % look for main motivation behind project, goals, why they looked only at these prospects. % evidence for why this method is better \subsection{Metrics} \paragraph{} % API popularity and migration With software libraries evolving at a rapid speed, upgrading to the latest version can be cumbersome for developers as challenges of backwards incompatibility and deciding on new APIs for your project prove to be a problem. Hora et al. \cite{apiwave} discuss these 2 obstacles and implement a web application, \textit{apiwave} \cite{apiwavewebsite}, to mitigate this and highlight API popularity and migration in depth, with the former measured by the number of users using the service. The research method takes the Git history of a repository as input and outputs information about the API's popularity metric and method of migration, with code snippets as examples of this process. Based on insertions and deletions in the resulting Git diff, we can detect which lines have been modified for migration and update the popularity statistic by 1, suggesting that the old library lost a follower and the new library gained one. Mining import statement diffs helps determine which APIs have been added or removed. The client side of \textit{apiwave} \cite{apiwavewebsite} can present a library at many levels, including specific interface lookup, e.g., java.util vs java.util.Map.
\textit{Apiwave} also displays addition and deletion statistics, an overall popularity graph, and code snippets that highlight recommended libraries to transfer over to based on diffs. These results indicate different popularity trends and, in testing, can answer real-life StackOverflow \cite{stackoverflow} questions regarding API migration in line with actual human answers. This helps to recommend the best software for developers to use, based on compatibility through migration and the number of others who support this -- popularity. Even though there is a high number of visitors and page views, the tool is currently limited to the Java language, despite the many languages out there, such as Javascript and Python, that host readily available open source libraries. \paragraph{} Selecting the best suited library for your work is often difficult as you need to make sure that it is not only reliable but also contains your desired features. Mora et al. \cite{metrics} create a method of comparison between libraries on a set of metrics to help developers compare and contrast different software. From the given motivational example where a developer had trouble picking an API because of characteristics they were unaware of, such as lack of community support, Mora suggests that having a single website that presents these quantitative metrics would lead to a simpler library decision. Key questions to look at are how to compile this dataset and how to find metrics that express the underlying quality of the library that developers care about. Data is collected from Stack Overflow \cite{stackoverflow}, a Q\&A resource, along with source code and issues found on GitHub \cite{github}, all to provide diversified content that shows API qualities from all angles. For each metric, a detailed description is given on what it portrays, how it is extracted using tools such as BOA \cite{boa} and issue trackers or calculated using previous related research from other authors, and how it affects developers' decisions. Specifically, Mora et al. \cite{metrics} examine in depth 3 metrics that created different challenges not mentioned in previous articles, using a set of 60 Java libraries from different domains as a guideline. BOA \cite{boa} is used to measure \textit{popularity} such that if a Java file contains an import statement, then we can conclude that the project is using that particular API. Based on the number of import statements, we can see which libraries are the most popular for each domain, e.g., JUnit for testing. For \textit{migration}, Mora et al. \cite{metrics} use another author's dataset to conclude that even if a library is very popular, that does not mean migration is also simpler, as shown with JUnit. With \textit{Performance and Security}, the number of related bugs helps to measure whether it is a reliable API. However, bug reports are difficult to categorize, as the bug description itself may not be clearly stated in the report. Supervised classifiers are then used to label these bugs properly. The web application displays detailed information all in one place to showcase the different quantitative metrics for each API. Although there is no audience feedback within the article, Mora et al. \cite{metrics} give specific future prospects on how to improve the application, looking into survey feedback and dynamic updates to the information being displayed.
The study is limited in how far it can be generalized to show that quantitative measures truly work, as no testing is specified in the article. \section{Conclusion} % Summarize key points % Decide based on results from article, the method of research to dive into with my project \paragraph{} Deciding between different APIs in various libraries and frameworks proves to be challenging. Chen et al. \cite{analogical} and Uddin et al. \cite{opinerarticle} look at explicitly recommending libraries through their web applications, with the latter targeting opinion sentiment. Trockman et al. \cite{githubbadges} examine the impact of GitHub \cite{github} badges and whether they influence developer choices. Finally, Hora et al. \cite{apiwave} and Mora et al. \cite{metrics} analyze quantitative statistics on API metrics, with the former focusing on popularity and migration. \paragraph{} For future software metric research based on popular metrics found by Mora et al. \cite{metrics} and Trockman et al. \cite{githubbadges}, we hope to create high-quality assessment GitHub \cite{github} badges based on the most popular metrics to help recommend the best libraries for users at first glance at the README file. Spending more effort on these assessment badges in addition to taking the most popular metrics will ensure that the impact on code quality increases but does not overload the user with too much information on the page. \newpage \bibliographystyle{IEEEtran} \bibliography{literature_review} \end{document}
\documentclass[compress,mathserif,xcolor=dvipsnames,svgnames,aspectratio=169]{beamer} %% $%!TEX program = xelatex \usetheme{Algiers} %%% My favorite packages \usepackage[default]{sourcesanspro} % \usepackage[default]{raleway} \usepackage[T1]{fontenc} \usefonttheme[onlymath]{serif} % \usepackage{fontspec} % \usepackage{xunicode} %Unicode extras! % \usepackage{xltxtra} %Fixes % \usepackage{Ubuntu} % \usepackage[final,expansion=true,protrusion=true,spacing=true,kerning=true]{microtype} % better font output %%% Really nice for tables % \usepackage{booktabs} % \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} % \usepackage{float} %%% Citations % \usepackage{natbib} % \renewcommand{\bibsection}{\subsubsection*{\bibname } } %prevents natbib from adding a section \usepackage[english]{babel} % \usepackage[utf8]{inputenc} % It puts a simplistic plain slide with the section name % at the beginning of each section \usepackage{nameref} \makeatletter \newcommand*{\currentname}{\@currentlabelname} \makeatother \AtBeginSection[] { \plaintitle{\currentname} } % Take this off when using the template \usepackage{blindtext} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{\fontfamily{kurier}\selectfont{The title of the talk}\\\medskip \large\textcolor{beautyblue}{ \textbf{(with potentially a subtitle\ldots)}} \medskip} \author[{Djalel Benbouzid}]{\textcolor{grizoo}{Djalel Benbouzid}} \date[someday\ldots]{Someday}%{1\up{er} juillet 2010} \institute{Laboratoire de l'Accelerateur Lineaire} \begin{frame}[plain, noframenumbering] \titlepage \end{frame} \section{Section} % \plaintitle{Section} \begin{frame}[c] \frametitle{Slide 1} \begin{itemize} \item \textbf{Stuff:} description \item \textbf{Machin:} avec truc \end{itemize} \end{frame} \begin{frame}[c] \framesubtitle{Slide}{With subtitle} \blindtext \end{frame} \begin{frame}[c] \framesubtitle[1cm]{Slide}{With shifted subtitle (2)} \blindmathtrue \blindtext \end{frame} \begin{frame}[c] \frametitle[1cm]{Slide} \blinditemize \end{frame} \begin{frame}[c] \frametitle[1cm]{Slide} \blinddescription \end{frame} \begin{frame}[c] \frametitle[1cm]{Slide} \blindenumerate \end{frame} % A trick for having back slides while ending the slide counter % on the conclusion slide \appendix \newcounter{finalframe} \setcounter{finalframe}{\value{framenumber}} % Geeky ending \begin{frame}[c,plain,fragile] %fragile \begin{python} feedback = raw_input( 'Questions ?' ) if '?' in feedback: if have_answer(): give_answer() else: pretend_the_question_is_ill_posed() else: print 'Thanks for your attention.' \end{python} % Or simply... %\begin{center} %\LARGE{Thank you} %\end{center} \end{frame} % References % \begin{frame}[allowframebreaks, plain] % \bibliographystyle{apalike} % \bibliography{} % \end{frame} \setcounter{framenumber}{\value{finalframe}} \end{document}
{ "alphanum_fraction": 0.7199184228, "avg_line_length": 22.4580152672, "ext": "tex", "hexsha": "afc524c1a6684ceae59e5c825603dd2528b0bc14", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z", "max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_forks_repo_path": "200+ beamer 模板合集/Algiers-beamer-template-master(带进度条的简洁)/slides.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_issues_repo_path": "200+ beamer 模板合集/Algiers-beamer-template-master(带进度条的简洁)/slides.tex", "max_line_length": 109, "max_stars_count": 13, "max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_stars_repo_path": "200+ beamer 模板合集/Algiers-beamer-template-master(带进度条的简洁)/slides.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z", "num_tokens": 923, "size": 2942 }
% This is a template for BU-ECE Technical Report. % % Depending on report content and author preference, a BU-ECE report may be % in one of the two following styles: % % - genuine report based on ``report'' style, i.e., with chapters, much like % a thesis; can be single- or double-sided, % % - report based on ``article'' style, i.e., with no chapters (only sections, % subsections, etc.), much like a journal or conference paper; can be % single- or double-sided. % ===================================================================== %\documentclass[12pt]{report} %Single-sided report style (chapters) %\documentclass[12pt,twoside]{report} %Double-sided report style (chapters) %\documentclass[12pt]{article} %Single-sided article style (no chapters) \documentclass[12pt,twoside]{article} %Double-sided article style (no chapters) \usepackage{bu_ece_report} % In case an adjustment of vertical or horizontal margins is needed % due to particular LaTeX/dvips or OS installation, you can uncomment % and edit the following definitions. % ------------------------------------------------------------------- %\topmargin 0.00 in %\oddsidemargin 0.50 in %\evensidemargin 0.00 in \begin{document} % Definitions. % ------------ \buecedefinitions% {A VERY VERY VERY LONG LONG LONG TITLE TITLE TITLE TITLE OF THE REPORT} {AN ABBREVIATED TITLE FOR RUNNING HEADERS} {FullAuthor1, FullAuthor2 and FullAuthor3} {Nov.\ 18, 2014} {2014-6} % Number of the report (four year digits and number) % Box with title to fit the opening in the cover % (adds an empty page in double-sided printing mode). % --------------------------------------------------- \buecereporttitleboxpage % Title page % (adds an empty page in double-sided printing mode). % --------------------------------------------------- \buecereporttitlepage % Special page, e.g., if the report is restricted or % to whom it is dedicated, etc., otherwise skip. % (adds an empty page in double-sided printing mode). % --------------------------------------------------- \bueceprefacepage{Here comes a special preface page. For example, if the report is restricted, then a suitable note can be included. This page can also be used to indicate to whom the document is dedicated, etc.} % Report summary; max. 1 page. % (adds an empty page in double-sided printing mode). % --------------------------------------------------- \pagenumbering{roman} \setcounter{page}{1} \buecereportsummary{Summary of the report. Maximum 1 page.} % Table of contents, list of figures and list of tables. % ``\bueceemptypage'' adds empty page in double-sided % printing mode and performs ``\clearpage'' in single-sided % mode. % ------------------------------------------------------ \tableofcontents\bueceemptypage \listoffigures\bueceemptypage \listoftables\bueceemptypage % Switch on running headers for the report: % odd pages - title (lowercase); if too long, use % the first few words followed by ``...'', % even pages - last names of the authors. % ------------------------------------------------------- \buecereportheaders % Introduction. % ------------- \pagenumbering{arabic} \setcounter{page}{1} \section{Introduction} % Article style %\chapter{Introduction} % Report style Here comes the introduction and reference to a paper \cite{Konr01cm}. % Following sections, subsections, etc. % ------------------------------------- \section{Starting chapter} % Article style %\chapter{Starting chapter} % Report style This is how the first chapter starts. 
\subsection{Early section} % Article style %\section{Early section} % Report style And this is the first section of this chapter. \section{Another chapter} % Article style %\chapter{Another chapter} % Report style \subsection{New section} % Article style %\section{New section} % Report style Section goes here ... \section{Final chapter} % Article style %\chapter{Final chapter} % Report style \subsection{Another section} % Article style %\section{Another section} % Report style Section with a figure (Fig.~\ref{fig:example}). % Plots (PostScript files) are included through the ``figure'' environment. % For more complicated figures use the minipage commaned (see LaTeX manual). % -------------------------------------------------------------------------- \begin{figure}[htb] % \begin{minipage}[t]{0.49\linewidth}\centering % \centerline{\epsfig{figure=figures/regbsdcod.eps,width=8.0cm}} \Ovalbox{\vbox to 1.5in{\vfill\hbox{\vtop{\hsize=2.5in\hfill}\hfill}\vfill}} \medskip \centerline{(a)} \end{minipage}\hfill % \begin{minipage}[t]{0.49\linewidth}\centering % \centerline{\epsfig{figure=figures/regbsdcod.eps,width=8.0cm}} \Ovalbox{\vbox to 1.5in{\vfill\hbox{\vtop{\hsize=2.5in\hfill}\hfill}\vfill}} \medskip \centerline{(b)} \end{minipage} \bigskip \begin{minipage}[t]{0.49\linewidth}\centering % \centerline{\epsfig{figure=figures/regbsdcod.eps,width=8.0cm}} \Ovalbox{\vbox to 1.5in{\vfill\hbox{\vtop{\hsize=2.5in\hfill}\hfill}\vfill}} \medskip \centerline{(c)} \end{minipage}\hfill % \begin{minipage}[t]{0.49\linewidth}\centering % \centerline{\epsfig{figure=figures/regbsdcod.eps,width=8.0cm}} \Ovalbox{\vbox to 1.5in{\vfill\hbox{\vtop{\hsize=2.5in\hfill}\hfill}\vfill}} \medskip \centerline{(d)} \end{minipage} % \caption{Block diagram: (a) one; (b) two; (c) three, and (d) four.} \label{fig:example} \end{figure} % Bibliography. % ------------- \parskip=0pt \parsep=0pt \bibliographystyle{ieeetrsrt} % Important: substitute your BiBTeX (*.bib) files below. % ------------------------------------------------------ \bibliography{strings,konrad,manuals} \end{document}
{ "alphanum_fraction": 0.6348411602, "avg_line_length": 34.0705882353, "ext": "tex", "hexsha": "eb824ac76ce8674022ed094e5e8c699a7b4f332e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a7332e3986bef5968658a6b54c194fecbd9038f9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "steinwaywhw/pandoc-templates", "max_forks_repo_path": "bu_ece_tr/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a7332e3986bef5968658a6b54c194fecbd9038f9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "steinwaywhw/pandoc-templates", "max_issues_repo_path": "bu_ece_tr/report.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a7332e3986bef5968658a6b54c194fecbd9038f9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "steinwaywhw/pandoc-templates", "max_stars_repo_path": "bu_ece_tr/report.tex", "max_stars_repo_stars_event_max_datetime": "2018-09-28T19:06:25.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-28T19:06:25.000Z", "num_tokens": 1569, "size": 5792 }
% !TeX root = ../solution.tex
\hypertarget{he22.teaser}{%
\chapter{Teaser Challenge}\label{he22.teaser}}

\begin{figure}
\includegraphics[width=100mm]{00teaser/banner.jpg}
\end{figure}

\begin{center}
CraCC this, you HAXXOR!

\verb+a4a9fefcfefeb7b8fff8bfa9bee1a8fca2ffedb1+
\end{center}

\section{Solution}\label{he22.01-solution}

HAXXOR $\rightarrow$ XOR encoding

\noindent CraCC hints at 0xCC as the key to XOR with:

\begin{minted}{python}
msg = 'a4a9fefcfefeb7b8fff8bfa9bee1a8fca2ffedb1'
res = ''
for i in range(len(msg)//2):
    # take each hex byte of the ciphertext and XOR it with the key 0xCC
    s = int(msg[2*i:2*i+2], 16)
    c = s ^ 0xcc
    res += chr(c)
    print(f'{s} -> {c} {chr(c)}')
print(res)
\end{minted}

\noindent The flag is \verb+he2022{t34ser-d0n3!}+.
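\noindent Alternatively, the same decoding can be written more compactly with Python's
built-in \verb+bytes.fromhex+. This is just a sketch of the same idea, not the script
used during the event:

\begin{minted}{python}
msg = 'a4a9fefcfefeb7b8fff8bfa9bee1a8fca2ffedb1'
# decode the hex string to raw bytes, then XOR every byte with the key 0xCC
flag = bytes(b ^ 0xCC for b in bytes.fromhex(msg))
print(flag.decode())  # prints: he2022{t34ser-d0n3!}
\end{minted}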
{ "alphanum_fraction": 0.6971428571, "avg_line_length": 20, "ext": "tex", "hexsha": "eb47349229ddcfca48a484ba3850b9844def8361", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dfac11abb3051af657ed3384c3c389c14a40c10e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tbrup/ctf-writeups", "max_forks_repo_path": "HackyEaster/he2022/00teaser/teaser.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dfac11abb3051af657ed3384c3c389c14a40c10e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tbrup/ctf-writeups", "max_issues_repo_path": "HackyEaster/he2022/00teaser/teaser.tex", "max_line_length": 52, "max_stars_count": null, "max_stars_repo_head_hexsha": "dfac11abb3051af657ed3384c3c389c14a40c10e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tbrup/ctf-writeups", "max_stars_repo_path": "HackyEaster/he2022/00teaser/teaser.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 279, "size": 700 }
\section{Testing and evaluation}
In this section we discuss the testing procedure of this course and the grading criteria.

\subsection{Overall description}
This module is assessed through a series of practical assignments that are checked in an oral assessment, in which students have to explain how and why their solutions work. These assignments can be found on N@tschool. \\

Foreword and notes:
\begin{itemize}
\item The practical assignments determine the final grade.
\item The practical assignments are made up of elements of a provided code base which are either incomplete or wrongly built. The student's task is to extend or fix these elements.
\item The practical assignments can be made in pairs. The oral assessment is individual.
\item The practical assignments must contain extensive, individually written documentation.
\end{itemize}
\ \\
This manner of examination is chosen for the following reasons:
\begin{itemize}
\item By reading the existing sources, students must read and reason about code (learning goals \textit{analysis} and \textit{advice}).
\item By correcting or extending the sources, students must write code (learning goals \textit{design} and \textit{realisation}).
\item By writing documentation, students must communicate about their code (learning goal \textit{communication}).
\end{itemize}
\ \\
The grade of each practical assignment is determined by:
\begin{itemize}
\item The correctness of the underlying physical and approximation rules (60\%).
\item Completeness and clarity of the documentation (40\%).
\end{itemize}

\subsection{Assignments}

\paragraph{Assignment 1 - linear motion} Students must define an Euler-integrated simulation over position, velocity (or linear momentum), and force; a minimal sketch of such an Euler update step is given at the end of this section.

\paragraph{Assignment 2 - rotational motion} Students must define an Euler-integrated simulation over rotation, angular velocity (or angular momentum), and torque.

\paragraph{Assignment 2B - Gram-Schmidt ortho-normalization} Students must define a Gram-Schmidt ortho-normalization.

\paragraph{Assignment 2C - quaternions} Students must replace rotation matrices with quaternions.

\paragraph{Assignment 3 - Runge-Kutta integration} Students must replace the Euler integration with a Runge-Kutta integration of order 4.

\subsection{Grades}
Assignments 1, 2, and 3 are obligatory; with these assignments alone the maximum grade is a seven. The other assignments are optional and each adds one and a half points to the grade.

\begin{tabularx}{\textwidth}{|>{\columncolor{lichtGrijs}} p{3cm}|X|}
\hline
\textbf{Assignments} & \textbf{Grade value} \\
\hline
1, 2, 3 & 7 \\
\hline
2B & 1.5 \\
\hline
2C & 1.5 \\
\hline
\end{tabularx}

\subsection{Deadlines}
The assignments must be handed in, printed on paper, before the end of the last lecture in week 8 of the period. The resit (\textit{herkansing}) consists of handing in the assignments before the end of week 1 of the following period (that is, right after the summer holiday).

\subsection{Feedback}
It is possible to discuss the assignments (and their evaluation) during the practical sessions, or during week 10 of the period (the same holds for the resit, but with respect to the following period).
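\subsection{Illustration: a single Euler step}
To make explicit what ``an Euler-integrated simulation over position, velocity, and force'' amounts to, the sketch below shows one possible shape of a single Euler update step. It is only an illustration: the names, the fixed time step \texttt{dt}, and the choice of Python are placeholders, not part of the official assignment hand-out.

\begin{verbatim}
# One explicit Euler integration step for a single body
# (illustrative sketch only; names and dt are placeholders).
def euler_step(position, velocity, force, mass, dt):
    acceleration = force / mass              # Newton's second law
    position = position + velocity * dt      # integrate velocity into position
    velocity = velocity + acceleration * dt  # integrate acceleration into velocity
    return position, velocity
\end{verbatim}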
{ "alphanum_fraction": 0.7618904275, "avg_line_length": 48.1449275362, "ext": "tex", "hexsha": "78efefd1604c6879a8200d801f29a945694caa9f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "410b0064f541474f102a3037866e625725fed4c5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hogeschool/TINWIS01-7", "max_forks_repo_path": "Modulewijzer/tex/Toetsing.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "410b0064f541474f102a3037866e625725fed4c5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hogeschool/TINWIS01-7", "max_issues_repo_path": "Modulewijzer/tex/Toetsing.tex", "max_line_length": 273, "max_stars_count": null, "max_stars_repo_head_hexsha": "410b0064f541474f102a3037866e625725fed4c5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hogeschool/TINWIS01-7", "max_stars_repo_path": "Modulewijzer/tex/Toetsing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 837, "size": 3322 }
\section{Exercise 03} \subsection{} \begin{frame} \frametitleTC{Direct synthesis for disturbance rejection} \framesubtitleTC{In fact, slightly more than a remark} \myPause \begin{itemize}[<+-| alert@+>] \item The loop and the closed-loop transfer functions are tied to one another: since \begin{displaymath} G_{yw}(z) = \frac{L(z)}{1+L(z)}, \qquad G_{yd}(z) = \frac{1}{1+L(z)}, \end{displaymath} \item we have \begin{displaymath} L(z) = \frac{G_{yw}(z)}{1-G_{yw}(z)}, \quad L(z) = \frac{1-G_{yd}(z)}{G_{yd}(z)}, \quad G_{yw}(z)+G_{yd}(z) = 1. \end{displaymath} \item As a consequence, \begin{displaymath} G_{yw}(z) = \frac{1-\alpha}{z-\alpha} \quad \Rightarrow \quad G_{yd}(z) = 1-\frac{1-\alpha}{z-\alpha} = \frac{z-1}{z-\alpha} \end{displaymath} and we are just back to the tracking problem. \end{itemize} \end{frame} \begin{frame} \frametitleTC{Direct synthesis for disturbance rejection} \framesubtitleTC{In fact, slightly more than a remark} \myPause \begin{itemize}[<+-| alert@+>] \item We can revisit the previous exercise focusing on the disturbance response. \item In doing so, it is interesting to also look at the behaviour of the \TC{control signal}\\ (in our Modelica diagram, \texttt{C.y}). \item Observe how a faster rejection -- like a faster tracking -- requires a larger peak\\ in that signal (which is quite intuitive). \item As we shall see, this may lead to require something that the system -- or more\\ precisely, the \TC{actuator} -- cannot do (e.g., allocate more cores than those\\ aboard the machine). \item Incidentally, this means that \TC{architectures need sizing for fast enough\\ reaction to TIME-VARYING \emph{stimuli}, not just for a worst-case\\ CONSTANT load} --- hence one has to know about dynamics. \item \vfill Enough on this for the moment, please experiment at home \& report. \end{itemize} \end{frame}
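% A worked instance of the algebra on the previous slides, using the same
% target G_yw and the tuning parameter alpha already introduced in this exercise.
\begin{frame}
 \frametitleTC{Direct synthesis for disturbance rejection}
 \framesubtitleTC{The previous algebra, worked out}
 \myPause
 \begin{itemize}[<+-| alert@+>]
  \item Starting from the target
        \begin{displaymath}
          G_{yw}(z) = \frac{1-\alpha}{z-\alpha},
        \end{displaymath}
  \item the corresponding loop transfer function is
        \begin{displaymath}
          L(z) = \frac{G_{yw}(z)}{1-G_{yw}(z)}
               = \frac{\frac{1-\alpha}{z-\alpha}}{1-\frac{1-\alpha}{z-\alpha}}
               = \frac{1-\alpha}{z-1},
        \end{displaymath}
        i.e., a discrete-time integrator with gain $1-\alpha$;
  \item consistently, $G_{yd}(z)=\frac{z-1}{z-\alpha}$ has a zero in $z=1$,\\
        hence constant disturbances are rejected at steady state\\
        (and the faster the desired response -- the smaller $\alpha$ --\\
        the larger the integrator gain, hence the larger the control effort).
 \end{itemize}
\end{frame}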
{ "alphanum_fraction": 0.6536945813, "avg_line_length": 41.4285714286, "ext": "tex", "hexsha": "f244d4c3aa8d30bae223280e2ebdeb6b6768341f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "albertoleva/PID4CSE", "max_forks_repo_path": "slides/Unit-05/sections/04-PS02-ex03.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "albertoleva/PID4CSE", "max_issues_repo_path": "slides/Unit-05/sections/04-PS02-ex03.tex", "max_line_length": 96, "max_stars_count": 1, "max_stars_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "albertoleva/PID4CSE", "max_stars_repo_path": "slides/Unit-05/sections/04-PS02-ex03.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-19T16:38:10.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-19T16:38:10.000Z", "num_tokens": 594, "size": 2030 }
\documentclass[output=paper]{langsci/langscibook}
\label{AppendixA}
\title{Appendix: Questionnaire on reflexive constructions in the world's languages}
\label{Appendix}
\abstract{}
\author{Martin Haspelmath \lastand Katarzyna Janic}
\begin{document}
\maketitle

\textit{A short note on our terminology: the term \textbf{reflexivizer} refers to any specialized form that expresses coreference within a clause. By \textbf{specialized form} we understand a form which at least in certain conditions necessarily expresses the coreference meaning (even if non-coreference meanings are possible elsewhere). Reflexivizers can be dependent or non-dependent forms like reflexive (pro)nouns, reflexive argument markers, or reflexive voice markers. Languages that have not developed a specialized reflexivizer express coreference with the help of other linguistic forms, e.g. personal pronouns. In this context, we prefer to talk about a \textbf{non-reflexive form}.}

\section{Basic uses of reflexivizers}

\begin{itemize}
\item Describe the \textbf{personal pronouns} and \textbf{reflexive pronouns} of the language. If the language also has \textbf{verbal reflexivizers} (reflexive argument markers or reflexive voice markers), give a brief description of the relevant verbal marking patterns.
\item If the language uses a \textbf{reflexive pronoun} to express coreference, does it have distinctions such as the following?

a. person

b. case

c. number

d. obviation

e. gender

\item How is \textbf{coreference of the agent subject with the patient referent in object function} expressed in the language? Give examples like (a)-(d) and indicate the form (if any) expressing coreference.
\end{itemize}

a. I saw myself in the mirror.\footnote{Feel free to change the provided examples here and elsewhere in the questionnaire, if for some reason they are problematic in your language.}

b. My friend hates himself.

c. She praised herself.

d. The man killed himself.

\begin{itemize}
\item If the language uses a specialized reflexivizer to express coreference, is this form \textbf{obligatory or optional}?
\item If the language has \textbf{several different reflexivizers} used under different conditions, what determines their distribution?
\end{itemize}

a. If they are in a complementary distribution, define the conditions under which each reflexivizer is selected.

b. If the forms can occur in the same environments, can you think of any context in which one form is preferred over another?

\textit{Some languages use a range of different forms to express agent-patient coreference. For instance, Dutch employs reflexive pronouns only in the third person; coreference with the first and second person is expressed by ordinary personal pronouns.}

\begin{itemize}
\item Is the use of reflexivizers subject to specific conditions relating to person or number? If it is, define them and provide relevant examples.
\end{itemize}

\section{Specialized reflexive form in other functions}

\begin{itemize}
\item Does the reflexive form have other uses? Specify and provide relevant examples.
\end{itemize}

\section{Contrast between introverted and extroverted verbs}

\textit{Transitive verbs that allow a human object can be divided into introverted and extroverted classes (\citealt{Haiman1980}: 803; \citealt{KönigSiemund1999}: 61). Extroverted actions express socially antagonistic events such as ‘kill’, ‘kick’, ‘attack’, ‘hate’ and ‘criticize’, whereas introverted actions include body care (or grooming) actions exemplified by ‘wash’, ‘shave’, ‘dress’, ‘bathe’, and a few others such as ‘defend oneself’.}

\begin{itemize}
\item How are autopathic actions with extroverted verbs expressed in the language? Give examples like (a)-(c) and indicate the form (if any) responsible for the coreference interpretation.
\end{itemize}

a. The dog bit itself.

b. The girl hates herself.

c. The politician criticized himself.

\begin{itemize}
\item How are autopathic actions with introverted verbs expressed in your language? Translate the examples (a)-(c) and indicate the form (if any) responsible for the coreference interpretation.
\end{itemize}

a. The dog was washing himself.

b. The girl washed.

c. He shaved.

\section{Contrast between body-part and whole-body actions}

\textit{Some languages encode body-part actions (combing hair, brushing teeth, clipping nails) similarly to those involving whole-body actions (wash, bathe, get tented) i.e. with the help of the same reflexive form (e.g. French ‘se peigner’ ‘to comb one’s hair’ vs. ‘se laver’ ‘to wash’). In other languages, body-part and whole-body actions are treated apart, the former being expressed through a transitive construction with the body-part expressed as object (e.g. English: ‘I comb my hair.’ vs. ‘I washed.’). Moreover, some languages specify the body-part object in addition to the reflexive form (e.g. French: ‘Il se lave les mains’ ‘he washes his hands’). If your language contrasts body-part actions with whole-body actions in coding, proceed to point 20, otherwise skip it.}

\begin{itemize}
\item How are body-part actions expressed in your language: (i) through coreference, or (ii) through a transitive construction with the body-part expressed as object? Translate (a)-(c).
\end{itemize}

a. The men shaved their beard.

b. She scratched her back.

c. He brushed his teeth.

\begin{itemize}
\item If your language employs a specialized reflexive form to express body-part actions, can the body-part be expressed as well?
\item How are whole-body actions expressed in your language? Translate the examples (a)-(c) and indicate the form (if any) responsible for the coreference interpretation.
\end{itemize}

a. The men got dressed.

b. She washed.

c. I bathed.

\section{Reflexive pronoun in subject position}

\begin{itemize}
\item Except for a few cases (e.g. Georgian), languages do not allow reflexive pronouns in subject function. Does your language support this crosslinguistic observation? If it does not, provide a relevant example.
\end{itemize}

\subsection{Coreference of the subject with various semantic roles}

\begin{itemize}
\item (a) \textbf{\textit{Possessor}}. How is coreference of the subject with a possessor referent expressed in your language? Translate (a)-(c) and indicate the form (if any) triggering the coreference meaning.
\end{itemize}

a. She\textsubscript{1} took her\textsubscript{1} umbrella.

b. John\textsubscript{1} reads his\textsubscript{1} book.

c. The women\textsubscript{1} swept their\textsubscript{1} rooms.

14. (b) Can you contrast examples from \REF{ex:key:14a} with those provided in \REF{ex:key:14b} wherein the referent of the possessor is not co-referred with a subject?

a. She\textsubscript{1} took her\textsubscript{2} umbrella.

b. John\textsubscript{1} reads his\textsubscript{2} book.

c. The women\textsubscript{1} swept their\textsubscript{2} rooms.

\begin{itemize}
\item (a) \textbf{\textit{Locative}}.
How is coreference of the subject with a spatial referent expressed in your language? Translate (a)-(c) and indicate the form (if any) triggering the coreference meaning. \end{itemize} a. She\textsubscript{1} saw a snake beside her\textsubscript{1}. b. John\textsubscript{1} put a book next to him\textsubscript{1}. c. She\textsubscript{1} left the traces behind her\textsubscript{1}. 15. (b) Contrast examples from \REF{ex:key:15a} with those in provided \REF{ex:key:15b} wherein the spatial referent is not co-referred with a subject. a. She\textsubscript{1} saw a snake beside her\textsubscript{2}. b. John\textsubscript{1} put a book next to him\textsubscript{2}. c. She\textsubscript{1} left the traces behind her\textsubscript{2}. \begin{itemize} \item (a) \textbf{\textit{Benefactive}}. How is coreference of the subject with a beneficiary referent expressed in your language? Translate (a)-(c) and indicate the form (if any) triggering coreference. \end{itemize} a. She bought a book for herself. b. The boy cooked a dinner for himself. c. They built a house for themselves. 16. (b) Contrast examples from \REF{ex:key:16a} with those provided in \REF{ex:key:16b} wherein the referent of beneficiary is not co-referred with a subject. a. She bought a book for her. b. He cooked a dinner for him. c. You built a house for them. \begin{itemize} \item (a) \textbf{\textit{Recipient}}. How is coreference of the subject with a recipient referent expressed in your language? Translate (a)-(c) and indicate the form (if any) triggering the coreference meaning. \end{itemize} a. John talked to himself. b. They sent a postcard to themselves. c. The girl gave herself a present. 17. (b) Contrast examples from \REF{ex:key:17a} with those provided in \REF{ex:key:17b} wherein the referent of recipient is not co-referred with a subject. a. John talked to him. b. They sent a postcard to them. c. The girl gave a present to her. \subsection{Coreference between non-subject arguments} \begin{itemize} \item How is coreference between two non-subject arguments expressed in a single clause? Translate (a)-(c) and indicate the form (if any) responsible for the coreference interpretation. \end{itemize} a. She told us\textsubscript{1} about ourselves\textsubscript{1}. b. He spoke with John\textsubscript{1} about himself\textsubscript{1}. c. John showed Mary\textsubscript{1} a picture of herself\textsubscript{1}. \subsection{Contrast between coreference and disjoint reference} \begin{itemize} \item Contrast the subject-coreference pronoun in object position (examples’) with disjoint reference pronoun in object position (examples’’). Which pronouns does your language use to code these two types of situations? \end{itemize} a' The man saw himself. vs. a” The man saw him. b' The woman criticized herself. vs. b” The woman criticized her. c' He admired himself. vs. c” He admired him. \subsection{Contrast between object and nominal adpossessor} \begin{itemize} \item Contrast the subject-coreferential pronoun in object position (examples‘) with the subject-coreferential pronouns in adnominal possessive position (examples’’). Which pronouns does your language use to code these two types of situations? \end{itemize} a’ She\textsubscript{1} killed herself\textsubscript{1}. vs. a’’ She\textsubscript{1} killed her\textsubscript{1/2} lover. b' He\textsubscript{1} admires himself\textsubscript{1}. vs. b’’ He\textsubscript{1} admires his\textsubscript{1/2} boss. c' She\textsubscript{1} saw herself\textsubscript{1}. vs. 
c’’ She\textsubscript{1} saw her\textsubscript{1/2} sister. \subsection{Contrast between exact and inclusive coreference} \begin{itemize} \item Contrast exact coreference (examples‘) with inclusive coreference (examples”). Which pronouns does your language use to code these two types of situations? \end{itemize} a’ She\textsubscript{1} admires herself\textsubscript{1}. vs. a’’ She\textsubscript{1} admires herself and the others\textsubscript{1+X}. b’ He\textsubscript{1} criticized himself\textsubscript{1}. vs. b” He\textsubscript{1} criticized himself and the others\textsubscript{1+X}. c’ He\textsubscript{1} defended himself\textsubscript{1}. vs. c” He\textsubscript{1} defended himself and the others\textsubscript{1+X}. \subsection{Long-distance coreference} \begin{itemize} \item How is coreference of the subject across clauses expressed in your language? Translate (a)-(c) and indicate the form (if any) responsible for the coreference interpretation. \end{itemize} a. She\textsubscript{1} thought that she\textsubscript{1} had enough money. b. The boy\textsubscript{1} said that he\textsubscript{1} must go home. c. We\textsubscript{1} said that we\textsubscript{1} worked the whole day. \sloppy\printbibliography[heading=subbibliography,notkeyword=this] \end{document}
{ "alphanum_fraction": 0.7593153714, "avg_line_length": 31.9659685864, "ext": "tex", "hexsha": "a02605eded8993573283b88ba6bf7ebe43897965", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/284", "max_forks_repo_path": "chapters/AppendixQuestionnaire.tex", "max_issues_count": 1593, "max_issues_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_issues_repo_issues_event_max_datetime": "2021-06-30T00:15:49.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-21T14:46:34.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/284", "max_issues_repo_path": "chapters/AppendixQuestionnaire.tex", "max_line_length": 780, "max_stars_count": null, "max_stars_repo_head_hexsha": "67efe767d14512b2f96d2d324e854100015c8480", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/284", "max_stars_repo_path": "chapters/AppendixQuestionnaire.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3304, "size": 12211 }
\section{Related Work}
\label{related}

This section discusses different classes of related work.

\tightparagraph{Congestion control for DCNs}
\crs{Rather than proposing a new congestion control algorithm, our work investigates whether congestion control can be moved to the vSwitch. Thus, many of the following schemes are complementary.}
DCTCP~\cite{alizadeh2011data} is a seminal TCP variant for datacenter networks.
Judd~\cite{judd2015nsdi} proposed simple yet practical fixes to enable DCTCP in production networks.
TCP-Bolt~\cite{stephens2014practical} is a variant of DCTCP for PFC-enabled lossless Ethernet.
%DCQCN~\cite{zhu2015congestion} is a rate-based congestion control scheme implemented in NICs
%for QCN-based~\cite{qcn} RDMA deployments.
DCQCN~\cite{zhu2015congestion} is a rate-based congestion control scheme (built on DCTCP and QCN) to support RDMA deployments in PFC-enabled lossless networks.
TIMELY~\cite{mittal2015timely} and DX~\cite{lee2015accurate} use accurate network latency as the signal to perform congestion control.
TCP ex Machina~\cite{winstein2013tcp} uses computer-generated congestion control rules.
PERC~\cite{jose2015high} proposes proactive congestion control to improve convergence.
ICTCP's~\cite{wu2010ictcp} receiver monitors incoming TCP flows and modifies~\rwnd{} to mitigate the impact of incast, but this cannot provide generalized congestion control like~\acdc{}.
Finally, efforts~\cite{dell-toe,chelsio-toe} to implement TCP Offload Engine (TOE) in specialized NICs are not widely deployed for reasons noted in~\cite{mogul2003tcp,linux-toe}.
%~\acdc{} is designed to work with commodity NICs.

\tightparagraph{Bandwidth allocation}
Many bandwidth allocation schemes have been proposed.
Gatekeeper~\cite{rodrigues2011gatekeeper} and EyeQ~\cite{jeyakumar2013eyeq} abstract the network as a single switch and provide bandwidth guarantees by managing each server's access link.
Oktopus~\cite{Ballani2011oktopus} provides fixed performance guarantees within virtual clusters.
SecondNet~\cite{Guo2010Secondnet} enables virtual datacenters with static bandwidth guarantees.
Proteus~\cite{Xie2012Proteus} allocates bandwidth for applications with dynamic demands.
Seawall~\cite{shieh2011sharing} provides bandwidth proportional to a defined weight by forcing traffic through congestion-based edge-to-edge tunnels.
NetShare~\cite{Lam2012NetShare} utilizes hierarchical weighted max-min fair sharing to tune relative bandwidth allocation for services.
FairCloud~\cite{Popa2012Faircloud} identifies trade-offs in minimum guarantees, proportionality and high utilization, and designs schemes over this space.
Silo~\cite{jang2015silo} provides guaranteed bandwidth, delay and burst allowances through a novel VM placement and admission algorithm, coupled with a fine-grained packet pacer.
As discussed in~\cref{background},
~\acdc{} is largely complementary to these schemes because it is a transport-level solution.

\tightparagraph{Rate limiters}
SENIC~\cite{niranjan2013fastrak} identifies the limitations of NIC hardware rate limiters (\ie{}, not scalable) and software rate limiters (\ie{}, high CPU overhead) and uses the CPU to enqueue packets in host memory and the NIC.
Silo's pacer injects void packets into an original packet sequence to achieve pacing.
FasTrack~\cite{niranjan2013fastrak} offloads functionality from the server into the switch for certain flows.
~\acdc{} prevents TCP flows from sending in the first place and can be used in conjunction with these schemes.

\tightparagraph{Low latency DCNs}
Many schemes have been proposed to reduce latency in datacenter networks.
HULL~\cite{alizadeh2012less} uses phantom queues to leave bandwidth headroom to support low latency.
pFabric~\cite{alizadeh2013pfabric} is a clean-slate design which utilizes priority and minimal switch buffering to achieve low latency.
Fastpass~\cite{perry2014fastpass} uses a centralized arbiter to perform per-packet level scheduling.
QJUMP~\cite{qjump} uses priority queueing and rate limiting to bound latency.
Traffic engineering~\cite{al2010hedera,rasley2014planck} and load balancing~\cite{alizadeh2014conga,he2015presto,ghorbani2015micro} can also reduce latency.
Because~\acdc{} works on the transport level, it is largely complementary to these works.

\tightparagraph{Performance-enhancing proxies}
Several schemes improve end-to-end protocol performance via a middlebox or proxy~\cite{RFC3449,RFC3115,balakrishnan2008maelstrom,davern2011httpep,balakrishnan1995improving}.
\acdc{} fits into this class of works, but is unique in providing a mechanism to alter a VM's TCP congestion control algorithm by modifying the vSwitch.

\tightparagraph{Virtualized congestion control}
\crs{vCC~\cite{vcc} is a concurrently designed system that shares~\acdc{}'s goals and some of its design details. The paper is complementary in that it presents some items not addressed in this work, such as a more detailed analysis of the ECN-coexistence problem, an exploration of the design space, and a theoretical proof of virtualized congestion control's correctness. Our paper provides an in-depth design and thorough evaluation of a DCTCP-based virtualized congestion control algorithm on a 10 Gbps testbed.}
{ "alphanum_fraction": 0.822722074, "avg_line_length": 67.2564102564, "ext": "tex", "hexsha": "6457e431e0e548ee442d078de0a7ffcc25a5a77d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "keqhe/phd_thesis", "max_forks_repo_path": "acdctcp/related.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "keqhe/phd_thesis", "max_issues_repo_path": "acdctcp/related.tex", "max_line_length": 135, "max_stars_count": 2, "max_stars_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "keqhe/phd_thesis", "max_stars_repo_path": "acdctcp/related.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-20T14:28:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-27T08:03:16.000Z", "num_tokens": 1320, "size": 5246 }
\subsection{Seven Times Table} % (fold)
\label{sub:seven_times_table}

This program prints out the seven times table from 1 x 7 to 10 x 7. The description of the program is in Table \ref{tbl:program-creation-times-table}, the pseudocode in Listing \ref{lst:program-creation-seven-times-pseudo}, the C code in Listing \ref{lst:program-creation-seven-times-c}, and the Pascal code in Listing \ref{lst:program-creation-seven-times-pas}.

\begin{table}[h]
\centering
\begin{tabular}{l|p{10cm}}
  \hline
  \multicolumn{2}{c}{\textbf{Program Description}} \\
  \hline
  \textbf{Name} & \emph{Seven Times Table} \\
  \\
  \textbf{Description} & Displays the Seven Times Table from 1 x 7 to 10 x 7. \\
  \hline
\end{tabular}
\caption{Description of the Seven Times Table program}
\label{tbl:program-creation-times-table}
\end{table}

\pseudocode{lst:program-creation-seven-times-pseudo}{Pseudocode for Seven Times Table program.}{./topics/program-creation/examples/seven-times.txt}

\clearpage
\csection{\ccode{lst:program-creation-seven-times-c}{C Seven Times Table}{topics/program-creation/examples/seven_times.c}}

\passection{\pascode{lst:program-creation-seven-times-pas}{Pascal Seven Times Table}{topics/program-creation/examples/SevenTimesTable.pas}}

% subsection times_table (end)
\clearpage

\subsection{Circle Area} % (fold)
\label{sub:circle_area}

This program prints out the area of circles with different radii. The description of the program is in Table \ref{tbl:program-creation-circle-area}, the pseudocode in Listing \ref{lst:program-creation-circle-areas-pseudo}, the C code in Listing \ref{lst:program-creation-circle-areas-c}, and the Pascal code in Listing \ref{lst:program-creation-circle-areas-pas}.

\begin{table}[h]
\centering
\begin{tabular}{l|p{10cm}}
  \hline
  \multicolumn{2}{c}{\textbf{Program Description}} \\
  \hline
  \textbf{Name} & \emph{Circle Areas} \\
  \\
  \textbf{Description} & Displays the Circle Areas for circles with radii from 1.0 to 5.0 in increments of 0.5. \\
  \hline
\end{tabular}
\caption{Description of the Circle Areas program}
\label{tbl:program-creation-circle-area}
\end{table}

\pseudocode{lst:program-creation-circle-areas-pseudo}{Pseudocode for Circle Area program.}{./topics/program-creation/examples/circle_areas.txt}

\clearpage
\csection{\ccode{lst:program-creation-circle-areas-c}{C Circle Areas}{topics/program-creation/examples/circle_areas.c}}

\passection{\pascode{lst:program-creation-circle-areas-pas}{Pascal Circle Areas}{topics/program-creation/examples/CircleAreas.pas}}

% subsection circle_area (end)
\clearpage

\subsection{Shape Drawing} % (fold)
\label{sub:shape_drawing}

This program draws some shapes to the screen using the \textbf{SwinGame} Software Development Kit (SDK). The SwinGame SDK is a library that provides a number of reusable code artefacts that you can use to create 2D games. This SDK is available for both C and Pascal, and works on Linux, Mac, and Windows. The description of the program is in Table \ref{tbl:program-creation-shape-drawing}, the pseudocode in Listing \ref{lst:program-creation-shape-drawing-pseudo}, the C code in Listing \ref{lst:program-creation-shape-drawing-c}, and the Pascal code in Listing \ref{lst:program-creation-shape-drawing-pas}.

\begin{table}[h]
\centering
\begin{tabular}{l|p{10cm}}
  \hline
  \multicolumn{2}{c}{\textbf{Program Description}} \\
  \hline
  \textbf{Name} & \emph{Shape Drawing} \\
  \\
  \textbf{Description} & Draws a number of shapes to the screen using SwinGame. \\
  \hline
\end{tabular}
\caption{Description of the Shape Drawing program}
\label{tbl:program-creation-shape-drawing}
\end{table}

\pseudocode{lst:program-creation-shape-drawing-pseudo}{Pseudocode for Shape Drawing program.}{./topics/program-creation/examples/shape_drawing.txt}

The lines of the program do the following:
\begin{itemize}
  \item \textbf{\texttt{OpenGraphicsWindow}} opens a window with the title `Shape Drawing' that is 800 pixels wide by 600 pixels high.
  \item \textbf{\texttt{ClearScreen}} clears the screen to black.
  \item \textbf{\texttt{FillRectangle}} uses the color, the x, y location, and width and height to fill a rectangle.
  \item \textbf{\texttt{RefreshScreen}} updates the screen to show what has been drawn. All SwinGame drawing is done offscreen, and only drawn to the screen when RefreshScreen is called.
  \item \textbf{\texttt{Delay}} pauses the program for a number of milliseconds, so 500 will wait for half a second.
  \item \textbf{\texttt{FillCircle}} uses the color, the given x, y location, and radius to fill a circle.
  \item \textbf{\texttt{FillTriangle}} fills a triangle with the given x, y points (6 values for 3 points).
\end{itemize}

\clearpage
\csection{\ccode{lst:program-creation-shape-drawing-c}{C Shape Drawing Code}{topics/program-creation/examples/shape_drawing.c}}

The SwinGame procedures for C are named using the standard C naming scheme. The names are:
\begin{itemize}
  \item \textbf{\texttt{open\_graphics\_window}} opens a window with the title `Shape Drawing' that is 800 pixels wide by 600 pixels high.
  \item \textbf{\texttt{load\_default\_colors}} loads default colors for use in your code.
  \item \textbf{\texttt{clear\_screen}} clears the screen to black.
  \item \textbf{\texttt{fill\_rectangle}} uses the color, the x, y location, and width and height to fill a rectangle.
  \item \textbf{\texttt{refresh\_screen}} updates the screen to show what has been drawn. All SwinGame drawing is done offscreen, and only drawn to the screen when \texttt{refresh\_screen} is called.
  \item \textbf{\texttt{delay}} pauses the program for a number of milliseconds, so 500 will wait for half a second.
  \item \textbf{\texttt{fill\_circle}} uses the color, the given x, y location, and radius to fill a circle.
  \item \textbf{\texttt{fill\_triangle}} fills a triangle with the given x, y points (6 values for 3 points).
\end{itemize}

\clearpage
\passection{\pascode{lst:program-creation-shape-drawing-pas}{Pascal Shape Drawing Code}{topics/program-creation/examples/ShapeDrawing.pas}}

% subsection shape_drawing (end)
{ "alphanum_fraction": 0.7647251846, "avg_line_length": 52.5431034483, "ext": "tex", "hexsha": "f432b637e2a5b6e625aa71b744a73654c2c95d04", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_path": "topics/program-creation/examples/program-creation-examples.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_path": "topics/program-creation/examples/program-creation-examples.tex", "max_line_length": 364, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_path": "topics/program-creation/examples/program-creation-examples.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "num_tokens": 1700, "size": 6095 }
% === [ Appendices ] =========================================================== \appendix \setcounter{secnumdepth}{0} \section{Appendices} \setcounter{secnumdepth}{3} \renewcommand{\thesubsection}{\Alph{subsection}} \input{sections/appendices/a_control_flow_analysis_terminology} \clearpage \input{sections/appendices/b_control_flow_recovery_examples} \clearpage \input{sections/appendices/c_control_flow_recovery_results} \clearpage
{ "alphanum_fraction": 0.7139588101, "avg_line_length": 29.1333333333, "ext": "tex", "hexsha": "02207a7ecaa4b04562b480115cabd32ce5baed8f", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-09-09T07:36:14.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-25T21:15:26.000Z", "max_forks_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "decomp/doc", "max_forks_repo_path": "report/control_flow_analysis/sections/appendices.tex", "max_issues_count": 48, "max_issues_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_issues_repo_issues_event_max_datetime": "2020-01-29T19:17:53.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-30T19:08:59.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "decomp/doc", "max_issues_repo_path": "report/control_flow_analysis/sections/appendices.tex", "max_line_length": 80, "max_stars_count": 23, "max_stars_repo_head_hexsha": "fb82b6a5074aa8721afb24a5537bf1964ed20467", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "decomp/doc", "max_stars_repo_path": "report/control_flow_analysis/sections/appendices.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-16T08:14:04.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-27T10:16:40.000Z", "num_tokens": 107, "size": 437 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Simple Sectioned Essay Template % LaTeX Template % % This template has been downloaded from: % http://www.latextemplates.com % % Note: % The \lipsum[#] commands throughout this template generate dummy text % to fill the template out. These commands should all be removed when % writing essay content. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[12pt]{article} % Default font size is 12pt, it can be changed here \usepackage{geometry} % Required to change the page size to A4 \geometry{a4paper} % Set the page size to be A4 as opposed to the default US Letter \usepackage{graphicx} % Required for including pictures \usepackage{float} % Allows putting an [H] in \begin{figure} to specify the exact location of the figure \usepackage{wrapfig} % Allows in-line images such as the example fish picture \usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template \linespread{1.2} % Line spacing %\setlength\parindent{0pt} % Uncomment to remove all indentation from paragraphs \graphicspath{{Pictures/}} % Specifies the directory where pictures are stored \begin{document} %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \begin{titlepage} \newcommand{\HRule}{\rule{\linewidth}{0.5mm}} % Defines a new command for the horizontal lines, change thickness here \center % Center everything on the page \textsc{\LARGE University Name}\\[1.5cm] % Name of your university/college \textsc{\Large Major Heading}\\[0.5cm] % Major heading such as course name \textsc{\large Minor Heading}\\[0.5cm] % Minor heading such as course title \HRule \\[0.4cm] { \huge \bfseries Title}\\[0.4cm] % Title of your document \HRule \\[1.5cm] \begin{minipage}{0.4\textwidth} \begin{flushleft} \large \emph{Author:}\\ John \textsc{Smith} % Your name \end{flushleft} \end{minipage} ~ \begin{minipage}{0.4\textwidth} \begin{flushright} \large \emph{Supervisor:} \\ Dr. James \textsc{Smith} % Supervisor's Name \end{flushright} \end{minipage}\\[4cm] {\large \today}\\[3cm] % Date, change the \today to a set date if you want to be precise %\includegraphics{Logo}\\[1cm] % Include a department/university logo - this will require the graphicx package \vfill % Fill the rest of the page with whitespace \end{titlepage} %---------------------------------------------------------------------------------------- % TABLE OF CONTENTS %---------------------------------------------------------------------------------------- \tableofcontents % Include a table of contents \newpage % Begins the essay on a new page instead of on the same page as the table of contents %---------------------------------------------------------------------------------------- % INTRODUCTION %---------------------------------------------------------------------------------------- \section{Introduction} % Major section Example citation \cite{Figueredo:2009dg}. 
%------------------------------------------------ \subsection{Subsection 1} % Sub-section \lipsum[1] % Dummy text %------------------------------------------------ \subsection{Subsection 2} % Sub-section \lipsum[2] % Dummy text %------------------------------------------------ \subsubsection{Subsubsection 1} % Sub-sub-section \lipsum[3] % Dummy text \begin{figure}[H] % Example image \center{\includegraphics[width=0.5\linewidth]{placeholder}} \caption{Example image.} \label{fig:speciation} \end{figure} %------------------------------------------------ \subsubsection{Subsubsection 2} % Sub-sub-section \lipsum[4] % Dummy text %---------------------------------------------------------------------------------------- % MAJOR SECTION 1 %---------------------------------------------------------------------------------------- \section{Content Section} % Major section \lipsum[5] % Dummy text %------------------------------------------------ \subsection{Subsection 1} % Sub-section \subsubsection{Subsubsection 1} % Sub-sub-section \lipsum[6] % Dummy text %------------------------------------------------ \subsubsection{Subsubsection 2} % Sub-sub-section \lipsum[6] % Dummy text \begin{wrapfigure}{l}{0.4\textwidth} % Inline image example \begin{center} \includegraphics[width=0.38\textwidth]{fish} \end{center} \caption{Fish} \end{wrapfigure} \lipsum[7-8] % Dummy text %------------------------------------------------ \subsubsection{Subsubsection 3} % Sub-sub-section \begin{description} % Numbered list example \item[First] \hfill \\ \lipsum[9] % Dummy text \item[Second] \hfill \\ \lipsum[10] % Dummy text \item[Third] \hfill \\ \lipsum[11] % Dummy text \end{description} %---------------------------------------------------------------------------------------- % MAJOR SECTION X - TEMPLATE - UNCOMMENT AND FILL IN %---------------------------------------------------------------------------------------- %\section{Content Section} %\subsection{Subsection 1} % Sub-section % Content %------------------------------------------------ %\subsection{Subsection 2} % Sub-section % Content %---------------------------------------------------------------------------------------- % CONCLUSION %---------------------------------------------------------------------------------------- \section{Conclusion} % Major section \lipsum[12-13] %---------------------------------------------------------------------------------------- % BIBLIOGRAPHY %---------------------------------------------------------------------------------------- \begin{thebibliography}{99} % Bibliography - this is intentionally simple in this template \bibitem[Figueredo and Wolf, 2009]{Figueredo:2009dg} Figueredo, A.~J. and Wolf, P. S.~A. (2009). \newblock Assortative pairing and life history strategy - a cross-cultural study. \newblock {\em Human Nature}, 20:317--330. \end{thebibliography} %---------------------------------------------------------------------------------------- \end{document}
{ "alphanum_fraction": 0.5019806687, "avg_line_length": 29.9099526066, "ext": "tex", "hexsha": "14993e6b98308cbf0eeb32e22dede0db2bdaf255", "lang": "TeX", "max_forks_count": 95, "max_forks_repo_forks_event_max_datetime": "2022-03-17T00:41:45.000Z", "max_forks_repo_forks_event_min_datetime": "2015-03-31T13:04:04.000Z", "max_forks_repo_head_hexsha": "e34c40114d1fbae80afb3b0523f4d82b2e8e44de", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mridwanhi/Open-LaTeX-Studio", "max_forks_repo_path": "OpenLaTeXStudio/Editor/src/main/resources/openlatexstudio/templates/essay_1.tex", "max_issues_count": 139, "max_issues_repo_head_hexsha": "e34c40114d1fbae80afb3b0523f4d82b2e8e44de", "max_issues_repo_issues_event_max_datetime": "2018-04-29T18:45:12.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-11T22:16:01.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mridwanhi/Open-LaTeX-Studio", "max_issues_repo_path": "OpenLaTeXStudio/Editor/src/main/resources/openlatexstudio/templates/essay_1.tex", "max_line_length": 117, "max_stars_count": 189, "max_stars_repo_head_hexsha": "1953218ffec5053e6d1123840ad4068ae4a7ab34", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pveeckhout/SQLTutorial", "max_stars_repo_path": "tex/essay_template/essay_1.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-16T15:34:14.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-18T22:30:31.000Z", "num_tokens": 1359, "size": 6311 }