Search is not available for this dataset
text
string
meta
dict
\chapter{Implementation} This chapter is about implementation specifics. KubeLB is written in the Go\footnote{https://golang.org/} programming language. Since Kubernetes and its ecosystem are also written in Go, integration is easiest here. A basic understanding of the programming language is assumed. \section{KubeBuilder} KubeBuilder is a framework to build Kubernetes APIs using custom resource definitions, built on top of the controller-runtime and controller-tools libraries. It is a good entrypoint to start developing an operator, as it simplifies CRD creation and controller implementation. Like other frameworks it provides the developer with a simple abstraction and reduces boilerplate and toil. \\ KubeBuilder can be used to create a new project, initiate a basic Go project structure, as well as several configuration files needed to deploy within a Kubernetes cluster. To build a controller, various modules are required, which are added to the project by KubeBuilder. \\ \newpage For controller specific components, the controller-runtime\footnote{https://github.com/kubernetes-sigs/controller-runtime} module provides an abstraction layer. The interaction with Kubernetes is done via the Kubernetes API which is implemented within the client-go\footnote{https://github.com/kubernetes/client-go} module. The controller-runtime module provides an abstraction for various components to build an operator: \begin{itemize} \item \textit{Manager} \\ The Manager configures the go-client, a cache and is generally responsible for the management of shared resources. Several controllers can be registered at a Manager, these can be started or stopped through the manager. In addition, the manager takes over the leader election within a cluster and thus ensures regulated behavior and fail-safety. Leader election is required when more than one controller is running at the time, so they don't interfere each other. \item \textit{Controller} \\ Controllers use events originating from the Kubernetes API to trigger reconcile requests. They can trigger reconcile requests based on predicates that filter events. \item \textit{Reconciler} \\ Controller logic is implemented in terms of Reconcilers. A Reconciler implements a function which takes a reconcile request containing the name and namespace of the object to reconcile. It returns a result or an error, indicating whether to requeue the request or not. \end{itemize} \newpage \autoref{lst:main-manager} illustrates how the three components interact together in the actual application. \\ At the start of the application a Manager and the Reconcilers are created. The Manager creates a client using the configuration provided by \textit{ctrl.GetConfigOrDie()}. When creating the KubeLbNodeReconciler, this client is used by the Manager. \\ \textit{SetupWithManager()} creates a new Controller and attaches the Reconciler, the Controller is than registered at the Manager. The Kubernetes object to watch for is declared in the function as well. As shown in \autoref{lst:nodecontroller-reconcile}, it watches for nodes. \\ In the end, \textit{Start()} is executed at the Manager, which starts all registered Controllers. 
\begin{lstlisting}[caption={KubeLB Agent main.go snippet - Manager and Controller}, label={lst:main-manager}] mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ Scheme: scheme, MetricsBindAddress: metricsAddr, Port: 9443, LeaderElection: enableLeaderElection, LeaderElectionID: "k8c.io.kubelb.agent", }) if err != nil { setupLog.Error(err, "unable to start agent") os.Exit(1) } if err = (&agent.KubeLbNodeReconciler{ Client: mgr.GetClient(), Log: ctrl.Log.WithName("kubelb.node.reconciler"), Scheme: mgr.GetScheme(), KlbClient: kubeLbClient.TcpLbClient, Endpoints: &sharedEndpoints, }).SetupWithManager(mgr); err != nil { setupLog.Error(err, "unable to create controller", "reconciler", "kubelb.node.reconciler") os.Exit(1) } // +kubebuilder:scaffold:builder if err := mgr.Start(ctx); err != nil { setupLog.Error(err, "problem running agent") os.Exit(1) } \end{lstlisting} The Controller eventually triggers a reconcile event on node changes. \\ To save load on the Kubernetes API, the Agent keeps an internal list of the current endpoints (\textit{kubelb.Endpoints}) of the cluster. This means that the API does not have to be queried for the current nodes every time. One task of the KubeLbNodeReconciler is to update this list when changes occur. \\ In order for existing load balancers to be aware of node changes in the user cluster, it is necessary to change the endpoints in the Spec field of all TCPLoadBalancer objects. To do this, all TCPLoadBalancers are queried via the KubeLB client, which has a configuration for load balancer cluster, and their endpoints are changed to the updated ones. \\ The changes are registered by the Manager in the load balancer cluster, which then passes the configuration to the Envoy load balancers as explained in more detail in \autoref{sec:envoy-control-plane}. 
\\ \begin{lstlisting}[caption={KubeLB Agent node reconciler}, label={lst:nodecontroller-reconcile}] // KubeLbIngressReconciler reconciles a Service object type KubeLbNodeReconciler struct { client.Client KlbClient v1alpha1.TCPLoadBalancerInterface Log logr.Logger Scheme *runtime.Scheme Endpoints *kubelb.Endpoints } // +kubebuilder:rbac:groups="",resources=nodes,verbs=list;get;watch func (r *KubeLbNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { log := r.Log.WithValues("name", req.Name) log.V(2).Info("reconciling node") nodeList := &corev1.NodeList{} err := r.List(ctx, nodeList) if err != nil { log.Error(err, "unable to list nodeList") return ctrl.Result{}, err } log.V(6).Info("processing", "nodes", nodeList, "endpoints", r.Endpoints) if r.Endpoints.EndpointIsDesiredState(nodeList) { log.V(2).Info("endpoints are in desired state") return ctrl.Result{}, err } log.V(6).Info("actual", "endpoints", r.Endpoints.ClusterEndpoints) log.V(6).Info("desired", "endpoints", r.Endpoints.GetEndpoints(nodeList)) r.Endpoints.ClusterEndpoints = r.Endpoints.GetEndpoints(nodeList) log.V(5).Info("proceeding with", "endpoints", r.Endpoints.ClusterEndpoints) //patch endpoints tcpLbList, err := r.KlbClient.List(ctx, v1.ListOptions{}) if err != nil { log.Error(err, "unable to list TcpLoadBalancer") return ctrl.Result{}, err } log.V(6).Info("patching", "TcpLoadBalancers", tcpLbList) var endpointAddresses []kubelbiov1alpha1.EndpointAddress for _, endpoint := range r.Endpoints.ClusterEndpoints { endpointAddresses = append(endpointAddresses, kubelbiov1alpha1.EndpointAddress{ IP: endpoint, }) } for _, tcpLb := range tcpLbList.Items { for _, endpoints := range tcpLb.Spec.Endpoints { endpoints.Addresses = endpointAddresses } _, err = r.KlbClient.Update(ctx, &tcpLb, v1.UpdateOptions{}) if err != nil { log.Error(err, "unable to update", "TcpLoadBalancer", tcpLb.Name) } log.V(2).Info("updated", "TcpLoadBalancer", tcpLb.Name) log.V(7).Info("updated to", "TcpLoadBalancer", tcpLb) } return ctrl.Result{}, nil } func (r *KubeLbNodeReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&corev1.Node{}). Complete(r) } \end{lstlisting} \section{Code generation}\label{sec:code-generator} Due to the lack of generics in Go, it is common to use code generators. The controller-tools\footnote{https://github.com/kubernetes-sigs/controller-tools} repository includes the controller-gen command which is used for generating utility code and Kubernetes YAML. In addition, the code-generator\footnote{https://github.com/kubernetes/code-generator} repository, is used for Kubernetes client generation. Both repositories contain a CLI which can perform different types of generation, that are mostly based on special marker comments in the Go code. \subsection{RBAC} As described in \autoref{sec:agent}, the Agent needs access to some Kubernetes resources like services. RBAC\footnote{Role-based access control~\footcite{RBAC}} is an authorization method and defines the permissions for a role. The Agent and Manager are deployed with a custom role, that allows access to the needed resources. \\ The permissions are closely coupled to the controller implementation. Therefore it makes sense to store the required permissions close to the code base. In \autoref{lst:nodecontroller-reconcile}, the markers are set above the reconcile method, that implements the controller logic. 
The controller needs read-only access because it reacts to changes in the nodes and needs to get them from the API server. \\ With the RBAC markers set, controller-gen can generate the agent role, that is used to run the Agent inside a Kubernetes cluster. The command below creates the agent-role inside the output path, based on the marker comments controller-gen finds in the set path. \\ \begin{lstlisting}[numbers=none, caption={Generate Role YAML files with controller-gen}, label={lst:role-generation}] controller-gen rbac:roleName=agent-role paths="./pkg/controllers/agent/..." output:artifacts:config=config/agent/rbac \end{lstlisting} \subsection{Custom Resource Definitions} The \autoref{lst:tcplb} shows the basic struct in Go of the TCPLoadBalancer CRD. Like all objects in Kubernetes it contains Type-, and ObjectMeta information, a Spec and in this case also a Status field. The struct is annotated with marker comments for the controller-gen tool, which is than able to create a YAML file to deploy the CRD inside a cluster. \\ The first three comments are for CRD generation, while \textit{+genclient} is needed for client generation, which is explained in \autoref{subsec:client}. The annotations express that this is the root object, it includes a status field and the abbreviation is \textit{tcplb}. Json tags are required for internal serialization. \begin{lstlisting}[caption={TCPLoadBalancer CRD root struct},label={lst:tcplb}] import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" // +kubebuilder:object:root=true // +kubebuilder:subresource:status // +kubebuilder:resource:shortName=tcplb // +genclient // TCPLoadBalancer is the Schema for the tcploadbalancers API type TCPLoadBalancer struct { metav1.TypeMeta `json:",inline"` metav1.ObjectMeta `json:"metadata,omitempty"` Spec TCPLoadBalancerSpec `json:"spec,omitempty"` Status TCPLoadBalancerStatus `json:"status,omitempty"` } \end{lstlisting} \newpage \autoref{lst:tcpbl-status} is the implementation of the TCPLoadBalancer status field. KubeLB mirrors the Kubernetes Service status and for that purpose it makes use of the preexisting LoadBalancerStatus from the API module. The LoadBalancer field is marked as optional, because it is absent if there is no load balancer provisioned yet. \begin{lstlisting}[caption={TCPLoadBalancerStatus struct}, label={lst:tcpbl-status}] import corev1 "k8s.io/api/core/v1" // TCPLoadBalancerStatus defines the observed state of TCPLoadBalancer type TCPLoadBalancerStatus struct { // LoadBalancer contains the current status of the load-balancer, // if one is present. // +optional LoadBalancer corev1.LoadBalancerStatus `json:"loadBalancer,omitempty" } \end{lstlisting} Every load balancer needs a set of Endpoints, as well as Ports to expose to the outside of the cluster. It is also possible to set the Type, like in a Kubernetes service. In any case at least one Endpoint is required. The \textit{//+kubebuilder:validation:MinItems:=1} annotation ensures that the generated Resource contains a validator. \\ The command in \autoref{lst:crd-generation} will check the \textit{./pkg} directory for marker comments and create the CRD of version v1 inside the \textit{config/crd/base} directory. \\ \begin{lstlisting}[caption={TCPLoadBalancerSpec struct}, label={lst:tcpbl-spec}] // TCPLoadBalancerSpec defines the desired state of TCPLoadBalancer type TCPLoadBalancerSpec struct { // Important: Run "make" to regenerate code after modifying this file // Sets of addresses and ports that comprise an exposed user service on a cluster. 
// +required //+kubebuilder:validation:MinItems:=1 Endpoints []LoadBalancerEndpoints `json:"endpoints,omitempty"` // The list of ports that are exposed by the load balancer service. // +optional Ports []LoadBalancerPort `json:"ports,omitempty"` // type determines how the Load Balancer Service is exposed. Defaults to ClusterIP. Valid // options are ClusterIP, NodePort and LoadBalancer. // +optional // +kubebuilder:default:=ClusterIP Type corev1.ServiceType `json:"type,omitempty" } \end{lstlisting} \begin{lstlisting}[numbers=none, caption={Generate CRD YAML files with controller-gen}, label={lst:crd-generation}] controller-gen crd:crdVersions=v1 paths="./pkg/..." output:crd:artifacts:config=config/crd/bases \end{lstlisting} \subsection{Client}\label{subsec:client} The go-client module includes an implementation for the standard Kubernetes objects. Since KubeLB extends the Kubernetes API with a new CRD, an implementation in Go is also required for the Agent (see \autoref{sec:agent}), to programmatically interact with the new resource. In order for the Agent controller to create a watch on the CRD, it needs an informer. The informer stores objects which it receives from the client and invoke the controller passing it the object. Therefore, the informer still needs a client as well as a lister to work. The client communicates with the Kubernetes API and creates watches. The lister is an abstraction to get and list the respective CRD, which is needed by the informer. \\ The \textit{+genclient} marker comment indicates to create a client for the Custom Resource, like in \autoref{lst:tcplb}. Within the code-generator project there are several binaries, like client-gen, lister-gen and informer-gen, which generate different parts of code. To bundle them the project offers a bash script called generate-groups.sh, wich acts as an entrypoint and calls the binaries. It offers the ability to generate a client, lister and informer. \autoref{lst:generate-groups} illustrates the usage of the command. The first arguments are the generators to be invoked by the script. The second and third ones are output and input modules, and the last one is the group version of the CRD. \begin{lstlisting}[numbers=none, caption={Generate client, informer and lister with code-generator}, label={lst:generate-groups}] generate-groups.sh "client,lister,informer" k8c.io/kubelb/pkg/generated k8c.io/kubelb/pkg/api "kubelb.k8c.io:v1alpha1" \end{lstlisting} \section{Envoy control-plane}\label{sec:envoy-control-plane} As explained in \autoref{sec:envoy}, Envoy was chosen as a load balancer among others because of the data plane API. The Envoy data plane API\footnote{https://github.com/envoyproxy/data-plane-api} is implemented by the go-control-plane\footnote{https://github.com/envoyproxy/go-control-plane} module. \\ The module provides a snapshot cache which needs to be initialized, updated and invalidated, as well as a gRPC\footnote{https://en.wikipedia.org/wiki/GRPC} based API server implementation, that allows bidirectional communication. To expose the Envoy data-plane API server to the cluster, the Manager deployment consist of a service. \\ An Envoy deployment is created for every TCPLoadBalancer. Each deployment receives a bootstrap configuration which tells Envoy where to find the control-plane, as well as a unique node id. This way Envoy will connect to the control-plane and receive the latest configuration for its node id. \\ The module uses snapshots to represent envoy configurations, that are stored in a cache. 
A controller inside the Manager watches for changes to the TCPLoadBalancer resources and reconciles multiple Kubernetes objects like service and deployments. Although the envoy snapshot cache is not persisted inside Kubernetes, the Manager follows the same approach and reconciles the Envoy snapshot like the other resources. The function is called by the TCPLoadBalancerReconciler and receives the current TCPLoadBalancer. Snapshots are identified by a unique name and a version. The actual snapshot is the latest version inside the snapshot cache. The snapshot created from the provided and possibly updated TCPLoadBalancer object is the desired one. If no snapshot is present, it will be initialized with the desired one. Otherwise, if a change is detected it will update the snapshot and increase the version. The comparison is done based on the desired and actual snapshots and if they differ, the snapshot cache needs to be updated. \\ \begin{lstlisting}[caption={Envoy snapshot reconciliation}, label={lst:snapshot-reconcile}] func (r *TCPLoadBalancerReconciler) reconcileEnvoySnapshot(ctx context.Context, tcpLoadBalancer *kubelbk8ciov1alpha1.TCPLoadBalancer) error { log := ctrl.LoggerFrom(ctx).WithValues("reconcile", "envoy") log.V(2).Info("verify envoy snapshot") // Get current snapshot actualSnapshot, err := r.EnvoyCache.GetSnapshot(tcpLoadBalancer.Name) if err != nil { // Add new snapshot to the cache initSnapshot := envoycp.MapSnapshot(tcpLoadBalancer, "0.0.1") log.Info("init snapshot", "service-node", tcpLoadBalancer.Name, "version", "0.0.1") log.V(5).Info("serving", "snapshot", initSnapshot) return r.EnvoyCache.SetSnapshot(tcpLoadBalancer.Name, initSnapshot) } log.V(5).Info("actual", "snapshot", actualSnapshot) // Generate a new snapshot using the old version to be able to do a DeepEqual comparison lastUsedVersion, err := semver.NewVersion(actualSnapshot.GetVersion(envoyresource.ClusterType)) if err != nil { return errors.Wrap(err, "failed to parse version from last snapshot") } desiredSnapshot := envoycp.MapSnapshot(tcpLoadBalancer, lastUsedVersion.String()) log.V(5).Info("desired", "snapshot", desiredSnapshot) if reflect.DeepEqual(actualSnapshot, desiredSnapshot) { log.V(2).Info("snapshot is in desired state") return nil } newVersion := lastUsedVersion.IncMajor() newSnapshot := envoycp.MapSnapshot(tcpLoadBalancer, newVersion.String()) if err := newSnapshot.Consistent(); err != nil { return errors.Wrap(err, "new Envoy config snapshot is not consistent") } log.Info("updating snapshot", "service-node", tcpLoadBalancer.Name, "version", newVersion.String()) if err := r.EnvoyCache.SetSnapshot(tcpLoadBalancer.Name, newSnapshot); err != nil { return errors.Wrap(err, "failed to set a new Envoy cache snapshot") } return nil } \end{lstlisting}
{ "alphanum_fraction": 0.779621995, "avg_line_length": 50.072386059, "ext": "tex", "hexsha": "c1d3502115a6588895a23ffcfbbf535b9727c497", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fc4946742c4c6ae9925812d9322d815fb96638cc", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "WeirdMachine/KubeLB-Bachelor-Thesis", "max_forks_repo_path": "chapter/07_implementation.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "fc4946742c4c6ae9925812d9322d815fb96638cc", "max_issues_repo_issues_event_max_datetime": "2021-05-21T13:36:03.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-21T13:36:03.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "WeirdMachine/Bachelor-Thesis", "max_issues_repo_path": "chapter/07_implementation.tex", "max_line_length": 229, "max_stars_count": 3, "max_stars_repo_head_hexsha": "fc4946742c4c6ae9925812d9322d815fb96638cc", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "WeirdMachine/Bachelor-Thesis", "max_stars_repo_path": "chapter/07_implementation.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-19T15:08:03.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-06T12:10:31.000Z", "num_tokens": 4447, "size": 18677 }
\documentclass{scrartcl} \usepackage[utf8]{inputenc} \usepackage[english]{proposal} \addbibresource{sources.bib} \newcommand{\project}{[Title of the Research Unit]} \newcommand{\spokesperson}{[First name last name, research institution of spokesperson, research area]} \newcommand{\form}{DFG form 53.20 -- 02/20} \begin{document} \maketitle \section{Overview of participating researchers and projects} \subsection{Participating researchers: Project leaders (applicants) and co-applicants} % % Co-applicants are defined as researchers who assume significant responsibility in the project but neither request nor receive project funding. % % [Table] Academic title, first name, last name, year of doctoral completion, institution, research area and, where applicable, project code \begin{center} \footnotesize \begin{tabular}{lllll} Applicant & y.o.d.c.\textsuperscript{$\ast$} & Institution & Research area & Project code \\ \midrule \textit{Academic title, first name, last name} & \textit{year} & & & \textit{if applicable} \\ \bottomrule {\scriptsize\textsuperscript{$\ast$}year of doctoral completion} \end{tabular} \end{center} \subsection{List of projects} % % [Table] Project code where applicable, project leader, project title, institution, research area \begin{center} \footnotesize \begin{tabular}{lllll} Project code & Project leader & Project title & Institution & Research area\\ \midrule \textit{if applicable} & & & & \\ \bottomrule \end{tabular} \end{center} \section{Summary of the joint work programme} % % [Text] With reference to the structure and items listed in the proposal instructions (form 54.03, section II), summarise the Research Unit’s joint work programme (item 2) and additional objectives and measures (item 3) in no more than ten pages. % % - Objectives of the overall project, potential impact on the state of the art, expected benefits of collaboration % - (Joint) Preliminary work and project-specific qualifications of participating researchers/working groups as they relate to the proposed project % - Joint work programme including proposed research methods % - Potential impact on the research area and local research environment (for local collaborations) as well as how the Research Unit differs from other programmes working in a directly related area \section{Additional objectives and measures} % % [Text] % - National and international cooperation, collaboration with international partners % - Research data and knowledge management measures and support by participating institutions % - Measures to advance research careers % - Measures to promote diversity and equal opportunity \section{Estimated overall project costs including individual projects} % % [Text] % ---------------------------------------------------------------- % References: List up to ten in total. 
% % These references are set automatically based on their category ("reviewed", "nonreviewed", "patents_pending", "patents", and those without a category) % \section{Project-related publications by members of the Research Unit} \subsection{Articles published by outlets with scientific quality assurance, book publications, and works accepted for publication but not yet published} \printbibliography[category=reviewed, heading=none] \subsubsection{Other publications, both peer-reviewed and non-peer-reviewed} \printbibliography[category=nonreviewed, heading=none] \subsubsection{Patents} \subsubsubsection{Pending} \printbibliography[category=patents_pending, heading=none] \subsubsubsection{Issued} \printbibliography[category=patents, heading=none] \section{Bibliography} % publications cited in the proposal but not listed under item 5 \printbibliography[notcategory=reviewed, notcategory=nonreviewed, notcategory=patents_pending, notcategory=patents, heading=none] % ---------------------------------------------------------------- \section{Summary of individual projects} % % [Text] One to two pages each including project-specific publications \section{CVs of participating researchers} % % [Text] \section{Cooperation outside the Research Unit, researchers with whom you have collaborated scientifically within the past three years} % % [Text] \end{document}
{ "alphanum_fraction": 0.7607824652, "avg_line_length": 36.5775862069, "ext": "tex", "hexsha": "8f37347ebe1ac8c8bb71b18a60e055161ad61e9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f02b3c8e729454c96b293e8062639649cecdfbf7", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "mirkobunse/proposal_dfg", "max_forks_repo_path": "form_53_20_en.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f02b3c8e729454c96b293e8062639649cecdfbf7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "mirkobunse/proposal_dfg", "max_issues_repo_path": "form_53_20_en.tex", "max_line_length": 247, "max_stars_count": null, "max_stars_repo_head_hexsha": "f02b3c8e729454c96b293e8062639649cecdfbf7", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "mirkobunse/proposal_dfg", "max_stars_repo_path": "form_53_20_en.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 959, "size": 4243 }
\documentclass[10pt,reqno]{article} \usepackage{import} \usepackage{tikz} \usetikzlibrary{shapes,arrows,positioning,decorations,decorations.pathreplacing,quotes,angles} % ------- general ------- \usepackage{amsmath} % symbols \usepackage{amssymb} % symbols, \mathfrak \usepackage{amsthm} % theorems, proofs, etc %\usepackage{dutchcal} % replaces \mathcal \usepackage{eucal} \usepackage{dsfont} % symbols \usepackage{mathtools} \usepackage{amsfonts} % \usepackage{wasysym} % symbols \usepackage{gensymb} % degree symbol \degree \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{float} \usepackage[ a4paper, left=3.5cm, right=3.5cm, top=2cm, bottom=2cm ]{geometry} \renewcommand{\qedsymbol}{\ensuremath{\blacksquare}} \input{erd-thm.tex} \input{macros.tex} \newcommand{\header}[2][\today]{ \begin{center} % \rule{0pt}{2cm} {\LARGE #2}\\ \vspace{1em} {\large #1}\\ \end{center} } \begin{document} \header[\mbox{}]{Framing a Picture} If you put a picture as the background of a \texttt{<div>} with specified height and width, then often it will crop out a part of the image. How do we make sure that a particular point of the image is always inside this frame? In this document I will outline a method of how to achieve it using only pure \texttt{CSS}. \section{The practical problem} Consider a picture and an area on a webpage represented by a \texttt{<div>} element, which will serve as our frame and which will crop the image. Let's say that in our case, the bulls-eye is the focus point of the picture that we want to have in view. \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{../dartboard} \caption{A dartboard.} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \begin{tikzpicture}[scale=1] \fill[fill=black!15!white] (0,0) rectangle (2,-2); \draw[->] (0,0) -- (0,-2.8) node[right]{$y$}; \draw[->] (0,0) -- (2.8,0) node[below]{$x$}; \end{tikzpicture} \caption{A \texttt{<div>} element.} \label{fig:sub2} \end{subfigure} \caption{Only a square crop-out of the original dartboard will be visible in this \texttt{<div>}.} \label{fig:test} \end{figure} \noindent In our case, the bulls-eye of the dartboard is located at around $20\%$ of the width and $60\%$ of the height of the picture respectively, with respect to the top left corner. The \texttt{HTML} might look something like \begin{verbatim} <div> <img src='dartboard.jpg' draggable='false' /> </div> \end{verbatim} \noindent and the accompanying \texttt{CSS}: \begin{verbatim} div { width: 500px; height: 500px; position: relative; /* so <img> can have absolute position */ } img { position: absolute; height: auto; /* preserve aspect ratio, based on width */ width: ??; left: ??; top: ??; } \end{verbatim} \noindent The \texttt{width}, \texttt{left} and \texttt{right} properties of the picture are what we want to know. \section{Mathematical analysis} \noindent The \texttt{<div>} uses relative coordinates, which means the origin is at its top left corner and the $y$-value increases downwards (as seen in the diagram). However, in this analysis we will draw our coordinate systems in the conventional way, so to visualize what's happening one must reflect the image on the website across a horizontal line. \enter In the diagram below, $c$ and $d$ denote the prescribed width and height of the area respectively, and the point $(a,b)$ the desired placement of the point of focus of the picture within this area. 
In the implementation it is key that these values are concrete \texttt{CSS} units, like \texttt{px} or \texttt{vw}. \begin{figure}[H] \centering \begin{minipage}{.35\textwidth} \centering \begin{tikzpicture}[scale=1] \fill[fill=black!15!white] (0,0) rectangle (2,2.2); \draw[->] (0,0) -- (0,3) node[left]{$y$}; \draw[->] (0,0) -- (2.8,0) node[below]{$x$}; \draw (2,0.2) -- (2,-0.2) node[below] {$c$}; \draw (0.2,2.2) -- (-0.2,2.2) node[left] {$d$}; \fill (0.6,1) circle (0.04) node[above] {$(a,b)$}; \end{tikzpicture} \captionof{figure}{Mathematical model of the \texttt{<div>} element.} \end{minipage} \hfill \begin{minipage}{.60\textwidth} \centering \begin{tikzpicture}[scale=1] \begin{scope}[shift={(-2.5cm,0.5cm)}] \begin{scope}[xshift=-2cm] \draw (0,0) rectangle (1cm,1cm); \fill (0.167cm,0.35cm) circle (0.04cm) node[anchor=south west,xshift=-0.1cm]{\small $(x,y)$}; \draw[->] (1.2cm,0.5cm) -- (1.8cm,0.5cm); \end{scope} \draw[color=black!20!white] (0,0) rectangle (1.5cm,1cm); \draw[->] (0,0) -- (1.5cm,1cm) node[anchor=south west] {$r$}; \fill (0.25cm,0.35cm) circle (0.04cm) node[anchor=south]{\small $z$}; \draw[->] (1.7cm,0.5cm) -- node[pos=0.5,below]{$\lambda$} (2.3cm,0.5cm); \end{scope} \node[inner sep=0pt, opacity=.3] at (1.5cm,1cm) {\includegraphics[height=2cm]{../dartboard.jpg}}; %\draw[->] (0,0) -- (3.5cm,0); %\draw[->] (0,0) -- (0,2.5cm); \draw[] (0,0) -- (0.5cm,0.7cm); \draw[->] (0,0) --(3cm,2cm) node[anchor=south west]{$\lambda r$}; \fill (0.5cm,0.7cm) circle (0.06cm) node[above] {$\lambda z$}; \end{tikzpicture} \captionof{figure}{Getting the focus vector and rescaling the picture with the ratio vector.} \end{minipage} \end{figure} \enter We will represent the \df{focus point} of the image by a pair of numbers $(x,y)\in [0,1]^2$, both between $0$ and $1$. So a value like $(0.2,0.6)$ would correspond to a focus point that is $20\%$ horizontally and $60\%$ vertically from the top left corner of an image. \enter Let $w$ and $h$ respectively denote the width and height of the image, and let $R=w/h$ be the aspect ratio. Then we will denote by $r$ the \df{ratio vector} $(R,1)$. We may additionally think of the set $M=[0,R]\times [0,1]$ as a kind of idealized or unit picture, in that the set $\lambda M$ will always have the same aspect ratio as the picture, no matter which scaling factor $\lambda$ we pick. Notice that $(x,y)\mapsto (R\cdot x,y)$ is a bijection between $[0,1]^2$ and $M$, so the the focus point $(x,y)$ corresponds to the \df{focus vector} $z=(R\cdot x,y)$. \enter Now, consider the two rays $\gamma,\delta$ starting from $(a,b)$, given by $\gamma(t)=(a,b)+t(r-z)$ and $\delta(t)=(a,b)-tz$. The vector $r-z$ can be found as the arrow pointing from $z$ to $r$. And we have $\gamma(t)-\delta(t)=tr$, so at any $t$ the points $\gamma(t)$ and $\delta(t)$ are the corners of the image scaled by a certain factor. 
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.6] \filldraw[fill=black!10!white] (0,0) rectangle (5,4); \draw[->] (0,0) -- (5.8,0) node[below]{$x$}; \draw[->] (0,0) -- (0,4.6) node[left]{$y$}; \node at (5,-0.3) {$c$}; \node at (-0.3,4) {$d$}; \draw[dashed] (5,4) -- (5,6); \draw[dashed] (5,4) -- (11,4); \draw[dashed] (0,0) -- (-3.5,0); \draw[dashed] (0,0) -- (0,-2.5); \draw (-2.5,-1.5) -- node[pos=0.5,anchor=south east]{$\delta(t)$} (1,2) -- node[pos=0.33,above]{$\gamma(t)$} (10,5); \fill (1,2) circle (0.08) node[above]{$(a,b)$}; \draw[black!50!white] (-2.5,-1.5) -- (-2.5,5) -- (10,5) -- (10,-1.5) -- node[pos=0.5,above]{$\lambda M$} (-2.5,-1.5); \end{tikzpicture} \end{figure} \noindent Note that $r-z=(R,1)-(Rx,y)=(R(1-x),1-y)$. In order for the image determined by these endpoints to cover the area, we must have \[ \begin{aligned}[rl] \gamma_1(t) = a+tR(1-x) &\geq c \\ \gamma_2(t) = b+t(1-y) &\geq d \\ \delta_1(t) = a-tRx & \leq 0 \\ \delta_2(t) = b-ty & \leq 0 \end{aligned} \quad \iff \quad \begin{aligned}[rl] t & \geq \frac{c-a}{R(1-x)} \\ t & \geq \frac{d-b}{1-y} \\ t & \geq \frac{a}{Rx} \\ t & \geq \frac{b}{y}. \end{aligned} \] So make sure all of these conditions are satisfied simultaneously, we will take as our scaling factor \[ \lambda=\max \left\{ \frac{c-a}{R(1-x)} , \frac{d-b}{1-y} , \frac{a}{Rx}, \frac{b}{y} \right\}. \] Then $\texttt{width}=\pi_1(\lambda r)=\lambda R$, $\texttt{left} = \delta_1(\lambda)=a-\lambda Rx$ and $\texttt{top} = \delta_2(\lambda)=b-\lambda y$. \end{document}
{ "alphanum_fraction": 0.6537800687, "avg_line_length": 41.5714285714, "ext": "tex", "hexsha": "c4fe96f50f2556d455db204324cfbddf24f6d7b4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "343f1459f96c10122dc0a5d3a055e336dfc6d0fb", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "athun/focuspoint", "max_forks_repo_path": "explanation/explanation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "343f1459f96c10122dc0a5d3a055e336dfc6d0fb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "athun/focuspoint", "max_issues_repo_path": "explanation/explanation.tex", "max_line_length": 572, "max_stars_count": null, "max_stars_repo_head_hexsha": "343f1459f96c10122dc0a5d3a055e336dfc6d0fb", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "athun/focuspoint", "max_stars_repo_path": "explanation/explanation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3023, "size": 8148 }
%------------------------------------ % Dario Taraborelli % Typesetting your academic CV in LaTeX % % URL: http://nitens.org/taraborelli/cvtex % DISCLAIMER: This template is provided for free and without any guarantee % that it will correctly compile on your system if you have a non-standard % configuration. % Some rights reserved: http://creativecommons.org/licenses/by-sa/3.0/ %------------------------------------ %!TEX TS-program = xelatex %!TEX encoding = UTF-8 Unicode \documentclass[11pt, a4paper]{article} \usepackage{fontspec} \usepackage{setspace} %\usepackage[xetex]{graphicx} \usepackage{wallpaper} \usepackage{color} \usepackage{hyperref} %\ThisURCornerWallPaper{0.2}{sandiainternal.jpg} % DOCUMENT LAYOUT \usepackage{geometry} \geometry{a4paper, textwidth=5.5in, textheight=8.5in, marginparsep=7pt, marginparwidth=.6in} \setlength\parindent{0in} \definecolor{mycolor1}{rgb}{0.1, 0.4, 0.3} \definecolor{mycolor2}{rgb}{0.1, 0.3, 0.2} \definecolor{mycolor3}{rgb}{0.8, 0.7, 0.2} \definecolor{mycolor4}{rgb}{0.9, 0.1, 0.4} \definecolor{mycolor5}{rgb}{0.7, 0.4, 0.4} \definecolor{mycolor6}{rgb}{0.4, 0.7, 0.7} % FONTS \usepackage{xunicode} \usepackage{xltxtra} \defaultfontfeatures{Mapping=tex-text} % converts LaTeX specials (``quotes'' --- dashes etc.) to unicode %\setromanfont [Ligatures={Common}, BoldFont={Adobe Caslon Pro Bold}, ItalicFont={Adobe Caslon Pro Italic}]{Adobe Caslon Pro} \setmonofont[Scale=0.8]{Monaco} % ---- CUSTOM AMPERSAND %\newcommand{\amper}{{\fontspec[Scale=.95]{Adobe Caslon Pro Italic}\selectfont\itshape\&}} % ---- MARGIN YEARS \usepackage{marginnote} \newcommand{\years}[1]{\marginnote{\scriptsize #1}} \renewcommand*{\raggedleftmarginnote}{} \setlength{\marginparsep}{7pt} \reversemarginpar % HEADINGS \usepackage{sectsty} \usepackage[normalem]{ulem} \sectionfont{\rmfamily\mdseries\upshape\Large} \subsectionfont{\rmfamily\bfseries\upshape\normalsize} \subsubsectionfont{\rmfamily\mdseries\upshape\normalsize} % PDF SETUP % ---- FILL IN HERE THE DOC TITLE AND AUTHOR %\usepackage[dvipdfm, bookmarks, colorlinks, breaklinks, pdftitle={Michael Shaughnessy - vita},pdfauthor={Michael Shaughnessy}]{hyperref} %\hypersetup{linkcolor=blue,citecolor=blue,filecolor=black,urlcolor=blue} % DOCUMENT \begin{document} {\LARGE \textbf{Michael Shaughnessy}} \\[1cm] 1880 Tallac St.\\ Napa, CA \texttt{94558} U.S.A.\\[.2cm] Phone: \texttt{530-219-0940}\\ Citizenship: USA \\ %email:\href{mailto:[email protected]}{[email protected]}\\ %{[email protected]}\\ \href{mailto:[email protected]}{\nolinkurl{[email protected]}}\\ \url{www.linkedin.com/in/mickeyshaughnessy1}\\ \url{github.com/michaelshaughnessy} %\section*{\color{mycolor4}Skills} % %\onehalfspace {\color{mycolor1}\textbf{\textbullet Computer engineering and algorithm design}}\\ % {\color{mycolor1}\textbf{\textbullet Software development: databases, distributed systems, web}} \\ % {\color{mycolor1}\textbf{\textbullet Modeling, simulation, and optimization}} \\ %{\color{mycolor1}\textbf{\textbullet Technical communication}}\\ \singlespace % \small{\textbf{\emph{Languages \& Software:}} Python, Perl, Linux, SQL, Excel, C++, Matlab, Redis, Git, MongoDB, TeX, VASP, LAMMPS, VMD \\ \section*{\color{mycolor4}Competencies} \\ {\color{mycolor1}\textbf{A}} - Ability to analyze and interpret written technical materials, rules, regulations, instructions and reports. \\ {\color{mycolor1}\textbf{B}} - Ability to establish and maintain effective public relations with diverse groups. 
\\ {\color{mycolor1}\textbf{C}} - Skill in oral communications in order to make clear and convincing oral presentations.\\ {\color{mycolor1}\textbf{D}} - Ability to produce well-written information for technical material.}} \section*{\color{mycolor4}Experience} \singlespace Related competencies (see above) are marked where applicable e.g. \textit{\textbf{{\color{mycolor1}A, B, D}}} means the experience relates to competencies A, B, and D. \newline \years{ March 2015- Present}\textbf{LeapYear Technologies, Berkeley} \emph{VP of Engineering} \\ \textit{Salary}: Equity + \$100,000 / year\\ \textbf{{\color{mycolor2}Lead data analytics and data privacy teams.}} \\ \textbullet Lead a team building Shroudbase (Rust, Haskell and Python on AWS EC2) for private machine learning on sensitive data in medical, advertising and financial industries. \textit{\textbf{{\color{mycolor1}A, B, C, D}}} \newline % \\ \textbullet Client engagement, including identifying problems, preparing data, building models and communicating results. % \\ \textbullet Development and implementation of differentially private machine learning algorithms. \noindent \\ \textit{\textbf{Aug 2014- Feb 2015}\\\textbf{\emph{RTBiQ, Inc. San Francisco}} }| \emph{Data Engineer \& Data Scientist} \newline \textit{Salary}: Equity only \\ \textbf{{\color{mycolor2}Designed and implemented real-time bidding control and optimization algorithms for pricing mobile advertising.}} \\ \textbullet Dynamic control algorithm lowers cost by to 50-100\%, compared to the previous method and replies to up to hundreds of thousands of queries per second with latency less than 150 ms. \textit{\textbf{{\color{mycolor1}A}}} \\ \textbullet Bayesian machine learning allows customers to automatically avoid fraudulent impressions and systematically improve KPIs. \textit{\textbf{{\color{mycolor1}A}}}\\ \textbullet Created integration test harness for QA, including a realistic simulated ad exchange, sending requests over HTTP. \textit{\textbf{{\color{mycolor1}A, D}}}%\\ \textbullet Improved a distributed system for running real-time bidding advertising campaigns, including multiple databases, a web frontend and API, and a dynamically controlled bidder farm.} \\ \\ \textbullet Built video ad unit capability, allowing customers to upload video advertising creative. Dynamically generated VAST XML bid responses to video auction requests. Integrated the platform with two video advertising exchanges, LiveRail and Vdopia, serving up to tens of thousands of requests per second. \textit{\textbf{{\color{mycolor1}A, B, C, D}}}\\ \textit{\textbf{Aug 2013-Aug 2014}}\\\textbf{\emph{Synopsys TCAD, Mountain View}} | \emph{R\&D Engineer} \newline \textit{Salary}: \$110,000 / year \\ \textbf{{\color{mycolor2}Developed an API interfacing quantum mechanical calculations with commercial continuum reaction-diffusion simulators.}} \\ \textbullet Calculated ab-initio data sets for ternary III-V alloys and dopants, enabling industrial customers to simulate these materials without experimental data. \textit{\textbf{{\color{mycolor1}A}}} \\ \textbullet Set up a Linux-based compute environment for rapid, parallelized multi-scale calculations. Used VASP, LAMMPS, VMD, and Python scripting. \textit{\textbf{{\color{mycolor1}A}}}\\ \textbullet Documented methodology and API usage for clients and internal customers. Drafted intellectual property disclosures for legal department. 
\textit{\textbf{{\color{mycolor1}A, B, D}}} \\ %%\hrule \noindent \textit{\textbf{Aug 2011- Aug 2013}}\\ \textbf{\emph{Sandia National Labs, Livermore}} | \emph{Researcher - Materials Physics} \newline \textit{Salary} \$85,000 / year \\ \textbullet Developed machine learning software for molecular dynamics simulations based on \textit{ab-initio} calculations without interatomic potentials or force fields. \textit{\textbf{{\color{mycolor1}A}}}\\ \textbullet Computed contact resistance to nanostructures using multi-scale methods. \textit{\textbf{{\color{mycolor1}A}}}\\ \textbullet Simulated transport across grain boundaries in thermoelectric materials and developed a thermoelectric materials aging software package. \textit{\textbf{{\color{mycolor1}A}}}\\ \textbullet Initiated and won U.S. Naval Research Lab funding for a multi-year topological insulator device research effort. \textit{\textbf{{\color{mycolor1}C, D}}}\\ \years{2009-2011}\textbf{\emph{Lawrence Livermore National Lab, Livermore:}} \emph{Lawrence Scholar}\newline Salary: \$65,000 / year \\ Identified new magnetic alloys for permanent magnet and spintronic applications. Utilized terascale computers and databases for multi-scale modeling. \textit{\textbf{{\color{mycolor1}A}}} \\ \years{2004-2011}\textbf{\emph{University of California, Davis:}} \emph{Research Assistant}\newline Calculated properties of spintronic materials using density functional theory. Investigated topological and quantum mechanical properties of black hole and Euclidean solutions in gravity. Lead laboratory courses in physics and wrote solutions for graduate quantum mechanics courses. Orally presented research at APS and MRS conferences. \textit{\textbf{{\color{mycolor1}C}}} \\ \years{2003-2004}\textbf{\emph{Musculoskeletal Research Lab, Hershey:}} \emph{Student Researcher} \newline Created nanostructured surfaces for bone cell growth using plasma etching and polymer spin-coating. Characterized cell response using FTIR spectroscopy and electron microscopy.\\ \years{2002}\textbf{\emph{Cornell University Controlled Environment Agriculture Group, Ithaca: }}\emph{Student Researcher} \newline Developed a physical model of water diffusion in germinating seeds and built a hydroponic spinach prouting system.\\ \years{2000-2004}\textbf{\emph{Cornell University Physical Sciences Library, Ithaca: }}\emph{Library Manager}\newline Managed day-to-day library operations and customer service. 
\\ %\hrule \section*{{\color{mycolor4}Education}} \noindent \years{2004}\textsc{BS}, Agricultural and Biological Engineering, Cornell University, Ithaca\\ \years{2011}\textsc{PhD}, Physics, University of California, Davis\\ \indent Thesis: \textit{Electronic and Magnetic Structure in Doped Semiconductors} %\hrule \section*{{\color{mycolor4}Honors \& Clearance}} \noindent \years{2011}DOE EERE Postdoctoral Fellowship Awardee \newline \years{2009}Lawrence Scholar Fellowship \newline \years{2011-2013}DOE L Clearance \newline %\section*{{\color{mycolor4}Patents}} %\noindent %Filed 26 September 2014 (Pending) \\ %\textbullet Adaptive Parallelization for Multi-Scale Simulation (14/497681) \\ %\textbullet First Principles Design Automation Tool (PCT/US14/57803) \\ %\textbullet Estimation of Effective Channel Length for FinFETs and Nanowires (PCT/US14/57637) \\ %\textbullet Simulation Scaling with DFT and Non-DFT (14/498458) \\ %\textbullet Iterative Simulation with DFT and Non-DFT (14/498492) \\ %\textbullet Parameter Extraction of DFT (PCT/US14/57840) \\ %\textbullet Characterizing Target Material Properties Based on Properties of Similar Materials (14/497695) \\ %\textbullet Mapping Intermediate Material Properties to Target Properties to Screen Materials (PCT/US14/57707)\\ \section*{{\color{mycolor4}Publications}} \noindent \years{2008}$\bullet$\ \ J.Y. Lim, M. Shaughnessy, Z. Zhou, H. Noh, E. A. Vogler, and H. J. Donahue. %\href{http://www.sciencedirect.com/science/article/pii/S0142961207010526} {Surface energy effects on osteoblast spatial growth and mineralization.} \emph{Biomaterials} \textbf{29}: 1776-1784\\ \years{2009}$\bullet$\ \ M. Shaughnessy, C.Y. Fong, R. Snow, K. Liu, J. Pask, and L.H. Yang. %\href{http://apl.aip.org/resource/1/applab/v95/i2/p022515_s1?isAuthorized=yes} { Origin of Large Moments in Mn$_x$Si$_{1-x}$.}\emph { Appl. Phys. Lett.} \textbf{95}: 022515\\ \years{ }$\bullet$\ \ C. Y. Fong, M. Shaughnessy, R. Snow, Kai Liu, J. E. Pask, and L. H. Yang. %\href{http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=784711} {Physical origin of measured magnetic moment in Mn$_x$Si$_{1-x}$ with x = 0.1\%.} (invited) \emph{Proceedings of SPIE}, \textbf{7398}: 73980J-1\\ \years{2010}$\bullet$\ \ M. Shaughnessy, C.Y. Fong, L.H. Yang, Ryan Snow, X.S. Chen, and Z.M. Zhiang. %\href{http://prb.aps.org/abstract/PRB/v82/i3/e035202} {Structural and magnetic properties of single dopants of Mn and Fe for Si-based spintronic materials.} \emph{Phys. Rev. B} \textbf{82}: 035202 \\ \years{ }$\bullet$\ \ C. Y. Fong, M. Shaughnessy, R, Snow, and L. H. Yang. %\href{http://onlinelibrary.wiley.com/doi/10.1002/pssc.200982696/abstract} {Theoretical investigations of defects in a Si-based digital ferromagnetic heterostructure - a spintronic material.} \emph{Physica Status Solidi C}, \textbf{7}: 747\\ \years{2011}$\bullet$\ \ M. Shaughnessy, Ryan Snow, L. Damewood, and C. Y. Fong. %\href{http://www.hindawi.com/journals/jnm/2011/140805/} {Memory and Spin Injection Devices Involving Half Metals.} \emph{Journal of Nanomaterials}, \textbf{2011}: 140805\\ \years{2012}$\bullet$\ \ S. Dag, M. Shaughnessy, C.Y. Fong, X.D. Zhu, L.H. Yang. % \href{http://www.sciencedirect.com/science/article/pii/S0921452612001901} {First principles studies of a Xe atom adsorbed on NB(110) surface.} \emph{Physica B}, \textbf{407}: 2100 \\ \years{ }$\bullet$\ \ C. Y. Fong, M. Shaughnessy, L. Damewood, and L. H. Yang. 
%\href{http://www.degruyter.com/view/j/nsmmt.2012.1.issue/nsmmt-2012-0001/nsmmt-2012-0001.xml} {Theory, Experiment and Computation of Half Metals for Spintronics: Recent Progress in Si-based Materials.} \emph{Nanoscale Systems: Mathematical Modeling, Theory and Applications}, \textbf{1}: 1-22, 2012. \\ \years{2013}$\bullet$\ \ M. Shaughnessy, C. Y. Fong, L. Damewood, C. Felser and L. H. Yang. %\href{http://jap.aip.org/resource/1/japiau/v113/i4/p043709_s1} {Structural variants and the modified Slater-Pauling curve for transition-metal-based half-Heusler alloys.} \emph{Journal of Applied Physics}, \textbf{113}: 043709 (2013) \\ \years{ } $\bullet$\ \ A.C. Ford, M. Shaughnessy, B.M. Wong, A. Kane, O.V. Kuznetsov, K.L. Krafcik, W.E. Billups, R.H. Hauge, F. Leonard. %\href{http://iopscience.iop.org/0957-4484/24/10/105202} {Physical Removal of Metallic Carbon Nanotubes from Nanotube Network Devices Using a Thermal and Fluidic Process.} \emph{Nanotechnology.} \textbf{24}: 105202. \\ \years{ }$\bullet$\ \ L.H. Yang, M. Shaughnessy, L. Damewood, C.Y. Fong. %\href{} {Half-metallic hole-doped Mn/Si trilayers.} \emph{Jour. of Phys. D.: Appl. Phys.}, \\ \years{2014}$\bullet$\ \ M. Shaughnessy, J.D Sugar, N. Bartelt, J. Zimmerman. {Energetics and thermodiffusion of Au in Bi$_2$Te$_3$.} Journal of Applied Physics. %\years{ } B. Busemeyer, M. Shaughnessy, C.Y. Fong, L.H. Yang and L. Damewood. \href{}{Self consistent Hubbard U modeling of magnetic properties of wurtzite NiO thin films.} In preparation. \\ %\years{ } M. Shaughnessy, A.C. Ford, R. Jones, C.D. Spataru. \href{}{Realistic carbon nanotube-metal contact configurations.} In preparation. \\ %\years{ } M. Shaughnessy, C.D. Spataru, D.L Medlin and F. Leonard. \href{}{First principles calculation of thermoelectric transport across twinned grain boundaries.} In preparation. \\ %\subsection*{Talks} %\noindent %\years{} %\ThisLRCornerWallPaper{0.5}{cichlid_on_white.jpg} %\section*{References} %Dr. Reese Jones {[email protected]}\\ %Prof. Ching Yao Fong: {[email protected] }\\ %Dr. Lin H. Yang: {[email protected]}\\ %\section*{Interests}bike touring, 3d printing, tropical fish keeping, and cooking \ThisLRCornerWallPaper{0.5}{cichlid_on_white4_flip_check.jpg} %\vspace{1cm} \vfill{} %\hrulefill \begin{center} {\scriptsize Last updated: \today\- }%Typeset in \href{http://nitens.org/taraborelli/cvtex}{ %\fontspec{Times New Roman}\XeTeX }\\ % ---- FILL IN THE FULL URL TO YOUR CV HERE %\href{http://nitens.org/taraborelli/cvtex}{http://nitens.org/taraborelli/cvtex}} \end{center} \end{document}
{ "alphanum_fraction": 0.75024108, "avg_line_length": 69.1333333333, "ext": "tex", "hexsha": "8fe4b8abdaab1b8167d4b821e8dfb72ab5b47bed", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "37cea98ad52dd401428e595db7396072de9b37a2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mickeyshaughnessy/Resume", "max_forks_repo_path": "Resume_MS_USPTO.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "37cea98ad52dd401428e595db7396072de9b37a2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mickeyshaughnessy/Resume", "max_issues_repo_path": "Resume_MS_USPTO.tex", "max_line_length": 563, "max_stars_count": null, "max_stars_repo_head_hexsha": "37cea98ad52dd401428e595db7396072de9b37a2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mickeyshaughnessy/Resume", "max_stars_repo_path": "Resume_MS_USPTO.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4711, "size": 15555 }
\section{Community Engagement} Rubin Observatory will work closely with the community on the detailed design of the \esp. \subsection{Survey Cadence Optimization Committee} The \href{https://www.lsst.org/content/charge-survey-cadence-optimization-committee-scoc}{Survey Cadence Optimization Committee (SCOC)} is an advisory committee to the Rubin Observatory Operations Director consisting of 10 members drawn almost entirely from the science community. The SCOC was convened in 2020 and will be a standing committee throughout the life of Rubin Observatory operations. Early Science observations should align as closely as possible with the main survey and ultimate long-term science goals; the SCOC will be involved in all aspects of development of the \esp. Specifically, the SCOC will make specific recommendations for Early Science observations, based on the plans for commissioning and the realized performance of the telescope and software. \subsection{Community Forum} The Rubin Observatory Community Platform has a dedicated category for Early Science\footnote{ See \url{https://community.lsst.org/t/about-the-early-science-category/5775}}, where community members are encouraged to open discussions on the topic of early science. \subsection{Community Input} A process will be put in place to formally solicit input from the community. Several science collaborations have already been pro-active in providing input on considerations for template generation in year one on both the community forum and as research notes.
{ "alphanum_fraction": 0.8217757615, "avg_line_length": 77.15, "ext": "tex", "hexsha": "5843553e63cd7d1fe63bacd9b6d1e4381e602c7b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e1193b6253095611b060eaff993423e6f3267f7e", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "rubin-observatory/rtn-011", "max_forks_repo_path": "communication.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e1193b6253095611b060eaff993423e6f3267f7e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "rubin-observatory/rtn-011", "max_issues_repo_path": "communication.tex", "max_line_length": 280, "max_stars_count": null, "max_stars_repo_head_hexsha": "e1193b6253095611b060eaff993423e6f3267f7e", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "rubin-observatory/rtn-011", "max_stars_repo_path": "communication.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 297, "size": 1543 }
\section*{Results} This section of the report will discuss the results obtained from the two models used in this project, the metrics used to measure the accuracy of the model and the qualitative measure of the generated answers. \subsection*{Training and Accuracy Metrics} First, the BioBERT model was fine-tuned on our dataset. The model was trained for 20 epoch running on 4 GPU cores. The model reduced the loss value with each epoch during training. Then, the following accuracy metrics scores were achieved during evaluation on test data. \begin{itemize} \item `exact': 70.83333333333333 \item `f1': 77.78311271345326, \item `total': 72, \item `HasAns\_exact': 70.83333333333333, \item `HasAns\_f1': 77.78311271345326, \item `HasAns\_total': 72, \item `best\_exact': 70.83333333333333, \item `best\_exact\_thresh': 0.0, \item `best\_f1': 77.78311271345326, \item `best\_f1\_thresh': 0.0 \end{itemize} The accuracy metrics were monitored carefully to ensure that the model generalizes well and not overfit to the training data. Next, the GPT2 model was fine-tuned on the data processed for it. This model was tuned for 19 epochs and the loss value decreased with each epoch. Once the model was tuned, the accuracy of the model was evaluated on the text data. The primary metric used was the perplexity score of the model which is as follows, \begin{itemize} \item `perplexity': 2.5901412982272793. \end{itemize} Again, special attention is given to ensure that the model remains general and not overfit to the data. \subsection*{Qualitative Results} Given the nature of the project, numeric accuracy scores may not be an intuitive indicator of the performance of the model. Thus, a few examples are given below that shows the question asked to the model, the answer given by the BioBERT model and the verbose, comprehensive answer given by the GPT2 model stacked on top. \begin{enumerate} \item[Set 1:]\qquad\\ \textbf{Question}: ``What are the symptoms of ovarian rian Germ Cell Tumors ?'' \textbf{Answer}: signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause. \textbf{Comprehensive}: \begin{itemize} \item \textbf{Answer 1}: signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause. These and other signs and symptoms may be caused by ovarian germ cell tumor or by other conditions. Check with your doctor if you have any of the following: Weakness or feeling tired. Weight loss with little or no effect on menstrual periods. Vaginal bleeding after menopause. Sometimes pain or swelling in a women's vagina. Fever or night sweats. \item \textbf{Answer 2}: signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause. These and other signs and symptoms may be caused by ovarian germ cell tumor or by other conditions. Check with your doctor if you have any of the following: Swelling of the abdomen or vaginal bleeding after menopause. Trouble starting the flow of urine. Weight loss for no known reason. Trouble emptying the bladder completely after menopause. Pain or feeling of fullness below the ribs on the left side. Fever for no known reason. Easy bruising or bleeding. Pain or feeling of fullness below the ribs on the right side. \item \textbf{Answer 3}: signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause. Other conditions can increase the risk of hairy cell leukemia. Check with your doctor if you have any of the following: Pain or swelling in the abdomen. 
Weakness or feeling tired. Weight loss for no known reason. A menstrual period that does not go away. \item \textbf{Answer 4}: signs of ovarian germ cell tumor are swelling of the abdomen or vaginal bleeding after menopause. These and other signs may be caused by ovarian germ cell tumor or by other conditions. Check with your doctor if you have any of the following: Pain or swelling in the abdomen. A lump in the abdomen, vagina, or rectum. Weight loss for no known reason. Pain or a feeling of fullness below the ribs on the left side. Weight loss with no known reason. A dark urine like color that does not go away. \end{itemize} \item[Set 2:]\qquad\\ \textbf{Question}: ``What are the stages of Vulvar Cancer ?'' \textbf{Answer}: stage i stage ii stage iii stage iv after vulvar cancer has been diagnosed, tests are done to find out if cancer cells have spread within the vulva or to other parts of the body. \textbf{Comprehensive}: \begin{itemize} \item \textbf{Answer 1}: stage i stage ii stage iii stage iv after vulvar cancer has been diagnosed, tests are done to find out if cancer cells have spread within the vulva or to other parts of the body. Cancer that has spread within the vulva to other parts of the body is called seminoma. A biopsy is done to diagnose seminoma. A biopsy is done during surgery to remove the tumor. Sometimes, a small incision (cut) is made in the vulva and a laparoscope (a thin, lighted tube) is inserted into it. The doctor will usually remove the tumor and a small amount of tissue, after the doctor has removed all the cancer. The tissue sample is checked under a microscope for signs of cancer. A pathologist views the tissue under a microscope to look for cancer cells. If cancer cells are not found, it may not be necessary to remove more tissue. There are three ways that cancer spreads in the body. Cancer can spread through tissue, the lymph system, and the blood: Tissue. The cancer spreads from where it began by growing into nearby areas. Lymph system. The cancer spreads from where it began by getting into the lymph system. The cancer travels through the lymph vessels to other parts of the body. Blood. The cancer spreads from where it began by getting into the blood. The cancer travels through the blood vessels to other parts of the body. Cancer may spread from where it began to other parts of the body through organs and tissues. When cancer spreads to another part of the body, it is called metastasis. Cancer cells break away from where they began (the primary tumor) and travel through the lymph system or blood. \item \textbf{Answer 2}: stage i stage ii stage iii stage iv after vulvar cancer has been diagnosed, tests are done to find out if cancer cells have spread within the vulva or to other parts of the body. The following stages are used for vulvar cancer: Stage I Stage II Stage III Stage IV Stage V After vulvar cancer has been diagnosed, tests are done to find out if cancer cells have spread within the vulva or to other parts of the body. The process used to find out if cancer has spread within the vulva or to other parts of the body is called staging. The information gathered from the staging process determines the stage of the disease. It is important to know the stage in order to plan treatment. Vulvar cancer in men may affect the risk of developing rectal cancer. Having rectal cancer during surgery can increase the risk of rectal cancer and is called a risk factor. 
Having rectal cancer in a first degree relative (parent, brother, or sister) increases the risk even more. \end{itemize} \end{enumerate}
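For concreteness, the snippet below sketches how a perplexity score of this kind can be computed for the fine-tuned GPT2 model. It is a minimal sketch, assuming a Hugging Face \texttt{transformers}-style checkpoint; the checkpoint path and the sample text are hypothetical placeholders rather than the project's actual code. Perplexity is simply the exponential of the mean per-token cross-entropy loss, which is why a steadily decreasing training loss translates directly into a lower perplexity score.

\begin{verbatim}
# Minimal sketch (assumes the Hugging Face `transformers` library;
# "./gpt2-finetuned" is a hypothetical checkpoint path).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("./gpt2-finetuned")
model = GPT2LMHeadModel.from_pretrained("./gpt2-finetuned")
model.eval()

def perplexity(text):
    # Perplexity = exp(mean next-token cross-entropy loss).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the average
        # cross-entropy loss over the token sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

print(perplexity("signs of ovarian germ cell tumor are swelling "
                 "of the abdomen or vaginal bleeding after menopause."))
\end{verbatim}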
\section{Harvard Architecture}
% TODO: figure of the Harvard architecture

\subsection{Highlights}
\begin{itemize}
    \item Instructions and data are stored in separate memories.
    \item There are two connections between the CPU's control unit and memory, one for each memory system.
    \item Instructions can be loaded at the same time as data (instruction fetch and data access proceed in parallel over separate buses).
    \item Separate address spaces are used for instructions and data, which makes programming harder.
    \item Implemented in some PIC microcontrollers and in digital signal processors (DSPs).
    \item Used in DSPs for data streaming:
    \begin{itemize}
        \item Higher memory bandwidth
        \item More predictable bandwidth
    \end{itemize}
\end{itemize}

\subsection{Instruction types}
\subsubsection{Data handling and memory operations}
load, store, move
\begin{itemize}
    \item Set a register to a fixed constant value.
    \item Copy data from a memory location to a register, or vice versa (a machine instruction is often called move; however, the term is misleading). Used to store the contents of a register, the result of a computation, or to retrieve stored data to perform a computation on it later. Often called load and store operations.
    \item Read and write data from hardware devices.
\end{itemize}

\subsubsection{Arithmetic and logic operations}
add, subtract, multiply, divide (on signed/unsigned fixed-point, decimal, or floating-point operands); and, or, xor
\begin{itemize}
    \item Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register.
    \item Increment, decrement in some ISAs, saving operand fetch in trivial cases.
    \item Perform bitwise operations, e.g., taking the conjunction and disjunction of corresponding bits in a pair of registers, taking the negation of each bit in a register.
    \item Compare two values in registers (for example, to see if one is less, or if they are equal).
    \item Floating-point instructions for arithmetic on floating-point numbers.
\end{itemize}

\subsubsection{Control flow operations}
branch, jump, compare, call, return
\begin{itemize}
    \item Branch to another location in the program and execute instructions there.
    \item Conditionally branch to another location if a certain condition holds.
    \item Indirectly branch to another location.
    \item Call another block of code, while saving the location of the next instruction as a point to return to.
\end{itemize}

\subsubsection{Coprocessor instructions}
\begin{itemize}
    \item Load/store data to and from a coprocessor, or exchanging with CPU registers.
    \item Perform coprocessor operations.
\end{itemize}

\subsubsection{Complex instructions}
\begin{itemize}
    \item Transferring multiple registers to or from memory (especially the stack) at once.
    \item Moving large blocks of memory (e.g. string copy or DMA transfer).
    \item Complicated integer and floating-point arithmetic (e.g. square root, or transcendental functions such as logarithm, sine, cosine, etc.).
    \item SIMD instructions, a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated SIMD registers.
    \item Performing an atomic test-and-set instruction or other read-modify-write atomic instruction.
    \item Instructions that perform ALU operations with an operand from memory rather than a register.
\end{itemize}

\subsection{ISA - Instruction Set Architecture}
An instruction set architecture (ISA) is an abstract model of a computer.
An instruction set architecture is distinguished from a microarchitecture, which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the Advanced Micro Devices Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.

\subsubsection{Machine instruction characteristics}
The operation of the processor is determined by the instructions it executes, referred to as machine instructions or computer instructions. The collection of different instructions that the processor can execute is referred to as the processor's instruction set.

\subsubsection{Instruction repertoire}
How many and which operations to provide, and how complex operations should be.

\subsubsection{Specification of an instruction's operation}
\begin{itemize}
    \item Operation code: Specifies the operation to be performed (e.g., ADD, I/O). The operation is specified by a binary code, known as the operation code, or opcode.
    \item Source operand reference: The operation may involve one or more source operands, that is, operands that are inputs for the operation.
    \item Result operand reference: The operation may produce a result.
    \item Next instruction reference: This tells the processor where to fetch the next instruction after the execution of this instruction is complete.
\end{itemize}
Source and result operands can be in one of four areas:
\begin{itemize}
    \item Main or virtual memory: As with next instruction references, the main or virtual memory address must be supplied.
    \item Processor register: With rare exceptions, a processor contains one or more registers that may be referenced by machine instructions. If only one register exists, reference to it may be implicit. If more than one register exists, then each register is assigned a unique name or number, and the instruction must contain the number of the desired register.
    \item Immediate: The value of the operand is contained in a field in the instruction being executed.
    \item I/O device: The instruction must specify the I/O module and device for the operation. If memory-mapped I/O is used, this is just another main or virtual memory address.
\end{itemize}

\subsubsection{Classification by operand location}
\begin{itemize}
    \item Stack ('60s to '70s)
    \item Accumulator (before the '60s)
    \item Register-memory ('70s to the present)
    \item Register-register (load/store) ('60s to the present)
    \item Memory-memory ('70s to '80s)
\end{itemize}

\subsubsection{Registers}
Number of processor registers that can be referenced by instructions, and their use.

\subsubsection{Data types}
\paragraph{Numeric}\mbox{}\\\\%%
\begin{itemize}
    \item Fixed-point binary, unsigned
    \item Fixed-point binary, signed
    \item Floating-point binary (IEEE 754 or others)
    \item BCD (decimal)
\end{itemize}
\paragraph{Characters}\mbox{}\\\\%%
\begin{itemize}
    \item ASCII
    \item EBCDIC
    \item Unicode
\end{itemize}
\paragraph{Logical data}\mbox{}\\\\%%
\paragraph{Addresses}\mbox{}\\\\%%

\subsubsection{Instruction Sets: Addressing Modes}
% Source: W. Stallings, Computer Organization and Architecture, 10th ed., p. 458
\paragraph{Immediate}\mbox{}\\\\%%
The simplest form of addressing is immediate addressing, in which the operand value is present in the instruction.
The advantage of immediate addressing is that no memory reference other than the instruction fetch is required to obtain the operand, thus saving one memory or cache cycle in the instruction cycle. The disadvantage is that the size of the number is restricted to the size of the address field, which, in most instruction sets, is small compared with the word length.
\[ \mathrm{Operand} = A \]
\paragraph{Direct}\mbox{}\\\\%%
A very simple form of addressing is direct addressing, in which the address field contains the effective address of the operand. The technique was common in earlier generations of computers but is not common on contemporary architectures. It requires only one memory reference and no special calculation. The obvious limitation is that it provides only a limited address space.
\[ \mathrm{EA} = A \]
\paragraph{Indirect}\mbox{}\\\\%%
With direct addressing, the length of the address field is usually less than the word length, thus limiting the address range. One solution is to have the address field refer to the address of a word in memory, which in turn contains a full-length address of the operand. This is known as indirect addressing:
\[ \mathrm{EA} = (A) \]
As defined earlier, the parentheses are to be interpreted as meaning contents of. The obvious advantage of this approach is that for a word length of $N$, an address space of $2^N$ is now available. The disadvantage is that instruction execution requires two memory references to fetch the operand: one to get its address and a second to get its value. Although the number of words that can be addressed is now equal to $2^N$, the number of different effective addresses that may be referenced at any one time is limited to $2^K$, where $K$ is the length of the address field. Typically, this is not a burdensome restriction, and it can be an asset.
\paragraph{Register}\mbox{}\\\\%%
Register addressing is similar to direct addressing. The only difference is that the address field refers to a register rather than a main memory address:
\[ \mathrm{EA} = R \]
To clarify, if the contents of a register address field in an instruction is 5, then register R5 is the intended address, and the operand value is contained in R5. Typically, an address field that references registers will have from 3 to 5 bits, so that a total of from 8 to 32 general-purpose registers can be referenced. The advantages of register addressing are that (1) only a small address field is needed in the instruction, and (2) no time-consuming memory references are required. As was discussed in Chapter 4, the memory access time for a register internal to the processor is much less than that for a main memory address. The disadvantage of register addressing is that the address space is very limited.
\paragraph{Register indirect}\mbox{}\\\\%%
Just as register addressing is analogous to direct addressing, register indirect addressing is analogous to indirect addressing. In both cases, the only difference is whether the address field refers to a memory location or a register. Thus, for register indirect addressing,
\[ \mathrm{EA} = (R) \]
The advantages and limitations of register indirect addressing are basically the same as for indirect addressing. In both cases, the address space limitation (limited range of addresses) of the address field is overcome by having that field refer to a word-length location containing an address. In addition, register indirect addressing uses one less memory reference than indirect addressing.
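To make these definitions concrete, the hypothetical Python sketch below (a toy model, not real ISA code) represents memory as a flat word-addressed list and the register file as a small list, and returns the operand selected by each of the modes discussed so far. Note how the one-reference versus two-reference trade-off is visible in the number of memory accesses each branch performs:

\begin{verbatim}
# Toy model of the addressing modes above. Hypothetical illustration.
memory = [0] * 32
registers = [0] * 8

def operand(mode, a):
    """Return the operand selected by `mode` with address field `a`."""
    if mode == "immediate":          # Operand = A
        return a
    if mode == "direct":             # EA = A
        return memory[a]
    if mode == "indirect":           # EA = (A): word A holds the address
        return memory[memory[a]]
    if mode == "register":           # EA = R
        return registers[a]
    if mode == "register_indirect":  # EA = (R): register holds the address
        return memory[registers[a]]
    raise ValueError(mode)

# Example setup: memory[5] holds 7, memory[7] holds 42, R1 holds 7.
memory[5], memory[7], registers[1] = 7, 42, 7
print(operand("immediate", 5))          # -> 5
print(operand("direct", 5))             # -> 7
print(operand("indirect", 5))           # -> 42
print(operand("register", 1))           # -> 7
print(operand("register_indirect", 1))  # -> 42
\end{verbatim}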
\paragraph{Displacement}\mbox{}\\\\%%
A very powerful mode of addressing combines the capabilities of direct addressing and register indirect addressing. It is known by a variety of names depending on the context of its use, but the basic mechanism is the same. We will refer to this as displacement addressing:
\[ \mathrm{EA} = A + (R) \]
Displacement addressing requires that the instruction have two address fields, at least one of which is explicit. The value contained in one address field (value $=A$) is used directly. The other address field, or an implicit reference based on the opcode, refers to a register whose contents are added to $A$ to produce the effective address. We will describe three of the most common uses of displacement addressing:
\subparagraph{Relative addressing}\mbox{}\\\\%%
For relative addressing, also called PC-relative addressing, the implicitly referenced register is the program counter (PC). That is, the next instruction address is added to the address field to produce the EA. Typically, the address field is treated as a twos complement number for this operation. Thus, the effective address is a displacement relative to the address of the instruction. Relative addressing exploits the concept of locality. If most memory references are relatively near to the instruction being executed, then the use of relative addressing saves address bits in the instruction.
\subparagraph{Base-register addressing}\mbox{}\\\\%%
For base-register addressing, the interpretation is the following: the referenced register contains a main memory address, and the address field contains a displacement (usually an unsigned integer representation) from that address. The register reference may be explicit or implicit. Base-register addressing also exploits the locality of memory references.
\subparagraph{Indexing}\mbox{}\\\\%%
For indexing, the interpretation is typically the following: the address field references a main memory address, and the referenced register contains a positive displacement from that address. Note that this usage is just the opposite of the interpretation for base-register addressing. Of course, it is more than just a matter of user interpretation. Because the address field is considered to be a memory address in indexing, it generally contains more bits than an address field in a comparable base-register instruction. Also, we will see that there are some refinements to indexing that would not be as useful in the base-register context. Nevertheless, the method of calculating the EA is the same for both base-register addressing and indexing, and in both cases the register reference is sometimes explicit and sometimes implicit (for different processor types).

An important use of indexing is to provide an efficient mechanism for performing iterative operations. Consider, for example, a list of numbers stored starting at location $A$. Suppose that we would like to add 1 to each element on the list. We need to fetch each value, add 1 to it, and store it back. The sequence of effective addresses that we need is $A, A+1, A+2, \ldots$, up to the last location on the list. With indexing, this is easily done. The value $A$ is stored in the instruction's address field, and the chosen register, called an index register, is initialized to 0. After each operation, the index register is incremented by 1 (a sketch of this loop is given at the end of this section).
\paragraph{Stack}\mbox{}\\\\%%
The final addressing mode that we consider is stack addressing. As defined in Appendix I, a stack is a linear array of locations.
It is sometimes referred to as a pushdown list or last-in-first-out queue. The stack is a reserved block of locations. Items are appended to the top of the stack so that, at any given time, the block is partially filled. Associated with the stack is a pointer whose value is the address of the top of the stack. Alternatively, the top two elements of the stack may be in processor registers, in which case the stack pointer references the third element of the stack. The stack pointer is maintained in a register. Thus, references to stack locations in memory are in fact register indirect addresses. The stack mode of addressing is a form of implied addressing. The machine instructions need not include a memory reference but implicitly operate on the top of the stack.

\subsubsection{Instruction format}
Definition: defines the layout of the bits that make up the instruction.
\paragraph{Components:}\mbox{}\\\\%%
\begin{itemize}
    \item Opcode
    \item 0 to $n$ operands
    \item Addressing mode of each operand
    \item Flags
\end{itemize}
% ARM instruction format: see U3, p. 11
% x86 instruction format: see U3, p. 12

\subsubsection{Classification of the ISA by the number of addresses}
\paragraph{3 addresses}\mbox{}\\\\%%
\begin{itemize}
    \item Operand 1, Operand 2, Result
    \item e.g. $a=b+c$
\end{itemize}
\paragraph{2 addresses}\mbox{}\\\\%%
\begin{itemize}
    \item One address doubles as operand and result
    \item e.g. $a=a+c$
\end{itemize}
\paragraph{1 address}\mbox{}\\\\%%
\begin{itemize}
    \item Implicit second address (accumulator)
\end{itemize}
\paragraph{0 addresses}\mbox{}\\\\%%
\begin{itemize}
    \item All addresses are implicitly defined
    \item Stack-based computer
\end{itemize}

\subsection{Memory}
Word size: the ``natural'' unit of organization of memory. The size of a word is typically equal to the number of bits used to represent an integer and to the instruction length.
\subsubsection{Big / Little Endian}
% See Stallings, 10th edition, p. 452
\subsubsection{Addressing}
Addressing: the mode or modes by which the address of an operand is specified.
\subsubsection{Address space}
\begin{itemize}
    \item Memory: The memory space includes system main memory. It also includes PCIe I/O devices. Certain ranges of memory addresses map into I/O devices.
    \item I/O: This address space is used for legacy PCI devices, with reserved memory address ranges used to address legacy I/O devices.
    \item Configuration: This address space enables the TL to read/write configuration registers associated with I/O devices.
    \item Message: This address space is for control signaling.
\end{itemize}
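As a worked illustration of the indexing mode described earlier, the hypothetical Python sketch below (the same toy flat-memory model as before, not real ISA code) performs the add-1-to-each-element loop, with the index register supplying the displacement added to the base address $A$:

\begin{verbatim}
# Indexed addressing (EA = A + (R)): add 1 to each element of a
# five-element list stored at base address A. Hypothetical sketch.
memory = [0] * 16
A = 4                                # base address held in the instruction
memory[A:A+5] = [10, 20, 30, 40, 50]

index_register = 0                   # initialized to 0, as in the text
for _ in range(5):
    ea = A + index_register          # effective address = A + (R)
    memory[ea] = memory[ea] + 1      # fetch, add 1, store back
    index_register += 1              # increment the index after each step

print(memory[A:A+5])                 # -> [11, 21, 31, 41, 51]
\end{verbatim}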
% !TeX root = ../../thesis.tex
\chapter{Objectives} %
\label{ch:objectives} %
\epigraphhead[\epipos]{%
\epigraph{%
``You could find out most things, if you knew the right questions to ask. Even if you didn't, you could still find out a lot.''%
}{%
\textit{`Gurgeh' in `The Player of Games' by Iain M. Banks}%
}}

Given their introduction in \cref{ch:nanopores}, it should be beyond doubt that nanopores---and biological ones in particular---have become powerful single-molecule sensing tools. Among the key strengths of nanopores is their ability to interrogate individual molecules label-free, at high sampling frequencies and with long observation times. Whereas over the past 25 years nanopore research was driven by genomics, researchers are now recognizing these advantages may also be useful in other fields, including proteomics, metabolomics, single-molecule enzymology~\cite{Willems-VanMeervelt-2017} and biosensing. This expanding application space also presents an increasing challenge: translating the `two-dimensional' nanopore current into a meaningful set of information. Even though complementary experimental techniques may alleviate the problem, it is here that modeling approaches---be they analytical or computational---can endow researchers with a thorough understanding of the complete system itself, rather than of its individual components. Specifically, by elucidating the physical mechanisms that govern the interactions between the nanopore and the analyte molecules, they may aid in the unambiguous interpretation of the current signals, provide insights into the prevailing conditions within the pore, and detail guidelines for tailoring the properties of nanopores.

In this dissertation, the focus lies on the development of analytical and computational methodologies (\textbf{Objective~1}) that explain the physical mechanisms behind the experimentally observed behavior of nanopore-analyte systems (\textbf{Objectives~2 and 3}), or that can be used as a `computational microscope' to study all the properties of the nanopore itself (\textbf{Objective~4}).

\clearpage
%
\objective{Develop methodologies for accurate modeling of biological nanopores}
%
Due to their proteinaceous nature, the primary computational tools for studying biological nanopores are \glsfirst{md} simulations, where every atom is modeled explicitly. Unfortunately, the wealth of information accessible through \gls{md} simulations comes at a heavy computational cost, often necessitating months of supercomputer time. Even though we will still make use of \gls{md} to construct realistic and well-equilibrated homology models of various biological nanopores, most simulations in this work will be based on continuum, rather than discrete, representations of the nanopore systems. In~\cref{ch:electrostatics} we will show how the \glsfirst{pb} equation can be employed to quantify the equilibrium (\ie~without an external bias voltage) electrostatic interactions between a nanopore and the surrounding electrolyte on the one hand, and its analyte molecules on the other. Nanopores are governed by more than just electrostatics, however, and thus in~\cref{ch:epnpns} we develop the \gls{epnp-ns} equations: a simulation framework that introduces several self-consistent corrections to the electrolyte properties in an attempt to mitigate the shortcomings of the classical {PNP-NS} equations at the nanoscale and beyond infinite dilution.
Because the \gls{epnp-ns} equations can be solved for continuum systems, they aim to provide a fast yet accurate means to model the transport of ions and the flow of water through a nanopore. In~\cref{ch:trapping} we will also use an analytical model---which intrinsically necessitates a reduction to the most essential components---to gain valuable insights.
%
\objective{Investigate the equilibrium electrostatics of biological nanopores}
%
The interior walls of most biological nanopores are typically lined with charged amino acids. Hence, it is unsurprising that electrostatic interactions often play a determining role in the overall behavior of a pore. In~\cref{ch:electrostatics}, we use the \gls{pb} equation to calculate the electrostatic potential within several variants of the \glsfirst{plyab}~\cite{Huang-2020}, \glsfirst{frac}~\cite{Wloka-2016,Huang-2017} and \glsfirst{clya}~\cite{Franceschini-2016} nanopores and investigate the effect of ionic strength and pH on their emergent properties, such as the \glsfirst{eof}. Additionally, we map out the electrostatic free energies associated with \gls{ssdna} and \gls{dsdna} translocation through variants of the \gls{frac} and \gls{clya} pores, respectively, and link the observed differences with published experimental findings. A similar methodology will be used in \cref{ch:trapping} for quantifying the electrostatic energy associated with the trapping of a protein within \gls{clya}~\cite{Soskine-Biesemans-2015}, which is the focus of \textbf{Objective~3}.

\clearpage
%
\objective{Elucidate the trapping behavior of a protein inside a biological nanopore}
%
In their pioneering work with \gls{clya}, Soskine, Biesemans~\etal{} showed that the dwell time of the \glsfirst{dhfr} enzyme (\SI{\approx19}{\kDa}) within \gls{clya} could be increased by orders of magnitude by (1) fusing a positively charged polypeptide tag to the C-terminus of the protein (`\DHFRt'), and (2) allowing it to bind to \gls{mtx}, a small negatively charged inhibitor (\SI{454}{\dalton})~\cite{Soskine-Biesemans-2015}. Moreover, the dwell time of \DHFRt{} was found to depend strongly and non-monotonically on the magnitude of the applied bias voltage~\cite{Biesemans-2015}. The physical mechanisms behind this behavior are elucidated in~\cref{ch:trapping}, using a combination of experiments, electrostatic energy calculations and an analytical `double energy barrier' model. The latter captures the essential physics governing the escape of \DHFRt{} from either the \cisi{} or the \transi{} side of \gls{clya}, and fitting it to an extensive set of experimental data will yield quantitative values for the magnitudes of the electrophoretic, electro-osmotic, electrostatic, and steric forces acting on proteins captured by \gls{clya}.
%
\objective{Map the transport properties of a biological nanopore}
%
In their inspiring 2005 publication, Aksimentiev \etal{} made use of \SI{\approx100}{\ns} of all-atom \gls{md} simulations to map out the electrostatic potential, electro-osmotic flow, and ionic concentrations within the \gls{ahl} nanopore~\cite{Aksimentiev-2005}. Even though the available computational power has increased \num{\approx1000}-fold over the past 15~years, \gls{md} simulations remain prohibitively expensive compared to continuum approaches, which are approximately 1000-fold faster at obtaining the same information. Moreover, continuum approaches have, to a large extent, benefited from the same advances in computational power and algorithms.
Hence, in~\cref{ch:transport} we apply the \gls{epnp-ns} framework developed in~\cref{ch:epnpns} to a 2D-axisymmetric model of the \gls{clya} nanopore to show that continuum simulations can provide the same information as \gls{md} simulations, and more---at a fraction of the (computational) cost. By simulating the \gls{clya} pore over a wide range of ionic strengths (\SIrange{0.005}{5}{\Molar}~\ce{NaCl}) and bias voltages (\SIrange[retain-explicit-plus=true]{-200}{+200}{\mV}), we will be able to gauge the accuracy of the \gls{epnp-ns} equations for predicting nanoscale conductances. Furthermore, these simulations will allow us to paint a quantitative picture of nanopore properties that are difficult to access experimentally, including ion selectivity, ion concentrations, (non)equilibrium electrostatic potentials and the electro-osmotic flow.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Keep the following \cleardoublepage at the end of this file,
% otherwise \includeonly includes empty pages.
\cleardoublepage

% vim: tw=70 nocindent expandtab foldmethod=marker foldmarker={{{}{,}{}}}
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{setspace}

\title{Chapter 2\\ Basic Topology}
\author{solutions by Hikari}
\date{August 2021}

\begin{document}

\newcommand{\V}{\mathbf}

\maketitle

\paragraph{1.} "$x\in\phi$" is always false, so "$x\in\phi \Rightarrow x\in S$" is always true, by mathematical logic.

\paragraph{2.} Let $S$ be the set of equations of the form $a_0z^n+a_1z^{n-1}+\cdots+a_{n-1}z+a_n=0$ with $a_0,\cdots,a_n$ being integers. Let $A_N$ be the set of equations in $S$ such that $n+|a_0|+|a_1|+\cdots+|a_n|=N$, with $N$ a positive integer. By the \textit{Hint}, $A_N$ is finite (it has fewer than $\frac{(N+n+1)!}{N!(n+1)!}2^{n+1}$ elements). It is obvious that
\[ S=\bigcup_{N=1}^\infty A_N \]
Because each $A_N$ is finite, $S$ is countable by Theorem 2.12. Let $\alpha$ be an equation in $S$, and let $B_\alpha$ be the set of complex numbers satisfying $\alpha$. By the fundamental theorem of algebra, $\alpha$ has only finitely many roots, so $B_\alpha$ is finite, and in particular countable. So the set
\[ T=\bigcup_{\alpha\in S}B_\alpha \]
is countable by the Corollary of Theorem 2.12, and the set of algebraic numbers is a subset of $T$, so the set of algebraic numbers is countable.

\paragraph{3.} If every real number were algebraic, then the set of real numbers would be a subset of the set of algebraic numbers and therefore at most countable by Exercise 2, which contradicts the fact that the set of real numbers is uncountable (Corollary of Theorem 2.43).

\paragraph{4.} Not countable. If the set of irrational numbers were countable, then because the set of rational numbers is also countable, the set of real numbers, which is the union of the rational and irrational numbers, would be countable, contradicting the fact that the set of real numbers is uncountable.

\paragraph{5.} Let $A=\{\frac{1}{n}|n\in\mathbb{N}\}$, $B=\{1+\frac{1}{n}|n\in\mathbb{N}\}$, $C=\{2+\frac{1}{n}|n\in\mathbb{N}\}$. Then the set $A\cup B\cup C$ is bounded and has only three limit points: $0,1,2$.

\paragraph{6.} Let $p$ be a limit point of $E'$; then every neighborhood of $p$, $\{x|d(x,p)<r_1\}$, contains a $q\in E'$, which means $q$ is a limit point of $E$, so every neighborhood of $q$, $\{x|d(x,q)<r_2\}$, contains an $s\in E$; by Theorem 2.20 such a neighborhood in fact contains infinitely many points of $E$, so we may choose $s\neq p$. Let $r_2=r_1-d(q,p)$; then
\[ d(s,p)\leq d(s,q)+d(q,p)<r_2+d(q,p)=r_1 \]
which means $s$ is in the neighborhood of $p$. So every neighborhood of $p$ contains a point of $E$ other than $p$, which means $p$ is a limit point of $E$, so $p\in E'$. So every limit point of $E'$ belongs to $E'$, which means $E'$ is closed.
\medskip

If $a$ is a limit point of $\bar{E}$, then every neighborhood of $a$ contains a point of $E$ or of $E'$, and from the above we know that the latter also implies that the neighborhood contains a point of $E$, so $a$ is a limit point of $E$. So every limit point of $\bar{E}$ is a limit point of $E$. If $b$ is a limit point of $E$, so that every neighborhood of $b$ contains a point of $E$, then because $E\subset\bar{E}$, the neighborhood contains a point of $\bar{E}$, which means $b$ is a limit point of $\bar{E}$. So every limit point of $E$ is a limit point of $\bar{E}$. From the above two statements we know that $E$ and $\bar{E}$ have the same limit points.
\medskip

$E$ and $E'$ do not necessarily have the same limit points. Let $E$ be $\{\frac{1}{n}|n\in\mathbb{N}\}$; then $0$ is the only limit point of $E$, so $E'=\{0\}$, and $E'$ has no limit point.

\paragraph{7.} (a) $B_n\supset A_i$, so $\bar{B}_n\supset A_i$.
By Theorem 2.27(a) we know $\bar{B}_n$ is closed, so by Theorem 2.27(c) we have $\bar{B}_n\supset\bar{A}_i$. Therefore $\bar{B}_n\supset\bigcup_{i=1}^n\bar{A}_i$.

$B_n\subset\bigcup_{i=1}^n\bar{A}_i$. Because each $\bar{A}_i$ is closed, by Theorem 2.24(d) we know $\bigcup_{i=1}^n\bar{A}_i$ is closed. So by Theorem 2.27(c) we have $\bar{B}_n\subset\bigcup_{i=1}^n\bar{A}_i$.

The above two statements imply $\bar{B}_n=\bigcup_{i=1}^n\bar{A}_i$.
\medskip

(b) The first result in (a) still holds, but the second does not, because when $n$ becomes infinite, Theorem 2.24(d) no longer applies. So we have only $\bar{B}\supset\bigcup_{i=1}^\infty\bar{A}_i$.

Let $A_n=\{x|x>\frac{1}{n}\}$. Then $B=\{x|x>0\}$, $\bar{B}=\{x|x\geq0\}$, $\bar{A}_n=\{x|x\geq\frac{1}{n}\}$, $\bigcup_{n=1}^\infty\bar{A}_n=\{x|x>0\}$. So $0$ is an element of $\bar{B}$ but not of $\bigcup_{n=1}^\infty\bar{A}_n$.

\paragraph{8.} Every point $p$ of an open set $E$ is an interior point of $E$, which means there is a neighborhood $\{x:|\V{x}-\V{p}|<r_1\}\subset E$. For every neighborhood of $p$, $N_r(p)=\{x:|\V{x}-\V{p}|<r_2\}$, let $y$ be a point such that $0<|\V{y}-\V{p}|<\min(r_1,r_2)$; then $y\neq p$ and $y$ is in both $E$ and $N_r(p)$. So every neighborhood of $p$ contains a point of $E$ other than $p$, which means $p$ is a limit point of $E$. The statement does not hold for closed sets when the set contains isolated points, for example $E=\{(0,0)\}$.

\paragraph{9.} (a) Let $p\in E^\circ$, which means $p$ is an interior point of $E$, so there is a neighborhood $N_p$ such that $N_p\subset E$. $N_p$ is an open set, so for every $q\in N_p$, there is a neighborhood $N_q$ such that $N_q\subset N_p$, and therefore $N_q\subset E$. The fact that $q$ has a neighborhood $N_q\subset E$ implies $q$ is an interior point of $E$, or $q\in E^\circ$. The statement "$q\in N_p\Rightarrow q\in E^\circ$" implies $N_p\subset E^\circ$, so for every $p\in E^\circ$ there is a neighborhood $N_p\subset E^\circ$, which means $E^\circ$ is open.
\medskip

(b) If $E$ is open, then every point in $E$ is an interior point of $E$ and therefore belongs to $E^\circ$, so $E\subset E^\circ$. It is obvious that $E^\circ\subset E$, so $E=E^\circ$. If $E^\circ=E$, then because $E^\circ$ is open, $E$ is open.
\medskip

(c) If $G\subset E$ and $G$ is open, then for every $p\in G$ there is a neighborhood $N_p\subset G$, and therefore $N_p\subset E$. So $p$ is an interior point of $E$, which means $p\in E^\circ$. The statement "$p\in G\Rightarrow p\in E^\circ$" implies $G\subset E^\circ$.
\medskip

(d) For every $p\in(E^\circ)^c$, $p\not\in E^\circ$, so $p$ is not an interior point of $E$, which means every neighborhood of $p$ contains an element not in $E$, that is, an element of $E^c$. Then $p$ is either an element of $E^c$ or a limit point of $E^c$; in either case $p\in\overline{(E^c)}$. So $(E^\circ)^c\subset\overline{(E^c)}$.

For every $p\in\overline{(E^c)}$, either $p\in E^c$, or $p$ is a limit point of $E^c$. In the former case, because $E^\circ\subset E$, $(E^\circ)^c\supset E^c$, so $p\in E^c\subset (E^\circ)^c$. In the latter case, every neighborhood of $p$ contains an element of $E^c$, so $p$ cannot be an interior point of $E$, $p\not\in E^\circ$, $p\in(E^\circ)^c$. In either case $p\in(E^\circ)^c$, so $\overline{(E^c)}\subset(E^\circ)^c$.

Combining the above two statements we have $(E^\circ)^c=\overline{(E^c)}$.
\medskip

(e) No. Let $E=(-1,0)\cup(0,1)$. Then $\bar{E}=[-1,1]$, $E^\circ=(-1,0)\cup(0,1)$, $(\bar{E})^\circ=(-1,1)$, so $E^\circ\neq(\bar{E})^\circ.$
\medskip

(f) No.
Let $E$ contain only a single point; then $\bar{E}$ also contains only that point, but $E^\circ$ is empty, and therefore $\overline{(E^\circ)}$ is empty. So $\overline{E}\neq\overline{(E^\circ)}$.

\paragraph{10.} We verify the three defining properties of a metric.

(a) $d(p,q)=1>0$ if $p\neq q$; $d(p,p)=0$.

(b) $d(p,q)=d(q,p)=1$ when $p\neq q$; $d(p,q)=d(q,p)=0$ when $p=q$.

(c) If $p=q$, then $d(p,q)=0\leq d(p,r)+d(r,q)$. \quad\; If $p\neq q$, then either $r\neq p$ or $r\neq q$ or both, so $d(p,r)+d(r,q)\geq1=d(p,q)$.
\medskip

For every subset $E$ of $X$, let $p$ be any point of $E$; then the neighborhood $N_p$ of $p$ with radius $r<1$ contains only $p$, so $N_p\subset E$, and therefore $p$ is an interior point of $E$, which means $E$ is open. So every subset of $X$ is open.

For every subset $E$ of $X$, because there is no limit point in $X$ (the neighborhood of $p$ with radius $r<1$ contains only $p$), the statement "every limit point of $E$ is a point of $E$" is vacuously true, so $E$ is closed. So every subset of $X$ is closed.

For every finite subset $E$ of $X$, let $\{G_\alpha\}$ be an open cover of $E$. For each element of $E$, choose one $G_\alpha$ that contains it; the resulting subcover is finite and covers $E$, so $E$ is compact. For every infinite subset $E'$ of $X$, let $G_\alpha$ contain only $\alpha$, for each $\alpha\in E'$; then $\bigcup_\alpha G_\alpha$ is an open cover of $E'$, while any finite subcollection contains only finitely many elements of $E'$ and therefore cannot cover $E'$, so $E'$ is not compact. In conclusion, a subset of $X$ is compact if and only if it is finite.

\paragraph{11.} \quad \\
\noindent $d_1$: $d(0,2)=2^2>1^2+1^2=d(0,1)+d(1,2)$, so Definition 2.15(c) is not satisfied and $d_1$ is not a metric.
\medskip

\noindent $d_2$: Definitions (a) and (b) are easily verified. For Definition (c),
\[|p-q|\leq|p-r|+|r-q|\leq|p-r|+|r-q|+2\sqrt{|p-r||r-q|}=(\sqrt{|p-r|}+\sqrt{|r-q|})^2\]
\[ d(p,q)=\sqrt{|p-q|}\leq\sqrt{|p-r|}+\sqrt{|r-q|}=d(p,r)+d(r,q) \]
so Definition (c) is satisfied, and $d_2$ is a metric.
\medskip

\noindent $d_3$: $d(-1,1)=|(-1)^2-1^2|=0$ while $-1\neq1$, so Definition (a) is not satisfied and $d_3$ is not a metric.
\medskip

\noindent $d_4$: $d(1,1)=|1-2|=1\neq0$, so Definition (a) is not satisfied and $d_4$ is not a metric.
\medskip

\noindent $d_5$: Definitions (a) and (b) are easily verified. For Definition (c), $|p-q|\leq|p-r|+|r-q|$, and $\frac{x}{1+x}$ is increasing in $x\geq0$, so
\[ d(p,q)= \frac{|p-q|}{1+|p-q|}\leq\frac{(|p-r|+|r-q|)}{1+(|p-r|+|r-q|)}\leq\frac{|p-r|}{1+|p-r|}+\frac{|r-q|}{1+|r-q|}=d(p,r)+d(r,q) \]
so Definition (c) is satisfied, and $d_5$ is a metric.

\paragraph{12.} If an open cover $\bigcup_\alpha G_\alpha$ covers $K$, then there is an open set $G_0$ that contains $0$, which means $0$ is an interior point of $G_0$, so there is a neighborhood $N_r(0)=\{x|-r<x<r\}$ included in $G_0$. According to the archimedean property of $R$, there is an $N$ such that $Nr>1$, and if $n>N$, then $\frac{1}{n}$ is included in $N_r(0)$ and therefore in $G_0$, because $\frac{1}{n}<\frac{1}{N}<r$. So taking the union of $G_0$ and, for each of the remaining points $1,\frac{1}{2},\cdots,\frac{1}{N}$, one open set of the cover containing it, we form a finite subcover of $K$, so $K$ is compact.

\paragraph{13.} Let $K_i$ consist of $\frac{1}{2^i}$ and the points $\frac{1}{2^i}+\frac{1}{n}$, $n=2^{i+1},2^{i+1}+1,\cdots$, so all its elements are in the range $[\frac{1}{2^i},\frac{1}{2^i}+\frac{1}{2^{i+1}}]$, and the only limit point is $\frac{1}{2^i}$. Note that the ranges of different $K_i$ do not overlap.
Let $K$ be the union of $\{0\}$ and all the $K_i$, for $i=1,2,\cdots$. Note that all the elements are in the range $[0,1]$, so $K$ is bounded. We now find all the limit points of $K$ to prove that it is closed.
\medskip

For $x<0$, $x$ is not a limit point, because the neighborhood with radius $r<|x|$ does not contain an element of $K$.

$x=0$ is a limit point of $K$, because for every neighborhood with radius $r$ we may take $N>\log_2\frac{1}{r}$, so that $\frac{1}{2^N}<r$; hence every neighborhood contains an element of $K$.

For $0<x<1$, either $x$ is in the range of some $K_i$, $[\frac{1}{2^i},\frac{1}{2^i}+\frac{1}{2^{i+1}}]$, or in the interval between $K_i$ and $K_{i-1}$, which is $(\frac{1}{2^i}+\frac{1}{2^{i+1}},\frac{1}{2^{i-1}})$. In the former case, $x$ is a limit point of $K$ if and only if it is a limit point of $K_i$, so the only possible limit point is $\frac{1}{2^i}$. In the latter case, $x$ is not a limit point of $K$, because the neighborhood with radius $r<\min\left(|x-\frac{1}{2^i}-\frac{1}{2^{i+1}}|,|x-\frac{1}{2^{i-1}}|\right)$ does not contain an element of $K$.

For $x\geq1$, $x$ is not a limit point, because the neighborhood with radius $r<\frac{1}{4}$ does not contain an element of $K$.

So the limit points of $K$ are $0$ and $\frac{1}{2^i}$, $i=1,2,\cdots$, which are all in $K$, so $K$ is closed.
\medskip

By Theorem 2.41, $K$ being closed and bounded implies $K$ is compact, and the limit points $0,\frac{1}{2^1},\frac{1}{2^2},\cdots$ form a countable set, so the $K$ we constructed satisfies the conditions.

\paragraph{14.} Let $G_n=(\frac{1}{n},1)$; then $\bigcup_{n=1}^\infty G_n$ is an open cover of $(0,1)$, because for every $0<x<1$, $x$ is included in $G_N$ whenever $Nx>1$. Let $\bigcup_{i=1}^kG_{n_i}\;(n_1<n_2<\cdots<n_k)$ be a finite subcover of the open cover; then any element $x$ with $0<x<\frac{1}{n_k}$ is not included in this subcover, a contradiction. So $\bigcup_{n=1}^\infty G_n$ is an open cover of $(0,1)$ that has no finite subcover.

\paragraph{15.} \quad For the "closed" case, let $K_n=\{x|x\geq n\}$; then $K_n$ is closed and every finite subcollection of $\{K_n\}$ has nonempty intersection, but $\bigcap_1^\infty K_n$ is empty, because for every $x$ we may take $N>x$, and then $x$ is not included in $K_N$.
\medskip

For the "bounded" case, let $K_n=\{x|0<x<\frac{1}{n}\}$; then $K_n$ is bounded and every finite subcollection of $\{K_n\}$ has nonempty intersection, but $\bigcap_1^\infty K_n$ is empty, because for every $x>0$ we may take $Nx>1$, and then $x$ is not included in $K_N$.

\paragraph{16.} $E^c=\{p\,|\,p^2\leq2\;or\;p^2\geq3\}=\{p\,|\,p^2<2\;or\;p^2>3\}$, because there is no rational number whose square is $2$ or $3$.

Let $x\in E^c$. If $x^2<2$, then $-\sqrt{2}<x<\sqrt{2}$, and by Theorem 1.20(b) there are rational numbers $y_1,y_2$ such that $-\sqrt{2}<y_1<x<y_2<\sqrt{2}$, so the neighborhood $N_r(x)$ with $r=\min(x-y_1,y_2-x)$ is included in $E^c$, which means $x$ is an interior point of $E^c$. Similarly, if $x^2>3$, then either $x>\sqrt{3}$ or $x<-\sqrt{3}$; in the first case there is a rational number $y$ such that $\sqrt{3}<y<x$, so the neighborhood $N_r(x)$ with $r=x-y$ is included in $E^c$, and the second case is symmetric. So every $x\in E^c$ is an interior point of $E^c$, which means $E^c$ is an open set, and therefore $E$ is a closed set. It is obvious that $E$ is bounded.

Let $G_n=\{p\,|\,2+\frac{1}{n}<p^2<3\}$; then $\bigcup_{n=1}^\infty G_n$ is an open cover of $E$, because each $G_n$ is open (by the same argument used for $E$), and for every $x\in E$ we have $x^2>2$, so there is a positive real number $\delta$ such that $x^2>2+\delta$; taking $N\delta>1$ gives $x^2>2+\delta>2+\frac{1}{N}$, so $x$ is included in $G_N$.
For every finite subcollection $\bigcup_{i=1}^kG_{n_i}\;(n_1<n_2<\cdots<n_k)$, a rational $x$ in $E$ that satisfies $\sqrt{2}<x<\sqrt{2+\frac{1}{n_k}}$ is not included in any $G_{n_i}$, so $\bigcup_{i=1}^kG_{n_i}$ cannot cover $E$. Therefore, $E$ is not compact.

For $x\in E$, $2<x^2<3$, so $\sqrt{2}<|x|<\sqrt{3}$; assume $x>0$ (the case $x<0$ is symmetric). There exist $y_1,y_2$ such that $\sqrt{2}<y_1<x<y_2<\sqrt{3}$, so the neighborhood $N_r(x)$ with $r=\min(|x-y_1|,|x-y_2|)$ is included in $E$, which means $x$ is an interior point of $E$, so $E$ is open.

\paragraph{17.} \quad

\textit{countable}: If $E$ were countable, let its elements be enumerated as $s_1,s_2,\cdots$. Construct a sequence $s$ by making the $n^{th}$ digit of $s$ be $4$ if the $n^{th}$ digit of $s_n$ is $7$, and $7$ if the $n^{th}$ digit of $s_n$ is $4$. Then $s$ is different from every $s_i$, but $s$ is an element of $E$, a contradiction. So $E$ is uncountable.

\textit{dense}: $E$ is not dense in $[0,1]$, because there are many points in $[0,1]$ that are neither a point nor a limit point of $E$, for example $0.5$\,.

\textit{compact}: It is obvious that $E$ is bounded. Suppose $x$ is a limit point of $E$ that is not an element of $E$; then some digit of $x$, say the $n^{th}$, is neither $4$ nor $7$. Consider the neighborhood of $x$ with radius $r=10^{-(n+1)}$: it cannot contain any element of $E$, because an element of $E$ has all of its digits in $\{4,7\}$ (in particular, no $9$s), and such a number lying within $10^{-(n+1)}$ of $x$ would have to agree with $x$ in the $n^{th}$ digit, which is impossible. This is a contradiction, so every limit point of $E$ belongs to $E$, which means $E$ is closed. Being bounded and closed implies being compact in $R^1$ (Theorem 2.41).

\textit{perfect}: We have proved that $E$ is closed. Suppose there is an element $x$ of $E$ that is not a limit point of $E$, which means there is a neighborhood $N_r(x)$ containing no element of $E$ other than $x$. Let $N$ be large enough that $10^{-N}<r$, and let $y$ be an element of $E$ that agrees with $x$ in the first $N$ digits but differs at the $(N+1)^{th}$ digit; then $0<|y-x|<10^{-N}<r$, so $y\in E$ is in $N_r(x)$, a contradiction. So every element of $E$ is a limit point of $E$. Together with the fact that $E$ is closed, this proves that $E$ is perfect.

\paragraph{18.} Let $E_0=[a_0,b_0]$, where $a_0,b_0$ are irrational. Enumerate all the rational numbers in $[a_0,b_0]$ as $r_1,r_2,\cdots$. Since $a_0<r_1<b_0$, find irrational numbers $a_1,b_1$ such that $a_0<b_1<r_1<a_1<b_0$, and remove $(b_1,a_1)$ to get $E_1=[a_0,b_1]\cup[a_1,b_0]$. Continue in this way to construct $E_2,E_3,\cdots$ by removing segments with irrational endpoints around $r_2,r_3,\cdots$; then $E_1\supset E_2\supset E_3\supset\cdots$. Let $P=\bigcap_{n=1}^\infty E_n$.

It is obvious that each $E_n$ is compact, so by Theorem 2.36, $P$ is not empty. From the construction, $P$ contains no rational number. It is obvious that $P$ is closed. If $x\in P$, let $S$ be a neighborhood of $x$, and let $I_n$ be the interval of $E_n$ which contains $x$. Choose $n$ large enough that $I_n\subset S$ (this is possible because the length of $I_n$ can be made arbitrarily small, due to the fact that the rational numbers are dense). Let $x_n$ be an endpoint of $I_n$ such that $x_n\neq x$. It follows from the construction of $P$ that $x_n\in P$. Hence $x$ is a limit point of $P$, and $P$ is perfect.

Therefore, $P$ is nonempty, perfect and contains no rational number, which satisfies the conditions.

\paragraph{19.} (a) $A$ and $B$ being disjoint means $A\cap B=\phi$. By Theorem 2.27, because $A$ and $B$ are closed, $A=\bar{A}$ and $B=\bar{B}$.
So $A\cap\bar{B}=\bar{A}\cap B=A\cap B=\phi$, which means $A$ and $B$ are separated.
\medskip

(b) If $A\cap B=\phi$ but $A\cap\bar{B}\neq\phi$, then there is a limit point $x$ of $B$ that is included in $A$. Because $A$ is open, there is a neighborhood $N_r(x)\subset A$. But because $x$ is a limit point of $B$, there is an element of $B$ contained in $N_r(x)$, and therefore contained in $A$. So $A$ and $B$ have a common element, a contradiction. So $A\cap\bar{B}=\phi$, and similarly $\bar{A}\cap B=\phi$, so $A$ and $B$ are separated.
\medskip

(c) It is obvious that $A$ and $B$ are disjoint open sets, so by (b), $A$ and $B$ are separated.
\medskip

(d) Let $E$ be a connected metric space with at least two points, and assume $E$ is countable. Find two points $a$ and $b$. The set of real numbers between $0$ and $d(a,b)$ is uncountable, but the set of distances between $a$ and the other elements is countable, so there exists a $\delta$ such that $0<\delta<d(a,b)$ and $\delta$ is not equal to any distance between $a$ and another element. So $E=\{x|d(a,x)<\delta\}\cup\{x|d(a,x)>\delta\}$, which means $E$ is not connected by (c), a contradiction. So $E$ must be uncountable.

\paragraph{20.} Let $\bar{E}$ be the closure of a connected set $E$. Assume $\bar{E}$ is not connected, so there are two nonempty sets $A$ and $B$ such that $\bar{E}=A\cup B$ and $\bar{A}\cap B=A\cap\bar{B}=\phi$. Note that
\[E=\bar{E}\cap E=(A\cup B)\cap E=(A\cap E)\cup(B\cap E)\]
If $A\cap E=\phi$, then $A\subset E'$ and $E\subset B$, so $A\subset E'\subset\bar{E}\subset\bar{B}$, which means $A\cap\bar{B}\neq\phi$, a contradiction. So $A\cap E\neq\phi$, and similarly $B\cap E\neq\phi$.
\smallskip

$A\cap E\subset A$,\; $\overline{(A\cap E)}\subset\overline{A}$,\; $B\cap E\subset B$,\; $\overline{(B\cap E)}\subset\overline{B}$. So $\overline{(A\cap E)}\cap(B\cap E)\subset\overline{A}\cap B=\phi$, and $(A\cap E)\cap\overline{(B\cap E)}\subset A\cap\overline{B}=\phi$, which means $A\cap E$ and $B\cap E$ are two nonempty separated sets, so $E$, being their union, is not connected, a contradiction. So $\bar{E}$ must be connected.
\medskip

The interior of a connected set need not be connected. Consider the set $E$ in $R^2$:
\[ E=\{\V{x}:|\V{x}-(-2,0)|<1\}\cup\{\V{x}:|\V{x}-(2,0)|<1\}\cup\{\V{x}:\V{x}=(k,0),-2<k<2\} \]
that is, $E$ consists of two open balls and a segment connecting them. It is obvious that $E$ is connected, but its interior, being $\{\V{x}:|\V{x}-(-2,0)|<1\}\cup\{\V{x}:|\V{x}-(2,0)|<1\}$, is not connected.

\paragraph{21.} (a) If $x\in A_0'$, so that $x$ is a limit point of $A_0$, then every neighborhood $N_{r_0}(x)$ contains a point $y\neq x$ such that $y\in A_0$. Consider a neighborhood of $\V{p}(x)$ with radius $r$, and let $r_0=\frac{r}{|\V{a}|+|\V{b}|}$; then there is a $y\in A_0$ such that $|y-x|<r_0=\frac{r}{|\V{a}|+|\V{b}|}$. Then $|\V{p}(y)-\V{p}(x)|=|-(y-x)\V{a}+(y-x)\V{b}|\leq|y-x|(|\V{a}|+|\V{b}|)<r$, so $\V{p}(y)\in A$ is in the neighborhood of $\V{p}(x)$, which means $\V{p}(x)$ is a limit point of $A$. Therefore, if $x\in A_0'$, then $\V{p}(x)\in A'$. Since $x\in A_0$ implies $\V{p}(x)\in A$ by definition, this implies that if $x\in\bar{A}_0$, then $\V{p}(x)\in\bar{A}$. Similarly, if $x\in\bar{B}_0$, then $\V{p}(x)\in\bar{B}$.

$A$ and $B$ are separated, so $\bar{A}\cap B=A\cap\bar{B}=\phi$. If $x\in B_0$, then $\V{p}(x)\in B$, so $\V{p}(x)\not\in \bar{A}$, and therefore $x\not\in \bar{A}_0$. This implies $\bar{A}_0\cap B_0=\phi$. Similarly, $A_0\cap\bar{B}_0=\phi$. So $A_0$ and $B_0$ are separated.
\medskip

(b) Let $I=(0,1)$. Then $(A_0\cup B_0)\cap I=(A_0\cap I)\cup(B_0\cap I)\subset I$.
Because $A_0$ and $B_0$ are separated, $A_0\cap I$ and $B_0\cap I$ are also separated. Moreover, $(A_0\cup B_0)\cap I$ must be a proper subset of $I$: otherwise $I$ would be the union of the two separated sets $A_0\cap I$ and $B_0\cap I$, and since $I$ is connected, one of them, say $A_0\cap I$, would be empty, so that $I\subset B_0$; but then $0\in A_0$ would be a limit point of $I\subset B_0$, giving $0\in A_0\cap\bar{B}_0$ and contradicting the separation of $A_0$ and $B_0$. So there must exist a $t_0\in(0,1)$ with $t_0\not\in A_0\cup B_0$, which means $\V{p}(t_0)\not\in A\cup B$.
\medskip

(c) If a subset $E$ of $R^k$ is not connected, then $E$ is the union of two nonempty separated sets $A$ and $B$. Let $\V{a}\in A\subset E$ and $\V{b}\in B\subset E$; then by (b) there is a $t_0\in(0,1)$ such that $(1-t_0)\V{a}+t_0\V{b}\not\in A\cup B=E$. However, if $E$ is convex, then whenever $\V{a}\in E$, $\V{b}\in E$ and $0<t<1$, we must have $(1-t)\V{a}+t\V{b}\in E$, a contradiction. Hence every convex set is connected.

\paragraph{22.} Consider the set $Q^k$ of points which have only rational coordinates. Because the set of rational numbers is countable, $Q^k$ is countable by Theorem 2.13. Consider any point $\V{x}=(x_1,x_2,\cdots,x_k)\in R^k$ and a neighborhood of $\V{x}$ with radius $r$. Find rational numbers $a_i$ such that $x_i-\frac{r}{\sqrt{k}}<a_i<x_i+\frac{r}{\sqrt{k}},\;i=1,2,\cdots,k$ (they exist by Theorem 1.20(b)). Then $|\V{a}-\V{x}|=\sqrt{\sum_{i=1}^k(a_i -x_i)^2}<r$, so $\V{a}\in Q^k$ is in the neighborhood of $\V{x}$, which means $\V{x}$ is a point or a limit point of $Q^k$, so $Q^k$ is dense in $R^k$. Therefore, $Q^k$ is a countable dense subset of $R^k$, which means $R^k$ is separable.

\paragraph{23.} Let $X$ be a separable metric space, and let $C$ be a countable dense subset of $X$. Let $\mathcal{B}$ be the collection of all neighborhoods centered at some point of $C$ with rational radius, that is,
\[ \mathcal{B}=\{N_r(x):r\in Q,\;x\in C \} \]
$Q$ and $C$ are countable, so $\mathcal{B}$ is countable by the Corollary of Theorem 2.12.

If $G$ is an open set in $X$ and $x\in G$, then there is a neighborhood $N_{\varepsilon}(x)\subset G$. Let $h$ be a rational number such that $0<h<\frac{\varepsilon}{2}$ (it exists by Theorem 1.20(b)). $C$ is dense in $X$, so $x\in C$ or $x$ is a limit point of $C$; in either case there is a $q\in C$ such that $q\in N_h(x)$. So $x\in N_h(q)$ (because $d(x,q)<h$), and $N_h(q)\subset N_\varepsilon(x)\subset G$ (because if $y\in N_h(q)$, then $d(y,x)\leq d(y,q)+d(q,x)<h+h=2h<\varepsilon$). Also note that $N_h(q)\in\mathcal{B}$ (because $h\in Q$ and $q\in C$).

Therefore, for every $x\in X$ and every open set $G\subset X$ such that $x\in G$, there is an $N_h(q)\in\mathcal{B}$ such that $x\in N_h(q)\subset G$, so $\mathcal{B}$ is a countable base of $X$. So every separable metric space has a countable base.

\paragraph{24.} If the process of choosing $x_{j+1}$ given $x_1,\cdots,x_j$ does not stop after a finite number of steps, then $x_1,x_2,\cdots$ form an infinite subset of $X$, which therefore has a limit point $p$ in $X$. Then the neighborhood $N_{\frac{\delta}{2}}(p)$ contains infinitely many points of $x_1,x_2,\cdots$ (Theorem 2.20), in particular two points $x_i$ and $x_j$, so $d(x_i,x_j)\leq d(x_i,p)+d(p,x_j)<\frac{\delta}{2}+\frac{\delta}{2}=\delta$, a contradiction. So the process must stop after a finite number of steps. Therefore, $X$ can be covered by finitely many neighborhoods of radius $\delta$, which means we can find $E_\delta=\{x_{\delta1},x_{\delta2},\cdots,x_{\delta k} \}$ such that
\[ X\subset\bigcup_{j=1}^kN_\delta(x_{\delta j}) \]
for every $\delta>0$. Let
\[ E=\bigcup_{n=1}^\infty E_{\frac{1}{n}} \]
Because each $E_{\frac{1}{n}}$ is finite, $E$ is countable by Theorem 2.12.
For every $x\in X$, let $N_r(x)$ be any neighborhood of $x$, and let $N$ be a positive integer such that $\frac{1}{N}<r$; then $x\in N_{\frac{1}{N}}(y)$ for some $y\in E_{\frac{1}{N}}\subset E$ (because $X$ is covered by neighborhoods of radius $\frac{1}{N}$), and therefore $y\in N_r(x)$ (because $d(y,x)<\frac{1}{N}<r$). So for every $x\in X$, every neighborhood of $x$ contains a point $y\in E$, which means $x\in E$ or $x$ is a limit point of $E$, and that means $E$ is dense in $X$. So $X$ contains a countable dense subset $E$, which means $X$ is separable.

\paragraph{25.} For a compact metric space $K$, the collection of neighborhoods centered at every $x\in K$ with radius $\frac{1}{n}$ forms an open cover of $K$, so it has a finite subcover covering $K$. Let $E_{\frac{1}{n}}$ be this finite subcover, and let
\[ E=\bigcup_{n=1}^\infty E_{\frac{1}{n}} \]
$E$ is countable by Theorem 2.12.

If $G$ is an open set in $K$ and $x\in G$, then there is a neighborhood $N_\varepsilon(x)\subset G$. Let $N$ be an integer such that $0<\frac{1}{N}<\frac{\varepsilon}{2}$. Because $E_{\frac{1}{N}}$ covers $K$, there is an open set $V\in E_{\frac{1}{N}}\subset E$ with center $q$ (and radius $\frac{1}{N}$) such that $x\in V$. And $V\subset N_{\varepsilon}(x)\subset G$ (because if $y\in V$, then $d(y,x)\leq d(y,q)+d(q,x)<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon$).

Therefore, for every $x\in K$ and every open set $G\subset K$ such that $x\in G$, we have a $V\in E$ such that $x\in V\subset G$, which means $E$ is a countable base of $K$.

Let $C$ be the set of centers of the neighborhoods in $E$. $C$ is countable because $E$ is countable. For every $x\in K$ and every neighborhood $N_r(x)$, we can find $x\in V\subset N_r(x)$ where $V\in E$, so the center of $V$, which is an element of $C$, is in $N_r(x)$. So every neighborhood of $x$ contains a point of $C$, which means $x$ is a point or a limit point of $C$, so $C$ is dense in $K$. Therefore, $C$ is a countable dense subset of $K$, which means $K$ is separable.
\medskip

(Or alternatively, note that every infinite subset of $K$ has a limit point in $K$ (Theorem 2.37), so $K$ is separable (Exercise 24), and therefore has a countable base (Exercise 23).)

\paragraph{26.} Every infinite subset of $X$ has a limit point, so $X$ is separable (Exercise 24), and therefore has a countable base (Exercise 23). It follows that every open cover of $X$ has a countable subcover $\{G_n\},\,n=1,2,3,\cdots$ (each open set of the cover is a union of members of the countable base; for each base member contained in some set of the cover, pick one such set, and these countably many picks still cover $X$).

If no finite subcollection of $\{G_n\}$ covers $X$, then the complement $F_n$ of $G_1\cup\cdots\cup G_n$ is nonempty for each $n$, but $\bigcap F_n$ is empty. Let $E$ be a set containing one point from each $F_n$, that is, $E=\{x_1,x_2,\cdots\}$ with $x_n\in F_n$; then $E$ is infinite (otherwise some element of $E$ would belong to infinitely many $F_n$, and since the $F_n$ decrease, to all of them, making $\bigcap F_n$ non-empty). Since $E$ is infinite, it has a limit point $p$. Let $p\in G_k$; then there is a neighborhood $N_r(p)\subset G_k\subset G_1\cup\cdots\cup G_k$. For $i\geq k$, $x_i\in F_i$, so $x_i\not\in G_1\cup\cdots\cup G_k$, which means at most $k$ elements of $E$ belong to $G_1\cup\cdots\cup G_k$ and hence possibly to $N_r(p)$, contradicting the fact that $p$ is a limit point of $E$ (Theorem 2.20). So some finite subcollection of $\{G_n\}$ covers $X$, which means $X$ is compact.
\paragraph{27.}

Let $\{V_n\}$ be a countable base of $R^k$, and let $W$ be the union of those $V_n$ for which $E\cap V_n$ is at most countable. If $x\in P^c$, then there is a neighborhood $N_r(x)$ containing at most countably many points of $E$. By the definition of a countable base, there is a $V_n\in\{V_n\}$ such that $x\in V_n\subset N_r(x)$. $E\cap V_n\subset E\cap N_r(x)$ is at most countable, so $V_n\subset W$, and therefore $x\in W$. This implies $P^c\subset W$. If $x\in W$, then there is a $V_n$ such that $x\in V_n$ and $E\cap V_n$ is at most countable. Let $N_r(x)$ be a neighborhood such that $N_r(x)\subset V_n$; then $N_r(x)\cap E\subset V_n\cap E$ is at most countable, which means $N_r(x)$ contains at most countably many points of $E$, so $x\not\in P$. This implies $W\subset P^c$. From the above two statements, we have $P^c=W$. So $P^c\cap E=W\cap E$ is the union of those sets $V_n\cap E$ which are at most countable; this is an at most countable union of at most countable sets, so $P^c\cap E$ is at most countable (Theorem 2.12), which means at most countably many points of $E$ are not in $P$.

\medskip

Let $x$ be a limit point of $P$; then every neighborhood $N_r(x)$ contains a $y\in P$. Let $r'=r-d(x,y)>0$; then the neighborhood $N_{r'}(y)\subset N_r(x)$, and it contains uncountably many points of $E$ because $y\in P$, so $N_r(x)$ also contains uncountably many points of $E$. So $x\in P$, which means $P$ is closed. Now let $x\in P$ and let $N_r(x)$ be a neighborhood of $x$; then $N_r(x)$ contains uncountably many points of $E$. If $N_r(x)$ contained no element of $P$ other than $x$, then $N_r(x)\subset\{x\}\cup P^c$, and $N_r(x)\cap E\subset \{x\}\cup(P^c\cap E)$ would be at most countable, contradicting the fact that $N_r(x)$ contains uncountably many points of $E$. Therefore, $N_r(x)$ contains an element of $P$ other than $x$, which means $x$ is a limit point of $P$. Therefore, $P$ is closed, and every point of $P$ is a limit point of $P$, which means $P$ is perfect.

\paragraph{28.}

Let $P$ be defined as in Exercise 27. Every point of $P$ is a limit point of $E$, and $E$ is closed, so $P\subset E$, which means $P\cap E=P$.
\[
E=(P\cap E)\cup(P^c\cap E)=P\cup(P^c\cap E)
\]
The result of Exercise 27 is not restricted to $R^k$ (its proof uses only a countable base, which every separable metric space has by Exercise 23), so $P$ is perfect, and $P^c\cap E$ is at most countable. So $E$ is the union of a perfect set and a set which is at most countable.

\medskip

\textit{Corollary}\quad Let $E$ be a nonempty closed set in $R^k$. If it has no isolated point, then every point of $E$ is a limit point of $E$, so $E$ is perfect and is therefore uncountable (Theorem 2.43). So every nonempty countable closed set in $R^k$ has an isolated point.

\paragraph{29.}

By Exercises 22 and 23, $R^1$ is separable and has a countable base $\{V_\alpha\}$, which may be taken to consist of segments (in $R^1$ the neighborhoods $N_h(q)$ are segments). For every open set $E$, let $\{V_{\alpha'}\}$ be the collection of all base elements such that $V_{\alpha'}\subset E$. Every $V_{\alpha'}\subset E$, so $\bigcup_{\alpha'}V_{\alpha'}\subset E$. For every $x\in E$ there is some $V_{\alpha'}$ with $x\in V_{\alpha'}\subset E$, so $E\subset\bigcup_{\alpha'}V_{\alpha'}$. Therefore $E=\bigcup_{\alpha'}V_{\alpha'}$. $\{V_{\alpha}\}$ is countable, so $\{V_{\alpha'}\}$ is also countable, and every $V_{\alpha'}$ is a segment. Finally, merging overlapping segments (that is, taking the connected components of $E$, each of which is an open interval containing at least one $V_{\alpha'}$) shows that $E$ is the union of an at most countable collection of disjoint segments.

\paragraph{30.}

Let $G_n$ be a dense open subset of $R^k$, for $n=1,2,3,\cdots$. Let $G$ be a nonempty open set in $R^k$, and let $N_r(x)\subset G$ be a neighborhood in $G$. $G_1$ is dense in $R^k$, so $N_r(x)$ contains an element of $G_1$, which means $N_r(x)\cap G_1\neq\varnothing$.
Both $G_1$ and $N_r(x)$ are open, so $N_r(x)\cap G_1$ is open. Let $N_{r_1}(x_1)$ be a neighborhood such that $\overline{N_{r_1}(x_1)}\subset N_r(x)\cap G_1$ (it is easy to find one: first choose a neighborhood with some radius $\varepsilon$ inside $N_r(x)\cap G_1$, then let $N_{r_1}(x_1)$ be the neighborhood with the same center and radius $\frac{\varepsilon}{2}$). Similarly, $N_{r_1}(x_1)\cap G_2$ is nonempty and open, so we can find $N_{r_2}(x_2)$ such that $\overline{N_{r_2}(x_2)}\subset N_{r_1}(x_1)\cap G_2$. Repeating the process, we have
\begin{alignat*}{2}
& \overline{N_{r_1}(x_1)} && \subset N_r(x)\cap G_1\\
& \overline{N_{r_2}(x_2)} && \subset N_{r_1}(x_1)\cap G_2\\
& && \vdots \\
& \overline{N_{r_n}(x_n)} && \subset N_{r_{n-1}}(x_{n-1})\cap G_n
\end{alignat*}
Every $\overline{N_{r_n}(x_n)}$ is closed and bounded and therefore compact. $\overline{N_{r_{n+1}}(x_{n+1})}\subset N_{r_n}(x_n) \subset\overline{N_{r_n}(x_n)}$, so
\[
\overline{N_{r_1}(x_1)}\supset \overline{N_{r_2}(x_2)}\supset\cdots
\]
By Theorem 2.36,
\[
\bigcap_{n=1}^\infty\overline{N_{r_n}(x_n)}\neq\varnothing
\]
$\overline{N_{r_n}(x_n)}\subset G_n$ for every $n$, so
\[
\bigcap_{n=1}^\infty G_n\supset\bigcap_{n=1}^\infty\overline{N_{r_n}(x_n)}\neq\varnothing
\]
The equivalent statement is proved.

\medskip

If $R^k=\bigcup_1^\infty F_n$, where every $F_n$ is closed and has empty interior, then $F_n^c$ is open and dense in $R^k$ (every neighborhood of $x\in F_n$ contains a point not in $F_n$, so $F_n\subset(F_n^c)'$, and $R^k=F_n\cup F_n^c\subset(F_n^c)'\cup F_n^c$, which means $F_n^c$ is dense in $R^k$). Then from the equivalent statement, we have
\[
\bigcap_{n=1}^\infty F_n^c\neq\varnothing
\]
which means there is an element contained in no $F_n$, a contradiction. Therefore, at least one $F_n$ has a nonempty interior.

\end{document}
\documentclass[letterpaper,twocolumn,openany,nodeprecatedcode]{dndbook}

% Use babel or polyglossia to automatically redefine macros for terms
% Armor Class, Level, etc...
% Default output is in English; captions are located in lib/dndstring-captions.sty.
% If no captions exist for a language, English will be used.
%1. To load a language with babel:
%   \usepackage[<lang>]{babel}
%2. To load a language with polyglossia:
%   \usepackage{polyglossia}
%   \setdefaultlanguage{<lang>}
\usepackage[english]{babel}
%\usepackage[italian]{babel}
% For further options (multilanguage documents, hyphenations, language environments...)
% please refer to babel/polyglossia's documentation.
\usepackage[utf8]{inputenc}
\usepackage[singlelinecheck=false]{caption}
\usepackage{lipsum}
\usepackage{listings}
\usepackage{shortvrb}
\usepackage{stfloats}

\captionsetup[table]{labelformat=empty,font={sf,sc,bf,},skip=0pt}

\MakeShortVerb{|}

\lstset{%
  basicstyle=\ttfamily,
  language=[LaTeX]{TeX},
  breaklines=true,
}

\title{Fall of the Arrived \\
\large A Story in the World of Nevanauia}
\author{The rpgTeX Team}
\date{2019/07/18}

\begin{document}

\frontmatter

\maketitle
\tableofcontents

\mainmatter%

\part{In Universe}

\chapter{The World}

\section{Year 458 of the Second Era}

Seven years ago a previously unknown pantheon of deities arrived planet-side during a time of great strife. Maintaining a more hands-on presence than any other pantheon, they formed the beautiful "Silver City", which is crowned by the Silver Pillar, more commonly known as the Pillar of the Arrived; it functions as the headquarters of the Church of the Arrived, the church that has formed planet-side to worship these new saviors. The Pillar of the Arrived "descended from the heavens" and implanted itself deep within the ground, but is still visible from vast distances around the continent, Thyronia.

While they have helped to bring health and food to large swaths of the populace on the continent of Thyronia, as well as chase away the armies of the lich Farkrere, all is not well. A growing portion of the populace believes that rather than deities, the Arrived are frauds, using some unknown magics to mimic their miracles; this belief is also supported by certain religions of the most "established" deities. These grumblings have gained traction due to the seemingly increased rates of planar instability, which have resulted in odd occurrences and rapid changes within the races planet-side since the Arrived descended. While the Church has been content to leave the grumblings be, there have been raids against Arrived holdings in which various artifacts were stolen, resulting in horrific reprisals by the Church of the Arrived, to varying degrees of success.

\onecolumn
\section{Land of the Arrived}
\includegraphics[width=\textwidth]{img/initial_region.jpg}
\section{Northern Thyronia}
\includegraphics[width=\textwidth]{img/westport_region.jpg}
\subsection{Westport}
\twocolumn

\chapter{Adventure Synopsis}

Seven years ago a previously unknown pantheon of deities arrived planet-side during a time of great strife. Maintaining a more hands-on presence than any other pantheon, they formed the beautiful "Silver City", which is crowned by the Silver Pillar, more commonly known as the Pillar of the Arrived; it functions as the headquarters of the Church of the Arrived, the church that has formed planet-side to worship these new saviors.
The Pillar of the Arrived "descended from the heavens" and implanted itself deep within the ground, but is still visible from vast distances around the continent, Thyronia. While they have helped to bring health and food to large swaths of the populace on the continent of Thyronia, as well as chase away the armies of the lich Farkrere, all is not well. A growing portion of the populace believes that rather than deities, the Arrived are frauds, using some unknown magics to mimic their miracles; this belief is also supported by certain religions of the most "established" deities. These grumblings have gained traction due to the seemingly increased rates of planar instability, which have resulted in odd occurrences and rapid changes within the races planet-side since the Arrived descended. While the Church has been content to leave the grumblings be, there have been raids against Arrived holdings in which various artifacts were stolen, resulting in horrific reprisals by the Church of the Arrived, to varying degrees of success.

\subsection{This is Where You Find Yourself}

You've been hired on to guard a caravan which originated from the western continent of Raevrak and is heading into Westport, the only deep-water port capable of safe trade with Raevrak. While the caravan has many concerns, the portion of the shipment you've been hired to guard is destined for the Church of the Arrived and is accompanied by the Priest of the Arrived, Wehton. You've been hired to accompany the shipment from Westport on the long and perilous journey to the Silver City. The campaign will start as you meet Wehton at the Inn of the Drunken Dragon on the outskirts of the warehouse district of Westport, and begin to prepare for your journey.

\chapter{Races}
All of the Races from Pathfinder (Paizo Content Only) have found their way to the world, with differences in how they got started and how they interact with one another.
\section{Humans}
\section{Elves}
\section{Dwarves}
\section{Orcs}
\section{Ifrit}
\section{The Arrived}

\chapter{Factions}
\section{Church of the Arrived}
\subsection{Priesthood}
\subsection{The Fist}
\subsection{The Apostates}
\section{The Erasticlani}
A secretive order of Oracles who give prophecies of great heroes and the changing of the times. It is they who decide when planet-side changes era, and they are capable of announcing it to all planet-side.
\section{Westport}
\subsection{Westport River Guard}
\section{Druids of the Northern Expanse}
\section{Trade Guilds}
Different occupations have realized that the path to wealth comes from organization and cooperation; a large number of them have formed various guilds to help each other. The following are some of the more prominent.
\subsection{The Caravaners Guild}
\subsection{The Bridge Guard}
Not a guild per se, but rather an organization supported by the guilds and other groups in the region. The Bridge of Jarasciel, more commonly known as just \textit{The Bridge}, is the closest thing to a safe way to journey from Westport to the rest of the continent. That is not to say the journey is safe; the region is haunted by dragons and wyverns, but the presence of the Bridge Guard makes it the preferred method of travel. Highly paid, and highly unscrupulous, this organization exists \textit{only} to defend The Bridge and is unlikely to stick their necks out for anything else. Dwarves make up roughly 60\% of their numbers, with the rest roughly equally split between other races.
Even though Dwarves make up such a large proportion of their numbers, their members are tight-knit regardless of race, and morale generally runs very high.

\section{Linnorm Jarldom}
\subsection{The Linnorm Barbarians}
\section{The Dwarvish Communion}
Dwarves are ... odd. This is the general feeling towards them. Tightly knit communities and a general distrust of outsiders haven't had a great impact on their standing throughout the region, and it's generally important to realize... they don't care. The primary feeling of a Dwarvish community is "If it doesn't affect me, why should I care?", and understanding this simple fact improves most people's relations with the Dwarves.

The \textbf{Dwarvish Communion} is a loosely linked confederation of Dwarvish interests who primarily agree on free trade amongst Dwarves, and kicking the shit out of anyone who messes with them. \textit{This has worked out well for them so far.} Groups earning the ire of the Communion have found neither time nor distance will truly keep them safe.

Aside from their skilled traders, most people know of two different groups of Dwarves in the Communion: The Diggers, who explore deeper into the planet in their everlong quest to explore and find new treasure, and The Helmless, who protect Dwarves from people and entities who would prey on them.

\subsection{The Diggers}
\subsection{The Helmless}
\section{Pirates}
\subsection{The Free Pirates of the Labyrinth}
\subsection{The Maelstrom Marauders}
\section{Western Lefetian Empire}
\subsection{The Honor Guard}
\section{Farkrere's Cabal}
\subsection{}
\subsection{The Winged Devils}
\section{Kingdom Of Nematu}
\subsection{Nematuian Assassins}

\part{Mechanics}
\chapter{Character Creation}
\chapter{Feat Modifications}
\chapter{World Specific Equipment}

\chapter{Sections}
\DndDropCapLine{T}{his package is designed to aid you in} writing beautifully typeset documents for the fifth edition of the world's greatest roleplaying game. It starts by adjusting the section formatting from the defaults in \LaTeX{} to something a bit more familiar to the reader. The chapter formatting is displayed above.

\section{Section}
Sections break up chapters into large groups of associated text.

\subsection{Subsection}
Subsections further break down the information for the reader.

\subsubsection{Subsubsection}
Subsubsections are the furthest division of text that still have a block header. Below this level, headers are displayed inline.

\paragraph{Paragraph}
The paragraph format is seldom used in the core books, but is available if you prefer it to the ``normal'' style.

\subparagraph{Subparagraph}
The subparagraph format with the paragraph indent is likely going to be more familiar to the reader.

\section{Special Sections}
The module also includes functions to aid in the proper typesetting of multi-line section headers: |\DndFeatHeader| for feats, |\DndItemHeader| for magic items and traps, and |\DndSpellHeader| for spells.

\DndFeatHeader{Typesetting Savant}[Prerequisite: \LaTeX{} distribution]
You have acquired a package which aids in typesetting source material for one of your favorite games. You have advantage on Intelligence checks to typeset new content. On a failed check, you can ask questions online at the package's website.

\DndItemHeader{Foo's Quill}{Wondrous item, rare}
This quill has 3 charges. While holding it, you can use an action to expend 1 of its charges. The quill leaps from your hand and writes a contract applicable to your situation. The quill regains 1d3 expended charges daily at dawn.
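For reference, the markup that produced the two headers above is:

\begin{lstlisting}
\DndFeatHeader{Typesetting Savant}[Prerequisite: \LaTeX{} distribution]
\DndItemHeader{Foo's Quill}{Wondrous item, rare}
\end{lstlisting}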
\DndSpellHeader%
{Beautiful Typesetting}
{4th-level illusion}
{1 action}
{5 feet}
{S, M (ink and parchment, which the spell consumes)}
{Until dispelled}
You are able to transform a written message of any length into a beautiful scroll. All creatures within range that can see the scroll must make a Wisdom saving throw or be charmed by you until the spell ends.

While the creature is charmed by you, they cannot take their eyes off the scroll and cannot willingly move away from the scroll. Also, the targets can make a Wisdom saving throw at the end of each of their turns. On a success, they are no longer charmed.

\section{Map Regions}
The map region functions |\DndArea| and |\DndSubArea| provide automatic numbering of areas.
\DndArea{Village of Hommlet} This is the village of Hommlet.
\DndSubArea{Inn of the Welcome Wench} Inside the village is the inn of the Welcome Wench.
\DndSubArea{Blacksmith's Forge} There's a blacksmith in town, too.
\DndArea{Foo's Castle} This is foo's home, a hovel of mud and sticks.
\DndSubArea{Moat} This ditch has a board spanning it.
\DndSubArea{Entrance} A five-foot hole reveals the dirt floor illuminated by a hole in the roof.

\chapter{Text Boxes}
The module has three environments for setting text apart so that it is drawn to the reader's attention. |DndReadAloud| is used for text that a game master would read aloud.
\begin{DndReadAloud}
  As you approach this module you get a sense that the blood and tears of many generations went into its making. A warm feeling welcomes you as you type your first words.
\end{DndReadAloud}

\section{As an Aside}
The other two environments are the |DndComment| and the |DndSidebar|. The |DndComment| is breakable and can safely be used inline in the text.
\begin{DndComment}{This Is a Comment Box!}
  A |DndComment| is a box for minimal highlighting of text. It lacks the ornamentation of |DndSidebar|, but it can handle being broken over a column.
\end{DndComment}
The |DndSidebar| is not breakable and is best used floated toward a page corner as it is below.
\begin{DndSidebar}[float=!b]{Behold the DndSidebar!}
  The |DndSidebar| is used as a sidebar. It does not break over columns and is best used with a figure environment to float it to one corner of the page where the surrounding text can then flow around it.
\end{DndSidebar}

\section{Tables}
The |DndTable| colors the even rows and is set to the width of a line by default.
\begin{DndTable}[header=Nice Table]{XX}
  \textbf{Table head} & \textbf{Table head} \\
  Some value & Some value \\
  Some value & Some value \\
  Some value & Some value
\end{DndTable}

\chapter{Monsters and NPCs}

% Monster stat block
\begin{DndMonster}[float*=b,width=\textwidth + 8pt]{Monster Foo}
  \begin{multicols}{2}
    \DndMonsterType{Medium aberration (metasyntactic variable), neutral evil}

    % If you want to use commas in the key values, enclose the values in braces.
\DndMonsterBasics[ armor-class = {9 (12 with \emph{mage armor})}, hit-points = {\DndDice{3d8 + 3}}, speed = {30 ft., fly 30 ft.}, ] \DndMonsterAbilityScores[ str = 12, dex = 8, con = 13, int = 10, wis = 14, cha = 15, ] \DndMonsterDetails[ %saving-throws = {Str +0, Dex +0, Con +0, Int +0, Wis +0, Cha +0}, %skills = {Acrobatics +0, Animal Handling +0, Arcana +0, Athletics +0, Deception +0, History +0, Insight +0, Intimidation +0, Investigation +0, Medicine +0, Nature +0, Perception +0, Performance +0, Persuasion +0, Religion +0, Sleight of Hand +0, Stealth +0, Survival +0}, %damage-vulnerabilities = {cold}, %damage-resistances = {bludgeoning, piercing, and slashing from nonmagical attacks}, %damage-immunities = {poison}, %condition-immunities = {poisoned}, senses = {darkvision 60 ft., passive Perception 10}, languages = {Common, Goblin, Undercommon}, challenge = 1, ] % Traits \DndMonsterAction{Innate Spellcasting} Foo's spellcasting ability is Charisma (spell save DC 12, +4 to hit with spell attacks). It can innately cast the following spells, requiring no material components: \begin{DndMonsterSpells} \DndInnateSpellLevel{misty step} \DndInnateSpellLevel[3]{fog cloud, rope trick} \DndInnateSpellLevel[1]{identify} \end{DndMonsterSpells} \DndMonsterAction{Spellcasting} Foo is a 2nd-level spellcaster. Its spellcasting ability is Charisma (spell save DC 12, +4 to hit with spell attacks). It has the following sorcerer spells prepared: \begin{DndMonsterSpells} \DndMonsterSpellLevel{blade ward, fire bolt, light, shocking grasp} \DndMonsterSpellLevel[1][3]{burning hands, mage armor, shield} \end{DndMonsterSpells} \DndMonsterSection{Actions} \DndMonsterAction{Multiattack} The foo makes two melee attacks. %Default values are shown commented out \DndMonsterAttack[ name=Dagger, %distance=both, % valid options are in the set {both,melee,ranged}, %type=weapon, %valid options are in the set {weapon,spell} mod=+3, %reach=5, %range=20/60, %targets=one target, dmg=\DndDice{1d4+1}, dmg-type=piercing, %plus-dmg=, %plus-dmg-type=, %or-dmg=, %or-dmg-when=, %extra=, ] %\DndMonsterMelee calls \DndMonsterAttack with the melee option \DndMonsterMelee[ name=Flame Tongue Longsword, mod=+3, %reach=5, %targets=one target, dmg=\DndDice{1d8+1}, dmg-type=slashing, plus-dmg=\DndDice{2d6}, plus-dmg-type=fire, or-dmg=\DndDice{1d10+1}, or-dmg-when=if used with two hands, %extra=, ] %\DndMonsterRanged calls \DndMonsterAttack with the ranged option \DndMonsterRanged[ name=Assassin's Light Crossbow, mod=+1, range=80/320, dmg=\DndDice{1d8}, dmg-type=piercing, %plus-dmg=, %plus-dmg-type=, %or-dmg=, %or-dmg-when=, extra={, and the target must make a DC 15 Constitution saving throw, taking 24 (7d6) poison damage on a failed save, or half as much damage on a successful one} ] % Legendary Actions \DndMonsterSection{Legendary Actions} The foo can take 3 legendary actions, choosing from the options below. Only one legendary action option can be used at a time and only at the end of another creature's turn. The foo regains spent legendary actions at the start of its turn. \begin{DndMonsterLegendaryActions} \DndMonsterLegendaryAction{Move}{The foo moves up to its speed.} \DndMonsterLegendaryAction{Dagger Attack}{The foo makes a dagger attack.} \DndMonsterLegendaryAction{Create Contract (Costs 3 Actions)}{The foo presents a contract in a language it knows and waves it in the face of a creature within 10 feet. The creature must make a DC 10 Intelligence saving throw. 
    On a failure, the creature is incapacitated until the start of the foo's next turn. A creature who cannot read the language in which the contract is written has advantage on this saving throw.}
    \end{DndMonsterLegendaryActions}
  \end{multicols}
\end{DndMonster}

The |DndMonster| environment is used to typeset monster and NPC stat blocks. The module supplies many functions to easily typeset the contents of the stat block.

\chapter{Colors}

\begin{table*}[b]%
  \caption{}\label{tab:colors}
  \begin{DndTable}[width=\linewidth,header=Colors Supported by This Package]{lX}
    \textbf{Color} & \textbf{Description} \\
    |PhbLightGreen| & Light green used in PHB Part 1 (Default) \\
    |PhbLightCyan| & Light cyan used in PHB Part 2 \\
    |PhbMauve| & Pale purple used in PHB Part 3 \\
    |PhbTan| & Light brown used in PHB appendix \\
    |DmgLavender| & Pale purple used in DMG Part 1 \\
    |DmgCoral| & Orange-pink used in DMG Part 2 \\
    |DmgSlateGray| (|DmgSlateGrey|) & Blue-gray used in DMG Part 3 \\
    |DmgLilac| & Purple-gray used in DMG appendix \\
  \end{DndTable}
\end{table*}

This package provides several global color variables to style |DndComment|, |DndReadAloud|, |DndSidebar|, and |DndTable| environments.

\begin{DndTable}[header=Box Colors]{lX}
  \textbf{Color} & \textbf{Description} \\
  |commentcolor| & |DndComment| background \\
  |readaloudcolor| & |DndReadAloud| background \\
  |sidebarcolor| & |DndSidebar| background \\
  |tablecolor| & background of even |DndTable| rows \\
\end{DndTable}

They also accept an optional color argument to set the color for a single instance. See Table~\ref{tab:colors} for a list of core book accent colors.

\begin{lstlisting}
\begin{DndTable}[color=PhbLightCyan]{cX}
  \textbf{d8} & \textbf{Item} \\
  1 & Small wooden button \\
  2 & Red feather \\
  3 & Human tooth \\
  4 & Vial of green liquid \\
  6 & Tasty biscuit \\
  7 & Broken axe handle \\
  8 & Tarnished silver locket \\
\end{DndTable}
\end{lstlisting}

\begin{DndTable}[color=PhbLightCyan]{cX}
  \textbf{d8} & \textbf{Item} \\
  1 & Small wooden button \\
  2 & Red feather \\
  3 & Human tooth \\
  4 & Vial of green liquid \\
  6 & Tasty biscuit \\
  7 & Broken axe handle \\
  8 & Tarnished silver locket \\
\end{DndTable}

\section{Themed Colors}
Use |\DndSetThemeColor[<color>]| to set |commentcolor|, |readaloudcolor|, |sidebarcolor|, and |tablecolor| to a specific color. Calling |\DndSetThemeColor| without an argument sets those colors to the current |themecolor|. In the following example the group limits the change to just a few boxes; after the group finishes, the colors are reverted to what they were before the group started.

\begin{lstlisting}
\begingroup
\DndSetThemeColor[PhbMauve]

\begin{DndComment}{This Comment Is in Mauve}
  This comment is in the new color.
\end{DndComment}

\begin{DndSidebar}{This Sidebar Is Also Mauve}
  The sidebar is also using the new theme color.
\end{DndSidebar}
\endgroup
\end{lstlisting}

\begingroup
\DndSetThemeColor[PhbMauve]

\begin{DndComment}{This Comment Is in Mauve}
  This comment is in the new color.
\end{DndComment}

\begin{DndSidebar}{This Sidebar Is Also Mauve}
  The sidebar is also using the new theme color.
\end{DndSidebar}
\endgroup

\end{document}
\documentclass[pdflatex,compress,8pt,
xcolor={dvipsnames,svgnames,x11names,table},
hyperref={colorlinks = true, breaklinks = true, urlcolor = NavyBlue}]{beamer}
\usecolortheme{crane}
\usepackage[super]{nth}
\usepackage{subfig}
\usepackage{gensymb} % degree symbol
\usepackage{amsmath} % math symbols
\usepackage{graphicx} % to insert figures
% ----------------------------------------------------------------------------
% *** START BIBLIOGRAPHY <<<
% ----------------------------------------------------------------------------
\usepackage[
backend=biber,
% style = numeric,
% style=nature,
style=apa,
% style=mla,
% style=phys, % without doi
maxbibnames=99,
citestyle=numeric,
giveninits=true,
isbn=true,
url=true,
natbib=true,
sorting=ndymdt,
bibencoding=utf8,
useprefix=false,
language=auto,
autolang=other,
backref=true,
backrefstyle=none,
indexing=cite,
]{biblatex}
\DeclareSortingTemplate{ndymdt}{
  \sort{ \field{presort} }
  \sort[final]{ \field{sortkey} }
  \sort{ \field{sortname} \field{author} \field{editor} \field{translator} \field{sorttitle} \field{title} }
  \sort[direction=descending]{ \field{sortyear} \field{year} \literal{9999} }
  \sort[direction=descending]{ \field[padside=left,padwidth=2,padchar=0]{month} \literal{99} }
  \sort[direction=descending]{ \field[padside=left,padwidth=2,padchar=0]{day} \literal{99} }
  \sort{ \field{sorttitle} }
  \sort[direction=descending]{ \field[padside=left,padwidth=4,padchar=0]{volume} \literal{9999} }
}
\addbibresource{Split.bib}% \tiny \scriptsize \footnotesize \normalsize
\renewcommand*{\bibfont}{\footnotesize}
% \setbeamertemplate{bibliography item}{\insertbiblabel}
% ----------------------------------------------------------------------------
% *** END BIBLIOGRAPHY <<<
% ----------------------------------------------------------------------------

\title{Spatial analysis for the assessment of the environmental changes in the landscapes of Izmir surroundings}
\subtitle{Presented at \\
\nth{10} International Conference on \\
Environmental, Cultural, Economic and Social Sustainability\\
Split, Croatia}
\author{Polina Lemenkova}
\date{January 22, 2014}

\begin{document}

\begin{frame}
\titlepage
\end{frame}

\section*{Outline}
\begin{frame}\frametitle{Table of Contents}
\tableofcontents
\end{frame}

\section{Study Area}
\subsection{Research Problem}
\begin{frame}\frametitle{Research Problem}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\vspace{2em}
\begin{figure}[H]
\centering
\includegraphics[width=5.0cm]{F1.jpg}
\end{figure}
\small{The study region is located in western Turkey, in the surroundings of Izmir. \\
The region is under strong anthropogenic pressure: a well-developed transport network, intensive shipping and maritime construction, industrial factories and plants, densely populated urban districts, and intensive agricultural cultivation.
}
\end{column}
\begin{column}{0.5\textwidth}
\vspace{2em}
Research Problem:
\begin{itemize}
\item The region of Izmir is a distinctive part of Turkey: it has unique landscapes with a variety of vegetation types, diverse relief, and nature reserve areas;
\item The vegetation within the Aegean region has a very complex character;
\item The area is characterized by variety, biogeographical diversity, and richness;
\item At the same time, Izmir, the third largest metropolis of Turkey, is an industrial city of high importance;
\item Izmir is a key seaport, strategic for the country and the Mediterranean region;
\end{itemize}
\end{column}
\end{columns}
\end{minipage}
\end{frame}

\subsection{Research Questions and Goals}
\begin{frame}\frametitle{Research Questions and Goals}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\vspace{2em}
\begin{figure}[H]
\centering
\includegraphics[width=4.7cm]{F2.jpg}
\end{figure}
\footnotesize{Western Turkey, Izmir region. Landscapes from the aerial view. Source: Google Earth}
\end{column}
\begin{column}{0.5\textwidth}
\vspace{2em}
Research Questions and Goals:
\begin{itemize}
\item How have landscapes within the test area of the Izmir region changed due to anthropogenic effects?
\item Visualization of the landscapes in the given time scope of 13 years (1987-2000)
\item If there are changes, what are the exact areas (in ha or km$^2$) occupied by every land cover type?
\item Calculate \& Assess Accuracy.
\item How can remote sensing (RS) data and the GIS tools of Erdas Imagine be used to answer questions (1) and (2)?
\item Demonstration \& Discussion
\end{itemize}
\end{column}
\end{columns}
\end{minipage}
\end{frame}

\section{Methods}
\begin{frame}\frametitle{Methods}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\vspace{2em}
\begin{figure}[H]
\centering
\includegraphics[width=4.5cm]{F4.jpg}
\end{figure}
\footnotesize{Methodological Flowchart}
\end{column}
\begin{column}{0.5\textwidth}
\vspace{2em}
\begin{itemize}
\item Data import and conversion
\item Creating multi-band layer \& color composite
\item Selecting AOI (Area Of Interest)
\item Clustering segmentation and classification
\item GIS Mapping
\item Verification via Google Earth
\item Accuracy Assessment
\item Analyzing results
\end{itemize}
\end{column}
\end{columns}
\end{minipage}
\end{frame}

\subsection{Data Import}
\begin{frame}\frametitle{Data Import}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\vspace{1em}
\begin{figure}[H]
\centering
\includegraphics[width=5.5cm]{F5.jpg}
\end{figure}
\begin{itemize}
\item Study area: selecting the area covered by Landsat TM scenes.
\item GLCF website: Landsat Thematic Mapper (TM)
\item Global Land Cover Facility (GLCF) Earth Science Data Interface
\item Analysis of vegetation types: images taken during summer (June).
\end{itemize}
\end{column}
\begin{column}{0.5\textwidth}
\vspace{1em}
For selecting the target area, a spatial mask of coordinates ranging from 26\degree 00'-26\degree 00' E to 38\degree 00'-39\degree 00' N was applied.
\begin{figure}[H]
\centering
\includegraphics[width=5.5cm]{F6.jpg}
\end{figure}
\begin{itemize}
\item Target images: 1987 and 2000
\item Time span of 13 years (1987-2000)
\item Change detection in the land cover types.
\end{itemize}
\end{column}
\end{columns}
\end{minipage}
\end{frame}

\subsection{Data Conversion}
\begin{frame}\frametitle{Data Conversion}
\begin{figure}[H]
\centering
\includegraphics[width=10.0cm]{F7.jpg}
\end{figure}
Conversion of raw .TIFF Landsat TM images into the Erdas Imagine “.img” format.
\end{frame}

\subsection{Creating Multi-band Color Composite}
\begin{frame}\frametitle{Creating Multi-band Color Composite}
\begin{figure}[H]
\centering
\includegraphics[width=9.0cm]{F8.jpg}
\end{figure}
\end{frame}

\subsection{Selecting Area of Interest (AOI)}
\begin{frame}\frametitle{Selecting AOI}
Test area: Izmir surroundings.
\begin{itemize}
\item Test area: Manisa and Izmir provinces, covering various landscape types;
\item AOI ecological diversity: urban built-up areas, coastal zone, agricultural crop areas, hilly landscapes;
\item Urban areas located on the coast of the Aegean Sea, with ca. 4 M people;
\item Human impact on the environment: demographic, cultural \& economic pressure;
\item This is reflected in various land cover types, landscape patterns, and heterogeneity;
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=8.0cm]{F9.jpg}
\end{figure}
\begin{itemize}
\item Left: Selecting AOI from the overlapping initial Landsat images.
\item Center: adjusting parameters, Erdas Imagine.
\item Right: AOI 1987 (above) and AOI 2000 (below).
\end{itemize}
\end{frame}

\subsection{Clustering Segmentation}
\begin{frame}\frametitle{Clustering Segmentation}
Principle of clustering segmentation:
\begin{itemize}
\item The algorithmic approach of clustering segmentation consists of merging the pixels of an image into clusters.
\item Grouping pixels is based on the assessment of their homogeneity, that is, their distinguishability from neighboring pixels.
\item Clusters enable analysis of the spectral \& textural characteristics of the land cover types, i.e., spatial analysis.
\item Accurate cluster segmentation of the images is an important step for supervised classification.
\end{itemize}
Differentiating Patterns via DNs:
\begin{itemize}
\item Image classification consists of assigning all pixels to land cover classes of the study area.
\item Classification is done using multispectral data and the spectral patterns (signatures) of the pixels that represent land cover classes.
\item Various land cover types and landscape features are detected using individual properties of the digital numbers (DNs) of the pixels.
\item The DNs show values of the spectral reflectance of the land cover features, and individual properties of the objects.
\end{itemize}
\end{frame}

\subsection{Clustering: Algorithm}
\begin{frame}\frametitle{Clustering: Algorithm}
\begin{itemize}
\item Clustering was performed to classify pixels into thematic groups, or clusters.
\item Number of clusters = 15, which corresponds to the selected land cover types in the study area.
\item During clustering, each pixel of the image is assigned to the corresponding cluster.
\item The assigned cluster is the one whose mean DN value is closest to the DN value of the given pixel.
\item The process is repeated in an iterative way.
\item Iteration continues until stable values of the class groups and of the pixel assignments are reached.
\item Afterwards, the land cover types were visually assessed and identified for each land cover class.
\end{itemize}
\end{frame}
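\begin{frame}[fragile]\frametitle{Clustering: A Minimal Sketch}
The assign-and-update rule described on the previous slide, as an illustrative Python sketch. This is \emph{not} the actual Erdas Imagine implementation; \texttt{pixels} is a hypothetical NumPy array of per-pixel DN values.
\begin{verbatim}
import numpy as np

def cluster(pixels, k=15, iters=20):
    # pixels: hypothetical (n_pixels, n_bands) array of DN values.
    # Start from k randomly chosen pixels as the initial cluster means.
    idx = np.random.choice(len(pixels), k, replace=False)
    means = pixels[idx].astype(float)
    for _ in range(iters):
        # Distance of every pixel to every cluster mean: (n_pixels, k).
        dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :],
                              axis=2)
        labels = dist.argmin(axis=1)   # assign pixel to nearest mean
        for j in range(k):             # recompute the cluster means
            if np.any(labels == j):
                means[j] = pixels[labels == j].mean(axis=0)
    return labels
\end{verbatim}
\end{frame}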
\subsection{Clustering: Visualization}
\begin{frame}\frametitle{Clustering: Visualization}
\begin{figure}[H]
\centering
\includegraphics[width=8.0cm]{F10.jpg}
\end{figure}
\begin{itemize}
\item Final thematic mapping is based on the results of the image classification;
\item Visualizing landscape structure and land cover in the study area.
\item The final thematic maps are presented on the following slide.
\end{itemize}
\end{frame}

\section{Results}
\begin{frame}\frametitle{Maps of 1987 and 2000}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[width=3.8cm]{F11.jpg}
\end{figure}
\small{1987}
\end{column}
\begin{column}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[width=4.0cm]{F12.jpg}
\end{figure}
\small{2000}
\end{column}
\end{columns}
\end{minipage}
Classified Landsat TM image (above) and thematic map of land cover types (below).
\end{frame}

\section{Accuracy Assessment}
\subsection{Verification via the Google Earth: Algorithm}
\begin{frame}\frametitle{Verification via the Google Earth: Algorithm}
\begin{minipage}[0.4\textheight]{\textwidth}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\begin{figure}[H]
\centering
\includegraphics[width=5.0cm]{F13.jpg}
\end{figure}
\small{Linking Map with the Google Earth}
\end{column}
\begin{column}{0.5\textwidth}
\begin{itemize}
\item The selected areas with the most diverse landscape structure and high heterogeneity of the land cover types have been verified by overlaying Google Earth aerial images.
\item The function “connect to Google Earth” was activated, which made it possible to visualize the same region of the current study in Google Earth simultaneously.
\item The functions “Link Google Earth to View” and “Sync Google Earth to View” made it possible to synchronize the view areas between Google Earth and the current view of the image.
\item This made it possible to check the difficult study areas where it was unclear to which land cover type a site belongs.
\end{itemize}
\end{column}
\end{columns}
\end{minipage}
\end{frame}

\subsection{Error Matrix}
\begin{frame}\frametitle{Computing Error Matrix}
\begin{figure}[H]
\centering
\includegraphics[width=6.0cm]{F15.jpg}
\end{figure}
\small{Left: Correction of the assigned class values of the generated points according to the real values. \\
Right: Error matrix generated for each land cover class, Landsat TM classification 1987.}
\begin{figure}[H]
\centering
\includegraphics[width=5.0cm]{F14.jpg}
\end{figure}
\small{Results validation: quality control and validation of the results.\\
Quality control was performed using the accuracy assessment operations in the Erdas Imagine menu.}
\end{frame}

\subsection{Final Calculations}
\begin{frame}\frametitle{Final Calculations}
\begin{figure}[H]
\centering
\includegraphics[width=10.0cm]{F16.jpg}
\end{figure}
Classification of Landsat TM image, 1987. Classification of Landsat TM image, 2000.
\end{frame}

\subsection{Kappa Statistics}
\begin{frame}\frametitle{Accuracy Results: Kappa Statistics}
\begin{figure}[H]
\centering
\includegraphics[width=8.5cm]{F17.jpg}
\end{figure}
Accuracy results for the Landsat TM image classification are computed as follows:
\begin{itemize}
\item The classification of the image 1987: accuracy 81.25\%; 2000: 80.47\%.
\item Kappa statistics for the image 2000: 0.7843; for the image 1987: 0.7923
\end{itemize}
\end{frame}

\subsection{Comments on Table}
\begin{frame}\frametitle{Comments on Table}
\begin{itemize}
\item The results indicate changes in land cover types affected by human activities, i.e. increased agricultural areas.
\item 1987: croplands (wheat) covered 71\% of their 2000 extent: 2382 vs. 3345 ha.
\item An increase in barley cropland areas is noticeable as well: 1149 ha in 1987 vs. 4423 ha in 2000.
\item Sparsely vegetated areas now also occupy larger areas: 5914 ha in 2000 against 859 ha in 1987.
\item Natural vegetation decreased, which can be explained by the expansion of the agricultural lands.
\item 1987: coppice areas covered 5500 ha, while by 2000 only 700 ha remained in this land type.
\end{itemize}
\end{frame}

\section{Conclusions}
\begin{frame}\frametitle{Conclusions}
Conclusions:
\begin{itemize}
\item Increased human activities (agricultural works, urbanization, industrialization) affect the environment, cause negative impacts on the ecosystems, and drive changes in the vegetation coverage (land cover types).
\item Climate change affects land cover types: decrease of typical woody vegetation.
\item Drastic land use changes are recorded and detected in diverse regions of Turkey, including the Izmir surroundings.
\end{itemize}
R\'{e}sum\'{e}:
\begin{itemize}
\item Monitoring land cover changes is necessary for maintaining environmental sustainability.
\item Updated information and spatial analysis are useful tools.
\item The presentation demonstrated how landscapes changed in the selected study area over a 13-year time span (1987-2000).
\item The data included Landsat imagery covering the research area. The image processing was done by classification methods.
\item The classification results detected changes in the landscapes in 2000 compared to 1987. This demonstrated anthropogenic impacts on the landscapes, which affect the sustainable environmental development of the region.
\item The results demonstrated a successful combination of RS data and GIS spatial analysis methods, effective for monitoring highly heterogeneous landscapes in areas of intensive anthropogenic activity.
\end{itemize}
\end{frame}

\section{Thanks}
\begin{frame}{Thanks}
\centering
\LARGE
\emph{Thank you for attention !}\\
\vspace{5em}
\normalsize
Acknowledgements: \\
The current research has been supported by the T\"{U}BİTAK: \\
T\"{u}rkiye Bilimsel ve Teknoloji Arastirma Kurumu\\
(The Scientific and Technological Research Council of Turkey) \\
Research Fellowship for Foreign Citizens, No. 2216 for 2012.\\
The research stay was done during 11-12.2012 at \\
Ege University, Faculty of Geography,\\
Izmir, Turkey.
\end{frame}

\section{References}
\begin{frame}{References}
\begin{figure}[H]
\centering
\includegraphics[width=11.0cm]{F18.jpg}
\end{figure}
\end{frame}

%%%%%%%%%%% Bibliography %%%%%%%
\section{Bibliography}
\Large{Bibliography}
\vspace{1em}
\nocite{*}
\printbibliography[heading=none]

%Changing the font size locally (from biggest to smallest):
%\Huge
%\huge
%\LARGE
%\Large
%\large
%\normalsize (default)
%\small
%\footnotesize
%\scriptsize
%\tiny

\end{document}
\chapter{Installation and Getting Help}
\label{cha:installation}

Figure~\ref{fig:install:choices} provides a guide to selecting the appropriate method for installing PyLith. Installation of PyLith on a desktop or laptop machine is, in most cases, very easy. Binary packages have been created for Linux and Mac OS X (Darwin) platforms. For Windows 10 users, we recommend installing the Windows Subsystem for Linux and using the Linux binary (see instructions in Section~\ref{sec:install:windows}). You can also run PyLith inside a Docker container, which provides a virtual Linux environment on any platform that Docker supports, including Linux, Mac OS X, and Windows. Installation of PyLith on other operating systems -- or installation on a cluster -- requires building the software from the source code, which can be difficult for inexperienced users. We have created a small utility called PyLith Installer that makes installing PyLith and all of its dependencies from source much easier.

\begin{figure}[htbp]
  \includegraphics[scale=0.8]{install/figs/installchoices}
  \caption{Guide for selecting the appropriate installation choice based on hardware and intended use. The installation options are discussed in more detail in the following sections.}
  \label{fig:install:choices}
\end{figure}

Help for installing and using PyLith is available from both a CIG mailing list and the GitHub issue tracking system \url{https://github.com/geodynamics/pylith/issues}. See Section~\vref{sec:help} for more information.

\section{Installation of Binary Executable}

The binaries are intended for users running on laptops or desktop computers (as opposed to clusters). The binaries contain the compilers and header files, so users wishing to extend the code can still use the binary and do not need to build PyLith and its dependencies from source. See Chapter~\vref{cha:extending} for more information on extending PyLith.

Binary executables are available for Linux (glibc 2.12 and later) and Mac OS X (Intel 10.10 and later) from the PyLith web page \url{geodynamics.org/cig/software/packages/short/pylith/}. Users running Windows 10 build 14316 and later can install a Linux bash environment and use the PyLith binary for Linux (see Section~\vref{sec:install:windows} for more information).

\tip{On Linux systems you can check which version of glibc you have by running \filename{ldd --version}}.

\tip{On Darwin systems running OS X, you can check the operating system version by clicking on the Apple icon and \menu{About this Mac}.}

\subsection{Linux and Mac OS X (Darwin)}
\begin{enumerate}
\item Open a terminal window and change to the directory where you want to place the distribution.
\begin{shell}
$ cd $HOME
$ mkdir pylith
$ cd pylith
\end{shell}
\item Download the Linux or Mac OS X (Darwin) tarball from the PyLith web page \url{geodynamics.org/cig/software/packages/short/pylith/}, and save it to the desired location, e.g., \filename{\$HOME/pylith}.
\item Unpack the tarball.
\begin{shell}
# Linux 32-bit
$ tar -xzf pylith-2.2.1-linux-i686.tgz
# Linux 64-bit
$ tar -xzf pylith-2.2.1-linux-x86_64.tgz
# Mac OS X
$ tar -xzf pylith-2.2.1-darwin-10.11.6.tgz
\end{shell}
\item Set environment variables. The provided \filename{setup.sh} script only works if you are using the bash shell. If you are using a different shell, you will need to alter how the environment variables are set in \filename{setup.sh}; a sketch for csh-family shells is given after this list.
\begin{shell}
$ source setup.sh
\end{shell}
\end{enumerate}
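For users of csh-family shells, the equivalent variables can be set by hand. A minimal sketch, assuming the tarball was unpacked into the hypothetical location \filename{\$HOME/pylith}; check \filename{setup.sh} for the exact directories used on your system:
\begin{shell}
# Hypothetical csh/tcsh equivalent of setup.sh; adjust the path to
# match your unpacked distribution. setup.sh also prepends PYTHONPATH
# (and, on Linux, LD_LIBRARY_PATH) in the same way -- see the warning
# below.
$ setenv PATH ${HOME}/pylith/bin:${PATH}
\end{shell}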
\warning{The binary distribution contains PyLith and all of its dependencies. If you have any of this software already installed on your system, you need to be careful in setting up your environment so that preexisting software does not conflict with the PyLith binary. By default the \filename{setup.sh} script will prepend to the PATH and PYTHONPATH (for Darwin and Linux) and LD\_LIBRARY\_PATH (for Linux) environment variables. This will prevent most conflicts.}

\warning{The PyLith binary distribution for {\bf Darwin} systems is built using the system clang compiler suite and the system Python. {\bf This means the system Python must be in your path to use the PyLith binary executable}; ensure \filename{/bin} and \filename{/usr/bin} are at the beginning of the PATH environment variable, which is done automatically if you use the \filename{setup.sh} script. {\bf This condition is often violated if you have Python installed from Anaconda, HomeBrew, MacPorts, etc.\ and set the PATH variable in your bash configuration file.}}

\subsection{Windows 10}
\label{sec:install:windows}

PyLith is developed within the Unix/Linux framework, and we do not provide a native PyLith binary distribution for Windows. The preferred approach to installing PyLith on a computer running Windows 10 is to enable use of a Linux subsystem. This permits use of the PyLith Linux x86\_64 binary within the bash environment. To enable the Linux subsystem on Windows 10 build 14316 and later (users running an earlier Windows build should use the PyLith Docker container):
\begin{enumerate}
\item Go to \menu{Settings} $\rightarrow$ \menu{Security}.
\item Under \menu{For developers} select \menu{Developer mode}. This step should not be required for Windows build 16215 and later.
\item Go to \menu{Control Panel} $\rightarrow$ \menu{Programs} $\rightarrow$ \menu{Turn Windows Features On or Off}.
\item Enable \menu{Windows Subsystem for Linux} and click \menu{OK}.
\item Restart the computer.
\item Go to \menu{Start} $\rightarrow$ \menu{bash}. You will be prompted to download "Bash on Ubuntu on Windows" from the Windows Store. Create a user account and password for the bash environment.
\item Install the PyLith Linux x86\_64 binary within the bash environment following the instructions for installing the PyLith binary for Linux. You will run PyLith within the bash environment just like you would for a Linux operating system.
\end{enumerate}

\subsection{Extending PyLith and/or Integrating Other Software Into PyLith}
\newfeature{v.2.2.0}

We have constructed the binary package so that you can extend PyLith and/or build additional software for integration with PyLith using the binary distribution.
\begin{description}
\item[Darwin] The binary package includes the header files for PyLith and all of its dependencies. Use the clang compiler and Python provided with the operating system. You will need to install XTools.
\item[Linux] The binary package includes the GNU compilers and Python, as well as the header files for PyLith and all of its dependencies.
\end{description}

\tip{We encourage anyone extending PyLith to fork the PyLith repository and build from source using the PyLith Installer Utility to facilitate contributing these features back into the CIG repository via pull requests.}

\section{Installation of PyLith Docker Container}

As an alternative to installing a binary package, we provide a Docker container for running PyLith in a self-contained virtual environment. Docker containers are a smaller, simpler alternative to a virtual machine.
The PyLith Docker container provides a Debian Linux environment with a pre-built PyLith executable, the vim text editor, the iceweasel (GNU version of Firefox) web browser, and the matplotlib Python module.

\tip{In nearly all cases, installing a PyLith binary provides easier integration with mesh generation and post-processing tools, so binaries are preferred over the PyLith Docker container. This installation method targets users running Windows versions earlier than Windows 10 build 14316.}

\subsection{Setup (first time only)}
\begin{enumerate}
\item Install Docker (See \url{https://www.docker.com/products/docker})
\item Create a container to store persistent user data\\
This container, called pylith-data, will hold a directory where all your user data can be stored for use with PyLith within Docker. The data can persist for different versions of PyLith; that is, you can update to a newer version of PyLith and your user data will still be available. This directory is not directly accessible from your host computer. However, you can copy files to/from your host filesystem using ``docker cp'' (see below).
\end{enumerate}

\begin{shell}[]
# Create the container
$ docker create --name pylith-data geodynamics/pylith-data

# Run the docker container and copy examples to the persistent storage.
$ docker run -ti --volumes-from pylith-data geodynamics/pylith
# This next command is run WITHIN the docker container.
$ cp -R $HOME/pylith-VERSION/examples $HOME/data
\end{shell}

\subsection{Run Unix shell within Docker to use PyLith}

To run the container with a text-only interface:
\begin{shell}
$ docker run -ti --volumes-from pylith-data geodynamics/pylith
\end{shell}

To run the container and allow display of windows on the host computer (requires that X-Windows be installed):
\begin{shell}
# Darwin: Allow X connections
$ xhost +YOUR_IP_ADDRESS; DISPLAY=YOUR_IP_ADDRESS:0
# Linux: Allow X connections
$ xhost +local:root

# For Linux and Darwin, continue with the following lines.
$ XSOCK=/tmp/.X11-unix
$ docker run -ti --volumes-from pylith-data \
  -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK geodynamics/pylith
\end{shell}

In addition to a minimalist Debian Linux distribution, PyLith, and all of its dependencies, the container includes the following useful utilities:
\begin{description}
\item[vim] Lightweight text editor
\item[matplotlib] Python plotting module
\item[iceweasel] GNU version of Firefox
\end{description}

\important{We do not yet include ParaView due to difficulties associated with setting up rendering on the host display outside the container. You will need to copy the output files to your host machine to view them in ParaView as described later.}

\subsubsection{Using Docker containers}
\begin{itemize}
\item To ``pause'' a container: \texttt{Control-p Control-q}
\item To attach to a ``paused'' or ``running'' container.
\begin{shell}
# Get the container id.
$ docker ps
# Attach to the container
$ docker attach CONTAINER_ID
\end{shell}
\item To restart an existing container after it has exited.
\begin{shell}
# Get the container id.
$ docker ps -a
# Start and then attach to the container
$ docker start CONTAINER_ID
$ docker attach CONTAINER_ID
\end{shell}
\end{itemize}

\subsection{Copy data to/from persistent storage volume}

These commands are run on the local host outside the container, not inside the Docker container. They are used to move files between your host machine and the PyLith Docker container.
For example, you will generate your mesh on the host, copy the mesh file into the Docker container, run PyLith within the container, and then copy the output files to the host to display in ParaView.

\begin{shell}
# Copy data FROM persistent storage volume TO local host
$ docker cp pylith-data:/data/pylith-user/PATH/FILENAME LOCAL_PATH

# Copy data FROM local host TO persistent storage volume
$ docker cp LOCAL_PATH pylith-data:/data/pylith-user/PATH/
\end{shell}

\subsection{Docker Quick Reference}
\begin{shell}
# List local docker images.
$ docker images

# List all docker containers.
$ docker ps -a

# List running docker containers.
$ docker ps

# Remove docker container
$ docker rm CONTAINER_ID

# Remove docker image
$ docker rmi IMAGE_ID
\end{shell}

\section{Installation from Source}

PyLith depends on a number of other packages (see Figure \vref{fig:pylith-dependencies}). This complicates building the software from the source code. In many cases some of the packages required by PyLith are available as binary packages. Using binary packages for the dependencies removes the burden of configuring, building, and installing them, but it can come with its own host of complications if consistent compiler and configuration settings are not used across all of the packages on which PyLith depends. This is usually not an issue with Linux distributions, such as Fedora, Ubuntu, and Debian, that have good quality control; it can be an issue with Darwin package managers, such as Fink, MacPorts, and Homebrew, where there is limited enforcement of consistency across packages. Nevertheless, PyLith can be built on most systems provided the instructions are followed carefully. PyLith is developed and tested on Linux and Mac OS X.

A small utility, PyLith Installer, removes most of the obstacles in building PyLith and its dependencies from source. For each package this utility downloads the source code, configures it, builds it, and installs it. This ensures that the versions of the dependencies are consistent with PyLith and that the proper configure arguments are used. The minimum requirements for using the PyLith Installer are a C compiler, \filename{tar}, and \filename{wget} or \filename{curl}. Detailed instructions for how to install PyLith using the installer are included in the installer distribution, which is available from the PyLith web page \url{geodynamics.org/cig/software/packages/short/pylith/}.

\section{Verifying PyLith is Installed Correctly}

The easiest way to verify that PyLith has been installed correctly is to run one or more of the examples supplied with the binary and source code. In the binary distribution, the examples are located in \filename{src/pylith-\pylithVersionNumber/examples} while in the source distribution, they are located in \filename{pylith-\pylithVersionNumber/examples}. Chapter \vref{cha:examples} discusses how to run and visualize the results for the examples. To run the example discussed in Section \vref{sec:example:3dhex8-static}:
\begin{shell}
$ cd examples/3d/hex8
$ pylith step01.cfg

# A bunch of stuff will be written to stdout. The last few lines should be:
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-snes_atol value: 1.0e-9
Option left: name:-snes_converged_reason (no value)
Option left: name:-snes_error_if_not_converged (no value)
Option left: name:-snes_linesearch_monitor (no value)
Option left: name:-snes_max_it value: 100
Option left: name:-snes_monitor (no value)
Option left: name:-snes_rtol value: 1.0e-10
\end{shell}
If you run PyLith in a directory without any input, you will get the error message:
\begin{shell}
$ pylith
>> {default}::
-- pyre.inventory(error)
-- meshimporter.meshioascii.filename <- ''
-- Filename for ASCII input mesh not specified.
   To test PyLith, run an example as discussed in the manual.
>> {default}::
-- pyre.inventory(error)
-- timedependent.homogeneous.elasticisotropic3d.label <- ''
-- Descriptive label for material not specified.
>> {default}::
-- pyre.inventory(error)
-- timedependent.homogeneous.elasticisotropic3d.simpledb.label <- ''
-- Descriptive label for spatial database not specified.
>> {default}::
-- pyre.inventory(error)
-- timedependent.homogeneous.elasticisotropic3d.simpledb.simpleioascii.filename <- ''
-- Filename for spatial database not specified.
pylithapp: configuration error(s)
\end{shell}
This indicates that several required settings have no default values and must be specified in order to run PyLith, including the filename for the finite-element mesh.

\section{Configuration on a Cluster}
If you are installing PyLith on a cluster with a batch system, you can configure Pyre such that the \filename{pylith} command automatically submits jobs to the batch queue. Pyre contains support for the LSF, PBS, SGE, and Globus batch systems.

The command to submit a batch job depends upon the particular batch system used. Further, the command used in a batch script to launch an MPI program varies from one cluster to the next. This command can vary between two clusters, even if the clusters use the same batch system! On some systems, \filename{mpirun} is invoked directly from the batch script. On others, a special wrapper is used instead.

Properly configured, Pyre can handle job submissions automatically, insulating users from the details of the batch system and the site configuration. This feature has the most value when the system administrator installs a global Pyre configuration file on the cluster (under \filename{/etc/pythia-0.8}), for the benefit of all users and all Pyre-based applications.

\subsection{Launchers and Schedulers}
\label{sec:launchers:schedulers}
If you have used one of the batch systems, you will know that the batch system requires you to write a script to launch a job. Fortunately, launching a parallel PyLith job is simplified by Pyre's \facility{launcher} and \facility{scheduler} facilities. Many properties associated with \facility{launcher} and \facility{scheduler} are pertinent to the cluster you are on, and are best customized in a configuration file. Your personal PyLith configuration file (\filename{\$HOME/.pyre/pylithapp/pylithapp.cfg}) is suitable for this purpose. On a cluster, the ideal setup is to install a system-wide configuration file under \filename{/etc/pythia-0.8}, for the benefit of all users.

Pyre's \facility{scheduler} facility is used to specify the type of batch system you are using (if any):
\begin{cfg}
<h>[pylithapp]</h>
# The valid values for scheduler are 'lsf', 'pbs', 'sge', 'globus', and 'none'.
<f>scheduler</f> = lsf
# Pyre's launcher facility is used to specify the MPI implementation.
# The valid values for launcher include 'mpich' and 'lam-mpi'.
<f>launcher</f> = mpich
\end{cfg}
You may find the \property{dry} option useful while debugging the \facility{launcher} and \facility{scheduler} configuration. This option causes PyLith to perform a ``dry run,'' dumping the batch script or mpirun command to the console instead of actually submitting it for execution (the output is only meaningful if you're using a batch system).
\begin{shell}
# Display the bash script that would be submitted.
$ pylith --scheduler.dry
# Display the mpirun command.
$ pylith --launcher.dry
\end{shell}

\subsection{Running without a Batch System}
On a cluster without a batch system, you need to explicitly specify the machines on which the job will run. Suppose the machines on your cluster are named n001, n002, \ldots, and you want to run the job on machines n001, n003, n004, and n005 (perhaps n002 is down for the moment). To run an example, create a file named \filename{mymachines.cfg} which specifies the machines to use:
\begin{cfg}
<h>[pylithapp.launcher]</h>
<p>nodegen</p> = n%03d
<p>nodelist</p> = [1,3-5]
\end{cfg}
The \property{nodegen} property is a printf-style format string, used in conjunction with \property{nodelist} to generate the list of machine names. The \property{nodelist} property is a comma-separated list of node numbers and/or ranges of node numbers in square brackets. Now, invoke the following:
\begin{shell}
$ pylith example.cfg mymachines.cfg
\end{shell}
This strategy gives you the flexibility to create an assortment of \filename{cfg} files (with one \filename{cfg} file for each machine list) which can be easily paired with different parameter files. If your machine list does not change often, you may find it more convenient to specify default values for \property{nodegen} and \property{nodelist} in \filename{\$HOME/.pyre/pylithapp/pylithapp.cfg} (which is read automatically).
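For example, a personal \filename{pylithapp.cfg} along these lines makes your machine list the default for every run; this is a minimal sketch, assuming a hypothetical eight-machine cluster named n001 through n008:
\begin{cfg}
<h>[pylithapp.launcher]</h>
<p>nodegen</p> = n%03d
<p>nodelist</p> = [1-8]
\end{cfg}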
Then, you can run any simulation with no additional arguments:
\begin{shell}
$ pylith example.cfg
\end{shell}
\warning{This assumes your machine list has enough nodes for the simulation in question.}
You will notice that a machine file \filename{mpirun.nodes} is generated. It will contain a list of the nodes where PyLith has run.

\subsection{Using a Batch System}
Many clusters use some implementation of a PBS (e.g., TORQUE/Maui) or LSF batch system. The examples below illustrate the use of some of the more important settings. You may need to make use of more options or adjust these to submit jobs on various clusters. These settings are usually placed in \filename{\$HOME/.pyre/pylithapp/pylithapp.cfg} or in a system-wide configuration file. They can be overridden on the command line, where one typically specifies the number of compute nodes and number of processes per compute node, the job name, and the allotted time for the job:
\begin{shell}
$ pylith example1.cfg \
    --job.queue=debug \
    --job.name=example1 \
    --job.stdout=example1.log \
    --job.stderr=example1.err \
    --job.walltime=5*minute \
    --nodes=4
\end{shell}
\important{The value for nodes is equal to the number of compute nodes times the number of processes (usually the number of cores) requested per compute node. Specifying the number of processes per compute node depends on the batch system. For more information on configuring Pyre for your batch system, see CIG's Pythia page \url{geodynamics.org/cig/software/packages/cs/pythia}.}

\subsubsection{LSF Batch System}
\begin{cfg}
<h>[pylithapp]</h>
<f>scheduler</f> = lsf ; the type of batch system

<h>[pylithapp.lsf]</h>
<p>bsub-options</p> = [-a mpich_gm] ; special options for 'bsub'

<h>[pylithapp.launcher]</h>
<p>command</p> = mpirun.lsf ; 'mpirun' command to use on our cluster

<h>[pylithapp.job]</h>
<p>queue</p> = normal ; default queue for jobs
\end{cfg}

\subsubsection{PBS Batch System}
\begin{cfg}
<h>[pylithapp]</h>
<f>scheduler</f> = pbs ; the type of batch system

<h>[pylithapp.pbs]</h>
<p>shell</p> = /bin/bash ; submit the job using a bash shell script

# Export all environment variables to the batch job
# Send email to [email protected] when the job begins, ends, or aborts
<p>qsub-options</p> = -V -m bea -M [email protected]

<h>[pylithapp.launcher]</h>
<p>command</p> = mpirun -np ${nodes} -machinefile ${PBS_NODEFILE}
\end{cfg}
For most PBS batch systems you can specify N processes per compute node via the command line argument \commandline{-{}-scheduler.ppn=N}.
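As a concrete sketch (the file name and numbers are illustrative): to run on two compute nodes with four processes each, request eight total processes and four processes per compute node:
\begin{shell}
# 2 compute nodes x 4 processes per node = 8 total processes
$ pylith example1.cfg --nodes=8 --scheduler.ppn=4
\end{shell}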
\section{Getting Help and Reporting Bugs}
\label{sec:help}
The CIG Short-Term Crustal Dynamics Mailing List \url{[email protected]} is dedicated to CIG issues associated with short-term crustal dynamics, including the use of PyLith. You can subscribe to the mailing list and view messages at the cig-short mailing list page \url{geodynamics.org/cig/lists/cig-short}.

CIG uses \object{GitHub} for source control and bug tracking. If you find a bug in PyLith, please submit a bug report to the GitHub issue tracking system for PyLith \url{https://github.com/geodynamics/pylith/issues}. Of course, it is helpful to first check to see if someone else has already submitted a report related to the issue; one of the CIG developers may have posted a workaround for the problem. You can reply to a current issue by clicking on the issue title. To submit a new issue, click on the \object{New Issue} button.

% End of file
{ "alphanum_fraction": 0.7771793861, "avg_line_length": 42.2944444444, "ext": "tex", "hexsha": "32af97dc1bb81fab3caaa5b94eac920f4d6e2595", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f74060b7b19d7e90abf8597bbe9250c96593c0ad", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joegeisz/pylith", "max_forks_repo_path": "doc/userguide/install/install.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f74060b7b19d7e90abf8597bbe9250c96593c0ad", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joegeisz/pylith", "max_issues_repo_path": "doc/userguide/install/install.tex", "max_line_length": 103, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f74060b7b19d7e90abf8597bbe9250c96593c0ad", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joegeisz/pylith", "max_stars_repo_path": "doc/userguide/install/install.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-20T17:18:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-20T17:18:28.000Z", "num_tokens": 5719, "size": 22839 }
\documentclass[output=paper]{langsci/langscibook} \ChapterDOI{10.5281/zenodo.4680328} \author{Mary Aizawa Kato\affiliation{Universidade Estadual de Campinas} and Maria Eugenia L. Duarte\affiliation{Universidade Federal do Rio de Janeiro}} \title{Parametric variation: The case of Brazilian Portuguese null subjects} \abstract{This chapter revisits comparative and diachronic studies of linguists analysing Brazilian Portuguese (BP) with regard to the NSP, especially in view of recent debates on the existence of the so-called partial null subject languages. It will be shown that BP is losing the properties of a prototypical NSL like European Portuguese (EP), with a rich inflectional paradigm, but, as the change is very recent, there is still not a consensus regarding the target of the change. Our question is whether BP classifies as a PNS language like Finnish, Hebrew or Marathi, as was recently claimed in \textcite{Holmberg2010}, and \textcite{HolShee2010}. Methodologically, it is our purpose to observe the overt and null subjects in real data so as to check whether eventual optionality of null and overt pronouns can be attributed to a grammatical competition from a diachronic perspective \parencite{Kroch1994} or to some licensing possibility within a single type of grammar, which is normally a view taken by formal linguists analyzing synchronic data. Using acquisition\is{language acquisition} data we will show that while null non-referential subjects are part of Brazilian \isi{core grammar}, null referential subjects are not, and their existence in the production of Brazilian literate adults results from instruction through schooling. The chapter suggests that from a typological view BP is a semi-NS language like Icelandic.} \begin{document}\glsresetall \maketitle \section{The null subject parameter: A background} Since the advent of the principles and parameters model within the government and binding theory (\citealt{Chomsky1981,Rizzi:1982}, a.o), the \gls{NSP} has received the widest range of discussions and refinements. Not only did its formal formulation deserve a lot of attention, but its typological binary concept (\citealt{Chomsky1981}, based on \citealt{Taraldsen1978}) gave rise to a new way to do comparative and historical linguistics. But \citet[144]{Rizzi:1982} soon pointed to the fact that what was considered a single parameter should be decomposed into two sub-parameters,\is{parameters} distinguishing languages allowing both null referential and expletive\is{expletives} subjects from those licensing only null expletives (what he calls \emph{semi-pro-drop} languages)\footnote{Which we will later call semi-[non-NS] languages, after \citet{Biberauer2010}.} (e.g.\ \ili{Italian} vs.\ \ili{German}). Further studies in the 1980s and 1990s would show that morphological richness\footnote{“The intuitive idea is that where there is overt agreement, the subject can be dropped, since the deletion is recoverable” \parencite[241]{Chomsky1981}.} was not sufficient to explain licensing and identification of null subjects. \citegen{Huang1984} classic article showed that null subjects were also licensed in systems like Chinese, without any inflection for mood, tense, number and person, which led to a new hypothesis \parencite{JaeggliSafir1989}, according to which what licenses null subjects is not a “rich” inflectional verbal paradigm but its morphological uniformity. 
In the case of a paradigm consisting of different affixes, identification would occur through agreement markers; in the case of a paradigm consisting of a single stem, identification would be possible through a discursive topic. In the first case the NS would be a pronominal category; in the second, a variable. If, however, a paradigm is mixed, the NS would not be licensed. \citet{Roberts1993} would bring new contributions to the discussion based on diachronic evidence from medieval \ili{French}. He argued that a “functionally” rich paradigm, i.e.\ with a zero ending and two identical forms for different grammatical persons, could act as a “formally” rich one. \citeauthor{Roberts1993}, however, pointed out the fact that the limit of syncretic forms could not be exceeded. This proposal has been used to explain licensing of null subjects in European Portuguese and Brazilian Portuguese before the latter underwent a change in its inflectional paradigm, as we will show in \Cref{sec:26.2.2}. The cluster of properties, which has been crucially related to the \gls{NSL}\is{null subject languages} since the classical formulation of the \gls{NSP}, has not been thoroughly confirmed in more than thirty years of research, which has led to negative conclusions and certain scepticism with respect to the principles and parameters theory, according to \citet{RobHol2010}. \glsunset{PNS} In recent years, in the light of new theoretical and empirical evidence, the notion of “partially” null subject (\gls{PNS}) languages has been introduced (cf.\ \citealt{Holmberg2005}; works in \citealt{Biberauer2008,BibHolRobShee2010}, a.o.), which draws a much more complex picture, leading to a proposal of \isi{parameter hierarchies}, able to accommodate different parametric values. The representation in \figref{fig:ex:26.1}, still covering languages with some sort of agreement, includes such \gls{PNS}\is{null subject languages!partial null subject languages} systems (cf.\ \citealt{HolShee2010}; \citealt[6]{Sheehan2014b}, a.o.).\largerpage[2] \begin{figure} \caption{Null subject parameter hierarchy (preliminary)}\label{fig:ex:26.1} \begin{tikzpicture}[baseline=(root.base), align=center, font=\small] \Tree [.\node(root){Is INFL [$+$pronoun]?}; {No\\\emph{overt subject}} [.\node(y1){Yes:}; {No\\\emph{expletive \emph{pro}}} [.\node(y2){Yes:}; {Yes\\\emph{referential \emph{pro}}} [.\node(y3){No:}; {Yes\\\emph{partial NS}} {No\\\emph{\emph{quasi}-argumental \emph{pro}}} ] ] ] ] \node [right=1.5mm of y1.east] {Is INFL [$+$pronoun] referential?}; \node [right=1.5mm of y2.east] {Is INFL [$+$person]?}; \node [right=1.5mm of y3.east] {Can INFL be bound?}; \end{tikzpicture} \end{figure} Based on evidence coming from a number of languages of different families, \citeauthor{RobHol2010} list, beside non-null subject languages,\footnote{We must keep in mind that non-null subject languages do not admit null subjects in neutral contexts. 
We do not ignore the fact that such systems can exhibit null subjects, pragmatically identified in non-neutral contexts (see, for instance, null first person subjects in \ili{English} diaries, \citealt{Haegeman1990}).} the following types of \glspl{NSL}: consistent \glspl{NSL}, such as \ili{Italian}, \ili{Greek} and \ili{Turkish}, with “rich” inflection; null expletive\is{expletives} languages (also referred as \emph{semi} \emph{pro-drop}), which do not allow referential NSs, among which we can find \ili{German} and some varieties of \ili{Dutch} and many creoles, such as \ili{Capeverdian}, \ili{Haitian}, and \ili{Jamaican}; radical null subject\is{null subject languages!radical null subject languages} languages (\emph{discourse pro-drop}), such as Chinese, \ili{Japanese} and \ili{Thai}, with no agreement marker, which allow null subjects and objects in appropriate discursive conditions; and finally, partial null subject languages, including \ili{Finnish}, \ili{Hebrew}, \ili{Icelandic}, \ili{Russian}, \ili{Marathi} (a variety spoken in western India) and \ili{Brazilian Portuguese}. According to the authors, they constitute a more difficult type to define because the languages under this label may show a very diverse range of characteristics. \gls{BP}\il{Brazilian Portuguese}, on the contrary, instead of creating a lexical expletive\is{expletives} like \ili{French}, shows a competition between a null subject, and a prominent constituent moved to the structural subject position, resembling constructions of discourse configurational languages. The proposal of \isi{parameter hierarchies} can be related to the notion of \emph{micro}-parameters\is{parameters} \citep{Kayne1996}, which could explain small differences among similar systems. According to \citet{Roberts2012}, each formal feature defines a distinct parameter, and he also argues that parameters move from “macro” to “micro” levels; thus, it would be natural to expect lower layers in the hierarchy to become more marked, showing a more complex behaviour than upper layers. The relevance of the parameter hierarchy\is{parameter hierarchies} for acquisition\is{language acquisition} should be the prediction that higher options would be preferred as they are less marked; as more marked options appear in the primary data, the learner moves to lower levels, until the definition of a parametric setting compatible with the data is accomplished. The distinction between \emph{micro-} and \emph{macro}-parameters\is{parameters} would not be, according to \citet[310]{Roberts2012}, part of \gls{UG}\is{Universal Grammar}, but a property that emerges as a result of the interaction of the learner with the primary data and \gls{UG}\is{Universal Grammar}. These hierarchies also include some predictions about diachronic changes: they should happen in the direction of upper hierarchies, less marked, driven by functional pressures or linguistic contact. Finally, refining \figref{fig:ex:26.1}, \textcite{RobHol2010} proposed the \gls{NSP} hierarchy in \figref{fig:ex:26.2}, suggesting that each functional head\is{functional items} defines its parametric hierarchy. 
\begin{figure} \caption{Null subject parameter hierarchy\label{fig:ex:26.2}} \begin{tikzpicture}[baseline=(a.base),% level distance=50pt,% sibling distance=-50pt,% font=\small] \tikzset{level 4/.style={sibling distance=5pt}} \Tree [.\node(a){\vphantom{Y}\\a.}; {No\\\emph{Radical pro-drop}} [.\node(b){Yes\\b.}; {Yes\\\emph{Pronominal arguments}} [.\node(c){No\\c.}; {No\\\emph{Non pro-drop}} [.\node(d){Yes\\d.}; {Yes\\\emph{Consistent null subject}} No ] ] ] ] \node [right=1.5mm of a.south, align=left, anchor=south west] {Are uφ-features present on probes?}; \node [right=1.5mm of b.south, align=left, anchor=south west] {Are uφ-features present on all probes?}; \node [right=1.5mm of c.south, align=right, anchor=south west] {Are uφ-features fully specified on some probes?}; \node [right=1.5mm of d.south, align=right, anchor=south west] {Are uφ-features fully specified on T?}; \end{tikzpicture} \end{figure} In sum, the attempt to accommodate different hierarchies, keeping the binary values of each parameter,\is{parameters} is in itself evidence that it is not an easy enterprise. As for the label \gls{NSP} in the interpretation it has in the theory of principles and parameters today, it seems to include several sub-types of languages, as argued by \citet{Biberauer2010}. We will see that \gls{BP}\il{Brazilian Portuguese} exhibits a very peculiar behaviour in this regard. \section{Preliminaries}\label{sec:26.2} \subsection{Our aims}\label{sec:26.2.1} The aim of this chapter is to revisit the comparative and diachronic studies of linguists analysing \gls{BP} with regard to the \gls{NSP}, especially in view of recent debates on the existence of the so-called \gls{PNS}\is{null subject languages!partial null subject languages} languages. It is a well known fact that \gls{BP}\il{Brazilian Portuguese} is losing the properties of a prototypical \gls{NSL}\is{null subject languages}, like \gls{EP}, with a rich inflectional paradigm, but, as the change is very recent, there is still not a consensus regarding the target of the change. Our question is whether \gls{BP}\il{Brazilian Portuguese} classifies as a \gls{PNS}\is{null subject languages!partial null subject languages} language like \ili{Finnish}, \ili{Hebrew} or \ili{Marathi}, as was recently claimed in \citet{Holmberg2010} and \citet{HolShee2010}. Methodologically, it is our purpose to observe the overt and null subjects in real data so as to check whether eventual optionality of null and overt pronominals can be attributed to a grammatical competition from a diachronic perspective \citep{Kroch1994} or to some licensing possibility within a single type of grammar, which is normally a view taken by formal linguists analysing synchronic data. Using acquisition\is{language acquisition} data (\citealt{Magalhaes2003} and \citealt{Kato2011}), we will try to see how the Brazilian child selects their grammar, and will follow the hypothesis that null referential subjects in the Brazilian literate adult are not residues of the old grammar, but the result of instruction through schooling. Our upcoming sections are organized as follows: \Cref{sec:26.2.2}. 
describes the \gls{BP}\il{Brazilian Portuguese} diachronic facts; \Cref{sec:26.2.3} brings some considerations on acquisition\is{language acquisition} data; \Cref{sec:26.3} contains a comparative analysis of \gls{BP}\il{Brazilian Portuguese} with four types of languages: \Cref{sec:26.3.1} with \gls{EP}\il{European Portuguese}, a consistent \gls{NSL}\is{null subject languages}, with rich Agr inflection; \Cref{sec:26.3.2} with Japanese, a radical type,\is{null subject languages!radical null subject languages} or a discourse configurational language type, with no Agr inflection; \Cref{sec:26.3.3} with Finnish, a partial \gls{NSL}\is{null subject languages}; \Cref{sec:26.3.4} with \ili{English}, a [$-$NS] language; and \Cref{sec:26.3.5} with \ili{Icelandic}, the so-called \emph{semi[NS]} language. In the conclusions we will summarize the findings of the article, namely that \gls{BP}\il{Brazilian Portuguese} \isi{core grammar} is set to a [$-$NS] language with referential subjects and to a [+NS] language with regard to non-referential ones. With regard to the literate Brazilians' \isi{E-language} it will be shown to exhibit a competition with regard to referential subjects, between overt pronominal subjects of the \ili{English} type, and NSs, of the radical type.\is{null subject languages!radical null subject languages} With regard to non-referential subjects, the literate adult maintains the same types of NSs exhibited by the child. \subsection{From Old Portuguese to Modern Brazilian Portuguese}\label{sec:26.2.2} As is well known among Romanists, \gls{OFr}\il{Old French} was “a sort of V2\is{V2 word order} type of language” (cf.\ (\ref{ex:26.3}a)) and also a \gls{NSL}\is{null subject languages} (cf.\ \ref{ex:26.4}a) (\citealt{Adams1987}, \citealt{Roberts1993}, a.o.). The latter property was lost when \gls{OFr} lost this characteristic. According to \citet{Ribeiro1995}, \gls{OP} was also a \gls{NSL}\is{null subject languages} and a “sort of V2\is{V2 word order} type of language”\footnote{Cf.\ \textcite{Ribeiro1995} for OP and \textcite{TorresMoraes1993} for the Classic period. Brazilian authors acknowledge that \ili{Romance} V2\is{V2 word order} is not exactly like the \ili{Germanic} V2\is{V2 word order}. See also \citet{Kaiser1999} and \citet{Rinke2009} against Old Portuguese as a V2\is{V2 word order} language.} (cf.\ \ref{ex:26.3}b). \gls{EP}\il{European Portuguese} retained both properties, while \gls{BP}\il{Brazilian Portuguese} lost both the same way \gls{OFr} did. \ea%3 \label{ex:26.3} \ea \ili{Old French} V2\is{V2 word order}\\ \gll Eisint revindrent li mesage en la ville.\\ then returned the messenger to the town\\ \glt ‘Then the messenger returned to town.' 
\ex \ili{Old Portuguese} V2\is{V2 word order}\\ \gll Maravilhosas \textbf{son} estas cousas que co’ntas, padre\dots\\ beautiful are these things that tell.\Ssg{}, father\\ \glt ‘Beautiful are the things that you tell us, father.’ \z \z However, contrary to \ili{Germanic} languages, \gls{OFr} and OP could both exhibit the V1 pattern (cf.\ \citealt{Kaiser1999}; \citealt{Ribeiro1995}), which in French was restricted to VS, while in Portuguese it exhibited a null subject: \ea\label{ex:26.4} \ea \ili{Old French} V1\\ \gll \textbf{Respundi} li evesches.\\ answered the bishop\\ \glt \enquote*{The bishop answered.} \ex \ili{Old Portuguese} V1\\ \gll \textbf{Quero} que m’o digas e desejo mui de coraçon a saber\dots{}\\ want.\Fsg{} that me=it tell.\Ssg{} and wish.\Fsg{} much of heart to know\\ \glt \enquote*{I want you to tell me, and I strongly wish to know...} \z \z If we take fronted Focus\is{focus} structures (FocusVS) as a diagnostic of V2\is{V2 word order} structures in older periods of Portuguese, we can say that these started to disappear in the 18th century in the \gls{BP}\il{Brazilian Portuguese} variety \parencite{KatoRibeiro2009}. On the other hand, the optionality between NS and overt pronominal subjects in \gls{BP}\il{Brazilian Portuguese} started to appear by the end of the 19th century (\citealt{Tarallo1985,Duarte1993}). It is clear, therefore, that V2\is{V2 word order} structures started to disappear one century before the NS began to decline, suggesting that the two changes were independent in \gls{BP}\il{Brazilian Portuguese}, contrary to what has been observed in \ili{French}. \begin{sloppypar} A number of investigations on the morphosyntax of Brazilian Portuguese point to the conclusion that variable phenomena have a very regular distribution in the country. In fact, the polarization to which \citet{Lucchesi2009b} refers should be related particularly to variation in the use of agreement markers. The author, in a recent overview of sociolinguistic polarization in Brazil \parencite{Lucchesi2015}, distinguishes those processes of variation and change that reach all sectors of Brazilian society \emph{in the same direction} from those processes which take opposite directions, setting apart high and middle sectors from those at the base of the social pyramid. In spite of that, the author recognizes a sort of “leveling” towards non-standard variants. \end{sloppypar} In fact, the alleged contrast may be valid when we consider the rural--urban \emph{continuum}. Results for contemporary Brazilian morphosyntax show that, when we take into account Brazilian Portuguese spoken in the cities, many so-called “non-standard” variants have reached all sectors of society, in such a way that it has become inappropriate to use the distinction standard/non-standard to refer to spontaneous speech produced by people with fewer or more years of school attendance. A possible explanation for that could be in the successive migration flows from 1940, which would give rise to intense contact among a wide range of linguistic varieties from all over the country and might, thus, be among the causes of the implementation of non-standard variants in the city, moving towards a new concept of the “standard norm”.\footnote{The rural exodus, with data from the Brazilian Institute of Geography and Statistics, shows the deep transformation related to those intense migration flows. 
Brazil, an eminently rural country in 1940, reached the year of 2000 with 80\% of its population in the cities.} The fact is that, as far as the cities are concerned, descriptions of \gls{BP}\il{Brazilian Portuguese} morphosyntax do not allow us to set a boundary to separate varieties. In an attempt to trace the expression of referential subjects, \citeauthor{Duarte1993}’s (\citeyear{Duarte1993,Duarte2012}) diachronic analysis shows the loss of the “\isi{avoid pronoun principle}” \citep{Chomsky1981} in popular theatre plays, written in Rio de Janeiro in the 19th and the 20th centuries. The results for referential subjects can be seen in~\Cref{fig:26.1}. %\begin{figure} % \centering % \includegraphics[width=.75\textwidth]{./img/fig1.pdf} %\caption{Null subjects in \gls{BP}\il{Brazilian Portuguese} in two centuries %(From \citealt{Duarte1993})} %\label{fig:26.1} %\end{figure} \begin{figure}[t] \pgfplotstableread{data/katoduarte-fig1.csv}{\table} \pgfplotstablegetcolsof{\table} \pgfmathtruncatemacro\numberofcols{\pgfplotsretval-1} \begin{tikzpicture} \begin{axis}[ cycle list name=black white, height = 4cm, legend cell align=left, legend columns=3, legend style={font=\footnotesize,anchor=north,at={(0.5,1.2)},anchor=north}, line width=.75pt, smooth, width = \textwidth, xmin = 1835, xmax = 2000, xtick = data, x tick label style={/pgf/number format/1000 sep=}, xlabel = {Date}, ymin = 0, ymax = 100, ylabel = {\%}, axis lines*=left ] \addplot+ table {\table}; \end{axis} \end{tikzpicture} \caption{Null subjects in \ili{Brazilian Portuguese} in two centuries (from \citealt{Duarte1993})\label{fig:26.1}} \end{figure} The rates of null subjects across the periods analysed suggest three stages in the process of change, which coincide with changes in the inflectional paradigm triggered by apocope in the second person singular, a very common phenomenon, and third person plural, a socially constrained phenomenon, as well as by two important changes in the set of nominative\is{nominative case} pronouns, shown in \Cref{tab:26.1}.\footnote{Considering that the first author was born in 1815 and the fourth in 1884, we could assume that the change took place at the turn of the century. We are aware of the fact that tracing linguistic change over long periods of time implies using documents that do not capture the vernacular of their writers. 
Quoting \parencite[11]{Labov1994}, “historical linguistics can then be thought of as the art of making the best use of bad data”.} \begin{table}[t] \begin{tabularx}{1\textwidth}{lXXXX} \lsptoprule & Nominative\newline pronouns & Paradigm 1\newline19th century & Paradigm 2\newline 20th century/1 & Paradigm 3\newline 20th century/2\\ \midrule \Fsg{} & eu & cant\emph{o} & cant\emph{o} & cant\emph{o} \\ \Ssg{} & tu\newline \emph{você} & canta\emph{s}\newline -- & canta\emph{s}\newline canta$\varnothing$ & canta(\emph{s})\newline canta$\varnothing$ \\ \Tsg{} & ele, ela & canta$\varnothing$ & canta$\varnothing$ & canta$\varnothing$ \\ \Fpl{} & nós\newline \emph{a gente} & canto\emph{mos}\newline -- & canta\emph{mos}\newline canta$\varnothing$ & canta\emph{mos}\newline canta$\varnothing$\\ \Spl{} & vós\newline \emph{vocês} & canta\emph{is}\newline canta\emph{m} & --\newline canta\emph{m} & --\newline canta(\emph{m}) \\ \Tpl{} & eles, elas & canta\emph{m} & canta\emph{m} & canta(\emph{m}) \\ \lspbottomrule \end{tabularx} \caption{Evolution of verbal inflectional paradigm in \gls{BP} -- \emph{cantar} \enquote*{to sing} (adapted from~\citealt{Duarte1993})}\label{tab:26.1} \end{table} The plays written in the first three periods, exhibit six and sometimes five different forms, with a syncretism, represented by the address forms \emph{o(a) senhor(a)} \enquote*{the lord}, \enquote*{the lady} and \emph{Vossa Mercê} \enquote*{Your Grace}, which all combine with third person unmarked form for singular. This is what we attest for European Portuguese. The reduction of null subjects in the 1930s and the 1950s is triggered by the \isi{grammaticalization} of \emph{Vossa Mercê} as \emph{você}, which is fully inserted in the pronominal system as second person reference, while the pronoun \emph{tu} is abandoned by some authors.\footnote{For some reason to be investigated, the most popular authors of this type of “light” plays written in Rio de Janeiro made a choice in favor of \emph{você}. The city population has not abandoned the use of \emph{tu} but it was more restricted to the suburban areas, with a number of new textile industries, where people born in the city were concentrated.} Those who insist in keeping \emph{tu} and \emph{você} in the paradigm usually mix both forms to address the same person, not only in nominative\is{nominative case} function but in accusative and dative\is{dative case} functions as well.\footnote{This is real evidence of the \isi{grammaticalization} of \emph{você}; the loss of courtesy, originally distinguishing \emph{você}, is kept in European Portuguese, which maintains the complementary distribution between \emph{tu}, for family and close friends, and \emph{você}, usually null, for other social relations. Explicit \emph{você} coming from a stranger is not well accepted by older Portuguese. See \citet{LopesBrocardo2016} with respect to current \isi{grammaticalization} processes in \gls{BP}\il{Brazilian Portuguese}.} This change was further aggravated by the entry of \emph{a gente} (\enquote*{the folks}, \enquote*{the people}, similar in meaning to \ili{French} \enquote*{on}), in Paradigm 3, replacing first person plural \emph{nós} (we), also requiring the unmarked third person singular agreement, due to its nominal origin. We have enough evidence from diachronic research, according to which both processes started before the 19th century. 
With respect to \emph{a gente}, \citet{Lopes2003} shows that after a transitory period of ambiguity between a nominal reading or its interpretation as a pronoun, it is at the end of the 19th century that its full implementation is attested in variation with the conservative pronoun \emph{nós} (we), which has an exclusive ending \tuple{\text{-mos}}. With respect to \emph{você} (you), \citet{Lopes2003} claims that its variation with \emph{tu} (you) in letters, very sporadic in the 19th century, enters the system slowly in the 20th century. A side effect of this pronominalization is attested in the mixture of oblique and possessive pronouns of second and third persons in letters and plays written from the 1930s on. Today, \emph{você} (in variation with \emph{tu}) and \emph{a gente} are preferred not only for definite reference but for generic reference as well, in which case the former may or may not include the speaker and the addressee, the latter must include the speaker. Such changes have been the most significant trigger for the “impoverishment” of BP’s paradigm. Differently from the variable use of \tuple{\text{-s}} and \tuple{\text{-m}}, related to a phonological process (apocope) and constrained by social factors, there is no variation in the use of the unmarked verb form with the new pronouns derived from DPs. The consequence was the loss of the \emph{functional richness} of the inflectional paradigm, in \citegen{Roberts1993} terms. For \citet{Galves1993}, this reduction entails the loss of the semantic feature in the category \emph{person.} Associated with the feature \emph{number}, the paradigm was reduced to four possible combinations:\newpage \ea%5 \label{ex:26.5}\leavevmode\\[-1\baselineskip]% \begin{tabular}{lllll} $+$person & / & $-$plural & $>$ & \emph{-o} \\ $+$person & / & $+$plural & $>$ & \emph{-mos} \\ $-$person & / & $+$plural & $>$ & \emph{-m} \\ $-$person & / & $-$plural & $>$ & \emph{-}$\varnothing$ \\ \end{tabular} \z Such an \emph{impoverished} or \emph{weakened} paradigm would certainly affect the identification of an empty category. The empirical evidence of the late implementation of the two new pronouns does not sustain the claim that it could actually be the case that the set of pronouns changed as a consequence of the changes in the inflectional paradigm. The cases of apocope shown in the chart above were certainly a consequence of contact. However, additional evidence that African slaves and their descendants did not reduce the verbal paradigm drastically comes from important written documents produced by Africans, who learned Portuguese as a second language in the State of Bahia. Such documents, written in the 19th century – along the decades of 1830 and 1840 – consist of 53 Acts of the \emph{Sociedade Protetora dos Desvalidos} (Protecting Society of the Helpless), a fraternity founded by Africans to protect one another, who kept minutes (memoranda) of their regular meetings, written by five members. \citet{AlmeidaCarneiro2009} analysed the expression of pronominal subjects and their results show the preference for null subjects with rates of 68\% for \Fsg{}, 89\% \Fpl{}, 89\% for \Tsg{}, and 93\% for \Tpl{}. The paradigm used in the memoranda includes the pronoun \emph{nós} for \Fpl{} reference, with the canonical inflection \tuple{\text{-mos}}. The cases of non-agreement are restricted to the apocope of \Tpl{} inflection \tuple{\text{-m}}. This discursive tradition does not favour the use of second person. 
All the constraints pointed out as favoring null subjects, such as co-reference and non-animate antecedents, are confirmed. The only oscillation attested in the data is related to individual performances – only one of the five authors shows a low rate of null subjects (33\%); the other four exhibit overall rates above 77\%. The analyses of spoken Portuguese acquired by African descendants are not different from those obtained by Brazilians. \citegen{Lucchesi2009b} analysis of the expression of subjects based on the vernacular speech of four isolated rural Afro-Brazilian communities in the state of Bahia, with different historical and socio-economic backgrounds, shows the same rates attested by \citet{Duarte1995} for contemporary Portuguese spoken in the city of Rio de Janeiro. Returning to the results in \Cref{fig:26.1}, Duarte shows that the course of change is different with respect to first and second person on one hand and to third person on the other. In the last quarter of the 20th century null first and second person subjects reach a mean of 20\%. Third person, thanks to the interaction of [+human] and [$-$human/$-$animate] referents, exhibits a slow descending curve (see \citealt{CyrinoEtAl2000}). Such results would be confirmed by \citegen{Duarte1995} analysis of spoken variety of Rio de Janeiro. Referential pronominal subjects in root clauses are preferentially overt \parencite{Duarte1995}.\footnote{In short answers we can have an apparent NS with third person, but we analyse this sort of structure as resulting from the fronting/focalization\is{focalisation} of the inflected verb eventually accompanied by its adjuncts, followed by the remnant \isi{movement} of the TP (cf.\ \citealt{Kato2016}).} Second person singular, which triggered and led the change, reveals 10\% of null subjects, usually pragmatically identified (\ref{ex:26.6}a); first person singular null subjects reach 25\%, particularly when preceded by a functional category, such as a NegP, and AspP (\ref{ex:26.6}b): \ea\label{ex:26.6}\ili{Brazilian Portuguese} \ea \gll $\varnothing$\tss{\Ssg} sabe {o que} é pinho de riga?\\ {} know what is pine of riga\\ \glt ‘Do you know what riga pine is?’ \ex \gll $\varnothing$\tss{\Fsg} não gosto de boxe.\\ {} not like of boxing\\ \glt ‘I don´t like boxing’ \z \z Third person subjects, as mentioned, are constrained by \isi{animacy} and structural patterns. In root clauses \citet{Duarte1995} attested 36\% of null subjects, usually identified by an antecedent bearing the same function in the adjacent clause or by an antecedent with discursive prominence (cf.\ \citealt{BarbosaDuarteKato2005,KatoDuarte2014b}): \ea%7 \label{ex:26.7}\ili{Brazilian Portuguese} \ea \gll Ela\tss{i} gosta de cozinhar. $\varnothing$\tss{\Tsg\tss{i}} Aprende com as amigas.\\ she likes of to.cook. {} learns with the friends.\\ \glt \enquote*{She likes to cook. She learns with her friends} \ex \gll [ O meu irmão ]\tss{i}? $\varnothing$\tss{\Tsg\tss{i}} Mudou pros Estados Unidos.\\ {} the my brother? {} {} moved to.the United States.\\ \glt \enquote*{My brother? He's moved to the United States} \z \z In embedded clauses, co-reference still plays an important role (\citealt{Modesto2000,FigueiredoSilva2000,DuarteSoaresdaSilva2016}, a.o.), with a regular distribution between overt and null subjects. 
\citegen{Duarte1995} data show 32\% of null subjects in this \isi{control} pattern with [+human] and 44\% with [$-$animate] referents:\largerpage[-1] \ea%8 \label{ex:26.8}\ili{Brazilian Portuguese} \ea \gll mas \textbf{ele}\textbf{\tss{i}} sentiu [ que $\varnothing$\tss{\Tsg\tss{i}} era o único novo ali, recém-casado \dots{}]\\ but he\tss{i} felt {} that {} was the only young there, newly-married \\ \glt \enquote*{But he felt he was the only young guy there, newly married….} \ex \gll {}[ \textbf{Esse} \textbf{filme} ]\tss{i} emocionou muita gente quando (ele)\tss{i} ficou pronto\\ {} That film\tss{i} {} touched many people when \hphantom{(}he was ready\\ \glt \enquote*{That film touched many people when it was shown} \z \z A null subject in a subordinate clause without co-reference with the subject of the main clause is still attested if the verb of the main clause has an epistemic verb. In such contexts, which have the antecedent in an A$'$-position, overt subjects are also far more frequent: (\citealt{MoreiradaSilva1983}; \citealt{FigueiredoSilva1996,FigueiredoSilva2000}, a.o.): \ea\label{ex:26.9}\ili{Brazilian Portuguese}\\ \gll {}[ \textbf{O} \textbf{armazém} ]\tss{i} (\dots{}) {quer dizer,} \underline{acho} [ que $\varnothing$\tss{\Tsg\tss{i}} já é extinto ] né?\\ {} the grocery-store {} {} {I mean} think.\Fsg{} {} that {} already is extinct, {} see?\\ \glt \enquote*{The grocery store\dots{} I think it's now extinct} \z One significant difference between \ili{French} and Brazilian Portuguese noted by \citet{Duarte1995} was the fact that, although the two \ili{Romance} languages have lost null referential subjects, \ili{French} also lost the null expletive\is{expletives} with the development of the expletives \emph{ce} and \emph{il} while \gls{BP}\il{Brazilian Portuguese} retained it: \ea%11 \label{ex:26.11} \ea \ili{French}\\ \gll Il fait froid.\\ it is cold\\ \ex \ili{Brazilian Portuguese}\\ \gll \textbf{$\varnothing$}\tss{\Expl} Faz frio./ $\varnothing$\tss{\Expl} Está frio.\\ {} does cold {} is cold\\ \z \z \ea%12 \label{ex:26.12} \ea Middle \ili{French} (apud \citealt[151]{Roberts1993})\\ \gll {\textbf{Il} i} avoit bien .xxiiij.M. archiers {a piet}\\ there were about 24.000 archers marching\\ \ex \ili{Brazilian Portuguese}\\ \gll $\varnothing$\tss{\Expl} havia {bem uns} 24.000 arqueiros {a pé}\\ {} was about 24.000 archers marching\\ \z \z With the loss of the generic clitic \emph{se}, BP shows a NS in generic constructions,\footnote{Since the arbitrary clitic \emph{se} is also extinct in speech, \gls{BP} also exhibits a null arbitrary subject \parencite{Rodrigues2004}, at very modest rates, attested in variation with the use of a third person plural verb with a null or an overt pronoun \emph{eles} (they).} while \ili{French} has the indefinite pronoun \emph{on}. \ea%13 \label{ex:26.13} \ea \ili{French}\\ \textbf{On} ne voit plus de rémouleurs. \ex \ili{Brazilian Portuguese}\\ $\varnothing$ Não vê mais amolador-de-faca.\\ ‘One doesn’t see knife sharpeners any more.’ \z \z However, in both languages, these constructions have nominative\is{nominative case} pronouns as variants, largely preferred in \gls{BP}: \ea\label{ex:26.14} \ea \ili{French}\\ Vous / On ne voyez plus de rémouleurs. Nous ne voyons plus de rémouleurs. 
\ex\ili{Brazilian Portuguese}\\ Você / A gente não vê mais amolador-de-faca\\ ‘You / we don't see knife sharpeners anymore.’ \z \z There are even contexts, as illustrated in \eqref{ex:26.15}, where a null generic is ungrammatical in \gls{BP}\il{Brazilian Portuguese}: \ea%15 \label{ex:26.15}\ili{Brazilian Portuguese}\\ \gll Quando \textbf{a} \textbf{gente} / \textbf{você} / *$\varnothing$\textbf{\tss{\Genc}} é menor, \textbf{a} \textbf{gente} / \textbf{você} não dá muito valor a essas coisas.\\ when the people {} you {} {} are little, the people {} you not give much value to these things\\ \glt \enquote*{When we /you are young, we / you do not value such things} \z Summarizing, our empirical analysis reveals that null referential subjects are much less frequent than overt pronominals. Furthermore, the null generic subject is not the most productive strategy to represent this type of indeterminate subject; in addition, recent research does not show any sign of increasing use of it among younger generations (see~\citealt{MarinsEtAlta}). This might support the hypothesis that null subjects in \gls{BP}\il{Brazilian Portuguese} could be residual cases still reflecting the replaced null subject system, as far as referential (definite and indeterminate -- either arbitrary or generic) uses are concerned. We will return to this matter in the following section. \section{Core grammar and I-language}\label{sec:26.2.3} The theory of \glsunset{UG}\gls{UG}\is{Universal Grammar} tries to account for the acquisition\is{language acquisition} of \emph{core} grammars through parameter\is{parameters} setting in a context of poverty of stimulus \citep{Chomsky1986}, which can be understood partly as data containing competing forms due to different values of the same parameter\is{parameters} coexisting in the input that children receive. This is exactly the situation that a child faces when there is a recent change or a change in progress as shown by the well-studied case of the null subject (NS) in Brazilian Portuguese (BP). As we saw above, in the \isi{I-language} of most literate Brazilian adults, a range of referential NSs are possible, competing with the innovative pronominal subjects. It is the case of the optionality of NSs and pronouns in complement clauses as in example \REF{ex:15/2}: \ea%15 \label{ex:15/2}\ili{Brazilian Portuguese}\\ \gll O Pedr\textbf{o}\tss{i} disse que (\textbf{ele}\tss{i}) fala bem espanhol.\\ the Peter said that \hphantom{(}he speaks well Spanish\\ \glt \enquote*{Peter said that he speaks Spanish well.} \z Assuming, with \citet{Kato2011},\footnote{See also \citeauthor{Dresher1999}'s (\citeyear{Dresher1999}, a.o.) theory according to which children do not reset parameters.\is{parameters}} that \emph{core} grammars do not admit morphological “doublets”, and that children have only the innovative variant, we will see that pre-school children do not have pronouns competing with referential null subjects as in the above context. 
\citeauthor{Kato2011} borrows data from \textcite{Magalhaes2003}, who argues that referential NSs in \gls{BP}\il{Brazilian Portuguese} are learned in school, where old forms are provided through instruction.\largerpage[1] \begin{table}[htpb] \centering \begin{tabularx}{\textwidth}{lXXX} \lsptoprule & Pre-school & 3rd/4th grades & 7th/8th grades\\ \midrule Pronominal subjects & 97.89\% & 78.0\% & 50.38\%\\ Null subjects & 2.11\% & 22.0\% & 49.62\%\\ \lspbottomrule \end{tabularx} \caption{Pronominal and null subjects in complement clauses (adapted from~\citealt{Magalhaes2003})}\label{tab:26.2} \end{table} When the child masters complex clauses in pre-school, the NS is still almost nonexistent in his/her oral production of complement clauses. NSs start to increase very quickly in their written performance, achieving the status of an equal variant of the overt pronoun at the end of 8th grade.\footnote{\citet{KatoEtAl2009} arrive at a similar conclusion with regard to null objects,\is{null objects} but in the opposite direction. Children have only null objects\is{null objects} in their \isi{core grammar}, and acquire the lost third person clitic at school.} Several studies try to analyse the nature of the NS in such constructions, where optionality is found in the adult’s \isi{E-language}, but what we are actually studying is a variant learned at school, and one may ask whether these NSs are an object of \gls{UG}. We will return to this problem in the following sections. The conclusion is that the only type of null subject licensed in \gls{BP}\il{Brazilian Portuguese} \emph{core} grammar are the non-referential NSs, namely the null expletive\is{expletives} and the generic subjects without the clitic \emph{se}, as they are attested during language acquisition\is{language acquisition}. \ea%16 \label{ex:26.16}\ili{Brazilian Portuguese} \ea \textcite{Simoes2000}\\ \gll $\varnothing$\tss{\Expl} Tem dois aviões aqui.\\ {} there-are two planes here.\\ \ex \textcite{Magalhaes2007}\\ \gll $\varnothing$\tss{\Genc} pode chupar o dedo?\\ {} can suck the finger\\ \z \z As for the \isi{E-language} exhibited by the literate adult, it will be shown that the non-referential null subjects are the same as those of the Brazilian child, but the null referential ones are in variation with the overt pronominal ones. \section{Comparing the NS in BP with different types of languages}\label{sec:26.3} \subsection{BP vs.\ EP, a consistent NS language}\label{sec:26.3.1} \citet{CarSta1994} distinguish three types of pronouns: strong, weak and clitic. Following \citet{Kato1999} we will make an initial split between strong and weak forms, and will assume that weak pronominals can be one of three types: i) free pronouns, like in \ili{English}, ii) \isi{clitics} as in Trentino, a Northern \ili{Italian} dialect or iii) agreement affixes, or pronominal Agr as in \ili{Italian} and \gls{EP}\il{European Portuguese} (cf.\ Fig 2). The weak pronominals are Agreement affixes in the so-called consistent \emph{pro-drop} languages. All languages, on the other hand, have strong pronouns, which exhibit a “default” case (\citealt{Kato2000,Schutze2001}).\footnote{Moreover, strong pronouns are always deictic, or referential, while weak pronouns can be deictic or referentially dependent. 
Strong pronouns are always [+human] while weak pronouns can be [+human] or [$-$human].} \ea\label{tree:fig2} \begin{tikzpicture}[baseline=(root.base), align=left] \Tree [.\node(root){Pronouns}; [.Strong {English\\Trentino} ] [.Weak {Free\\English} {Clitics\\Trentino} {Pronominal Agr\\Italian} ] ] \end{tikzpicture} \z \citegen{Salvi1997} conclusions on what happened in the beginning of \ili{Romance} seem to partially support what is being proposed here. Studying the changes from \ili{Latin} to Old \ili{Romance} and from Old \ili{Romance} to \ili{French} and the Northern \ili{Italian} dialects, he concludes that: (a) \ili{Latin} had only one form of nominative pronouns, which, he assumes, were used as strong or weak pronouns, (b) in Old \ili{Romance} pronominal anaphora was not obligatory since subject \isi{clitics} did not exist; (c) in \ili{French} and in some \ili{Italian} dialects zero anaphora (NS) ceases to exist when subject \isi{clitics} appear (see also \citealt{Roberts1993}). For \citet{Kato1999},\footnote{See also similar views in \citet{Barbosa1995,AleAna1998,OrdonezTrevino1999}.} pronominal Agr, understood as the \isi{grammaticalization}/in\-cor\-poration of personal pronouns in verbal Inflection, is claimed to be in crosslinguistic complementary distribution with weak pronouns and subject \isi{clitics}. Thus, the loss of one implies the introduction of the other type of weak pronouns.\footnote{Studying the loss of NSs in \ili{Dominican Spanish} and \gls{BP}\il{Brazilian Portuguese}, \textcite[28]{Camacho2016} proposes, in line with \citet{Kato1999}, that the change has to do with “modification in the lexical entries for inflection”, namely the introduction of weak pronouns.} In \gls{BP}\il{Brazilian Portuguese} the great innovation was the introduction of an English-like paradigm of weak pronouns partially homophonous with the strong ones \parencite{Nunes1990,Kato1999} in place of the old pronominal Agr system.\footnote{In written language the new paradigm is represented as homophonous to the strong pronouns.} \ea\label{ex:26.17}\leavevmode\\[-1\baselineskip] \begin{tabular}{llll} strong & weak & strong & weak \\ EU (I) & [eu/ô] & NÓS (we) & [nós] \\ VOCÊ (you) & [cê & VOCÊS (you) & [cêis] \\ ELE (he) & [ele/ei] & ELES (they) & [eles/eis] \\ \end{tabular} \z\largerpage Pronominal Agr is syntactically defined by \citet{Kato1999} as a D-category that appears in the numeration as an independent item from the verb, being first merged as an external argument of \emph{v}, with interpretable φ-features.\footnote{\citegen{Kato1999} analysis above eliminated \emph{pro}, and its problems in a Minimalist frame: (a) the position of \emph{pro} ceases to be a problem, (b) its presence in the numeration is eliminated and (c) it will give a coherent explanation on why there is free inversion since it will be moving a maximal projection. Brazilian Portuguese, on the other hand, cannot move T’, the reason why it lost free inversion.} There is no Spec of T/INFL projected, as the pronominal agreement satisfies the \glsunset{EPP}\gls{EPP} morphologically. In \gls{BP}\il{Brazilian Portuguese} with Agr no longer pronominal, free weak pronouns are introduced, and Spec of T/INFL has to be projected. In \gls{EP}\il{European Portuguese}, on the other hand, pronominal Agr remained and, therefore, no weak free pronouns were created. 
\begin{figure} \begin{subfigure}[b]{.5\linewidth}\centering% \begin{tikzpicture}[baseline=(root.base)] \Tree [.\node(root){TP}; [.T -o\tss{i} fala-V ] [.VP [.DP [.D t\tss{i} ] ] [.V$'$ [.V t\tss{V} ] ] ] ] \end{tikzpicture} \caption{Before the change (\gls{EP})} \end{subfigure}\begin{subfigure}[b]{.5\linewidth}\centering% \begin{tikzpicture}[baseline=(root.base)] \Tree [.\node(root){TP}; [.DP eu ] [.T$'$ [.T falo-V ] [.VP [.DP t\tss{i} ] [.V t\tss{V} ] ] ] ] \end{tikzpicture} \caption{After the change (\gls{BP})} \end{subfigure} \caption{Pronominal Agr and weak pronouns\label{fig:ex:26.fig3}} \end{figure} Strong pronouns are in a higher projection than weak pronouns. This higher projection can be ΣP, as in \citet{Martins1994}, or the SubjP in \citet{Cardinaletti:2004a}. When the pronoun is overt in \gls{NSL}s, it always has an emphatic or contrastive interpretation. If a non-NS language has an overt pronoun, the sentence exhibits subject doubling, as in \gls{BP}\il{Brazilian Portuguese} (cf.\ the examples in \eqref{ex:26.18}, apud \citealt{Kato2012}). But in either case, strong pronouns have a “default” case and are always referential and [+animate] (\citealt{Kato1999}, \citealt{Schutze2001}). \begin{figure}[p] \begin{subfigure}[b]{.5\linewidth}\centering\small% \begin{tikzpicture}[baseline=(root.base), align=center] \Tree [.\node(root){ΣP/SubjP}; [.DP VOCÊ ] [.{} Σ [.TP [.T -$\varnothing$\tss{i} come-\tss{V} ] [.VP [.DP {D\\t\tss{i}} ] [.V$'$ {V\\t\tss{V}} {DP\\pizza} ] ] ] ] ] \end{tikzpicture} \caption{Before the change (\gls{EP})} \end{subfigure}\begin{subfigure}[b]{.5\linewidth}\centering\small% \begin{tikzpicture}[baseline=(root.base), align=center] \Tree [.\node(root){ΣP/SubjP}; [.DP VOCÊ ] [.{} Σ [.TP [.DP cê ] [.T$'$ [.T come-\tss{V} ] [.VP [.DP t\tss{i} ] [.V$'$ {V\\t\tss{V}} {DP\\pizza} ] ] ] ] ] ] \end{tikzpicture} \caption{After the change (\gls{BP})} \end{subfigure} \caption{Position of strong pronouns\label{fig:ex:26.fig4}} \end{figure} \begin{figure}[p] \pgfplotstableread{data/katoduarte-fig2.csv}{\table} \pgfplotstablegetcolsof{\table} \pgfmathtruncatemacro\numberofcols{\pgfplotsretval-1} \begin{tikzpicture} \begin{axis}[ ybar, nodes near coords, cycle list = {{black, fill=black}, {black, fill=black!50}}, height = 4cm, legend cell align=left, legend columns=3, legend style={font=\footnotesize,anchor=north,at={(0.5,1.2)},anchor=north}, width = \textwidth, xmin = .5, xmax = 3.5, xtick = data, xticklabels = {{First person}, {Second person}, {Third person}}, xlabel = {Person of null subject}, ymin = 0, ymax = 100, ylabel = {\%}, axis lines*=left, enlarge y limits={upper=20} ] \foreach \i in {1,...,\numberofcols} {% \addplot+ table [x index={1},y index={\i},x expr=\thisrow{Person}] {\table}; \pgfplotstablegetcolumnnamebyindex{\i}\of{\table}\to{\colname} % Adding column headers to legend \addlegendentryexpanded{\colname} } \end{axis} \end{tikzpicture} \caption{Null subjects in spoken European and Brazilian Portuguese (adapted from~\citealt{BarbosaDuarteKato2005}, Figure 3, apud~\citealt{Duarte2004})}% \label{fig:26.2} \end{figure} \ea%18 \label{ex:26.18} \ea \ili{European Portuguese}\\ \gll VOCÊ, come-${\varnothing}$ pizza.\\ you eat pizza\\ \ex \ili{Brazilian Portuguese}\\ \gll VOCÊ, cê come pizza\\ YOU you eat pizza\\ \glt \enquote*{YOU, you eat pizza.} \z \z Taking into consideration that the referential NS of the literate Brazilian adult has been acquired through schooling, we can bring some interesting results from \citeauthor{BarbosaDuarteKato2005}’s study as to what 
extent instruction recovers the “\isi{avoid pronoun principle}”, which seems to govern the speakers of a consistent \gls{NSL}\is{null subject languages}. \Cref{fig:26.2} shows null subjects in spoken \gls{EP}\il{European Portuguese} and \gls{BP}\il{Brazilian Portuguese}. Despite the fact that schools in Brazil try to provide the students with the old NS grammar, Brazilians produce a much higher proportion of overt pronouns than Portuguese speakers, following the same hierarchy (see examples (\ref{ex:26.6}--\ref{ex:26.9}) in \Cref{sec:26.2.2}). As we mentioned in \Cref{sec:26.2.2}, this has been related to (a) the neutralization of \emph{tu} and \emph{você} (second PS) for second person reference, (b) the replacement of \emph{vós} by \emph{vocês} (second PP), and (c) the introduction of \emph{a} \emph{gente} in competition with \emph{nós}, which reduced the inflectional paradigm (see \tabref{tab:26.1}), requiring the overt pronoun for identification reasons.\footnote{Most regions of the country that keep the pronoun \emph{tu} combine it, in colloquial speech, with the same unmarked third person verb form used with \emph{você} (\emph{tu}/\emph{você} \emph{fala} – you speak). Evidence for the neutralization of both pronouns is in the fact that they are used without any distinction as regards courtesy, contrary to what happens in Portugal.} As for qualitative distinctions, \citeauthor{BarbosaDuarteKato2005} (\citeyear[19]{BarbosaDuarteKato2005}, BDK) listed the following observations: \begin{enumerate}[label=(\alph*)] \item A significant difference between the two varieties is in the fact that overt pronouns in \gls{EP}\il{European Portuguese} are almost invariably [+animate], which shows that they are generally strong pronouns, while in \gls{BP}\il{Brazilian Portuguese} they can be [+animate] or [$-$animate], indicating that they can be strong or weak. \ea\label{ex:26.19} \ea\ili{European Portuguese}\\ \gll Os miúdos vão pra escola e ela vai pro escritório.\\ the children go to.the school and she goes to.the office\\ \glt \enquote*{The children go to school and she goes to the office.} \ex\ili{Brazilian Portuguese}\\ \gll Eu acho que um trabalho\tss{i}, ele\tss{i} teria que começar por aí.\\ I think that a task it would-have to start from there.\\ \glt \enquote*{I think that a task would have to start from there.} \z \z \item The \isi{control} relation between the antecedent and the null subject is the most favourable context for NSs in both varieties, even though \gls{BP}\il{Brazilian Portuguese} prefers overt subjects; in \gls{EP}\il{European Portuguese}, on the other hand, a null subject is categorical, as in (\ref{ex:26.20}), the exceptional cases having to do with emphatic/contrastive strong ones.\largerpage[-1] \ea%20 \label{ex:26.20}\ili{European Portuguese}\\ \gll Ela\tss{i} disse logo que \textbf{$\varnothing$}\tss{i} tava em férias e que \textbf{$\varnothing$}\tss{i} morava ali {ao pé} do liceu.\\ she said soon that {} was on vacation and that {} lived there near of.the liceum\\ \glt \enquote*{She soon said that she was on vacation and that she lived there near the school.} \z \item The real variation domain of null and expressed subjects in both varieties is where no \isi{control} relation obtains. It seems to be correlated with a functional factor, namely topic maintenance, which favours the NS, vs. topic shift, favouring overt pronouns (cf.\ also \citealt{DeOliveira2000} and \citealt{Marins2009} with respect to Italian).
However, a consistent NSL will prefer a null subject even in anaphoric contexts. \ea%21 \label{ex:26.21}\ili{European Portuguese} \ea \gll Quando eu estava a trabalhar com ele\tss{i} \textbf{$\varnothing$}\tss{i} nunca me queria ver na cozinha\\ when I was at work with he {} never me.\Cl{} wanted to.see in.the kitchen\\ \glt \enquote*{When I was at work with him, he never wanted to see me in the kitchen.} \ex \gll Parece que numa ida d[ela]\tss{i} à Inglaterra, ela\tss{i} fez com que a rainha pedisse nossos produtos.\\ seems that in.a trip of.her to.the England, she made with that the queen ordered our products\\ \glt \enquote*{It seems that in one of her trips to England she made the queen order our products.} \z \z \end{enumerate} To account for the finding that \gls{BP}\il{Brazilian Portuguese} still licenses NSs, as opposed to a language like \ili{English}, we have had two lines of explanation: \begin{enumerate}[label=(\alph*)] \item they result from the fact that we have a change in progress, with two grammars in competition (\citealt{Duarte1993,Duarte1995}; \citealt{Kato2000}), the NSs being residual occurrences of the same NS of the old grammar; \item the NS in \gls{BP}\il{Brazilian Portuguese} is not a pronominal Agr, but (b1) a variable bound by a quantifier \parencite{NegraoMuller1996}; (b2) a variable or an anaphor \parencite{FigueiredoSilva2000}; (b3) a variable bound by a Topic, the subject in \gls{BP}\il{Brazilian Portuguese} being in A$'$-position \citep{Modesto2000}; (b4) the trace of A-movement \parencite{Ferreira2004,Rodrigues2004,MartinsNunes2010}. \end{enumerate} However, according to the data in \textcite{BarbosaDuarteKato2005} and in \citet{Kato2009}, the theories in (b) do not explain the optionality in real data, namely the presence of overt pronouns, where the NS would be the only option. \ea\label{ex:26.22}\ili{Brazilian Portuguese} \ea \textcite{NegraoMuller1996}\\ \gll \textbf{Nenhuma} \textbf{criança} acha que \textbf{$\varnothing$}\tss{i} / *ela é burra.\\ no child thinks that {} {} \hphantom{(}she is stupid\\ \ex \textcite{BarbosaDuarteKato2005}\\ \gll \textbf{Ninguém} \textbf{no} \textbf{Brasil}\tss{i} acha que \textbf{ele}\textbf{\tss{i}} é prejudicado pelo governo.\\ nobody in Brazil thinks that he is impaired by-the government\\ \z \ex%23 \label{ex:26.23}\ili{Brazilian Portuguese} \ea \textcite{FigueiredoSilva2000}\\ \gll A Maria achou um carro que *\textbf{$\varnothing$}\tss{i} tem grana pra comprar. 
\\ the Maria found a car that {} has money to buy\\ \glt \enquote*{Mary found a car that she has money to buy.} \ex \textcite{Kato2009}\\ \gll A Maria\tss{i} achou o carro que \textbf{$\varnothing$}\tss{i} queria.\\ the Maria found the car that {} wanted\\ \glt \enquote*{Mary found the car that she wanted.} \z \ex%24 \label{ex:26.24}\ili{Brazilian Portuguese} \ea \textcite{Modesto2000}\\ \gll Paulo\tss{1} convenceu o Pedro\tss{2} que \textbf{$\varnothing$}\tss{1/*2/*3} tinha que ir embora.\\ Paulo convinced the Pedro that {} had to go home\\ \glt \enquote*{Paulo convinced Peter that he had to go home.} \ex \textcite{Kato2009}\\ \gll O Paulo\tss{1} convenceu \textbf{o} \textbf{Pedro}\textbf{\tss{2}} que \textbf{$\varnothing$}\textbf{\tss{1/2}} devia estudar mais.\\ the Paulo convinced the Peter that {} should study more\\ \glt \enquote*{Paulo convinced Peter that he should study more.} \z \z Working with the \isi{raising} phenomenon in \gls{BP}\il{Brazilian Portuguese}, \citet{MartinsNunes2005} show that standard raising, very rare in spoken \gls{BP}, gave rise to a structure such as (\ref{ex:26.25}a), initially treated by the authors as a case of hyper-raising, explained by the possibility of an optional defective T in the embedded clause, incapable of checking the features of a raised subject. However, the optionality of a null or overt pronoun from the embedded clause led \textcite{MartinsNunes2010} to propose that what raises to SpecTP of the main clause is a dislocated topic inside the embedded clause, and both the raised constituent and the subject of the embedded clause can check the features properly. According to \citeauthor{MartinsNunes2010}, in view of the input of literate speakers, children can acquire, much later, along with standard raising, the structure in (\ref{ex:26.25}b), another possibility in European Portuguese, which exhibits a dislocated topic, so that the problem of case checking no longer applies: \ea%25 \label{ex:26.25}\ili{Brazilian Portuguese} \ea \gll [\tss{CP}~[ Os vizinhos {]\tss{i}} parec\textbf{em} [ que [~\emph{t}~]\tss{i} (eles)\tss{i} comprar\textbf{am} um carro ].\\ {} the neighbors {} seem.\Tpl{} {} that {} \hphantom{(}they bought.\Tpl{} a car\\ \ex \gll [\tss{TopP}~[ Os vizinhos ]\tss{i} [\tss{CP\tss{\Expl}} parece [ que (eles)\tss{i} compraram um carro ]].\\ {} the neighbors {} {} seem.\Tsg{} {} that \hphantom{(}they bought a car\\ \glt \enquote*{The neighbours seem to have bought a car.} \z \z As in \citet{MartinsNunes2010} and \citet{Kato2011}, the hypothesis that we will be considering is that the Brazilian child has set the \gls{NSP} to its negative value, and that the referential NSs in \gls{BP}\il{Brazilian Portuguese} adult data result from the imperfect learning of a “second grammar”.
\subsection{BP vs.\ Japanese, a radical NS language}\label{sec:26.3.2}\largerpage A radical\is{null subject languages!radical null subject languages} null subject (NS) language has been defined as one without rich agreement, like, for instance, Chinese and \ili{Japanese}, also referred to as \emph{discourse configurational} (DC) languages \parencite{EKiss1995,Miyagawa2010} or Topic-prominent languages \parencite{LiThompson1976}.\footnote{The first author of the paper is a speaker of Japanese as L1, and of \gls{BP}\il{Brazilian Portuguese} as L2, but more fluent in the latter.} Three reasons lead Brazilian linguists to hypothesize that \gls{BP}\il{Brazilian Portuguese} is changing towards a DC type of language:\footnote{See the first proposals in \citet{Pontes1987} and \citet{Kato1989}. Actually they propose that \gls{BP}\il{Brazilian Portuguese} is a Topic and Subject prominent language in \citegen{LiThompson1976} terminology. More recently, see \citet{NegraoViotti2000,Modesto2008} with a similar view.} (a) \gls{BP}\il{Brazilian Portuguese} lost rich agreement, (b) like other DC types of language, \gls{BP}\il{Brazilian Portuguese} not only has NSs, but also null objects and bare nouns, and (c) like other DC types of language, \gls{BP}\il{Brazilian Portuguese} does not dispose of lexical expletives, in accordance with \citegen{LiThompson1976} assumption for Topic prominent languages.\footnote{\textcite{KatoDuarte2014a} proposed the \isi{movement} of an internal constituent to SpecTP in \gls{BP}, instead of the direct merging of the null expletive\is{expletives} (cf.\ \citealt{Chomsky2004}). But, in later work, \textcite{KatoDuarte2014b} show that the two resulting constructions co-exist, one in categorical constructions and the other in thetic ones.} With existential sentences, what we have in Japanese, instead of the expletive\is{expletives}, is the morpheme \emph{-ga} marking the subject. For the locative raised ones, we have \emph{-wa}, the topic marker. A sentence with \emph{-ga} is interpreted as a thetic, or a presentational, sentence, while a sentence with \emph{-wa} is interpreted as a categorical (or predicational) one.\footnote{See \citet{Kuroda1972} for this terminology. Existential sentences are typical thetic sentences. In \gls{BP}\il{Brazilian Portuguese} the subject is a null expletive\is{expletives} when it is a thetic sentence, but if the locative raises to subject position it is a categorical sentence like sentences with \emph{-wa} in Japanese.} \ea%26 \label{ex:26.26}\ili{Brazilian Portuguese} \ea \gll \textbf{$\varnothing$} Tem dois cachorros no quintal.\\ has two dogs in.the yard\\ \ex \gll (N)o quintal tem dois cachorros.\\ in.the yard has two dogs\\ \glt \enquote*{There are two dogs in the yard.} \z \ex%27 \label{ex:26.27}\ili{Japanese} \ea \gll Inu-\textbf{ga} nihiki niwa-ni iru.\\ dog-\Nom{} two yard-\Loc{} are\\ \ex \gll Niwa-ni-\textbf{wa} inu-\textbf{ga} nihiki iru.\\ yard-\Loc{}-\Topic{} dog-\Nom{} two are\\ \z \z Weather constructions in \gls{BP}\il{Brazilian Portuguese} have (a) the verb denoting the climatic event with a null expletive\is{expletives} as the subject (cf.\ (\ref{ex:26.28}a)), or (b) like Japanese, the subject denoting the event with a general verb of motion \emph{cair} \enquote*{fall} as in (\ref{ex:26.28}b). The third possibility is locative \isi{raising} to the subject position (\ref{ex:26.28}c). Moreover, in this case the sentence is categorical and the subject triggers agreement in \gls{BP}\il{Brazilian Portuguese}, but not in Japanese.
\ea%28 \label{ex:26.28}\ili{Brazilian Portuguese} \ea \gll $\varnothing$ Está nevando desde ontem nesta cidade.\\ {} is snowing since yesterday in.this city\\ \glt \enquote*{It has been snowing since yesterday in this city.} \ex \gll A neve cai desde ontem nesta cidade.\\ the snow falls since yesterday in.this city\\ \glt \enquote*{The snow has been falling since yesterday in this city.} \ex \gll As cidades nessa região nevam muito.\\ the cities in.this region snow.\Tpl{} {a lot}\\ \glt \enquote*{In the cities in this region it snows a lot.} \z \z \ea%29 \label{ex:26.29}\ili{Japanese} \ea \gll Yuki-\textbf{ga} kinoo-kara fute-iru.\\ snow-\Nom{} yesterday-since falling-is\\ \glt \enquote*{The snow has been falling since yesterday.} \ex \gll Kono-hen-no matchi-\textbf{wa} yoku yuki-\textbf{ga} furu.\\ {this region} city-\Topic{} often snow-\Nom{} fall\\ \glt \enquote*{The cities in this region snow a lot.} \z \z But besides the existential and the weather verb sentences, \gls{BP}\il{Brazilian Portuguese} has another NS similar to Japanese, namely the null generic and arbitrary sentences. \ea%30 \label{ex:26.30} \ea \ili{Brazilian Portuguese}\\ \gll $\varnothing$ conserta sapato.\\ {} repairs shoes\\ \ex \ili{Japanese}\\ \gll $\varnothing$ kutsu-o nao-shimasu.\\ {} shoes-\Acc{} repair-do\\ \glt \enquote*{One repairs shoes.} \z \z In order to analyse the NS of generic and arbitrary sentences, \citet{Kato2000} made use of PRO for finite contexts, adapting \citegen{Huang1989} idea of \emph{generalized \isi{control} theory}. We can support this view as, with the deterioration of inflection, finite sentences tend to behave as infinitive or gerundive clauses. Kato also assumes that PRO is the strong null third person pronoun, and we are assuming with \citet{Tomioka2003} that the weak pronoun in Japanese is a null noun. We would have the following representation in \gls{BP}\il{Brazilian Portuguese} for a non-referential generic sentence with the NS. The nominal [\tss{NP} $\varnothing$] in \eqref{ex:26.31} would correspond to the \ili{English} nominal \emph{one}, or the \ili{French} \emph{on.} \ea%31 \label{ex:26.31} {}[ PRO\tss{i} [ [\tss{NP} $\varnothing$ ]\tss{i} conserta sapato ]] \z Just like with existentials, we can have \isi{raising} of a locative, both in \gls{BP}\il{Brazilian Portuguese} and Japanese, with the same categorical reading: \ea%36 \label{ex:26.36bp} \ea\ili{Brazilian Portuguese}\\ Aqui conserta sapato. \ex\ili{Japanese}\\ Koko-de-\textbf{wa} kutsu-o nao-su.\\ \enquote*{Here one repairs shoes.} \z \z This parallel behaviour between agreement and a discourse feature can be explained in terms of \citet{HolNik2002}, for whom Topic\is{topic} and Focus\is{focus} are formal features, equivalent to \is{phi-features@φ-features}φ-features. \citegen{Miyagawa2010} implements this idea in an interesting way to derive agreement languages vs.\ discourse configurational languages. In his analysis, discourse features force \isi{movement} in the same fashion as agreement does. In the spirit of \citeauthor{Chomsky2007}’s (\citeyear{Chomsky2007,Chomsky2008}) proposal of merging \is{phi-features@φ-features}φ-features in C, with their subsequent percolation to T,\footnote{Miyagawa uses φ-probes, instead of \is{phi-features@φ-features}φ-features.} Miyagawa’s proposal is to merge the discourse-features (\is{delta-features@δ-features}δ-features) in C as an alternative to the φ{}-features, which would also trigger movement.\footnote{\textcite{NavesEtAl2013} provide the first attempt to analyse \gls{BP}\il{Brazilian Portuguese} using Miyagawa’s theory.
Though it is similar in approach, the purpose of the present analysis is to compare Japanese and \gls{BP}\il{Brazilian Portuguese} using the same theoretical frame.} He admits, moreover, that there are also mixed types of languages, such as \ili{Turkish}, which can percolate both types of features. We may say that \gls{BP}\il{Brazilian Portuguese} is this mixed kind of language, as \isi{raising} is triggered if the DP is a topic, but, at the same time, T inherits agreement features, as can be seen in (\ref{ex:26.28}c). \subsection{BP: A PNS language?}\label{sec:26.3.3} This section brings some support to Biberauer’s comment, presented at the beginning of this chapter, namely to the fact that this group seems to include several sub-types of languages. According to \citegen{HolNik2002} well-known article on Finnish, this language has the following properties related to the subject position: (a) it has a rich agreement system; (b) but, contrary to consistent \gls{NSL}s, the NS is \emph{optional} (even though extremely rare in speech) with first and second persons (\ref{ex:26.36}a,b), while third person subjects, animate or inanimate, must be \emph{overt} in matrix clauses (\ref{ex:26.36}c), with null subjects allowed only in embedded clauses under the requirement that they be bound by the closest controller (see similar examples for \gls{BP}\il{Brazilian Portuguese} in \eqref{ex:26.8} and \eqref{ex:26.9} in \Cref{sec:26.2.2}); (c) expletives can be optional with weather-verbs and extraposed sentences (\ref{ex:26.36}d); (d) but are obligatory with existential types of predicates (\ref{ex:26.36}e); and (e) it is a topic prominent language in the sense that the \gls{EPP} can be satisfied only by referential categories, such as temporal adverbials and locatives or even DPs, apparently to avoid V1 (\ref{ex:26.36}e), (\ref{ex:26.37}a,b). \ea%36 \label{ex:26.36}\ili{Finnish} \ea \gll (Minä) ol-i-n väsynyt.\\ \hphantom{(}I be-\Pst-\Fsg{} tired\\ \ex \gll (Sinä) ol-i-t väsynyt.\\ \hphantom{(}thou be-\Pst-\Ssg{} tired\\ \ex \gll Hän ol-i väsynyt.\\ {he / she} be-\Pst{}.\Tsg{} tired\\ \ex \gll Nyt (se) taas sataa.\\ now \hphantom{(}it again rains\\ \ex \gll Sitä leikkii lapsia kadulla.\\ \Expl{} play children in.street\\ \glt \enquote*{There are children playing in the street.} \z \ex%37 \label{ex:26.37}\ili{Finnish} \ea \gll Tämän kirjan on kirjoittanut Graham Greene.\\ this book has written Graham Greene\\ \ex \gll Tanään leikkii lapsia kadulla.\\ today play children in.street\\ \z \z \citet{HolmbergNayuduSheehan2009} and \citet{HolShee2010} account for the data above assuming that (a) the NSs in \gls{PNS}\is{null subject languages!partial null subject languages} languages are full pronouns, deleted at \glsunset{PF}\gls{PF},\footnote{The authors who propose this \gls{PNS}\is{null subject languages!partial null subject languages} type of language follow \citegen{Perlmutter1971} old thesis of NSs as deleted pronouns. See also \citet{Roberts2010c} with an analysis of NSs along the same lines.} and (b) that the non-referential cases can be explained as the lack of a D-feature in T.\footnote{A different analysis is provided by \citet{Barbosa2013}, who follows \citet{Tomioka2003}. The NS in discourse pro-drop languages for the author is a null NP anaphora.} Moreover, according to the authors, subjects and non-subject topics occupy the same position in Finnish: SpecFP. In generic sentences the expletive\is{expletives} \emph{sitä}, which is not nominative\is{nominative case}, also occupies SpecFP.
\ea%38 \label{ex:26.38}\ili{Finnish}\\ \gll Sitä väsyy nykyään helpommin kuin ennen.\\ \Expl{} gets-tired nowadays easier than before\\ \glt \enquote*{One gets tired these days easier than before.} \z \citet{Holmberg2005} later includes generic subjects in the list where the subject can be null: \ea%39 \label{ex:26.39}\ili{Finnish}\\ \gll Täällä ei saa polttaa.\\ here not may smoke\\ \glt \enquote*{One can't smoke here.} \z As was shown in \Cref{sec:26.3.1}, the weakened \gls{BP}\il{Brazilian Portuguese} agreement morphemes have developed into a system of free weak pronouns, but without developing a lexical expletive\is{expletives}. This is the opposite of Finnish, with its rich pronominal agreement paradigm, but which, surprisingly, displays a lexical expletive\is{expletives}, a property of [$-$NS] languages, except that it is not nominative\is{nominative case}. The creation of weak pronouns in \gls{BP}\il{Brazilian Portuguese}, as in \ili{French}, also explains why \gls{BP}\il{Brazilian Portuguese} null generic subjects occur in variation with overt weak pronouns, which may include either the speaker, \emph{a gente} \enquote*{the people} (= \enquote*{we folks}), or the addressee, \emph{você} \enquote*{you}, both with third person agreement. Although the null generic subject in \gls{BP}\il{Brazilian Portuguese} (\ref{ex:26.40}a) shares characteristics of the Japanese null noun, in the latter, the generic, or indefinite, subject cannot be encoded by weak pronouns as in (\ref{ex:26.40}b,c). The same seems to be the case in Finnish, as according to \citet[540]{Holmberg2005}: “\dots, in partial null-subject languages generic pronouns can, and must, be null”. \ea%40 \label{ex:26.40}\ili{Brazilian Portuguese} \ea \gll \textbf{$\varnothing$} Pode comer a pizza agora.\\ {} can eat the pizza now\\ \ex \gll \textbf{Você} pode comer a pizza agora.\\ you can eat the pizza now\\ \ex \gll \textbf{A gente} pode comer a pizza agora.\\ we-folks can eat the pizza now\\ \glt \enquote*{One can eat the pizza now.} \z \z As for referential NSs, \gls{BP}\il{Brazilian Portuguese} differs significantly from \ili{Finnish} in that \gls{BP}\il{Brazilian Portuguese} null second person is almost completely absent, restricted to questions, whose subject is pragmatically identified. First person null subjects are also on the way to obsolescence, in matrix and in embedded clauses. Third person subjects, as illustrated in \Cref{sec:26.2.2}, are allowed but not frequent either in matrix or in embedded clauses, obeying the same requirement of an accessible prominent antecedent (see \citealt{KatoDuarte2014a,KatoDuarte2014b}). \Cref{sec:26.3.2} revealed, additionally, that \gls{BP}\il{Brazilian Portuguese} is a sort of discourse configurational language. There is a difference, however, between topic sentences in \ili{Finnish} and topic ones in \gls{BP}\il{Brazilian Portuguese}. In the latter the topic--subjects are in A-position, triggering agreement, while in the former they are proposed to be located in SpecFP.\is{agreement!topic agreement} The Brazilian system also allows merging of a non-argument in existentials, instead of the null expletive\is{expletives}, usually a demonstrative or the very pronoun \emph{você}, which, besides its definite second person reference, has developed a generic one, to finally appear inserted in an existential or any impersonal sentence.
This brings support to \citegen{AvelarGalves2011} claim that SpecTP in \gls{BP}\il{Brazilian Portuguese} is φ-in\-de\-pen\-dent, or we can say, following \citet{Miyagawa2010}, that T in \gls{BP}\il{Brazilian Portuguese} can inherit both φ- and \is{delta-features@δ-features}δ-features.\is{phi-features@φ-features} \ea%40 \label{ex:26.40bp}\ili{Brazilian Portuguese} \ea \gll \textbf{$\varnothing$\tss{\Expl}} era {em torno de} mil pessoas.\\ {} was around {a thousand} people\\ \ex \gll {\textbf{Aquilo} / \textbf{isso}} era {em torno de} mil pessoas.\\ that was around {a thousand} people\\ \glt \enquote*{It was around a thousand people} \z \ex%41 \label{ex:26.41}\ili{Brazilian Portuguese} \ea \gll \textbf{$\varnothing$\tss{\Expl}} não tem mais comércio no centro da cidade.\\ {} not have more commerce in.the center of.the city\\ \ex \gll \textbf{Você} não tem mais comércio no centro da cidade\\ you not have more commerce in.the center of.the city\\ \glt \enquote*{There is no commerce downtown anymore} \z \z Summarizing, \gls{BP}\il{Brazilian Portuguese} has been included among \gls{PNS} languages by \citet{HolShee2010}. However, if only its spoken vernacular language is taken into consideration, it becomes clear that its dissimilarities with other \gls{PNS}\is{null subject languages!partial null subject languages} languages are greater than its similarities. \subsection{BP vs.\ English, a [$-$NS] language}\label{sec:26.3.4} We have seen in \Cref{sec:26.2} that the deterioration of verbal pronominal affixes led \gls{BP}\il{Brazilian Portuguese} to replace them with free weak pronouns and quasi-homophonous strong ones, but without a “default” case. The examples below show the substantial replacement of NSs with overt pronouns in one century \parencite{Duarte1993,Duarte2012}.\newpage \ea%42 \label{ex:26.42}\ili{Brazilian Portuguese} \ea Quando \textbf{$\varnothing$\tss{\Fsg}} te vi pela primeira vez, \textbf{$\varnothing$\tss{\Fsg}} não sabia que \textbf{$\varnothing$\tss{\Ssg}} eras viúva e rica. \textbf{$\varnothing$\tss{\Fsg}} Amei-te por simpatia. (Martins Pena, 1845)\\ ‘When (I) saw you for the first time, (I) didn’t know that (you) were a widow and rich’ \ex Se \textbf{eu} ficasse aqui \textbf{eu} ia querer ser a madrinha. (M.\ Falabella, 1992)\\ ‘If I stayed here I would want to be the god-mother.’ \z \ex%43 \label{ex:26.43} \ea \gll \textbf{$\varnothing$\tss{\Ssg}} Terá o cavalo que \textbf{$\varnothing$\tss{\Ssg}} deseja. (G. Tojeiro, 1918)\\ (you) will-have the horse that (you) want.\\ \ex \textbf{Você} não entende meu coração porque \textbf{você} ‘tá sempre olhando\\ pro céu \dots{} (M. Falabella, 1992) \\ \enquote*{You don't understand my heart because you are always looking at-the sky.} \z \z Moreover, \gls{BP}\il{Brazilian Portuguese} underwent two changes with regard to generic “\emph{se}” constructions seen above: first it lost the clitic “\emph{se}” resulting in the NS; second, as seen above, impersonal \emph{se} is being preferably replaced by the personal form with \emph{você} or \emph{a gente} (see~\Cref{fig:26.3}). 
\ea%44 \label{ex:26.44} \ea cf.\ Italian\\ \gll $\varnothing$\tss{\Genc} não \textbf{se} pode entrar de sapato.\\ {} not \emph{se} can enter of shoes\\ \ex cf.\ Japanese\\ \gll $\varnothing$\tss{\Genc} não pode entrar de sapato.\\ {} not can enter of shoes\\ \ex cf.\ English\\ \gll \textbf{Você} / \textbf{a} \textbf{gente} não pode entrar de sapato.\\ you {} the folks not can enter of shoes\\ \glt ‘You / We can’t get in with your / our shoes on.’ \z \z %\begin{figure}[htpb] % \centering % \includegraphics[width=.75\linewidth]{./img/fig3.pdf} % \caption{Generic subjects in Brazilian Portuguese in three % generations}\label{fig:26.3} %\end{figure} \begin{figure} \pgfplotstableread{data/katoduarte-fig3.csv}{\table} \pgfplotstablegetcolsof{\table} \pgfmathtruncatemacro\numberofcols{\pgfplotsretval-1} \begin{tikzpicture} \begin{axis}[ axis lines*=left, height = 7cm, legend cell align=left, legend columns=1, legend pos=outer north east, legend style={font=\footnotesize}, nodes near coords, width = .8\textwidth, xlabel = {Age groups}, xtick = data, xticklabels = {{55+ years}, {35--55 years}, {25--35 years}}, ybar, ylabel = {\%}, ymax = 100, ymin = 0, ] \addplot+ [black, fill=black, mark=none] table [x index={1},y index={1},x expr=\thisrow{Type}] {\table}; \addplot+ [black, fill=black!40, mark=none] table [x index={1},y index={2},x expr=\thisrow{Type}] {\table}; \addplot+ [black, fill=black!10, mark=none] table [x index={1},y index={3},x expr=\thisrow{Type}] {\table}; \addlegendentry{Clitic \emph{se}} \addlegendentry{Null} \addlegendentry{\emph{você} \enquote*{you}} % \node [above] at (axis cs: 1, 58) {58}; % \node [above] at (axis cs: 2, 85) {85}; % \node [above, yshift=-2] at (axis cs: 3, 93) {93}; % \node [above] at (axis cs: 1, 21) {21}; % \node [above] at (axis cs: 2, 10) {10}; % \node [above] at (axis cs: 3, 6) {6}; % \node [below] at (axis cs: 1, 21) {21}; % \node [left, yshift=-5] at (axis cs: 2, 5) {5}; % \node [right, yshift=3] at (axis cs: 3, 1) {1}; \end{axis} \end{tikzpicture} \caption{Generic subjects in Brazilian Portuguese in three generations}% \label{fig:26.3} \end{figure} Further evidence that \gls{BP}\il{Brazilian Portuguese} has become a [$-$NS] language is in the fact that subject doubling (or \isi{left dislocation}) is frequent in daily speech.\footnote{See \citet{Britto2000}, for whom the loss of VS order in \gls{BP}\il{Brazilian Portuguese} made thetic sentences exhibit the SV order, and the categorical sentence exhibit a Left Dislocation structure.}\largerpage[2] \ea%45 \label{ex:26.45}\ili{Brazilian Portuguese} \ea Eu acho que \textbf{um} \textbf{trabalho}\textbf{\tss{i}}, \textbf{ele}\textbf{\tss{i}} teria que começar por aí.\\ \enquote*{I think that a work, it would have to start from there.} \ex \dots{} é porque existe uma filosofia que \textbf{o preço}\textbf{\tss{i}}, \textbf{ele}\textbf{\tss{i}} tem uma paridade.\\ \enquote*{(It)’s because (there) exists a belief that the price (it) has a parity.} \z \z Though doubling is possible in \gls{NSL}s like \ili{Spanish}, it is inaudible because the subject is the pronominal agreement. \gls{BP}\il{Brazilian Portuguese}, on the other hand, pairs up with \ili{English}, a non-NS language, with null non-referential subjects, and their doubling is similar.\largerpage \ea%46 \label{ex:26.46} \ea \textbf{YO\tss{i}}, com-\textbf{o}\tss{i} pizza. \ex \textbf{ME}\tss{i}, \textbf{I} eat pizza. \ex \textbf{EU}, [\textbf{ô}] como pizza.
\z \z \citet{Roberts1993b} shows that, when \ili{French} became a [$-$NS] language, it also started having subject doubling. A subsequent change in \ili{French} was that the “default” case of its strong pronouns changed from nominative to dative\is{dative case}. \gls{BP}\il{Brazilian Portuguese} retained the same case of the old strong pronouns. \ea%47 \label{ex:26.47}\ili{French} \ea Renars respond: \textbf{Jou, je} n’irai. \ex Et \textbf{jou je} cuit. \ex \textbf{Moi}, je le cuit. \z \z Another similarity to [$-$NS] languages is present in complement contexts. When the embedded subject is a pronoun, \gls{BP}\il{Brazilian Portuguese} is exactly like \ili{English} (EN) in anaphoric interpretation. However, its NS is distinct in interpretation from the NS in \gls{EP}\il{European Portuguese}, a prototypical \gls{NSL}\is{null subject languages}, and similar to the NS in Japanese, a radical type.\is{null subject languages!radical null subject languages} \ea%48 \label{ex:26.48}\ili{Brazilian Portuguese} = \ili{English} \ea {}[ John’s\tss{i} father\tss{k} ]\tss{j} said that he\tss{i/k/j} was stupid. \ex {}[ O pai\tss{i} do João\tss{k} ] disse que ele\tss{i/k/j} era estúpido.\label{ex:26.48b} \z \ex%49 \label{ex:26.49} \ili{Brazilian Portuguese} $\neq$ \ili{European Portuguese}, \ili{Brazilian Portuguese} = \ili{Japanese}\\ {}[ O pai\tss{i} do João\tss{k} ]\tss{i} disse que $\varnothing$\tss{i/*k/j} era estúpido. \z Recall that (\ref{ex:26.48}b) is the form that a pre-school child would produce, while \eqref{ex:26.49} is the one that may be produced by some Brazilians after schooling in formal settings. \subsection{BP vs.\ Icelandic, a semi [$-$NS] language}\label{sec:26.3.5} Up to now, we have been considering three types of \gls{NSL}s: the consistent, like \gls{EP}\il{European Portuguese}; the radical\is{null subject languages!radical null subject languages}, like \ili{Japanese}; and the partial \gls{NSL}\is{null subject languages}, like Finnish. We also saw a prototypical example of a [$-$NS] language, namely \ili{English}. We have now to consider the \emph{semi pro-drop type}, like \ili{German}, namely languages that were defined as having only null expletives. \citet{Biberauer2010} prefers to call these languages \emph{semi null subject (semi-NS) languages.} The author considers that \emph{semi NSLs} deserve a further division between languages like \ili{German} and \ili{Dutch}, which have only true null expletives,\is{expletives} and the \ili{Icelandic} and \ili{Yiddish} type, which also dispose of the NS with weather verbs (cf.\ also \citealt{Huang2000}). If we consider that Brazilian \isi{core grammar} is [$-$NS] with regard to referential subjects and that it disposes of null expletives, we might propose that \gls{BP}\il{Brazilian Portuguese} is actually a \textit{semi [$-$NS]} language, as was defended in \citet{Saab2016}, with both \emph{quasi}-argumental (weather verbs) and true expletive\is{expletives} NSs. What we should point out, however, is the fact that in both types of \emph{semi NS} language, the expletive\is{expletives} can be overt or null \citep{Biberauer2010}, while in Brazil there are no overt expletives, as in consistent \gls{NSL}s. \ea%50 \label{ex:26.50ice}\ili{Icelandic} \ea Overt expletive\is{expletives}\\ \gll það rigndi í gaer.\\ it rained {} yesterday\\ \ex Null expletive\is{expletives}\\ Í gaer rigndi (*það). \z \z However, concerning generic null subjects, \ili{Icelandic} is exactly like \gls{BP}\il{Brazilian Portuguese}.
According to \citet{SigurdssonEgerland2009}, this language has null expletives and, in addition, the following generic types of sentences: (a) \emph{generic}, like generic \ili{English} \emph{you;} (b) \emph{arbitrary}, like \ili{English} \emph{they;} and (c) \emph{specific}, often referring to the speaker or a group including the speaker. \ea%50 \label{ex:26.50}\ili{Icelandic} \parencite[160]{SigurdssonEgerland2009} \ea \gll Í þessari fjölskyldu drekkur þú bara ekki áfengi.\\ in this family drink.\Ssg{} you just not alcohol\\ \glt \enquote*{In this family, one just does not drink alcohol.} \ex \gll Þeir segja að það rigni á morgun.\\ they.\M{} say.\Tpl{} that it rains on morning\\ \glt \enquote*{They say it is going to rain tomorrow.} \ex \gll Menn náðu bófanum um kvöldið.\\ men caught.\Tpl{} culprit.the in evening.\\ \glt \enquote*{They caught the culprit in the evening.} \z \z \gls{BP} can have exactly the same type of generic/arbitrary NSs: \ea%51 \label{ex:26.51}\ili{Brazilian Portuguese} \ea \gll Ali $\varnothing$ não chega em 30 minutos.\\ there {} not arrives in 30 minutes\\ \ex \gll Na nossa família $\varnothing$ não bebe pinga.\\ in our family {} not drinks brandy\\ \ex \gll Eles dizem que $\varnothing$ vai chover amanhã.\\ they say that {} goes to.rain tomorrow\\ \ex \gll $\varnothing$ Pegaram o culpado ontem à noite.\\ (they) caught the culprit yesterday evening\\ \z \z\largerpage[2] What is different with respect to \gls{BP}\il{Brazilian Portuguese} is the variation allowed between the NS and the weak pronouns (\emph{você} and \emph{a gente}), a possibility nonexistent in \ili{Icelandic}.\footnote{As shown before, \gls{BP}\il{Brazilian Portuguese} allows personal sentences with climate verbs: \begin{exe} \exi{(i)} Essas florestas tropicais chovem muito.\\ \enquote*{These rain forests rain.\Tpl{} a lot.} \exi{(ii)} Todos os meus aniversários chovem, porque eu faço aniversário em novembro.\\ \enquote*{All the my birthdays rain.\Tpl{}, because my birthday is in November.}, lit. \enquote*{\dots{} I do birthday in November} \end{exe}} \ea%52 \label{ex:26.52}\ili{Brazilian Portuguese} \ea \gll Ali \textbf{você} não chega em 30 minutos.\\ there you not arrive in 30 minutes\\ \ex \gll Na nossa família \textbf{a gente } não bebe pinga.\\ in our family {we (the folks)} not drinks brandy\\ \ex \textbf{Eles} pegaram o culpado ontem à noite. \z \z It seems, therefore, that \emph{semi} \emph{NS} languages should be split into three types, the last of which has overt referential pronouns, null expletives and null generic subjects. \section{Conclusions}\label{sec:26.4} After examining several empirical and theoretical works related to syntactic phenomena in Brazilian Portuguese, \citet[411]{Roberts1993b} considered that \gls{BP}\il{Brazilian Portuguese} was in fact undergoing a series of deep changes over the past century, which suggested parametric changes in progress. He added that the authors’ privileged patrimony was mainly in the rich “raw material” they worked with, combining quantitative evidence and theoretically inspired hypotheses. The present chapter reports on work done on the NS conducted after \citegen{RobKat1993} edited volume, and contains a reflection on the nature of the NS phenomenon in \gls{BP}\il{Brazilian Portuguese} in light of recent theoretical hypotheses on the NS parameter.
We compared \gls{BP}\il{Brazilian Portuguese} with five language types: (a) the consistent [+NS] type; (b) the radical\is{null subject languages!radical null subject languages} [+NS] type; (c) the partial [+NS] type;\is{null subject languages!partial null subject languages} (d) the [$-$NS] type; and (e) the semi [$-$NS] type.\is{null subject languages!semi null subject languages} The comparison has led to the following summary: \begin{enumerate}[label=(\alph*)] \item except for the expletive\is{expletives} NS, \gls{BP}\il{Brazilian Portuguese} \textit{core} grammar has almost entirely lost any similarities with \gls{EP}\il{European Portuguese}, a consistent \gls{NSL}\is{null subject languages}; \item (i) generic sentences with NSs are similar to the Japanese ones, but \gls{BP}\il{Brazilian Portuguese} generic sentences resort more frequently to personal constructions with \emph{você} and \emph{a} \emph{gente;} (ii) Japanese \isi{raising} structures are superficially similar to the \gls{BP}\il{Brazilian Portuguese} ones, as in the latter they trigger agreement, whereas in Japanese the subject gets the topic marker \emph{-wa}. \item (i) \ili{Finnish} is similar to \gls{BP}\il{Brazilian Portuguese} \textit{written language} in the optionality between referential NS and overt pronouns; (ii) even though \ili{Finnish} and \gls{BP}\il{Brazilian Portuguese} often resort to topicalization, in \gls{BP}\il{Brazilian Portuguese} topics are in SpecTP, triggering agreement, while in \ili{Finnish} they seem to be in SpecFP, an A$'$-position; \item (i) \gls{BP}\il{Brazilian Portuguese} has no lexical expletives or indefinite pronouns like \emph{one} in \ili{English}; (ii) but, in its referential NSs, \gls{BP}\il{Brazilian Portuguese} is exactly like \ili{English} in production and comprehension: a [$-$NS] language. \end{enumerate} In conclusion, the \isi{core grammar} of \gls{BP}\il{Brazilian Portuguese} is (i) a [$-$NS] language with regard to referential subjects; (ii) a [+NS] language of the consistent type with regard to null expletives; and (iii) a [+NS] language of the radical\is{null subject languages!radical null subject languages} type with regard to null generic subjects. As for the system of the literate adult, it maintains the null expletives and null generic subjects of the core grammar, while, with regard to referential expressions, it is partly pronominal (DP), as in the child \isi{core grammar}, and partly [$-$NS], like \ili{English}. \printchapterglossary{} \section*{Acknowledgements} This research has the support of the National Council of Research (CNPq). We thank the editor(s) for the accurate observations and Marcello Marcelino for his careful revision of the first draft of this chapter. {\sloppy \printbibliography[heading=subbibliography,notkeyword=this] } \end{document}
\documentclass[11pt,letterpaper]{article} \usepackage{microtype} \usepackage{authblk} % \usepackage[utf8]{inputenc} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \let\amssquare\square \usepackage{mathtools} % \usepackage{stmaryrd} % \usepackage{tensor} \usepackage[mathscr]{eucal} \usepackage{url} % \usepackage{marvosym} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} %\usepackage{geometry} \usepackage{minted} %\usepackage[finalizecache]{minted} %\usepackage[frozencache]{minted} \usepackage{cite} \usepackage{multicol} \usepackage{tocloft} \usepackage{enumerate} \usepackage{tikz-cd} \usetikzlibrary{positioning} \usetikzlibrary{fit} \usetikzlibrary{cd} \usetikzlibrary{arrows} \usetikzlibrary{calc} \usetikzlibrary{decorations.markings} \tikzset{ed/.style={auto,inner sep=2pt,font=\scriptsize}} %edges \tikzset{>=stealth'} \tikzset{vert/.style={draw,circle, minimum size=6mm, inner sep=0pt, fill=white}} \tikzset{vertbig/.style={draw,circle, minimum size=8mm, inner sep=0pt, fill=white}} \tikzset{->-/.style={decoration={ markings, mark=at position #1 with {\arrow{>}}},postaction={decorate}}} \usepackage[hidelinks]{hyperref} % Should be imported LAST \theoremstyle{plain} \newtheorem{theorem}{Theorem}[subsection] % \newtheorem{axiom}[theorem]{Axiom} \newtheorem*{theoremstar}{Theorem} % \newtheorem{fact}[theorem]{Fact} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} % \newtheorem{convention}[theorem]{Convention} % \newtheorem{construction}[theorem]{Construction} \newtheorem{example}[theorem]{Example} % \newtheorem{examples}[theorem]{Examples} % \newtheorem{notation}[theorem]{Notation} \newtheorem{remark}[theorem]{Remark} % \newtheorem{idea}[theorem]{Idea} % \newtheorem{question}[theorem]{Question} \newcommand{\C}{\mathscr{C}} \newcommand{\homC}{\underline{\C}} \newcommand{\D}{\mathscr{D}} \newcommand{\E}{\mathscr{E}} \newcommand{\M}{\mathscr{M}} \newcommand{\N}{\mathscr{N}} \newcommand{\T}{\mathscr{T}} \renewcommand{\S}{\mathscr{S}} \newcommand{\bN}{\mathbb{N}} \newcommand{\bZ}{\mathbb{Z}} \newcommand{\lenslib}{\texttt{lens}} \newcommand{\Pastro}{\Phi} % \newcommand{\Pastro}{\mathrm{Pastro}} \newcommand{\Double}{\mathcal{D}} % Categories \newcommand{\Set}{\mathbf{Set}} \newcommand{\Cat}{\mathbf{Cat}} %\newcommand{\Hask}{\mathbf{Hask}} \newcommand{\Prof}{\mathbf{Prof}} \newcommand{\Core}{\mathbf{Core}} \newcommand{\MonCat}{\mathbf{MonCat}} \newcommand{\LaxMonCat}{\mathbf{LaxMonCat}} \newcommand{\SymmMonCat}{\mathbf{SymmMonCat}} \newcommand{\StrictSymmMonCat}{\mathbf{StrictSymmMonCat}} \newcommand{\Tele}{\mathbf{Tele}} \newcommand{\StrictTele}{\mathbf{StrictTele}} \newcommand{\Tamb}{\mathbf{Tamb}} \newcommand{\Endo}{\mathbf{Endo}} \newcommand{\Strong}{\mathbf{Strong}} \newcommand{\Point}{\mathbf{Point}} \newcommand{\CoPoint}{\mathbf{CoPoint}} \newcommand{\App}{\mathbf{App}} \newcommand{\Traversable}{\mathbf{Traversable}} \newcommand{\IdxTraversable}{\mathbf{IdxTraversable}} \newcommand{\Act}{\mathbf{Act}} \newcommand{\Optic}{\mathbf{Optic}} \newcommand{\Twoptic}{\mathbf{Optic}^2} \newcommand{\Lawful}{\mathbf{Lawful}} %\newcommand{\SemiOptic}{\mathbf{SemiOptic}} \newcommand{\Lens}{\mathbf{Lens}} \newcommand{\Prism}{\mathbf{Prism}} \newcommand{\Setter}{\mathbf{Setter}} \newcommand{\Traversal}{\mathbf{Traversal}} \newcommand{\Getter}{\mathbf{Getter}} 
\newcommand{\Review}{\mathbf{Review}} \newcommand{\Fold}{\mathbf{Fold}} \newcommand{\IdxLens}{\mathbf{IdxLens}} \newcommand{\IdxTraversal}{\mathbf{IdxTraversal}} \newcommand{\switched}{\mathbin{\tilde{\otimes}}} \newcommand{\conc}{\mathbb{C}} \newcommand{\conctwice}{\mathbb{C}^2} \newcommand{\id}{\mathrm{id}} \newcommand{\op}{\mathrm{op}} \newcommand{\const}{\mathrm{const}} \DeclareMathOperator{\ob}{ob} \DeclareMathOperator{\copr}{copr} \newcommand{\inl}{\mathrm{inl}} \newcommand{\inr}{\mathrm{inr}} \DeclareMathOperator{\im}{im} \newcommand{\act}{\cdot} \newcommand{\codisc}{\mathsf{codisc}} \DeclareMathOperator*{\colim}{\mathrm{colim}} \newcommand{\teletimes}{\mathbin{\boxtimes}} \newcommand{\defeq}{\mathrel{\vcentcolon=}} \newcommand*\circled[1]{\tikz[baseline={([yshift=-0.65ex]current bounding box.center)}]{ \node[shape=circle,draw,inner sep=1pt] (char) {#1};}} \newcommand{\actL}{{\circled{\tiny$\mathsf{L}$}}} \newcommand{\actR}{{\circled{\tiny$\mathsf{R}$}}} \newcommand{\rep}[2]{{\ensuremath \left\langle #1 \mid #2 \right\rangle}} \newcommand{\repthree}[3]{{\ensuremath \langle #1 \mid #2 \mid #3 \rangle}} \newcommand{\repfour}[4]{{\ensuremath \langle #1 \mid #2 \mid #3 \mid #4 \rangle}} \newcommand{\fget}{\textsc{Get}} \newcommand{\fput}{\textsc{Put}} \newcommand{\fmodify}{\textsc{Modify}} \newcommand{\freview}{\textsc{Review}} \newcommand{\fcreate}{\textsc{Create}} \newcommand{\fmatching}{\textsc{Matching}} \newcommand{\funzip}{\textsc{Unzip}} \newcommand{\fover}{\textsc{Over}} \newcommand{\findex}{\textsc{Index}} \newcommand{\mget}{\textsc{MGet}} \newcommand{\mput}{\textsc{MPut}} \newcommand{\munzip}{\textsc{Munzip}} \newcommand{\inside}{\mathsf{inside}} \newcommand{\outside}{\mathsf{outside}} \newcommand{\once}{\mathsf{once}} \newcommand{\twice}{\mathsf{twice}} % Special arrows %\newcommand{\isoto}{\xrightarrow{\cong}} \newcommand{\hto}{\ensuremath{\,\mathaccent\shortmid\rightarrow\,}} \makeatletter \providecommand{\leftsquigarrow}{% \mathrel{\mathpalette\reflect@squig\relax}% } \newcommand{\reflect@squig}[2]{% \reflectbox{$\m@th#1\rightsquigarrow$}% } \makeatother % Draft helpers \newcommand{\todo}[1]{\textcolor{red}{\small #1}} \title{Categories of Optics} \author{Mitchell Riley} \affil{Wesleyan University \\ \texttt{[email protected]}} \date{\vspace{-5ex}} \begin{document} \maketitle \begin{abstract} Bidirectional data accessors such as lenses, prisms and traversals are all instances of the same general `optic' construction. We give a careful account of this construction and show that it extends to a functor from the category of symmetric monoidal categories to itself. We also show that this construction enjoys a universal property: it freely adds counit morphisms to a symmetric monoidal category. Missing in the folklore is a general definition of `lawfulness' that applies directly to any optic category. We provide such a definition and show that it is equivalent to the folklore profunctor optic laws. \end{abstract} \setcounter{tocdepth}{1} \setlength\cftparskip{-8pt} \microtypesetup{protrusion=false} \tableofcontents \microtypesetup{protrusion=true} \section{Introduction} In its most concrete form, a \emph{lens} $S \hto A$ is a pair of maps ${\fget : S \to A}$ and ${\fput : S \times A \to S}$. From an engineering standpoint, such a lens allows us to ``zoom in'' on $S$ to focus on a small part $A$, manipulate $A$ in some way, then ``zoom out'' and have our changes reflected in $S$~\cite{CombinatorsForBidirectionalTreeTransformations}. 
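To make this concrete form tangible, here is a minimal sketch in Haskell (the implementation language of the \lenslib{} library discussed below); the type and field names are ours, chosen for exposition, and are not taken from any library.

\begin{minted}{haskell}
-- A concrete lens on S with focus A: a Get/Put pair, as a plain record.
data ConcreteLens s a = ConcreteLens
  { getL :: s -> a       -- Get : S -> A
  , putL :: (s, a) -> s  -- Put : S x A -> S
  }

-- Example: focusing on the first component of a pair.
fstLens :: ConcreteLens (a, b) a
fstLens = ConcreteLens fst (\((_, b), a') -> (a', b))
\end{minted}

One checks directly that \texttt{fstLens} satisfies the three laws stated next.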
So that our lenses better adhere to this intuitive idea of ``zooming in'', we often want them to satisfy some conditions known as the \emph{lens laws}: \begin{center} \begin{minipage}[b]{0.33333\textwidth} \begin{center} \[ \begin{tikzcd} S \times A \ar[rr, "\fput"] \ar[dr, "\pi_2", swap] && S \ar[dl, "\fget"] \\ & A \end{tikzcd} \] \hspace{0.8cm}$\fput\fget$ \end{center} \end{minipage}% \begin{minipage}[b]{0.33333\textwidth} \begin{center} \[ \begin{tikzcd} S \ar[rr, "{[\id_S, \fget]}"] \ar[dr, "\id_S", swap] && S \times A \ar[dl, "\fput"] \\ & S \end{tikzcd} \] \hspace{-0.6cm}$\fget\fput$ \end{center} \end{minipage}% \begin{minipage}[b]{0.33333\textwidth} \begin{center} \[ \begin{tikzcd} S \times A \times A \ar[r, "\fput \times A"] \ar[d, "\pi_{1, 3}", swap] & S \times A\ar[d, "\fput"] \\ S \times A \ar[r, "\fput", swap] & S \end{tikzcd} \] \quad$\fput\fput$ \end{center} \end{minipage}% \end{center} We call such lenses lawful. The $\fput\fget$ law states that any update to $A$ is represented faithfully in $S$. The $\fget\fput$ law states that if $A$ is not changed then neither is $S$; and finally, the $\fput\fput$ law states that any update to $A$ completely overwrites previous updates. Lenses form a category, with the composition of two lenses $(\fget_1, \fput_1) : T \hto S$ and $(\fget_2, \fput_2) : S \hto A$ as indicated: \begin{align*} \fget &: T \xrightarrow{\fget_1} S \xrightarrow{\fget_2} A \\ \fput &: T \times A \xrightarrow{[\id_T, \fget_1] \times A} T \times S \times A \xrightarrow{T \times \fput_2} T \times S \xrightarrow{\fput_1} T \end{align*} If the two input lenses are lawful then the composite is as well, so we find there is a subcategory of lawful lenses. Lenses were discovered to be just one of a hierarchy of data accessors, including prisms, setters, traversals and more. These are collectively called \emph{optics} and have been best explored in the widely used Haskell \lenslib{} library: see~\cite{LensLibrary}. Each optic variant has a concrete description as a certain collection of maps, with attendant laws under which we consider them well-behaved, similar to the pair $(\fget, \fput)$ above and the lens laws. We begin in Section~\ref{sec:optics} by defining the \emph{category of optics} for a symmetric monoidal category in a sufficiently general way to encompass almost all the optic variants in use in the wild, using lenses as a running example. The category of lenses is precisely the result of this construction when applied to a symmetric monoidal category where the tensor is given by binary product. Section~\ref{sec:lawful-optics} defines the equivalent of the lens laws for a general category of optics. Then in Section~\ref{sec:examples} we see that these generic definitions specialise correctly to the other basic varieties of optic, including the laws. % We include in Section~\ref{sec:mixed-optics} a more speculative discussion of ``mixed optics'', which include indexed and coindexed lenses and traversals. When implementing optics, library authors often use a form known as the \emph{profunctor encoding}, which at first glance is completely different to that given in Section~\ref{sec:optics}. (The Haskell \lenslib{} library itself actually uses a variant called the \emph{van Laarhoven encoding}, for both historical and efficiency reasons.) As this was being written, Milewski~\cite{ProfunctorOpticsPost} and Boisseau and Gibbons~\cite{YouNeeda} independently described the isomorphism between optics and their profunctor encoding.
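For readers coming from the Haskell side, the following sketch indicates the shape of the profunctor encoding in the simplest case of lenses; we declare cut-down \texttt{Profunctor} and \texttt{Strong} classes inline rather than relying on any particular library, so the names here are illustrative.

\begin{minted}{haskell}
{-# LANGUAGE RankNTypes #-}

-- Cut-down classes; real libraries provide richer versions of these.
class Profunctor p where
  dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

class Profunctor p => Strong p where
  second' :: p a b -> p (c, a) (c, b)

-- A profunctor lens: a way of turning actions on the focus into
-- actions on the whole, uniformly in the profunctor p.
type LensP s t a b = forall p. Strong p => p a b -> p s t

-- From a concrete (get, put) pair to the profunctor form.
lensP :: (s -> a) -> (s -> b -> t) -> LensP s t a b
lensP g p = dimap (\s -> (s, g s)) (\(s, b) -> p s b) . second'
\end{minted}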
In Section~\ref{sec:profunctor-optics} we review this isomorphism and verify that the folklore profunctor optic laws are equivalent to lawfulness as defined here. More recently, concrete lenses have found use in compositional game theory~\cite{CompositionalGameTheory}. The $\fget$ function is thought of as mapping observations on the state of play to choices of what move to make. The $\fput$ function computes the utility of the moves that the players choose. There is interest in generalising this to a probabilistic setting, but it is not yet clear what the right replacement for concrete lenses is. Much of what is known about optics is folklore, and careful verification of some of their categorical properties has been lacking, especially when working in categories other than $\Set$ (or $\Set$-like categories such as $\mathbf{Hask}$). The aim of the present paper is to fill this gap, with the hope that a better understanding of the general structure of these categories will make it easier to generalise optics to new and exotic settings. This is particularly important with the advent of linear types in Haskell, enabling a new branch of the lens family tree, and also with the new applications to game theory. \subsection{Contributions} \begin{itemize} \item A careful account of the folklore optic construction in an arbitrary symmetric monoidal category $\C$, which we show extends to a functor $\Optic : \SymmMonCat \to \SymmMonCat$ (Section~\ref{sec:optics}), \item A universal property of the $\Optic$ construction as freely adding counits to a category of `dualisable morphisms' (Section~\ref{sec:teleological-categories}), \item A definition of lawfulness for a general optic category that specialises in the correct way to known cases and allows us to derive concrete laws for new kinds of optic (Section~\ref{sec:lawful-optics}), \item Commentary on the optic variants used most frequently in the wild (Section~\ref{sec:examples}), \item A proof that lawfulness as defined here is equivalent to the folklore profunctor optic laws (Section~\ref{sec:profunctor-optics}). \end{itemize} \subsection{(Co)ends and Yoneda Reduction} In this paper we will make frequent use of the (co)end calculus. For a comprehensive introduction to ends and coends, see~\cite{CoendCofriend}. We write $\copr_X : F(X, X) \to \int^{X \in \M} F(X, X)$ for the structure maps of a coend. The most important results for us regarding ends and coends are: \begin{lemma}[Coend as coequaliser]\label{lemma:calculate-coend} If $\E$ is cocomplete and $\M$ is small, the coend of $P : \M^\op \times \M \to \E$ can be calculated as the coequaliser in the diagram \[ \begin{tikzcd} \displaystyle \coprod_{M \to N} P(N, M) \ar[r,shift left=.75ex] \ar[r,shift right=.75ex] & \displaystyle\coprod_{M \in \M} P(M, M) \ar[r] & \displaystyle\int^{M \in \M} P(M, M) \end{tikzcd} \] \qed \end{lemma} \begin{lemma}[Ninja Yoneda Lemma/Yoneda Reduction]\label{lem:yoneda-reduction} For every functor $K : \C^\op \to \Set$ and $H : \C \to \Set$, we have the following natural isomorphisms: \begin{align*} KX &\cong \int^{C \in \C} KC \times \C(X,C) & KX &\cong \int_{C \in \C} \Set(\C(C,X), KC) \\ HX &\cong \int^{C \in \C} HC \times \C(C,X) & HX &\cong \int_{C \in \C} \Set(\C(X,C), HC) \end{align*} where the isomorphisms are given by inclusion with the identity morphism $\C(X, X)$ for the left two, and evaluation at the identity morphism on the right. 
\qed \end{lemma} \begin{theorem}[Fubini Theorem] For a functor $F : \C^\op \times \C \times \D^\op \times \D \to \E$, there are canonical isomorphisms \begin{align*} \int^{C \in \C} \int^{D \in \D} F(C,C,D,D) \cong \int^{(C,D) \in \C \times \D} F(C,C,D,D) \cong \int^{D \in \D} \int^{C \in \C} F(C,C,D,D) \end{align*} \qed \end{theorem} \begin{lemma}[Mute coends]\label{lem:mute-coend} Consider a functor $F : \C \to \E$ as a functor $\C^\op \times \C \to \E$ that ignores its contravariant argument. Then \[ \int^{C \in \C} F(C) \cong \colim F. \] \qed \end{lemma} \section{Optics}\label{sec:optics} We begin by defining the category of optics for a symmetric monoidal category. This category was first defined in~\cite[Section 6]{Doubles} as the `double' of a monoidal category. There it was used for a completely different purpose---to investigate the relationship between Tambara modules and the `center' of a monoidal category. Our definition is almost identical, the only differences being that we have flipped the direction of the morphisms to match the existing work on lenses and restricted our attention to the unenriched setting. Our definition of optic has as domain and codomain \emph{pairs} of objects of $\C$, one of which behaves covariantly and the other contravariantly. For example, our lenses will be pairs of maps $\fget : S \to A$ and $\fput : S \times A' \to S'$. This generality is important for the applications to game theory, and in fact helps in calculations by making the covariant and contravariant positions more difficult to confuse. Readers more familiar with lenses should ignore the primes. In this section we work with a fixed symmetric monoidal category $(\C, \otimes, I)$, with associator $\alpha$ and unitors $\lambda$ and $\rho$. To avoid getting lost in the notation we will use the standard cheat of omitting associativity morphisms and trust that the dedicated reader could insert them everywhere they are needed. \begin{definition} Given two pairs of objects of $\C$, say $(S, S')$ and $(A, A')$, an \emph{optic} $p : (S, S') \hto (A, A')$ is an element of the set \begin{align*} \Optic_\C((S, S'), (A, A')) := \int^{M \in \C} \C(S, M \otimes A) \times \C(M \otimes A', S') \end{align*} \end{definition} Because this coend takes place in $\Set$, we can use Lemma~\ref{lemma:calculate-coend} to describe $\Optic_\C((S, S'), (A, A'))$ explicitly. It is the set of pairs $(l, r)$, where $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, quotiented by the equivalence relation generated by relations of the form \begin{align*} ((f \otimes A) l, r) \sim (l, r (f \otimes A')) \end{align*} for any $l : S \to M \otimes A$, $r : N \otimes A' \to S'$ and $f : M \to N$. For a pair of maps $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, we write $\rep{l}{r} : (S, S') \hto (A, A')$ for their image in $\Optic_\C((S, S'), (A, A'))$, and say that the object $M$ is the \emph{residual} for this representative. Optics will always be written with a crossed arrow $\hto$ to distinguish them from morphisms of $\C$. The residual $M$ should be thought of as a kind of `scratch space'; information from $S$ that we need to remember to construct $S'$. The quotienting imposed by the coend means we cannot inspect this temporary information, indeed, given an optic $S \hto A$ there is not even a canonical choice for the object $M$ in general. Elements of $\Optic_\C((S, S'), (A, A'))$ have an appealing interpretation as string diagrams with a ``hole'' missing. 
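Before drawing these diagrams, it may help to see the coend rendered in code: an existential type hides the residual, so no consumer can inspect it, which is exactly the effect of the quotient. The following is a sketch specialised to the cartesian monoidal structure on Haskell types; the type is ours and is not the definition used by any library. We then return to the diagrammatic reading.

\begin{minted}{haskell}
{-# LANGUAGE GADTs #-}

-- An optic (S, S') -/-> (A, A') with hidden residual m.
data Optic s s' a a' where
  Optic :: (s -> (m, a))    -- l : S -> M (x) A
        -> ((m, a') -> s')  -- r : M (x) A' -> S'
        -> Optic s s' a a'
\end{minted}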
We draw the pair $\rep{l}{r}$ as
\begin{center} \input{diagrams/generic-optic.tikz} \end{center}
reading left to right, so the portion of the diagram to the left of the line represents $l$ and the right portion $r$. The relation expressed by the coend can be drawn graphically as:
\begin{center} \input{diagrams/coend-relation-left.tikz} \hspace{0.7cm} \raisebox{1.35cm}{$\sim$} \hspace{1cm} \input{diagrams/coend-relation-right.tikz} \end{center}
We will therefore omit the vertical cut between $l$ and $r$ in most subsequent diagrams; any choice yields a representative of the same optic.
A common use of the coend relation is to introduce or cancel isomorphisms. Given $l : S \to M \otimes A$ and $r : M \otimes A' \to S'$, for any isomorphism $f : M \to N$ we have
\begin{align*} \rep{l}{r} = \rep{(f^{-1} \otimes A)(f \otimes A)l}{r} = \rep{(f \otimes A)l}{r(f^{-1} \otimes A')} \end{align*}
Diagrammatically, this is the equality
\begin{center} \input{diagrams/generic-optic.tikz} \hspace{0.8cm} \raisebox{1.5cm}{$=$} \hspace{1cm} \input{diagrams/generic-optic-with-iso.tikz} \end{center}
\begin{example} ~\begin{enumerate}[(1)]
\item For any three objects $M, A, A' \in \C$, there is the \emph{tautological} optic \[t_{M,A,A'} : (M \otimes A, M \otimes A') \hto (A, A')\] given by $\rep{\id_{M \otimes A}}{\id_{M \otimes A'}}$. This would be drawn as follows:
\begin{center} \input{diagrams/tautological-optic.tikz} \end{center}
\item We also have the \emph{identity} optic $\id_{(S, S')} : (S, S') \hto (S, S')$, given by $\rep{\lambda^{-1}_S}{\lambda_{S'}}$, where $\lambda_S : I \otimes S \to S$ is the left unitor for $S$ and similarly for $S'$. The identity optic is drawn as
\begin{center} \input{diagrams/identity-optic-full.tikz} \end{center}
The dashed line above the diagram represents the unit object. It is common in string diagrams to omit unitors and the unit object unless they are necessary to make sense of the diagram. We therefore prefer to draw the identity morphism as:
\begin{center} \input{diagrams/identity-optic.tikz} \end{center}
\end{enumerate} \end{example}
Optics compose as follows. The easiest interpretation is graphical: composition corresponds to substituting the first optic for the hole of the second:
\begin{center} \input{diagrams/generic-optic-noline.tikz} \hspace{0.9cm} \raisebox{1.5cm}{$\circ$} \hspace{1cm} \input{diagrams/generic-optic2-noline.tikz} \\ \raisebox{1.5cm}{$:=$}\qquad \input{diagrams/optic-composition.tikz} \end{center}
More formally, we wish to construct a map
\begin{align*} &\left(\int^{M \in \C} \C(S, M \otimes A) \times \C(M \otimes A', S')\right) \times \left(\int^{N \in \C} \C(R, N \otimes S) \times \C(N \otimes S', R')\right) \\ &\quad \to \int^{M \in \C} \C(R, M \otimes A) \times \C(M \otimes A', R'). \end{align*}
The product in $\Set$ preserves colimits, so in particular coends. Using this fact and the Fubini theorem for coends, the domain is isomorphic to
\begin{align*} \int^{(M, N) \in \C \times \C} \C(S, M \otimes A) \times \C(M \otimes A', S') \times \C(R, N \otimes S) \times \C(N \otimes S', R'). \end{align*}
So by the universal property of coends, it suffices to construct maps
\begin{align*} & \C(S, M \otimes A) \times \C(M \otimes A', S') \times \C(R, N \otimes S) \times \C(N \otimes S', R') \\ & \quad \to \int^{M \in \C} \C(R, M \otimes A) \times \C(M \otimes A', R') \end{align*}
natural in $M$ and $N$.
For these we use the composites
\begin{align*} &\C(S, M \otimes A) \times \C(M \otimes A', S') \times \C(R, N \otimes S) \times \C(N \otimes S', R')\\ \to \,& \C(N \otimes S, N \otimes M \otimes A) \times \C(N \otimes M \otimes A', N \otimes S') \times \C(R, N \otimes S) \times \C(N \otimes S', R') && \text{(functoriality of $N \otimes -$)} \\ \to \,& \C(R, N \otimes M \otimes A) \times \C(N \otimes M \otimes A', R') && \text{(composition in $\C$)} \\ \to \,&\int^{P \in \C} \C(R, P \otimes A) \times \C(P \otimes A', R') && \text{($\copr_{N \otimes M}$)} \end{align*}
Written equationally, suppose $\rep{l'}{r'} : (R, R') \hto (S, S')$ and $\rep{l}{r} : (S, S') \hto (A, A')$ are optics with $M$ the residual for $\rep{l'}{r'}$. The composite $(R, R') \hto (A, A')$ is then:
\[\rep{l}{r} \circ \rep{l'}{r'} := \rep{(M \otimes l)l'}{r'(M \otimes r)}.\]
\begin{proposition}\label{prop:optic-is-cat} The above data form a category $\Optic_\C$. \end{proposition}
\begin{proof} In~\cite[Section 6]{Doubles} this is proven abstractly by exhibiting this category as the Kleisli category for a monad in the bicategory $\Prof$. We prefer a direct proof.
Suppose we have representatives of three optics
\begin{align*} \rep{l_1}{r_1} &: (R, R') \hto (S, S') \\ \rep{l_2}{r_2} &: (S, S') \hto (A, A') \\ \rep{l_3}{r_3} &: (A, A') \hto (B, B'), \end{align*}
that have residuals $M$, $N$ and $P$ respectively. We must choose these representatives simultaneously but, as in the definition of composition, this is allowed by the Fubini theorem. Then:
\begin{align*} (\rep{l_3}{r_3} \circ \rep{l_2}{r_2}) \circ \rep{l_1}{r_1} &= \rep{(N \otimes l_3)l_2}{r_2(N \otimes r_3)} \circ \rep{l_1}{r_1} \\ &= \rep{(M \otimes ((N \otimes l_3)l_2))l_1}{r_1(M \otimes (r_2(N \otimes r_3)))} \\ &= \rep{(M \otimes N \otimes l_3)(M \otimes l_2)l_1}{r_1(M \otimes r_2)(M \otimes N \otimes r_3)} \\ &= \rep{l_3}{r_3} \circ (\rep{(M \otimes l_2)l_1}{r_1(M \otimes r_2)}) \\ &= \rep{l_3}{r_3} \circ (\rep{l_2}{r_2} \circ \rep{l_1}{r_1}) \end{align*}
For the unit laws, suppose we have $\rep{l}{r} : (S, S') \hto (A, A')$ with residual $M$. We calculate:
\begin{align*} \id_{(A, A')} \circ \rep{l}{r} &= \rep{\lambda^{-1}_A}{\lambda_{A'}} \circ \rep{l}{r} \\ &= \rep{(M \otimes \lambda^{-1}_A) l}{r (M \otimes \lambda_{A'})} \\ &= \rep{(\rho^{-1}_M \otimes A) l}{r (\rho_M \otimes A')} \\ &= \rep{l}{r (\rho_M \otimes A') (\rho^{-1}_M \otimes A')} \\ &= \rep{l}{r} \\ \rep{l}{r} \circ \id_{(S, S')} &= \rep{l}{r} \circ \rep{\lambda^{-1}_S}{\lambda_{S'}} \\ &= \rep{(I \otimes l)\lambda^{-1}_S}{\lambda_{S'} (I \otimes r)} \\ &= \rep{(\lambda^{-1}_M \otimes A)l}{r (\lambda_{M} \otimes A')} \\ &= \rep{l}{r (\lambda_{M} \otimes A')(\lambda^{-1}_M \otimes A')} \\ &= \rep{l}{r} \end{align*}
In both cases we have used the coend relation to cancel an isomorphism appearing on both sides of an optic. \end{proof}
Note that the homsets of $\Optic_\C$ are given by a coend indexed by a possibly large category. If $\C$ is small then these coends always exist, but if $\C$ is not small their existence is not guaranteed by the cocompleteness of $\Set$. Because of this we should be careful to only discuss optic categories where we know that the coends exist by some other means, e.g., by exhibiting an isomorphism of $\Optic_\C((S, S'), (A, A'))$ with a set. For all of the examples we give later we provide such an isomorphism.
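In the $\mathbf{Hask}$ sketch given earlier, this composition rule has a direct rendering: the residual of the composite is the pair of the two residuals, and the left and right parts of the outer optic are threaded around the inner one. As before this is only an illustration, with \mintinline{haskell}{Optic} as defined in the earlier sketch.
\begin{minted}{haskell}
-- Substitute the optic (S, S') +-> (A, A') into the hole of
-- (R, R') +-> (S, S').  Writing m for the residual of the second
-- argument and n for that of the first, the composite residual is
-- the pair (m, n), matching [ (M (x) l) l' | r' (M (x) r) ].
composeOptic :: Optic s s' a a' -> Optic r r' s s' -> Optic r r' a a'
composeOptic (Optic l r) (Optic l' r') =
  Optic (\x -> let (m, s) = l' x
                   (n, a) = l s
               in ((m, n), a))
        (\((m, n), a') -> r' (m, r (n, a')))
\end{minted}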
\begin{proposition} If $\C$ is a category with finite products, then $\Lens \defeq \Optic_\C$ is the category of lenses described in the introduction (so long as we restrict to optics of shape $(S, S) \hto (A, A)$). \end{proposition}
\begin{proof} We see that optics correspond to pairs of $\fget$ and $\fput$ functions via the following isomorphisms:
\begin{align*} \Lens((S, S'), (A, A')) &= \int^{M \in \C} \C(S, M \times A) \times \C(M \times A', S') \\ &\cong \int^{M \in \C} \C(S, M) \times \C(S, A) \times \C(M \times A', S') && \text{(universal property of product)} \\ &\cong \C(S, A) \times \C(S \times A', S') && \text{(Yoneda reduction)} \end{align*}
This last step deserves some explanation. We are applying the isomorphism $KX \cong \int^{C \in \C} \C(X,C) \times KC$ of Lemma~\ref{lem:yoneda-reduction} to the case $X = S$ and $K = \C(S, A) \times \C(- \times A', S')$.
Explicitly the isomorphism states that, given an optic $\rep{l}{r} : (S, S') \hto (A, A')$, the corresponding concrete lens is the pair $\fget : S \to A$ and $\fput : S \times A' \to S'$, where $\fget = \pi_2 l$ and $\fput = r (\pi_1 l \times A')$. In the other direction, given $(\fget, \fput)$, the corresponding optic is represented by $\rep{[\id_S, \fget]}{\fput}$.
We leave it to the reader to verify that composition in $\Lens$ corresponds to ordinary composition of concrete lenses by using this isomorphism in both directions. (Of course, there is only one sensible way to compose such a collection of morphisms!) \end{proof}
%The remainder of this section comprises some useful observations that were not made in~\cite{Doubles}.
\begin{proposition}\label{prop:iota-functor} There is a functor $\iota : \C \times \C^\op \to \Optic_\C$, which on objects is given by $\iota(S, S') = (S, S')$ and on morphisms $(f, g) : (S, S') \to (A, A')$ by $\iota(f, g) = \rep{\lambda_A^{-1} f}{g \lambda_{A'}}$. \end{proposition}
\begin{proof} Graphically, this is:
\begin{center} \input{diagrams/iota.tikz} \end{center}
This preserves identities, as the identity on an object $(S, S')$ in $\Optic_\C$ is defined to be exactly $\rep{\lambda^{-1}_S}{\lambda_{S'}}$. To check functoriality, suppose we have $(f, g) : (S, S') \to (A, A')$ and $(f', g') : (A, A') \to (B, B')$ in $\C \times \C^\op$. Then:
\begin{align*} \iota(f', g') \circ \iota(f, g) &= \rep{\lambda^{-1}_B f'}{g' \lambda_{B'}} \circ \rep{\lambda^{-1}_A f}{g \lambda_{A'}} \\ &= \rep{(I\otimes (\lambda^{-1}_B f'))\lambda^{-1}_A f}{g \lambda_{A'} (I\otimes (g' \lambda_{B'}))} && \text{(By definition of $\circ$)}\\ &= \rep{(I \otimes \lambda^{-1}_B) (I \otimes f')\lambda^{-1}_A f}{g \lambda_{A'} (I \otimes g')(I\otimes \lambda_{B'})} && \text{(Functoriality of $I \otimes -$)}\\ &= \rep{(I\otimes \lambda^{-1}_B) \lambda^{-1}_B f' f}{g g' \lambda_{B'} (I\otimes \lambda_{B'})} && \text{(Naturality of $\lambda$)}\\ &= \rep{(\lambda^{-1}_I \otimes B) \lambda^{-1}_B f' f}{g g' \lambda_{B'} (\lambda_I \otimes B')} && \text{(Unitality of action)} \\ &= \rep{\lambda^{-1}_B f' f}{g g' \lambda_{B'} (\lambda_I \otimes B') (\lambda^{-1}_I \otimes B')} && \text{(Coend relation)} \\ &= \rep{\lambda^{-1}_B f'f}{g g' \lambda_{B'}} \\ &= \iota(f'f, gg') \end{align*}
Graphically, there is not much to do:
\begin{center} \input{diagrams/iota-functorial-left.tikz} \qquad \raisebox{0.3cm}{$=$} \qquad \input{diagrams/iota-functorial-right.tikz} \end{center}
\end{proof}
% This functor is not necessarily faithful, see Remark~\ref{lens-iota-not-faithful}.
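Under the same illustrative $\mathbf{Hask}$ encoding (with $\times$ as $(,)$), the isomorphism in the proof of the lens proposition above becomes a pair of conversion functions. Note how \mintinline{haskell}{fromLens} chooses the residual to be $S$ itself; again this is a sketch rather than the interface of any particular library.
\begin{minted}{haskell}
-- Optic to concrete lens: get = pi_2 l and put = r (pi_1 l x A').
toLens :: Optic s s' a a' -> (s -> a, (s, a') -> s')
toLens (Optic l r) = (snd . l, \(s, a') -> r (fst (l s), a'))

-- Concrete lens to optic, with residual s: [ <id, get> | put ].
fromLens :: (s -> a) -> ((s, a') -> s') -> Optic s s' a a'
fromLens get put = Optic (\s -> (s, get s)) put
\end{minted}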
There are some other easy-to-construct optics; specifically, optics out of and into the monoidal unit $(I, I)$. Such maps in a monoidal category are sometimes called states and costates~\cite{CategoricalQuantumMechanics}.
\begin{proposition}\label{prop:costates} The set of costates $(S, S') \hto (I, I)$ is isomorphic to $\C(S, S')$. \end{proposition}
\begin{proof}
\begin{align*} \Optic_\C((S, S'), (I, I)) &= \int^{M \in \C} \C(S, M \otimes I) \times \C(M \otimes I, S') \\ &\cong \int^{M \in \C} \C(S, M) \times \C(M, S') \\ &\cong \C(S, S') \end{align*}
by Yoneda reduction, so a costate $\rep{l}{r} : (S, S') \hto (I, I)$ corresponds to the morphism $rl : S \to S'$, and a morphism $f : S \to S'$ corresponds to the costate $\rep{\rho_S^{-1}}{f \rho_S} : (S, S') \hto (I, I)$.
\end{proof}
In particular, for any $S \in \C$, the identity $\id_S$ yields an optic $c_S = \rep{\rho_S^{-1}}{\rho_S} : (S, S) \hto (I, I)$ that we call the \emph{connector}:
%\begin{center}
% \input{diagrams/connector-full.tikz}
%\end{center}
%Or, again omitting the unitors:
\begin{center} \input{diagrams/connector.tikz} \end{center}
\begin{proposition}\label{prop:states} Suppose the monoidal unit $I$ of $\C$ is terminal. Then the set of states $(I, I) \hto (A, A')$ is isomorphic to $\C(I, A)$. \end{proposition}
\begin{proof} First, note that
\begin{align*} \Optic_\C((I,I), (A,A')) &= \int^{M \in \C} \C(I, M \otimes A) \times \C(M \otimes A', I) \\ &\cong \int^{M \in \C} \C(I, M \otimes A) \end{align*}
as $I$ is terminal. The interior of this coend is mute in the contravariant position, so the coend is equal to the colimit of the functor $\C(I, - \otimes A) : \C \to \Set$ by Lemma~\ref{lem:mute-coend}. But $\C$ has terminal object $I$, so
\begin{align*} \int^{M \in \C} \C(I, M \otimes A) &\cong \colim \C(I, - \otimes A) \\ &\cong \C(I, I \otimes A) \\ &\cong \C(I, A) \end{align*}
Explicitly, a state $f : I \to A$ in $\C$ corresponds to the optic $\rep{\lambda_A^{-1} f}{!_{I \otimes A'}} : (I, I) \hto (A, A')$, where $!_{I \otimes A'} : I \otimes A' \to I$ is the unique map. \end{proof}
The remainder of this section comprises a proof of the following fact:
\begin{theorem}\label{thm:optic-functor} The $\Optic_\C$ construction extends to a functor \[\Optic : \SymmMonCat \to \SymmMonCat,\] where $\SymmMonCat$ denotes the (1-)category of (small) symmetric monoidal categories and strong symmetric monoidal functors. \end{theorem}
\begin{proposition}\label{prop:change-of-action-monoidal} A monoidal functor $F : \C \to \D$ induces a functor $\Optic(F) : \Optic_\C \to \Optic_\D$, given on objects by $\Optic(F)(S, S') = (FS, FS')$ and on morphisms $\rep{l}{r} : (S, S') \hto (A, A')$ by
\begin{align*} \Optic(F)(\rep{l}{r}) := \rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}}, \end{align*}
where $\phi_{M,A} : FM \otimes FA \to F(M \otimes A)$ and $\phi_I : I \to FI$ denote the structure maps of the monoidal functor.
Graphically:
\begin{center} \input{diagrams/induced-by-functor.tikz} \end{center}
\end{proposition}
\begin{proof} This preserves identities:
\begin{align*} &\Optic(F)(\id_{(S, S')}) \\ &= \Optic(F)(\rep{\lambda^{-1}_S}{\lambda_{S'}}) &&\text{(Definition of $\id$)} \\ &= \rep{\phi^{-1}_{I,S} (F\lambda^{-1}_S)}{(F\lambda_{S'}) \phi_{I,S'}} && \text{(Definition of $\Optic(F)$)} \\ &= \rep{(\phi_I^{-1} \otimes FS) \phi^{-1}_{I,S} (F\lambda^{-1}_S)}{(F\lambda_{S'}) \phi_{I,S'}(\phi_I \otimes FS') } && \text{(Introducing isomorphism to both sides)} \\ &= \rep{\lambda^{-1}_{FS}}{\lambda_{FS'}} &&\text{($F$ is a monoidal functor)} \\ &= \id_{(FS, FS')} \end{align*}
And given two optics $\rep{l}{r} : (S, S') \hto (A, A')$ and $\rep{l'}{r'} : (R, R') \hto (S, S')$ with residuals $M$ and $M'$, it preserves composition:
\begingroup \allowdisplaybreaks
\begin{align*} &\Optic(F)(\rep{l}{r} \circ \rep{l'}{r'}) \\ &\qquad \text{(Definition of $\circ$)} \\ &= \Optic(F)(\rep{(M' \otimes l)l'}{r'(M' \otimes r)}) \\ &\qquad \text{(Definition of $\Optic(F)$)} \\ &= \rep{\phi^{-1}_{M' \otimes M,A} F((M' \otimes l)l')}{F(r'(M' \otimes r)) \phi_{M' \otimes M,A'}} \\ &\qquad \text{(Functoriality of $F$)} \\ &= \rep{\phi^{-1}_{M' \otimes M,A} F(M' \otimes l)(Fl')}{(Fr') F(M' \otimes r) \phi_{M' \otimes M,A'}} \\ &\qquad \text{(Introducing isomorphism to both sides)} \\ &= \rep{(\phi^{-1}_{M', M} \otimes FA)\phi^{-1}_{M' \otimes M,A} F(M' \otimes l)(Fl')}{(Fr') F(M' \otimes r) \phi_{M' \otimes M,A'}(\phi_{M', M} \otimes FA')} \\ &\qquad \text{(Hexagon axiom for $F$)} \\ &= \rep{(FM' \otimes \phi^{-1}_{M,A})\phi^{-1}_{M',M \otimes A}(F(M' \otimes l)) (Fl')}{(Fr') (F(M' \otimes r)) \phi_{M',M \otimes A'} (FM' \otimes \phi_{M,A'})} \\ &\qquad \text{(Naturality of $\phi$)} \\ &= \rep{(FM' \otimes \phi^{-1}_{M,A})(FM' \otimes Fl)\phi^{-1}_{M',S} (Fl')}{(Fr') \phi_{M',S'} (FM' \otimes Fr) (FM' \otimes \phi_{M,A'})} \\ &\qquad \text{(Functoriality of $\otimes$)} \\ &= \rep{(FM' \otimes \phi^{-1}_{M,A} (Fl))(\phi^{-1}_{M',S} (Fl'))}{((Fr') \phi_{M',S'})(FM' \otimes (Fr) \phi_{M,A'})} \\ &\qquad \text{(Definition of $\circ$)} \\ &= \rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}} \circ \rep{\phi^{-1}_{M',S} (Fl')}{(Fr') \phi_{M',S'}} \\ &\qquad \text{(Definition of $\Optic(F)$)} \\ &= \Optic(F)(\rep{l}{r}) \circ \Optic(F)(\rep{l'}{r'}) \end{align*}
\endgroup
The critical move is adding the isomorphism $\phi_{M', M}$ to both sides of the coend relation, so that the hexagon axiom for $F$ may be applied.
\end{proof}
\begin{lemma}\label{lem:iota-commute-with-opticf} $\iota$ commutes with $\Optic(F)$, in the sense that \[ \Optic(F)(\iota(f, g)) = \iota(Ff, Fg) \] \end{lemma}
\begin{proof} This is a straightforward calculation:
\begin{align*} & \Optic(F)(\iota(f, g)) \\ &\qquad \text{(Definition of $\iota$)} \\ &= \Optic(F)(\rep{\lambda_A^{-1} f}{g \lambda_{A'}}) \\ &\qquad \text{(Definition of $\Optic(F)$)} \\ &= \rep{\phi^{-1}_{I,A} (F(\lambda_A^{-1} f))}{(F(g \lambda_{A'})) \phi_{I,A'}} \\ &\qquad \text{(Functoriality of $F$)} \\ &= \rep{\phi^{-1}_{I,A} (F\lambda_A^{-1}) (Ff)}{(Fg)(F \lambda_{A'}) \phi_{I,A'}} \\ &\qquad \text{(Introducing $\phi_I$ to both sides)} \\ &= \rep{(\phi^{-1}_I \otimes FA) \phi^{-1}_{I,A} (F\lambda_A^{-1}) (Ff)}{(Fg)(F \lambda_{A'}) \phi_{I,A'} (\phi_I \otimes FA')} \\ &\qquad \text{($F$ is a monoidal functor)} \\ &= \rep{\lambda_{FA}^{-1} (Ff)}{(Fg)\lambda_{FA'}} \\ &\qquad \text{(Definition of $\iota$)} \\ &= \iota(Ff, Fg) \end{align*}
\end{proof}
\begin{proposition}\label{prop:iota-naturality} $\iota : \C \times \C^\op \to \Optic_\C$ ``lifts natural isomorphisms'', in the following sense. Given monoidal functors $F, G : \C \to \D$ and a monoidal natural isomorphism $\alpha : F \Rightarrow G$, there is an induced natural isomorphism $\Optic(\alpha) : \Optic(F) \Rightarrow \Optic(G)$ with components:
\begin{align*} {\Optic(\alpha)}_{(S, S')} &: (FS, FS') \to (GS, GS') \\ {\Optic(\alpha)}_{(S, S')} &:= \iota(\alpha_{S}, \alpha^{-1}_{S'}) \end{align*}
\end{proposition}
\begin{proof} Suppose $\phi$ and $\psi$ are the structure maps for $F$ and $G$ respectively. We just have to show naturality, i.e.\ that for $p : (S, S') \hto (A, A')$ in $\Optic_\C$, the equation \[\Optic(\alpha)_{(A, A')} \circ \Optic(F)(p) = \Optic(G)(p) \circ \Optic(\alpha)_{(S, S')}\] holds. Suppose $p = \rep{l}{r}$ with residual $M$. On the left we have:
\begin{center} \input{diagrams/iota-lift-step1.tikz} \end{center}
We use the coend relation to place an $\alpha$ on either side:
\begin{center} \input{diagrams/iota-lift-step2.tikz} \end{center}
And then monoidality of $\alpha$ to commute it past $\phi$.
\begin{center} \input{diagrams/iota-lift-step3.tikz} \end{center}
Finally, $\alpha$ commutes with $F l$ and $F r$ by naturality.
\begin{center} \input{diagrams/iota-lift-step4.tikz} \end{center}
This is the diagram for $\Optic(G)(p) \circ {\Optic(\alpha)}_{(S, S')}$.
\end{proof}
\begin{theorem} $\Optic_\C$ is symmetric monoidal, where $(S, S') \otimes (T, T') = (S \otimes T, S' \otimes T')$, the unit object is $(I, I)$, and the action on a pair of morphisms $\rep{l}{r} : (S, S') \hto (A, A')$ and $\rep{l'}{r'} : (T, T') \hto (B, B')$ is given by:
\begin{center} \input{diagrams/tensor-on-morphisms.tikz} \end{center}
\end{theorem}
\begin{proof} Suppose the two optics have residuals $M$ and $N$ respectively.
Written equationally, their tensor is:
\begin{align*} \rep{l}{r} \otimes \rep{l'}{r'} &:= \rep{(M \otimes s_{A,N} \otimes B)(l \otimes l')}{(r \otimes r')(M \otimes s_{N,A'} \otimes B')} \end{align*}
This does not depend on the choice of representatives, as demonstrated by the equivalence of the following diagrams:
\begin{center} \input{diagrams/tensor-on-morphisms-defined-left.tikz} \input{diagrams/tensor-on-morphisms-defined-right.tikz} \end{center}
To check functoriality of $\otimes$, suppose we have optics
\begin{align*} \rep{l_1}{r_1} : (S_1, S_1') &\hto (S_2, S_2') \\ \rep{l_2}{r_2} : (S_2, S_2') &\hto (S_3, S_3') \\ \rep{p_1}{q_1} : (T_1, T_1') &\hto (T_2, T_2') \\ \rep{p_2}{q_2} : (T_2, T_2') &\hto (T_3, T_3'). \end{align*}
The string diagram for $(\rep{l_2}{r_2} \circ \rep{l_1}{r_1}) \otimes (\rep{p_2}{q_2} \circ \rep{p_1}{q_1})$ is:
\begin{center} \input{diagrams/tensor-functorial-left.tikz} \end{center}
And for $(\rep{l_2}{r_2} \otimes \rep{p_2}{q_2}) \circ (\rep{l_1}{r_1} \otimes \rep{p_1}{q_1})$ is:
\begin{center} \input{diagrams/tensor-functorial-right.tikz} \end{center}
These two diagrams are equivalent: we can use the naturality of the symmetry morphism to push $l_2$ and $r_2$ past the crossing to be next to $p_2$ and $q_2$ respectively. This creates two extra twists that can be cancelled in the center of the diagram.
The structure morphisms are all lifted from the structure morphisms in $\C \times \C^\op$:
\begin{align*} \alpha_{(R, R'), (S, S'), (T, T')} &:= \iota(\alpha_{R,S,T}, \alpha_{R',S',T'}^{-1}) \\ \lambda_{(S, S')} &:= \iota(\lambda_{S}, \lambda_{S'}^{-1}) \\ \rho_{(S, S')} &:= \iota(\rho_{S}, \rho_{S'}^{-1}) \\ s_{(S, S'), (T, T')} &:= \iota(s_{S, T}, s_{T', S'}) \end{align*}
Note that because $\iota(S, S') = (S, S')$, the equations required to hold for $\iota$ to be a monoidal functor hold by definition (although we don't yet know that $\Optic_\C$ is monoidal). The pentagon and triangle equations then hold in $\Optic_\C$, as they are the image of the same diagrams in $\C \times \C^\op$ under $\iota$. The only remaining thing to verify is that these structure maps are natural in $\Optic_\C$, but this follows from the previous proposition.
\end{proof}
% In the Haskell \lenslib{} library, the monoidal product on optics is denoted ``\mintinline{haskell}{alongside}'' for the product and ``\mintinline{haskell}{without}'' for the coproduct.
\begin{proposition} For monoidal $F : \C \to \D$, the induced $\Optic(F) : \Optic_\C \to \Optic_\D$ is also monoidal. \end{proposition}
\begin{proof} The structure morphisms for monoidality are given by lifting the structure morphisms for $F$:
\begin{align*} \phi_{(S, S'), (T, T')} &:= \iota(\phi_{S, T}, \phi^{-1}_{S', T'}) &&: F(S, S') \otimes F(T, T') \hto F((S, S') \otimes (T, T')) \\ \phi &:= \iota(\phi_I, \phi^{-1}_I) &&: (I, I) \hto F(I, I) \end{align*}
The monoidality axioms follow by lifting the axioms for $F$ and naturality follows by Proposition~\ref{prop:iota-naturality}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:optic-functor}] The functor is well defined on its domain: if $\C$ is small then $\Optic_\C$ exists. The only property left to check is functoriality, i.e.\ that for monoidal functors $F : \C \to \D$ and $G : \D \to \E$ we have \[ \Optic(G) \circ \Optic(F) = \Optic(G \circ F).\] On objects this is clear, as $\Optic(F)(S, S') = (FS, FS')$.
On a morphism $\rep{l}{r} : (S, S') \hto (A, A')$ in $\C$, we check: \begin{align*} (\Optic(G) \circ \Optic(F))(\rep{l}{r}) &= \Optic(G) \left(\rep{\phi^{-1}_{M,A} (Fl)}{(Fr) \phi_{M,A'}}\right) \\ &= \rep{\psi^{-1}_{FM,FA} (G(\phi^{-1}_{M,A} (Fl)))}{(G((Fr) \phi_{M,A'}))\psi_{FM,FA'}} \\ &= \rep{\psi^{-1}_{FM,FA} (G\phi^{-1}_{M,A}) (GFl)}{(GFr) (G\phi_{M,A'})\psi_{FM,FA'}} \\ &=\Optic(G \circ F)(\rep{l}{r}) \end{align*} where $\phi$ and $\psi$ denote the structure maps for $F$ and $G$ respectively, and in the last step we use that $(G\phi_{M,A'})\psi_{FM,FA'}$ is by definition the structure map for $G \circ F$. Checking that the identity is preserved is similar. % %Secondly, we note that $\Optic$ acts functorially on 2-cells. The action on 2-cells is given by Proposition~\ref{prop:iota-naturality}, and functoriality follows by the functoriality of $\iota$. % %Finally, we check that horizontal composition is preserved. Horizontal composition in $\SymmMonCat$ is defined in terms of whiskering and vertical composition, so it suffices to verify that whiskering is preserved. But indeed it is: %\begin{align*} %{\Optic(F\alpha)}_{(S, S')} %&= \iota({(F\alpha)}_{S}, {(F\alpha)}^{-1}_{S'}) \\ %&= \iota(F(\alpha_{S}), F(\alpha_{S'}^{-1})) \\ %&= {(\Optic(F)\Optic(\alpha))}_{(S, S')} %\end{align*} %by Lemma~\ref{lem:iota-commute-with-opticf}, and %\begin{align*} %{\Optic(\alpha G)}_{(S, S')} %&= \iota(\alpha_{GS}, \alpha_{GS'}^{-1}) \\ %&= \iota({(\alpha G)}_{S}, {(\alpha G)}_{S'}^{-1}) \\ %&= {(\Optic(\alpha)\Optic(G))}_{(S, S')} %\end{align*} \end{proof} This doesn't extend to a strict 2-functor $\SymmMonCat \to \SymmMonCat$, as there is only an action of $\Optic$ on natural \emph{isomorphisms}. It is however functorial on natural isomorphisms, giving a 2-functor on the `homwise-core' of $\SymmMonCat$. We do not explore this any further in the present note. \begin{proposition} If $\C$ is a strict symmetric monoidal category then $\Optic_\C$ is strict, and $\iota : \C \times \C^\op \to \Optic_\C$ is a strict monoidal functor. For $F : \C \to \D$ a strict monoidal functor, the induced functor $\Optic(F) : \Optic_\C \to \Optic_\D$ is also strict. \end{proposition} \begin{proof} The structure maps of $\Optic_\C$ are given by $\iota$ applied to the structure maps of $\C$. If the latter are identities, then so are the former---the identity morphisms in $\Optic_\C$ are by definition $\iota(\id_S, \id_{S'})$. That $\iota$ is strict is clear, as the structure morphisms in $\Optic_\C$ are exactly the structure morphisms in $\C \times \C^\op$ under $\iota$. Finally, the structure morphisms of $\Optic(F)$ are lifted from $F$, so if the latter is strict then so is the former. \end{proof} \subsection{Teleological Categories}\label{sec:teleological-categories} In this section we establish a universal property of the $\Optic$ construction. 
The idea is that every optic $\rep{l}{r} : (S, S') \hto (A, A')$ consists of a morphism $S \to M \otimes A$ and the `formal dual' of a morphism $M \otimes A' \to S'$, composed with a `formal counit' that traces out the object $M$:
\begin{center} \input{diagrams/generic-optic-folded.tikz} \end{center}
It will be convenient to equip $\Optic_\C$ with a slightly different symmetric monoidal structure:
\begin{definition} The \emph{switched} monoidal product on $\Optic_\C$ is given on objects by
\begin{align*} (S, S') \switched (T, T') := (S \otimes T, T' \otimes S') \end{align*}
And on morphisms $\rep{l}{r} : (S, S') \hto (A, A')$ and $\rep{l'}{r'} : (T, T') \hto (B, B')$ by:
\begin{center} \input{diagrams/switched-tensor-on-morphisms.tikz} \end{center}
\end{definition}
The universal property for $\Optic_\C$ given in this section is an argument for this being the ``morally correct'' tensor, although it does seem a little strange. When we later discuss lawful optics, we are forced to use the unswitched tensor to maintain the invariant that our objects are of the form $(X, X)$.
\begin{proposition} $(\Optic_\C, \switched, (I, I))$ is a symmetric monoidal category.
%that is monoidally equivalent to $\Optic_\C$ with the unswitched tensor.
\end{proposition}
\begin{proof} The proof that $(\Optic_\C, \switched, (I, I))$ is symmetric monoidal is nearly identical to that for the unswitched tensor. Note that due to the switching, the structure morphisms are slightly different:
\begin{align*} \alpha_{(R, R'), (S, S'), (T, T')} &:= \iota(\alpha_{R,S,T}, \alpha_{T',S',R'}) \\ \lambda_{(S, S')} &:= \iota(\lambda_{S}, \rho_{S'}^{-1}) \\ \rho_{(S, S')} &:= \iota(\rho_{S}, \lambda_{S'}^{-1}) \\ s_{(S, S'), (T, T')} &:= \iota(s_{S, T}, s_{S', T'}) \end{align*}
% The categories are monoidally equivalent via the identity functor \[\id : (\Optic_\C, \switched, (I, I)) \to (\Optic_\C, \otimes, (I, I)),\] where the structure isomorphisms making this functor monoidal are given by
% \begin{align*}
% \iota(\id_{S \otimes S'}, s_{T, T'}) : \id(S, S') \otimes \id(T, T') \to \id((S, S') \switched (T, T'))
% \end{align*}
\end{proof}
\begin{remark} Just as in the unswitched case, if $\C$ is a strict monoidal category then so is $(\Optic_\C, \switched, (I, I))$. \end{remark}
We now define the structure on a symmetric monoidal category universally provided by the $\Optic$ construction.
\begin{definition}[Compare {\cite[Definition 5.1]{CoherenceForLenses}}] A \emph{teleological category} is a symmetric monoidal category $(\T, \teletimes, I)$, equipped with:
\begin{itemize}
\item A symmetric monoidal subcategory $\T_d$ of \emph{dualisable morphisms} containing all the objects of $\T$, with an involutive symmetric monoidal functor ${(-)}^* : \T_d \to \T_d^\op$, where---not finding a standard symbol for such a thing---we mean $\T_d^\op$ to be the category with both the direction of the arrows \emph{and} the order of the tensor flipped: ${(A \teletimes B)}^* \cong B^* \teletimes A^*$. Note that there is therefore also a canonical isomorphism $\phi : I \cong I^*$.
\item A symmetric monoidal extranatural family of morphisms $\varepsilon_X : X \teletimes X^* \to I$, called \emph{counits}, natural with respect to the \emph{dualisable} morphisms.
\end{itemize}
\end{definition}
Unpacking the definition, $\varepsilon$ being a symmetric monoidal extranatural transformation amounts to the following diagrams in $\T$ commuting:
\[ \begin{tikzcd} X \teletimes Y^* \ar[r, "f \teletimes Y^*"] \ar[d, "X \teletimes f^*", swap] & Y \teletimes Y^* \ar[d, "\varepsilon_Y"] \\ X \teletimes X^* \ar[r, "\varepsilon_X", swap] & I \end{tikzcd} \hspace{1cm} \begin{tikzcd} X^* \teletimes X \ar[r, "s"] \ar[d, "\cong" swap] & X \teletimes X^* \ar[d, "\varepsilon_X"] \\ X^* \teletimes {(X^*)}^* \ar[r, "\varepsilon_{X^*}", swap] & I \end{tikzcd}\]
\[ \begin{tikzcd}[column sep = large] X \teletimes Y \teletimes Y^* \teletimes X^* \ar[r, "X \teletimes \varepsilon_Y \teletimes X^*"] \ar[d, "\cong" swap] & X \teletimes X^* \ar[d, "\varepsilon_X"] \\ X \teletimes Y \teletimes (X \teletimes Y)^* \ar[r, "\varepsilon_{X \teletimes Y}", swap] & I \end{tikzcd} \hspace{1cm} \begin{tikzcd} I \teletimes I^* \ar[r,"I \teletimes \phi^{-1}"] \ar[dr, swap, "\varepsilon_I"] & I \teletimes I \ar[d, "\cong"] \\ & I \end{tikzcd} \]
where $f : X \to Y$ is dualisable.
%%%% The following is a version with duality given by reflection.
% \begin{definition}[{\cite[Definition 5.1]{CoherenceForLenses}}]
% A \emph{teleological category is} a symmetric monoidal category $(\C, \otimes, I)$, equipped with:
% \begin{itemize}
% \item A wide symmetric monoidal subcategory $\C_d$ of \emph{dualisable morphisms}, with an involutive strong symmetric monoidal functor $(-)^* : \C_d \to \C_d^\op$; and,
% \item A family of morphisms $\varepsilon_X : X \otimes X^* \to I$, called \emph{counits}, natural with respect to the morphisms in $\C_d$, such that
% \[
% \begin{tikzcd}
% X \otimes Y^* \ar[r, "f \otimes Y^*"] \ar[d, "X \otimes f^*", swap] & Y \otimes Y^* \ar[d, "\varepsilon_Y"] \\
% X \otimes X^* \ar[r, "\varepsilon_X", swap] & I
% \end{tikzcd} \hspace{1cm}
% \begin{tikzcd}
% X^* \otimes X \ar[r, "s_{X^*, X}"] \ar[dr, "\varepsilon_{X^*}", swap] & X \otimes X^* \ar[d, "\varepsilon_X"] \\
% & I
% \end{tikzcd}
% \]
% \[
% \begin{tikzcd}
% X \otimes Y \otimes X^* \otimes Y^* \ar[r, "X \otimes s_{Y,X^*} \otimes X^*"] \ar[dr, "\varepsilon_{X \otimes Y}", swap] & X \otimes X^* \otimes Y \otimes Y^* \ar[d, "\varepsilon_X \otimes \varepsilon_Y"] \\
% & I
% \end{tikzcd}
% \]
% commute for all $f : X \to Y$ is in $\C_d$.
% \end{itemize}
% \end{definition}
Note that because $\T_d$ is symmetric monoidal and has the same collection of objects as $\T$, the symmetric monoidal structure morphisms of $\T$ must be contained in $\T_d$ and so are dualisable.
\begin{example} ~\begin{enumerate}[(1)]
\item Any compact closed category is a teleological category, where every morphism is dualisable and the unit morphisms have been forgotten.
\item Any symmetric monoidal category with terminal monoidal unit is trivially teleological, setting the dualisable morphisms to be all isomorphisms.
\end{enumerate} \end{example}
This definition of teleological category differs from the original given in~\cite{CoherenceForLenses}, in that the duality switches the order of the tensor product. We do this so that compact closed categories are teleological, but the bookkeeping does admittedly become more confusing.
%\begin{example}
% \todo{The funny graph example from the coherence paper?}
%\end{example}
\begin{definition} A \emph{teleological functor} $F : \T \to \S$ is a symmetric monoidal functor that restricts to a functor $F_d : \T_d \to \S_d$ on the dualisable subcategories, commutes with the duality via a monoidal natural isomorphism $d_X : F(X^*) \to {(FX)}^*$, and such that the counits are preserved:
\[ \begin{tikzcd}[column sep = large] F(X \teletimes X^*) \ar[r, "\phi_{X, X^*}"] \ar[d, "F\varepsilon_X", swap] & FX \teletimes F(X^*) \ar[r, "FX \teletimes d_X"] & FX \teletimes (FX)^* \ar[d, "\varepsilon_{FX}"] \\ FI \ar[rr, "\phi_I", swap] & & I \end{tikzcd} \]
\end{definition}
Together we have $\Tele$, the category of teleological categories and teleological functors. There are evident functors
\begin{align*} U &: \Tele \to \SymmMonCat \\ {(-)}_d &: \Tele \to \SymmMonCat \end{align*}
that take a teleological category to its underlying symmetric monoidal category and subcategory of dualisable morphisms respectively.
The definition of teleological category suggests a string diagram calculus similar to that for compact closed categories, but where only counits are allowed and only morphisms known to be dualisable may be passed around a counit. We have of course not proven that such a calculus is sound for teleological categories, but we trust that a sceptical reader could verify our arguments equationally.
\begin{proposition} $\Optic_\C$ forms a teleological category, where:
\begin{itemize}
\item The dualisable morphisms are all morphisms of the form $\iota(f, g)$;
\item The involution is given on objects by ${(S, S')}^* := (S', S)$, and on morphisms by ${\iota(f, g)}^* := \iota(g, f)$;
\item The counit $\varepsilon_{(S, S')} : (S, S') \switched {(S, S')}^* = (S \otimes S', S \otimes S') \hto (I, I)$ is given by the connector: \[\varepsilon_{(S, S')} := c_{S \otimes S'}.\]
\end{itemize}
\end{proposition}
\begin{proof} That morphisms of the form $\iota(f, g)$ constitute a symmetric monoidal subcategory is clear: they are the image of the symmetric monoidal functor $\iota$. The functor ${(-)}^*$ is a symmetric monoidal involution, in fact it is strictly so:
\begin{align*} {\left( (S, S') \switched (T, T') \right)}^* &= {\left( S \otimes T, T' \otimes S' \right)}^* \\ &= {\left(T' \otimes S', S \otimes T \right)} \\ &= (T', T) \switched (S', S) \\ &= {(T, T')}^* \switched {(S, S')}^* \end{align*}
To check extranaturality of $\varepsilon$, suppose we have a dualisable optic $\iota(f, g) : (S, S') \hto (T, T')$, so $f : S \to T$ and $g : T' \to S'$. Happily, all the switching in the definitions cancels out! Extranaturality is witnessed by the equality of the string diagrams:
\begin{center} \input{diagrams/counit-extranatural-left.tikz} \qquad \raisebox{1.5cm}{$=$} \qquad \input{diagrams/counit-extranatural-right.tikz} \end{center}
Symmetry of $\varepsilon$ is witnessed by:
\begin{center} \input{diagrams/counit-symmetry-left.tikz} \qquad \raisebox{1.5cm}{$=$} \qquad \input{diagrams/counit-symmetry-right.tikz} \end{center}
And for monoidality of $\varepsilon$ there is essentially nothing to do in the graphical calculus:
\begin{center} \input{diagrams/counit-monoidal-left.tikz} \qquad \raisebox{2cm}{$=$} \qquad \input{diagrams/counit-monoidal-right.tikz} \end{center}
Note that the diagrams that are required to commute in the definition of teleological category all terminate with the unit $I$, so in view of Proposition~\ref{prop:costates} we should not be surprised that they correspond to equality of maps in $\C$.
\end{proof}
\begin{proposition} The functor $\Optic : \SymmMonCat \to \SymmMonCat$ of Theorem~\ref{thm:optic-functor} extends to a functor to $\Tele$. \end{proposition}
\begin{proof} We have seen that $\Optic_\C$ is always teleological. We must show that for a symmetric monoidal functor $F : \C \to \D$, the induced functor $\Optic(F) : \Optic_\C \to \Optic_\D$ is teleological. That $\Optic(F)$ preserves the dualisable morphisms is exactly Lemma~\ref{lem:iota-commute-with-opticf}. It also preserves the counits:
\begin{align*} &\Optic(F)(\varepsilon_{(S, S')}) \\ &= \Optic(F)(c_{S \otimes S'}) && \text{(Definition of the counit)} \\ &= \Optic(F)(\rep{\rho_{S \otimes S'}^{-1}}{\rho_{S \otimes S'}}) && \text{(Definition of $c$)} \\ &= \rep{\phi^{-1}_{S \otimes S',I} (F \rho_{S \otimes S'}^{-1})}{(F \rho_{S \otimes S'}) \phi_{S \otimes S',I}} && \text{(Definition of $\Optic(F)$)} \\ &= \rep{(F(S \otimes S') \otimes \phi_I^{-1}) \phi^{-1}_{S \otimes S',I} (F \rho_{S \otimes S'}^{-1})}{(F \rho_{S \otimes S'}) \phi_{S \otimes S',I} (F(S \otimes S') \otimes \phi_I)} && \text{(Introduce $\phi_I$ to both sides)} \\ &= \rep{\rho_{F(S \otimes S')}^{-1}}{\rho_{F(S \otimes S')}} && \text{($F$ is monoidal)} \\ &= c_{F(S \otimes S')} && \text{(Definition of the connector)} \end{align*}
Composing with the structure isomorphism $\iota(\phi_{S,S'}, \phi^{-1}_{S,S'})$ of $\Optic(F)$ identifies $c_{F(S \otimes S')}$ with $c_{FS \otimes FS'} = \varepsilon_{(FS, FS')}$, which is exactly the compatibility with counits required of a teleological functor.
%For a natural transformation $\alpha : F \Rightarrow F'$, the induced natural transformation $\Optic(\alpha) : \Optic(F) \Rightarrow \Optic(F')$ has components of the form $\iota(\alpha_{S}, \alpha^{-1}_{S'})$ that are dualisable by definition.
%Compatibility with the dualisation functor is immediate: for any object $(S, S')$, we have $({\Optic(\alpha)}_{(S, S')})^* = {\Optic(\alpha)}_{(S, S')^*}$ on the nose.
\end{proof}
We will establish the universal property in the somewhat contrived case of \emph{strict} symmetric monoidal categories and \emph{strict} monoidal functors, but anticipate that this result could be generalised to non-strict symmetric monoidal categories at the cost of checking far more coherences.
\begin{definition} A teleological category is \emph{strict} if it is strict as a symmetric monoidal category and ${(-)}^*$ is a strict monoidal involution, so ${(A \teletimes B)}^* = B^* \teletimes A^*$ and $I^* = I$, and also ${(A^*)}^* = A$. A teleological functor is \emph{strict} if it is strict as a symmetric monoidal functor and strictly preserves the duality and counits. \end{definition}
We have previously noted that $\Optic_\C$ is strict monoidal if $\C$ is, and that in that case the duality is strict. There are functors
\begin{align*} \Optic &: \StrictSymmMonCat \to \StrictTele \\ U &: \StrictTele \to \StrictSymmMonCat \\ {(-)}_d &: \StrictTele \to \StrictSymmMonCat \end{align*}
The crux is the following proposition that decomposes every optic in a canonical way.
\begin{proposition}\label{prop:optic-decompose} Suppose $\rep{l}{r} : (S, S') \hto (A, A')$ has residual $M$. Then
\begin{align*} \rep{l}{r} = ((A, I) \switched \varepsilon_{(M, I)} \switched (I, A'))(j(s_{M,A}l) \switched {j(rs_{A',M})}^*) \end{align*}
where $j : \C \to \Optic_\C$ is the functor $j(A) := \iota(A, I)$.
\end{proposition}
The symmetries in the above expression could have been avoided if $\Optic$ had been defined as $\int^{M \in \C} \C(S, A \otimes M) \times \C(A' \otimes M, S')$, but it is too late to change the convention now!
\begin{proof} First note that because $\C$ is strict monoidal, the counit $\varepsilon_{(M, I)} : (M \otimes I, M \otimes I) = (M, M) \hto (I, I)$ is equal to the connector $c_M : (M, M) \hto (I, I)$.
Then, up to strictness of the monoidal unit, we are composing the two optics
\begin{center} \input{diagrams/optic-decomposed-outer.tikz} \end{center}
and
\begin{center} \input{diagrams/optic-decomposed-inner.tikz} \end{center}
so the two pairs of twists cancel, and we are left exactly with the diagram for $\rep{l}{r}$.
\end{proof}
This also holds for monoidal categories that are not necessarily strict, if the unit object and unitors are inserted in the appropriate places.
\begin{proposition} Suppose $(\C, \otimes, I)$ is a strict symmetric monoidal category and $(\T, \teletimes, I, {(-)}^*, \varepsilon)$ is a strict teleological category. Given a strict symmetric monoidal functor $F : \C \to \T_d$, there exists a unique strict teleological functor $K : \Optic_\C \to \T$ with the property $Kj = F$. \end{proposition}
\begin{proof} We construct $K$ as follows. Note that any object $(S, S')$ in $\Optic_\C$ can be written uniquely as $j(S) \switched {j(S')}^*$, so we are forced to define $K(S, S') = FS \teletimes {(FS')}^*$.
Suppose $\rep{l}{r} : (S, S') \hto (A, A')$ is an optic. By the previous proposition,
\begin{align*} \rep{l}{r} = ((A, I) \switched \varepsilon_{(M, I)} \switched (I, A'))(j(s_{M,A}l) \switched {j(rs_{A', M})}^*) \end{align*}
So if a $K$ with $Kj = F$ exists, it must hold that
\begin{align*} K\rep{l}{r} &= K((A, I) \switched \varepsilon_{(M, I)} \switched (I, A')) K(j(s_{M,A}l) \switched {j(rs_{A', M})}^*) \\ &\qquad\text{($K$ is monoidal)} \\ &= (K(A, I) \teletimes K\varepsilon_{(M, I)} \teletimes K(I, A')) (K(j(s_{M,A}l)) \teletimes K({j(rs_{A', M})}^*)) \\ &\qquad\text{($K$ preserves the counit and duality)} \\ &= (K(A, I) \teletimes \varepsilon_{K(M,I)} \teletimes K(I, A')) (K(j(s_{M,A}l)) \teletimes {K(j(rs_{A', M}))}^*) \\ &\qquad\text{($K$ satisfies $Kj = F$)} \\ &= (FA \teletimes \varepsilon_{FM} \teletimes {(FA')}^*) (F(s_{M,A}l) \teletimes {(F(rs_{A', M}))}^*) \end{align*}
We therefore take
\[ K\rep{l}{r} = (FA \teletimes \varepsilon_{FM} \teletimes {(FA')}^*) (F(s_{M,A}l) \teletimes {(F(rs_{A', M}))}^*) \]
as our definition of $K$. The diagram for $K\rep{l}{r}$ in $\T$ is as follows:
\begin{center} \input{diagrams/k-generic-optic.tikz} \end{center}
% \begin{align*}
% K\rep{(f \otimes A) l}{r}
% &= (FA \otimes \varepsilon_{FM} \otimes (FA')^*)(F(s_{M,A}(f \otimes A)l) \otimes (F(rs_{A',M}))^* ) \\
% &= (FA \otimes \varepsilon_{FM} \otimes (FA')^*)(F((A \otimes f)s_{N,A}l) \otimes (F(rs_{A',M}))^* ) \\
% % &= (FA \otimes \varepsilon_{FM} \otimes (FA')^*)(FA \otimes Ff \otimes FM \otimes FA')(F(s_{N,A}l) \otimes (F(rs_{A',M}))^* ) \\
% &= (FA \otimes (\varepsilon_{FM} (Ff \otimes FM)) \otimes (FA')^*)(F(s_{N,A}l) \otimes (F(rs_{A',M}))^* ) \\
% &= (FA \otimes (\varepsilon_{FN} (FN \otimes (Ff)^*)) \otimes (FA')^*)(F(s_{N,A}l) \otimes (F(rs_{A',M}))^* ) \\
% &= (FA \otimes \varepsilon_{FN} \otimes (FA')^*)(F(s_{N,A}l) \otimes (F(rs_{A',M}(A' \otimes f)))^* ) \\
% &= (FA \otimes \varepsilon_{FN} \otimes (FA')^*)(F(s_{N,A}l) \otimes (F(r(f \otimes A')s_{A',N}))^* ) \\
% &= K\rep{ l}{r (f \otimes A')}
% \end{align*}
It remains to show that $K$ so defined is indeed a strict teleological functor.
There are several things to check:
\begin{itemize}
\item Well-definedness: Suppose we have two optics related by the coend relation:
\begin{align*} \rep{(f \otimes A) l}{r} = \rep{l}{r (f \otimes A')} \end{align*}
Then well-definedness is shown by the equivalence of diagrams
\begin{center} \input{diagrams/k-well-defined-left.tikz} \qquad \raisebox{1.5cm}{$=$} \qquad \input{diagrams/k-well-defined-right.tikz} \end{center}
using naturality of the symmetry and extranaturality of the counit.
\item Functoriality: We have an equivalence of diagrams
\begin{center} \input{diagrams/k-functorial-left.tikz} \quad \raisebox{1.5cm}{$=$} \quad \input{diagrams/k-functorial-right.tikz} \end{center}
using naturality of the symmetry and monoidality of the counit.
\item Monoidality:
\begin{align*} K((S, S') \switched (T, T')) &= K(S \otimes T, T' \otimes S') \\ &= F(S \otimes T) \teletimes {F(T' \otimes S')}^* \\ &= FS \teletimes FT \teletimes {(FS')}^* \teletimes {(FT')}^* \\ &= FS \teletimes {(FS')}^* \teletimes FT \teletimes {(FT')}^* \\ &= K(S, S') \teletimes K(T, T') \end{align*}
and
\begin{align*} K(I, I) &= FI \teletimes {(FI)}^* \\ &= I \teletimes I^* \\ &= I \end{align*}
%That these obey the required coherences is straightforward.
%More difficult is showing that the first isomorphism is natural in $(S, S')$ and $(T, T')$: \todo{todo}
\item Preservation of duals:
\begin{align*} K({(S, S')}^*) = K(S', S) = FS' \teletimes {(FS)}^* = {(FS \teletimes {(FS')}^*)}^* = {(K(S, S'))}^* \end{align*}
\item Preservation of dualisable morphisms: For a morphism $\iota(f, g)$:
\begin{align*} K(\iota(f, g)) &= K(\rep{\lambda_A^{-1} f}{g \lambda_{A'}}) \\ &= (FA \teletimes \varepsilon_{FI} \teletimes {(FA')}^*)(F(s_{I,A}\lambda_A^{-1} f) \teletimes {(F(g \lambda_{A'}s_{A', I}))}^* ) \\ &= (FA \teletimes {(FA')}^*)(Ff \teletimes {(Fg)}^* ) \\ &= Ff \teletimes {(Fg)}^* \end{align*}
and this is dualisable, as dualisability is preserved by taking the monoidal product and duals.
\item Preservation of counits:
\begin{align*} K(\varepsilon_{(S, S')}) &= K(c_{S \otimes S'}) \\ &= K(\rep{\rho_{S \otimes S'}^{-1}}{\rho_{S \otimes S'}}) \\ &= (FI \teletimes \varepsilon_{F(S \otimes S')} \teletimes {(FI)}^*)(F(s_{S \otimes S',I}\rho_{S \otimes S'}^{-1}) \teletimes (F(\rho_{S \otimes S'} s_{I, S \otimes S'}))^* ) \\ &= (\varepsilon_{F(S \otimes S')})(F(S \otimes S') \teletimes {(F(S \otimes S'))}^* ) \\ &= \varepsilon_{F(S \otimes S')} \\ &= \varepsilon_{FS \teletimes FS'} \\ &= \varepsilon_{FS}(FS \teletimes \varepsilon_{FS'} \teletimes {(FS)}^*) \\ &= \varepsilon_{FS}(FS \teletimes \varepsilon_{{(FS')}^*} \teletimes {(FS)}^*) \\ &= \varepsilon_{FS \teletimes {(FS')}^*} \\ &= \varepsilon_{K(S, S')} \end{align*}
The critical move is applying the equality $\varepsilon_{FS'} = \varepsilon_{{(FS')}^*}$, which follows because $\varepsilon$ is a symmetric monoidal transformation and the duality is strict.
\end{itemize}
%To conclude biadjointness it remains to show that for any symmetric monoidal category $\C$ and teleological category $\T$, restriction along $j : \C \to \Optic_\C$ defines an equivalence of categories \[\Tele(\Optic_\C, \T) \simeq \SymmMonCat(\C, \T_d). \]
%The restriction of a functor $K$ along $j$ indeed has its image in the subcategory $\T_d$, as by definition $j(f) = \iota(f, \id_I)$ is dualisable and $K$ preserves dualisable morphisms.
This completes the verification, so $K$ is the unique strict teleological functor with $Kj = F$.
%The construction of $K$ given earlier establishes that the restriction functor is essentially surjective.
%Given two teleological functors $K, L : \Optic_\C \to \T$, the whiskering of a teleological natural transformation $\alpha : K \Rightarrow L$ by $j$ is a well defined natural transformation $\beta : K j \Rightarrow L j$ in $\SymmMonCat(\C, \T_d)$ as the components of teleological natural transformations are required to be dualisable.
%
%We provide an inverse to this whiskering operation as follows. If $\beta : K j \Rightarrow L j$ is a monoidal natural transformation, define $\alpha : K \Rightarrow L$ to be the natural transformation with components:
%\begin{align*}
%K(S, S') = K(j(S) \switched {j(S')}^*) \xrightarrow{\phi} Kj(S) \teletimes K({j(S')}^*) \xrightarrow{\beta_S \teletimes \beta_{S'}^*} Lj(S) \teletimes L({j(S')}^*) \xrightarrow{\phi^{-1}} L(j(S) \switched {j(S')}^*) = L(S, S')
%\end{align*}
%\todo{Not quite right, we should be using $d_X$}
%We leave showing that this is an inverse to the reader.
%
%\todo{pseudonaturality of that equivalence of categories is missing}
\end{proof}
\begin{theorem}\label{thm:optic-is-free-teleological-cat} $\Optic : \StrictSymmMonCat \to \StrictTele$ is left adjoint to the `underlying dualisable morphisms' functor ${(-)}_d : \StrictTele \to \StrictSymmMonCat$. \end{theorem}
\begin{proof} Precomposition with $j$ gives a function
\begin{align*} \StrictTele(\Optic_\C, \T) \to \StrictSymmMonCat(\C, \T_d) \end{align*}
and the previous proposition states that this is an isomorphism. This is automatically natural in $\T$. Naturality in $\C$ follows by Lemma~\ref{lem:iota-commute-with-opticf}.
\end{proof}
\begin{remark} The above theorem and its proof have much in common with~\cite[Proposition 5.2]{JoyalStreetVerity}, which gave a similar universal property for their $\mathrm{Int}$ construction on traced monoidal categories. \end{remark}
Working with strict monoidal categories made it significantly easier to prove the universal property. There is likely to be a 2-categorical universal property of $\Optic$ for non-strict monoidal categories, so long as we restrict our attention to the sub-2-category $\SymmMonCat_\mathrm{homcore}$ of $\SymmMonCat$ that only contains natural isomorphisms. We leave this to future work:
\begin{definition} A \emph{teleological natural isomorphism} $\alpha : F \Rightarrow G$ is a monoidal natural isomorphism whose components are all dualisable and that is additionally compatible with the dualisation:
\[ \begin{tikzcd} {(FX)}^* \ar[r, "\cong"] \ar[d, "(\alpha_X)^*", swap] & F(X^*) \ar[d, "\alpha_{X^*}"] \\ {(GX)}^* \ar[r, "\cong", swap] & G(X^*) \end{tikzcd} \]
There is a (strict) 2-category $\Tele$ consisting of teleological categories, functors and natural isomorphisms. \end{definition}
\begin{conjecture} \[ \Optic : \SymmMonCat_\mathrm{homcore} \to \Tele \] is left biadjoint to \[(-)_d : \Tele \to \SymmMonCat_\mathrm{homcore}\] \end{conjecture}
\subsection{Optics for a Monoidal Action}
To capture more of the optic variants available in the Haskell \lenslib{} library, we generalise to the case of a monoidal action of one category on another.
\begin{definition} Let $\C$ be a category and $(\M, \otimes, I)$ a monoidal category. An \emph{action of $\M$ on $\C$} is a monoidal functor $a : \M \to [\C, \C]$. For two objects $M \in \M$ and $A \in \C$, the action $a(M)(A)$ is abbreviated $M \act A$.
\end{definition}
Given such an action, we define
\begin{align*} \Optic_\M((S, S'), (A, A')) := \int^{M \in \M} \C(S, M \act A) \times \C(M \act A', S') \end{align*}
This subsumes the earlier definition, taking $\M = \C$ and having $\C$ act on itself via left-tensor:
\begin{align*} a : \C &\to [\C, \C] \\ X &\mapsto X \otimes - \end{align*}
We henceforth write this case as $\Optic_\otimes$, to emphasise the action on $\C$ that is used.
\begin{proposition} We have a category $\Optic_\M$ and a functor $\iota : \C \times \C^\op \to \Optic_\M$ defined analogously to Propositions~\ref{prop:optic-is-cat} and~\ref{prop:iota-functor}. \qed \end{proposition}
\begin{definition} Given two categories equipped with monoidal actions $(\M, \C)$ and $(\N, \D)$, a \emph{morphism of actions} is a monoidal functor $F^\bullet : \M \to \N$ and a functor $F : \C \to \D$ that commutes with the actions, in the sense that there exists a natural isomorphism
\begin{align*} \phi_{M,A} &: F(M \act A) \to (F^\bullet M) \act (F A) \end{align*}
satisfying conditions analogous to those for a monoidal functor.
%A morphism of actions is \emph{lax} if $\phi$ and $\phi_{M,A}$ are not necessarily invertible, and \emph{oplax} if they face the other direction. \todo{Do I need laxness in $F^\bullet$?}
\end{definition}
\begin{proposition}\label{prop:change-of-action} If $F : (\M, \C) \to (\N, \D)$ is a morphism of actions, there is an induced functor $\Optic(F) : \Optic_\M \to \Optic_\N$. \qed \end{proposition}
For the remainder of the paper we work in this more general setting.
\section{Lawful Optics}\label{sec:lawful-optics}
Typically we want our optics to obey certain laws. The `constant-complement' perspective suggests declaring an optic $\rep{l}{r}$ to be lawful if $l$ and $r$ are mutual inverses. There are a couple of issues with this definition. Firstly, it is not invariant under the coend relation, so the condition holding for one representative is no guarantee that it holds for any other. Still, we might say that an optic is lawful if it has \emph{some} representative that consists of mutual inverses. In our primary example of an optic variant, lenses in $\Set$, this does indeed correspond to the concrete lens laws. Secondly, however, this fact relies on some extra structure possessed by $\Set$: the existence of pullbacks, and that all objects (other than the empty set) have a global element.
In this section we make a different definition of lawfulness that at first seems strange, but which in the case of lenses corresponds \emph{exactly} to the three concrete lens laws with no additional assumptions on $\C$ required. As further justification for this definition, in Section~\ref{sec:profunctor-optics} we will see an interpretation of (unlawful) optics as maps between certain comonoid objects. Lawfulness in our sense corresponds exactly to this map being a comonoid homomorphism.
The optic laws only make sense for optics of the form $p : (S,S) \hto (A, A)$. In this section we will abbreviate $\Optic_\M((S, S), (A, A))$ as $\Optic_\M(S, A)$ and $p : (S, S) \hto (A, A)$ as $p : S \hto A$.
\begin{remark} We use $;$ to denote composition in $\C$ in diagrammatic order.
The reason for this is that the coend relation can be applied simply by shifting the position of $\mid$ in a representative: \begin{align*} \rep{l;(\phi \act A)}{r} = \rep{l}{(\phi \act A);r} \end{align*} %\todo{Should I switch the entire paper into using diagrammatic order?} \end{remark} Let $\Twoptic_\M(S, A)$ denote the set \[ \int^{M_1, M_2 \in \M} \C(S, M_1 \act A) \times \C(M_1 \act A, M_2 \act A) \times \C(M_2 \act A, S). \] Using the universal property of the coend, we define three maps: \begin{align*} \outside &: \Optic_\M(S, A) \to \C(S, S) \\ \once, \twice &: \Optic_\M(S, A) \to \Twoptic_\M(S, A) \end{align*} by \begin{align*} \outside(\rep{l}{r}) &= l;r \\ \once(\rep{l}{r}) &= \repthree{l}{\id_{M\act A}}{r} \\ \twice(\rep{l}{r}) &= \repthree{l}{r;l}{r} \end{align*} \begin{definition} An optic $p : S \hto A$ is \emph{lawful} if \begin{align*} \outside(p) &= \id_S \\ \once(p) &= \twice(p) \end{align*} \end{definition} Returning to ordinary lenses, we can show that this is equivalent to the laws we expect. \begin{proposition}\label{prop:lawful-lens-laws} A concrete lens described by $\fget$ and $\fput$ is lawful (in our sense) iff it obeys the three concrete lens laws. \end{proposition} \begin{proof} We begin by giving $\Twoptic_\times(S, A)$ the same treatment as we did $\Optic_\times(S, A)$. Using the universal property of the product and Yoneda reduction twice each, we have: \begin{align*} \Twoptic_\times(S, A) &= \int^{M_1, M_2 \in \C} \C(S, M_1 \times A) \times \C(M_1 \times A, M_2 \times A) \times \C(M_2 \times A, S) \\ &\cong \int^{M_1, M_2 \in \C} \C(S, M_1) \times \C(S, A) \times \C(M_1 \times A, M_2 \times A) \times \C(M_2 \times A, S) \\ &\cong \int^{M_2 \in \C} \C(S, A) \times \C(S \times A, M_2 \times A) \times \C(M_2 \times A, S) \\ &\cong \int^{M_2 \in \C} \C(S, A) \times \C(S \times A, M_2) \times \C(S \times A, A) \times \C(M_2 \times A, S) \\ &\cong \C(S, A) \times \C(S \times A, A) \times \C(S \times A \times A, S) \end{align*} % Call this last set $\conctwice_\times(S, A)$. Written equationally, the isomorphism $\Phi : \Twoptic_\times(S, A) \to \conctwice_\times(S, A)$ is given by: Written equationally, the isomorphism $\Phi : \Twoptic_\times(S, A) \to \C(S, A) \times \C(S \times A, A) \times \C(S \times A \times A, S)$ is given by: \begin{align*} \Phi(\repthree{l}{c}{r}) = (\quad&l;\pi_2, \\ &(l;\pi_1 \times A);c;\pi_2, \\ &((l;\pi_1 \times A);c;\pi_1 \times A);r \quad ) \end{align*} Now suppose we are given a lens $p$ that corresponds concretely to $(\fget, \fput)$, so $p = \rep{[\id_S, \fget]}{\fput}$. Evaluating $\outside$ on this gives: \begin{align*} \outside(\rep{[\id_S, \fget]}{\fput}) = [\id_S, \fget];\fput \end{align*} so requiring $\outside(p) = \id_S$ is precisely the $\fget\fput$ law. We now have to slog through evaluating $\Phi(\once(p))$ and $\Phi(\twice(p))$. 
\begingroup \allowdisplaybreaks
\begin{alignat*}{3}
\Phi(\once(\rep{[\id_S, \fget]}{\fput})) &= \Phi(&&\repthree{[\id_S, \fget]}{\id_{S \times A}}{\fput}) \\
&= (&& [\id_S, \fget];\pi_2, \\
&&& ( [\id_S, \fget];\pi_1 \times A);\id_{S \times A};\pi_2, \\
&&& (( [\id_S, \fget];\pi_1 \times A);\id_{S \times A};\pi_1 \times A) ; \fput \quad) \\
%
&= (&&\fget, \\
&&& \pi_2, \\
&&& \pi_{1,3} ;\fput \quad) \\
%%%%
%%%%
\Phi(\twice(\rep{[\id_S, \fget]}{\fput})) &= \Phi(&&\repthree{[\id_S, \fget]}{\fput;[\id_S, \fget]}{\fput}) \\
&= (&& [\id_S, \fget];\pi_2, \\
&&& ([\id_S, \fget];\pi_1 \times A);\fput;[\id_S, \fget];\pi_2, \\
&&& (([\id_S, \fget];\pi_1 \times A);\fput;[\id_S, \fget];\pi_1 \times A);\fput \quad) \\
%
&= (&&\fget, \\
&&& (\id_S \times A);\fput;\fget, \\
&&& ((\id_S \times A);\fput \times A);\fput \quad) \\
%
&= (&&\fget, \\
&&& \fput;\fget, \\
&&& (\fput \times A);\fput \quad)
\end{alignat*}
\endgroup
Here $\pi_{1,3} : S \times A \times A \to S \times A$ denotes the projection onto the first and third factors. So comparing component-wise, $\Phi(\once(p))$ being equal to $\Phi(\twice(p))$ is exactly equivalent to the $\fput\fget$ and $\fput\fput$ laws holding.
\end{proof}
We can also check when some of our other basic optics are lawful.
\begin{proposition} If $p = \rep{l}{r} : S \hto A$ is an optic such that $l$ and $r$ are mutual inverses, then $p$ is lawful. \end{proposition}
\begin{proof} The conditions are easy to check:
\begin{align*} \outside(\rep{l}{r}) &= l;r = \id_S \\ \twice(\rep{l}{r}) &= \repthree{l}{r;l}{r} \\ &= \repthree{l}{\id_{M\act A}}{r} \\ &= \once(\rep{l}{r}) \end{align*}
\end{proof}
\begin{corollary}\label{cor:iota-lawful} If $f : S \to A$ and $g : A \to S$ are mutual inverses, then $\iota(f, g) : S \hto A$ is a lawful optic, so $\iota$ restricts to a functor $\iota : \mathrm{Core}(\C) \to \Lawful_\M$. \qed \end{corollary}
\begin{corollary}\label{cor:tautological-lawful} For any two objects $A \in \C$ and $M \in \M$, the tautological optic $M \act A \hto A$ is lawful. \qed \end{corollary}
\begin{proposition} A costate $p : (S, S) \hto (I, I)$ corresponding to a morphism $f : S \to S$ via Proposition~\ref{prop:costates} is lawful iff $f = \id_S$. \end{proposition}
\begin{proof} The first law states that $\outside(p) = \id_S$, so if $p$ is lawful we have \[ \id_S = \outside(\rep{\rho_S^{-1}}{\rho_S;f}) = \rho_S^{-1};\rho_S;f = f \] On the other hand, if $f = \id_S$ then $\rep{\rho_S^{-1}}{\rho_S}$ is lawful because its components are mutual inverses. \end{proof}
%\begin{remark}
% Jeremy Gibbons notes (originally in the case of ordinary lenses) that the first law is equivalent to requiring that $p$ composed with the connector $c_A$ is equal to the connector $c_S$. This is an appealing description from a string diagram standpoint, but it is not clear whether there is a similar description for the second law.
% \todo{Something to do with embedding $\Optic_\otimes$ into a compact closed category then using the unit to compose the lenses vertically? Should I draw a picture of this?}
%\end{remark}
\begin{proposition}\label{prop:lawful-category} There is a subcategory $\Lawful_\M$ of $\Optic_\M$ given by objects of the form $(S, S)$ and lawful optics between them. \end{proposition}
\begin{proof} This will follow from our description of lawful profunctor optics later, but we give a direct proof. The identity optic is lawful as by definition it has a representative $\rep{\lambda_S^{-1}}{\lambda_S}$ consisting of mutual inverses. We just have to show that lawfulness is preserved under composition.
Suppose we have two lawful optics $\rep{l}{r} : R \hto S$ and $\rep{l'}{r'} : S \hto A$ with residuals $M$ and $N$ respectively. We must show that $\rep{l;(M\act l')}{(M\act r');r}$ is also a lawful optic. Showing the first law is straightforward:
\begin{align*}
\outside(\rep{l; (M\act l')}{(M\act r') ; r})
&= l ; (M \act l') ; (M \act r') ; r \\
&= l ; (M \act (l';r')) ; r \\
&= l ; (M \act \id_{N \act A}) ; r \\
&= l ; r \\
&= \id_R
\end{align*}
For the second law, we must show that
\[ \repthree{ l;(M\act l')}{(M\act r'); r;l;(M\act l')}{(M\act r') ; r} = \repthree{l;(M\act l')}{\id_{M \act N \act A}}{(M\act r') ; r}. \]
The idea is that, by the lawfulness of $\rep{l}{r}$ and $\rep{l'}{r'}$, there are chains of coend relations that prove
\begin{align*}
\repthree{l}{r;l}{r} &= \repthree{l}{\id_{M \act S}}{r} \\
\repthree{l'}{r';l'}{r'} &= \repthree{l'}{\id_{N \act A}}{r'}
\end{align*}
The result is achieved by splicing these chains of relations together in the following way. Consider one of the generating relations $\repthree{l;(\phi \act S)}{c}{r} = \repthree{l}{(\phi \act S) ; c}{r}$ in $\Twoptic_\M(R, S)$, where $\phi : M \to M'$. By the functoriality of the action, we calculate:
\begin{align*}
&\repthree{l;(\phi \act S);(M' \act l')}{(M'\act r'); c ;(M\act l')}{(M\act r') ; r} \\
&= \repthree{l;(M \act l');(\phi \act N \act A)}{(M' \act r'); c ;(M \act l')}{(M \act r') ; r} && \text{(functoriality)} \\
&= \repthree{l;(M \act l')}{(\phi \act N \act A);(M' \act r'); c ;(M \act l')}{(M \act r') ; r} && \text{(coend relation)} \\
&= \repthree{l;(M \act l')}{(M \act r');(\phi \act S); c ;(M \act l')}{(M \act r') ; r} && \text{(functoriality)}
\end{align*}
And similarly for the other generating relation, $\repthree{l}{c;(\phi \act S)}{r} = \repthree{l}{c}{(\phi \act S); r}$. So indeed by replicating the same chain of relations that proves $\repthree{l}{r;l}{r} =\repthree{l}{\id_{M \act S}}{r}$, we see
\begin{align*}
\repthree{l;(M \act l')}{(M \act r');r;l;(M \act l')}{(M \act r') ; r}
&= \repthree{l;(M \act l')}{(M \act r');\id_{M \act S};(M \act l')}{(M \act r') ; r} \\
&= \repthree{l;(M \act l')}{M \act (r';l')}{(M \act r') ; r}.
\end{align*}
Now that the $r;l$ in the center has been cleared away, we turn to the chain of relations proving $\repthree{l'}{r';l'}{r'} = \repthree{l'}{\id_{N\act A}}{r'}$. A generating relation $\repthree{l';(\psi \act A)}{c'}{r'} = \repthree{l'}{(\psi \act A) ; c'}{r'}$ in $\Twoptic_\M(S, A)$ implies that
\begin{align*}
\repthree{l;(M \act (l';(\psi \act A)))}{M \act c'}{(M \act r') ; r}
&= \repthree{l;(M\act l');(M \act \psi \act A)}{M \act c'}{(M \act r') ; r} \\
&= \repthree{l;(M \act l')}{(M \act \psi \act A) ; (M \act c')}{(M\act r') ; r} \\
&= \repthree{l;(M \act l')}{M \act ((\psi \act A);c')}{(M \act r') ; r}
\end{align*}
And similarly for the generating relation on the other side. So again we can replicate the chain of relations proving $\repthree{l'}{r';l'}{r'} = \repthree{l'}{\id_{N\act A}}{r'}$ to show that
\begin{align*}
\repthree{l;(M \act l')}{M \act (r';l')}{(M \act r') ; r}
&= \repthree{l;(M \act l')}{M \act \id_{N\act A}}{(M \act r') ; r} \\
&= \repthree{l;(M \act l')}{\id_{M \act N \act A}}{(M \act r') ; r}
\end{align*}
as required. We conclude that composition preserves lawfulness, so $\Lawful_\M$ is indeed a subcategory of $\Optic_\M$.
\end{proof}
\begin{proposition}
In the case that $\C$ is symmetric monoidal and $\M = \C$ acts by left-tensor, $\Lawful_\otimes$ is symmetric monoidal with the unswitched tensor.
\end{proposition} This would of course make no sense with the switched tensor, as the tensor of two objects would typically no longer be of the form $(X, X)$. \begin{proof} Due to Corollary~\ref{cor:iota-lawful}, the structure maps of $\Optic_\M$ are all lawful. We just have to check that $\otimes : \Optic_\M \times \Optic_\M \to \Optic_\M$ restricts to a functor on $\Lawful_\M$. Given two lawful optics $p : S \hto A$ and $q : T \hto B$, the first law for $p \otimes q$ follows immediately from the first law for $p$ and $q$. To prove the second law, we follow the same strategy as used in the previous proposition: the two chains of relations proving $p$ and $q$ lawful can be combined to prove the law for $p \otimes q$. %Given two lawful optics $\rep{l}{r} : S \hto A$ and $\rep{l'}{r'} : T \hto B$ with residuals $N$ and $M$, the first law is easy to check. %\begin{align*} %\outside(\rep{l}{r} \otimes \rep{l'}{r'}) %&= \outside(\rep{(l \otimes l');(M \otimes s_{A,N} \otimes B)}{(M \otimes s_{A,N} \otimes B');(r \otimes r')}) \\ %&= (l \otimes l');(M \otimes s_{A,N} \otimes B);(M \otimes s_{A,N} \otimes B');(r \otimes r') \\ %&= (l \otimes l');(r \otimes r') \\ %&= \id_{S \otimes T} %\end{align*} %For the second, we follow the same strategy as Proposition~\ref{prop:lawful-category}, and show that the action of $\otimes$ on objects respects the coend relations in $\Twoptic_\M(S, A)$ and $\Twoptic_\M(T, B)$. % %Without loss of generality, we just check the case of $\repthree{l;\phi_S}{c}{r} = \repthree{l}{\phi_S ; c}{r}$. (The right entry is identical in all expressions, so we omit it.) %\begin{align*} %&\repthree{(l;\phi_S \otimes l');(M \otimes s_{A,N} \otimes B)}{(M \otimes s_{A,N} \otimes B');(c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{(l \otimes l');(\phi_S \otimes N \otimes B);(M \otimes s_{A,N} \otimes B)}{(M \otimes s_{A,N} \otimes B');(c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{(l \otimes l');(\phi_S \otimes N \otimes B)}{(c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{(l \otimes l')}{(\phi_S \otimes N \otimes B);(c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{(l \otimes l')}{(\phi_S;c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{(l \otimes l');(M' \otimes s_{A,N} \otimes B)}{(M' \otimes s_{A,N} \otimes B);(\phi_S;c \otimes c');(M \otimes s_{A,N} \otimes B)}{\dots} %\end{align*} %So again, we can transplant the chain of relations proving $\repthree{l}{r;l}{r} = \repthree{l}{\id_{MA}}{r}$ and $\repthree{l'}{r';l'}{r'} = \repthree{l'}{\id_{NB}}{r'}$ to prove: %\begin{align*} %&\twice(\rep{l}{r} \otimes \rep{l'}{r'}) \\ %&= \twice(\rep{(l \otimes l');(M \otimes s_{A,N} \otimes B)}{(M \otimes s_{A,N} \otimes B');(r \otimes r')}) \\ %&=\repthree{\dots}{(M \otimes s_{A,N} \otimes B);(r \otimes r');(l \otimes l');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{\dots}{(M \otimes s_{A,N} \otimes B);(rl \otimes r'l');(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{\dots}{(M \otimes s_{A,N} \otimes B);(\id_{M \otimes A} \otimes \id_{N \otimes B});(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{\dots}{(M \otimes s_{A,N} \otimes B);(M \otimes s_{A,N} \otimes B)}{\dots} \\ %&= \repthree{\dots}{\id_{M \otimes N \otimes A \otimes B}}{\dots} \\ %&= \once(\rep{l}{r} \otimes \rep{l'}{r'}) %\end{align*} % %\todo{This suggests there's something general here I should actually be proving} \end{proof} \begin{proposition} Suppose $F : (\M, \C) \to (\N, \D)$ is a morphism of 
actions. Then $\Optic(F) : \Optic_\M \to \Optic_\N$ restricts to a functor $\Lawful_\M \to \Lawful_\N$.
\end{proposition}
\begin{proof}
If $p = \rep{l}{r}$ is lawful, then verifying the first equation is easy:
\begin{align*}
\outside(\Optic(F)(\rep{l}{r}))
&= \outside\left(\rep{(Fl);\phi^{-1}_{M,A}}{\phi_{M,A};(Fr)}\right) \\
&= (Fl);\phi^{-1}_{M,A};\phi_{M,A};(Fr)\\
&= (Fl);(Fr)\\
&= \id_{FS}
\end{align*}
where $\phi_{M,A} : (F^\bullet M) \act (FA) \to F(M \act A)$ is the structure map that commutes $F$ with the actions.

For the second equation, consider a generating relation $\repthree{l;(\psi \act A)}{c}{r} = \repthree{l}{(\psi \act A) ; c}{r}$. We can use the naturality of $\phi$ to show
\begin{align*}
\repthree{F(l;(\psi \act A));\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
&= \repthree{Fl;F(\psi \act A);\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A};((F^\bullet \psi) \act A)}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{((F^\bullet \psi) \act A);\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{\phi_{M',A};F(\psi \act A);Fc;\phi^{-1}_{N,A}}{\phi_{N,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M',A}}{\phi_{M',A};F((\psi \act A);c);\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
\end{align*}
Similarly,
\begin{align*}
\repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};Fc;\phi^{-1}_{N,A}}{\phi_{N,A};F((\psi \act A);r)}
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(c;(\psi \act A));\phi^{-1}_{N,A}}{\phi_{N,A};Fr}
\end{align*}
If $\rep{l}{r}$ is lawful, we can therefore replicate the chain of relations proving $\twice(\rep{l}{r}) = \once(\rep{l}{r})$ to show:
\begin{align*}
\twice(\Optic(F)(\rep{l}{r}))
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(r;l);\phi^{-1}_{M,A}}{\phi_{M,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M,A}}{\phi_{M,A};F(\id_{M \act A});\phi^{-1}_{M,A}}{\phi_{M,A};Fr} \\
&= \repthree{Fl;\phi^{-1}_{M,A}}{\id_{(F^\bullet M) \act (FA)}}{\phi_{M,A};Fr} \\
&= \once(\Optic(F)(\rep{l}{r}))
\end{align*}
\end{proof}
We end with some commentary on the optic laws. The requirement that $\once(p) = \twice(p)$ is mysterious, but there are sufficient conditions that are easier to verify.
\begin{proposition}
Let $\rep{l}{r} : S \hto A$ be an optic. If $l;r = \id_S$ and $r;l = \phi \act A$ for some $\phi : M \to M$ in $\M$, then $\rep{l}{r}$ is lawful.
\end{proposition}
\begin{proof}
The statement $\outside(\rep{l}{r}) = l;r = \id_S$ is exactly the first law. And for the second, we verify:
\begin{align*}
\twice(\rep{l}{r}) &= \repthree{l}{r;l}{r} \\
&= \repthree{l}{\phi \act A}{r} && \text{($r;l = \phi \act A$)}\\
&= \repthree{l ; (\phi \act A)}{\id_{M\act A}}{r} && \text{(coend relation)} \\
&= \repthree{l;r;l}{\id_{M\act A}}{r} && \text{($r;l = \phi \act A$ again)}\\
&= \repthree{l}{\id_{M\act A}}{r} && \text{($l;r = \id_S$)}\\
&= \once(\rep{l}{r})
\end{align*}
\end{proof}
Even if $r;l = \phi \act A$ for some $\phi$, the same is not necessarily true for other representatives of the same optic. Let $\inside : \Optic_\M(S, A) \to \int^{M \in \M} \C(M \act A, M \act A)$ be the map induced by $\inside(\rep{l}{r}) = \langle r ; l \rangle$. We might ask that instead of requiring $r;l = \phi \act A$ exactly, we have $\langle r ; l \rangle = \langle \phi \act A \rangle$ in $\int^{M \in \M} \C(M \act A, M \act A)$. In fact, this is equivalent:
\begin{proposition}\label{prop:onthenose}
Suppose $p : S \hto A$ satisfies $\outside(p) = \id_S$ and $\inside(p) = \langle \phi \act A \rangle$.
Then there exists a representative $\rep{l}{r}$ such that $r;l = \psi \act A$ on the nose for some (possibly different) $\psi : M \to M$.
\end{proposition}
\begin{proof}
The generating relation for $\int^{M \in \M} \C(M \act A, M \act A)$ is
\[ \langle f; (\phi \act A) \rangle = \langle (\phi \act A); f \rangle \]
whenever $f : N \act A \to M \act A$ and $\phi : M \to N$. This relation $f; (\phi \act A) \rightsquigarrow (\phi \act A); f$ is not likely to be symmetric or transitive in general. Note that if $f; (\phi \act A) \rightsquigarrow (\phi \act A); f$ then $f ;(\phi \act A) ;f; (\phi \act A) \rightsquigarrow (\phi \act A) ;f ;(\phi \act A); f$. More generally, if $f \rightsquigarrow g$ then $f^n \rightsquigarrow g^n$ for any $n$.

Now let $\rep{l}{r}$ be a representative for $p$, so $l;r = \id_S$ and $\langle r;l \rangle = \langle \psi \act A\rangle$. There therefore exists a finite chain of relations $r;l = u_1 \leftrightsquigarrow \dots \leftrightsquigarrow u_n = \psi \act A$. Suppose the first relation faces rightward, so there exists a $k$ and $\phi$ with $r;l = (\phi \act A);k$ and $u_2 = k;(\phi \act A)$. Define $l' = l;(\phi \act A)$ and $r' = k;r$. Then:
\begin{align*}
\rep{l'}{r'} &= \rep{l ; (\phi \act A)}{k ; r} \\
&= \rep{l}{(\phi \act A) ; k ; r} \\
&= \rep{l}{r ; l; r} \\
&= \rep{l}{r}
\end{align*}
This new representative satisfies
\begin{align*}
l';r' &= l;(\phi \act A);k;r = l;r;l;r = \id_S \\
r';l' &= k;r;l;(\phi \act A) = k;(\phi \act A);k;(\phi \act A) = u_2^2
\end{align*}
A symmetric argument shows that if instead the relation faces leftward, so $r;l \leftsquigarrow u_2$, there again exists $l'$ and $r'$ so that $\rep{l}{r} = \rep{l'}{r'}$, and both $l';r' = \id_S$ and $r';l' = u_2^2$.

We can now inductively apply the above argument to the shorter chain
\[r';l' = u_2^2 \leftrightsquigarrow \dots \leftrightsquigarrow u_n^2 = {(\psi \act A)}^2 = \psi^2 \act A,\]
obtained by squaring each morphism in the remainder of the original chain, until we are left with a representative $\rep{l^*}{r^*}$ such that $r^*;l^* = \psi^N \act A$, for some $N>0$. This pair $\rep{l^*}{r^*}$ is the required representative.
\end{proof}
The above argument has a similar form to those that appear in~\cite{OnTheTrace}, which considered (among other things) coends of the form $\int^{c \in \C} \C(c, Fc)$ for an endofunctor $F : \C \to \C$.

\section{Examples}\label{sec:examples}
The general pattern is as follows. Once we choose a particular monoidal action $\M \to [\C, \C]$, we find an isomorphism between the set of optics $(S, S') \hto (A, A')$ and a set $\conc((S, S'), (A, A'))$ that is easier to describe. We follow~\cite{ProfunctorOptics} (and others) in calling elements of $\conc$ \emph{concrete optics}. There is no canonical choice for this set; our primary goal is to find a way to eliminate the coend so that we no longer have to deal with equivalence classes of morphisms.

Ideally, we then also find a simplified description $\conctwice(S, A)$ for the set $\Twoptic_\M(S, A)$. We can then ``read off'' what conditions are needed on a concrete optic to ensure that the corresponding element of $\Optic_\M(S, A)$ is lawful. We will call these conditions the \emph{concrete laws}.

It is worth emphasising that once a monoidal action has been chosen and a concrete description of the corresponding optics found, no further work is needed to show that the result forms a category with a subcategory of lawful optics. This is especially useful when devising new optic variants, as we do later.
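Once a concrete description and its concrete laws are in hand, they can be transcribed directly into code. As a first instance of the pattern, the three concrete lens laws for $\C = \Set$ become the following executable predicates. This Haskell sketch is purely illustrative: the names \mintinline{haskell}{lawGetPut} and so on are ours, and \mintinline{haskell}{Eq} constraints stand in for equality of morphisms.
\begin{minted}{haskell}
-- The three concrete lens laws for a lens presented by get and put,
-- written pointwise in Set.
lawGetPut :: Eq s => (s -> a) -> (s -> a -> s) -> s -> Bool
lawGetPut get put s = put s (get s) == s

lawPutGet :: Eq a => (s -> a) -> (s -> a -> s) -> s -> a -> Bool
lawPutGet get put s a = get (put s a) == a

lawPutPut :: Eq s => (s -> a) -> (s -> a -> s) -> s -> a -> a -> Bool
lawPutPut get put s a a' = put (put s a) a' == put s a'
\end{minted}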
\subsection{Lenses}
The founding example, that of lenses, has already been discussed in the previous sections. We add a couple of remarks.
\begin{remark}\label{lens-iota-not-faithful}
For the category of sets, the functor $\iota : \Set \times \Set^\op \to \Optic_\times$ is not faithful. The problem is the empty set: the functor $0 \times (-)$ is not faithful. Any pair of maps $f : 0 \to A$, $g : A' \to S'$ yield equivalent optics $\iota(f, g)$, as the corresponding $\fget$ and $\fput$ functions must be the unique maps from $0$.
\end{remark}
\begin{remark}
In the case that $\C$ is cartesian closed, $\Optic_\times$ is monoidal closed via the astonishing formula
\begin{align*}
[(S, S'), (A, A')] := (\homC(S, A) \times \homC(S \times A', S'), \, S \times A')
\end{align*}
where $\homC(-, -)$ denotes the internal hom. For a proof see~\cite[Section 1.2]{DialecticaCategories}. This cannot be extended to non-cartesian closed categories: the isomorphism
\begin{align*}
\Lens((S, S') \otimes (T, T'), (A, A')) \cong \Lens((S, S'), [(T, T'), (A, A')])
\end{align*}
uses the diagonal maps of $\C$ in an essential way.
\end{remark}
If we ask more of our category $\C$, we can show that a lawful lens $S \hto A$ implies the existence of a complement $C$ with $S \cong C \times A$. This doesn't appear to follow purely from the concrete lens laws---an argument that a definition of lawfulness based on constant complements is not the correct generalisation. For completeness we include a proof in our notation.
\begin{proposition}[{Generalisation of~\cite[Corollary 13]{AlgebrasAndUpdateStrategies}}]
Suppose $\C$ has pullbacks and that there is a morphism $x : 1 \to A$. If $p : S \hto A$ is lawful then there exists $C \in \C$ and mutual inverses $l : S \to C \times A$ and $r : C \times A \to S$ so that $p = \rep{l}{r}$.
\end{proposition}
\begin{proof}
Set $C$ to be the pullback of $\fget$ along $x$, so there is a map $i : C \to S$ with $\fget \, i = x !_C$. There is also a map $j : S \to C$ induced by the following diagram:
\[
\begin{tikzcd}
S \ar[ddr, bend right = 20] \ar[dr, "j", dashed] \ar[r, "{[\id_S, x !_S]}"] & S \times A \ar[dr, "\fput"] & \\
& C \ar[r, "i"] \ar[d] \arrow[dr, phantom, "\lrcorner", very near start] & S \ar[d, "\fget"] \\
& 1\ar[r, "x", swap] & A
\end{tikzcd}
\]
which commutes by the $\fput\fget$ law. Note that $ji = \id_C$ by the universal property of pullbacks. Now take $l = [j,\fget] : S \to C \times A$ and $r = \fput (i \times A) : C \times A \to S$. That they are mutual inverses is easily checked:
\begin{align*}
\fput (i \times A)[j,\fget] &= \fput [ij,\fget] \\
&= \fput [\fput [\id_S, x !_S],\fget] && \text{(by definition of $j$)} \\
&= \fput [\id_S,\fget] && \text{(by $\fput\fput$)} \\
&= \id_S && \text{(by $\fget\fput$)}
\intertext{and}
[j,\fget]\fput (i \times A) &= [j\fput (i \times A),\fget\,\fput (i \times A)] && \text{(by universal property of product)} \\
&= [j\fput (i \times A), \pi_2 (i \times A)] && \text{(by $\fput\fget$)} \\
&= [j\fput (i \times A), \pi_2] && \\
&= [jij\fput (i \times A), \pi_2] && \\
&= [j\fput [\id_S,x !_S] \fput (i \times A), \pi_2] && \\
&= [j\fput [\id_S, x !_S] \pi_1 (i \times A), \pi_2] && \text{(by $\fput\fput$)}\\
&= [jij \pi_1 (i \times A), \pi_2] && \\
&= [jiji \pi_1, \pi_2] && \\
&= [\pi_1, \pi_2] && \\
&= \id_{C \times A}
\end{align*}
Finally, the coend relation gives that
\[\rep{[j,\fget]}{\fput (i \times A)} = \rep{(i \times A)[j,\fget]}{\fput} = \rep{[\id_S, \fget]}{\fput}\]
as elements of $\Optic_\M(S, A)$.
\end{proof}
\begin{remark}
Much of the work on bidirectional transformations~\cite{CombinatorsForBidirectionalTreeTransformations} considers lenses that are only `well-behaved', not `very well-behaved': they obey the $\fput\fget$ and $\fget\fput$ laws but not the $\fput\fput$ law. For example, the ``change counter'' lens $\bN \times A \hto A$ from~\cite{AClearPictureOfLensLaws} has $\fput$ and $\fget$ given by:
\begin{align*}
\fget(n, a) &= a \\
\fput((n, a), a') &= \begin{cases} (n, a) & \text{if } a = a' \\ (n+1, a') & \text{otherwise} \end{cases}
\end{align*}
This example is typical of (merely) well-behaved lenses: there is metadata stored alongside the target of a lens that mutates as the lens is used. Lenses that satisfy only the two laws correspond to pairs $\rep{l}{r}$ such that $rl = \id_S$ and $\pi_2lr = \pi_2$. This condition seems unavoidably tied to the product structure on $\C$; there is no obvious way to generalise this to other optics variants.
\end{remark}
%\begin{example}
%	A minimal example of a lens that obeys $\fget\fput$ and $\fput\fget$ but not $\fput\fput$ is the following. Let $p : \{A,B,C\} \hto \{X, Y\}$ be the lens with
%	\begin{align*}
%		\fget : \{A,B,C\} &\to \{X, Y\} \\
%		A &\mapsto X \\
%		B &\mapsto X \\
%		C &\mapsto Y
%	\end{align*}
%	\begin{align*}
%		\fput : \{A,B,C\} \times \{X, Y\} &\to \{A,B,C\} \\
%		(A,X) &\mapsto A \\
%		(B,X) &\mapsto B \\
%		(C,X) &\mapsto B \\
%		(A,Y) &\mapsto C \\
%		(B,Y) &\mapsto C \\
%		(C,Y) &\mapsto C
%	\end{align*}
%\end{example}
\subsection{Prisms}
Prisms are dual to lenses:
\begin{definition}
Suppose $\C$ has finite coproducts. The \emph{category of prisms} is the category of optics with respect to the coproduct $\sqcup$: $\Prism \defeq \Optic_\sqcup$.
\end{definition}
Just as optics for $\times$ correspond to a pair of maps $\fget : S \to A$ and $\fput : S \times A \to S$, optics for $\sqcup$ correspond to pairs of maps $\freview : A \to S$ and $\fmatching : S \to S \sqcup A$. These names are taken from the Haskell \lenslib{} library.
\begin{align*}
\Prism((S, S'), (A, A')) &= \int^{M \in \C} \C(S, M \sqcup A) \times \C(M \sqcup A', S') \\
&\cong \int^{M \in \C} \C(S, M \sqcup A) \times \C(M, S') \times \C(A', S') && \text{(universal property of coproduct)} \\
&\cong \C(S, S' \sqcup A) \times \C(A', S') && \text{(Yoneda reduction)}
\end{align*}
If we are given a prism $\rep{l}{r} : (S, S') \hto (A, A')$ then the associated $\freview$ and $\fmatching$ morphisms are given by $\freview = r \inr$ and $\fmatching = (r\inl \sqcup A)l$. The concrete laws for prisms are the obvious duals to the lens laws:
\begin{align*}
\fmatching \; \freview &= \inr \\
[\id_S, \freview] \fmatching &= \id_S \\
(\fmatching \sqcup A) \fmatching &= \mathrm{in}_{1,3} \, \fmatching
\end{align*}
In the \lenslib{} library documentation the third law is missing, on account of the following:
\begin{proposition}
When $\C = \Set$, the third law is implied by the other two.
\end{proposition}
\begin{proof}
The key is that for any map $f : X \to Y$ in $\Set$, the codomain $Y$ is equal to the union of $\im f$ and its complement. The first law implies that $\freview$ is injective, so $S \cong C \sqcup A$ for some complement $C$. Identifying $A$ with its image in $S$, the second law implies that if $a\in A \subset S$ then $\fmatching(a) = \inr(a)$ and if $c\in C \subset S$ then $\fmatching(c) = \inl(c)$. The third law can then be verified pointwise by checking both cases $a\in A \subset S$ and $c\in C \subset S$ separately.
\end{proof} The following is then exactly the dual of Proposition~\ref{prop:lawful-lens-laws}. \begin{proposition}\label{prop:lawful-prism-laws} If $p : S \hto A$ is a lawful prism then the associated $\fmatching$ and $\freview$ functions satisfy the concrete prism laws. \qed \end{proposition} \subsection{Isos} For any category $\C$, there is a unique action of the terminal category $1$ on $\C$ that fixes every object. \begin{proposition} The category of optics for this action is isomorphic to $\C \times \C^\op$. \end{proposition} \begin{proof} \begin{align*} \Optic_1((S, S'), (A, A')) &= \int^{M \in 1} \C(S, M \act A) \times \C(M \act A', S') \\ &\cong \C(S, \star \act A) \times \C(\star \act A', S') \\ &\cong \C(S, A) \times \C(A', S') \end{align*} where $\star$ denotes the object of $1$. Composition in $\Optic_1((S, S'), (A, A'))$ does indeed correspond to composition in $\C \times \C^\op$. \end{proof} \begin{proposition} An iso $\rep{l}{r}$ is lawful iff (as expected) $l$ and $r$ are mutual inverses. \end{proposition} \begin{proof} $\Twoptic_\M(S, A)$ specialises in this case to just \[ \C(S, A) \times \C(A, A) \times \C(A, S) \] The condition $\outside(\rep{l}{r}) = \id_S$ is the claim that $rl = \id_S$, and $\once(\rep{l}{r}) = (l, \id_A, r)$ is equal to $\twice(\rep{l}{r}) = (l, lr, r)$ iff $lr = \id_A$. \end{proof} \subsection{Coalgebraic Optics}\label{sec:coalgebraic} There is a common pattern in many of the examples to follow: for every object $A \in \C$, the evaluation-at-$A$ functor $- \act A : \M \to \C$ has a right adjoint, say $R_A : \C \to \M$. To fix notation, let \[\overrightarrow{(-)} : \C(F \act A, S) \to \M(F, R_{A} S) : \overleftarrow{(-)}\] denote the homset bijection, so the unit and counit are: \begin{equation*} \begin{aligned}[t] \eta_F &: F \to R_A (F \act A) \\ \eta_F &:= \overrightarrow{\id_{F \act A}} \end{aligned} \qquad\qquad\qquad \begin{aligned}[t] \varepsilon_{S} &: (R_A S) \act A \to S \\ \varepsilon_{S} &:= \overleftarrow{\id_{R_{A} S}} \end{aligned} \end{equation*} It is shown in~\cite[Section 6]{ANoteOnActions} that, at least in the case $\M$ is right closed, to give such an action is equivalent to giving a ``copowered $\M$-category $\C$''. In most cases of interest to us, however, $\M$ is not right closed. %\todo{Pointed out to me years ago when I emailed Ross Street, but I didn't understand what he was saying} When we have such a right adjoint, we can always find a concrete description of an optic: \begin{align*} \Optic_\M((S, S'), (A, A')) &= \int^{F \in \M} \C(S, F\act A) \times \C(F\act A', S') \\ &\cong \int^{F \in \M} \C(S, F \act A) \times \M(F, R_{A'} S') \\ &\cong \C(S, (R_{A'} S') \act A) \end{align*} A concrete optic is therefore a map $\funzip : S \to (R_{A'} S') \act A$. The above isomorphism sends $\funzip$ to the element $\rep{\funzip}{\varepsilon_{S'}}$. In the other direction, given $\rep{l}{r}$ we have $\funzip = (\overrightarrow{r} \act A)l$. \begin{theorem}\label{thm:optics-are-coalgebras} A concrete optic $\funzip : S \to (R_{A} S) \act A$ is lawful iff it is a coalgebra for the comonad $X \mapsto (R_{A} X) \act A$. 
\end{theorem}
\begin{proof}
By adjointness and Yoneda reduction we have an isomorphism
\[\Phi : \Twoptic_\M(S, A) \to \C(S, R_A (R_A S \act A) \act A)\]
given by
\begin{align*}
&\Twoptic_\M(S, A) \\
&= \int^{M_1, M_2 \in \M} \C(S, M_1 \act A) \times \C(M_1 \act A, M_2 \act A) \times \C(M_2 \act A, S) \\
&\cong \int^{M_1, M_2 \in \M} \C(S, M_1 \act A) \times \M(M_1, R_A (M_2 \act A)) \times \M(M_2, R_A S) \\
&\cong \int^{M_1 \in \M} \C(S, M_1 \act A) \times \M(M_1, R_A (R_A S \act A)) \\
&\cong \C(S, R_A (R_A S \act A) \act A)
\end{align*}
which evaluated on an element $\repthree{l}{c}{r}$ is
\begin{align*}
\Phi(\repthree{l}{c}{r}) &= (\overrightarrow{(\overrightarrow{r} \act A)c} \act A)l
\end{align*}
So now interpreting the optic laws, we find
\[\outside(\rep{\funzip}{\varepsilon_{S} }) = \varepsilon_{S} \; \funzip = \id_S \]
is exactly the coalgebra counit law, and equality of
\begin{align*}
\Phi(\once(\rep{\funzip}{\varepsilon_S})) &= \Phi(\repthree{\funzip}{\id_{(R_{A} S) \act A}}{\varepsilon_S }) \\
&= (\overrightarrow{(\overrightarrow{\varepsilon_S} \act A)\id_{(R_{A} S) \act A}} \act A)\funzip \\
&= (\overrightarrow{(\id_{R_A S} \act A)} \act A)\funzip \\
&= (\overrightarrow{\id_{R_A S \act A}} \act A)\funzip \\
&= (\eta_{R_A S} \act A) \funzip \\
\Phi(\twice(\rep{\funzip}{\varepsilon_S })) &= \Phi(\repthree{\funzip}{\funzip \; \varepsilon_S}{\varepsilon_S }) \\
&= (\overrightarrow{(\overrightarrow{\varepsilon_S} \act A)(\funzip \; \varepsilon_S)} \act A)\funzip \\
&= (\overrightarrow{(\funzip \; \varepsilon_S)} \act A)\funzip \\
&= (R_A (\funzip) \act A)\funzip
\end{align*}
is exactly the coalgebra comultiplication law.
\end{proof}

\subsection{Setters}\label{sec:setters}
\begin{definition}
The \emph{category of setters} $\Setter_\C$ is the category of optics for the action of $[\C, \C]$ on $\C$ by evaluation.
\end{definition}
To devise the concrete form of a setter, we use the following proposition. This is a generalisation of~\cite[Proposition 2.2]{SecondOrderFunctionals}, and helps to explain why the store comonad is so important in the theory of lenses.
\begin{proposition}
If $\C$ is powered over $\Set$ then the evaluation-at-$A$ functor $-A : [\C, \C] \to \C$ has a right adjoint given by $S \mapsto S^{\C(-, A)}$. If $\C$ is copowered over $\Set$ then $-A : [\C, \C] \to \C$ has a left adjoint given by $S \mapsto \C(A, -) \bullet S$, where $\bullet$ denotes the copower.
\end{proposition}
\begin{proof}
For the first, we have
\begin{align*}
\C(FA, S) &\cong \int_X \Set(\C(X, A), \C(FX, S)) \\
&\cong \int_X \C(FX, S^{\C(X, A)}) \\
&\cong [\C, \C](F, S^{\C(-, A)})
\end{align*}
and for the second,
\begin{align*}
\C(S, FA) &\cong \int_X \Set(\C(A, X), \C(S, FX)) \\
&\cong \int_X \C(\C(A, X) \bullet S, FX) \\
&\cong [\C, \C](\C(A, -) \bullet S, F)
\end{align*}
\end{proof}
Recall that any category with coproducts is copowered over $\Set$ and any category with products is powered over $\Set$.
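For lenses, where $\M = \C = \Set$ acts by the cartesian product, the right adjoint to $- \times A$ sends $S$ to $S^A$, so the comonad of Theorem~\ref{thm:optics-are-coalgebras} is the familiar store comonad. The following Haskell sketch (the names are ours, chosen for illustration) makes the coalgebra explicit:
\begin{minted}{haskell}
{-# LANGUAGE DeriveFunctor #-}

-- The comonad X |-> (X^A) x A, specialised to Set.
data Store a x = Store (a -> x) a deriving Functor

extract :: Store a x -> x
extract (Store f a) = f a

duplicate :: Store a x -> Store a (Store a x)
duplicate (Store f a) = Store (Store f) a

-- A concrete lens packaged as a coalgebra S -> Store A S.
unzipLens :: (s -> a) -> (s -> a -> s) -> s -> Store a s
unzipLens get put s = Store (put s) (get s)
\end{minted}
The counit law, \mintinline{haskell}{extract . unzipLens get put == id}, is the $\fget\fput$ law, while the comultiplication law (comparing \mintinline{haskell}{fmap (unzipLens get put) . unzipLens get put} with \mintinline{haskell}{duplicate . unzipLens get put}) packages the $\fput\fget$ and $\fput\fput$ laws together.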
We could immediately use the previous section to give a coalgebraic description of setters and their laws, but with a little manipulation we get a form that looks more familiar:
\begin{align*}
\Setter_\C((S, S'), (A, A')) &= \int^{F \in [\C, \C]} \C(S, FA) \times \C(FA', S') \\
&\cong \int^{F \in [\C, \C]} [\C, \C](\C(A, -) \bullet S, F) \times \C(FA', S') \\
&\cong \C(\C(A, A') \bullet S, S') \\
&\cong \Set(\C(A, A'), \C(S, S'))
\end{align*}
In the Haskell \lenslib{} library, the map $\C(A, A') \to \C(S,S')$ corresponding to a setter is called $\fover$: we think of a setter as allowing us to apply a morphism $A \to A'$ over some parts of $S$. Tracing through the isomorphisms, the optic corresponding to $\fover$ is $\rep{l}{r}$ where $l : S \to \C(A, A) \bullet S$ is the inclusion with the identity morphism $\id_A$ and $r : \C(A, A') \bullet S \to S'$ is the transpose of $\fover$ along the adjunction defining the copower.

The laws for setters in this form are a kind of functoriality:
\begin{proposition}
A setter $p : S \hto A$ is lawful iff
\begin{align*}
\fover(\id_A) &= \id_S \\
\fover(f)\fover(g)&= \fover(fg)
\end{align*}
\end{proposition}
\begin{proof}
The key is concretely describing $\Twoptic_{[\C, \C]}(S, A)$ as
\[ \Set( \C(A, A) \times \C(A, A), \C(S, S) ).\]
We leave the verification that the conditions are equivalent to the reader.
%\todo{Worth actually writing this out?}
%Because a concrete setter is a function in $\Set$, it is enough to verify the equations pointwise.
\end{proof}
This characterisation of setters is maybe a little odd, in that we have ended up with a function of $\Set$s, rather than a description internal to $\C$. If we modify our definition of $\Setter$, we do get an internal characterisation. Suppose $\C$ is cartesian closed and let $\Strong_\C$ be the category of \emph{strong functors} on $\C$.
\begin{definition}[\cite{StrongFunctors}]\label{def:strong-functor}
A \emph{(left) strong functor} is a functor $F : \C \to \C$ equipped with a natural transformation called the \emph{strength}:
\begin{align*}
\theta_{A,B} : A \otimes F B \to F(A \otimes B)
\end{align*}
such that the strength commutes with the unitor:
\[
\begin{tikzcd}
I \otimes F A \ar[r, "\theta_{I,A}"] \ar[d, "\cong" left] & F(I \otimes A) \ar[d, "\cong" right] \\
F A \ar[r, equals] & F A
\end{tikzcd}
\]
and with associativity:
\[
\begin{tikzcd}
(A \otimes B) \otimes F C \ar[rr, "\theta_{A \otimes B, C}"] \ar[d, "\alpha_{A,B,FC}" left] && F((A \otimes B) \otimes C) \ar[d, "F\alpha_{A,B,C}" right] \\
A \otimes (B \otimes F C) \ar[r, "A \otimes \theta_{B,C}" below] & A \otimes F(B \otimes C) \ar[r, "\theta_{A, B\otimes C}" below] & F(A \otimes (B \otimes C))
\end{tikzcd}
\]
A \emph{strong natural transformation} $\tau : (F,\theta) \Rightarrow (G,\theta')$ is a natural transformation that respects the strengths. There is an evident category $\Strong_\C$ of strong endofunctors and strong natural transformations, and a forgetful functor $U : \Strong_\C \to [\C, \C]$.
\end{definition}
Then, again, $\Strong_\C$ acts on $\C$ by evaluation. We leave it to the reader to verify there is a natural isomorphism
\[\C(S, FA) \cong \Strong_\C(\homC(A, -) \times S, F),\]
which we can use to describe optics for this action as elements of $\C(\homC(A, A'), \homC(S,S'))$.
%The use of strong functors here is due to the correspondence between tensorial strengths and ``$\C$-enrichments'' on a functor.
%\todo{This section might also work just with monoidal closed? I don't see where cartesianness was used.}
\subsection{Traversals}
In this section we work in the case $\C = \Set$. Traversals allow us to traverse through a data structure, accumulating applicative actions as we go. We begin by reviewing the definitions of applicative and traversable functors~\cite{AnInvestigationOfTheLawsOfTraversals}.
\begin{definition}
An \emph{applicative functor} $F : \C \to \C$ is a lax monoidal functor with a strength compatible with the monoidal structure, in the sense that
\[
\begin{tikzcd}[column sep = large]
A \otimes FB \otimes FC \ar[r, "{\theta_{A, B} \otimes FC}"] \ar[d, swap, "{A \otimes \phi_{B, C}}"] & F(A \otimes B) \otimes FC \ar[d, "{\phi_{A \otimes B, C}}"] \\
A \otimes F(B \otimes C) \ar[r, swap, "{\theta_{A, B \otimes C}}"] & F(A \otimes B \otimes C)
\end{tikzcd}
\]
commutes. An \emph{applicative natural transformation} is one that is both monoidal and strong. Applicative functors and natural transformations form a monoidal category $\App$ with the tensor given by functor composition.
\end{definition}
\begin{definition}
A \emph{traversable functor} is a functor $T : \C \to \C$ equipped with a distributive law $\delta_F : TF \to FT$ for $T$ over the action of $\App$ on $\C$ by evaluation. Explicitly, this means that the diagrams
\[
\begin{tikzcd}
TF \ar[r, "\delta_F"] \ar[d, swap, "T\alpha"] & FT \ar[d, "\alpha T"] \\
TG \ar[r, swap, "\delta_G"] & GT
\end{tikzcd}
\hspace{1cm}
\begin{tikzcd}
TFG \ar[dr, swap, "\delta_F G"] \ar[rr, "\delta_{FG}"] & & FGT \\
& FTG \ar[ur, swap, "F \delta_G"] &
\end{tikzcd}
\hspace{1cm}
\begin{tikzcd}
T\id_\C \ar[r, bend left, "\id_T"] \ar[r, bend right, swap, "\delta_{\id_\C}"] & \id_\C T
\end{tikzcd}
\]
in $[\C, \C]$ commute.
\end{definition}
\begin{definition}
The category $\Traversal$ of traversals is the category of optics for the action of $\Traversable$ on $\Set$ given by evaluation. (Yes, the names $\Traversal$/$\Traversable$ are confusing!)
\end{definition}
%\todo{The rest of this section should probably be rewritten to use the free applicative functor instead of the confusing parameterised comonad business}
It is known that traversable functors correspond to coalgebras for a particular parameterised comonad. See~{\cite[Definitions 4.1 and 4.2]{SecondOrderFunctionals}}, also~\cite{AlgebrasForParameterisedMonads} for the relevant definitions of parameterised comonads and coalgebras.
\begin{proposition}[{\cite[Theorem 4.10, Proposition 5.4]{SecondOrderFunctionals}}]
Traversable structures on a functor $T : \Set \to \Set$ correspond to parameterised coalgebra structures
\begin{align*}
t_{A, B} : TA \to UR^*_{A, B}(T B)
\end{align*}
where $UR^*_{X,Y}$ is the parameterised comonad
\begin{align*}
UR^*_{X, Y} Z = \Sigma_{n\in \bN} X^n \times \Set(Y^n,Z)
\end{align*}
Moreover, this correspondence forms an isomorphism of categories between $\Traversable$ and the Eilenberg-Moore category of coalgebras for $UR^*_{-, -}$, which we denote $\E$. \qed
\end{proposition}
% Experts will recognise $UR^*_{-, -}$ as the free applicative functor \cite{FreeApplicativeFunctors} on the functor $R_{X,Y} Z = X \times Y \to Z$.
\begin{lemma}
For any objects $A, B \in \Set$ and traversable functor $F$,
\[\Set(FA, B) \cong \Traversable(F, \Sigma_n {(-)}^n \times \Set(A^n,B))\]
naturally in $B$ and $F$. In other words, the functor
\[(B \mapsto \Sigma_n {(-)}^n \times \Set(A^n,B)) : \Set \to \Traversable\]
is right adjoint to the evaluation-at-$A$ functor $-A : \Traversable \to \Set$.
\end{lemma}
\begin{proof}
By~\cite[Proposition 6]{AlgebrasForParameterisedMonads}, there is a parameterised adjunction $L_T \dashv R_T$, where
\begin{align*}
L_T : \Set \times \E &\to \Set \\
(X, (F, f)) &\mapsto FX \\
R_T : \Set \times \Set &\to \E \\
(Y, Z) &\mapsto (UR^*_{-, Y} Z, \varepsilon)
\end{align*}
where $\varepsilon$ is the counit of $UR^*_{-, -}$. Evaluating these with the fixed parameter $A$, we get an ordinary adjunction
\begin{align*}
L_T(A) : \E &\to \Set \\
(F, f) &\mapsto FA \\
R_T(A) : \Set &\to \E \\
Z &\mapsto (UR^*_{-, A} Z, \varepsilon)
\end{align*}
But this is exactly the adjunction we were trying to show.
\end{proof}
We can then use the coalgebraic pattern from earlier to reach the same concrete description of traversals as found in~\cite{ProfunctorOptics}.
\begin{align*}
\Traversal((S, S'), (A, A')) &\cong \Set(S, \Sigma_n A^n \times \Set(A'^n,S'))
\end{align*}
The concrete laws for this representation are the coalgebra laws. These laws, however, are not the ones usually presented for traversals. Instead, versions of the profunctor laws are used; see Section~\ref{sec:profunctor-optics}.
%\begin{remark}
%In~\cite[Section 2.3]{ProfunctorOptics}, it is further claimed that a traversal $S \hto A$ exhibits an isomorphism $S \cong \Sigma_n A^n \times \Set(A^n,S)$. This cannot possibly be true---consider the traversable functor $X \times -$. The claim appears to be a misreading of~\cite[Proposition 5.4]{SecondOrderFunctionals}.
%\end{remark}
%\begin{remark}
%A careful analysis would be needed to describe traversals in some other category, in particular, the description of $UR^*$ does not make sense if $\C$ is not locally cartesian closed.
%\end{remark}
%\begin{remark}
%Traversable functors can be described as a particular class of polynomial functors known as finitary containers. It may be possible to generalise this to other classes of polynomial functors.
%\end{remark}

\subsection{Polymorphic Optics}
Haskell's optics allow \emph{polymorphic updates}, where the type of the codomain of the lens may be changed by an update, causing a corresponding change in the type of the domain. As an example, we are permitted to use a lens into the \mintinline{haskell}{first} entry of a tuple in the following way:
\begin{minted}{haskell}
set first (1, 5) "hello" == ("hello", 5)
\end{minted}
This has changed the type from \mintinline{haskell}{(Int, Int)} to \mintinline{haskell}{(String, Int)}.

Polymorphic optics can be captured by the coend formalism as follows. Any action of a monoidal category $\M \times \C \to \C$ can be extended to act object-wise on a functor category:
\begin{align*}
\M \times [\D, \C] &\to [\D, \C] \\
(M, F) &\mapsto M \act (F-)
\end{align*}
So in the above example, we have the product $\times$ acting pointwise on the functor category $\Set \to \Set$. Our example \mintinline{haskell}{first} is then an optic $F \hto G$, where $F = (-) \times \mintinline{haskell}{Int}$ and $G$ is the identity functor.

Given such a polymorphic optic in $[\D, \C]$, we can always `monomorphise' to obtain an ordinary optic in $\C$.
\begin{proposition}
There is a functor
\begin{align*}
\mathsf{mono} : \D \times \D^\op \times \Optic_{[\D, \C]} \to \Optic_\C
\end{align*}
that sends an object $(D, D') \in \D \times \D^\op$ and optic $\rep{l}{r} : (F, F') \hto (G, G')$ in $\Optic_{[\D, \C]}$ to the optic $\rep{l_D}{r_{D'}} : (FD, F'D') \hto (GD, G'D')$ in $\Optic_\C$. For fixed $(D, D) \in \D \times \D^\op$, this functor preserves lawfulness.
\end{proposition}
\begin{proof}
On an object $(D, D') \in \D \times \D^\op$, the fact that we get a functor $\Optic_{[\D, \C]} \to \Optic_\C$ follows by essentially the same proof as Proposition~\ref{prop:change-of-action}, but with different functors on each side of the optic: the evaluation-at-$D$ functor $[\D, \C] \to \C$ on the left and evaluation-at-$D'$ on the right. For functoriality in $\D \times \D^\op$, given $(f, g) : (D_1, D'_1) \to (D_2, D'_2) \in \D \times \D^\op$ and an object $(F, F') \in \Optic_{[\D, \C]}$, there is an induced optic $\iota(Ff, F'g) : (FD_1, F'D'_1) \hto (FD_2, F'D'_2)$. Bifunctoriality of $\mathsf{mono}$ is ensured by the naturality of each $l$ and $r$ in the morphisms of $\Optic_{[\D, \C]}$.
\end{proof}

\subsection{Linear Lenses}\label{sec:linear-lenses}
\newcommand{\ev}{\mathsf{ev}}
\newcommand{\coev}{\mathsf{coev}}
If $\C$ is closed monoidal but not necessarily cartesian, we can still define the category of \emph{linear lenses} to be $\Optic_\otimes$. The internal hom provides a right adjoint to the evaluation-at-$A$ functor, so we have immediately
\begin{align*}
\Optic_\otimes((S, S'), (A, A')) &\cong \C(S, \homC(A',S') \otimes A)
\end{align*}
where $\homC(A', S')$ denotes the internal hom. If $\C$ is cartesian, this is of course isomorphic to the set of $(\fget, \fput)$ functions discussed earlier. We cannot possibly use the three $\fput$/$\fget$ style lens laws in this setting as we lack projections, but specialising the coalgebra laws gives us:
\begin{proposition}\label{prop:concrete-linear-lawful}
A linear lens $p : S \hto A$ is lawful iff the following two concrete laws for $\funzip$ hold:
\begin{align*}
\ev_{A, S} \; \funzip &= \id_S && \textsc{(Rezip)} \\
(\coev_{\homC(A, S), A} \otimes A)\funzip &= ((\funzip \circ -) \otimes A)\funzip && \textsc{(ZipZip)}
\end{align*}
where
\[ \funzip \circ - : \homC(A, S) \to \homC(A, \homC(A, S) \otimes A) \]
denotes internal composition and
\[\coev_{\homC(A, S), A} : \homC(A, S) \to \homC(A, \homC(A, S) \otimes A)\]
is coevaluation.
%\todo{I obviously need better names for the laws}
\qed
\end{proposition}
We have essentially rederived the result given in~\cite[Section 3.2]{RelatingAlgebraicAndCoalgebraic} for ordinary lenses, but we note that cartesianness was not required.

\subsection{Effectful Optics}
\newcommand{\monact}{\rtimes}
Many proposed definitions of effectful lenses~\cite{ReflectionsOnMonadicLenses} have modified one or both of $\fget$ and $\fput$ to produce results wrapped in a monadic action. There are disadvantages to this approach: it is not obvious what the laws ought to be and there is no clear generalisation to other optic variants. The general definition of optic given in Section~\ref{sec:optics} suggests we instead work with the Kleisli category $\C_T$ of some monad $(T, \eta, \mu) : \C \to \C$.
\begin{definition}
The Kleisli category $\C_T$ of a monad $T$ has the same objects as $\C$, with morphisms $X \to Y$ in $\C_T$ given by morphisms $X \to TY$ in $\C$. Identity morphisms are given by the unit of $T$, and the composite of two morphisms $f : X \to Y$ and $g : Y \to Z$ in $\C_T$ is given by
\begin{align*}
X \xrightarrow{f} TY \xrightarrow{Tg} TTZ \xrightarrow{\mu_Z} TZ
\end{align*}
For $f : X \to Y$ in $\C_T$, we write $\underline{f} : X \to TY$ for its underlying morphism in $\C$.
\end{definition}
Working in a Kleisli category presents its own set of difficulties.
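Before turning to those difficulties, here is the Kleisli category transcribed into Haskell as a reference point (a sketch; the name \mintinline{haskell}{KleisliMap} is ours, and Haskell's \mintinline{haskell}{Control.Arrow} ships an equivalent \mintinline{haskell}{Kleisli} newtype):
\begin{minted}{haskell}
import Control.Monad ((>=>))

-- A morphism X -> Y of the Kleisli category is an underlying map X -> T Y.
newtype KleisliMap t x y = KleisliMap { underlying :: x -> t y }

-- Identities are given by the unit (return) of the monad.
idK :: Monad t => KleisliMap t x x
idK = KleisliMap return

-- Composition inserts the multiplication: f >=> g is \x -> f x >>= g.
composeK :: Monad t => KleisliMap t x y -> KleisliMap t y z -> KleisliMap t x z
composeK (KleisliMap f) (KleisliMap g) = KleisliMap (f >=> g)
\end{minted}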
The product in $\C$ is a monoidal product in $\C_T$ only when the monad in question is \emph{commutative}, which rules out many monads of interest. A premonoidal structure~\cite{PremonoidalCategories} is not sufficient: composition of optics would in that case not be well defined. But this does not preclude the existence of monoidal actions on $\C_T$. In fact, there is a monoidal action that has long been used under a different guise:
\begin{definition}[{\cite{NotionsOfComputationAndMonads}}]
A \emph{strong monad} $T : \C \to \C$ on a monoidal category $(\C, \otimes, I)$ is a monad that is strong as a functor (Definition~\ref{def:strong-functor}), and such that the strength commutes with the unit and multiplication:
\[
\begin{tikzcd}
A \otimes B \ar[d, swap, "A \otimes \eta_B"] \ar[dr, "\eta_{A \otimes B}"] & \\
A \otimes TB \ar[r, swap, "\theta_{A, B}"] & T(A \otimes B)
\end{tikzcd}
\hspace{1cm}
\begin{tikzcd}
A \otimes T^2 B \ar[r, "\theta_{A, TB}"] \ar[d, swap, "A \otimes \mu_B"] & T(A \otimes TB) \ar[r, "T\theta_{A, B}"] & T^2(A \otimes B) \ar[d, "\mu_{A \otimes B}"] \\
A \otimes TB \ar[rr, swap, "\theta_{A, B}"] & & T(A \otimes B)
\end{tikzcd}
\]
\end{definition}
\begin{proposition}
If $T : \C \to \C$ is a strong monad then $\C$ acts on $\C_T$ by $X \act Y := X \otimes Y$.
\end{proposition}
The crucial difference between this and a monoidal structure on $\C_T$ is that we only demand $X$ be functorial with respect to \emph{pure functions} in $\C$, whereas $Y$ must be functorial with respect to \emph{computations} in $\C_T$. We will write this action as $X \monact Y$ to highlight the different roles played by $X$ and $Y$.
\begin{proof}
Suppose $T$ is a strong monad with strength $\theta_{A, B} : A \otimes T B \to T(A \otimes B)$. For $A \in \C$, we have a functor $A \otimes - : \C_T \to \C_T$ which on a morphism $f : X \to Y$ in $\C_T$ is defined to be the composite
\begin{align*}
A \otimes X \xrightarrow{A \otimes \underline{f}} A \otimes TY \xrightarrow{\theta_{A, Y}} T(A \otimes Y)
\end{align*}
For details, see~\cite[Theorem 4.2]{PremonoidalCategories}. Our goal is to show this extends to a monoidal functor $a : \C \to [\C_T, \C_T]$.

A morphism $f : A \to B$ in $\C$ induces a natural transformation $A \otimes - \Rightarrow B \otimes -$ of functors $\C_T \to \C_T$, with components $A \otimes X \to T(B \otimes X)$ given by composing $A \otimes X \to B \otimes X$ with the unit of the monad. Naturality follows by the naturality of the strength and the unit of $T$.
%Naturality is easy to check:
%\[\begin{tikzcd}
%A \otimes X \ar[r] \ar[d] & B \otimes X \ar[r] \ar[d] & T(B \otimes X) \ar[d] \\
%A \otimes TY \ar[r] \ar[d] & B \otimes TY \ar[r] \ar[d] & T(B \otimes TY) \ar[d] \\
%T(A \otimes Y) \ar[r] & T(B \otimes Y) \ar[r] & TT(B \otimes Y)
%\end{tikzcd}\]
%The upper left square commutes by functoriality of $\otimes$, lower left by naturality of the strength, the two right squares by naturality of the unit.
Monoidality of $a$ is shown exactly by the commutative diagrams in the definition of strong functor, i.e.\ that the strength commutes with the associator and left unitor of $\C$.
%\todo{I suspect there is a 1-to-1 correspondence between strengths on $T$ and actions of $\C$ on $\C_T$ by $A \otimes B$ on objects, is it worth proving this?}
\end{proof}
Suppose $\C$ is a monoidal closed category and $T : \C \to \C$ is a strong monad.
Then the evaluation-at-$A$ functor has a right adjoint:
\begin{align*}
\C_T(M \monact A', S') &= \C(M \otimes A', T S') \\
&\cong \C(M, \homC(A', T S'))
\end{align*}
Using the coalgebraic description, we see that concrete effectful lenses consist of a single morphism in $\C$
\[\munzip : S \to T(\homC(A', T S') \otimes A).\]
The optic laws in this case specialise to:
\begin{proposition}
A concrete effectful lens is lawful iff
\begin{align*}
\mu_S T(\ev_{A, TS}) \; \munzip &= \eta_S \\
T(\eta_{\homC(A, TS) \otimes A}\coev_{\homC(A, TS), A} \otimes A)\munzip &= T((\munzip \circ_T -) \otimes A)\munzip
\end{align*}
where
\[ \munzip \circ_T - : \homC(A, TS) \to \homC(A, T(\homC(A, TS) \otimes A)) \]
denotes internal Kleisli composition and
\[\coev_{\homC(A, TS), A} : \homC(A, TS) \to \homC(A, \homC(A, TS) \otimes A) \]
is coevaluation. \qed
\end{proposition}
Or, if you prefer do-notation, the two laws are:
\begin{multicols}{2}
\begin{minted}{haskell}
do (c, a) <- munzip s
   c a
== return s
\end{minted}
~\columnbreak
\begin{minted}{haskell}
do (c, a) <- munzip s
   let f a' = do s' <- c a'
                 munzip s'
   return (f, a)
==
do (c, a) <- munzip s
   let f a' = return (c, a')
   return (f, a)
\end{minted}
\end{multicols}
The inclusion of $\C$ into $\C_T$ preserves the action of $\C$, so there is an induced inclusion $\Optic_\otimes \to \Optic_\monact$. If we choose a specific monad, we can hope to simplify the description of a concrete effectful optic and its laws.
%\subsubsection{Partial Lenses}
%
%The simplest nontrivial monad we could try is the maybe monad.
%
%\begin{definition}
%The \emph{partiality} or \emph{maybe monad} is defined by
%\begin{align*}
%T X = X \sqcup 1
%\end{align*}
%with unit $\eta_X : X \to X \sqcup 1$ the inclusion, multiplication $\mu_X : (X \sqcup 1) \sqcup 1 \to X \sqcup 1$ given by the fold map $1 \sqcup 1 \to 1$, and strength $\theta_{A, B}$ given by the composite
%\[
%A \times (B \sqcup 1) \to (A \times B) \sqcup (A \times 1) \to (A \times B) \sqcup 1
%\]
%using the unique map $A \times 1 \to 1$.
%\end{definition}
%
\subsubsection{Writer Lenses}
We begin with a simple example. Suppose $\C$ has finite products.
\begin{definition}
The \emph{writer monad} for a monoid $W$ is defined by
\begin{align*}
T_W X = X \times W
\end{align*}
The unit and multiplication of $T_W$ are given by pairing with the unit and multiplication of $W$, and the strength is simply the associativity morphism.
\end{definition}
We can find a more explicit description of concrete effectful lenses for this monad.
\begin{align*}
\Optic((S, S'), (A, A')) &= \int^{M \in \C} \C_{T_W}(S, M \monact A) \times \C_{T_W}(M \monact A', S') \\
&= \int^{M \in \C} \C(S, M \times A \times W) \times \C_{T_W}(M \monact A', S') \\
&\cong \int^{M \in \C} \C(S, M) \times \C(S, A\times W) \times \C_{T_W}(M \monact A', S') \\
&\cong \C_{T_W}(S, A) \times \C_{T_W}(S \times A', S')
\end{align*}
Fortunately, concrete writer lenses correspond to $\fget$ and $\fput$ functions in the Kleisli category of $T_W$.

\subsubsection{Stateful Lenses}
Suppose $\C$ is cartesian closed.
\begin{definition}
The \emph{state monad} with state $Q$ is defined by
\begin{align*}
T_Q X = \homC(Q, X \times Q)
\end{align*}
%This is the monad for the tensor-hom adjunction.
There is a pair of useful morphisms in the Kleisli category, $\textsc{GetState} : 1 \to Q$ and $\textsc{PutState} : Q \to 1$; these appear as \mintinline{haskell}{getState} and \mintinline{haskell}{putState} in the code below.
%\begin{align*}
%\textsc{GetState} &: 1 \to Q \\
%\textsc{PutState} &: Q \to 1
%\end{align*}
%where $\textsc{GetState}$ is the transpose of the diagonal map $1 \times Q \cong Q \to Q \times Q$, and $\textsc{PutState}$ is the transpose of the second projection $Q \times Q \to Q \cong 1 \times Q$.
\end{definition}
We call optics for the action $\monact : \C \times \C_{T_Q} \to \C_{T_Q}$ \emph{stateful lenses}. We can find a concrete description that is closer to that for ordinary lenses:
\begin{align*}
\Optic_\monact((S, S'), (A, A')) &= \int^{M \in \C} \C_{T_Q}(S, M \monact A) \times \C_{T_Q}(M \monact A', S') \\
&= \int^{M \in \C} \C(S, \homC(Q, M \times A \times Q)) \times \C_{T_Q}(M \monact A', S') \\
&\cong \int^{M \in \C} \C(S \times Q, M \times A\times Q) \times \C_{T_Q}(M \monact A', S') \\
&\cong \int^{M \in \C} \C(S \times Q, M) \times \C(S \times Q, A\times Q) \times \C_{T_Q}(M \monact A', S') \\
&\cong \C(S \times Q, A \times Q) \times \C_{T_Q}((S \times Q) \monact A', S') \\
&\cong \C_{T_Q}(S, A) \times \C_{T_Q}(S \times Q \times A', S')
\end{align*}
By analogy with ordinary lenses, let us call these maps $\mget$ and $\mput$. The induced composition of effectful lenses is a little intricate, and is possibly best explained in code. The composite $\mget$ is straightforward, just the composite of $\mget_1$ and $\mget_2$ in the Kleisli category. For $\mput$ however, there is some curious plumbing of the state into different places. Tracing through the isomorphism, two stateful lenses $(\mget_1, \mput_1) : (T, T') \hto (S, S')$ and $(\mget_2, \mput_2) : (S, S') \hto (A, A')$ compose as follows.
\begin{minted}{haskell}
mget t = do s <- mget1 t
            mget2 s

mput t q a = do start <- getState
                s <- mget1 t
                q' <- getState
                putState start
                s' <- mput2 s q' a
                mput1 t q s'
\end{minted}
\begin{proposition}
A stateful lens given by
\begin{minted}{haskell}
mget :: s -> State q a
mput :: s -> q -> a -> State q s
\end{minted}
is lawful iff the following three laws hold: \\
\begin{minipage}{\textwidth}
\begin{multicols}{3}
\begin{minted}{haskell}
do q <- getState
   a <- mget s
   mput s q a
== return s
\end{minted}
~\columnbreak
\begin{minted}{haskell}
do s' <- mput s q a
   mget s'
== return a
\end{minted}
~\columnbreak
\begin{minted}{haskell}
let (s', q') =
      runState (mput s q1 a1) q2
in mput s' q' a2
== mput s q1 a2
\end{minted}
\end{multicols}
\end{minipage}
By analogy we call these the $\fget\fput$, $\fput\fget$ and $\fput\fput$ laws.
\end{proposition} %\begin{proof} %We similarly calculate %\begin{align*} %&\int^{M, N \in \C} \C_{T_Q}(S, M \monact A) \times \C_{T_Q}(M \monact A, N \monact A) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \int^{M, N \in \C} \C(S, \homC(Q, M \times A \times Q)) \times \C(M \times A, \homC(Q, N \times A \times Q)) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \int^{M, N \in \C} \C(S \times Q, M \times A \times Q) \times \C(M \times A \times Q, N \times A \times Q) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \int^{M, N \in \C} \C(S \times Q, M) \times \C(S \times Q, A \times Q) \times \C(M \times A \times Q, N \times A \times Q) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \int^{N \in \C} \C_{T_Q}(S, A) \times \C(S \times Q \times A \times Q, N \times A \times Q) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \int^{N \in \C} \C_{T_Q}(S, A) \times \C(S \times Q \times A \times Q, N) \times \C(S \times Q \times A \times Q, A \times Q) \times \C_{T_Q}(N \monact A, S) \\ %&\cong \C_{T_Q}(S, A)\times \C_{T_Q}((S \times Q) \monact A, A) \times \C_{T_Q}((S \times Q \times A \times Q) \monact A, S) %\end{align*} %\todo{This is a mess} %\end{proof} Of course, this notion of effectful lens may not be useful! It is hard to get intuition for the meaning of the laws, but they seem to suffer from the same deficiency that other attempts at effectful lenses do: they are too strong. The $\fget\fput$ law here appears easier to satisfy than the $\mathsf{MGetPut_0}$ law of~\cite{ReflectionsOnMonadicLenses}, as $\mput$ is given access to the original state. However, our $\fput\fget$ law seems very restrictive: no matter what auxiliary state is provided, $\fput$ting then $\fget$ting must leave the state unchanged. \subsection{Further Examples} The dedicated reader may enjoy deriving the concrete representation and laws for the following optic varieties: \begin{itemize} \item \emph{``Achromatic'' Lenses}~\cite[Section 5.2]{ProfunctorOpticsThesis} are lenses that also admit an operation $\fcreate : A \to S$. These are optics for the action of $\C$ on itself by $M \act A = (M \sqcup 1) \times A$, or equivalently, of the category of pointed objects of $\C$ on $\C$ by cartesian product. Concrete achromatic lenses $(S, S') \hto (A, A')$ are elements of the set \[\C(S, \homC(A', S') \sqcup 1) \times \C(S, A) \times \C(A', S').\] %\begin{align*} % \Optic((S, S'), (A, A')) % &= \int^{M \in \C} \C(S, M \act A) \times \C(M \act A', S') \\ % &= \int^{M \in \C} \C(S, (M \sqcup 1) \times A) \times \C((M \sqcup 1) \times A', S') \\ % &\cong \int^{M \in \C} \C(S, (M \sqcup 1) \times A) \times \C((M \times A') \sqcup A', S') \\ % &\cong \int^{M \in \C} \C(S, (M \sqcup 1) \times A) \times \C(M \times A', S') \times \C(A', S') \\ % &\cong \int^{M \in \C} \C(S, (M \sqcup 1) \times A) \times \C(M, \homC(A', S')) \times \C(A', S') \\ % &\cong \C(S, (\homC(A', S') \sqcup 1) \times A) \times \C(A', S') \\ % &\cong \C(S, \homC(A', S') \sqcup 1) \times \C(S, A) \times \C(A', S') %\end{align*} \item \emph{Affine Traversals}~\cite{SecondOrderFunctionals} allow access to a target that may or may not be present. Suppose $\C$ is cartesian closed and has binary coproducts. 
Let $\mathsf{Aff}$ be the category $\C \times \C$, equipped with the monoidal structure
\begin{align*}
(P', Q') \otimes (P, Q) &= (P' \sqcup (Q' \times P) , Q' \times Q)
\end{align*}
The category $\mathsf{Aff}$ acts on $\C$ by $(P, Q) \act A = P \sqcup (Q \times A)$; in fact, $\mathsf{Aff}$ is cooked up to act on $\C$ exactly by the closure of the actions $- \times A$ and $- \sqcup A$ under composition. A concrete affine traversal is an element of
\[\C(S, S' \sqcup (\homC(A', S') \times A)).\]
Affine traversals are described in the folklore as pairs of maps $\C(S, A \sqcup S') \times \C(S\times A', S')$. Such a pair does determine an affine traversal, but gives more information than is necessary: the right-hand map need not be defined on all of $S$.
%\begin{align*}
%	(P', Q') \act (P, Q) \act A
%	&= (P', Q') \act (P \sqcup (Q \times A)) \\
%	&= P' \sqcup (Q' \times (P \sqcup (Q \times A))) \\
%	&\cong P' \sqcup (Q' \times P) \sqcup (Q' \times Q \times A) \\
%	&= (P' \sqcup (Q' \times P) , Q' \times Q) \act A \\
%	&= ((P', Q') \otimes (P, Q)) \act A
%\end{align*}
%\begin{remark}
%	It is important here that the morphisms in $\mathsf{Aff}$ are only those that arise from pairs of morphisms $P \to P'$ and $Q \to Q'$, although in principle there may be other natural transformations between the corresponding functors $(P, Q) \act -$ and $(P', Q') \act -$.
%\end{remark}
%\todo{This feels similar to taking some sort of `compositum' of the two actions $\times$ and $\sqcup$, both embed in this category. Marco suggests taking the pushout of the projections into the pullback, calculated in the 2-cat of monoidal categories.}
%Now the set of optics $(S, S') \hto (A, A')$ is:
%\begin{align*}
%	\Optic_{\mathsf{Aff}}((S, S'), (A, A'))
%	&= \int^{M \in \mathsf{Aff}} \C(S, M \act A) \times \C(M \act A', S') \\
%	&\cong \int^{P,Q \in \C} \C(S, (P,Q) \act A) \times \C((P,Q) \act A', S') \\
%	&= \int^{P,Q \in \C} \C(S, P \sqcup (Q \times A)) \times \C(P \sqcup (Q \times A'), S') \\
%	&\cong \int^{P,Q \in \C} \C(S, P \sqcup (Q \times A)) \times \C(P,S') \times \C(Q \times A', S') \\
%	&\cong \int^{Q \in \C} \C(S, S' \sqcup (Q \times A)) \times \C(Q \times A', S') \\
%	&\cong \int^{Q \in \C} \C(S, S' \sqcup (Q \times A)) \times \C(Q, \homC(A', S')) \\
%	&\cong \C(S, S' \sqcup (\homC(A', S') \times A))
%\end{align*}
\item \emph{Grates}~\cite{GratesPost} are optics for the contravariant action of a monoidal closed category $\C$ on itself by $X \act A = \homC(X, A)$. Concretely, these correspond to elements of
\[ \C(\homC(\homC(S, A), A'), S'). \]
\end{itemize}

\section{The Profunctor Encoding}\label{sec:profunctor-optics}
To use optics in practice, one could take the definition of the optic category and translate it almost verbatim into code---using an existential type in place of the coend. In Haskell syntax, lenses would be defined as:
\begin{minted}{haskell}
data Lens s s' a a' = forall m. Lens { l :: s -> (m, a), r :: (m, a') -> s' }
\end{minted}
This is not the approach usually taken in implementations! Instead the somewhat indirect \emph{profunctor encoding} is used. (This is not quite true for the Haskell \lenslib{} library: for a few reasons, \lenslib{} uses the closely related \emph{van Laarhoven encoding}; see Section~\ref{sec:van-laarhoven}. The Purescript \texttt{purescript-profunctor-lenses} library~\cite{PurescriptLibrary} does use the profunctor encoding directly.)
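For orientation, the encoding looks as follows in Haskell. This is a sketch in the \mintinline{haskell}{dimap}/\mintinline{haskell}{second'} vocabulary of the \texttt{profunctors} package (the name \mintinline{haskell}{LensP} is ours); the \mintinline{haskell}{Strong} class plays the role of a Tambara module structure for the cartesian action, in the sense defined below:
\begin{minted}{haskell}
{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

class Profunctor p => Strong p where
  second' :: p a b -> p (m, a) (m, b)

-- A profunctor-encoded lens transforms every Strong profunctor.
type LensP s s' a a' = forall p. Strong p => p a a' -> p s s'

-- An existential lens, as above, induces a profunctor lens.
lensP :: (s -> (m, a)) -> ((m, a') -> s') -> LensP s s' a a'
lensP l r = dimap l r . second'
\end{minted}
Recovering the existential form amounts to instantiating $p$ at a suitable concrete profunctor, in the spirit of the exchange profunctors defined below.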
The equivalence between the profunctor encoding and optics as described earlier has been explored in~\cite{ProfunctorOptics} and~\cite{ProfunctorOpticsPost}. We begin by reviewing this equivalence from a categorical perspective before investigating how the optic laws manifest in this setting.

\subsection{Tambara Modules}
Let $I = \C(-,{=}) : \C \hto \C$ be the identity profunctor and $\odot$ be profunctor composition, written in diagrammatic order. The following section generalises definitions that first appeared in~\cite[Section 3]{Doubles} for monoidal categories to the more general case of a monoidal action.
\begin{definition}
  Suppose a category $\C$ is acted on by $(\M, \otimes, I)$ and let $P \in \Prof(\C, \C)$ be a profunctor. A \emph{Tambara module structure for $\M$ on $P$} is a family of maps:
  \begin{align*}
    \zeta_{A,B,M} : P(A,B) \to P(M \act A, M\act B)
  \end{align*}
  natural in $A$ and $B$, dinatural in $M$, and such that $\zeta$ commutes with the action of $\M$:
  \[
  \begin{tikzcd}
    P(A,B) \ar[r, "\zeta_{A,B,M}"] \ar[d, "\zeta_{A, B, N\otimes M}" swap] & P(M \act A, M \act B) \ar[d, "\zeta_{M \act A, M \act B, N}" right] \\
    P((N\otimes M) \act A, (N\otimes M) \act B) \ar[r, "\alpha_{N, M, A}" swap] & P(N\act (M\act A), N\act (M \act B))
  \end{tikzcd}
  \qquad
  \begin{tikzcd}
    P(A,B) \ar[r, "\zeta_{A,B,I}"] \ar[dr, equal] & P(I\act A, I\act B) \ar[d, "{P(\lambda_A^{-1}, \lambda_B)}" right] \\
    & P(A, B)
  \end{tikzcd}
  \]
  for all $A, B \in \C$ and $N, M \in \M$.
\end{definition}
Note that the identity profunctor $I$ has a canonical Tambara module structure $\zeta_{A, B, M} : \C(A, B) \to \C(M \act A, M \act B)$ for any $\M$, given by functoriality.

If $P, Q \in \Prof(\C, \C)$ are equipped with module structures $\zeta$ and $\xi$ respectively, there is a canonical module structure on $P \odot Q$. Given $M \in \M$ and $A,B \in \C$, the structure map ${(\zeta \odot \xi)}_{A,B,M}$ is induced by
\begin{align*}
  &P(A,C) \times Q(C,B) \\
  \xrightarrow{\zeta_{A,C,M} \times \xi_{C,B,M}} \quad& P(M\act A, M\act C) \times Q(M\act C, M\act B) \\
  \xrightarrow{\copr_{M\act C}} \quad&\int^{C \in \C} P(M\act A, C) \times Q(C, M\act B) \\
  = \quad&(P \odot Q)(M\act A, M\act B)
\end{align*}
\begin{definition}
  There is a category $\Tamb_\M$ of Tambara modules and natural transformations that respect the module structure, in the sense that for any $l : P \to Q$, the diagram
  \[
  \begin{tikzcd}
    P(A,B) \ar[r, "\zeta_{A,B,M}"] \ar[d, "l_{A,B}" left] & P(M\act A, M\act B) \ar[d, "l_{M\act A, M\act B}" right] \\
    Q(A,B) \ar[r, "\xi_{A,B,M}" swap] & Q(M \act A, M \act B)
  \end{tikzcd}
  \]
  commutes.
\end{definition}
This category is monoidal with respect to $\odot$ as given above with monoidal unit $I$. There is an evident forgetful functor $U : \Tamb_\M \to \Prof(\C, \C)$ that is strong monoidal. This forgetful functor has both a left and a right adjoint; important for us is the left adjoint, defined below. (The right adjoint to $U$ is described in~\cite{NotionsOfComputationAsMonoids}, where it is used to investigate Haskell's \mintinline{haskell}{Arrow} typeclass.)
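To ground the canonical structures on the identity profunctor and on composites described above, here is how they look in Haskell when specialised to the cartesian action $M \act A = M \times A$. This is a sketch of our own; the names \mintinline{haskell}{zetaId}, \mintinline{haskell}{Comp} and \mintinline{haskell}{zetaComp} are not from any library.
\begin{minted}{haskell}
{-# LANGUAGE GADTs, RankNTypes #-}

-- The identity profunctor (->) is a Tambara module: zeta is just
-- functoriality of the action (m, -).
zetaId :: (a -> b) -> ((m, a) -> (m, b))
zetaId f (m, a) = (m, f a)

-- Profunctor composition, with the middle object hidden existentially.
data Comp p q a b where
  Comp :: p a c -> q c b -> Comp p q a b

-- The induced structure on a composite: lift each half at the same m,
-- recomposing at the middle object (m, c).
zetaComp
  :: (forall x y. p x y -> p (m, x) (m, y))  -- structure map for p
  -> (forall x y. q x y -> q (m, x) (m, y))  -- structure map for q
  -> Comp p q a b
  -> Comp p q (m, a) (m, b)
zetaComp zeta xi (Comp p q) = Comp (zeta p) (xi q)
\end{minted}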
\begin{definition}[{\cite[Section 5]{Doubles}}]
  Let $\Pastro_\M : \Prof(\C, \C) \to \Tamb_\M$ be the functor:
  \begin{align*}
    \Pastro_\M(P) := \int^{M \in \M} \C(-, M\act {=}) \odot P \odot \C(M\act -, {=})
  \end{align*}
  Or, in other words,
  \begin{align*}
    \Pastro_\M(P)(A,B) := \int^{M \in \M} \int^{C,D \in \C} \C(A, M\act C) \times P(C,D) \times \C(M \act D, B)
  \end{align*}
  The module structure $\zeta_{A,B,M} : \Pastro_\M P(A,B) \to \Pastro_\M P (M\act A, M\act B) $ is induced by the maps
  \begin{align*}
    &\C(A, N\act C) \times P(C,D) \times \C(N\act D, B) \\
    \xrightarrow{\text{functoriality}} \quad& \C(M\act A, M\act N\act C) \times P(C,D) \times \C(M\act N\act D, M\act B) \\
    \xrightarrow{\copr_{M\otimes N}} \quad&\int^{N \in \M} \int^{C,D \in \C} \C(M\act A, N\act C) \times P(C,D) \times \C(N\act D, M\act B) \\
    = \quad&\Pastro_\M P (M \act A, M \act B)
  \end{align*}
  for all $C, D \in \C$ and $N \in \M$. Equationally, this is $\zeta_{A,B,M}(\repthree{l}{p}{r} ) = \repthree{M\act l}{p}{M\act r} $.
\end{definition}
\begin{proposition}
  $\Pastro_\M : \Prof(\C, \C) \to \Tamb_\M$ is left adjoint to $U : \Tamb_\M \to \Prof(\C, \C)$.
\end{proposition}
\begin{proof}
  For any $P \in \Prof(\C, \C)$, there is a map $\eta : P \hto U \Pastro_\M P$, given by $\eta(p) = \repthree{\id_A}{p}{\id_B}$.
  Suppose we have an element $\repthree{l}{p}{r} \in \Pastro_\M P(A,B)$, say with $l : A \to M\act C$, $p \in P(C,D)$ and $r : M\act D \to B$. One can check that this element is equal to
  \begin{align*}
    \repthree{l}{p}{r} = (\Pastro_\M P(l, r)) \; \zeta_{C, D, M} \; \eta(p)
  \end{align*}
  where $\zeta_{C, D, M}$ is the module structure map for $\Pastro_\M P$.

  If $T \in \Tamb_\M$ is a Tambara module with structure map $\xi$, we would like to show that for any map $f : P \hto UT$ there exists a unique $\hat f : \Pastro_\M P \hto T$ so that $f$ factors as
  \[P \xrightarrow{\eta} U \Pastro_\M P \xrightarrow{U\hat f} UT. \]
  The data of such a map $\hat f : \Pastro_\M P \hto T$ is a natural transformation between the underlying profunctors. For the factorisation property to hold we must have that $\hat{f}\eta(p) = f(p)$ for any $p \in P(A,B)$, but then the action on the remainder of $\Pastro_\M P(A, B)$ is fixed:
  \begin{align*}
    \hat{f}(\repthree{l}{p}{r}) &= \hat{f}(\Pastro_\M P(l, r) \; \zeta_{C, D, M} \; \eta(p)) \\
    &=T(l, r) \; \xi_{C, D, M} \; f(p)
  \end{align*}
  This establishes uniqueness. It remains to show that $\hat{f}$ so defined is actually a Tambara module morphism, but this is easy:
  \begin{align*}
    \hat{f}\zeta_{A,B,N}(\repthree{l}{p}{r}) &= \hat{f}(\repthree{N\act l}{p}{N\act r}) && \text{(definition of $\zeta$)}\\
    &= T(N\act l, N\act r) \; \xi_{C, D, N \otimes M} \; f(p) && \text{(definition of $\hat{f}$)}\\
    &= T(N\act l, N\act r) \; \xi_{M\act C,M\act D,N} \; \xi_{C, D, M} \; f(p) && \text{($\xi$ commutes with tensor in $\M$)} \\
    &= \xi_{A,B,N} \; T(l, r) \; \xi_{C, D, M} \; f(p) && \text{(naturality of $\xi$)} \\
    &= \xi_{A,B,N} \hat{f} (\repthree{l}{p}{r}) && \text{(definition of $\hat{f}$)}
  \end{align*}
\end{proof}
\begin{corollary}
  $\Pastro_\M$ (and therefore also $U \Pastro_\M$) is oplax monoidal.
\end{corollary}
\begin{proof}
  This follows from abstract nonsense as $\Pastro_\M$ is the left adjoint of a strong monoidal functor; see~\cite{Kelly1974}.
\end{proof}

\subsection{Optics}
\begin{definition}
  For a pair of objects $A, A' \in \C$, the \emph{exchange profunctor} $E_{A, A'}$ is defined to be $\C(-, A) \times \C(A', {=})$.
\end{definition}
Given a profunctor, or indeed a Tambara module, we can evaluate it at any two objects of $\C$.
This process is functorial in the choice of Tambara module, giving a functor $(U-)(A,A') : \Tamb_\M \to \Set$.
\begin{lemma}\label{lemma-rep}
  The functor $(U-)(A,A') : \Tamb_\M \to \Set$ is representable: there is an isomorphism $(U-)(A,A') \cong \Tamb_\M(\Pastro_\M E_{A, A'}, -)$.
\end{lemma}
\begin{proof}
  We have the chain of isomorphisms:
  \begin{align*}
    &(U-)(A,A') \\
    \cong \;&\int_{X,Y \in \C} \Set(\C(X,A) \times \C(A',Y), (U-)(X,Y)) && \text{(by Yoneda reduction twice)} \\
    =\;&\int_{X,Y \in \C} \Set(E_{A, A'}(X,Y), (U-)(X,Y)) && \text{(by definition)}\\
    \cong \;&\Prof(E_{A, A'}, U-) && \text{(natural transformations as ends)} \\
    \cong \;&\Tamb_\M(\Pastro_\M E_{A, A'}, -) && \text{(by adjointness)}
  \end{align*}
\end{proof}
Note that the value of $\Pastro_\M E_{A, A'}$ at $(X,Y)$ is precisely the set of optics $(X, Y) \hto (A, A')$:
\begin{align*}
  \Pastro_\M E_{A, A'} (X, Y)
  &= \int^{M \in \M} \int^{C,D \in \C} \C(X, M\act C) \times E_{A, A'}(C,D) \times \C(M\act D, Y) \\
  &= \int^{M \in \M} \int^{C,D \in \C} \C(X, M\act C) \times \C(C, A) \times \C(A', D) \times \C(M\act D, Y) \\
  &\cong \int^{M \in \M} \C(X, M\act A) \times \C(M\act A', Y)
\end{align*}
For convenience we identify $\Pastro_\M E_{A, A'}(X,Y)$ with $\Optic_\M((X, Y), (A, A'))$.

We can now show that profunctor optics are precisely optics in the ordinary sense.
\begin{proposition}[Profunctor Optics are Optics]\label{prop:profunctor-optics-are-optics}
  \begin{align*}
    [\Tamb_\M, \Set]((U-)(A,A'),(U-)(S,S')) &\cong \Optic_\M((S, S'), (A, A'))
  \end{align*}
\end{proposition}
\begin{proof}
  We have the chain of isomorphisms:
  \begin{align*}
    &[\Tamb_\M, \Set]((U-)(A,A'),(U-)(S,S')) \\
    \cong \;&[\Tamb_\M, \Set](\Tamb_\M(\Pastro_\M E_{A, A'}, -), (U-)(S,S')) && \text{(by Lemma~\ref{lemma-rep})}\\
    \cong \;&(U\Pastro_\M E_{A, A'})(S,S') && \text{(by Yoneda)} \\
    = \;&\Optic_\M((S, S'), (A, A'))
  \end{align*}
\end{proof}
For $p : (S, S') \hto (A, A')$, let $\tilde{p} : (U-)(A,A') \Rightarrow (U-)(S,S')$ denote the corresponding natural transformation under this isomorphism, and for $t : (U-)(A,A') \Rightarrow (U-)(S,S')$, let $\hat{t} : (S, S') \hto (A, A')$ be the corresponding optic.
\begin{corollary}
  A profunctor optic $t$ is determined by its component at $\Pastro_\M E_{A, A'}$, and furthermore, this component is determined by its value on $\rep{\lambda_A^{-1}}{\lambda_{A'}} \in (U \Pastro_\M E_{A, A'})(A, A')$.
\end{corollary}
\begin{proof}
  This is the content of the first two isomorphisms above. Explicitly, suppose $p = \rep{l}{r}$ with $l : S \to M\act A$ and $r : M\act A' \to S'$. Then for any Tambara module $P$, the component of $\tilde{p}$ at $P$ is
  \begin{align*}
    \tilde{p}_P = (UP)(l,r) \; \zeta_{A,A',M}
  \end{align*}
  where $\zeta$ is the module structure for $P$. In particular,
  \[ \tilde{p}_{\Pastro_\M E_{A, A'}}(\rep{\lambda_A^{-1}}{\lambda_{A'}}) = \rep{l}{r} \]
\end{proof}
We finish with one final isomorphic description of an optic:
\begin{proposition}
  $\Optic_\M((S, S'), (A, A'))$ is isomorphic to $\Tamb_\M(\Pastro_\M E_{S, S'}, \Pastro_\M E_{A, A'})$.
\end{proposition}
\begin{proof}
  This follows from the previous two propositions and the Yoneda lemma:
  \begin{align*}
    &\Optic_\M((S, S'), (A, A')) \\
    &\cong [\Tamb_\M, \Set]((U-)(A,A'),(U-)(S,S')) \\
    &\cong [\Tamb_\M, \Set](\Tamb_\M(\Pastro_\M E_{A, A'}, -),\Tamb_\M(\Pastro_\M E_{S, S'}, -)) \\
    &\cong \Tamb_\M(\Pastro_\M E_{S, S'}, \Pastro_\M E_{A, A'})
  \end{align*}
  Explicitly, an optic $p = \rep{l}{r}$ corresponds to the natural transformation with components:
  \begin{align*}
    t_{X, Y} : \Pastro_\M E_{S, S'}(X, Y) \to \Pastro_\M E_{A, A'}(X, Y) \\
    t_{X, Y}(\rep{f}{g}) = \rep{(M\act l)f}{g(M\act r)}
  \end{align*}
  where $M$ is the residual for the representative $\rep{f}{g}$. This is exactly the formula for optic composition!
\end{proof}

\subsection{Lawful Profunctor Optics}
The next goal is to characterise the profunctor optics that correspond to lawful optics.

The exchange profunctor $E_{A, A}$, hereafter abbreviated to $E_A$, has a comonoid structure, where the comultiplication $\Delta : E_A \to E_A \odot E_A$ and counit $\varepsilon : E_A \to I$ are given by
\begin{align*}
  \Delta_{X, Y} : (E_A)(X, Y) &\to (E_A \odot E_A)(X, Y) \\
  \Delta_{X, Y}(\rep{f}{g}) &= \repthree{f}{\id_A}{g} \\
  \varepsilon_{X, Y} : (E_A)(X, Y) &\to \C(X, Y) \\
  \varepsilon_{X, Y}(\rep{f}{g}) &= gf
\end{align*}
respectively. Here we have identified $E_A \odot E_A$ with the profunctor $\C(-, A) \times \C(A, A) \times \C(A, =)$, via the isomorphism
\begin{align*}
  E_A \odot E_A &= \int^{Z \in \C} E_A(-, Z) \times E_A(Z, =) \\
  &= \int^{Z \in \C} \C(-, A) \times \C(A, Z) \times \C(Z, A) \times \C(A, =) \\
  &\cong \C(-, A) \times \C(A, A) \times \C(A, =)
\end{align*}
Because $\Pastro_\M$ is oplax monoidal, the Tambara module $\Pastro_\M E_A$ has an induced comonoid structure, in this case given by
\begin{align*}
  \Delta_{X, Y} : (\Pastro_\M E_A)(X, Y) &\to (\Pastro_\M E_A \odot \Pastro_\M E_A)(X, Y) \\
  \Delta(\rep{l}{r}) &= \repthree{l}{\id_{M\act A}}{r} \\
  \varepsilon_{X, Y} : (\Pastro_\M E_A)(X, Y) &\to \C(X, Y) \\
  \varepsilon(\rep{l}{r}) &= rl
\end{align*}
The connection with lawfulness is hopefully now evident!
\begin{proposition}\label{prop:lawful-if-homomorphism}
  An optic $p : S \hto A$ is lawful iff the corresponding natural transformation $\Pastro_\M E_S \rightarrow \Pastro_\M E_A$ is a comonoid homomorphism.
\end{proposition}
\begin{proof}
  For $t : \Pastro_\M E_S \rightarrow \Pastro_\M E_A$ to be a comonoid homomorphism means that the following diagrams commute for every $X, Y \in \C$:
  \[
  \begin{tikzcd}
    (\Pastro_\M E_S)(X, Y) \ar[r, "t_{X, Y}"] \ar[d, "\varepsilon_{X,Y}" swap] & (\Pastro_\M E_A)(X, Y) \ar[d, "\varepsilon_{X,Y}"] \\
    \C(X, Y) \ar[r, equals] & \C(X, Y)
  \end{tikzcd}
  \quad
  \begin{tikzcd}
    (\Pastro_\M E_S)(X, Y) \ar[r, "t_{X, Y}"] \ar[d, "\Delta_{X, Y}" swap] & (\Pastro_\M E_A)(X, Y) \ar[d, "\Delta_{X, Y}"] \\
    (\Pastro_\M E_S \odot \Pastro_\M E_S)(X, Y) \ar[r, "(t \odot t)_{X, Y}" swap] & (\Pastro_\M E_A \odot \Pastro_\M E_A)(X, Y)
  \end{tikzcd}
  \]
  Suppose $t$ corresponds to an optic with representative $\rep{l}{r}$ with residual $M$, and suppose we have an element $\rep{f}{g} \in (\Pastro_\M E_S)(X, Y)$ with residual $N$. The left diagram requires that
  \begin{align*}
    g(N\act r)(N\act l)f = gf,
  \end{align*}
  as an element of $\C(X, Y)$. This is certainly true as $rl = \id_S$.
The right diagram claims that
\begin{align*}
  \repthree{(N\act l)f}{\id_{N\act M\act A}}{g(N \act r)} = \repthree{(N\act l)f}{(N\act r)(N\act l)}{g(N\act r)}
\end{align*}
But this holds by exactly the same argument as used in Proposition~\ref{prop:lawful-category} to show that the composite of lawful optics is lawful: by transplanting the relations showing the second optic law for $\rep{l}{r}$.

For the backward direction, consider the above diagrams specialised to $X = Y = S$. Tracing the element $\rep{\lambda_S^{-1}}{\lambda_S} \in (\Pastro_\M E_S)(S, S)$ around the commutative diagrams yields precisely the first and second optic laws respectively.
\end{proof}

All that is needed to complete the connection with profunctor optics is the following standard result in category theory.
\begin{lemma}
  For an object $X$ in a monoidal category $(\C, \otimes, I)$, a comonoid structure $(X,\Delta,\varepsilon)$ is equivalent to a lax monoidal structure on the functor $\C(X, -) : \C \to \Set$, considering $\Set$ as a monoidal category with respect to $\times$. Further, a morphism $(X_1,\Delta_1,\varepsilon_1) \to (X_2,\Delta_2,\varepsilon_2)$ is a comonoid homomorphism iff the induced natural transformation $\C(X_2, -) \Rightarrow \C(X_1, -)$ is monoidal.
\end{lemma}
\begin{proof}
  This is a follow-your-nose result!
%  For a comonoid $(X,\Delta,\varepsilon)$, define a lax monoidal structure
%  \begin{align*}
%    \phi &: 1 \to \C(X, I) \\
%    \phi_{A, B} &: \C(X, A) \times \C(X, B) \to \C(X, A \otimes B)
%  \end{align*}
%  by $\phi = \varepsilon$ and $\phi_{A, B}(f, g) = (f \otimes g) \Delta$. The coherences follow straightforwardly using the comonoid axioms and functoriality of $\otimes$.
%
%  In the other direction, we recover the comonoid maps by $\varepsilon = \phi$ and $\Delta = \phi_{X, X}(\id_X, \id_X)$.
\end{proof}
\begin{theorem}
  $p : S \hto A$ is a lawful optic iff the associated natural transformation $\tilde{p} : (U-)(A,A) \Rightarrow (U-)(S,S)$ is monoidal with respect to the canonical lax monoidal structures on $(U-)(A,A)$ and $(U-)(S,S)$.
\end{theorem}
\begin{proof}
  \begin{align*}
    & p : S \hto A \text{ is lawful} \\
    \Leftrightarrow\; & \Pastro_\M E_S \to \Pastro_\M E_A \text{ is a comonoid homomorphism} \\
    \Leftrightarrow\; & \Tamb_\M(\Pastro_\M E_A, -) \Rightarrow \Tamb_\M(\Pastro_\M E_S, -) \text{ is a monoidal natural transformation} \\
    \Leftrightarrow\; & (U-)(A, A) \Rightarrow (U-)(S, S) \text{ is a monoidal natural transformation}
  \end{align*}
\end{proof}

\subsection{Implementation}
We quickly review how the profunctor encoding is translated into code in the Haskell~\cite{LensLibrary} and Purescript~\cite{PurescriptLibrary} libraries.

We define a typeclass for profunctors:
\begin{minted}{haskell}
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
\end{minted}
To be considered a valid instance of \mintinline{haskell}{Profunctor}, the function \mintinline{haskell}{dimap} must behave functorially.

Now, for each optic variant we wish to define, we create a typeclass for the corresponding Tambara module. In the case of $\Lens$es, this typeclass is named \mintinline{haskell}{Strong}:
\begin{minted}{haskell}
class Profunctor p => Strong p where
  second :: p a b -> p (c, a) (c, b)
\end{minted}
This \mintinline{haskell}{second} function is the equivalent of the structure map $\zeta$ for the Tambara module. We require this map to satisfy the Tambara module coherences, but as with any definition in Haskell, these equations must be checked manually.
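As a concrete example (a standard pair of instances, written out here for illustration), the function arrow admits both structures; its \mintinline{haskell}{second} is exactly functoriality of $(c, -)$:
\begin{minted}{haskell}
-- Functions form a profunctor: precompose on the input,
-- postcompose on the output.
instance Profunctor (->) where
  dimap f g h = g . h . f

-- ...and a Tambara module for the cartesian action: the extra
-- component c is carried through untouched.
instance Strong (->) where
  second f (c, a) = (c, f a)
\end{minted}
Both Tambara coherences hold for these instances by unfolding definitions.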
Now the type of lenses $(S, S') \hto (A, A')$ is the direct translation of the set of natural transformations $(U-)(A,A') \Rightarrow (U-)(S,S')$: \begin{minted}{haskell} type Lens s s' a a' = forall p. Strong p => p a a' -> p s s' \end{minted} where we use parametricity in \mintinline{haskell}{p} as a proxy for naturality. A profunctor lens \begin{minted}{haskell} l :: forall p. Strong p => p a a -> p s s \end{minted} is lawful if it is monoidal as a natural transformation. In code this is: \begin{minted}{haskell} l id == id l (Procompose p q) == Procompose (l p) (l q) \end{minted} where \begin{minted}{haskell} data Procompose p q d c where Procompose :: p x c -> q d x -> Procompose p q d c \end{minted} denotes profunctor/Tambara module composition, once equipped with appropriate \mintinline{haskell}{Profunctor} and \mintinline{haskell}{Strong} instances. %The profunctor encoding has a number of benefits. The primary benefit is that the typeclass system allows optics to automatically degrade from one variant to another as needed. For example, if we wish to also encode $\Traversal$s, we would define the corresponding class of Tambara modules %\begin{minted}{haskell} %class Strong p => Wandering p where % wander :: Traversable f => p a b -> p (f a) (f b) %\end{minted} %Note that \mintinline{haskell}{Strong} is a superclass. An instance of \mintinline{haskell}{Wandering} is required to behave the same way %So given a % % %Another benefit is that optics compose via ordinary function composition \subsection{The van Laarhoven Encoding}\label{sec:van-laarhoven} Some optic variants can be encoded in a profunctor-like style without requiring the full complexity of profunctors. Chronologically this development came before profunctor optics, and was first introduced by Twan van Laarhoven~\cite{VanLaarhovenPost}. The van Laarhoven encoding for \mintinline{haskell}{Lens}es, \mintinline{haskell}{Traversal}s and \mintinline{haskell}{Setter}s is: \begin{minted}{haskell} type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s) type Traversal s a = forall f. Applicative f => (a -> f a) -> (s -> f s) type Setter s a = forall f. Settable f => (a -> f a) -> (s -> f s) \end{minted} What allows such a description to work for these optic variants is that the Tambara module that characterises them, $\Pastro_\M E_A$, can be written in the form $\C(-, \mintinline{haskell}{f}=)$ for some \mintinline{haskell}{f} that is an instance of the corresponding typeclass. This is possible in particular for the optic variants that admit a coalgebraic description; the ones for which the evaluation-at-$A$ functor has a right adjoint. No expressive power is lost by defining an optic to operate only on functions of the shape \mintinline{haskell}{a -> f a'}, as the entire concrete description of the optic can be extracted from its value on that particular Tambara module. The same is not true for other optic variants, and indeed in the Haskell \lenslib{} library, \mintinline{haskell}{Prism}s and \mintinline{haskell}{Review}s take a form much closer to the profunctor encoding. (The \lenslib{} library does not use \emph{precisely} the profunctor encoding even here, for backwards compatibility reasons.) A consequence is that the laws typically given for $\Traversal$s actually only need to be checked for the applicative functor we earlier called $UR^*$. In Haskell this functor is implemented as \mintinline{haskell}{FunList}~\cite{FunListPost} or \mintinline{haskell}{Bazaar}~\cite{LensLibrary}. 
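To make the coalgebraic remark above concrete, the following sketch (ours; the names \mintinline{haskell}{mkLens} and \mintinline{haskell}{over} are hypothetical and not those of the \lenslib{} library) builds a van Laarhoven lens from a concrete get/put pair, and shows that instantiating the quantified functor at \mintinline{haskell}{Identity} recovers modification:
\begin{minted}{haskell}
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

-- A concrete get/put pair determines a van Laarhoven lens.
mkLens :: (s -> a) -> (s -> a -> s) -> Lens s a
mkLens get put k s = fmap (put s) (k (get s))

-- Choosing f = Identity extracts a setter; by the representability
-- argument above, no information about the lens is lost in doing so.
over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)
\end{minted}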
\section{Future Work}
There are many avenues for future exploration!

\subsection{Mixed Optics}
One can generalise the definition of $\Optic$ so that the two halves lie in different categories. Suppose $\C_L$ and $\C_R$ are categories that are acted on by a common monoidal category $\M$. Write these actions as $\actL : \M \to [\C_L, \C_L]$ and $\actR : \M \to [\C_R, \C_R]$ respectively.
\begin{definition}
  Given two objects of $\C_L \times \C_R^\op$, say $(S, S')$ and $(A, A')$, a \emph{mixed optic} $p : (S, S') \hto (A, A')$ for $\actL$ and $\actR$ is an element of the set
  \begin{align*}
    \Optic_{\actL, \actR}((S, S'), (A, A')) := \int^{M \in \M} \C_L(S, M \actL A) \times \C_R(M \actR A', S')
  \end{align*}
\end{definition}
$\Optic_{\actL, \actR}$ forms a category. It is not so clear what notion of lawfulness is appropriate in this setting.

Examples of mixed optics include the \emph{degenerate optics} of the \lenslib{} library: \mintinline{haskell}{Getter}s, \mintinline{haskell}{Review}s and \mintinline{haskell}{Fold}s. The mixed optic formalism also appears able to capture \emph{indexed optics} such as \mintinline{haskell}{IndexedLens}es and \mintinline{haskell}{IndexedTraversal}s~\cite{ProfunctorOpticsPost}.

\subsection{Monotonic Lenses}
In the bidirectional transformation community, the $\fput\fput$ law is often considered too strong. In particular we have seen that in $\Set$, together with the other laws, it implies that $\fget$ must be a projection from a product. To overcome this we work in $\Cat$, so that the objects under consideration have internal morphisms that we think of as updates. We modify $\fput$ so that, instead of accepting an object $a$ of $A$ with which to overwrite the original in $S$, it requires a morphism in $A$ of the form $\fget(s) \to a$. In this way we are restricted in what updates we may perform. This is captured in the following definition:
\begin{definition}[{\cite[Definition 4.1]{LensesFibrationsAndUniversalTranslations}}]
  A \emph{c-lens} $S \hto A$ in $\Cat$ is a pair of functors
  \begin{align*}
    \fget &: S \to A \\
    \fput &: (\fget \downarrow \id_{A}) \to S
  \end{align*}
  such that a version of the three lens laws holds, where $(\fget \downarrow \id_{A})$ denotes the comma category construction.
\end{definition}
We can rewrite this in a form that gives hope for a correspondence with some optic category:
%\todo{I don't know if this is known:}
\begin{theorem}
  The data of a c-lens $S \hto A$ corresponds to a functor
  \[ S \to \int [(- / A), S] \]
  where $(-/A)$ denotes the slice category and $\int$ denotes (confusingly!) the Grothendieck construction. Furthermore, a c-lens is lawful iff it is a coalgebra for the comonad of the adjunction
  \[
  \begin{tikzcd}[column sep = large]
    {[A^\op, \Cat]} \ar[r, bend left, "\int"] \ar[r, phantom, "\bot" pos = 0.4] & \Cat \ar[l, bend left, "{X \mapsto [(-/A), X]}" below]
  \end{tikzcd}
  \]
  \qed
\end{theorem}
It is not clear whether there is an action on $\Cat$ that generates this description as its concrete optics. There doesn't seem to be a natural place for an $A'$ to appear! We remain optimistic:
\begin{conjecture}
  c-lenses are the lawful (possibly mixed) optics for some action on $\Cat$.
\end{conjecture}

\subsection{Functor and Monad Transformer Lenses}
These were considered by Edward Kmett~\cite{MonadTransformerLensesTalk} as a method for embedding pieces of a monad transformer stack into the whole.
There is some debate about the correct categorical description of monad transformers~\cite{MonadTransformersAsMonoidTransformers, CalculatingMonadTransformersCategoryTheory}, so we do not attempt to say anything precise, but the perspective given here could help in a couple of ways.

Kmett considers optics for the operation of composing two monad transformers. The primary test-case was to embed \mintinline{haskell}{ReaderT} actions into \mintinline{haskell}{StateT} actions, but from the constant-complement perspective, this is impossible: \mintinline{haskell}{StateT} does not factor as the composite of \mintinline{haskell}{ReaderT} with some other monad transformer. In this setting the constant-complement laws may be asking too much; the optic laws given here might be the correct notion of lawfulness for monad transformers.

Also, instead of considering optics within a category of monad transformers, we could instead look at optics for the action of monad transformers on monads. One can indeed define an optic $\mintinline{haskell}{State} \hto \mintinline{haskell}{Reader}$ that uses residual $\mintinline{haskell}{StateT}$. Whether this is lawful or useful is not clear!

\subsection{Learners}
A recent paper in applied category theory~\cite{BackpropAsFunctor} describes a compositional approach to machine learning, with a category whose morphisms describe learning algorithms.
\begin{definition}[{\cite[Definition 2.1]{BackpropAsFunctor}}]
  For $A$ and $B$ sets, a \emph{learner} $A \hto B$ is a tuple $(P, I, U, r)$ where $P$ is a set, and $I$, $U$, and $r$ are functions of shape:
  \begin{align*}
    I &: P \times A \to B \\
    U &: P \times A \times B \to P \\
    r &: P \times A \times B \to A
  \end{align*}
\end{definition}
To form a category, one must consider learners up to an equivalence relation on the sets $P$. There is an alternate slick description of the set of learners $A \hto B$, which goes as follows. Note that the data of a learner describes an element of the coend
\begin{align*}
  \int^{P, Q \in \Set} \Set(P \times A, Q \times B) \times \Set(Q \times B, P \times A)
\end{align*}
via the isomorphisms
\begin{align*}
  &\int^{P, Q \in \Set} \Set(P \times A, Q \times B) \times \Set(Q \times B, P \times A) \\
  &\cong \int^{P, Q \in \Set} \Set(P \times A, Q) \times \Set(P \times A, B) \times \Set(Q \times B, P \times A) \\
  &\cong \int^{P \in \Set} \Set(P \times A, B) \times \Set(P \times A \times B, P \times A) \\
  &\cong \int^{P \in \Set} \Set(P \times A, B) \times \Set(P \times A \times B, P) \times \Set(P \times A \times B, A)
\end{align*}
Composition of learners can be defined analogously to composition for optics. This perspective explains the slight fussing around required in dealing with equivalence classes of learners, and suggests a generalisation to other monoidal categories.

%\section{Conclusion}
%
%\todo{Maybe some of the long equational manipulations could be done by commutative diagram, using a pair of dashed lines to signal the start and end value?}
%
%\todo{Move some proofs to an appendix?}
%
%\todo{Find an example where the lens laws are definitely not equivalent}

\bibliographystyle{alpha}
\bibliography{optics.bib}

\end{document}
{ "alphanum_fraction": 0.6537737821, "avg_line_length": 60.6120014909, "ext": "tex", "hexsha": "aef0f4efc337e61b9befcb1f2b5163eb6b412e21", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bdd520b745f78c51b90788f8e4605115b232bf81", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mvr/optics", "max_forks_repo_path": "optics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bdd520b745f78c51b90788f8e4605115b232bf81", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mvr/optics", "max_issues_repo_path": "optics.tex", "max_line_length": 1282, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bdd520b745f78c51b90788f8e4605115b232bf81", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mvr/optics", "max_stars_repo_path": "optics.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-15T09:33:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-15T09:33:24.000Z", "num_tokens": 56477, "size": 162622 }
\documentclass[12pt, letterpaper, preprint, comicneue]{aastex63}
%\usepackage[default]{comicneue} % comic sans font for editing
\usepackage[T1]{fontenc}
\input{vc}
\usepackage{color}
\usepackage{amsmath}
\usepackage{natbib}
\usepackage{ctable}
\usepackage{bm}
\usepackage[normalem]{ulem} % Added by MS for \sout -> not required for final version
\usepackage{xspace}
\usepackage{csvsimple}
\usepackage{graphicx}
\usepackage{pgfkeys, pgfsys, pgfcalendar}

% typesetting shih
\linespread{1.08} % close to 10/13 spacing
\setlength{\parindent}{1.08\baselineskip} % Bringhurst
\setlength{\parskip}{0ex}
\let\oldbibliography\thebibliography % killin' me.
\renewcommand{\thebibliography}[1]{%
  \oldbibliography{#1}%
  \setlength{\itemsep}{0pt}%
  \setlength{\parsep}{0pt}%
  \setlength{\parskip}{0pt}%
  \setlength{\bibsep}{0ex}
  \raggedright
}
\setlength{\footnotesep}{0ex} % seriously?

% citation alias

% math shih
\newcommand{\setof}[1]{\left\{{#1}\right\}}
\newcommand{\given}{\,|\,}
\newcommand{\lss}{{\small{LSS}}\xspace}
\newcommand{\Om}{\Omega_{\rm m}}
\newcommand{\Ob}{\Omega_{\rm b}}
\newcommand{\OL}{\Omega_\Lambda}
\newcommand{\smnu}{M_\nu}
\newcommand{\sig}{\sigma_8}
\newcommand{\mmin}{M_{\rm min}}
\newcommand{\BOk}{\widehat{B}_0}
\newcommand{\hmpc}{\,h/\mathrm{Mpc}}
\newcommand{\bfi}[1]{\textbf{\textit{#1}}}
\newcommand{\parti}[1]{\frac{\partial #1}{\partial \theta_i}}
\newcommand{\partj}[1]{\frac{\partial #1}{\partial \theta_j}}
\newcommand{\mpc}{{\rm Mpc}}
\newcommand{\eg}{\emph{e.g.}}
\newcommand{\ie}{\emph{i.e.}}

% cmds for this paper
\newcommand{\gr}{g{-}r}
\newcommand{\fnuv}{FUV{-}NUV}
\newcommand{\sfr}{{\rm SFR}}
\newcommand{\ssfr}{{\rm SSFR}}
\newcommand{\mtaum}{m_{\tau,M_*}}
\newcommand{\mtaus}{m_{\tau,{\rm SSFR}}}
\newcommand{\ctau}{c_\tau}
\newcommand{\mdeltam}{m_{\delta,M_*}}
\newcommand{\mdeltas}{m_{\delta,{\rm SFR}}}
\newcommand{\cdelta}{c_\delta}
\newcommand{\eda}{EDA}
\newcommand{\specialcell}[2][c]{%
  \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}

% text shih
\newcommand{\foreign}[1]{\textsl{#1}}
\newcommand{\etal}{\foreign{et~al.}}
\newcommand{\opcit}{\foreign{Op.~cit.}}
\newcommand{\documentname}{\textsl{Article}}
\newcommand{\equationname}{equation}
\newcommand{\bitem}{\begin{itemize}}
\newcommand{\eitem}{\end{itemize}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}

%% collaborating
\newcommand{\todo}[1]{\marginpar{\color{red}TODO}{\color{red}#1}}
\definecolor{orange}{rgb}{1,0.5,0}
\newcommand{\ch}[1]{{\color{orange}{\bf CH:} #1}}

\begin{document}
\sloppy\sloppypar\frenchspacing

%\title{Measuring Unbiased Star Formation Histories: Correcting Model Imposed Priors}
\title{Mitigating Model Priors in Galaxy Spectral Energy Distribution Fitting}
\date{\texttt{DRAFT~---~\githash~---~\gitdate~---~NOT READY FOR DISTRIBUTION}}

\newcounter{affilcounter}
\author{ChangHoon Hahn}
\altaffiliation{[email protected]}
\affil{Department of Astrophysical Sciences, Princeton University, Peyton Hall, Princeton NJ 08544, USA}

\begin{abstract}
  Models for galaxy star formation histories (SFHs), both parametric and non-parametric, impose strong priors on the physical properties of galaxies. These priors significantly bias the galaxy stellar masses, star formation rates, and metallicities inferred from fitting galaxy spectral energy distributions (SEDs) and therefore impact all of the main summary statistics used to investigate galaxy populations (\eg~stellar mass function, star formation rate-density, star-forming sequence).
  In this work, we
  %demonstrate that the \cite{handley} method can correct for these biases by
  present a method that can correct for these biases by imposing uniform, or uninformative, priors on the physical properties. The method imposes a maximum-entropy transformation on the probability distributions of the SED model parameters to force the physical properties into any specified distribution. We demonstrate, using simulated galaxy spectra constructed from the IllustrisTNG hydrodynamical simulation, that with this method we can accurately recover the input SFHs with SED modeling. Lastly, we use the method to infer the SFHs of galaxies in a low-redshift, volume-complete sample from the Galaxy and Mass Assembly (GAMA) Survey. The cosmic star formation rate-density we derive from the inferred SFHs is in good agreement with direct observations.
\end{abstract}

\keywords{
  keyword1 -- keyword2 -- keyword3
}

% --- intro ---
\input{intro}
% --- methods ---
\input{maxent}
% --- results ---
\input{results}
% --- summary ---
\input{summary}

\section*{Acknowledgements}
It's a pleasure to thank Mariska Kriek, Marius Millea, Katherine Suess, Jeremy Tinker, and Rita Tojeiro.

\appendix

\bibliographystyle{mnras}
\bibliography{maxent}
\end{document}
{ "alphanum_fraction": 0.7249846532, "avg_line_length": 33.4726027397, "ext": "tex", "hexsha": "f945a7a9825a739c2f351c164a2e69ee43e62a23", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-11-07T21:17:13.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-10T15:20:26.000Z", "max_forks_repo_head_hexsha": "6a62dcee6933dd7834d9c9871c24391e6c797105", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "biprateep/provabgs", "max_forks_repo_path": "doc/paper/main.tex", "max_issues_count": 15, "max_issues_repo_head_hexsha": "6a62dcee6933dd7834d9c9871c24391e6c797105", "max_issues_repo_issues_event_max_datetime": "2021-04-07T15:34:52.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-25T05:06:26.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "biprateep/provabgs", "max_issues_repo_path": "doc/paper/main.tex", "max_line_length": 105, "max_stars_count": 11, "max_stars_repo_head_hexsha": "6a62dcee6933dd7834d9c9871c24391e6c797105", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "biprateep/provabgs", "max_stars_repo_path": "doc/paper/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-16T17:20:57.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-11T21:06:53.000Z", "num_tokens": 1544, "size": 4887 }
%% start of file `template.tex'.
%% Copyright 2006-2015 Xavier Danaux ([email protected]), 2020-2021 moderncv maintainers (github.com/moderncv).
%
% This work may be distributed and/or modified under the
% conditions of the LaTeX Project Public License version 1.3c,
% available at http://www.latex-project.org/lppl/.

\documentclass[11pt,a4paper,sans]{moderncv} % possible options include font size ('10pt', '11pt' and '12pt'), paper size ('a4paper', 'letterpaper', 'a5paper', 'legalpaper', 'executivepaper' and 'landscape') and font family ('sans' and 'roman')

% moderncv themes
\moderncvstyle{classic} % style options are 'casual' (default), 'classic', 'banking', 'oldstyle' and 'fancy'
\moderncvcolor{blue} % color options 'black', 'blue' (default), 'burgundy', 'green', 'grey', 'orange', 'purple' and 'red'
%\renewcommand{\familydefault}{\sfdefault} % to set the default font; use '\sfdefault' for the default sans serif font, '\rmdefault' for the default roman one, or any tex font name
%\nopagenumbers{} % uncomment to suppress automatic page numbering for CVs longer than one page

% character encoding
%\usepackage[utf8]{inputenc} % if you are not using xelatex or lualatex, replace by the encoding you are using
%\usepackage{CJKutf8} % if you need to use CJK to typeset your resume in Chinese, Japanese or Korean

% adjust the page margins
\usepackage[scale=0.75]{geometry}
\setlength{\footskip}{136.00005pt} % depending on the amount of information in the footer, you need to change this value. comment this line out and set it to the size given in the warning
%\setlength{\hintscolumnwidth}{3cm} % if you want to change the width of the column with the dates
%\setlength{\makecvheadnamewidth}{10cm} % for the 'classic' style, if you want to force the width allocated to your name and avoid line breaks. be careful though, the length is normally calculated to avoid any overlap with your personal info; use this at your own typographical risks...

% font loading
% for luatex and xetex, do not use inputenc and fontenc
% see https://tex.stackexchange.com/a/496643
\ifxetexorluatex
  \usepackage{fontspec}
  \usepackage{unicode-math}
  \defaultfontfeatures{Ligatures=TeX}
  \setmainfont{Latin Modern Roman}
  \setsansfont{Latin Modern Sans}
  \setmonofont{Latin Modern Mono}
  \setmathfont{Latin Modern Math}
\else
  \usepackage[utf8]{inputenc}
  \usepackage[T1]{fontenc}
  \usepackage{lmodern}
\fi

% personal data
\name{Sergey}{Malashenko}
\title{Curriculum Vitae} % optional, remove / comment the line if not wanted
\born{20 July 1984} % optional, remove / comment the line if not wanted
\address{Silkina st. 8a, app.
67}{607190, Sarov, Nizhny Novgorod Region}{Russia}% optional, remove / comment the line if not wanted; the "postcode city" and "country" arguments can be omitted or provided empty \phone[mobile]{+7~(909)~294~79~72} % optional, remove / comment the line if not wanted; the optional "type" of the phone can be "mobile" (default), "fixed" or "fax" \email{malashenko\[email protected]} % optional, remove / comment the line if not wanted % Social icons \social[linkedin]{sergey-malashenko} % optional, remove / comment the line if not wanted \social[telegram]{sergey\_malashenko} % optional, remove / comment the line if not wanted %\extrainfo{additional information} % optional, remove / comment the line if not wanted \photo[70pt][0.4pt]{pictures/DKC_0122.JPG} % optional, remove / comment the line if not wanted; '64pt' is the height the picture must be resized to, 0.4pt is the thickness of the frame around it (put it to 0pt for no frame) and 'picture' is the name of the picture file % bibliography adjustments (only useful if you make citations in your resume, or print a list of publications using BibTeX) % to show numerical labels in the bibliography (default is to show no labels) %\makeatletter\renewcommand*{\bibliographyitemlabel}{\@biblabel{\arabic{enumiv}}}\makeatother \renewcommand*{\bibliographyitemlabel}{[\arabic{enumiv}]} % to redefine the bibliography heading string ("Publications") %\renewcommand{\refname}{Articles} % bibliography with mutiple entries %\usepackage{multibib} %\newcites{book,misc}{{Books},{Others}} %---------------------------------------------------------------------------------- % content %---------------------------------------------------------------------------------- \begin{document} %\begin{CJK*}{UTF8}{gbsn} % to typeset your resume in Chinese using CJK %----- resume --------------------------------------------------------- \makecvtitle \section{Education} %\cventry{year--year}{Degree}{Institution}{City}{\textit{Grade}}{Description} % arguments 3 to 6 can be left empty %\cventry{year--year}{Degree}{Institution}{City}{\textit{Grade}}{Description} \cventry{2002--2007}{mathematician, system programmer}{Sarov State Physics Technical Institute (MEPhI)}{}{\textit{GPA -- 4.95}}{} % Arguments not required can be left empty \cventry{2011--2015}{Ph.D. 
studies (not completed)}{National Research Lobachevsky State University of Nizhni Novgorod}{}{}{}
\cventry{2020--Present}{data science and data engineering}{OZON Masters}{}{}{
\begin{itemize}
\item Machine Learning, Deep Learning
\item Mathematical Statistics and Applications
\item Numerical Linear Algebra
\item Big Data and Data Engineering
\end{itemize}
}

\section{Experience}
%\subsection{Vocational}
%\cventry{year--year}{Job title}{Employer}{City}{}{General description no longer than 1--2 lines.\newline{}
%Detailed achievements:
%\begin{itemize}
%\item Achievement 1
%\item Achievement 2 (with sub-achievements)
%  \begin{itemize}
%  \item Sub-achievement (a);
%  \item Sub-achievement (b), with sub-sub-achievements (don't do this!);
%    \begin{itemize}
%    \item Sub-sub-achievement i;
%    \item Sub-sub-achievement ii;
%    \item Sub-sub-achievement iii;
%    \end{itemize}
%  \item Sub-achievement (c);
%  \end{itemize}
%\item Achievement 3
%\item Achievement 4
%\end{itemize}}
%\cventry{year--year}{Job title}{Employer}{City}{}{Description line 1\newline{}Description line 2\newline{}Description line 3}
%\subsection{Miscellaneous}
%\cventry{year--year}{Job title}{Employer}{City}{}{Description}
\cventry{2018--Present}{Team Lead/Data Scientist}{\textsc{Erlyvideo}}{\url{https://flussonic.ru/}}{}{Under my supervision and with my participation, a license plate detection and recognition system was developed. To do this, we collected and annotated the necessary data, developed a tool for generating synthetic car license plates, and then developed the required neural network models for object detection and text recognition. We also developed a human face detection and recognition system. Both systems are in production now.}
\cventry{2015--2018}{Senior Software Engineer}{\textsc{V5Systems}}{\url{https://v5systems.us/}}{}{ Under my supervision and with my participation, a video analytics system was developed to solve the problem of detecting objects (people, cars) on embedded systems (Nvidia Jetson TX1, TX2). To do this, we collected and annotated the necessary data, then developed compact neural network models, created our own inference engine, and implemented object tracking algorithms.}
\cventry{2011--2015}{Senior Software Engineer}{\textsc{Intel}}{\url{https://intel.com/}}{}{ Participated in the development of the MOST library of geometric primitives and algorithms, which is used for building numerical grids. Added support for exact real arithmetic in the core algorithms. \newline{} Participated in the development of a Level Set Methods library. Implemented a numerical solver for the electromigration problem. }
\cventry{2007--2011}{Junior Researcher}{\textsc{RFNC-VNIIEF}}{\url{http://www.vniief.ru/}}{}{ Participated in the development of a numerical solver for the gas dynamics and heat transfer equations. Performed parallelization of the numerical core and service algorithms using OpenMP and MPI. \newline{} Participated in a joint project with OKBM Afrikantov. Applied similarity theory to the theoretical study of the problem and performed numerical experiments that proved the applicability of some turbulence models for describing the problem. }
\cventry{2007--2008}{Software Engineer}{CJSC INKOMET}{}{}{ Implemented a software package for the thermal measurement module.
Performed numerical experiments on the equipment of OJSC NLMK.} \section{Languages} \cvitemwithcomment{Russian}{Perfect}{} \cvitemwithcomment{English}{Intermediate}{} \section{Skills} \cvitem{Math background}{Machine learning, Deep learning, Neural networks, Finite element method, Finite volume method, Systems of partial differential equations (Navier–Stokes, Maxwell), Level Set Methods} \cvitem{Programming languages}{\textsc{C/C++}, \textsc{Python}, \textsc{Bash}, \textsc{Lua}, \LaTeX} %\cvdoubleitem{category 1}{XXX, YYY, ZZZ}{category 4}{XXX, YYY, ZZZ} %\cvdoubleitem{category 2}{XXX, YYY, ZZZ}{category 5}{XXX, YYY, ZZZ} %\cvdoubleitem{category 3}{XXX, YYY, ZZZ}{category 6}{XXX, YYY, ZZZ} %\section{Skill matrix} %\cvitem{Skill matrix}{Alternatively, provide a skill matrix to show off your skills} %% Skill matrix as an alternative to rate one's skills, computer or other. %% Adjusts width of skill matrix columns. %% Usage \setcvskillcolumns[<width>][<factor>][<exp_width>] %% <width>, <exp_width> should be lengths smaller than \textwidth, <factor> needs to be between 0 and 1. %% Examples: % \setcvskillcolumns[5em][][]% adjust first column. Same as \setcvskillcolumns[5em] % \setcvskillcolumns[][0.45][]% adjust third (skill) column. Same as \setcvskillcolumns[][0.45] % \setcvskillcolumns[][][\widthof{``Year''}]% adjust fourth (years) column. % \setcvskillcolumns[][0.45][\widthof{``Year''}]% % \setcvskillcolumns[\widthof{``Languag''}][0.48][] % \setcvskillcolumns[\widthof{``Languag''}]% %% Adjusts width of legend columns. Usage \setcvskilllegendcolumns[<width>][<factor>] %% <factor> needs to be between 0 and 1. <width> should be a length smaller than \textwidth %% Examples: % \setcvskilllegendcolumns[][0.45] % \setcvskilllegendcolumns[\widthof{``Legend''}][0.45] % \setcvskilllegendcolumns[0ex][0.46]% this is usefull for the banking style %% Add a legend if you are using \cvskill{<1-5>} command or \cvskillentry %% Usage \cvskilllegend[*][<post_padding>][<first_level>][<second_level>][<third_level>][<fourth_level>][<fifth_level>]{<name>} % \cvskilllegend % insert default legend without lines %\cvskilllegend*[1em]{}% adjust post spacing % \cvskilllegend*{Legend}% Alternatively add a description string %% adjust the legend entries for other languages, here German % \cvskilllegend[0.2em][Grundkenntnisse][Grundkenntnisse und eigene Erfahrung in Projekten][Umfangreiche Erfahrung in Projekten][Vertiefte Expertenkenntnisse][Experte\,/\,Spezialist]{Legende} %% Alternative legend style with the first three skill levels in one column %% Usage \cvskillplainlegend[*][<post_padding>][<first_level>][<second_level>][<third_level>][<fourth_level>][<fifth_level>]{<name>} % \setcvskilllegendcolumns[][0.6]% works for classic, casual, banking % \setcvskilllegendcolumns[][0.55]% works better for oldstyle and fancy % \cvskillplainlegend{} % \cvskillplainlegend[0.2em][Grundkenntnisse][Grundkenntnisse und eigene Erfahrung in Projekten][Umfangreiche Erfahrung in Projekten][Vertiefte Expertenkenntnisse][Experte/Guru]{Legende} %% Add a head of the skill matrix table with descriptions. 
%% Usage \cvskillhead[<post_padding>][<Level>][<Skill>][<Years>][<Comment>]% %\cvskillhead[-0.1em]% this inserts the standard legend in english and adjust padding %% Adjust head of the skill matrix for other languages % \cvskillhead[0.25em][Level][F\"ahigkeit][Jahre][Bemerkung] %% \cvskillentry[*][<post_padding>]{<skill_cathegory>}{<0-5>}{<skill_name>}{<years_of_experience>}{<comment>}% %% Example usages: %\cvskillentry*{Language:}{3}{Python}{2}{I'm so experienced in Python and have realised a million projects. At least.} %\cvskillentry{}{2}{Lilypond}{14}{So much sheet music! Man, I'm the best!} %\cvskillentry{}{3}{\LaTeX}{14}{Clearly I rock at \LaTeX} %\cvskillentry*{OS:}{3}{Linux}{2}{I only use Archlinux btw}% notice the use of the starred command and the optional %\cvskillentry*[1em]{Methods}{4}{SCRUM}{8}{SCRUM master for 5 years} %% \cvskill{<0-5>} command % \cvitem{\textbackslash{cvskill}:}{Skills can be visually expressed by the \textbackslash{cvskill} command, e.g. \cvskill{2}} %\section{Interests} %\cvitem{hobby 1}{Description} %\cvitem{hobby 2}{Description} %\cvitem{hobby 3}{Description} %\section{Extra 1} %\cvlistitem{Item 1} %\cvlistitem{Item 2} %\cvlistitem{Item 3. This item is particularly long and therefore normally spans over several lines. Did you notice the indentation when the line wraps?} %\section{Extra 2} %\cvlistdoubleitem{Item 1}{Item 4} %\cvlistdoubleitem{Item 2}{Item 5\cite{book2}} %\cvlistdoubleitem{Item 3}{Item 6. Like item 3 in the single column list before, this item is particularly long to wrap over several lines.} %\section{References} %\begin{cvcolumns} % \cvcolumn{Category 1}{\begin{itemize}\item Person 1\item Person 2\item Person 3\end{itemize}} % \cvcolumn{Category 2}{Amongst others:\begin{itemize}\item Person 1, and\item Person 2\end{itemize}(more upon request)} % \cvcolumn[0.5]{All the rest \& some more}{\textit{That} person, and \textbf{those} also (all available upon request).} %\end{cvcolumns} % Publications from a BibTeX file without multibib % for numerical labels: \renewcommand{\bibliographyitemlabel}{\@biblabel{\arabic{enumiv}}}% CONSIDER MERGING WITH PREAMBLE PART % to redefine the heading string ("Publications"): \renewcommand{\refname}{Articles} %\nocite{*} %\bibliographystyle{plain} %\bibliography{publications} % 'publications' is the name of a BibTeX file % Publications from a BibTeX file using the multibib package %\section{Publications} %\nocitebook{book1,book2} %\bibliographystylebook{plain} %\bibliographybook{publications} % 'publications' is the name of a BibTeX file %\nocitemisc{misc1,misc2,misc3} %\bibliographystylemisc{plain} %\bibliographymisc{publications} % 'publications' is the name of a BibTeX file \end{document} %% end of file `template.tex'.
{ "alphanum_fraction": 0.7191855698, "avg_line_length": 61.2384937238, "ext": "tex", "hexsha": "3e486ee4ac54de207afa44afb606ea98ddb61a11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1bce0d52692ee1c49e9759116ae0108513090a54", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "SergeyMalashenko/moderncv", "max_forks_repo_path": "SergeyMalashenko_CV_English.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1bce0d52692ee1c49e9759116ae0108513090a54", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "SergeyMalashenko/moderncv", "max_issues_repo_path": "SergeyMalashenko_CV_English.tex", "max_line_length": 529, "max_stars_count": null, "max_stars_repo_head_hexsha": "1bce0d52692ee1c49e9759116ae0108513090a54", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "SergeyMalashenko/moderncv", "max_stars_repo_path": "SergeyMalashenko_CV_English.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4004, "size": 14636 }
\documentclass[aspectratio=169]{beamer}
\mode<presentation>
{
  \usetheme{Madrid}%\usetheme{uzhneu-en}
  \setbeamercovered{transparent}
  %\usecolortheme{crane}
  \usefonttheme{professionalfonts}
}
\setbeamertemplate{navigation symbols}{}
%\usepackage[numbers]{natbib}
\usepackage{graphicx} % include the graphics
\graphicspath{{pictures/img/}}
\usepackage[english]{babel}
\usepackage[normal]{subfigure}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{cancel}
\usepackage{comment}
%\DeclareMathOperator{\supp}{supp}
\usepackage{mathrsfs}
\usepackage{latexsym}
\usepackage{graphicx}
\DeclareGraphicsExtensions{.pdf,.jpg}
\usepackage{subfigure}
\usepackage{hyperref}
%\hypersetup{colorlinks=true,linkcolor=blue}
\usepackage{amsfonts}
\usepackage{bm}
\usepackage{algorithm2e}
\usepackage[style=authortitle,backend=bibtex]{biblatex}
%\bibliography{biblio}
\usepackage[utf8]{inputenc}
\usepackage{tikz}
\usepackage{bbm}
\usepackage{times}
\usepackage[T1]{fontenc}
\newtheorem{induction_hyp}{Induction Hypothesis}
\newtheorem{remark}{Remark}
\newtheorem{algo}{Algorithm}
\definecolor{green}{rgb}{0.0, 0.42, 0.14}
\include{config}

\title[ADER vs DeC]{ADER and DeC: \\ arbitrarily high order (explicit)\\ methods for PDEs and ODEs}
\author[D. Torlo]{Davide Torlo}
\institute[Inria]
{Inria Bordeaux - Sud Ouest\\ Team Cardamom}
\date[]
{\small Based on: Han Veiga, M., Öffner, P. \& Torlo, D. \textit{DeC and ADER: Similarities, Differences and a Unified Framework.} J Sci Comput 87, 2 (2021). https://doi.org/10.1007/s10915-020-01397-5 }

\AtBeginSection[]
{
  \begin{frame}<beamer>
    \frametitle{Outline}
    \tableofcontents[currentsection]
  \end{frame}
}

\begin{document}

\begin{frame}
  \titlepage
\end{frame}

\begin{frame}<beamer>
  \frametitle{Outline}
  \tableofcontents % You might wish to add the option [pausesections]
\end{frame}

\section{Motivation}
\begin{frame}{Motivation: high order accurate explicit method}
We want to solve a hyperbolic PDE system for $u:\R^+\times \Omega \to \R^D$
\begin{equation}\label{eq:scalarPDE}
\partial_t u + \nabla_{\mathbf{x}} \mathcal{F}(u) =0.
\end{equation}
Or an ODE system for $\bc:\R^+\to \R^S$
\begin{equation}\label{eq:scalarODE}
\partial_t \bc + F(\bc) =0.
\end{equation}
Applications:
\begin{itemize}
\item Fluids/transport
\item Chemical/biological processes
\end{itemize}
\vspace{5mm}
How?
\begin{itemize}
\item Arbitrarily high order accurate
\item \only<3>{Explicit (if nonstiff problem)}
\end{itemize}
\only<2>{
\begin{tikzpicture}[remember picture,overlay]
\node at (current page.center) {\includegraphics[width=0.81\textwidth]{pictures/HighOrderMethods.pdf}};
\end{tikzpicture}
}
\end{frame}

\begin{frame}{Classical time integration: Runge--Kutta}
\begin{align}
&\bc^{(1)}:=\bc^n,\\
&\bc^{(k)}:=\bc^n+\Delta t \sum_{s=1}^{K} A_{ks} F\left(t^n+b_s\Delta t,\bc^{(s)}\right), \quad \text{for } k=2,\dots, K, \label{eq:RK}\\
&\bc^{n+1}:= \sum_{k=1}^K \gamma_k \bc^{(k)}.
\end{align}
\end{frame}
\begin{frame}{Classical time integration: Explicit Runge--Kutta}
\begin{align*}
&\bc^{(k)}:=\bc^n+\Delta t \sum_{s=1}^{k-1} A_{ks} F\left(t^n+b_s\Delta t,\bc^{(s)}\right), \quad \text{for } k=2,\dots, K.
\end{align*}
\begin{itemize}
\item Easy to solve
\item High orders are involved:
\begin{itemize}
\item Order conditions: system of many equations
\item Number of stages $K\geq d$, the order of accuracy (e.g.
RK44, RK65)
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Classical time integration: Implicit Runge--Kutta}
\begin{align*}
&\bc^{(k)}:=\bc^n+\Delta t \sum_{s=1}^{K} A_{ks} F\left(t^n+b_s\Delta t,\bc^{(s)}\right), \quad \text{for } k=2,\dots, K.
\end{align*}
\begin{itemize}
\item More complicated to solve for nonlinear systems
\item High orders easily done:
\begin{itemize}
\item Take a high order quadrature rule on $[t^n,t^{n+1}]$
\item Compute the coefficients accordingly; see Gauss--Legendre or Gauss--Lobatto polynomials
\item Order up to $d=2K-1$
\end{itemize}
\end{itemize}
\end{frame}

\begin{frame}{ADER and DeC}
Two iterative explicit arbitrarily high order accurate methods.
\begin{itemize}
\item ADER\footnote{M. Dumbser, D. S. Balsara, E. F. Toro, and C.-D. Munz. A unified framework for the construction of one-step finite volume and discontinuous Galerkin schemes on unstructured meshes. Journal of Computational Physics, 227(18):8209–8253, 2008.} for hyperbolic PDEs, after an earlier, more complicated, analytic approach.
\item Deferred Correction (DeC): introduced for explicit ODEs\footnote{A. Dutt, L. Greengard, and V. Rokhlin. Spectral Deferred Correction Methods for Ordinary Differential Equations. BIT Numerical Mathematics, 40(2):241–266, 2000.}, extended to implicit ODEs\footnote{M. L. Minion. Semi-implicit spectral deferred correction methods for ordinary differential equations. Commun. Math. Sci., 1(3):471–500, 09 2003.} and to hyperbolic PDEs\footnote{R. Abgrall. High order schemes for hyperbolic problems using globally continuous approximation and avoiding mass matrices. Journal of Scientific Computing, 73(2):461–494, Dec 2017.}.
\end{itemize}
\end{frame}

\section{DeC}
\begin{frame}{DeC high order time discretization: $\L^2$}
\begin{minipage}{0.65\textwidth}
High order in time: we discretize our variable on $[t^n, t^{n+1}]$ in $M$ substeps ($\bc^{m}$).
\begin{equation*}
\partial_t\bc + F(\bc(t))=0.
\end{equation*}
Thanks to the Picard–Lindelöf theorem, we can rewrite
\begin{equation*}
\bc^{m}=\bc^0 -\int_{t^0}^{t^m} F(\bc(t))dt,
\end{equation*}
and if we want to reach order $r+1$ we need $M=r$.
\end{minipage}
\begin{minipage}{0.32\textwidth}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw [thick] (0,0) -- (0,6) node [right=2mm]{};
% nodes
\fill[black]
(0,0) circle (1mm) node[right=2mm] {$t^n=t^0$} node[left=2mm] {$\bc^{n}=\bc^0$}
(0,1) circle (0.7mm) node[right=2mm] {$t^1$} node[left=2mm] {$\bc^1$}
(0,2) circle (0.7mm) node[right=2mm] {}
(0,3) circle (0.7mm) node[right=2mm] {$t^m$ } node[left=2mm] {$\bc^m$}
(0,4) circle (0.7mm) node[right=2mm] {}
(0,5) circle (0.7mm) node[right=2mm] {}
(0,6) circle (1mm) node[right=2mm] {$t^M=t^{n+1}$} node[left=2mm] {$\bc^{n+1}=\bc^M$}
;
\end{tikzpicture}
\end{figure}
\end{minipage}
\end{frame}

\begin{frame}{DeC high order time discretization: $\L^2$}
\begin{minipage}{0.77\textwidth}
More precisely, we want to solve $\L^2 (\bc^{n,0},\dots,\bc^{n,M})=0$, where
{\begin{align*}
\L^2(\bc^0, \dots, \bc^M) &
\only<1>{=
\begin{pmatrix}
\bc^M-\bc^0 -\sum_{r=0}^M \int_{t^0}^{t^M} F(\bc^r) \varphi_r(s) \diff s\\
\vdots\\
\bc^1-\bc^0 - \sum_{r=0}^M \int_{t^0}^{t^1} F(\bc^r) \varphi_r(s) \diff s
\end{pmatrix}
}
\only<2>{=
\begin{pmatrix}
\bc^M-\bc^0 -\Delta t \sum_{r=0}^M \theta_r^M F(\bc^r) \\
\vdots\\
\bc^1-\bc^0 - \Delta t \sum_{r=0}^M \theta_r^1 F(\bc^r)
\end{pmatrix}}
\end{align*}
}
\begin{itemize}
\item $\L^2=0$ is a system of $M \times S$ coupled (non)linear equations
\item $\L^2$ is an implicit method
\item Not easy to solve $\L^2(\bbc^*)=0$ directly
\item High order ($\geq M+1$), depending on points distribution
\end{itemize}
\end{minipage}\hfill
\begin{minipage}{0.2\textwidth}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw [thick] (0,0) -- (0,6) node [right=2mm]{};
% nodes
\fill[black]
(0,0) circle (1mm) node[right=2mm] {$t^{0}$} node[left=2mm] {$\bc^{0}$}
(0,1) circle (0.7mm) node[right=2mm] {$t^{1}$} node[left=2mm] {$\bc^{1}$}
(0,2) circle (0.7mm) node[right=2mm] {}
(0,3) circle (0.7mm) node[right=2mm] {$t^{m}$ } node[left=2mm] {$\bc^{m}$}
(0,4) circle (0.7mm) node[right=2mm] {}
(0,5) circle (0.7mm) node[right=2mm] {}
(0,6) circle (1mm) node[right=2mm] {$t^{M}$} node[left=2mm] {$\bc^{M}$}
;
\end{tikzpicture}
\end{figure}
\end{minipage}
\end{frame}

\begin{frame}{DeC low order time discretization: $\L^1$}
\begin{minipage}{0.77\textwidth}
Instead of solving the implicit system directly (difficult), we introduce a first order scheme $\L^1(\bc^{n,0},\dots,\bc^{n,M})$:
\begin{align*}
&\L^1(\bc^{0},\dots,\bc^{M})=
\begin{pmatrix}
\bc^M-\bc^0 -\Delta t \beta^M F(\bc^0) \\
\vdots\\
\bc^1-\bc^0 - \Delta t \beta^1 F(\bc^0)
\end{pmatrix}, \qquad \beta^m:=\frac{t^m-t^0}{t^M-t^0}
\end{align*}
\begin{itemize}
\item First order approximation
\item Explicit Euler
\item Easy to solve $\L^1(\bbc)=0$
\end{itemize}
\end{minipage}\hfill
\begin{minipage}{0.2\textwidth}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw [thick] (0,0) -- (0,6) node [right=2mm]{};
% nodes
\fill[black]
(0,0) circle (1mm) node[right=2mm] {$t^{0}$} node[left=2mm] {$\bc^{0}$}
(0,1) circle (0.7mm) node[right=2mm] {$t^{1}$} node[left=2mm] {$\bc^{1}$}
(0,2) circle (0.7mm) node[right=2mm] {}
(0,3) circle (0.7mm) node[right=2mm] {$t^{m}$ } node[left=2mm] {$\bc^{m}$}
(0,4) circle (0.7mm) node[right=2mm] {}
(0,5) circle (0.7mm) node[right=2mm] {}
(0,6) circle (1mm) node[right=2mm] {$t^{M}$} node[left=2mm] {$\bc^{M}$}
;
\end{tikzpicture}
\end{figure}
\end{minipage}
\end{frame}

\begin{comment}
\begin{frame}{DeC: Iterative process}
$K$ iterations where the iteration index is the superscript $(k)$, with $k=0,\dots, K$
\begin{enumerate}
\item Define $\bc^{(0),m}=\bc^n=\bc(t^n)$ for $m=0,\dots,M$
\item
Define $\bc^{(k),0}=\bc(t^n)$ for $k=0,\dots,K$ \item Find $\bbc^{(k)}$ as $\L^1(\bbc^{(k)})=\L^1(\bbc^{(k-1)})-\L^2(\bbc^{(k-1)})$ \item $\bc^{n+1}= \bc^{(K),M}$. \end{enumerate} \begin{theorem}[Convergence DeC] \begin{itemize} \item If $\L^1$ coercive with constant $C_1$ \item If $\L^1-\L^2$ Lipschitz with constant $C_2 \Delta t$ \end{itemize} Then $\lVert \bbc^{(k)}-\bbc^*\rVert \leq C\Delta t^k$ \end{theorem} Hence, choosing $K=M+1$, then $\lVert \bc^{(K),M}-\bc^{ex}(t^{n+1})\rVert \leq C\Delta t ^K$ \end{frame} \end{comment} \begin{frame}{Deferred Correction\footnote{A. Dutt, L. Greengard, and V. Rokhlin. BIT Numerical Mathematics, 40(2):241–266, 2000.}} How to combine two methods keeping the accuracy of the second and the stability and simplicity of the first one? \begin{minipage}{0.58\textwidth} \begin{equation*}\label{DeC_method} \begin{split} &\bc^{(k),0}:=\bc(t^n), \quad k=0,\dots, K,\\ &\bc^{(0),m}:=\bc(t^n),\quad m=1,\dots, M,\\ &\L^1(\bbc^{(k)})=\L^1(\bbc^{(k-1)})-\L^2(\bbc^{(k-1)})\text{ with }k=1,\dots,K. \end{split} \end{equation*}\vspace{-4mm} \begin{theorem}[Convergence DeC] \begin{itemize} \item $\L^2(\bbc^*)=0$ \item If $\L^1$ coercive with constant $C_1$ \item If $\L^1-\L^2$ Lipschitz with constant $C_2 \Delta t$ \end{itemize} Then $\lVert \bbc^{(K)}-\bbc^*\rVert \leq C(\Delta t)^K$ \end{theorem} \end{minipage} \hfill \begin{minipage}{0.4\textwidth} \begin{itemize} { \item $\mathcal{L}^1(\bbc)=0$, first order accuracy, easily invertible. \item $\mathcal{L}^2(\bbc)=0$, high order $M+1$. } \end{itemize} \begin{tikzpicture} \tikzset{dot/.style={fill=black,circle}} \foreach\l[count=\y] in {0,1,2,M} { \draw (1,\y) -- (3,\y); \draw[dashed] (3,\y) -- (5,\y); \node at (0.6,\y){$t^{\l}$}; \foreach\z[count=\x] in {0,1,2,k,K} { \only<\x>{\fill (\x,\y) circle (1mm) node[anchor=south west] {$\!\bc^{(\z),\l}$};} } } \foreach\l[count=\x] in {0,1,2,k,K} { \draw (\x,1) -- (\x,3); \draw[dashed] (\x,3) -- (\x,4); \node at (\x,0.5){$\l$}; } \end{tikzpicture} \end{minipage} \end{frame} %\begin{frame}{Deferred Correction\footnote{A. Dutt, L. Greengard, and V. Rokhlin. Spectral Deferred Correction Methods % for Ordinary Differential Equations. BIT Numerical Mathematics, 40(2):241–266, % 2000.}} % How to combine two methods keeping the accuracy of the second and the stability and simplicity of the first one?\\ % \begin{itemize} % { % \item $\mathcal{L}^1(f^{n+1},f^n)=0$, first order accuracy, easily invertible (IMEX). % \item $\mathcal{L}^2(f^{n+1},f^n)=0$, high order $r$ (>1), not directly solvable. % } % \end{itemize} % \pause % \begin{algo}[DeC method] % \begin{itemize} % \item $\mathcal{L}^1(f^{(1)},f^n)=0$, prediction $f^{(1)}$. % \item For $j=2,\dots,K$ corrections: \\ $\quad \mathcal{L}^1(f^{(j)},f^n)=\mathcal{L}^1(f^{(j-1)},f^n)-\mathcal{L}^2(f^{(j-1)},f^n).$ % \item $f^{n+1}:=f^{(K)}$. % \end{itemize} % \end{algo} % \begin{remark} % $\mathcal{L}^1$ is used implicitly and $\mathcal{L}^2$ only explicitly. % \end{remark} %\end{frame} %\begin{frame}{Deferred Correction} % \begin{theorem}[Deferred Correction convergence] % Given the DeC procedure. If % \begin{itemize} % \item $\mathcal{L}^1$ is coercive with constant $\alpha_1$ % \item $\mathcal{L}^2-\mathcal{L}^1$ is Lipschitz continuous with constant $\alpha_2 \Delta$ % \item $\exists !\, f^{*}_\Delta$ such that $\mathcal{L}^2(f^{*}_\Delta)=0$.
% \end{itemize} % Then if $\eta=\frac{\alpha_2}{\alpha_1}\Delta<1$, the deferred correction is converging to $ f^*_\Delta$ and after $K$ iterations the error is smaller than $\eta^K$ times the original error. % \end{theorem} %\end{frame} \begin{frame}{DeC -- Proof} \small \begin{proof} Let $\bbc^*$ be the solution of $\L^2(\bbc^*)=0$. We know that $\L^1(\bbc^*)=\L^1(\bbc^*)-\L^2(\bbc^*)$, so that \visible<2>{ \begin{align*} \L^1(\bbc^{(k+1)})-\L^1(\bbc^*)=&\left(\L^1(\bbc^{(k)})-\L^2(\bbc^{(k)})\right)-\left(\L^1(\bbc^*)-\L^2(\bbc^*)\right) \\ {\color{red}C_1 }||\bbc^{(k+1)}-\bbc^*||\leq & ||\L^1(\bbc^{(k+1)})-\L^1(\bbc^*)||=\\ =&||\L^1(\bbc^{(k)})-\L^2(\bbc^{(k)})-(\L^1(\bbc^*)-\L^2(\bbc^*))||\leq \\ \leq & {\color{red} C_2 \Delta t} ||\bbc^{(k)}-\bbc^*||.\\ ||\bbc^{(k+1)}-\bbc^*||\leq &\left(\frac{C_2}{C_1}\Delta t\right) ||\bbc^{(k)}-\bbc^*|| \leq \left(\frac{C_2}{C_1}\Delta t\right)^{k+1} ||\bbc^{(0)}-\bbc^*||. \end{align*} After $K$ iterations we have an error at most of $\left(\frac{C_2}{C_1}\Delta t\right)^K ||\bbc^{(0)}-\bbc^*||$. } \end{proof} \end{frame} \begin{comment} \begin{frame}{DeC: $\L^2$ operator} \begin{align*} \L^2(\bc^0, \dots, \bc^M) &= \begin{cases} \bc^M-\bc^0 -\int_{t^0}^{t^M} \I_M ( F(\bc^0),\dots,F(\bc^M))ds \\ \dots\\ \bc^1-\bc^0 - \int_{t^0}^{t^1} \I_M ( F(\bc^0),\dots,F(\bc^M))ds \end{cases}\\ &= \begin{cases} \bc^M-\bc^0 -\sum_{r=0}^M \int_{t^0}^{t^M} F(\bc^r) \varphi_r(s) \diff s\\ \dots\\ \bc^1-\bc^0 - \sum_{r=0}^M \int_{t^0}^{t^1} F(\bc^r) \varphi_r(s) \diff s \end{cases} \\ &= \begin{cases} \bc^M-\bc^0 -\Delta t \sum_{r=0}^M \theta_r^M F(\bc^r) \\ \dots\\ \bc^1-\bc^0 - \Delta t \sum_{r=0}^M \theta_r^1 F(\bc^r) \end{cases} \end{align*} \end{frame} \begin{frame}{DeC: $\L^2$ operator} Goal: find $\bbc^*=(\bc^0, \dots, \bc^m, \dots, \bc^M)^*$ : $\L^2(\bbc^*)=0$. \vspace{1cm} \begin{itemize} \item $\L^2=0$ is a system of $M \times S$ coupled (non)linear equations \item $\L^2$ is an implicit method \item Not easy to solve directly \item High order ($\geq M+1$), depending on points distribution \end{itemize} \end{frame} \begin{frame}{DeC: $\L^1$ operator} \begin{equation}\label{eq:L1} \L^1(\bc^0, \dots, \bc^M) := \begin{cases} \bc^M-\bc^0 - \beta^M \Delta t F(\bc^0) \\ \vdots\\ \bc^1- \bc^0 - \beta^1 \Delta t F(\bc^0) \end{cases} \quad\beta^m:=\frac{t^m-t^0}{t^M-t^0}. \end{equation} \begin{itemize} \item First order approximation \item Explicit Euler \item Easy to solve $\L^1(\bbc)=0$ \end{itemize} \end{frame} % \end{comment} \begin{comment} \begin{frame}{DeC -- Proof} \small \begin{proof} Let $\bbc^*$ be the solution of $\L^2(\bbc^*)=0$. We know that $\L^1(\bbc^*)=\L^1(\bbc^*)-\L^2(\bbc^*)$ and $\L^1(\bbc^{(k+1)})=\left(\L^1(\bbc^{(k)})-\L^2(\bbc^{(k)})\right)$, so that \begin{align*} C_1 ||\bbc^{(k+1)}-\bbc^*||\leq & ||\L^1(\bbc^{(k+1)})-\L^1(\bbc^*)||=\\ =&||\L^1(\bbc^{(k)})-\L^2(\bbc^{(k)})-(\L^1(\bbc^*)-\L^2(\bbc^*))||\leq \\ \leq & C_2 \Delta t ||\bbc^{(k)}-\bbc^*||.\\ ||\bbc^{(k+1)}-\bbc^*||\leq &\left(\frac{C_2}{C_1}\Delta t\right) ||\bbc^{(k)}-\bbc^*|| \leq \left(\frac{C_2}{C_1}\Delta t\right)^{k+1} ||\bbc^{(0)}-\bbc^*||. \end{align*} After $K$ iterations we have an error at most of $\eta^K\cdot ||\bbc^{(0)}-\bbc^*||$.
\end{proof} \end{frame} \end{comment} %\begin{comment} \begin{frame}{DeC: Second order example} \end{frame} \begin{frame}{DeC: Second order example} \end{frame} \begin{frame}{DeC: Second order example} \end{frame} \begin{frame}{DeC: Second order example} \end{frame} %\end{comment} \begin{frame}{Simplification of DeC for ODE} In practice, \begin{equation*} \L^1(\bbc^{(k)})= \L^1(\bbc^{(k-1)})-\L^2(\bbc^{(k-1)}),\qquad k=1,\dots, K. \end{equation*} For $m=1,\dots, M$: \begin{align*} & \bc^{(k),m}\!\!\!\! -\only<2->{\cancel}{\bc^0\!\!-\beta^m\Delta t F(\bc^{0})}- \only<3->{\cancel}{\bc^{(k-1),m}} \!\! +\only<2->{\cancel}{\bc^0\!\!+\!\!\beta^m\Delta t F(\bc^{0})}\\ & +\only<3->{\cancel}{ \bc^{(k-1),m}}\!\!\!\!-\bc^0\!\! -\!\!\Delta t \sum_{r=0}^M\theta_r^m F(\bc^{(k-1),r}) =0 \only<4->{\\ & \bc^{(k),m} -\bc^0 -\Delta t \sum_{r=0}^M\theta_r^m F(\bc^{(k-1),r})=0.} \end{align*} \end{frame} \begin{frame}{DeC and residual distribution} Deferred Correction + Residual distribution \begin{itemize} \item Residual distribution (FV $\Rightarrow$ FE) $\Rightarrow$ High order in space \item Prediction/correction/iterations $\Rightarrow$ High order in time \item Subtimesteps $\Rightarrow$ High order in time \end{itemize} \begin{equation*}\label{oneline} \begin{split} U^{m,(k+1)}_\xi = U_\xi^{m,(k)} - |C_p|^{-1} \sum_{\E|\xi \in \E}\bigg(\int_\E \Phi_\xi \left(U^{m,(k)} - U^{n,0} \right) \dd \mathbf{x} +\Delta t \sum_{r=0}^M \theta_{r}^m \mathcal{R}_\xi^\E(U^{r,(k)}) \bigg), \end{split} \end{equation*} \begin{center} with \end{center} \begin{equation*}\label{eq:L2RDspacetime} \sum_{\xi \in {\E}} {\mathcal{R}}^{{\E}}_\xi (u) = \int_{\E} \nabla_\mathbf{x}F(u) \dd \mathbf{x}. \end{equation*} \begin{itemize} %Example: PDEs, FEM discretization \vspace{3mm} \item The $\L^2$ operator also contains the complications of the spatial discretization (e.g. mass matrix)\vspace{2mm} \item The $\L^1$ operator is further simplified, up to a first order approximation (e.g. \textbf{mass lumping}) \end{itemize} \end{frame} \begin{frame}{$\L^1$ with mass lumping} \end{frame} \begin{frame}{Implicit simple DeC} Define $\L^1$ as \begin{align*} \L^1(\bc^{0},\dots,\bc^{M})&=\only<1>{ \begin{pmatrix} \bc^M-\bc^0 -\Delta t \beta^M F(\bc^0) \\ \vdots\\ \bc^1-\bc^0 - \Delta t \beta^1 F(\bc^0) \end{pmatrix}} \only<2>{\begin{pmatrix} \bc^M-\bc^0 -\Delta t \beta^M \left( F(\bc^0) + \partial_{\bc} F(\bc^0) (\bc^M-\bc^0) \right) \\ \vdots\\ \bc^1-\bc^0 - \Delta t \beta^1 \left( F(\bc^0) + \partial_{\bc} F(\bc^0) (\bc^1-\bc^0) \right) \end{pmatrix}\\ &=\begin{pmatrix} \bc^M-\bc^0 -\Delta t \beta^M \partial_{\bc} F(\bc^0) \bc^M \\ \vdots\\ \bc^1-\bc^0 - \Delta t \beta^1 \partial_{\bc} F(\bc^0) \bc^1 \end{pmatrix}} \end{align*} \only<2>{The last equality is exact for linear $F$; for nonlinear $F$ it is a first order approximation.} \end{frame} \begin{frame}{DeC as RK} $$\bc^{(k),m} -\bc^0 -\Delta t \sum_{r=0}^M\theta_r^m F(\bc^{(k-1),r})=0$$ \vspace{10cm} \end{frame} \begin{frame}{DeC as RK} \end{frame} \begin{frame}{DeC as RK} We can write DeC as an RK method by defining $\vec{\theta}_0 = \lbrace\theta_0^m\rbrace_{m=1}^M$, the matrix $\mat{\tilde{\theta}} = \lbrace\theta_r^m\rbrace_{m,r=1}^M$, and the vector $\vec{\theta}_r^{M,T} =(\theta_1^M, \dots, \theta_M^M )$, i.e., the weights $\theta_r^M$ with $r = 1, \dots, M$.
The Butcher tableau for an arbitrarily high order DeC approach is given by: \begin{equation}\label{eq:DeC_RK} \begin{aligned} \begin{array}{c|cccccccc} 0 & 0 & & & & & & & \\ \vec{\beta} & \vec{\beta} & & & & & & & \\ \vec{\beta} & \vec{\theta}_0 & \mat{\tilde{\theta}} & & & & & &\\ \vdots & \vec{\theta}_0 & \mat{0} &\mat{\tilde{\theta}} && & & &\\ \vdots & \vec{\theta}_0 & \mat{0} & \mat{0} & \mat{\tilde{\theta}} & & & &\\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots& & & \\ \vec{\beta} & \vec{\theta}_0 & \mat{0} & \dots & \dots & \mat{0} & \mat{\tilde{\theta}} & \\ \hline & \theta_0^M & \vec{0}^T & \dots & & \dots & \vec{0}^T& \vec{\theta}_r^{M,T} \end{array}. \end{aligned} \end{equation} \end{frame} \begin{frame}{CODE} \begin{itemize} \item Choice of order \item Choice of point distributions $t^0, \dots, t^M$ \item Computation of $\theta$ \item Loop for timesteps \item Loop for corrections \item Loop for subtimesteps \end{itemize} A minimal sketch follows on the next slide. \end{frame}
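\begin{frame}[fragile]{CODE: a minimal DeC sketch}
A minimal NumPy sketch of the outline above (illustrative, not a reference implementation): equispaced subtimesteps are assumed, the $\theta_r^m$ are computed by exact integration of the Lagrange basis, and all function names are ours.
\begin{lstlisting}[language=Python, basicstyle=\scriptsize\ttfamily]
import numpy as np

def theta_coefficients(M):
    # theta[m, r] = int_0^{t^m} phi_r(s) ds on [0, 1], with equispaced
    # subtimesteps t^m = m/M and phi_r the Lagrange basis.
    ts = np.linspace(0.0, 1.0, M + 1)
    theta = np.zeros((M + 1, M + 1))
    for r in range(M + 1):
        phi = np.poly1d([1.0])
        for j in range(M + 1):
            if j != r:
                phi *= np.poly1d([1.0, -ts[j]]) / (ts[r] - ts[j])
        Phi = phi.integ()                 # exact antiderivative
        theta[:, r] = Phi(ts) - Phi(ts[0])
    return theta

def dec_step(F, c0, dt, M, K):
    # One DeC step from t^n to t^n + dt: K corrections on M subtimesteps.
    theta = theta_coefficients(M)
    c = np.tile(c0, (M + 1, 1))           # prediction: c^{(0),m} = c^n
    for _ in range(K):                    # correction loop
        Fc = np.array([F(cm) for cm in c])
        c = c0 + dt * theta @ Fc          # c^{(k),m} = c^0 + dt sum_r theta_r^m F(c^{(k-1),r})
    return c[-1]

# e.g. the scalar nonlinear test y' = -|y| y:
# y1 = dec_step(lambda y: -np.abs(y) * y, np.array([1.0]), 0.1, M=4, K=5)
\end{lstlisting}
\end{frame}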
\section{ADER} \begin{frame}{ADER} \begin{minipage}{0.43\textwidth} \begin{itemize} \item Cauchy–Kovalevskaya theorem \item Modern automatic version \item Space/time DG \item Prediction/Correction \item Fixed-point iteration process \end{itemize} \end{minipage}\hfill \begin{minipage}{0.55\textwidth} The modern approach is DG in space-time for hyperbolic problems \begin{equation} \label{eq:pde} \partial_t u(x,t) + \nabla \cdot F(u(x,t)) = 0, \, x\in \Omega\subset \R^d,\; t>0. \end{equation} \end{minipage} Prediction: iterative procedure \begin{equation*} \int_{\STC}\!\!\!\!\!\!\!\! \theta_{rs}(x,t)\partial_t \theta_{pq}(x,t) z^{pq} \diff x \diff t+ \int_{\STC}\!\!\!\!\!\!\!\! \theta_{rs}(x,t) \nabla_{\mathbf{x}} \cdot F(\theta_{pq}(x,t) z^{pq}) \diff x \diff t=0. \end{equation*} Correction step: communication between cells \begin{equation*} \int_{\SC} \Phi_r\left( u(t^{n+1})-u(t^n) \right) \dd x + \int_{ \TC\times \partial \SC}\!\!\!\!\!\!\!\! \!\!\!\!\Phi_r(x) \mathcal{G}(z^{-},z^{+}) \cdot \boldsymbol{\mathrm{n}} \, \diff S\, \diff t - \int_{ \STC} \!\!\!\!\!\!\!\!\!\!\!\!\nabla_{\mathbf{x}} \Phi_r \cdot F(z) \, \diff x\, \diff t =0, \end{equation*} \end{frame} \begin{frame}{ADER: space-time discretization} Defining $\theta_{rs}(x,t) =\Phi_r(x) \phi_s(t)$ basis functions in space and time, \begin{equation} \int_{\STC}\!\!\!\!\!\! \theta_{rs}(x,t)\partial_t \theta_{pq}(x,t) u^{pq} \diff x \diff t+ \int_{\STC}\!\!\!\!\!\! \theta_{rs}(x,t) \nabla \cdot F(\theta_{pq}(x,t) u^{pq}) \diff x \diff t=0.\label{eq:spaceTimeDG} \end{equation} \pause This leads to \begin{equation}\label{eq:ADER_DG} \vec{\vec{\M}}_{rspq} u^{pq} = \vec{\vec{r}}(\vec{\vec{\mathbf{u}}})_{rs}, \end{equation} solved with a fixed-point iteration method, plus a correction step where communication between cells is allowed (derived from \eqref{eq:spaceTimeDG}). \end{frame} \begin{frame}{ADER: time integration method} Simplify! Take $\bc(t) = \sum_{m=0}^M \phi_m(t) \bc^m = \bphi(t)^T\bbc$ \begin{align*}\label{eq:ADERODEL2} &\int_{T^n} \psi(t)\partial_t \bc(t) dt - \int_{T^n} \psi(t)F(\bc(t)) dt = 0, \quad \forall \psi: T^n=[t^n,t^{n+1}]\to \R. \\ &\L^2(\bbc ):= \int_{T^n} \bphi(t) \partial_t \bphi(t)^T \bbc dt - \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt = 0\\ &\bphi(t) = \left( \phi_0(t), \dots, \phi_M(t) \right)^T \end{align*}\\ Quadrature\dots \begin{equation}\label{fix:point} \L^2(\bbc):=\M\bbc-\vec{r}(\bbc)=0 \Longleftrightarrow \M \bbc = \vec{r}(\bbc) . \end{equation} Nonlinear system of $M \times S$ equations \end{frame} \begin{frame}{ADER: Mass matrix} What goes into the mass matrix? Use integration by parts: \begin{align*} \L^2(\bbc ):=& \int_{T^n} \bphi(t) \partial_t \bphi(t)^T \bbc dt - \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt =\\ &{\color{red}\bphi(t^{n+1}) \bphi(t^{n+1})^T \bbc} - \bphi(t^{n}) \bc^n - {\color{red}\int_{T^n} \partial_t \bphi(t) \bphi(t)^T \bbc \, dt} - \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt \end{align*} $$ \M = \bphi(t^{n+1}) \bphi(t^{n+1})^T -\int_{T^n} \partial_t \bphi(t) \bphi(t)^T $$ $$ \vec{r}(\bbc) = \bphi(t^{n}) \bc^n + \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt $$ $$ \M \bbc = \vec{r}(\bbc) $$ \end{frame} \begin{frame}{ADER: Fixed point iteration} Iterative procedure to solve the problem at each time step: \begin{equation}\label{fix:point:iter} \bbc^{(k)}=\M^{-1}\vec{r}(\bbc^{(k-1)}),\quad k=1,2,\dots \text{ until convergence,} \end{equation} with $\bbc^{(0)}=\bc(t^n)$. Reconstruction step \begin{equation*} \bc(t^{n+1}) = \bc(t^{n}) + \int_{T^n} F(\bc^{(K)}(t)) dt. \end{equation*} \begin{itemize} \item Convergence? \item How many steps $K$? \end{itemize} \end{frame} \begin{frame}{ADER 2nd order} Example with 2 Gauss--Legendre points and 2 iterations. Let us consider the timestep interval $[t^n,t^{n+1}]$, rescaled to $[0,1]$. Gauss--Legendre points are used for quadrature and interpolation (in the interval $[0,1]$): \[\ww{t} = \left( t^0, t^1 \right) = \left( \frac{\sqrt{3}-1}{2\sqrt{3}}, \frac{\sqrt{3}+1}{2\sqrt{3}} \right), \quad \ww{w} = \left(1/2,1/2\right), \] \[\ww{\phi}(t) = \left( \phi_0(t), \phi_1(t) \right) = \left( \frac{t-t^1}{t^0-t^1}, \frac{t-t^0}{t^1-t^0} \right). \] Then, the mass matrix is given by \[\M_{m,l} = \phi_m(1)\phi_l(1) - \phi'_m(t^l) w_l, \quad m, l = 0,1,\] \[ \M = \begin{pmatrix} 1 & \frac{\sqrt{3}-1}{2} \\ -\frac{\sqrt{3}+1}{2} & 1 \end{pmatrix}.\] \end{frame} \begin{frame}{ADER 2nd order} The right hand side is given by \[ r(\bbc)_m = \alpha(0) \phi_m(0) + \Delta t F(\alpha(t^m)) w_m, \quad m=0,1, \] where $\alpha(t) = \vec{\phi}(t)^T \bbc$ denotes the time reconstruction, so that $\alpha(0)=\bc(t^n)$, i.e., \[ \vec{r}(\bbc) = \alpha(0)\vec{\phi}(0) +\Delta t \begin{pmatrix} F(\alpha(t^0)) w_0 \\ F(\alpha(t^1)) w_1 \end{pmatrix}.\] Then, the coefficients $\bbc$ are given by \begin{align*} \bbc^{(k+1)} &= \M^{-1} \vec{r}( \bbc^{(k)} ). \end{align*} Finally, use $\bbc^{(k+1)}$ to reconstruct the solution at the time step $t^{n+1}$: \begin{align*} \alpha^{n+1} &= \vec{\phi}(1)^T \bbc^{(k+1)}. \end{align*} \end{frame} \begin{frame}{CODE} \begin{itemize} \item Precompute $\M$ \item Precompute the rhs vector part using quadratures after a further approximation $$\vec{r}(\bbc) = \bphi(t^{n}) \bc^n + \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt \approx \bphi(t^{n}) \bc^n + \underbrace{\int_{T^n} \bphi(t)\bphi(t)^Tdt}_{\text{Can be stored}} F(\bbc) $$ \item Precompute the reconstruction coefficients $\bphi(1)^T$ \end{itemize} A minimal sketch follows on the next slide. \end{frame}
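\begin{frame}[fragile]{CODE: a minimal ADER sketch}
A minimal NumPy sketch of the outline above (illustrative assumptions: Gauss--Legendre nodes on $[0,1]$, Lagrange bases built as \texttt{numpy.poly1d} objects, a fixed number $K$ of fixed-point iterations; function names are ours).
\begin{lstlisting}[language=Python, basicstyle=\scriptsize\ttfamily]
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_basis(nodes):
    # phi_r as exact polynomials on the given interpolation nodes
    polys = []
    for r, tr in enumerate(nodes):
        p = np.poly1d([1.0])
        for j, tj in enumerate(nodes):
            if j != r:
                p *= np.poly1d([1.0, -tj]) / (tr - tj)
        polys.append(p)
    return polys

def ader_step(F, c0, dt, M, K):
    x, w = leggauss(M + 1)                  # Gauss-Legendre on [-1, 1]
    t, w = 0.5 * (x + 1.0), 0.5 * w         # mapped to [0, 1]
    phi = lagrange_basis(t)
    phi0 = np.array([p(0.0) for p in phi])  # phi_m(0)
    phi1 = np.array([p(1.0) for p in phi])  # phi_m(1)
    # M_{ml} = phi_m(1) phi_l(1) - phi_m'(t^l) w_l  (nodal quadrature)
    Mass = np.outer(phi1, phi1) - np.array(
        [[phi[m].deriv()(t[l]) * w[l] for l in range(M + 1)]
         for m in range(M + 1)])
    Minv = np.linalg.inv(Mass)
    c = np.tile(c0, (M + 1, 1))             # initial guess c^{(0)}
    for _ in range(K):                      # fixed-point iterations
        rhs = np.outer(phi0, c0) + dt * w[:, None] * np.array([F(cm) for cm in c])
        c = Minv @ rhs                      # c^{(k)} = M^{-1} r(c^{(k-1)})
    return phi1 @ c                         # reconstruction at t^{n+1}
\end{lstlisting}
\end{frame}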
\section{Similarities} \begin{frame}{ADER\footnote{M. Dumbser, D. S. Balsara, E. F. Toro, and C.-D. Munz. A unified framework for the construction of one-step finite volume and discontinuous galerkin schemes on unstructured meshes. Journal of Computational Physics, 227(18):8209–8253, 2008.} and DeC\footnote{R. Abgrall. High order schemes for hyperbolic problems using globally continuous approximation and avoiding mass matrices. Journal of Scientific Computing, 73(2):461–494, Dec 2017.}: immediate similarities} \begin{itemize} \item High order time-space discretization \item Start from a well known space discretization (FE/DG/FV) \item FE reconstruction in time \item System in time, with $M$ equations \item Iterative method / $K$ corrections \end{itemize} \pause \begin{itemize} \item Both are high order explicit time integration methods (neglecting spatial discretization) \end{itemize} \end{frame} \begin{frame}{ADER as DeC} \end{frame} \begin{frame}{ADER as DeC} \end{frame} \begin{frame}{ADER as DeC} \begin{align*} & \L^2(\bbc):=\M\bbc-r(\bbc),\\ & \L^1(\bbc):= \M\bbc-r(\bc(t^n)). \end{align*} \begin{equation*} \L^1(\bbc^{(k)})= \L^1(\bbc^{(k-1)})-\L^2(\bbc^{(k-1)}),\qquad k=1,\dots, K, \end{equation*} \begin{align*} & \M \bbc^{(k)} -\only<2->{\cancel}{r(\bc^{(k),0})}- \only<3->{\cancel}{\M \bbc^{(k-1)}} +\only<2->{\cancel}{r(\bc^{(k-1),0})} +\only<3->{\cancel}{\M \bbc^{(k-1)}} -r(\bbc^{(k-1)}) =0 \only<4>{\\ & \M \bbc^{(k)} -r(\bbc^{(k-1)}) =0.} \end{align*} \end{frame} \begin{frame}{ADER as DeC} \begin{align*} & \L^2(\bbc):=\M\bbc-r(\bbc),\\ & \L^1(\bbc):= \M\bbc-r(\bc(t^n)). \end{align*} Apply the DeC Convergence theorem! \begin{itemize} \item $\L^1$ is coercive because $\M$ is always invertible \item $\L^1-\L^2$ is Lipschitz with constant $C\Delta t$ because they are consistent approximations of the same problem \item Hence, after $K$ iterations we obtain a $K$th order accurate approximation of $\bbc^*$ \end{itemize} \end{frame} \begin{frame}{DeC as ADER} \begin{align*} \L^2(\bc^0, \dots, \bc^M) &:= \begin{cases} \bc^M-\bc^0 -\sum_{r=0}^M \int_{t^0}^{t^M} F(\bc^r) \varphi_r(s) \diff s\\ \dots\\ \bc^1-\bc^0 - \sum_{r=0}^M \int_{t^0}^{t^1} F(\bc^r) \varphi_r(s) \diff s \end{cases}. \end{align*} \vspace{10cm} \end{frame} \begin{frame}{DeC as ADER} \end{frame} \begin{frame}{DeC as ADER} \end{frame} \begin{frame}{DeC as ADER} \begin{align*}\label{eq:L2op} %\L^2(\bc^0, \dots, \bc^M) &:= %\begin{cases} % \bc^M-\bc^0 -\int_{t^0}^{t^M} \I_M ( F(\bc^0),\dots,F(\bc^M)) %\\ %\vdots\\ %\bc^1-\bc^0 - \int_{t^0}^{t^1} \I_M ( F(\bc^0),\dots,F(\bc^M)) %\end{cases}\\ \L^2(\bc^0, \dots, \bc^M) &:= \begin{cases} \bc^M-\bc^0 -\sum_{r=0}^M \int_{t^0}^{t^M} F(\bc^r) \varphi_r(s) \diff s\\ \dots\\ \bc^1-\bc^0 - \sum_{r=0}^M \int_{t^0}^{t^1} F(\bc^r) \varphi_r(s) \diff s \end{cases}. \end{align*} \pause Choosing as test function the characteristic function $\cc{m}(t)=1$ for $t\in [t^0,t^m]$ and $\cc{m}(t)=0$ otherwise, each component can be rewritten as \begin{align*} & \cc{m}(t^m)\bc^m-\cc{m}(t^0)\bc^0- \int_{t^0}^{t^m} \cc{m}(t) \sum_{r=0}^M F(\bc^r)\varphi_r(t) \diff t=0\\ &\int_{t^0}^{t^M} \cc{m}(t) \partial_t \left(\bc(t)\right) \diff t- \int_{t^0}^{t^M} \cc{m}(t) \sum_{r=0}^M F(\bc^r) \varphi_r(t) \diff t=0,\\ &\int_{T^n} \psi_{m}(t) \partial_t \bc(t) \diff t- \int_{T^n} \psi_{m}(t) F(\bc(t)) \diff t=0.
\end{align*} \end{frame} \begin{comment} \begin{frame}{DeC -- ADER} Both are \begin{itemize} \item Iterative processes (only iterations $K=d$ order of accuracy) \item Arbitrarily high order accurate \item Explicit \end{itemize} ADER as DeC iterative process \begin{itemize} \item The operators $\L^1$ and $\L^2$ can be written \item Convergence results hold \item We know in practice how many iteration $K$ \end{itemize} DeC as ADER \begin{itemize} \item $\L^2$ is the same up to the choice of basis and test functions in time \end{itemize} \end{frame} \end{comment} %\subsection{} \begin{frame}{Runge--Kutta vs DeC--ADER} \begin{minipage}{0.54\textwidth} \begin{block}{Classical Runge--Kutta (RK)} \begin{itemize} \item One step method\item Internal stages \end{itemize} %\pause Explicit Runge--Kutta \begin{itemize} \item[\color{green}+] Simple to code \item[\color{red}-] Not easily generalizable to arbitrary order \item[\color{red}-] Stages $>$ order \end{itemize} %\pause Implicit Runge--Kutta \begin{itemize} \item[\color{green}+] Arbitrarily high order \item[\color{red}-] Require nonlinear solvers for nonlinear systems \item[\color{red}-] May not converge \end{itemize} \end{block} \end{minipage} \hfill \begin{minipage}{0.41\textwidth} %\pause \begin{block}{DeC -- ADER} \begin{itemize} \item One step method\item Internal subtimesteps \item Can be rewritten as explicit RK (for ODE) \item[\color{green}+] Explicit \item[\color{green}+] Simple to code \item[\color{green}+] Iterations $=$ order \item[\color{green}+] Arbitrarily high order \item[\color{red}-] Large memory storage \end{itemize} \end{block} \end{minipage} \end{frame} \section{Simulations} \begin{frame}{A--Stability} $$y'(t) = \lambda y(t) \qquad y(0) = 1 $$ \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth, trim={30 20 30 20},clip]{stab_glb.pdf} \includegraphics[width=0.45\textwidth, trim={30 20 30 20},clip]{ader_all.pdf} \caption{Stability region} \label{fig:stab} \end{figure} \end{frame} \begin{frame}{Convergence} \begin{figure} \begin{minipage}[c]{0.55\linewidth} \begin{equation} \label{eq:scalar-nonlinear} \begin{split} &y'(t) = - |y(t)| y(t) ,\\ &y(0) = 1,\\ &t\in [0,0.1]. \end{split} \end{equation} Convergence curves for ADER and DeC, varying the approximation order and the collocation of the subtimestep nodes, for a scalar nonlinear ODE. \end{minipage} \hfill \begin{minipage}[c]{0.4\linewidth} \includegraphics[width=\linewidth]{scalar-2.png} \end{minipage}% \end{figure} \end{frame} \begin{frame}{Lotka--Volterra} %\begin{equation} %\label{eq:system-nonlinear} %\begin{split} %y_1'(t) &= \alpha y_1(t) - \beta y_1(t) y_2(t) \\ %y_2'(t) &= -\gamma y_2(t) + \delta y_1(t) y_2(t) %\end{split} %\end{equation} \begin{figure} \begin{columns} \column{.6\linewidth} \includegraphics[width=0.95\linewidth]{n100_ader.png} \includegraphics[width=0.95\linewidth]{n100_dec.png} \column{.3\linewidth} \caption{Numerical solution of the Lotka--Volterra system using ADER (top) and DeC (bottom) with Gauss--Lobatto nodes and timestep $\Delta t = 1$. \label{fig:lodka-sol-dec}} \end{columns} \end{figure} \end{frame} \begin{frame}{PDE: Burgers with spectral difference} \begin{figure} \begin{center} \begin{columns} \column{0.35\linewidth} \includegraphics[width=\linewidth,trim={0 55 0 60},clip]{burgers_temp_ader.png} \column{0.35\linewidth} \includegraphics[width=\linewidth,trim={0 55 0 60},clip]{burgers_temp_dec.png} \column{0.25\linewidth} \caption{Convergence error for the Burgers equation: left, ADER; right, DeC.
Space discretization with the spectral difference method.} \label{fig:advection-conv-dec} \end{columns} \end{center} \end{figure} \end{frame} \end{document}
{ "alphanum_fraction": 0.6208333333, "avg_line_length": 31.9540229885, "ext": "tex", "hexsha": "d06fc173cad0877f67f0c7fd16dd177af3fcaca1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d886357cd425eef902b540015276d0e49e53cef2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "accdavlo/HighOrderODESolvers", "max_forks_repo_path": "Chapter5/latexSlides/ADERDeC_chapter5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d886357cd425eef902b540015276d0e49e53cef2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "accdavlo/HighOrderODESolvers", "max_issues_repo_path": "Chapter5/latexSlides/ADERDeC_chapter5.tex", "max_line_length": 626, "max_stars_count": null, "max_stars_repo_head_hexsha": "d886357cd425eef902b540015276d0e49e53cef2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "accdavlo/HighOrderODESolvers", "max_stars_repo_path": "Chapter5/latexSlides/ADERDeC_chapter5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13887, "size": 33360 }
\chapter{Introduction} %\addcontentsline{toc}{chapter}{Introduction} \label{introduction} %\todolist{ %Turbulence in the edge and SOL %} %--------------------------------------------------------------------------------------------------------------------- The growth of the world's economy is made possible by a continuous increase in primary energy consumption. % Indeed, a substantial amount of energy is required in order to keep improving the living standards of both developing and developed economies. % As shown in \cref{fig:worldconsumption}, between 1992 and 2017, the primary energy consumption of the world increased by 70\%, from 8 billion to 13.5 billion tonnes of oil equivalent \citep{Chen2017}. % In 2016, the world consumption of energy grew by 1.3\%, and a growth of 2.2\% was recorded in 2017, the highest since 2013. % This growth is projected to continue in the coming years \citep{BP2017}. % Such growth had a direct impact on the climate, particularly through the global emissions of CO$_2$, which doubled in the period 1975-2015 and are projected to triple by 2040 \citep{Chu2017}. % The amount of energy generated from fossil fuels needs to be severely limited if the production of greenhouse gases such as CO$_2$ is to be reduced. % Therefore, there is an urgent need to outline possible paths towards sustainable energy production and consumption. % In this context, a huge effort is currently devoted to investigating the possibility of using fusion as a source of energy that can ultimately address the increasing world energy demand. \begin{figure} \centering \includegraphics[width=.85\textwidth]{images/2017_world_consumption.pdf} \caption{World primary energy consumption between 1992 and 2017 in million tonnes oil equivalent. In 2017 alone, energy consumption grew 2.2\%, with the largest increment provided by natural gas, followed by renewable power and oil. Source: \citet{BP2017}.} \label{fig:worldconsumption} \end{figure} Fusion is a form of nuclear energy, the main source of energy in the Sun and other stars. % Here, light nuclei with combined initial mass $m_i$ fuse into one or more atomic nuclei with mass $m_f$. % When $m_f<m_i$, the difference in mass between the initial and final particles is converted into released energy $E$ according to Einstein's relation % \begin{equation} E = (m_i - m_f)c^2, \label{eq:einstein} \end{equation} % \noindent where $c$ is the speed of light. % Among all possible fusion reactions that release energy, the one between deuterium and tritium (DT), % \begin{equation} ^2_1 \text{D} + ^3_1 \text{T} \rightarrow ^4_2 \text{He}+^1_0\text{n}, \label{eq:dt} \end{equation} % is considered to be the best suited reaction for the first generation of fusion devices \citep{Freidberg2007}. % This reaction yields a net energy of $17.6$ MeV that goes into the kinetic energy of the fusion products, approximately $3.5$ MeV to $^4_2 \text{He}$ and 14.1 MeV to $^1_0\text{n}$. % The kinetic energy imparted to the neutrons will be used to produce electricity. % The energy of the $^4_2 \text{He}$ will be used to heat the fresh fusion fuel and compensate for the unavoidable heat losses, keeping the reaction going. % The deuterium for the fusion process can be extracted from sea water. % On the other hand, tritium can be obtained from the reaction of the neutron with the lithium in a blanket surrounding the device. % A fusion reactor is expected not to produce long-lived radioactive waste.
% Indeed, with an appropriate choice of materials, half-lives of dozens of years can be achieved \citep{Fetter1988}. The fuel in a fusion reactor must be confined sufficiently well, at a sufficiently high temperature $T$ and density $n$, for the $^4_2 \text{He}$ energy to balance the energy losses due to radiation, conduction, and convection. % This statement can be quantified into a single constraint in terms of $T$, $n$, and confinement time, $\tau$. % The confinement time is defined as the energy content of the plasma $W$ divided by the power loss ${P_{\text{loss}}}$, $\tau = {W}/{P_{\text{loss}}}$ (with the thermal energy of the plasma $W$ given by the integral over volume of the energy density $n_a T_a$ summed over all species $a$). % Indeed, for self-sustained fusion reactors, the power loss ${P_{\text{loss}}}$ has to be compensated by the energy produced by the fusion reactions, such that $f E_{fp} \ge P_{\text{loss}}$ where $f$ is the number of fusion reactions per unit time and $E_{fp}$ the energy of the charged fusion products. % Assuming that the plasma in the reactor is composed of electrons, deuterium, and tritium with roughly the same density and temperature, and assuming that the distribution of energy of the plasma particles follows a Gaussian distribution, a minimum value for the product $n T \tau$ can be found, yielding the condition \citep{Wesson2004} % \begin{equation} \tau n T > 5 \times 10^{21}\text{~s m$^{-3}$ keV}, \label{eq:lawsoncriteria} \end{equation} % with a minimizing value of $T_{\text{min}}=15$ keV (which is in fact one order of magnitude higher than the temperature at the Sun's core, $\sim 1$ keV). % Equation (\ref{eq:lawsoncriteria}) is commonly known as Lawson's criterion. At the temperatures necessary for self-sustained fusion, the fusion fuel is fully ionized, i.e., electrons are stripped away from their atomic nuclei as the ionization energy of the plasma elements ($\sim$10 eV) is orders of magnitude below the keV range. % The resulting overall neutral gas of free electrons and ions is called a plasma. % As both electrons and ions are electrically charged, the particles in the plasma interact through electromagnetic forces. % Ultimately, the description of a plasma can be reduced to the understanding of the trajectories of its constituent particles. % This usually involves solving an extremely complex set of equations in typically non-trivial geometry settings to study the motion of charged particles in the electromagnetic fields that are both externally applied and generated by the plasma itself. Several strategies have been devised to confine the plasma in fusion conditions, with two main lines of research pursued today: inertial and magnetic confinement fusion. % In inertial confinement fusion, nuclear fusion reactions are initiated through the heating and compression of a fuel target by high-energy laser, electron, or ion beams. % With very high plasma densities ($n \sim 10^{30}$ m$^{-3}$), \cref{eq:lawsoncriteria} allows for short confinement times ($\tau \sim 10^{-9}$ s). % On the other hand, in magnetic confinement fusion, the plasma is confined by strong magnetic fields. % Magnetic confinement fusion reactors are targeted to work at considerably lower densities ($n\sim 10^{20}$ m$^{-3}$) that are, in fact, much lower than the density of air ($n \sim 10^{25}$ m$^{-3}$). % This constrains the confinement time to be at least of the order of one second, according to \cref{eq:lawsoncriteria}.
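% As a simple order-of-magnitude check of these figures, at the minimizing temperature $T_{\text{min}}=15$ keV a magnetically confined plasma with $n \sim 10^{20}$ m$^{-3}$ requires % \begin{equation*} \tau > \frac{5 \times 10^{21}~\text{s m}^{-3}~\text{keV}}{\left(10^{20}~\text{m}^{-3}\right)\left(15~\text{keV}\right)} \simeq 3~\text{s}, \end{equation*} % while an inertially confined plasma with $n \sim 10^{30}$ m$^{-3}$ requires only $\tau \gtrsim 3 \times 10^{-10}$ s, consistent with the confinement times quoted above.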
% The present thesis focuses on magnetic confinement fusion. The magnetic field $\mathbf B=B \mathbf b$ necessary to ensure plasma equilibrium in magnetic fusion devices can be derived from the force balance equation \citep{Freidberg2007} % \begin{equation} \mathbf J \times \mathbf B = \nabla P, \label{eq:forcebalanceMHD} \end{equation} % where $\mathbf J$ is the plasma current, related to the magnetic field by Ampère's law % \begin{equation} \nabla \times \mathbf B = \mu_0 \mathbf J, \end{equation} % and $P$ is the plasma pressure. % The force balance equation, \cref{eq:forcebalanceMHD}, is derived from the magnetohydrodynamics (MHD) equation of motion in the steady state limit without flows, and it essentially provides the amount of current necessary to magnetically confine a plasma with finite pressure. % From \cref{eq:forcebalanceMHD}, we see that the vectors $\mathbf B$ and $\mathbf J$ should lie on surfaces of constant pressure, as $\mathbf B \cdot \nabla P = \mathbf J \cdot \nabla P = 0$. % This statement, combined with the fact that, according to Poincaré's theorem, a compact surface which is everywhere tangential to a non-vanishing vector field free of singularities must have the topology of a torus \citep{Helander2014}, shows that surfaces of constant pressure in a magnetically confined plasma must have a toroidal geometry, and that the field lines of $\mathbf B$ and $\mathbf J$ wind around the torus (see \cref{fig:toroidaldevice}). % There are three ways to twist the magnetic field lines around a torus: by driving an electric current through the plasma, by rotating the poloidal cross-section of the magnetic flux surfaces along the toroidal direction, or by making the magnetic axis not lie in a plane (this is called magnetic torsion) \citep{Mercier1964,Helander2014}. % Currently, the magnetic confinement fusion device that has achieved the highest confinement times, and that is the most advanced both theoretically and experimentally, is the tokamak (\cref{fig:toroidaldevice} a). % In tokamaks, magnetic field line twisting is provided by means of a plasma current only. % This contrasts with stellarators, which usually rely on a combination of both rotation of the flux surfaces' poloidal cross-section and torsion of the magnetic axis. \begin{figure} \centering \includegraphics[width=.99\textwidth]{images/tokamak_stellarator.jpg} \caption{Schematics of two magnetic confinement fusion designs: tokamaks (a) and stellarators (b). The twist in magnetic field lines in the tokamak is driven by a current generated in the plasma, while in the stellarator, a plasma current is not needed as magnetic field lines are twisted entirely by external non-axisymmetric coils. Source: \citep{Xu2016}.} \label{fig:toroidaldevice} \end{figure} \section{The Tokamak Device} In a tokamak, the plasma is confined by means of a magnetic field inside a toroidal chamber, as shown in \cref{fig:toroidaldevice} (a). % The largest tokamak in operation today is JET, where the highest ratio $Q$ between the fusion power generated in the reactor and the external heating power, namely $Q \simeq 0.7$, was obtained, with a triple product $n T \tau \simeq 8 \times 10^{20}$ keV s m$^{-3}$ \citep{Jacquinot2010}. % The achievements in tokamak research paved the way to the construction of the ITER tokamak in France, expected to produce its first plasma in 2025, with the goal of obtaining $Q=10$ \citep{Aymar2002} and of showing the feasibility of using magnetic confinement fusion as a source of energy.
% A schematic diagram of the ITER fusion reactor is shown in \cref{fig:iter}. % \begin{figure} \centering \includegraphics[width=.75\textwidth]{images/iter.jpg} \caption{Schematic of the ITER (International Thermonuclear Experimental Reactor) device, including its divertor (blue), external coils (orange and green), and its D-shaped vessel. Source: iter.org} \label{fig:iter} \end{figure} The magnetic field in a tokamak is generated by a combination of coils arranged on a set of equidistant poloidal planes, creating the toroidal component of the magnetic field, and by a plasma current driven by a toroidal electric field, which is induced through transformer action by the central coils in \cref{fig:toroidaldevice} (a). % The plasma in the tokamak can be heated to temperatures of a few keV by leveraging the ohmic heating produced by the plasma current. % However, the temperatures above 10 keV that are necessary to ignite the fusion reactions are achieved by means of additional heating, using particle beams or electromagnetic waves \citep{Wesson2004}. % While such temperatures are expected to be achieved in the plasma core, the periphery region of the plasma should be substantially colder in order not to damage plasma-facing materials, to ensure a reasonable lifetime of the device, and to prevent the impurities sputtered from the solid walls from contaminating the plasma and degrading its stability and confinement properties. % Ultimately, the complex interaction between the plasma and the device can constitute a limiting factor in achieving Lawson's criterion, \cref{eq:lawsoncriteria}. % For this reason, several mechanisms to control the plasma-solid interaction are devised. % In most present tokamaks and in ITER, the flux of heat and particles is typically diverted to the bottom of the device in the divertor region (blue region in \cref{fig:iter}). % A divertor configuration of a tokamak plasma is shown in \cref{fig:plasmaboundary}, together with a typical structure of the magnetic flux surfaces that allow the removal of heat and particles through the divertor. % In this thesis, we mainly focus on the plasma periphery region, composed of the edge, where the magnetic field lines lie on flux surfaces that do not intercept the wall of the device, and the scrape-off layer (SOL), where the magnetic field lines intercept the wall of the device (see \cref{fig:plasmaboundary}). % The magnetic flux surface that defines the separation between these two regions is called the last closed flux surface, or separatrix. % \begin{figure} \centering \includegraphics[width=.35\textwidth]{images/Core_Edge_Sol.pdf} \caption{Poloidal cross-section of a tokamak plasma, divided into three regions: the innermost, hotter region (core); the outermost region with closed magnetic flux surfaces, in red (edge); and the region where magnetic field lines intercept the wall of the device, in yellow (SOL).} \label{fig:plasmaboundary} \end{figure} \section{Modelling of Plasma Dynamics at the Tokamak Periphery} \label{sec:plasmamodelling} A full understanding of the dynamics at the tokamak edge and SOL regions is essential for the successful operation of future fusion experiments and reactors, as this region is responsible for much of the overall confinement of the tokamak device \citep{Ricci2015}.
% In the edge region of magnetic fusion devices operating in a regime of improved confinement (the so-called H-mode observed in many present devices and predicted to occur in many future devices such as ITER) a pedestal develops, i.e., the profiles of density and temperature become very steep near the separatrix and a radial electric field is formed, which is thought to be responsible for the reduction of turbulence levels \citep{Wagner1984}. % The H-mode pedestal can be periodically relaxed due to Edge-Localized Modes (the so-called ELMs), yielding large amplitude bursts of particle and heat exhaust into the SOL \citep{Leonard2014}, which are a major concern on the way to fusion. % The SOL region, on the other hand, controls the plasma heat exhaust, plasma refuelling and the removal of fusion ashes, and sets the boundary between the plasma and the vessel. % Moreover, in the SOL, due to the presence of a complex magnetic geometry, typical coordinate systems used for core simulations are found to be singular. % Due to the crucial role of the tokamak periphery region on the performance of a fusion device, significant experimental and theoretical work has been devoted in the last few decades to the understanding of the fundamental mechanisms governing the dynamics of this region \citep{Loarte2007}. The dynamics of the plasma at the tokamak periphery region is observed to be strongly nonlinear. Fluctuations occur over a broad range of wavenumbers $\mathbf k\sim \nabla \log n \sim \nabla \log T$ and frequencies $\omega\sim |\partial_t \log n|\sim|\partial_t \log T|$ \citep{Scott2007}, and are strongly anisotropic, i.e., wavenumbers parallel to the magnetic field ($k_\parallel = \mathbf k \cdot \mathbf b$) are much smaller than the perpendicular ones ($\mathbf k_\perp=\mathbf k - k_\parallel \mathbf b$). % Modes present in the edge can have perpendicular wavelengths as small as the ion gyration radius $\rho_i$ ($\rho_i\sim 0.3$ cm at $T=1$ keV and $B=1$ T) and, in the SOL, the dominant turbulent modes have perpendicular wavelengths that are usually one order of magnitude or more larger than $\rho_i$ ($\rho_i\sim 0.3$ mm at $T=10$ eV and $B=1$ T) \citep{Agostini2011}. % The typical $\rho_i$ lengths at the tokamak periphery and core are indeed much smaller than the tokamak minor and major radii, $a$ and $R$, respectively, than typical magnetic field gradient lengths $L_B \sim R$, and than typical scale lengths of the fluctuations in the parallel direction $L_\parallel \sim 1/k_\parallel$. % Regarding turbulent frequencies $\omega$, these are typically much lower than the ion gyrofrequency $\Omega_i$ \citep{Hahm2009}. The gyrokinetic model is the most established one to describe tokamak turbulence in the ordering $k_\perp \rho_i \sim 1$, $\omega/\Omega_i \ll 1$ and $k_\parallel/k_\perp \ll 1$ \citep{Catto1978a,Frieman1982,Brizard2007a,Parra2008,Hahm2009}. % Gyrokinetic theory provides a rigorous framework to remove the details of the charged particle's gyromotion and other high frequency phenomena. % A variety of numerical methods have been developed to solve numerically the gyrokinetic equation, with the two main types being the continuum \citep{Jenko2001} and the particle-in-cell \citep{Lee1987} methods.
% These methods have allowed major progress in the understanding of tokamak turbulence in the core, where a low collisionality model can be used and plasma quantities can be split between fluctuating and time-averaged components, in order to evolve only the former (the so-called $\delta f$ approach) \citep{Kinsey2011}. % Among several gyrokinetic codes used to describe plasma turbulence in the tokamak core, we mention CGYRO \citep{Candy2016}, GEM \citep{Parker1999}, GENE \citep{Jenko2000a,Gorler2011}, GKV \citep{Watanabe2006}, GKW \citep{Peeters2009}, GS2 \citep{Kotschenreuther1995,Dorland2000a}, GYRO \citep{Candy2003}, GYSELA \citep{Latu2007}, and ORB5 \citep{Jolliet2007}. % However, some complications arise when applying gyrokinetic simulation techniques established for the tokamak core to the plasma periphery. % In the edge and SOL, the plasma is turbulent, with fluctuation levels of order unity, which renders conventional $\delta f$ gyrokinetic approaches unable to handle such conditions, as opposed to more computationally demanding approaches that do not separate fluctuating and time-averaged quantities, also called full-F approaches. % Furthermore, while the core is weakly collisional with temperatures of $\sim 10$ keV, the tokamak periphery is characterized by temperatures ranging from the keV range at the inner edge to a few eV in the far SOL region, with a similar order of magnitude variation for the plasma density. % The development of a gyrokinetic collision operator derived from first principles, able to handle arbitrary collisionality regimes in a turbulent setting, is still the subject of ongoing research \citep{Hirvijoki2017}. % Indeed, there are only a few recent attempts to use gyrokinetic simulations for the tokamak periphery. % Among these, we mention COGENT \citep{Dorf2013}, ELMFIRE \citep{Heikkinen2008}, G5D \citep{Kawai2017}, GKEYLL \citep{Shi2017}, TEMPEST \citep{Xu2010a}, and XGC1 \citep{Chang2009}. We remark that the effect of Coulomb collisions between charged particles is crucial to accurately predict the growth rate of instabilities occurring in magnetic confinement fusion devices and to predict the level of turbulent transport \citep{Barnes2009}. % Collisions are not only a major regulator of low-frequency turbulence and associated transport, but they also determine the steady state of the system by dictating the long term evolution of the plasma quantities. % Although several theoretical studies have emerged in order to derive an appropriate Coulomb collision operator for drift-kinetic and gyrokinetic formulations \citep{Brizard2004,Sugama2015,Burby2015}, such operators still involve a complicated nonlinear six-dimensional phase-space integral to be performed \citep{Hirvijoki2017}. % Due to constraints related to code parallelization and computational resources, a numerical implementation of such intricate formulations of the Coulomb collision operator is still out of reach.
% Such models are usually derived from the Braginskii fluid equations \citep{Braginskii1965}, where the plasma is assumed to be close to thermodynamic equilibrium because of collisions, i.e., assuming that the electron $\nu_e$ and ion $\nu_i$ collision frequencies are larger than the typical turbulent frequencies. % For L-mode cold SOL plasmas, fluid models have been successfully benchmarked against experimental results \citep{Riva2016,Militello2016}. % Moreover, in such regimes, previous studies on the plasma dynamics at the SOL region \citep{Ricci2013, Mosetto2015} have estimated key SOL parameters such as cross-field transport, plasma scale lengths, and instability thresholds through a careful combination of linear analysis of the turbulent modes and turbulent saturation mechanisms, yielding a simple physical picture of SOL turbulence as the interplay between turbulent transport and plasma losses at the vessel wall. % However, inside the separatrix, in the edge region, although turbulence is still mediated by low-frequency fluctuations, the plasma becomes hotter, less collisional, and small scale $k_\perp \rho_i \sim 1$ fluctuations become important \citep{Hahm2009}. % Also, when events such as ELMs expel large amounts of heat and particles to the SOL and to the wall, the description of such high-temperature structures requires a kinetic treatment valid at arbitrary collision frequencies, such as drift-kinetic theory \citep{Hazeltine2003}. % These ultimately require incorporating the effects of Coulomb collisions using an accurate Coulomb collision operator. We believe that a model that evolves a set of three-dimensional moments of the kinetic distribution function represents the best choice to simulate tokamak periphery plasmas in an accurate and efficient manner. % Such a framework has the inherent flexibility of providing a description that spans from fluid models, when a small number of moments is used and a coarse plasma description is sufficient, to fully kinetic models, for accurate plasma simulations. % To build this model, the plasma distribution function $f$ is expanded in a suitable set of basis functions, i.e., a set of orthogonal polynomials ensuring that the expansion coefficients converge rapidly in order to allow manageable numerical implementation and simulations with a minimum number of terms. % In this work, we show that this model, which is indeed a moment-hierarchy, formulated in terms of Hermite and Laguerre orthogonal polynomials, fulfills these requirements, and that it can be used to study the dynamics at the tokamak periphery, both in the fluid and in the gyrokinetic regime. % The use of Hermite polynomials in plasma physics can be traced back to the work of \citet{Grad1963}, which used a tensorial formulation of the Hermite polynomials, the so-called reducible Hermite polynomials [as opposed to the irreducible ones used in \citet{Balescu1988}]. % In fact, the orthogonal basis associated with a Gaussian weight consists of Hermite polynomials. % The Gaussian function is relevant for statistical and plasma physics as the long-term, stationary solution of the collisional kinetic equation is given by the Maxwell-Boltzmann distribution (a Gaussian function in velocity space) \citep{Helander2002}.
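% As a simple illustration of this connection, the physicists' Hermite polynomials $H_m$ are orthogonal precisely with respect to a Gaussian weight, % \begin{equation*} \int_{-\infty}^{+\infty} H_m(v) H_n(v) e^{-v^2} dv = 2^n n! \sqrt{\pi}\, \delta_{mn}, \end{equation*} % so that an expansion of the distribution function in Hermite polynomials amounts to an expansion about a Maxwellian, with the lowest order expansion coefficients directly related to the fluid moments.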
% We note that, although moment-hierarchy methods have a long history in plasma physics \citep{Grad1963,Braginskii1965,Balescu1988}, only recently have such formulations been developed for arbitrary collisionality regimes, using reducible \citep{Hirvijoki2016}, irreducible \citep{Ji2009}, and scalar \citep{Jorge2017} Hermite polynomials. \section{Scope and Outline of the Thesis} With the final goal of gaining a deeper understanding and obtaining a predictive tool for the plasma dynamics in the periphery region of magnetic confinement fusion devices, in the present thesis, we develop a moment-hierarchy framework able to evolve the plasma dynamics in this region. % A first-principles model is developed based on the careful reconstruction of the motion of single charged particles in a regime relevant for the tokamak periphery. % We consider first the drift-kinetic limit assuming $k_\perp \rho_i \ll 1$, a regime of interest for the SOL. % Then, gyrokinetic fluctuations at $k_\perp \rho_i \sim 1$ are included. % The collective motion of particles is described by an appropriate kinetic equation, including the effect of Coulomb collisions. % Aiming for a numerically efficient framework, we expand the distribution function in a Hermite-Laguerre moment-hierarchy set of equations valid at arbitrary collisionalities, where the integro-differential character of the Coulomb collision operator is converted into linear combinations of moments of the distribution function. % The feasibility of the numerical implementation is shown by the study of the linear evolution of electron-plasma waves and of the drift-wave instability. % This study serves not only as a proof of concept of the Hermite-Laguerre formulation, but it also allows, for the first time, the accurate calculation of the impact of collisions in such linearized systems at arbitrary collisionalities. We note that, in the present work, we focus on the electrostatic limit, which requires three criteria to be satisfied: (1) that $\beta=n T_e/(B^2/2\mu_0) \ll 1$, (2) that $\alpha = \beta a/L_p$ stays below the electromagnetic ballooning instability threshold, and (3) that the frequency of interest is far below the shear Alfvén frequency. % While condition (1) is, in general, valid across the tokamak periphery region, condition (2) can break down in the edge region in the H-mode regime and condition (3) may be violated near an X-point, where parallel wavenumbers can make the shear Alfvén frequency comparable to that of the turbulence. % Therefore, we note that the electrostatic approximation employed in this work rules out drift-Alfvén coupling and the treatment of peeling-ballooning modes in the edge. % An extension of the model derived here to include electromagnetic perturbations will be addressed in a future publication \citep{Frei2019}. % Finally, we point out that the Coulomb collision operator and its velocity moments derived in this work remain unchanged when electromagnetic perturbations are taken into account. This thesis is structured as follows. % In \cref{ch:dk}, we develop a full-F drift-kinetic model to describe the plasma dynamics in the scrape-off layer region of tokamak devices at arbitrary collisionalities, closely following \citep{Jorge2017}. % The formulation is based on a gyroaveraged Lagrangian description of the charged particle motion, and the corresponding drift-kinetic Boltzmann equation that includes a full Coulomb collision operator.
% The Hermite–Laguerre velocity space decomposition of the distribution function is used, and a set of equations to evolve the coefficients of the expansion is presented, including the moments of the Coulomb collision operator, therefore describing plasma distribution functions arbitrarily far from equilibrium. % A fluid closure in the high collisionality limit is presented, and the corresponding fluid equations are compared with previously derived fluid models. In \cref{ch:gk}, a gyrokinetic moment-hierarchy model describing the plasma dynamics in the tokamak periphery is derived within a full-F framework. % With respect to the drift-kinetic model of \cref{ch:dk}, this model evolves periphery turbulence in the presence of time-dependent electrostatic fluctuations on scale lengths ranging from the ion gyroradius to typical time-averaged gradient lengths. % The formulation is based on a nonlinear second order accurate gyrokinetic equation, derived from Hamiltonian perturbation theory methods. % The electrostatic field is evolved according to the gyrokinetic Poisson equation. % A moment-hierarchy formulation of the resulting set of equations is performed, yielding a fluid-like set of equations, valid at $k_\perp \rho_i \sim 1$. A moment expansion of the Coulomb collision operator valid at arbitrary collisionality and $k_\perp \rho_i \sim 1$ is presented in \cref{ch:op}. % This is done by performing a multipole expansion of the Rosenbluth potentials, similar to commonly employed multipole expansions in electrostatics \citep{Jackson1999}. % This allows us to derive the dependence of the full Coulomb collision operator on the particle gyroangle in terms of scalar spherical harmonics. % Finally, the resulting operator is projected onto a Hermite-Laguerre polynomial basis, yielding analytically closed formulas for numerical implementation. In \cref{ch:epw}, following \citep{Jorge2018a}, the linearized moment-hierarchy equation is numerically solved to describe the dynamics of electron-plasma waves. % The damping rate, frequency, and eigenmode spectrum of electron-plasma waves are found as a function of the collision frequency and wavelength. % A comparison is made with the collisionless limit and with simplified collision operators, where large deviations are found in the damping rates and eigenmode spectra. % Furthermore, we show the presence of a purely damped entropy mode, characteristic of a plasma where Coulomb collisions are dominant. % The dispersion relation of this mode is analytically derived and compared with numerical results. In \cref{ch:dwi}, we focus on the drift-wave instability. % We show that the moment-hierarchy framework allows retrieving established collisional and collisionless limits, closely following \citep{Jorge2018}. % At the intermediate collisionalities relevant for present and future magnetic confinement fusion devices, deviations with respect to collision operators used in state-of-the-art turbulence simulation codes show the need for retaining the full Coulomb operator in order to obtain both the correct instability growth rate and eigenmode spectrum. % We note that, ultimately, this may significantly impact quantitative predictions of transport levels. Finally, in \cref{ch:conclusion}, the results and outlook of the thesis are summarized.
{ "alphanum_fraction": 0.7934059483, "avg_line_length": 86.3774647887, "ext": "tex", "hexsha": "e6a5db2e9ac12722041e38228cb16e0b8fde375f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_forks_repo_path": "IST Version/main/ch1_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_issues_repo_path": "IST Version/main/ch1_introduction.tex", "max_line_length": 514, "max_stars_count": null, "max_stars_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_stars_repo_path": "IST Version/main/ch1_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7135, "size": 30664 }
\documentclass[12pt]{article} % 12-point font \usepackage[margin=1in]{geometry} % set page to 1-inch margins \usepackage{bm,bbm} % for math \usepackage{amsmath} % for math \usepackage{amssymb} % like \Rightarrow \setlength\parindent{0pt} % Suppresses the indentation of new paragraphs. % Big display \newcommand{\ds}{ \displaystyle } % Parenthesis \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\p}[1]{\left(#1\right)} \newcommand{\bk}[1]{\left[#1\right]} \newcommand{\bc}[1]{ \left\{#1\right\} } \newcommand{\abs}[1]{ \left|#1\right| } % Derivatives \newcommand{\df}[2]{ \frac{d#1}{d#2} } \newcommand{\ddf}[2]{ \frac{d^2#1}{d{#2}^2} } \newcommand{\pd}[2]{ \frac{\partial#1}{\partial#2} } \newcommand{\pdd}[2]{\frac{\partial^2#1}{\partial{#2}^2} } % Distributions \newcommand{\Normal}{\text{Normal}} \newcommand{\Beta}{\text{Beta}} \newcommand{\G}{\text{Gamma}} \newcommand{\InvGamma}{\text{Inv-Gamma}} \newcommand{\Uniform}{\text{Uniform}} \newcommand{\Dirichlet}{\text{Dirichlet}} \newcommand{\LogNormal}{\text{LogNormal}} % Statistics \newcommand{\E}{ \text{E} } \newcommand{\iid}{\overset{iid}{\sim}} \newcommand{\ind}{\overset{ind}{\sim}} \newcommand{\true}{\text{TRUE}} \usepackage{color} \newcommand{\alert}[1]{\textcolor{red}{#1}} % Graphics \usepackage{graphicx} % for figures \usepackage{float} % Put figure exactly where I want [H] % Uncomment if using bibliography % Bibliography % \usepackage{natbib} % \bibliographystyle{plainnat} % Adds settings for hyperlinks. (Mainly for table of contents.) \usepackage{hyperref} \hypersetup{ pdfborder={0 0 0} % removes red box from links } % Title Settings \title{CyTOF Density Estimation Data Analysis} \author{Arthur Lui} \date{\today} % \date{} to set date to empty % MAIN % \begin{document} \maketitle \section{Preliminary Data Analysis} The markers CD3z, EOMES, Perforin, and Siglec7 for one subject were studied in this data analysis. The number of cells in each sample is summarized in Table~\ref{tab:data-counts}. % \alert{Juhee: This data set was randomly subsampled such that only 20000 cells total are used. That is, $N_C + N_T = 20000$.} % We set $K=5$. Beyond that, the procedure for determining the priors was the same as in the simulation studies. Posterior samples were obtained via MCMC as previously outlined. The initial 2000 samples were discarded as burn-in, and the subsequent 10000 samples were kept for analysis. Figures~\ref{fig:data-post-pred}~and~\ref{fig:data-post-gamma} show, for each marker, the posterior predictive densities and the posterior distributions of $\gamma_i$, respectively. Also included is the posterior mean for $\beta$, denoted by $\hat\beta$. Note that for marker Siglec7, $\hat\beta=0$; and $\hat\beta=1$ for all other markers. Overall, when $\beta=1$ the model fit is excellent and the posterior densities contain the kernel density estimates and follow them closely. \alert{Juhee: It seems there's a problem when $\beta=0$. For Siglec7, $\hat\beta$ is estimated to be 0. This somewhat makes sense from Figure~\ref{fig:data-post-pred}, as the two KDEs are very similar. But in Figure~\ref{fig:data-post-gamma}~(d), $\gamma_C$ is combining both samples, yet neither $\gamma_C$ nor $\gamma_T^\star$ (which in this case are the same) matches the data well. I think part of the issue is that when $\beta=0$, $\gamma_T$ and $\eta_T$ are simply samples from the prior $\Beta(1,1)$. The chance of drawing good $\gamma_T$ and $\eta_T$ from the priors is small, so $\beta$ gets stuck for a long time.
In essence, even though $Z_i/N_i$ is small for $i\in\bc{C,T}$, I think
$\hat\beta$ should be non-zero, and the MCMC is simply stuck. What are your
thoughts?}

\begin{table}[!t]
\centering
\begin{tabular}{|c|rrrr|}
\hline
Marker & $N_C$ & $N_T$ & $Z_C$ & $Z_T$ \\
\hline
CD3z & 9730 & 10270 & 159 & 47 \\
EOMES & 9730 & 10270 & 939 & 806 \\
Perforin & 9730 & 10270 & 10 & 102 \\
Siglec7 & 9730 & 10270 & 610 & 465 \\
% Granzyme A & 9730 & 10270 & 4 & 12 \\
\hline
\end{tabular}
\caption{Counts of the number of cells in the donor sample before ($N_C$) and
after ($N_T$) treatment. $Z_C$ and $Z_T$ respectively denote the number of
zeros in the measurements before and after treatment.}
\label{tab:data-counts}
\end{table}

\begin{figure}[t!]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.5]{results/donor1/CD3z/img/postpred.pdf} &
\includegraphics[scale=0.5]{results/donor1/EOMES/img/postpred.pdf} \\
(a) CD3z ($\hat\beta=1$) & (b) EOMES ($\hat\beta=1$) \\
%
\includegraphics[scale=0.5]{results/donor1/Perforin/img/postpred.pdf} &
\includegraphics[scale=0.5]{results/donor1/Siglec7/img/postpred.pdf} \\
% \includegraphics[scale=0.5]{results/donor1/Granzyme_A/img/postpred.pdf} \\
(c) Perforin ($\hat\beta=1$) & (d) Siglec7 ($\hat\beta=0$) \\
\end{tabular}
\caption{The dashed blue and red lines are, respectively, the kernel density
estimates of $\bm{\tilde{y}}_C$ and $\bm{\tilde{y}}_T$. Likewise, the blue and
red shaded regions are the posterior density estimates for $\bm{\tilde{y}}_C$
and $\bm{\tilde{y}}_T$, respectively.}
\label{fig:data-post-pred}
\end{figure}

\begin{figure}[t!]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.5]{results/donor1/CD3z/img/gammas.pdf} &
\includegraphics[scale=0.5]{results/donor1/EOMES/img/gammas.pdf} \\
(a) CD3z ($\hat\beta=1$) & (b) EOMES ($\hat\beta=1$) \\
%
\includegraphics[scale=0.5]{results/donor1/Perforin/img/gammas.pdf} &
\includegraphics[scale=0.5]{results/donor1/Siglec7/img/gammas.pdf} \\
% \includegraphics[scale=0.5]{results/donor1/Granzyme_A/img/gammas.pdf} \\
(c) Perforin ($\hat\beta=1$) & (d) Siglec7 ($\hat\beta=0$) \\
\end{tabular}
\caption{Posterior distributions of $\gamma_C$ and $\gamma_T^\star$ in blue
and red, respectively. The circles represent the proportion of zeros in each
sample.}
\label{fig:data-post-gamma}
\end{figure}

% Uncomment if using bibliography:
% \bibliography{bib}

\end{document}
{ "alphanum_fraction": 0.6989955541, "avg_line_length": 39.1806451613, "ext": "tex", "hexsha": "c96ca064f1c9f5ec4bd1d32156cf00fa873c9000", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "luiarthur/CytofDensityEstimation", "max_forks_repo_path": "runs/datastudy/run0/tex/datastudy.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_issues_repo_issues_event_max_datetime": "2020-12-07T07:05:00.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-12T18:10:36.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "luiarthur/CytofDensityEstimation", "max_issues_repo_path": "runs/datastudy/run0/tex/datastudy.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "1f62d693c66b9e303dc8ee0cb8743dc848d9df5e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "luiarthur/CytofDensityEstimation", "max_stars_repo_path": "runs/datastudy/run0/tex/datastudy.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2034, "size": 6073 }
\section{Projection Pursuit}

One solution is projection pursuit. It assumes a model of the form
\[
f(X_1,\dots,X_p) = \sum_{k=1}^K f_k\{\bg{\alpha}_k'\bX\}
\]
where $\bg{\alpha}_k ' \bX$ denotes a one-dimensional projection of the vector
$\bX = (X_1,\dots,X_p)'$ and $f_k$ is an arbitrary function of this
projection. The model builds up the regression surface by estimating these
univariate regressions along carefully chosen projections defined by the
$\bg{\alpha}_k$. Thus for $K=1$ and $p=2$ the regression surface looks like a
corrugated sheet and is constant in the directions orthogonal to
$\bg{\alpha}_1$. If you don't see how this solves the problem of
dimensionality, the next section will help you understand.
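For example (with an illustrative choice of projection, not from the original
notes), take $K=1$, $p=2$ and $\bg{\alpha}_1 = (1,1)'/\sqrt{2}$, so that
\[
f(X_1,X_2) = f_1\{(X_1+X_2)/\sqrt{2}\}.
\]
The fitted surface varies only along the direction $(1,1)'$ and is constant
along the orthogonal direction $(1,-1)'$: this is the corrugated sheet
described above.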
{ "alphanum_fraction": 0.7538247566, "avg_line_length": 37.8421052632, "ext": "tex", "hexsha": "fc12f809d22f9c65c827d58c9ddc9faba75e6aeb", "lang": "TeX", "max_forks_count": 38, "max_forks_repo_forks_event_max_datetime": "2021-11-20T12:17:08.000Z", "max_forks_repo_forks_event_min_datetime": "2016-08-17T22:17:30.000Z", "max_forks_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "igrabski/rafalab.github.io", "max_forks_repo_path": "pages/754/section-07-01.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_issues_repo_issues_event_max_datetime": "2021-01-21T22:35:40.000Z", "max_issues_repo_issues_event_min_datetime": "2016-08-18T00:41:36.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "igrabski/rafalab.github.io", "max_issues_repo_path": "pages/754/section-07-01.tex", "max_line_length": 71, "max_stars_count": 50, "max_stars_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "igrabski/rafalab.github.io", "max_stars_repo_path": "pages/754/section-07-01.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-31T19:21:02.000Z", "max_stars_repo_stars_event_min_datetime": "2016-08-17T23:04:04.000Z", "num_tokens": 201, "size": 719 }
\subsection{Dereference Class}
\label{sec:dereference}

A \code{Dereference} object is an \code{Expression} that dereferences another
\code{ValueComputation}.

A \code{Dereference} contains an \code{Expression} representing an effective
address computation. Its use set is the same as the use set of the
\code{Expression} being dereferenced.

It is not possible, given the information in a single instruction, to
evaluate the result of a dereference. \code{eval} may still be called on an
\code{Expression} that includes dereferences, but the expected use case is as
follows:

\begin{itemize}
\item Determine the address being used in a dereference via the \code{eval}
mechanism
\item Perform analysis to determine the contents of that address
\item If necessary, fill in the \code{Dereference} node with the contents of
that address, using \code{setValue}
\end{itemize}

The type associated with a \code{Dereference} node will be the type of the
value {\itshape read from memory\/}, not the type used for the address
computation. Two \code{Dereference}s that access the same address but
interpret the contents of that memory as different types will produce
different values. The children of a \code{Dereference} at a given address are
identical, regardless of the type of dereference being performed at that
address. For example, the \code{Expression} shown in Figure 6 could have its
root \code{Dereference}, which interprets the memory being dereferenced as an
unsigned 16-bit integer, replaced with a \code{Dereference} that interprets
the memory being dereferenced as any other type. The remainder of the
\code{Expression} tree would, however, remain unchanged.

\begin{apient}
Dereference (Expression::Ptr addr, Result_Type result_type)
\end{apient}
\apidesc{
A \code{Dereference} is constructed from an \code{Expression} pointer (raw or
shared) representing the address to be dereferenced and a type indicating how
the memory at the address in question is to be interpreted.
}

\begin{apient}
virtual void getChildren (vector< InstructionAST::Ptr > & children) const
\end{apient}
\apidesc{
A \code{Dereference} has one child, which represents the address being
dereferenced. Appends the child of this \code{Dereference} to \code{children}.
}

\begin{apient}
virtual void getUses (set< InstructionAST::Ptr > & uses)
\end{apient}
\apidesc{
The use set of a \code{Dereference} is the same as the use set of its
children. The use set of this \code{Dereference} is inserted into \code{uses}.
}

\begin{apient}
virtual bool isUsed (InstructionAST::Ptr findMe) const
\end{apient}
\apidesc{
An \code{InstructionAST} is used by a \code{Dereference} if it is equivalent
to the \code{Dereference} or it is used by the lone child of the
\code{Dereference}.
}
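The three-step use case above can be sketched in code. The following fragment
is an illustrative sketch rather than part of the documented API: it assumes
the Dyninst InstructionAPI headers and the \code{u16} \code{Result\_Type} for
the 16-bit example above, and \code{analyzeMemory} is a hypothetical
stand-in for whatever user-supplied analysis determines memory contents.

\begin{apient}
#include <cstdint>
#include "Dereference.h"  // Dyninst InstructionAPI

using namespace Dyninst::InstructionAPI;

// Hypothetical user-supplied analysis: returns the 16-bit contents
// of memory at a resolved address.
uint16_t analyzeMemory(uint64_t addr);

void resolveDereference(Expression::Ptr addrExpr, Dereference & deref)
{
    // Step 1: determine the address being used in the dereference.
    Result addr = addrExpr->eval();
    if (!addr.defined) return;  // address not statically known

    // Step 2: analysis determines the contents of that address.
    uint16_t contents = analyzeMemory(addr.convert<uint64_t>());

    // Step 3: fill in the Dereference node with those contents,
    // interpreted as an unsigned 16-bit integer.
    deref.setValue(Result(u16, contents));
}
\end{apient}

After \code{setValue}, the expectation is that subsequent calls to
\code{eval} on an \code{Expression} containing this \code{Dereference} can
incorporate the known value.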
{ "alphanum_fraction": 0.7822493712, "avg_line_length": 56.7959183673, "ext": "tex", "hexsha": "308201af8154c9e72e2b7fc28bff3a7b4d56d5df", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2021-10-14T10:17:39.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-04T03:44:22.000Z", "max_forks_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Vtech181/Path_Armor", "max_forks_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/Dereference.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Vtech181/Path_Armor", "max_issues_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/Dereference.tex", "max_line_length": 822, "max_stars_count": 47, "max_stars_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Vtech181/Path_Armor", "max_stars_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/Dereference.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T11:23:59.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-14T23:12:32.000Z", "num_tokens": 715, "size": 2783 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % This is a modified ONE COLUMN version of % the following template: % % Deedy - One Page Two Column Resume % LaTeX Template % Version 1.1 (30/4/2014) % % Original author: % Debarghya Das (http://debarghyadas.com) % % Original repository: % https://github.com/deedydas/Deedy-Resume % % IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 1. Integrate biber/bibtex for article citation under publications. % 2. Figure out a smoother way for the document to flow onto the next page. % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % 5. Merge OpenFont and MacFonts as a single sty with options. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % CHANGELOG: % v1.1: % 1. Fixed several compilation bugs with \renewcommand % 2. Got Open-source fonts (Windows/Linux support) % 3. Added Last Updated % 4. Move Title styling into .sty % 5. Commented .sty file. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{deedy-resume-openfont} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Profile % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{Robert}{James Whitaker}{[email protected]} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Education % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Education} \raggedright \runsubsection{Marist College}\descript{| BS Computer Science}\hfill \location{Poughkeepsie, NY | May 2015}\\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Experience % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Experience} \runsubsection{Unlockable}\descript{| Web Developer}\hfill \location{New York, NY | June 2014 – October 2015} \begin{tightemize} \item Blah \end{tightemize} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Skills % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Skills} \raggedright \begin{tabular}{ l l } \descript{Languages} & {\location{Haskell, Elm, PureScript, HTML, CSS, JavaScript, Java}} \\ \end{tabular} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Projects % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Projects} \raggedright \runsubsection{\large{Midnight Murder Party}} \descript{| Elm, PureScript, JavaScript, HTML, CSS}\hfill \location{http://midnightmurderparty.com}\\ Sure is a project. It\textquotesingle{}s more like a million sub-projects. Yup.\\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Awards % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Awards} \runsubsection{\large{Eagle Scout}} \descript{White Plains, NY} \\ Does this count as an award?\\ \sectionsep \ \end{document}
{ "alphanum_fraction": 0.5814895606, "avg_line_length": 28.3982300885, "ext": "tex", "hexsha": "5cfca45fa598daca07dc2ce8d4c3825b46fdf8ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c69897ce9baf2e1114bdaed58a48c21b5d99166c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "robwhitaker/robwhitaker.github.io", "max_forks_repo_path": "static/resume/resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c69897ce9baf2e1114bdaed58a48c21b5d99166c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "robwhitaker/robwhitaker.github.io", "max_issues_repo_path": "static/resume/resume.tex", "max_line_length": 109, "max_stars_count": null, "max_stars_repo_head_hexsha": "c69897ce9baf2e1114bdaed58a48c21b5d99166c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "robwhitaker/robwhitaker.github.io", "max_stars_repo_path": "static/resume/resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 782, "size": 3209 }
\chapter{Troubleshooting}

\section{Network}

\scalaris{} uses a couple of TCP ports for communication. It does not use UDP
at the moment.

\begin{center}
\begin{tabular}{lll}
\toprule
& HTTP Server & Inter-node communication \\
\midrule
default (see \code{bin/scalaris.cfg}) & $8000$ & $14195$--$14198$ \\
first node (\code{bin/firstnode.sh}) & $8000$ & $14195$ \\
joining node 1 (\code{bin/joining_node.sh}) & $8001$ & $14196$ \\
other joining nodes (\code{bin/joining_node.sh <ID>}) & $8000+\texttt{<ID>}$ & $14195+\texttt{<ID>}$ \\
standalone mgmt server (\code{bin/mgmt-server.sh}) & $7999$ & $14194$ \\
\bottomrule
\end{tabular}
\end{center}

For example, \code{bin/joining_node.sh 3} uses port $8003$ for its HTTP server
and $14198$ for inter-node communication. Please make sure that at least 14195
and 14196 are not blocked by firewalls in order to be able to start at least
one first and one joining node on each machine.

\section{Miscellaneous}

For up-to-date information about frequently asked questions and
troubleshooting, please refer to our FAQs at
\url{http://scalaris.zib.de/faq.html} and our mailing list at
\url{http://groups.google.com/group/scalaris}.
{ "alphanum_fraction": 0.6678352323, "avg_line_length": 36.8064516129, "ext": "tex", "hexsha": "9276d4ff80b3b88ba298f36c28364a7cdc08351a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "201ff7242b6d8ae8d34cd1c09111ddf17184394b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "schintke/scalaris", "max_forks_repo_path": "user-dev-guide/user-troubleshoot.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "201ff7242b6d8ae8d34cd1c09111ddf17184394b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "schintke/scalaris", "max_issues_repo_path": "user-dev-guide/user-troubleshoot.tex", "max_line_length": 103, "max_stars_count": 4, "max_stars_repo_head_hexsha": "201ff7242b6d8ae8d34cd1c09111ddf17184394b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "schintke/scalaris", "max_stars_repo_path": "user-dev-guide/user-troubleshoot.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-14T11:53:22.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-26T05:21:42.000Z", "num_tokens": 333, "size": 1141 }
\chapter{Results}
\label{chap:results}

This chapter first explores the behavior of Sum and Product Coupling. Then it
compares them to PAIRpred and to the other variants of graph convolution.
Some convolution filters are then visualized, along with network output for
one trained network. Finally, it summarizes the effect of protein size on
network training time.

\section{Sum and Product Coupling Performance}

\begin{table}
\begin{center}
\begin{tabular}{lccccc}
\toprule
\multirow{2}{*}{Method} & Receptive Field & \multicolumn{4}{c}{Layers Before Merge} \\
 & Size & 1 & 2 & 3 & 4 \\
\midrule
No Convolution & N/A & \textbf{0.815} & 0.812 & 0.800 & 0.811 \\ \cline{1-6}
\multirow{2}{*}{Sum Coupling} & 11 & 0.868 & 0.889 & 0.882 & 0.884 \\
 & 21 & 0.875 & \textbf{0.903} & 0.880 & 0.890 \\ \cline{1-6}
\multirow{2}{*}{Product Coupling} & 11 & 0.856 & 0.869 & 0.885 & 0.868 \\
 & 21 & 0.863 & 0.876 & 0.896 & \textbf{0.899} \\
\bottomrule
\end{tabular}
\caption{Median area under the receiver operating characteristic curve (AUC)
across all complexes in the test set for two variants of graph convolution,
Sum Coupling and Product Coupling, as well as No Convolution. Results are
shown for two different sizes of receptive field, 11 and 21, for different
numbers of convolutional layers before the pairwise merge operation.
Boldfaced values indicate best performance for each method.
\label{tab:med_auc}}
\end{center}
\end{table}

Table \ref{tab:med_auc} shows the results of experiments involving Sum
Coupling and Product Coupling, as well as No Convolution. Comparing Sum and
Product Coupling to No Convolution reveals that convolution is beneficial. In
other words, incorporating information from neighboring residues helps
indicate whether a residue is part of an interface, which is consistent with
the biological properties of interfaces. It's also clear that a receptive
field of size 21 is generally better than 11. Interestingly, this value is
near the median number of interface residues for a single protein, which is
27.

When using convolution, performance improves with network depth up to a
point, then either decreases or remains relatively constant. In Sum Coupling,
this maximum occurs in the second layer, whereas in Product Coupling, this
maximum occurs in the fourth layer. Though not shown in the table, Product
Coupling indeed fails to improve when using 5 or 6 layers. In contrast,
networks without convolution are best with only one pre-merge layer. This
suggests that depth alone does not improve performance, but when convolution
is performed, a useful hierarchical representation is learned. Other
applications of deep learning have seen this same trend of increasing and
decreasing performance. The commonly given explanations for such behavior
include insufficient training data, a vanishing gradient, and difficulty with
optimization~\cite{he2015}. Overall, both Sum and Product Coupling achieved
similar performance in AUC.
\begin{table}
\begin{center}
\begin{tabular}{lccccc}
\toprule
\multirow{2}{*}{Method} & Receptive Field & \multicolumn{4}{c}{Layers Before Merge} \\
 & Size & 1 & 2 & 3 & 4 \\
\midrule
No Convolution & N/A & \textbf{48} & 55 & 53 & 66 \\ \cline{1-6}
\multirow{2}{*}{Sum Coupling} & 11 & 32 & 28 & 70 & 86 \\
 & 21 & \textbf{26} & 37 & 56 & 63 \\ \cline{1-6}
\multirow{2}{*}{Product Coupling} & 11 & 30 & 46 & 26 & 51 \\
 & 21 & 26 & \textbf{25} & 36 & 37 \\
\bottomrule
\end{tabular}
\caption{Median rank of the first positive prediction (RFPP) across all
complexes in the test set for two variants of graph convolution, Sum Coupling
and Product Coupling, as well as No Convolution. Results are shown for two
different sizes of receptive field, 11 and 21, for different numbers of
convolutional layers before the pairwise merge operation. Boldfaced values
indicate best performance for each method (lower is better).}
\label{tab:med_rfpp}
\end{center}
\end{table}

Table \ref{tab:med_rfpp} parallels Table \ref{tab:med_auc} but shows RFPP
instead of AUC. As before, Sum and Product Coupling perform similarly to each
other, and both outperform No Convolution. In this case, however, the best
performance for Sum and Product Coupling is seen at fewer layers than for
AUC. This difference is not surprising, considering the cross-entropy loss
function leads to optimization of performance on \emph{all} pairs, not just
the top scoring ones. In other words, RFPP is not being explicitly optimized
in this problem, whereas AUC is more closely related to the quantity being
optimized.

\begin{figure}
\includegraphics[width=\textwidth]{med_auc.png}
\caption{Median area under the receiver operating characteristic curve (AUC)
across all complexes in the test set, separated by complex class. Sum and
Product Coupling are shown for two receptive field sizes each (11 and 21), as
well as No Convolution, for 1--4 pre-merge layers. Product Coupling performs
better for difficult complexes, but worse overall because there are far more
rigid and medium difficulty complexes.
\label{fig:med_auc}}
\end{figure}

To understand the behavior of each method in more detail, we can separate
performance by the difficulty class of the test proteins. Figure
\ref{fig:med_auc} shows performance for rigid, medium difficulty, and
difficult classes, with 33, 16, and 6 complexes respectively. Here it appears
that Sum and Product Coupling are closely matched for rigid and medium
difficulty complexes. A slight difference is seen for the difficult
complexes, where Product Coupling appears to be performing better for two and
three layers, but with so few complexes it's unclear if this is a true trend.

\begin{figure}
\includegraphics[width=0.8\textwidth]{sum_20_2_histo1.png}
\caption{Histogram of area under the receiver operating characteristic curve
(AUC) for complexes in the test set, colored by difficulty class. Scores are
from Sum Coupling with two layers and receptive field size 21, which had the
highest median AUC of all methods.}
\label{fig:histo1}
\end{figure}

For another picture of performance across difficulty classes, we can examine
a histogram of AUCs, as shown in Figure \ref{fig:histo1}. These AUCs are
heavily skewed, justifying the choice of median as a summary measure.
Surprisingly, there is no clear divide between rigid, medium difficulty, and
difficult classes. In fact, the worst AUC belongs to a rigid complex, and one
difficult complex achieves AUC above 0.9.
It appears that the distinguishing characteristic between classes is the
number of complexes that achieve above 0.95 AUC. These ``trivial'' complexes
are most frequent in the rigid class, less so in the medium difficulty class,
and absent for the difficult class. This suggests that the networks are
effectively coping with some complexes that undergo conformational change.

\begin{figure}
\includegraphics[width=\textwidth]{med_rfpp.png}
\caption{Median rank of the first positive prediction (RFPP) across all
complexes in the test set, separated by difficulty class. Vertical axes are
log scaled. Sum and Product Coupling are shown for two receptive field sizes
each (11 and 21), as well as No Convolution, for 1--4 pre-merge layers. Lower
RFPP is better. Best performance on rigid complexes is achieved with just one
layer for all networks. As difficulty increases, so does the number of layers
needed to achieve best results.
\label{fig:med_rfpp}}
\end{figure}

RFPP can also be separated by difficulty class. From Figure
\ref{fig:med_rfpp} we can observe some heterogeneous behavior across methods,
but there are some trends worth noting. For each class, there appears to be a
favored number of layers where a dip (improvement) in RFPP is observed. This
optimal depth appears to increase with difficulty class, suggesting that
harder complexes require more layers to achieve best performance. Therefore
an ensemble approach with varying depths may perform well on complexes of any
difficulty.

\section{Comparison to Other Methods}

\begin{table}
\begin{center}
\begin{tabular}{l c c c c c }
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Variant} & \multicolumn{4}{c}{Layers} \\
 & & 1 & 2 & 3 & 4 \\
\midrule
No Convolution & N/A & \textbf{0.815} & 0.812 & 0.800 & 0.811 \\
\midrule
\multirow{2}{*}{Sum Coupling} & $\text{RF}=11$ & 0.868 & 0.889 & 0.882 & 0.884 \\
 & $\text{RF}=21$ & 0.875 & \textbf{0.903} & 0.880 & 0.890 \\
\midrule
\multirow{2}{*}{Product Coupling} & $\text{RF}=11$ & 0.856 & 0.869 & 0.885 & 0.868 \\
 & $\text{RF}=21$ & 0.863 & 0.876 & 0.896 & \textbf{0.899} \\
\midrule
PAIRpred~\cite{minhas2014} & N/A & (\textbf{0.863})* & - & - & - \\
\midrule
\multirow{2}{*}{PATCHY-SAN~\cite{niepert2016}} & $\text{RF}=11$ & 0.862 & 0.867 & 0.883 & 0.891 \\
 & $\text{RF}=21$ & 0.850 & 0.875 & \textbf{0.897} & 0.886 \\
\midrule
\multirow{2}{*}{Fingerprint~\cite{duvenaud2015}} & $\text{RF}=11$ & 0.857 & 0.850 & 0.863 & 0.833 \\
 & $\text{RF}=21$ & 0.861 & 0.867 & 0.881 & \textbf{0.891} \\
\midrule
\multirow{6}{*}{R-GCN~\cite{schlichtkrull2017}} & No basis fns, $\text{RF}=11$ & 0.862 & 0.871 & 0.886 & 0.893 \\
 & No basis fns, $\text{RF}=21$ & 0.876 & \textbf{0.901} & 0.892 & 0.897 \\
 & 2 basis fns, $\text{RF}=11$ & 0.851 & 0.872 & 0.779 & - \\
 & 2 basis fns, $\text{RF}=21$ & 0.873 & 0.804 & 0.539 & - \\
 & 5 basis fns, $\text{RF}=11$ & 0.870 & 0.747 & 0.748 & - \\
 & 5 basis fns, $\text{RF}=21$ & 0.867 & 0.900 & 0.709 & - \\
\midrule
\multirow{2}{*}{DTNN~\cite{schutt2017}}& $\text{RF}=11$ & 0.853 & 0.872 & 0.878 & 0.861 \\
 & $\text{RF}=21$ & 0.862 & 0.880 & 0.873 & \textbf{0.885} \\
\midrule
\multirow{4}{*}{DCNN~\cite{atwood2016}} & 2 hops, $\sigma=2$\AA{} & 0.782 & - & - & - \\
 & 2 hops, $\sigma=4$\AA{} & 0.801 & - & - & - \\
 & 5 hops, $\sigma=2$\AA{} & \textbf{0.838} & - & - & - \\
 & 5 hops, $\sigma=4$\AA{} & 0.819 & - & - & - \\
\bottomrule
\end{tabular}
\caption{Comparison of proposed convolutions with existing classification
methods. Bold values indicate best performance for each method.
For reference, retraining Sum Coupling ten times with a different random seed
yields a standard deviation of 0.006, and other methods are believed to
behave similarly. *PAIRpred is an SVM-based approach, so the result is not
really associated with a layer number.}
\label{tab:results_compare}
\end{center}
%\end{minipage}
\end{table}

Table \ref{tab:results_compare} compares the median AUC of the various
existing methods discussed. PAIRpred, the state-of-the-art pairwise interface
predictor, establishes the baseline. Interestingly, most of the graph
convolution based methods exceed this. DCNN is the exception, showing worse
performance than PAIRpred. This is perhaps unsurprising, since it differs
considerably from the other convolution methods, which are very similar to
one another.

The order-imposing PATCHY-SAN performs quite well, suggesting that the chosen
ordering of neighbors by proximity to the central vertex was a good one. Its
best AUC of 0.897 is within one standard deviation (0.006) of the best
overall performance, which was 0.903 from Sum Coupling. Therefore it is
difficult to say whether order-free or ordered methods perform better for
this problem. Only one ordered method was examined, leaving the possibility
that others may do better. Because of the unique weights used for each
neighbor, ordered methods have significantly more parameters than order-free
methods, making them more susceptible to overfitting, though that is not
observed in these results. This method also uses information from all edges
in the receptive field, whereas the other methods are only concerned with
edges connecting neighbors to the central vertex.

R-GCN did best without basis matrices. The original intent for basis matrices
was to allow for weight sharing across several relation types. This is less
relevant for these protein graphs since only a single neighborhood is being
considered per central vertex, so in a sense there is only one relation type.
When using basis matrices, larger networks (starting at three layers) drop
significantly in performance compared to other methods. At four layers,
networks consistently classified all residue pairs with the same score,
suggesting all filters had become zero valued; note that AUC cannot be
calculated when all scores are identical. This is possible when using ReLU
nonlinearities, since the output and gradient vanish for signals less than
zero. It's possible that a different initialization scheme or a different
nonlinearity would remedy this problem. Regardless, the use of basis matrices
in this context is somewhat unjustified because these graphs lack the
numerous relation types that prompted use of basis matrices in the first
place~\cite{schlichtkrull2017}. Nevertheless, the two-layer, five-basis-matrix
version performed similarly to the best results observed. However, testing
with eight basis matrices did not show improvement.

Without basis matrices, R-GCN is equivalent to Sum Coupling if edge
information is excluded. The fact that R-GCN without basis matrices performs
similarly to Sum Coupling suggests that the added edge information in Sum
Coupling is not very useful.

Fingerprint is arguably the simplest form of graph convolution since it uses
the same weight matrix for both central and neighbor vertices. Its best
performance is only 0.01 below that of R-GCN, but it requires twice as many
layers. Increasing the number of layers to five or six exhibits no
substantive improvement.
Whereas PATCHY-SAN, Fingerprint, and R-GCN are most similar to Sum Coupling,
DTNN allows for a multiplicative coupling between neighbors and edges,
similar to Product Coupling. Like Product Coupling, best performance is
achieved at a greater depth than the other methods, but performance does not
continue to improve at five or six layers. This method is notably worse than
the three just mentioned. One potentially limiting aspect of this method is
that the number of channels remains unchanged from layer to layer. This may
explain why this method performs worse than Product Coupling.

For DCNN, five hops performs better than two hops, presumably for the same
reason that a larger receptive field is better in the other convolution
methods: information is able to propagate further across the graph. The
Gaussian standard deviation determines the range of diffusion, where smaller
values limit diffusion to a localized neighborhood for each hop, and larger
values allow diffusion across longer distances. In the two-hop DCNN, the
larger standard deviation allows more diffusion to occur across the graph,
compensating for the limited number of hops. This explains the overall better
performance for $\sigma=4$\AA{}. In contrast, the five-hop DCNN performed
better for $\sigma=2$\AA{}, suggesting that the larger number of hops allows
sufficient information propagation, eliminating the need for diffusion across
greater distances for each hop.

\section{Filter Visualization}

It is often difficult to understand the behavior of a model by simply looking
at the overall performance. For convolutional neural networks on images,
intuition can be gained through a variety of methods, including visualizing
filters directly, tracking a filter's activation as different regions of the
image are occluded, and identifying which images maximally activate each
filter.

To understand the filters being learned in this pairwise neural network
architecture, network scores and filter activations were mapped to a protein
complex heatmap. Figure \ref{fig:filter_vis} presents color maps on the
complex 3HI6~\cite{zhang2009} which depict the true interface, maximum
predicted scores for each residue, and the activation of two filters from the
last convolutional layer, for the best performing method (Sum Coupling with
two layers and receptive field size of 21). Scores are partner-specific, so
they are shown for a single residue after taking the maximum over all
potential partners. This partner-independent visualization matches well with
the true interface. In contrast, convolutional filters occur at the residue
level and can be visualized directly. The two filters shown illustrate
learned features which are useful for interface prediction. Specifically, one
activates only for buried residues (indicating an \emph{unlikeliness} to
participate in an interface), and the other activates only for residues near
the true interface.

\begin{figure}
\includegraphics[width=0.6\textwidth]{3HI6_collage.png}
\caption{PyMOL~\cite{schrodinger2015} visualizations of the best performing
test complex (3HI6~\cite{zhang2009}). Upper left: Ligand (red) and receptor
(blue), along with the true interface (yellow). Upper right: Visualization of
predicted scores, where brighter colors (cyan and orange) are higher scores
and darker colors (blue and red) are lower scores. Since scores are for pairs
of residues, we take the max score over all partners in the opposing protein.
Bottom row: Activations of two filters in the second convolutional layer,
where brighter colors indicate greater activation and black indicates
activation of zero. Lower left: Filter which provides higher activations for
buried residues, a useful screening criterion for interface detection. Lower
right: Filter which gives high activations for residues near the interface of
this complex.
\label{fig:filter_vis}}
\end{figure}

\section{Training Time}

Figure \ref{fig:train_times} shows a log-log plot of training time as a
function of the number of residue pairs in a given complex. The relationship
is roughly linear in this space, indicating a power-law relationship. Fitting
lines to the data shows that the power is slightly greater than 1.0 in all
cases, so training time increases nearly linearly in the number of pairs, as
expected. This is due to the pairwise nature of the problem. For problems on
a single graph, the training time would likely scale nearly linearly in the
number of vertices. Deeper networks take longer to train, and summing over
neighbors (as in Product Coupling) takes longer than just using the central
vertex (as in No Convolution). These trends are as expected.

\begin{figure}
\includegraphics[width=1.0\textwidth]{training_time_product_coupling_1-4.png}
\caption{Training time as a function of number of residue pairs, for a single
No Convolution layer, as well as four depths of Product Coupling, receptive
field size 21. Linearity in this log-log plot indicates a power law
relationship. In this case, the relationship is nearly linear, with powers of
1.02, 1.22, 1.25, 1.25, and 1.24 respectively for No Convolution 1 layer, and
Product Coupling for 1, 2, 3, and 4 layers. Other convolution methods have
similar power law relationships.
\label{fig:train_times}}
\end{figure}
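For reference, the quoted powers are simply slopes of least-squares lines fit
in log-log space: if training time follows $T = cN^{\alpha}$ for $N$ residue
pairs, then
\[
\log T = \log c + \alpha \log N,
\]
so linearity in Figure \ref{fig:train_times} corresponds to a power law whose
exponent $\alpha$ is the fitted slope.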
{ "alphanum_fraction": 0.7615768725, "avg_line_length": 76.1, "ext": "tex", "hexsha": "d6832211d386bb680eb5aa1d653fccf65e68d389", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "fouticus/msthesis", "max_forks_repo_path": "results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "fouticus/msthesis", "max_issues_repo_path": "results.tex", "max_line_length": 852, "max_stars_count": null, "max_stars_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "fouticus/msthesis", "max_stars_repo_path": "results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4867, "size": 19025 }
\def\module{Algebraic Number Theory}
\def\lecturer{Professor Anthony Scholl}
\def\term{Lent 2021}
\def\cover{}
\def\syllabus{}
\def\thm{section}
\input{../style/header}

% Macros
\newcommand{\1}{\mathbbm{1}}
\newcommand{\dA}{\dif_\AA}
\newcommand{\dF}{\dif_F}
\newcommand{\dJ}{\dif_\JJ}
\newcommand{\dJI}{\dif_{\JJ^1}}
\newcommand{\dv}{\dif_v}
\newcommand{\hathat}[1]{\widehat{\widehat{#1\ }}\!\!}
\newcommand{\intA}[3][]{\int_{\ifstrempty{#1}{\AA_K}{#1}} #2 \, \dA #3}
\newcommand{\intF}[3][]{\int_{\ifstrempty{#1}{F}{#1}} #2 \, \dF #3}
\newcommand{\intFX}[3][]{\int_{\ifstrempty{#1}{F^\times}{#1}} #2 \, \dF^\times #3}
\newcommand{\intJ}[3][]{\int_{\ifstrempty{#1}{\JJ_K}{#1}} #2 \, \dJ #3}
\newcommand{\intJI}[3][]{\int_{\ifstrempty{#1}{\JJ_K^1}{#1}} #2 \, \dJI #3}
\newcommand{\intv}[3][]{\int_{\ifstrempty{#1}{K_v}{#1}} #2 \, \dv #3}
\newcommand{\intvX}[3][]{\int_{\ifstrempty{#1}{K_v^\times}{#1}} #2 \, \dv^\times #3}
\newcommand{\mods}{\mod\!\!^*\ }
\newcommand{\twobytwosmall}[4]{
\begin{psmallmatrix}
#1 & #2 \\
#3 & #4
\end{psmallmatrix}
}

\begin{document}

\input{../style/cover}

\setcounter{section}{0}

\section{Absolute values and places}

\subsection{Absolute values}

\lecture{1}{Thursday}{21/01/21}
Let $ K $ be a field. Recall that an \textbf{absolute value (AV)} on $ K $ is a function $ \abs{\cdot} : K \to \RR_{\ge 0} $ such that for all $ x, y \in K $,
\begin{enumerate}
\item $ \abs{x} = 0 $ if and only if $ x = 0 $,
\item $ \abs{xy} = \abs{x} \cdot \abs{y} $, and
\item $ \abs{x + y} \le \abs{x} + \abs{y} $.
\end{enumerate}
Also assume
\begin{itemize}
\item[$ 4 $.] there exists $ x \in K $ such that $ \abs{x} \ne 0, 1 $.
\end{itemize}
This excludes the trivial AV
$$ \abs{x} = \begin{cases} 0 & x = 0 \\ 1 & x \ne 0 \end{cases}. $$
An AV is \textbf{non-archimedean} if
\begin{itemize}
\item[$ 3^{\text{NA}} $.] $ \abs{x + y} \le \max\br{\abs{x}, \abs{y}} $,
\end{itemize}
and \textbf{archimedean} otherwise. An AV determines a metric $ \d\br{x, y} = \abs{x - y} $ which makes $ K $ a \textbf{topological field}, so $ + $, $ \times $, and $ \br{\cdot}^{-1} $ are continuous.

\begin{remark*}
It is convenient to weaken $ 3 $ to
\begin{itemize}
\item[$ 3' $.] there exists $ \alpha > 0 $ such that for all $ x $ and $ y $, $ \abs{x + y}^\alpha \le \abs{x}^\alpha + \abs{y}^\alpha $.
\end{itemize}
For non-archimedean AVs, this makes no difference. It does mean that if $ \abs{\cdot} $ is an AV, then so is $ \abs{\cdot}^\alpha $ for any $ \alpha > 0 $. The point is that we want the function $ z \mapsto z\overline{z} $ on $ \CC $ to be an AV. We will explain why later.
\end{remark*}

Let us suppose $ \abs{\cdot} $ is a non-archimedean AV. Then
$$ R = \cbr{x \in K \st \abs{x} \le 1} $$
is a subring of $ K $. It is a \textbf{local ring} with maximal ideal
$$ \mmm_R = \cbr{x \in R \st \abs{x} < 1}. $$
It is a \textbf{valuation ring} of $ K $, so if $ x \in K \setminus R $ then $ x^{-1} \in R $.

\begin{lemma}
\label{lem:1.1}
$ R $ is a maximal subring of $ K $.
\end{lemma}

\begin{proof}
Let $ x \in K \setminus R $. Then $ \abs{x} > 1 $. Then if $ y \in K $, there exists $ n \ge 0 $ such that $ \abs{yx^{-n}} = \abs{y} / \abs{x}^n \le 1 $, that is $ y \in x^nR $ for $ n \gg 0 $. So $ R\sbr{x} = K $, hence $ R $ is maximal.
\end{proof}

\begin{remark*}
There is a general notion of valuation, not necessarily $ \RR $-valued, seen in algebraic geometry. The valuations we are considering here are rank one valuations, and they have this maximality property.
\end{remark*}

AVs $ \abs{\cdot} $ and $ \abs{\cdot}' $ are \textbf{equivalent} if there exists $ \alpha > 0 $ such that $ \abs{\cdot}' = \abs{\cdot}^\alpha $.

\begin{proposition}
\label{prop:1.2}
The following are equivalent.
\begin{itemize}
\item $ \abs{\cdot} $ and $ \abs{\cdot}' $ are equivalent.
\item for all $ x, y \in K $, $ \abs{x} \le \abs{y} $ if and only if $ \abs{x}' \le \abs{y}' $.
\item for all $ x, y \in K $, $ \abs{x} < \abs{y} $ if and only if $ \abs{x}' < \abs{y}' $.
\end{itemize}
\end{proposition}

\begin{proof}
See local fields.
\end{proof}

A corollary is that if $ \abs{\cdot} $ and $ \abs{\cdot}' $ are non-archimedean AVs with valuation rings $ R $ and $ R' $, then $ \abs{\cdot} $ and $ \abs{\cdot}' $ are equivalent if and only if $ R = R' $, if and only if $ R \subset R' $, by \ref{lem:1.1}.

\pagebreak

Equivalent AVs define equivalent metrics on $ K $, hence the completion of $ K $ with respect to $ \abs{\cdot} $ depends only on the equivalence class of $ \abs{\cdot} $. Inequivalent AVs determine independent topologies, in the following sense.

\begin{proposition}[Weak approximation]
Let $ \abs{\cdot}_i $ for $ 1 \le i \le n $ be pairwise inequivalent AVs on $ K $, let $ a_1, \dots, a_n \in K $, and let $ \delta > 0 $. Then there exists $ x \in K $ such that for all $ i $, $ \abs{x - a_i}_i < \delta $.
\end{proposition}

\begin{proof}
Suppose we have $ z_j \in K $ such that $ \abs{z_j}_j > 1 $ and $ \abs{z_j}_i < 1 $ for all $ i \ne j $. Then $ \abs{z_j^N / \br{z_j^N + 1}}_i \to 0 $ as $ N \to \infty $ if $ i \ne j $ but $ \abs{z_j^N / \br{z_j^N + 1} - 1}_j = \abs{1 / \br{z_j^N + 1}}_j \to 0 $. So
$$ x = \sum_j a_j\dfrac{z_j^N}{z_j^N + 1} $$
works if $ N $ is sufficiently large. So it is enough to find $ z_j $, and by symmetry take $ j = 1 $. Induction on $ n $.
\begin{itemize}[leftmargin=0.5in]
\item[$ n = 1 $.] Trivial.
\item[$ n > 1 $.] Suppose we have $ y $ with $ \abs{y}_1 > 1 $ and $ \abs{y}_2, \dots, \abs{y}_{n - 1} < 1 $. If $ \abs{y}_n < 1 $, we are finished. Otherwise, pick $ w \in K $ with $ \abs{w}_1 > 1 > \abs{w}_n $, which is possible by \ref{prop:1.2}. If $ \abs{y}_n = 1 $, then $ z = y^Nw $ works, for $ N $ sufficiently large. If $ \abs{y}_n > 1 $, then $ z = y^Nw / \br{y^N + 1} $ works, for $ N $ sufficiently large.
\end{itemize}
\end{proof}

\begin{remark*}
If $ K = \QQ $ and $ \abs{\cdot}_1, \dots, \abs{\cdot}_n $ are $ p_i $-adic AVs for distinct primes $ p_i $, and $ a_i \in \ZZ $, then weak approximation says that for all $ n_i \ge 1 $, there exists $ x \in \QQ $ which is a $ p_i $-adic integer for all $ i \in \cbr{1, \dots, n} $ and $ x \equiv a_i \mod p_i^{n_i} $. This of course follows from CRT, which guarantees there exists $ x \in \ZZ $ satisfying this.
\end{remark*}

\subsection{Places}

\begin{definition*}
A \textbf{place} of $ K $ is an equivalence class of AVs on $ K $.
\end{definition*}

\begin{example*}
If $ K = \QQ $, by Ostrowski's theorem, every AV on $ \QQ $ is equivalent to one of
\begin{itemize}
\item a $ p $-adic AV $ \abs{\cdot}_p $ for $ p $ prime, or
\item a Euclidean AV $ \abs{\cdot}_\infty $.
\end{itemize}
So places of $ \QQ $ are in bijection with $ \cbr{\text{primes}} \cup \cbr{\infty} $. We will usually simply denote the places of $ \QQ $ by $ \cbr{2, 3, \dots, \infty} = \cbr{p \le \infty} $.
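For instance, taking the standard representatives $ \abs{x}_p = p^{-\v_p\br{x}} $, where $ \v_p\br{x} $ is the exponent of $ p $ in $ x $, and the Euclidean $ \abs{\cdot}_\infty $,
$$ \abs{12}_2 = \dfrac{1}{4}, \qquad \abs{12}_3 = \dfrac{1}{3}, \qquad \abs{12}_p = 1 \ \text{for} \ p \ge 5, \qquad \abs{12}_\infty = 12. $$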
\end{example*} \begin{notation*} Let \begin{itemize} \item $ \V_K $ be the places of $ K $, \item $ \V_{K, \infty} $ be the places given by archimedean AVs, the \textbf{infinite places}, and \item $ \V_{K, \f} $ be the places given by non-archimedean AVs, the \textbf{finite places}. \end{itemize} Often use letters $ v $ and $ w $, decorated suitably, to denote places. If $ v \in \V_K $, then $ K_v $ will denote the completion. If $ v : K^\times \to \RR $ is a valuation, will also use $ v $ to denote the corresponding place, that is the class of AVs $ x \mapsto r^{-v\br{x}} $ for $ r > 1 $. \end{notation*} Can restate weak approximation in terms of places. \begin{proposition} Let $ v_1, \dots, v_n $ be distinct places of $ K $. Then the image of the diagonal inclusion $$ K \hookrightarrow \prod_{1 \le i \le n} K_{v_i} $$ is dense, for the product topology. \end{proposition} \pagebreak Let $ L / K $ be finite separable, and let $ v $ and $ w $ be places of $ K $ and $ L $ respectively. Say $ w $ \textbf{lies over}, or \textbf{divides}, $ v $, denoted $ w \mid v $, if $ v = \eval{w}_K $ is the restriction of $ w $ to $ K $. Then there exists a unique continuous $ K_v \hookrightarrow L_w $ extending $ K \hookrightarrow L $. \begin{proposition} There is a unique isomorphism of topological rings mapping $$ \function{L \otimes_K K_v}{\prod_{w \in \V_L, \ w \mid v} L_w}{x \otimes y}{\br{xy}_w}. $$ \end{proposition} In the local fields course, proved this for finite places of number fields. \begin{proof} Let $ L = K\br{a} $, and let $ f \in K\sbr{T} $ be the minimal polynomial, which is separable. Factor $ f = \prod_i g_i $ for $ g_i \in K_v\sbr{T} $ irreducible and distinct. Let $ L_i = K_v\sbr{T} / \abr{g_i} $. Then $ L \otimes_K K_v = K_v\sbr{T} / \abr{f} \xrightarrow{\sim} \prod_i L_i $ by CRT. Let $ w \mid v $, inducing $ \iota_w : L \hookrightarrow L_w $. Let $ g_w \in K_v\sbr{T} $ be the minimal polynomial of $ \iota_w\br{a} $ over $ K_v $. Then $ g_w \mid f $ so $ g_w \in \cbr{g_i} $ and $ L_w = K_v\br{\iota_w\br{a}} $ is some $ L_i $. Conversely, $ K_v $ is complete and $ L_i / K_v $ is finite, so there exists a unique extension of $ v $ to $ L_i $, so there is a bijection $ \cbr{g_i} \leftrightarrow \cbr{w \mid v} $, and thus $$ L \otimes_K K_v \cong \prod_w L_w. $$ For the topological isomorphism, use that both sides are finite-dimensional normed $ K_v $-spaces. For the left hand side, choose a basis of $ L / K $ for $ L \otimes_K K_v \cong K_v^{\sbr{L : K}} $ with norm $ \norm{\br{x_i}} = \sup_i \abs{x_i}_v $, where $ \abs{\cdot}_v $ is an AV in class of $ v $ satisfying triangle inequality. For the right hand side, $ \norm{\br{y_w}} = \sup_w \abs{y_w}_w $, where $ \abs{\cdot}_w $ is the AV in class of $ w $ extending $ \abs{\cdot}_v $. A fact is that any two norms on a finite-dimensional vector space over a field complete with respect to an AV are equivalent. For local fields, exactly the same proof as for $ \RR $, and in general not much harder. See Cassels and Fr\"ohlich, Chapter II, Section $ 8 $. \end{proof} \begin{corollary} \label{cor:1.6} \hfill \begin{itemize} \item $ \cbr{w \mid v} $ is finite, non-empty, and $$ \sum_{w \mid v} \sbr{L_w : K_v} = \sbr{L : K}. $$ \item For all $ x \in L $, $$ \N_{L / K}\br{x} = \prod_{w \mid v} \N_{L_w / K_v}\br{x}, \qquad \Tr_{L / K}\br{x} = \sum_{w \mid v} \Tr_{L_w / K_v}\br{x}. $$ \end{itemize} \end{corollary} \lecture{2}{Saturday}{23/01/21} Let $ L / K $ be a finite Galois extension with $ G = \Gal\br{L / K} $. 
Then $ G $ acts on places $ w $ of $ L $ lying over a given place $ v $ of $ K $. If $ \abs{\cdot} $ is an AV on $ L $, then for all $ g \in G $, the map $ x \mapsto \abs{g^{-1}\br{x}} $ is an AV on $ L $, agreeing with $ \abs{\cdot} $ on $ K $. So this defines a left action of $ G $ on $ \cbr{w \mid v} $ by $ g\br{w} = w \circ g^{-1} $. If $ w = \v_\ppp $ for a prime $ \ppp $ in a Dedekind domain, then $ g\br{w} = \v_{g\br{\ppp}} $. \begin{definition*} Define the \textbf{decomposition group} $ \D_w $ or $ G_w $ to be the stabiliser of $ w $ in $ G $. \end{definition*} If $ g \in G_w $, then it is continuous for the topology induced by $ w $ on $ L $, so extends to an automorphism of $ L_w $, the completion. Then $ G_w \hookrightarrow \Aut\br{L_w / K_v} $, by continuity, so $ \#G_w \le \sbr{L_w : K_v} $, and $$ \#G = \br{G : G_w}\#G_w \le \br{G : G_w}\sbr{L_w : K_v} = \sum_{g \in G / G_w} \sbr{L_{g\br{w}} : K_v} \le \sum_{w' \mid v} \sbr{L_{w'} : K_v} = \sbr{L : K} = \#G, $$ by \ref{cor:1.6}. So have equality, hence $ \sbr{L_w : K_v} = \#G_w $, and so $ L_w / K_v $ is Galois with group $ \Gal\br{L_w / K_v} \xrightarrow{\sim} G_w \subset G $, and $ G $ acts transitively on places over $ v $. \begin{notation*} Suppose $ v $ is discrete valuation of $ L $, so a finite place, and the valuation ring is a DVR. Then so is any $ w \mid v $, and define $ \f\br{w \mid v} = \f_{L_w / K_v} $ to be the degree of residue class extension and $ \e\br{w \mid v} $ to be the ramification degree. Then $$ \sbr{L_w : K_v} = \e\br{w \mid v}\f\br{w \mid v}. $$ \end{notation*} \pagebreak \section{Number fields} \begin{remark*} A lot of theory applies to other global fields, that is \textbf{function fields} $ K / \FF_p\br{t} $ that are finite extensions. These are less interesting, at least to number theorists, since there are no infinite places. \end{remark*} \subsection{Dedekind domains} Let $ K $ be a \textbf{number field}, a finite extension of $ \QQ $, with \textbf{ring of integers} $ \OOO_K $, the integral closure of $ \ZZ $ in $ K $. A basic property is that $ \OOO_K $ is a Dedekind domain, that is \begin{enumerate} \item Noetherian, in fact, by finiteness of integral closure, $ \OOO_K $ is a finitely generated $ \ZZ $-module, \item integrally closed in $ K $, by definition, and \item every non-zero prime ideal is maximal, so Krull dimension at most one. \end{enumerate} The following are basic results about Dedekind domains. \begin{theorem} \label{thm:2.1} \hfill \begin{enumerate} \item A local domain is Dedekind if and only if it is a DVR. \item For a domain $ R $, the following are equivalent. \begin{enumerate} \item $ R $ is Dedekind. \item $ R $ is Noetherian and for all non-zero prime $ \ppp \subset R $, $ R_\ppp $ is a DVR. \item Every fractional ideal of $ R $ is invertible. \end{enumerate} \item A Dedekind domain with only finitely many prime ideals, so \textbf{semi-local}, is a PID. \end{enumerate} \end{theorem} A \textbf{fractional ideal} of $ R $ is a non-zero $ R $-submodule $ I \subset K $ such that for some $ 0 \ne x \in R $, $ xI \subset R $ is an ideal, and $ I $ is \textbf{invertible} if there exists a fractional ideal $ I^{-1} $ such that $ II^{-1} = R $. \begin{proof} \hfill \begin{enumerate} \item A DVR is a local PID. Proved in local fields. The forward direction is the hardest part. \item Let $ K = \Frac R $. \begin{itemize}[leftmargin=0.5in] \item[$ \br{a} \implies \br{b} $.] 
Enough to check \footnote{Exercise} that properties $ 1 $ to $ 3 $ are preserved under localisation, then use part $ 1 $.
\item[$ \br{b} \implies \br{c} $.] To prove $ \br{c} $, may assume $ I \subset R $ is an ideal. Let
$$ I^{-1} = \cbr{x \in K \st xI \subset R}. $$
If $ 0 \ne y \in I $, then $ R \subset I^{-1} \subset y^{-1}R $, so $ I^{-1} $ is a fractional ideal and $ I^{-1}I \subset R $. Let $ \ppp \subset R $ be prime, so $ R_\ppp $ is a DVR. It suffices to prove $ I^{-1}I \not\subset \ppp $. Let $ I = \abr{a_1, \dots, a_n} $ for $ a_i \in R $. Without loss of generality, $ \v_\ppp\br{a_1} \le \v_\ppp\br{a_i} $ for all $ i $. Then $ IR_\ppp = a_1R_\ppp $, so for all $ i $, $ a_i / a_1 = x_i / y_i \in R_\ppp $ for $ x_i \in R $ and $ y_i \in R \setminus \ppp $. Then $ y = \prod_i y_i \notin \ppp $ as $ \ppp $ is prime, and $ ya_i / a_1 \in R $ for all $ i $, so $ y / a_1 \in I^{-1} $. Thus $ y \in II^{-1} \setminus \ppp $.
\item[$ \br{c} \implies \br{a} $.] Check the following.
\begin{itemize}
\item $ R $ is Noetherian. Let $ I \subset R $ be an ideal. Then $ II^{-1} = R $, so $ 1 = \sum_{i = 1}^n a_ib_i $ for $ a_i \in I $ and $ b_i \in I^{-1} $. Let $ I' = \abr{a_1, \dots, a_n} \subset I $. Then $ I'I^{-1} = R = II^{-1} $, so $ I' = I $. So $ I $ is finitely generated.
\item $ R $ is integrally closed. Let $ x \in K $, integral over $ R $. Then $ I = R\sbr{x} = \sum_{0 \le i < d} Rx^i \subset K $, where $ d $ is the degree of the polynomial of integral dependence, is a fractional ideal. Obviously $ I^2 = I $, so $ I = I^2I^{-1} = II^{-1} = R $, that is $ x \in R $.
\item Every non-zero prime is maximal. Let $ \cbr{0} \ne \qqq \subset \ppp \subsetneq R $ for $ \ppp $ and $ \qqq $ prime. Then $ R \subsetneq \ppp^{-1} \subset \qqq^{-1} $, so $ \qqq \subsetneq \ppp^{-1}\qqq \subset R $, and $ \ppp\br{\ppp^{-1}\qqq} = \qqq $, so as $ \qqq $ is prime and $ \ppp^{-1}\qqq \not\subset \qqq $, we get $ \ppp \subset \qqq $, that is $ \ppp = \qqq $.
\end{itemize}
\end{itemize}

\pagebreak

\item Let $ R $ be semi-local Dedekind with non-zero primes $ \ppp_1, \dots, \ppp_n $. Choose $ x \in R $ with $ x \in \ppp_1 \setminus \ppp_1^2 $ and $ x \notin \ppp_2, \dots, \ppp_n $. Then $ \ppp_1 = \abr{x} $, and every ideal is a product of powers of $ \cbr{\ppp_i} $, by the following theorem, so $ R $ is a PID.
\end{enumerate}
\end{proof}

\begin{theorem}
Let $ R $ be Dedekind. Then
\begin{enumerate}
\item the group of fractional ideals is freely generated by the non-zero prime ideals, and
$$ I = \prod_\ppp \ppp^{\v_\ppp\br{I}}, \qquad \v_\ppp\br{I} = \inf \cbr{\v_\ppp\br{x} \st x \in I}, $$
\item if $ \br{R : I} < \infty $ for all $ I \ne 0 $, then for all $ I $ and $ J $,
$$ \br{R : IJ} = \br{R : I}\br{R : J}. $$
\end{enumerate}
\end{theorem}

\begin{proof}
\hfill
\begin{enumerate}
\item If $ I \ne R $, then $ I \subset \ppp $ for some prime ideal $ \ppp $. Then $ I = \ppp I' $ where $ I' = I\ppp^{-1} \supsetneq I $. By Noetherian induction, using the ascending chain condition on ideals, $ I $ is a product of powers of prime ideals, $ I = \prod_\ppp \ppp^{a_\ppp} $. Then get the same for fractional ideals $ J = x^{-1}I $. Consider the homomorphisms
$$ \function{\cbr{\text{fractional ideals of} \ R}}{\cbr{\text{fractional ideals of} \ R_\ppp}}{I}{IR_\ppp}, \qquad \function{\cbr{\text{fractional ideals of} \ R_\ppp}}{\ZZ}{\abr{\pi^n}}{n}. $$
The composition is $ I \mapsto \v_\ppp\br{I} $, and if $ \qqq \ne \ppp $ then $ \v_\ppp\br{\qqq} = 0 $.
So $$ \function[\br{\v_\ppp}_\ppp]{\cbr{\text{fractional ideals of} \ R}}{\bigoplus_\ppp \ZZ}{\prod_\ppp \ppp^{a_\ppp}}{\br{a_\ppp}_\ppp}. $$ So $ a_\ppp $ are unique and $ \br{\v_\ppp}_\ppp $ is an isomorphism. \lecture{3}{Tuesday}{26/01/21} \item By unique factorisation of ideals in $ 1 $, $$ \prod_\ppp \ppp^{a_\ppp} \cap \prod_\ppp \ppp^{b_\ppp} = \prod_\ppp \ppp^{\max\br{a_\ppp, b_\ppp}}, $$ so if $ I + J = R $, then $ IJ = I \cap J $, so by CRT, $ R / IJ \cong R / I \times R / J $ so the result holds if $ I + J = R $. So reduced to showing that $ \br{R : \ppp^{n + 1}} = \br{R : \ppp}\br{R : \ppp^n} $. Now $ R / \ppp^n \cong R_\ppp / \ppp^nR_\ppp $, so without loss of generality, $ R $ is local, so a DVR, $ \ppp = \abr{\pi} $, and $$ \cdot \pi : R / \abr{\pi^n} \xrightarrow{\sim} \abr{\pi} / \abr{\pi^{n + 1}}, $$ hence $ \br{R : \ppp^{n + 1}} = \br{R : \ppp}\br{\ppp : \ppp^{n + 1}} = \br{R : \ppp}\br{R : \ppp^n} $. \end{enumerate} \end{proof} The quotient group $$ \Cl R = \cbr{\text{fractional ideals of} \ R} / \cbr{\text{principal fractional ideals} \ aR \ \text{for} \ a \in K^\times} $$ is the \textbf{class group} of $ R $, or the \textbf{Picard group} $ \Pic R $. If $ K $ is a number field, write $ \Cl\br{K} = \Cl \OOO_K $, the \textbf{ideal class group} of $ K $. \begin{fact*} For a number field $ K $, $ \Cl\br{K} $ is finite. \end{fact*} \pagebreak \subsection{Places of number fields} Recall that $ \V_\QQ = \cbr{p \mid p \ \text{prime}} \cup \cbr{\infty} $. Let $ K $ be a number field. Let $ \ppp \subset \OOO_K $ be non-zero prime. Then $ \ppp $ determines a discrete valuation $ \v_\ppp $ of $ K $ and so a non-archimedean AV $ \abs{x}_\ppp = r^{-\v_\ppp\br{x}} $ for $ r > 1 $. \begin{theorem} There is a bijection $$ \function{\cbr{\text{non-zero primes of} \ \OOO_K}}{\V_{K, \f}}{\ppp}{\abs{\cdot}_\ppp}. $$ \end{theorem} \begin{proof} Let $ \ppp \ne \qqq $. Then there exists $ x \in \ppp \setminus \qqq $, and then $ \abs{x}_\ppp < 1 = \abs{x}_\qqq $, so $ \abs{\cdot}_\ppp $ and $ \abs{\cdot}_\qqq $ are inequivalent, so the map is injective. Let $ \abs{\cdot} $ be a non-archimedean AV on $ K $, with valuation ring $ R = \cbr{x \in K \st \abs{x} \le 1} $. As $ \abs{\cdot} $ is non-archimedean, $ \ZZ \subset R $, hence $ R \supset \OOO_K $, as $ R $ is integrally closed, and so $ R \supset \OOO_{K, \ppp} $ for some prime $ \ppp = \mmm_R \cap \OOO_K $. Thus $ R = \OOO_{K, \ppp} $, since by \ref{lem:1.1} $ \OOO_{K, \ppp} $ is a maximal subring of $ K $, so $ \abs{\cdot} $ and $ \abs{\cdot}_\ppp $ are equivalent. \end{proof} \begin{notation*} If $ v \in \V_{K, \f} $, then \begin{itemize} \item $ \ppp_v $ is the corresponding prime ideal of $ \OOO_K $, \item $ K_v $ is a complete discretely valued field, the completion of $ K $, \item $ \OOO_v = \OOO_{K_v} \subset K_v $ is the valuation ring, not to be confused with $ \OOO_{K, \ppp_v} $, \item $ \pi_v \in \OOO_v $ is any generator of the maximal ideal, the \textbf{uniformiser}, often assuming $ \pi_v \in K $, \item $ v : K^\times \twoheadrightarrow \ZZ $ is the \textbf{normalised discrete valuation} such that $ v\br{\pi_v} = 1 $, \item $ \kappa_v = \OOO_K / \ppp_v \cong \OOO_v / \abr{\pi_v} $ is finite of order $ \q_v = p^{\f_v} $ for a prime $ p $ such that $ v \mid p $, and \item $ \abs{x}_v = \q_v^{-v\br{x}} $ is the \textbf{normalised AV}, so $ \abs{\pi_v}_v = 1 / \q_v $. 
\end{itemize} \end{notation*} \begin{theorem} There is a bijection $$ \function{\cbr{\text{homomorphisms} \ \sigma : K \hookrightarrow \CC} / \br{\sigma \sim \overline{\sigma}}}{\V_{K, \infty}}{\sigma}{\abs{\sigma\br{\cdot}}}. $$ \end{theorem} \begin{proof} Recall that if $ L / K $ is a finite separable field extension and $ v $ is a place of $ K $, then $ L \otimes_K K_v \cong \prod_{w \mid v} L_w $. There is a unique infinite place $ \infty $ of $ \QQ $ and $ \QQ_\infty = \RR $. So $$ K \otimes_\QQ \RR \xrightarrow{\sim} \prod_{v \in \V_{K, \infty}} K_v. $$ Each $ K_v $ is a finite extension of $ \RR $, so either \begin{itemize} \item $ K_v = \RR $, and $ v $ is \textbf{real}, or \item $ K_v \cong \CC $, and $ v $ is \textbf{complex}. \end{itemize} In the second case, as $ K \subset K_v $ is dense, $ K \not\subset \RR $. On the other hand, by Galois theory, the group of homomorphisms $ \sigma : K \hookrightarrow \CC $ has order $ n = \sbr{K : \QQ} $ and there is an isomorphism \begin{equation} \label{eq:1} \function{K \otimes_\QQ \CC}{\prod_{\sigma : K \hookrightarrow \CC} \CC}{x \otimes z}{\br{\sigma\br{x}z}_\sigma}. \end{equation} Complex conjugation acts on both sides of $ \br{\ref{eq:1}} $ by $ x \otimes z \mapsto x \otimes \overline{z} $ and $ \br{z_\sigma}_\sigma \mapsto \br{\overline{z_{\overline{\sigma}}}}_\sigma $. Let $$ \sigma_1, \dots, \sigma_{\r_1} : K \hookrightarrow \RR, \qquad \sigma_{\r_1 + 1} = \overline{\sigma_{\r_1 + \r_2 + 1}}, \dots, \sigma_{\r_1 + \r_2} = \overline{\sigma_{\r_1 + 2\r_2}} : K \hookrightarrow \CC, \qquad \r_1 + 2\r_2 = n. $$ Then taking fixed points under complex conjugation of $ \br{\ref{eq:1}} $, $$ K \otimes_\QQ \RR \xrightarrow{\sim} \prod_{\sigma \ \text{real}} \RR \times \prod_{\br{\sigma, \overline{\sigma}}, \ \sigma \ne \overline{\sigma}} \cbr{\br{z, \overline{z}} \in \CC \times \CC} \cong \RR^{\r_1} \times \CC^{\r_2}. $$ \end{proof} \pagebreak \begin{notation*} Define $$ K_\infty = K \otimes_\QQ \RR \cong \prod_{v \in \V_{K, \infty}} K_v \cong \RR^{\cbr{\text{real} \ v}} \times \CC^{\cbr{\text{complex} \ v}}, $$ where for $ v $ complex, $ K_v \cong \CC $ is well-defined up to complex conjugation. For normalised AVs, \begin{itemize} \item $ v $ real corresponds to $ \sigma : K \hookrightarrow \RR $ and $ \abs{x}_v = \abs{\sigma\br{x}} $ is the Euclidean AV, and \item $ v $ complex corresponds to $ \sigma \ne \overline{\sigma} : K \hookrightarrow \CC $ and $ \abs{x}_v = \sigma\br{x}\overline{\sigma}\br{x} = \abs{\sigma\br{x}}^2 $ is the square of the modulus. \end{itemize} \end{notation*} Let $ L / K $ be an extension of number fields, and let $ w \mid v $. \begin{itemize} \item If $ L_w / K_v $ is a finite extension of non-archimedean local fields, then $ \sbr{L_w : K_v} = \e\br{w \mid v}\f\br{w \mid v} $. \item If $ L_w / K_v \cong \RR / \RR $ or $ L_w / K_v \cong \CC / \CC $, then $ \f = \e = 1 $. If $ L_w / K_v \cong \CC / \RR $, then $ v $ is ramified, and $ \e = 2 $ and $ \f = 1 $. Neukirch has a different terminology. \end{itemize} \lecture{4}{Thursday}{28/01/21} \begin{proposition} Let $ x \in L $ and $ v \in \V_K $. Then $$ \abs{\N_{L / K}\br{x}}_v = \prod_{w \mid v} \abs{x}_w. $$ \end{proposition} \begin{proof} $ \N_{L / K}\br{x} = \prod_{w \mid v} \N_{L_w / K_v}\br{x} $ so it is enough to show $ \abs{\N_{L_w / K_v}\br{x}}_v = \abs{x}_w $.
If $ v $ is finite, it is enough to take $ x = \pi_w \in L $, and $$ \abs{\N_{L_w / K_v}\br{\pi_w}}_v = \abs{u\pi_v^{\f\br{w \mid v}}}_v = \q_v^{-\f\br{w \mid v}} = \q_w^{-1} = \abs{\pi_w}_w, \qquad u \in \OOO_{K_v}^\times. $$ If $ v $ is infinite, need only consider $ L_w / K_v \cong \CC / \RR $ and $ \N_{\CC / \RR}\br{z} = z\overline{z} $. \end{proof} \begin{theorem}[Product formula] Let $ x \in K^\times $. Then $ \abs{x}_v = 1 $ for all but finitely many $ v $ and $$ \prod_{v \in \V_K} \abs{x}_v = 1. $$ \end{theorem} \begin{proof} Let $ x = a / b $ for $ a, b \in \OOO_K \setminus \cbr{0} $. Then $$ \cbr{v \in \V_K \st \abs{x}_v \ne 1} \subset \V_{K, \infty} \cup \cbr{v \in \V_{K, \f} \st v\br{a} > 0 \ \text{or} \ v\br{b} > 0} $$ is a finite set. Now $$ \prod_{v \in \V_K} \abs{x}_v = \prod_{p \le \infty} \prod_{v \mid p} \abs{x}_v = \prod_{p \le \infty} \abs{\N_{K / \QQ}\br{x}}_p. $$ So it is enough to prove for $ K = \QQ $, and by multiplicativity, reduce to \begin{itemize} \item $ x = q $ prime, where $$ \abs{q}_p = \begin{cases} \dfrac{1}{q} & p = q \\ 1 & p \ne q, \infty \\ q & p = \infty \end{cases}, $$ \item $ x = -1 $, where $ \abs{-1}_p = 1 $ for all $ p \le \infty $. \end{itemize} \end{proof} \begin{remark*} \hfill \begin{itemize} \item $ \RR $, with standard measure $ \d x $, transforms under $ a \in \RR^\times $ by $ \d\br{ax} = \abs{a}\d x $. \item $ \CC $, with standard measure $ \d x\d y $, transforms under $ a \in \CC^\times $ by $ \d\br{ax}\d\br{ay} = \abs{a}^2\d x\d y $, with the normalised AV on $ \CC $. \end{itemize} \end{remark*} \begin{fact*} On $ K_v $, for any $ v $, there is a translation-invariant measure, the Haar measure, $ \dv x $, and for all $ a \in K_v^\times $, $ \dv\br{ax} = \abs{a}_v\dv x $ where $ \abs{\cdot}_v $ is the normalised AV. \end{fact*} \pagebreak \section{Different and discriminant} \subsection{Discriminant} Let $ R \subset S $ be rings, commutative with unity, such that $ S $ is a free $ R $-module of finite rank $ n \ge 1 $. Then we have a trace map given by $$ \function[\Tr_{S / R}]{S}{R}{x}{\Tr\br{y \mapsto xy}}, $$ the trace of the $ R $-linear map $ S \to S \cong R^n $. If $ x_1, \dots, x_n \in S $, define $$ \disc_{S / R} \br{x_i} = \disc \br{x_i} = \det \br{\Tr_{S / R}\br{x_ix_j}} \in R. $$ If $ y_i = \sum_{j = 1}^n r_{ji}x_j $ for $ r_{ji} \in R $, then $ \Tr_{S / R}\br{y_iy_j} = \sum_{k, l} r_{ki}r_{lj}\Tr_{S / R}\br{x_kx_l} $, so \begin{equation} \label{eq:2} \disc \br{y_i} = \det \br{r_{ij}}^2 \disc \br{x_i}. \end{equation} \begin{definition*} Let $ S = \bigoplus_{i = 1}^n Re_i $. Then the \textbf{discriminant} $$ \disc\br{S / R} = \disc_{S / R} \br{e_i}R \subset R $$ is an ideal of $ R $, independent of the basis by $ \br{\ref{eq:2}} $. \end{definition*} The following are obvious properties. \begin{itemize} \item If $ S = S_1 \times S_2 $ for $ S_i $ free over $ R $, then $$ \disc\br{S / R} = \disc\br{S_1 / R}\disc\br{S_2 / R}. $$ \item If $ f : R \to R' $ is a ring homomorphism, then $$ \disc\br{S \otimes_R R' / R'} = f\br{\disc\br{S / R}}R'. $$ \item If $ R $ is a field, then $ \disc\br{S / R} = R $ or $ \disc\br{S / R} = 0 $, and $ \disc\br{S / R} = R $ if and only if the $ R $-bilinear form $$ \function{S \times S}{R}{\br{x, y}}{\Tr_{S / R}\br{xy}} $$ is non-degenerate, that is there is a duality of the $ R $-vector space $ S $ with itself. 
\end{itemize} By field theory, if $ L / K $ is a finite field extension, then $ \disc\br{L / K} = K $ if and only if the trace form is non-degenerate, if and only if there exists $ x \in L $ with $ \Tr_{L / K}\br{x} \ne 0 $, if and only if $ L / K $ is separable. More generally, the following holds. \begin{theorem} \label{thm:3.1} Let $ k $ be a field, and let $ A $ be a finite-dimensional $ k $-algebra. Then $ \disc\br{A / k} \ne 0 $, so $ \disc\br{A / k} = k $, if and only if $ A = \prod_i K_i $ for $ K_i / k $ a finite separable field extension. \end{theorem} \begin{proof} Write $ A = \prod_{i = 1}^m A_i $ where $ A_i $ are indecomposable $ k $-algebras, so $ A_i $ is local. So may assume $ A $ is local with maximal ideal $ \mmm $. If $ \mmm = 0 $, that is $ A $ is a field, reduced to the previous statement. If not, then every element of $ \mmm $ is nilpotent, since $ \dim_k A < \infty $. So there exists $ x \in \mmm \setminus \cbr{0} $ nilpotent. So the endomorphism $ y \mapsto xy $ of $ A $ is nilpotent and for all $ r \in A $, so is $ y \mapsto \br{rx}y $, so for all $ r \in A $, $ \Tr_{A / k}\br{rx} = 0 $. So the trace form is degenerate, and the discriminant is zero. See Atiyah-Macdonald chapter on Artinian rings for an explanation of $ A = \prod_i A_i $. \end{proof} \lecture{5}{Saturday}{30/01/21} Let $ R $ be a Dedekind domain, let $ K = \Frac R $, let $ L / K $ be finite separable, and let $ S $ be the integral closure of $ R $ in $ L $. Say $ S / R $ is an \textbf{extension of Dedekind domains}. Then $ S $ is a finitely generated $ R $-module, but need not be free. \begin{proposition} $ S $ is a \textbf{locally free} $ R $-module of rank $ n = \sbr{L : K} $, that is for all $ \ppp \subset R $, $ S_\ppp \cong R_\ppp^n $. \end{proposition} \begin{proof} $ S \subset L $ so $ S $ is torsion-free, hence so is $ S_\ppp $, and $ R_\ppp $ is a PID, so $ S_\ppp $ is free, clearly of rank $ \dim_K L = n $. \end{proof} \pagebreak \begin{lemma} If $ x \in S $, then $ \Tr_{L / K}\br{x} \in R $. \end{lemma} \begin{proof} If $ R $ is local, then $ S $ is a free $ R $-module so $ \Tr_{L / K}\br{x} = \Tr_{S \otimes_R K / K}\br{x \otimes 1} = \Tr_{S / R}\br{x} \in R $. So in general, for all $ 0 \ne \ppp \subset R $, $ y = \Tr_{L / K}\br{x} \in R_\ppp $ and $$ \bigcap_\ppp R_\ppp = \cbr{x \in K \st \forall \ppp, \ \v_\ppp\br{x} \ge 0} = R. $$ \end{proof} Then there are two equivalent definitions of $ \disc\br{S / R} $. \begin{definition*} $ \disc\br{S / R} $ is defined to be the ideal of $ R $ generated by $$ \cbr{\disc_{L / K} \br{x_1, \dots, x_n} \st x_1, \dots, x_n \in S}. $$ \end{definition*} If $ S / R $ is free, this gives the previous definition. As $ S \otimes_R K = L $ is separable over $ K $, $ \disc\br{L / K} = K \ne 0 $ and so $ \disc\br{S / R} \ne 0 $. This is also how one proves that $ S $ is a finitely generated $ R $-module. \begin{proposition} \label{prop:3.4} $ \disc\br{S / R}R_\ppp = \disc\br{S_\ppp / R_\ppp} $ for all $ \ppp $. \end{proposition} \begin{proof} Claim there exist $ x_1, \dots, x_n \in S $ which form an $ R_\ppp $-basis for $ S_\ppp $. Certainly there exist $ e_1, \dots, e_n \in S_\ppp $ which form an $ R_\ppp $-basis. Let $$ \QQQ = \cbr{\text{primes} \ \qqq \subset S \st \exists i, \ \v_\qqq\br{e_i} < 0} $$ be a finite set. By CRT, there exist $ a_i \in S $ such that $ \v_\qqq\br{a_i} + \v_\qqq\br{e_i} \ge 0 $ for all $ \qqq \in \QQQ $ and $ a_i - 1 \in \ppp S $. Then $ x_i = a_ie_i \in S $ and $ x_i \equiv e_i \mod \ppp S $.
So $ \br{x_i} $ is an $ R / \ppp $-basis for $ S / \ppp S = S_\ppp / \ppp S_\ppp $, so $ \br{x_i} $ is an $ R_\ppp $-basis for $ S_\ppp $. Thus $ \disc\br{S_\ppp / R_\ppp} = \disc \br{x_i}R_\ppp $, and $ \disc \br{x_i} \in \disc\br{S / R} $. So $ \disc \br{S_\ppp / R_\ppp} \subset \disc\br{S / R}R_\ppp $ and the other inclusion is obvious. \end{proof} There is an alternative definition of $ \disc\br{S / R} $. If $ x_1, \dots, x_n \in S $ form a $ K $-basis for $ L $, then $ \disc_{L / K} \br{x_i} \ne 0 $. Let $$ \PPP = \cbr{\ppp \subset R \st \v_\ppp\br{\disc_{L / K} \br{x_i}} > 0} $$ be a finite set. So for all $ \ppp \notin \PPP $, $ \disc\br{S_\ppp / R_\ppp} = R_\ppp $. \begin{definition*} Define $$ \disc\br{S / R} = \prod_{\ppp \in \PPP} \ppp^{\v_\ppp\br{\disc\br{S_\ppp / R_\ppp}}}, $$ which is equivalent by \ref{prop:3.4} to the previous definition. \end{definition*} \begin{theorem} \label{thm:3.5} $ \v_\ppp\br{\disc\br{S / R}} = 0 $ if and only if $ \ppp $ is unramified in $ S $ and for all $ \qqq \subset S $ over $ \ppp $, the residue field extension $ \br{S / \qqq} / \br{R / \ppp} $ is separable. \end{theorem} \begin{proof} May assume $ R $ is local, so $ S $ is free over $ R $. Have $ \ppp S = \prod_\qqq \qqq^{e_\qqq} $, so $$ S \otimes_R \br{R / \ppp} \cong S / \ppp S \cong \prod_\qqq S / \qqq^{e_\qqq}. $$ So $ \v_\ppp\br{\disc\br{S / R}} = 0 $ if and only if $ \disc\br{\br{S / \ppp S} / \br{R / \ppp}} = R / \ppp $, if and only if each $ S / \qqq^{e_\qqq} $ is a finite separable field extension of $ R / \ppp $ by \ref{thm:3.1}, if and only if for all $ \qqq $, $ e_\qqq = 1 $ and $ \br{S / \qqq} / \br{R / \ppp} $ is separable. \end{proof} \begin{corollary} In an extension $ S / R $ of Dedekind domains, only finitely many primes are ramified, namely the $ \ppp $ such that $ \v_\ppp\br{\disc\br{S / R}} > 0 $. \end{corollary} \begin{proposition} Let $ \ppp \subset R $. Then $$ \v_\ppp\br{\disc\br{S / R}} = \sum_{\qqq \supset \ppp} \v_\ppp\br{\disc\br{\widehat{S_\qqq} / \widehat{R_\ppp}}}. $$ \end{proposition} \begin{proof} By \ref{prop:3.4} may assume $ R $ is local, so $ S $ is a free $ R $-module, and $ S \otimes_R \widehat{R} \cong \prod_{\qqq \subset S} \widehat{S_\qqq} $ so $$ \v_\ppp\br{\disc\br{S / R}} = \v_\ppp\br{\disc\br{S \otimes_R \widehat{R} / \widehat{R}}} = \sum_\qqq \v_\ppp\br{\disc\br{\widehat{S_\qqq} / \widehat{R}}}. $$ \end{proof} \pagebreak \subsection{Different} There is a finer invariant of ramification. \begin{definition*} The \textbf{inverse different} $ \DDD_{S / R}^{-1} $ of an extension $ S / R $ of Dedekind domains is $$ \DDD_{S / R}^{-1} = \cbr{x \in L \st \forall y \in S, \ \Tr_{L / K}\br{xy} \in R}. $$ \end{definition*} This is the dual of $ S $ with respect to the trace form $ \br{x, y} \mapsto \Tr_{L / K}\br{xy} $, which is non-degenerate, and it is clearly an $ S $-submodule of $ L $. If $ \bigoplus_{i = 1}^n Rx_i \subset S $, let $ \br{y_i} $ be the dual basis to $ \br{x_i} $ for the trace form, that is $ \Tr_{L / K}\br{x_iy_j} = \delta_{ij} $. Then $ S \subset \DDD_{S / R}^{-1} \subset \bigoplus_{i = 1}^n Ry_i $, so $ \DDD_{S / R}^{-1} $ is a fractional ideal, since it is finitely generated. \begin{definition*} The inverse $ \DDD_{S / R} = \br{\DDD_{S / R}^{-1}}^{-1} $ is an ideal of $ S $, since $ S \subset \DDD_{S / R}^{-1} $, called the \textbf{different}. \end{definition*} \begin{proposition} \label{prop:3.8} \hfill \begin{enumerate} \item If $ \ppp \subset R $, then $ \DDD_{S_\ppp / R_\ppp} = \DDD_{S / R}S_\ppp $. \item $ \N_{L / K}\br{\DDD_{S / R}} = \disc\br{S / R} $. \item Let $ \qqq \subset S $ lie over $ \ppp \subset R $.
Then $ \v_\qqq\br{\DDD_{S / R}} = \v_\qqq\br{\DDD_{\widehat{S_\qqq} / \widehat{R_\ppp}}} $. \end{enumerate} \end{proposition} \begin{proof} \hfill \begin{enumerate} \item Exercise. \footnote{Exercise: the same idea as \ref{prop:3.4}} \item By $ 1 $ and \ref{prop:3.4}, can suppose $ R $ is local. Then $ S $ is a PID by \ref{thm:2.1}.$ 3 $. So $ \DDD_{S / R}^{-1} = x^{-1}S $ for some $ 0 \ne x \in S $. Let $ \br{e_i} $ be a basis for $ S $ over $ R $. Then there exists a basis $ \br{e_i'} $ for $ S $ over $ R $ such that $ \Tr_{L / K}\br{e_ix^{-1}e_j'} = \delta_{ij} $. Let $ x^{-1}e_j' = \sum_k b_{kj}e_k $ for $ b_{kj} \in K $. Then $$ \abr{1} = \abr{\det \br{\Tr_{L / K}\br{e_ix^{-1}e_j'}}} = \abr{\det \br{\Tr_{L / K}\br{e_ie_j}}\det \br{b_{ij}}} = \det \br{b_{ij}}\disc\br{S / R}. $$ But $ \N_{L / K}\br{x^{-1}} $ is $ \det \br{b_{ij}} $ times some unit in $ R $. So $ \abr{1} = \abr{\N_{L / K}\br{x^{-1}}}\disc\br{S / R} $. \lecture{6}{Tuesday}{02/02/21} \item Assume $ R $ is local and $ \ppp = \abr{\pi_\ppp} $. Write $ \widehat{K} = \Frac \widehat{R} $ and for $ \qqq = \abr{\pi_\qqq} \subset S $ write $ \widehat{L_\qqq} = \Frac \widehat{S_\qqq} $. Then $$ L \otimes_K \widehat{K} \supset S \otimes_R \widehat{R} \xrightarrow{\sim} \prod_\qqq \widehat{S_\qqq} \subset \prod_\qqq \widehat{L_\qqq}, $$ and \begin{equation} \label{eq:3} \Tr_{L \otimes_K \widehat{K} / \widehat{K}}\br{x} = \sum_\qqq \Tr_{\widehat{L_\qqq} / \widehat{K}}\br{x}. \end{equation} Let $ S = \bigoplus_{i = 1}^n Rx_i $, and $ \bigoplus_{i = 1}^n Ry_i = \DDD_{S / R}^{-1} = \prod_\qqq \pi_\qqq^{-a_\qqq}S $ for some $ a_\qqq \ge 0 $ and $ y_i \in L $, the dual basis to $ x_i $. Then as $ S \otimes_R \widehat{R} = \bigoplus_{i = 1}^n \widehat{R}\br{x_i \otimes 1} $, \begin{align*} \DDD_{S \otimes_R \widehat{R} / \widehat{R}}^{-1} & = \cbr{x \in L \otimes_K \widehat{K} \st \forall y \in S \otimes_R \widehat{R}, \ \Tr_{L \otimes_K \widehat{K} / \widehat{K}}\br{xy} \in \widehat{R}} \\ & = \bigoplus_{i = 1}^n \widehat{R}\br{y_i \otimes 1} = \DDD_{S / R}^{-1}\br{S \otimes_R \widehat{R}} = \prod_\qqq \pi_\qqq^{-a_\qqq}\br{S \otimes_R \widehat{R}} \subset L \otimes_K \widehat{K}, \end{align*} since $ \Tr_{L / K}\br{x_iy_j} = \delta_{ij} $ and trace commutes with base change. On the other hand, by $ \br{\ref{eq:3}} $ and the definitions $$ \DDD_{S \otimes_R \widehat{R} / \widehat{R}}^{-1} \cong \prod_\qqq \DDD_{\widehat{S_\qqq} / \widehat{R}}^{-1} \subset \prod_\qqq \widehat{L_\qqq}, $$ so $$ \DDD_{\widehat{S_\qqq} / \widehat{R}}^{-1} = \prod_{\qqq'} \pi_{\qqq'}^{-a_{\qqq'}}\widehat{S_\qqq} = \pi_\qqq^{-a_\qqq}\widehat{S_\qqq}, $$ as $ \v_\qqq\br{\pi_{\qqq'}} = 0 $ if $ \qqq' \ne \qqq $. \end{enumerate} \end{proof} \pagebreak Use this to prove the following. \begin{theorem} Assume all extensions of residue fields are separable. Let $ \ppp S = \prod_{i = 1}^g \qqq_i^{e_i} \subset S $. Then $ \qqq_i \mid \DDD_{S / R} $ if and only if $ e_i > 1 $, and $ \qqq_i^{e_i - 1} \mid \DDD_{S / R} $. \end{theorem} \begin{proof} First assume $ R $ is complete local and $ \ppp = \abr{\pi_\ppp} $. Then $ S $ is also local, and complete, with unique prime $ \qqq = \abr{\pi_\qqq} $, so $ g = 1 $. So $ \DDD_{S / R} = \abr{\pi_\qqq}^d $ for $ d \ge 0 $. By \ref{prop:3.8}.$ 2 $, $ \disc\br{S / R} = \abr{\N_{L / K}\br{\pi_\qqq}^d} = \abr{\pi_\ppp}^{d\f} $. So as $ \v_\ppp\br{\disc\br{S / R}} = 0 $ if and only if $ \ppp $ is unramified by \ref{thm:3.5}, get the first statement. For the second, claim $ \Tr_{L / K}\br{\qqq} \subset \ppp $. Let $ x \in \qqq $.
Then multiplication by $ x $ is a nilpotent endomorphism of $ S \otimes_R \br{R / \ppp} \cong S / \qqq^\e $, so $ \Tr_{S \otimes_R \br{R / \ppp} / \br{R / \ppp}}\br{x \otimes 1} = 0 $, that is $ \Tr_{L / K}\br{x} = \Tr_{S / R}\br{x} \in \ppp $. Hence the claim. Therefore $ \Tr_{L / K}\br{\qqq^{1 - \e}} = \Tr_{L / K}\br{\pi_\ppp^{-1}\qqq} \subset R $, so $ \qqq^{1 - \e} \subset \DDD_{S / R}^{-1} $, that is $ \qqq^{\e - 1} \mid \DDD_{S / R} $. For the general case, apply the above to $ \widehat{S_{\qqq_i}} / \widehat{R_\ppp} $ and use \ref{prop:3.8}.$ 3 $. \end{proof} \begin{fact*} \hfill \begin{itemize} \item If $ \ppp \nmid e_i $ then $ \v_{\qqq_i}\br{\DDD_{S / R}} = e_i - 1 $. If $ \ppp \mid e_i $ then $ \v_{\qqq_i}\br{\DDD_{S / R}} \ge e_i $. More precisely, $ \v_{\qqq_i}\br{\DDD_{S / R}} $ is determined by the orders of the higher ramification groups, for a Galois closure of $ L / K $. See for example Serre, Local fields, Chapter $ 4 $, Section $ 1 $, Proposition $ 4 $. \item If $ S = R\sbr{x} $, and $ x $ has minimal polynomial $ f \in R\sbr{T} $ then $ \DDD_{S / R} = \abr{f'\br{x}} $ where $ f' $ is the derivative. See example sheet $ 1 $. This means that $ \DDD_{S / R} $ is the annihilator of the cyclic $ S $-module $ \Omega_{S / R} $ of K\"ahler differentials, generated by $ \d x $. \end{itemize} \end{fact*} For an extension $ L / K $ of number fields write $$ \DDD_{L / K} = \DDD_{\OOO_L / \OOO_K} \subset \OOO_L, \qquad \delta_{L / K} = \disc\br{\OOO_L / \OOO_K} \subset \OOO_K. $$ \begin{remark*} Let $ K / \QQ $ be a number field, and let $ \br{e_i} $ be a $ \ZZ $-basis for $ \OOO_K $. Then $ \delta_{K / \QQ} \subset \ZZ $ is $ \abr{\disc \br{e_i}} $ and if $ \br{e_i'} $ is another basis such that $ e_i' = \sum_j a_{ji}e_j $, then $ \disc \br{e_i'} = \br{\det \br{a_{ij}}}^2\disc \br{e_i} = \disc \br{e_i} $, since $ \det \br{a_{ij}} = \pm 1 $. So the integer $ \disc \br{e_i} $ is independent of the basis, not just the ideal it generates. This is called the \textbf{absolute discriminant} $ \d_K \in \ZZ \setminus \cbr{0} $ of $ K $. The sign is significant. \end{remark*} \begin{theorem}[Kummer-Dedekind criterion] Let $ S / R $ be an extension of Dedekind domains, and let $ x \in S $ such that $ L = K\br{x} $. Suppose $ \ppp \subset R $ is such that $ S_\ppp = R_\ppp\sbr{x} $. Let $ g \in R\sbr{T} $ be the minimal polynomial of $ x $ and $ \overline{g} = \prod_i \overline{g_i}^{e_i} \in \br{R / \ppp}\sbr{T} $ the factorisation of the reduction of $ g $ into powers of distinct monic irreducibles $ \overline{g_i} $. Let $ g_i \in R\sbr{T} $ be any monic lifting of $ \overline{g_i} $ and $ f_i = \deg g_i = \deg \overline{g_i} $. Then $$ \qqq_i = \ppp S + \abr{g_i\br{x}} \subset S $$ is prime with $ \sbr{S / \qqq_i : R / \ppp} = f_i $, if $ i \ne j $ then $ \qqq_i \ne \qqq_j $, and $ \ppp S = \prod_i \qqq_i^{e_i} $. \end{theorem} \begin{proof} Can assume $ R $ is local, so then $ S = R\sbr{x} $. Set $ \ppp = \abr{\pi} $ and $ R / \ppp = \kappa $. \begin{itemize} \item $ \qqq_i $ is prime with residue degree $ f_i $, since $ S / \qqq_i \cong \kappa\sbr{T} / \abr{\overline{g_i}} $, and $ \overline{g_i} $ is irreducible of degree $ f_i $. \item If $ i \ne j $, there exist $ a, b \in R\sbr{T} $ such that $ \overline{a}\overline{g_i} + \overline{b}\overline{g_j} = 1 \in \kappa\sbr{T} $, so $ 1 = ag_i + bg_j + \pi c $ for some $ c \in R\sbr{T} $. Then $ 1 \in \abr{\pi, g_i\br{x}, g_j\br{x}} = \qqq_i + \qqq_j $, so $ \qqq_i \ne \qqq_j $. \end{itemize} Let $ g = \prod_i g_i^{e_i} + \pi h $ for $ h \in R\sbr{T} $.
Then $$ \prod_i \qqq_i^{e_i} = \prod_i \abr{\pi, g_i\br{x}}^{e_i} \subset \prod_i \abr{\pi, g_i\br{x}^{e_i}} \subset \abr{\pi, \prod_i g_i\br{x}^{e_i}} = \abr{\pi, \pi h\br{x}} \subset \abr{\pi} = \ppp S. $$ Now $$ \dim_\kappa \br{S / \ppp S} = n = \sbr{L : K}, \qquad \dim_\kappa \br{S / \qqq_i^{e_i}} = \sum_{j = 0}^{e_i - 1} \dim_\kappa \br{\qqq_i^j / \qqq_i^{j + 1}} = e_i\dim_\kappa \br{S / \qqq_i} = e_if_i, $$ so $ \prod_i \qqq_i^{e_i} \subset \ppp S $ gives $ \sum_i e_if_i \ge n $. As $ \sum_i e_if_i = \sum_i e_i\deg \overline{g_i} = \deg \overline{g} = n $, have equality. \end{proof} \pagebreak \section{Example: quadratic fields} \lecture{7}{Thursday}{04/02/21} Let $ K = \QQ\br{\sqrt{d}} $ for $ d \in \QQ^\times $ not a square. Multiplying $ d $ by a square, can assume $ d \in \ZZ \setminus \cbr{0, 1} $ is squarefree. Then $$ \OOO_K \supset \ZZ\sbr{\sqrt{d}} = \ZZ \oplus \ZZ\sqrt{d}. $$ Since $ \Tr_{K / \QQ}\br{1} = 2 $ and $ \Tr_{K / \QQ}\br{\sqrt{d}} = 0 $, $ \disc \br{1, \sqrt{d}} = 4d $, so either $ \d_K = 4d $, and $$ \OOO_K = \ZZ\sbr{\sqrt{d}}, $$ or $ \d_K = d $, and $ \br{\OOO_K : \ZZ\sbr{\sqrt{d}}} = 2 $. This holds if and only if there exist $ m, n \in \ZZ $ not both even with $ \tfrac{m + n\sqrt{d}}{2} \in \OOO_K $, if and only if $ \tfrac{1 + \sqrt{d}}{2} \in \OOO_K $ since obviously $ \tfrac{1}{2}, \tfrac{\sqrt{d}}{2} \notin \OOO_K $, if and only if $ d \equiv 1 \mod 4 $ since the minimal polynomial of $ \tfrac{1 + \sqrt{d}}{2} $ is $ \br{T - \tfrac{1}{2}}^2 - \tfrac{d}{4} = T^2 - T - \tfrac{d - 1}{4} $, in which case $$ \OOO_K = \ZZ \oplus \ZZ\tfrac{1 + \sqrt{d}}{2} = \ZZ\sbr{\tfrac{1 + \sqrt{d}}{2}}. $$ The dual basis of $ \br{1, \sqrt{d}} $ for the trace form is $ \br{\tfrac{1}{2}, \tfrac{1}{2\sqrt{d}}} $, so $$ \DDD_{K / \QQ} = \begin{cases} \abr{2\sqrt{d}} & d \not\equiv 1 \mod 4 \\ \abr{\sqrt{d}} & d \equiv 1 \mod 4 \end{cases}. $$ Decomposition of primes by Kummer-Dedekind. \begin{itemize} \item If $ p \ne 2 $ or $ d \not\equiv 1 \mod 4 $ then $ p \nmid \br{\OOO_K : \ZZ\sbr{\sqrt{d}}} $. So applying the criterion to $ T^2 - d $, see that \begin{itemize} \item $ \abr{p} = \ppp^2 $ is ramified if $ p \mid d $, so $ \ppp = \abr{p, \sqrt{d}} $, \item $ \abr{p} = \ppp $ is inert if $ \br{\tfrac{d}{p}} = -1 $, and \item $ \abr{p} = \ppp\ppp' $ is split if $ \br{\tfrac{d}{p}} = 1 $, so if $ d \equiv a^2 \mod p $ then $ \ppp = \abr{p, \sqrt{d} - a} \ne \abr{p, \sqrt{d} + a} = \ppp' $. \end{itemize} \item The remaining case is $ p = 2 $ and $ d \equiv 1 \mod 4 $. Factoring $ T^2 - T - \tfrac{d - 1}{4} $ modulo two, get \begin{itemize} \item $ \abr{2} $ is inert if $ d \equiv 5 \mod 8 $, and \item $ \abr{2} = \ppp\ppp' $ is split if $ d \equiv 1 \mod 8 $ and $ \ppp = \abr{2, \tfrac{\sqrt{d} + 1}{2}} \ne \abr{2, \tfrac{\sqrt{d} - 1}{2}} = \ppp' $. \end{itemize} \end{itemize} Go through the calculations if you have not seen them before. \footnote{Exercise} \pagebreak \section{Example: cyclotomic fields} Recall some Galois theory. Let $ n > 1 $, and let $ K $ be a field of characteristic zero or characteristic $ p \nmid n $. Suppose $ L = K\br{\zeta_n} $, where $ \zeta_n \in L $ is a primitive $ n $-th root of unity, that is $ \zeta_n^m \ne 1 $ for all $ 1 \le m < n $. Equivalently, $ \zeta_n $ is a root of the $ n $-th cyclotomic polynomial $ \Phi_n \in \ZZ\sbr{T} $ of degree $ \phi\br{n} $, defined recursively by $$ T^n - 1 = \prod_{d \mid n} \Phi_d\br{T}. 
$$ Then $ L / K $ is Galois, with abelian Galois group, and $$ \function{\Gal\br{L / K}}{\br{\ZZ / n\ZZ}^\times}{g}{\text{unique} \ a \mod n \ \text{such that} \ g\br{\zeta_n} = \zeta_n^a} $$ is an injective homomorphism. \begin{theorem} \label{thm:5.1} Let $ L = \QQ\br{\zeta_n} $ for $ n $ odd or $ 4 \mid n $. Then \begin{enumerate} \item $ \Gal\br{L / \QQ} \xrightarrow{\sim} \br{\ZZ / n\ZZ}^\times $, \item $ p $ ramifies in $ L $ if and only if $ p \mid n $, and \item $ \OOO_L = \ZZ\sbr{\zeta_n} $. \end{enumerate} \end{theorem} \begin{remark*} $ 1 $ if and only if $ \Phi_n $ is irreducible over $ \QQ $, if and only if $ \sbr{L : \QQ} = \phi\br{n} $. \end{remark*} \begin{proof} Let $ n = p^rm $ for $ p $ prime, $ r \ge 1 $, and $ p \nmid m $, so $ r \ge 2 $ if $ p = 2 $. Let $ \zeta_m = \zeta_n^{p^r} $ and $ \zeta_{p^r} = \zeta_n^m $. Then there exist $ a, b \in \ZZ $ such that $ p^ra + mb = 1 $, so $ \zeta_n = \zeta_m^a\zeta_{p^r}^b $. Let $ K = \QQ\br{\zeta_m} $. Then $ L = K\br{\zeta_{p^r}} $. Will prove that \begin{itemize} \item $ \Phi_{p^r} $ is irreducible over $ K $, \item if $ v \in \V_{K, \f} $ and $ v \nmid p $ then $ v $ is unramified in $ L / K $, \item if $ v \mid p $ then $ v $ is totally ramified in $ L / K $, since $ p^r \ge 3 $ so $ L \ne K $, and \item $ \OOO_L = \OOO_K\sbr{\zeta_{p^r}} $. \end{itemize} This proves \ref{thm:5.1} by induction on $ n $. For a place $ w $ of $ L $, write $ x_w \in L_w $ for the image of $ \zeta_{p^r} $ under $ L \hookrightarrow L_w $. Suppose $ v \mid p $. By induction, $ p $ is unramified in $ K / \QQ $, so $ v\br{p} = 1 $. Then $$ \Phi_{p^r}\br{T + 1} = \dfrac{\br{T + 1}^{p^r} - 1}{\br{T + 1}^{p^{r - 1}} - 1} $$ is an Eisenstein polynomial in $ \OOO_{K_v}\sbr{T} $. Indeed $ \Phi_{p^r}\br{T + 1} \equiv T^{p^{r - 1}\br{p - 1}} \mod p $, and the constant coefficient is $ p $, so has valuation one. Then from local fields, \begin{itemize} \item $ \Phi_{p^r} $ is irreducible over $ K_v $, hence over $ K $, \item $ L / K $ is totally ramified at $ v $, and \item if $ w $ is the unique place of $ L $ over $ v $, then $ \OOO_{L_w} = \OOO_{K_v}\sbr{\pi_w} $ where $ \pi_w = x_w - 1 $ is the root of $ \Phi_{p^r}\br{T + 1} $ in $ L_w $. \end{itemize} Now let $ v \mid q \ne p $. Then $ \Phi_{p^r} $ is separable modulo $ q $. Have $$ K_v \otimes_K L \cong \prod_{w \mid v} L_w = \prod_{w \mid v} K_v\br{x_w}. $$ Let $ f_w \in \OOO_{K_v}\sbr{T} $ be the minimal polynomial of $ x_w $ over $ K_v $. Then \begin{itemize} \item $ \prod_{w \mid v} f_w = \Phi_{p^r} $, so the reduction of $ f_w $ at $ v $ is separable, hence $ L_w / K_v $ is unramified, and \item by local fields again, $ \OOO_{L_w} = \OOO_{K_v}\sbr{x_w} $. \end{itemize} \pagebreak Thus for all $ v \in \V_{K, \f} $, $$ \OOO_{K_v} \otimes_{\OOO_K} \OOO_K\sbr{\zeta_{p^r}} \cong \OOO_{K_v}\sbr{T} / \abr{\Phi_{p^r}} \cong \prod_{w \mid v} \OOO_{K_v}\sbr{T} / \abr{f_w} = \prod_{w \mid v} \OOO_{L_w} \cong \OOO_{K_v} \otimes_{\OOO_K} \OOO_L, $$ by CRT, so must have $ \OOO_K\sbr{\zeta_{p^r}} = \OOO_L $. \end{proof} Recall Frobenius elements. Let $ L / K $ be a Galois extension of number fields, let $ w \mid v $ be finite places, and let $ G = \Gal\br{L / K} \supset G_w \cong \Gal\br{L_w / K_v} $ be the decomposition group of $ w $. Then $$ 1 \to \I_w \to G_w \to \Gal\br{\ell_w / \kappa_v} \to 1, $$ where $ \I_w $ is the inertia subgroup. Suppose $ w $ is unramified in $ L / K $, equivalently $ v $ is unramified in $ L / K $. Then $ \I_w = 1 $.
Define the \textbf{Frobenius} at $ w $ to be the unique element $ \sigma_w \in G_w $ mapping to the generator $ x \mapsto x^{\q_v} $ of $ \Gal\br{\ell_w / \kappa_v} $. So $ \ord \sigma_w = \f\br{w \mid v} = \sbr{\ell_w : \kappa_v} = \sbr{\ell_{w'} : \kappa_v} $ for any $ w' \mid v $, as $ G $ acts transitively on $ \cbr{w' \st w' \mid v} $. In particular, $ \sigma_w = 1 $ if and only if $ v $ splits completely in $ L / K $, that is there exist $ \sbr{L : K} $ places of $ L $ over $ v $. Suppose $ G $ is abelian. Then $ G_w $ and $ \sigma_w $ are independent of $ w $, so depend only on $ v $. \begin{notation*} $ \sigma_v = \sigma_{L / K, v} = \sigma_w $ is the \textbf{arithmetic Frobenius} at $ v $. There are other notations, such as $ \phi_{L / K, v} $ or $ \br{v, L / K} $, the \textbf{norm residue symbol}. \end{notation*} \begin{remark*} Let $ L / F / K $ where $ L / K $ is abelian. Then $ \eval{\sigma_{L / K}}_F = \sigma_{F / K} $ by definition. \end{remark*} Let $ L = \QQ\br{\zeta_n} $, let $ K = \QQ $, and let $ n > 2 $. Have an isomorphism $$ \function[\lambda]{\br{\ZZ / n\ZZ}^\times}{\Gal\br{L / \QQ}}{a \mod n}{\br{\zeta_n \mapsto \zeta_n^a}}. $$ Claim that if $ p \nmid n $, $$ \sigma_p = \sigma_{L / \QQ, p} = \lambda\br{p \mod n} = \br{\zeta_n \mapsto \zeta_n^p} \in \Gal\br{L / \QQ}. $$ Indeed, $ \sigma_p $ is characterised by the property that for all $ v \mid p $, it induces $ x \mapsto x^p $ on the residue field $ \ZZ\sbr{\zeta_n} / \ppp_v $, whereas $ \lambda\br{p} $ induces $ x \mapsto x^p $ over $ \ZZ\sbr{\zeta_n} / \abr{p} $. \lecture{8}{Saturday}{06/02/21} \begin{remark*} \hfill \begin{itemize} \item These elements $ \sigma_p $ generate $ \Gal\br{L / \QQ} $, since every integer prime to $ n $ is a product of $ p \nmid n $, which gives, with some thought, another proof that $ \Gal\br{L / \QQ} \cong \br{\ZZ / n\ZZ}^\times $. \item If $ \sigma : L \hookrightarrow \CC $ is any embedding, then $ \overline{\sigma\br{\zeta_n}} = \sigma\br{\zeta_n^{-1}} $. So $ \lambda\br{-1 \mod n} $ is complex conjugation, for any $ \sigma : L \hookrightarrow \CC $. \end{itemize} \end{remark*} Specialise to the case where $ n = q > 2 $ is prime. Then $ \Gal\br{L / \QQ} = \br{\ZZ / q\ZZ}^\times $ is cyclic of order $ q - 1 $, so has a unique index two subgroup $ H \cong \br{\br{\ZZ / q\ZZ}^\times}^2 $. Let $ K = L^H $ be a quadratic extension of $ \QQ $. Every $ p \ne q $ is unramified in $ L $, hence also in $ K $. So $ K = \QQ\br{\sqrt{\pm q}} $, and as $ \abr{2} $ is unramified in $ K $, must have $$ K = \QQ\br{\sqrt{q^*}}, \qquad q^* = \begin{cases} q & q \equiv 1 \mod 4 \\ -q & q \equiv 3 \mod 4 \end{cases}, \qquad \d_K = q^*. $$ Now let $ p \ne q $ be an odd prime. Then $$ \sigma_{K / \QQ, p} = 1 \qquad \iff \qquad \sigma_{L / \QQ, p} = \lambda\br{p} \in H \qquad \iff \qquad \br{\tfrac{p}{q}} = 1. $$ But $$ \sigma_{K / \QQ, p} = 1 \qquad \iff \qquad p \ \text{splits completely in} \ K \qquad \iff \qquad \br{\tfrac{q^*}{p}} = 1. $$ That is, $ \br{\tfrac{p}{q}} = \br{\tfrac{q^*}{p}} $. Combine with $ \br{\tfrac{-1}{p}} = \br{-1}^{\br{p - 1} / 2} $ to get the quadratic reciprocity law. In algebraic number theory, quadratic reciprocity says that splitting of $ p $ in $ K / \QQ $ depends only on the congruence class of $ p $ modulo the discriminant $ \d_K $. Class field theory tells us that a similar thing holds for any abelian extension of number fields, since there is a law describing the decomposition of primes in an abelian extension which is just a congruence condition.
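As a sanity check on the law $ \br{\tfrac{p}{q}} = \br{\tfrac{q^*}{p}} $ derived above, small cases can be verified by hand. \begin{example*} Take $ q = 13 $ and $ p = 5 $, so $ q^* = 13 $. The squares modulo $ 13 $ are $ 1, 3, 4, 9, 10, 12 $, so $ \br{\tfrac{5}{13}} = -1 $, while $ 13 \equiv 3 \mod 5 $ and the squares modulo $ 5 $ are $ 1, 4 $, so $ \br{\tfrac{13}{5}} = -1 $, in agreement. Take instead $ q = 7 $ and $ p = 3 $, so $ q^* = -7 $. The squares modulo $ 7 $ are $ 1, 2, 4 $, so $ \br{\tfrac{3}{7}} = -1 $, while $ -7 \equiv 2 \mod 3 $ and the only square modulo $ 3 $ is $ 1 $, so $ \br{\tfrac{-7}{3}} = -1 $, again in agreement. \end{example*}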
\pagebreak \section{Ideles and adeles} To study congruences modulo $ p^n $ for $ n \ge 1 $, Hensel introduced $ \ZZ_p $ and $ \QQ_p $, with $ \QQ \hookrightarrow \QQ_p $. For congruences to arbitrary moduli, or to study local-global problems in general, it would be nice to simultaneously embed $ \QQ \hookrightarrow \QQ_p $ for all $ p \le \infty $, which are locally compact. The first guess is $ \QQ \hookrightarrow \prod_{p \le \infty} \QQ_p $, but this product is not nice, for example not locally compact. Better is to notice that if $ x \in \QQ $, then the image of $ x $ lies in $ \ZZ_p $ for all but finitely many $ p $. So Chevalley introduced a restricted product with better properties, for any number field $ K $, the ring of adeles or valuation vectors $ \AA_K $ of $ K $ and the group of ideles $ \JJ_K = \AA_K^\times $ of $ K $. These are topological rings and groups respectively. They are highly disconnected, that is, they have plenty of open subgroups. Open subgroups are closed, so if $ H \subset G $ is an open subgroup, then $ G / H $ is discrete, that is $ G = \bigsqcup_x xH $ is a topological disjoint union. \subsection{Adeles} Let $ K $ be a number field, let $ \V_K = \V_{K, \infty} \sqcup \V_{K, \f} $, and let $ K_v $ be its completions. If $ v \in \V_{K, \f} $, have $ \OOO_v = \OOO_{K_v} = \cbr{x \st \abs{x}_v \le 1} \subset K_v $. \begin{definition*} The \textbf{adele ring} of $ K $ is $$ \AA_K = \cbr{\br{x_v} \in \prod_{v \in \V_K} K_v \st \text{for all but finitely many} \ v, \ x_v \in \OOO_v} = \bigcup_{\text{finite} \ S \subset \V_{K, \f}} \U_{K, S} \subset \prod_{v \in \V_K} K_v, $$ where $$ \U_{K, S} = \prod_{v \in \V_{K, \infty}} K_v \times \prod_{v \in S} K_v \times \prod_{v \in \V_{K, \f} \setminus S} \OOO_v. $$ \end{definition*} \begin{notation*} Let $$ K_\infty = \prod_{v \in \V_{K, \infty}} K_v = K \otimes_\QQ \RR \cong \RR^{\r_1} \times \CC^{\r_2}. $$ \end{notation*} Then $ \AA_K $ is a ring. The topology on $ \AA_K $ is generated by all open $ V \subset \U_{K, S} $ as $ S $ varies, and where $ \U_{K, S} $ has the product topology, so $$ V = \prod_{v \in S} X_v \times \prod_{v \notin S} \OOO_{K_v}, $$ where $ S $ is finite, containing $ \V_{K, \infty} $, and $ X_v $ is open in $ K_v $. This means in particular that every $ \U_{K, S} \subset \AA_K $ is open, so $$ \U_{K, \emptyset} = K_\infty \times \prod_{v \in \V_{K, \f}} \OOO_v = K_\infty \times \widehat{\OOO_K}, $$ where $ \widehat{\OOO_K} $ is the profinite completion, is open and has the product topology. This completely determines the topology on $ \AA_K $. See example sheet $ 1 $ exercise $ 1 $(ii). \begin{example*} Let $ K = \QQ $. Then $$ \AA_\QQ = \RR \times \cbr{\br{x_p}_p \in \prod_{p < \infty} \QQ_p \st \text{for all but finitely many} \ p, \ x_p \in \ZZ_p}. $$ So, letting $ m \in \ZZ_{> 0} $ be the product of the denominators $ p^i $ of the $ x_p $, see that $ m\br{x_p}_p \in \prod_{p < \infty} \ZZ_p = \widehat{\ZZ} $, that is $ \br{x_p}_p \in \br{1 / m}\widehat{\ZZ} \subset \prod_p \QQ_p $. Let \footnote{Exercise: easy} $$ \widehat{\QQ} = \bigcup_{m \ge 1} \dfrac{1}{m}\widehat{\ZZ} \cong \widehat{\ZZ} \otimes_\ZZ \QQ. $$ Then $ \AA_\QQ = \RR \times \widehat{\QQ} $. \end{example*} \pagebreak \begin{proposition} $ \AA_K $ is Hausdorff and locally compact, so every point has a compact neighbourhood. \end{proposition} \begin{proof} $ \U_{K, \emptyset} $ is Hausdorff, and is locally compact, since $ K_\infty $ is locally compact and $ \widehat{\OOO_K} $ is compact, and it is an open neighbourhood of zero.
So by translation, $ \AA_K $ is Hausdorff and locally compact. \end{proof} There is a diagonal embedding $ K \hookrightarrow \AA_K $. \begin{proposition} $ K $ is discrete in $ \AA_K $. \end{proposition} \begin{proof} Find a neighbourhood of zero containing only $ 0 \in K $. Let $$ U = \cbr{x = \br{x_v} \in \AA_K \st \begin{array}{l} \forall v \in \V_{K, \f}, \ \abs{x_v}_v \le 1 \\ \forall v \in \V_{K, \infty}, \ \abs{x_v}_v < 1 \end{array}}. $$ Then $ U \subset \AA_K $ is open. If $ x \in K \cap U $, then $ \abs{x_v}_v \le 1 $ for all $ v \nmid \infty $ implies that $ x \in \OOO_K $, and $ \abs{x_v}_v < 1 $ for all $ v \mid \infty $ implies that $ \abs{\N_{K / \QQ}\br{x}} < 1 $, that is $ x = 0 $. So zero is isolated in $ K $. Thus $ K $ is discrete. \end{proof} \lecture{9}{Tuesday}{09/02/21} Let $ L / K $ be an extension of number fields. For all $ v \in \V_K $, $ K_v \hookrightarrow \prod_{w \mid v} L_w $ induces an inclusion of rings $ \AA_K \hookrightarrow \AA_L $, which is visibly continuous. \begin{proposition} \label{prop:6.3} Let $ \br{a_1, \dots, a_n} $ be a $ K $-basis for $ L $. Consider $$ \begin{array}{ccccc} \AA_K^n & \xrightarrow{f} & \AA_K \otimes_K L & \xrightarrow{g} & \AA_L \\ \displaystyle\br{x^{\br{i}}}_{1 \le i \le n} & \longmapsto & \displaystyle\sum_i x^{\br{i}} \otimes a_i & \longmapsto & \displaystyle\sum_i a_ix^{\br{i}} \end{array}, $$ viewing $ x^{\br{i}} \in \AA_K \hookrightarrow \AA_L $ as above. Then $ g $ is a ring isomorphism, $ f $ is an $ \AA_K $-module isomorphism, and $ g \circ f $ is a homeomorphism. This then defines a unique topology on $ \AA_K \otimes_K L $ such that $ g $ is an isomorphism of topological rings. \end{proposition} \begin{proof} Since $ L = \bigoplus_i Ka_i \cong K^n $, $ f $ is an $ \AA_K $-module isomorphism. By definition, $ g $ is a ring homomorphism. So it suffices to prove $ g \circ f $ is bijective, and that it maps $ X^n = \br{K_\infty \times \widehat{\OOO_K}}^n $ homeomorphically to an open subgroup of $ \AA_L $. Note that multiplication by any $ x \in K^\times $ is a homeomorphism of $ \AA_K $ with itself, since the inverse is multiplication by $ x^{-1} $. Similarly for $ \AA_L $. So may replace $ \br{a_i} $ by non-zero $ K $-multiples, so without loss of generality, $ a_i \in \OOO_L $. Let $$ S = \cbr{v \in \V_{K, \f} \st v\br{\br{\OOO_L : \sum_i a_i\OOO_K}} > 0} $$ be a finite subset of $ \V_{K, \f} $. Then for all $ v \in \V_{K, \f} \setminus S $, $$ \br{a_i} : \OOO_{K_v}^n \xrightarrow{\sim} \OOO_{K_v} \otimes_{\OOO_K} \OOO_L \cong \prod_{w \mid v} \OOO_{L_w}, $$ and for all $ v \in S $, $ \sum_i a_i\OOO_{K_v} = M_v $ is an open $ \OOO_{K_v} $-submodule of $ \prod_{w \mid v} \OOO_{L_w} $. Then $$ g \circ f : \br{K_\infty \times \widehat{\OOO_K}}^n \xrightarrow{\sim} L_\infty \times \prod_{v \notin S} \prod_{w \mid v} \OOO_{L_w} \times \prod_{v \in S} M_v $$ is a homeomorphism onto an open subgroup in $ \AA_L $. Moreover, for any finite $ S' \supset S \cup \V_{K, \infty} $, $$ g \circ f : \U_{K, S'}^n = \br{\prod_{v \in S'} K_v \times \prod_{v \notin S'} \OOO_{K_v}}^n \xrightarrow{\sim} \prod_{w \mid v \in S'} L_w \times \prod_{w \mid v \notin S'} \OOO_{L_w}. $$ So $ g \circ f $ is bijective. \end{proof} In particular, $ \AA_K = \AA_\QQ \otimes_\QQ K $.
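Concretely, the isomorphism of \ref{prop:6.3} assembles the local isomorphisms $ K_v \otimes_K L \cong \prod_{w \mid v} L_w $ at all places simultaneously. \begin{example*} Let $ K = \QQ $ and $ L = \QQ\br{i} $, so $ \AA_{\QQ\br{i}} \cong \AA_\QQ \otimes_\QQ \QQ\br{i} = \AA_\QQ \oplus \AA_\QQ i $. Place by place this recovers the splitting of primes in $ \ZZ\sbr{i} $: for $ p \equiv 1 \mod 4 $, $ \QQ_p \otimes_\QQ \QQ\br{i} \cong \QQ_p \times \QQ_p $ since $ -1 $ is a square in $ \QQ_p $, so $ p $ splits, while for $ p \equiv 3 \mod 4 $, $ \QQ_p \otimes_\QQ \QQ\br{i} \cong \QQ_p\br{i} $ is an unramified field extension, so $ p $ is inert, and $ \QQ_2 \otimes_\QQ \QQ\br{i} \cong \QQ_2\br{i} $ is ramified. At the infinite place, $ \RR \otimes_\QQ \QQ\br{i} \cong \CC $, one complex place. \end{example*}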
\pagebreak \begin{corollary} $ \AA_L $ is a free $ \AA_K $-module of rank $ \sbr{L : K} $, and the diagram $$ \begin{tikzcd} \displaystyle\prod_{w \mid v} L_w \arrow[hookrightarrow]{r} \arrow{d}{\sum_w \Tr_{L_w / K_v}} & \AA_L \arrow{d}{\Tr_{\AA_L / \AA_K}} & \AA_K \otimes_K L \arrow{l}[swap]{\sim} \arrow{d}{\id \otimes \Tr_{L / K}} & L \arrow[hookrightarrow]{l} \arrow{d}{\Tr_{L / K}} \\ K_v \arrow[hookrightarrow]{r} & \AA_K & \AA_K \otimes_K K \arrow{l}{\sim} & K \arrow[hookrightarrow]{l} \end{tikzcd} $$ commutes, where the left hand inclusions are $$ \br{x_w}_{w \mid v} \mapsto \br{y_w}, \qquad y_w = \begin{cases} x_w & w \mid v \\ 0 & \text{otherwise} \end{cases}. $$ \end{corollary} \begin{proof} Exercise. \footnote{Exercise} \end{proof} \begin{theorem} $ \AA_K / K $ is compact Hausdorff. \end{theorem} \begin{proof} Since $ K $ is discrete in $ \AA_K $ and $ \AA_K $ is Hausdorff, $ K $ is closed in $ \AA_K $, so $ \AA_K / K $ is Hausdorff. By \ref{prop:6.3}, $ \AA_K / K \cong \br{\AA_\QQ / \QQ}^{\sbr{K : \QQ}} $ as topological groups, so may assume $ K = \QQ $. Let $ X = \sbr{0, 1} \times \widehat{\ZZ} \subset \AA_\QQ $. Then $ X $ is compact. So it is enough to show that $ X + \QQ = \AA_\QQ $, as then $ X \twoheadrightarrow \AA_\QQ / \QQ $. Let $ x = \br{x_p}_{p \le \infty} \in \AA_\QQ $. Let $$ S = \cbr{p < \infty \st x_p \notin \ZZ_p} $$ be a finite set. There exists $ r_p \in \ZZ\sbr{1 / p} $ such that $ x_p - r_p \in \ZZ_p $ for all $ p \in S $. Let $ r = \sum_{p \in S} r_p \in \QQ $. For all $ p < \infty $, $ x_p - r \in \ZZ_p $, that is $ x - r \in \RR \times \widehat{\ZZ} $, and then for suitable $ m \in \ZZ $, $ x - \br{r + m} \in \sbr{0, 1} \times \widehat{\ZZ} $. \end{proof} From \ref{prop:6.3} also get $ \AA_K = K_\infty \times \widehat{K} $ where $$ \widehat{K} = \widehat{\OOO_K} \otimes_\ZZ \QQ = \widehat{\OOO_K} \otimes_{\OOO_K} K, $$ where $ \widehat{\OOO_K} \cong \prod_\ppp \widehat{\OOO_{K, \ppp}} = \prod_{v \nmid \infty} \OOO_{K_v} $ is the profinite completion of $ \OOO_K $. \subsection{Ideles} \begin{definition*} The \textbf{idele group} of $ K $ is the group of units of $ \AA_K $, $$ \JJ_K = \AA_K^\times = \cbr{\br{x_v} \in \prod_{v \in \V_K} K_v^\times \st \text{for all but finitely many finite} \ v, \ x_v \in \OOO_v^\times} = \bigcup_{\text{finite} \ S \subset \V_{K, \f}} \JJ_{K, S}, $$ where $$ \JJ_{K, S} = K_\infty^\times \times \prod_{v \in S} K_v^\times \times \prod_{v \in \V_{K, \f} \setminus S} \OOO_v^\times. $$ \end{definition*} The topology on $ \JJ_K $ is generated by open subsets of $ \JJ_{K, S} $, as $ S $ varies, and $ \JJ_{K, S} $ is given the product topology. In particular, $ K_\infty^\times \times \prod_{v \nmid \infty} \OOO_v^\times $ is an open subgroup, and has the product topology. \begin{remark*} $ \JJ_K \hookrightarrow \AA_K $ is continuous, by the definitions, but is not a homeomorphism onto its image, since $ x \mapsto x^{-1} $ on $ \AA_K^\times $ is not continuous for the $ \AA_K $-topology, by example sheet $ 1 $ exercise $ 8 $, but $$ \function{\JJ_K}{\AA_K \times \AA_K}{x}{\br{x, x^{-1}}} $$ is a homeomorphism of $ \JJ_K $ onto the closed subset $ \cbr{xy = 1} \subset \AA_K^2 $. In geometry, $ \GL_n K \subset \AA^{n^2} $ and $$ \function{\GL_n K}{\AA^{n^2 + 1}}{\br{a_{ij}}}{\br{a_{ij}, \det \br{a_{ij}}^{-1}}} $$ has closed image. \end{remark*} Then $ K^\times \hookrightarrow \JJ_K $ since if $ x \in K^\times $ then $ \abs{x}_v = 1 $ for all but finitely many $ v $. 
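For example, for $ K = \QQ $: \begin{example*} The diagonal image of $ 5 \in \QQ^\times $ in $ \JJ_\QQ $ has $ \abs{5}_\infty = 5 $, $ \abs{5}_5 = \tfrac{1}{5} $ and $ \abs{5}_p = 1 $ for all $ p \ne 5, \infty $, and the product of these absolute values is $ 1 $, as the product formula demands. By contrast, the idele with component $ 5 $ at $ \infty $ and $ 1 $ at every finite place lies in $ \JJ_\QQ $ but is not in the image of $ \QQ^\times $, since the product of its absolute values is $ 5 \ne 1 $. \end{example*}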
The image of $ K^\times $ is called the \textbf{subgroup of principal ideles}, which is discrete, since $ \JJ_K \hookrightarrow \AA_K $ is continuous and $ K \subset \AA_K $ is discrete. \pagebreak \lecture{10}{Thursday}{11/02/21} \begin{definition*} The \textbf{idele class group} of $ K $ is $$ \CCC_K = \JJ_K / K^\times. $$ \end{definition*} This is a Hausdorff and locally compact topological group. There are two important homomorphisms. \begin{definition*} Let $ x = \br{x_v} \in \JJ_K $. Then for all $ v $, $ \abs{x_v}_v \ne 0 $, and for all but finitely many $ v $, $ \abs{x_v}_v = 1 $. So can define the \textbf{idele norm} homomorphism $$ \function[\abs{\cdot}_\AA]{\JJ_K}{\RR_{> 0}}{\br{x_v}}{\prod_{v \in \V_K} \abs{x_v}_v}. $$ \end{definition*} This is continuous, since the restriction to $ \JJ_{K, S} $ is $ \prod_v \abs{\cdot}_v \circ \pi : \JJ_{K, S} \to \prod_{v \in S \cup \V_{K, \infty}} K_v^\times \to \RR_{> 0} $. Clearly $ \abs{\cdot}_\AA $ is surjective, since $ K_\infty^\times \subset \JJ_K $. A key fact is that for all $ x \in K^\times $, $ \abs{x}_\AA = 1 $ by the product formula, so $ \abs{\cdot}_\AA $ factors as $ \JJ_K \to \CCC_K \to \RR_{> 0} $. \begin{definition*} Let $$ \JJ_K^1 = \cbr{x \in \JJ_K \st \abs{x}_\AA = 1}, \qquad \CCC_K^1 = \JJ_K^1 / K^\times. $$ \end{definition*} \begin{proposition} $$ \JJ_K \cong \JJ_K^1 \times \RR_{> 0}, \qquad \CCC_K \cong \CCC_K^1 \times \RR_{> 0}. $$ \end{proposition} \begin{proof} Have $ \abs{\cdot}_\AA : \JJ_K \twoheadrightarrow \RR_{> 0} $. Consider $$ \function[\i]{\RR_{> 0}}{K_\infty^\times \subset \JJ_K}{x}{\br{x^{\tfrac{1}{n}}}_{v \mid \infty}}. $$ Because $ \abs{x}_v $ is the Euclidean AV if $ v $ is real and the square of the modulus if $ v $ is complex, this homomorphism is a right inverse to $ \abs{\cdot}_\AA $. So $ \i $ defines a splitting $ \JJ_K \cong \JJ_K^1 \times \RR_{> 0} $. As $ \i\br{\RR_{> 0}} \cap K^\times = 1 $, also have $ \CCC_K \cong \CCC_K^1 \times \RR_{> 0} $. \end{proof} Recall $ \ppp_v $ is the prime ideal corresponding to a finite place $ v $. Write $ v $ also for the corresponding normalised discrete valuation. \begin{definition*} The \textbf{content map} is $$ \function[\c]{\JJ_K}{\I\br{K}}{\br{x_v}}{\prod_{v \in \V_{K, \f}} \ppp_v^{v\br{x_v}}}, $$ where $$ \I\br{K} = \cbr{\text{group of fractional ideals of} \ K} \cong \cbr{\text{free abelian group generated by} \ \V_{K, \f}}. $$ \end{definition*} This is a continuous homomorphism, for the discrete topology on $ \I\br{K} $, since $ \ker \c = \JJ_{K, \emptyset} = K_\infty^\times \times \prod_{v \nmid \infty} \OOO_v^\times $ is open. If $ x \in K^\times $ then $ \c\br{x} $ is the principal fractional ideal $ \abr{x} $. So $ \c $ descends to a homomorphism $$ \c : \CCC_K = \JJ_K / K^\times \to \Cl\br{K} = \I\br{K} / \P\br{K}, $$ where $ \P\br{K} $ is the group of principal fractional ideals. Then $ \c $ is clearly surjective, since $ v : K_v^\times \twoheadrightarrow \ZZ $. So $ \CCC_K \twoheadrightarrow \Cl\br{K} $. As $ \c \circ \i : \RR_{> 0} \to \Cl\br{K} $ is zero, have a continuous surjection $ \CCC_K^1 \twoheadrightarrow \Cl\br{K} $. We will prove below that $ \CCC_K^1 $ is compact. A corollary is that $ \Cl\br{K} $ is finite, since compact and discrete. The following is a variant. \begin{definition*} Let $ S \subset \V_{K, \f} $ be a finite subset. Define $$ \function[\c^S]{\JJ_K}{\I^S\br{K}}{\br{x_v}}{\prod_{v \in \V_{K, \f} \setminus S} \ppp_v^{v\br{x_v}}}, $$ where $$ \I^S\br{K} = \cbr{\text{fractional ideals prime to} \ S} = \cbr{I \st \forall v \in S, \ v\br{I} = 0}.
$$ \end{definition*} This will be useful later on. \pagebreak \section{Geometry of numbers} \subsection{Minkowski's theorem} Classically, embed $$ \sigma : K \hookrightarrow K_\infty = \prod_{v \mid \infty} K_v \cong \RR^{\r_1} \times \CC^{\r_2} \cong \RR^n, $$ and study the image $ \sigma\br{I} \subset \RR^n $ for $ I $ a fractional ideal. \begin{definition*} Let $ U $ be a finite-dimensional real vector space. A \textbf{lattice} $ \Lambda \subset U $ is a discrete subgroup such that $ U / \Lambda $ is compact. \end{definition*} \begin{proposition} A subgroup $ \Lambda \subset U $ is a lattice if and only if $ \Lambda = \bigoplus_{1 \le i \le n} \ZZ e_i $, where $ \br{e_i} $ is an $ \RR $-basis for $ U $, with $ n = \dim_\RR U $. \end{proposition} \begin{proof} Example sheet. \end{proof} \begin{theorem}[Minkowski's theorem] \label{thm:7.2} Let $ \Lambda \subset \RR^n $ be a lattice, and let $ \mu_\Lambda = \meas\br{\RR^n / \Lambda} $, the \textbf{covolume} of $ \Lambda $. Let $ X \subset \RR^n $ be a compact subset, which is \begin{itemize} \item convex, that is if $ t \in \sbr{0, 1} $ and $ x, y \in X $ then $ tx + \br{1 - t}y \in X $, and \item symmetric about the origin, that is if $ x \in X $ then $ -x \in X $. \end{itemize} If $ \meas\br{X} > 2^n\mu_\Lambda $, then $ X \cap \Lambda \ne \cbr{0} $. \end{theorem} \begin{remark*} $ \RR^n $ has a Lebesgue measure, and $ \meas\br{X} $ is the measure of $ X $. The Lebesgue measure defines a measure on $ \RR^n / \Lambda $, and $ \mu_\Lambda $ is the measure of $ \RR^n / \Lambda $. Naively, if $ \Lambda = \bigoplus_i \ZZ e_i $ for $ \br{e_i} $ linearly independent over $ \RR $ and $ \PPP = \cbr{\sum_i x_ie_i \st 0 \le x_i < 1} $, then $ \PPP $ is a set of coset representatives for $ \Lambda \subset \RR^n $, and $ \mu_\Lambda = \meas\br{\PPP} = \abs{\det \br{e_{ij}}} $, which is independent of the basis. \end{remark*} \begin{proof} Let $ \pi : \RR^n \to \RR^n / 2\Lambda $. Then $$ \meas\br{\pi\br{X}} \le \meas\br{\RR^n / 2\Lambda} = 2^n\meas\br{\RR^n / \Lambda} < \meas\br{X}. $$ So $ X \to \pi\br{X} $ is not one-to-one, so there exist $ x \ne y $ in $ X $ such that $ x - y = 2\lambda \in 2\Lambda $. Then $ 0 \ne \lambda = \br{x - y} / 2 = \tfrac{1}{2}x + \tfrac{1}{2}\br{-y} \in X $ as $ -y \in X $, by symmetry, and $ X $ is convex. \end{proof} \begin{theorem} \label{thm:7.3} There exists a constant $ \r_K > 0 $ such that, if $ \br{d_v}_{v \in \V_K} $ are positive reals with \begin{itemize} \item $ d_v \in \abs{K_v^\times}_v = \cbr{\abs{x}_v \st x \in K_v^\times} \subset \RR_{> 0} $ for all $ v $, \item $ d_v = 1 $ for all but finitely many $ v $, and \item $ \prod_{v \in \V_K} d_v > \r_K $, \end{itemize} then $ \cbr{x \in K \st \forall v, \ \abs{x}_v \le d_v} \ne \cbr{0} $. \end{theorem} \begin{proof} For $ v \nmid \infty $, write $ d_v = \q_v^{-n_v} $ for $ n_v \in \ZZ $. Let $$ I = \cbr{x \in K \st \forall v \nmid \infty, \ \abs{x}_v \le d_v} = \prod_v \ppp_v^{n_v} $$ be a fractional ideal of $ K $. Then $ mI \subset \OOO_K $ for some integer $ m > 0 $, so \begin{equation} \label{eq:4} \mu_{\sigma\br{I}} = m^{-n}\mu_{\sigma\br{mI}} = m^{-n}\mu_{\sigma\br{\OOO_K}}\br{\sigma\br{\OOO_K} : \sigma\br{mI}} = m^{-n}\mu_{\sigma\br{\OOO_K}}\N\br{mI} = \mu_{\sigma\br{\OOO_K}}\prod_v \q_v^{n_v}, \end{equation} and $ \sigma\br{I} $ is a lattice in $ \RR^n $, by the non-vanishing of the discriminant.
Let $$ X = \cbr{x \in \prod_{v \mid \infty} K_v \cong \RR^n \st \forall v, \ \abs{x_v}_v \le d_v} = \prod_{v \ \text{real}} \sbr{-d_v, d_v} \times \prod_{v \ \text{complex}} \cbr{\abs{z}^2 \le d_v} \subset K_\infty \cong \RR^{\r_1} \times \CC^{\r_2}. $$ \pagebreak This is convex, compact, symmetric, and $$ \meas\br{X} = 2^{\r_1}\pi^{\r_2}\prod_{v \mid \infty} d_v > 2^n\prod_{v \nmid \infty} d_v^{-1}\mu_{\sigma\br{\OOO_K}} = 2^n\mu_{\sigma\br{I}}, $$ by $ \br{\ref{eq:4}} $, provided $$ \prod_v d_v > \r_K = \br{\dfrac{4}{\pi}}^{\r_2}\mu_{\sigma\br{\OOO_K}} = \br{\dfrac{2}{\pi}}^{\r_2}\abs{\d_K}^{\tfrac{1}{2}}. $$ Then applying \ref{thm:7.2}, $ X \cap \sigma\br{I} \ne \cbr{0} $ and any $ x \in X \cap \sigma\br{I} $ has $ \abs{x}_v \le d_v $ for all $ v $. \end{proof} This is the translation of a classical result that if $ 0 \ne I $ is an ideal then there exists $ x \in I \setminus \cbr{0} $ such that $ \abs{\N_{K / \QQ}\br{x}} < \r_K\N\br{I} $. \lecture{11}{Saturday}{13/02/21} \begin{remark*} Used Minkowski's theorem, with convex symmetric set $ X = \sbr{-d_v, d_v}^{\r_1} \times \cbr{\abs{z}^2 \le d_v}^{\r_2} $ and obtained $ \r_K = \br{\tfrac{4}{\pi}}^{\r_2}\mu_{\sigma\br{\OOO_K}} $. Using better chosen $ X $, can get a better bound, the Minkowski bound $ \c_K $, which is useful for computation. \end{remark*} \subsection{Compactness of \texorpdfstring{$ \CCC_K^1 $}{norm one idele class group}} Recall $ K^\times \subset \JJ_K^1 = \ker \br{\abs{\cdot}_\AA : \JJ_K \to \RR_{> 0}} $ is discrete. The proof is based on \ref{thm:7.3} and the following proposition. \begin{proposition} \label{prop:7.5} Let $ \rho_v > 0 $ for $ v \in \V_K $, with $ \rho_v = 1 $ for all but finitely many $ v $. Then $$ X = \cbr{x \in \JJ_K^1 \st \forall v, \ \abs{x_v}_v \le \rho_v} $$ is compact. \end{proposition} This is false for $ \JJ_K $. Note that $ \abs{x_v}_v \le \rho_v $ for all $ v $ defines a compact subset of $ \AA_K $. \begin{proof} Let $ R = \prod_v \rho_v $, and let $$ S = \V_{K, \infty} \cup \cbr{v \st \rho_v \ne 1} \cup \cbr{v \in \V_{K, \f} \st \q_v \le R} $$ be a finite set of places, since the last set is contained in $ \cbr{v \mid p \st p \le R} $, which is finite. If $ v \notin S $ and $ x \in X $, then since $ \rho_v = 1 $, $$ 1 \ge \abs{x_v}_v = \prod_{w \ne v} \abs{x_w}_w^{-1} \ge \prod_{w \ne v} \rho_w^{-1} = R^{-1}. $$ As $ \q_v > R $, this forces $ \abs{x_v}_v = 1 $. So $ X = X' \times \prod_{v \notin S} \OOO_v^\times $, where $$ X' = \cbr{\br{x_v} \in \prod_{v \in S} K_v^\times \st \prod_{v \in S} \abs{x_v}_v = 1, \ \forall v \in S, \ \abs{x_v}_v \le \rho_v}, $$ which is a closed subset of $$ X'' = \cbr{\br{x_v} \in \prod_{v \in S} K_v^\times \st \forall v \in S, \ \dfrac{\rho_v}{R} \le \abs{x_v}_v \le \rho_v}, $$ which is compact. So $ X' $ is compact, hence so is $ X $, since $ \prod_{v \notin S} \OOO_v^\times $ is compact. \end{proof} \begin{theorem} \label{thm:7.4} $ \CCC_K^1 $ is compact. \end{theorem} \begin{proof} Let $ \r_K $ be as in \ref{thm:7.3}. Pick any $ y \in \JJ_K $ with $ \abs{y}_\AA > \r_K $, and let $$ X = \cbr{x \in \JJ_K^1 \st \forall v \in \V_K, \ \abs{x_v}_v \le \abs{y_v}_v}, $$ which is compact by \ref{prop:7.5}. Show that $$ \JJ_K^1 = K^\times X = \cbr{ax \st a \in K^\times, \ x \in X}. $$ Let $ z \in \JJ_K^1 $. Then $ \prod_v \abs{y_vz_v}_v = \abs{y}_\AA > \r_K $. So by \ref{thm:7.3}, there exists $ b \in K^\times $ such that for all $ v \in \V_K $, $ \abs{b}_v \le \abs{y_vz_v}_v $. Therefore $ bz^{-1} \in X $, that is $ z^{-1} \in b^{-1}X \subset K^\times X $, and as $ z $ ranges over $ \JJ_K^1 $ so does $ z^{-1} $.
\end{proof} \pagebreak \subsection{Finiteness of \texorpdfstring{$ \Cl\br{K} $}{ideal class group} and \texorpdfstring{$ S $}{S}-unit theorem} The following are two corollaries. \begin{corollary} The ideal class group $ \Cl\br{K} $ is finite. \end{corollary} \begin{proof} $ \CCC_K^1 \twoheadrightarrow \Cl\br{K} $ by the content map, which is continuous, so $ \Cl\br{K} $ is discrete and compact, therefore finite. \end{proof} \begin{corollary}[$ S $-unit theorem] \label{cor:7.7} Let $ S \subset \V_{K, \f} $ be finite, possibly empty, and let $$ \OOO_{K, S} = \cbr{x \in K \st \forall v \in \V_{K, \f} \setminus S, \ \abs{x}_v \le 1} $$ be the \textbf{$ S $-integers} of $ K $, sometimes written $ \OOO_K\sbr{1 / S} $. Then $$ \OOO_{K, S}^\times = \mu\br{K} \times \ZZ^{\r_1 + \r_2 - 1 + \#S}, $$ where $ \mu\br{K} = \cbr{\text{roots of unity in} \ K} $ is finite. \end{corollary} The case $ S = \emptyset $ is Dirichlet's unit theorem, $$ \OOO_K^\times = \mu\br{K} \times \ZZ^{\r_1 + \r_2 - 1}. $$ \begin{proof} \hfill \begin{itemize} \item First explain the proof for $ S = \emptyset $. Recall $$ \JJ_{K, \emptyset} = K_\infty^\times \times \prod_{v \nmid \infty} \OOO_v^\times \supset K_\infty^{\times, 1} \times \prod_{v \nmid \infty} \OOO_v^\times = \JJ_{K, \emptyset}^1, \qquad K_\infty^{\times, 1} = \cbr{\br{x_v} \in K_\infty^\times \st \prod_{v \mid \infty} \abs{x_v}_v = 1}. $$ Then $ \JJ_{K, \emptyset} \cap K^\times = \JJ_{K, \emptyset}^1 \cap K^\times = \OOO_K^\times $ is discrete in $ \JJ_{K, \emptyset}^1 $ and by \ref{thm:7.4}, the closed subgroup $ \JJ_{K, \emptyset}^1 / \OOO_K^\times \subset \CCC_K^1 $ is compact. Let $$ \function[\lambda]{\JJ_{K, \emptyset}}{\LLL_K = \prod_{v \mid \infty} \RR \cong \RR^{\r_1 + \r_2}}{\br{x_v}_v}{\br{\log \abs{x_v}_v}_v} $$ be the \textbf{logarithm map}, such that $$ \lambda\br{\JJ_{K, \emptyset}^1} \subset \LLL_K^0 = \cbr{\br{l_v} \in \LLL_K \st \sum_v l_v = 0}. $$ Then $$ \ker \lambda = \cbr{\br{x_v} \in \JJ_K \st \forall v, \ \abs{x_v}_v = 1} = \cbr{\pm 1}^{\r_1} \times \U\br{1}^{\r_2} \times \prod_{v \nmid \infty} \OOO_v^\times, \qquad \U\br{1} = \cbr{z \in \CC \st \abs{z} = 1} $$ is compact. So $ \ker \lambda \cap \OOO_K^\times $ is discrete and compact, hence finite. Obviously $ \mu\br{K} \subset \ker \lambda $, so $ \mu\br{K} $ is finite and equals $ \ker \lambda \cap \OOO_K^\times $, since every element of a finite subgroup of $ K^\times $ is a root of unity. Next, show $ \lambda\br{\OOO_K^\times} \subset \LLL_K^0 \cong \RR^{\r_1 + \r_2 - 1} $ is a lattice. Then we get $$ 1 \to \mu\br{K} \to \OOO_K^\times \to \lambda\br{\OOO_K^\times} \cong \ZZ^{\r_1 + \r_2 - 1} \to 0, $$ which gives \ref{cor:7.7}. Now $$ \begin{tikzcd} \JJ_{K, \emptyset} \arrow[cong]{r} \arrow{d}[swap]{\lambda} & \displaystyle\prod_{v \mid \infty} \RR_{> 0} \times \ker \lambda \arrow[twoheadrightarrow]{d}{\pi_1} \\ \LLL_K & \displaystyle\prod_{v \mid \infty} \RR_{> 0} \arrow{l}{\log}[swap]{\sim} \end{tikzcd}, $$ \pagebreak where $ \RR_{> 0} \hookrightarrow K_v^\times \subset \CC^\times $ for all $ v \mid \infty $. Hence $ \lambda $ has the property that for all compact $ Y $ in its target, $ \lambda^{-1}\br{Y} $ is compact, so $ \lambda $ is a \textbf{proper} map. A simple fact is that if $ f : X \to Y $ is a continuous proper map of topological spaces, with $ Y $ locally compact and Hausdorff, then if $ Z \subset X $ is discrete then $ f\br{Z} $ is discrete. \footnote{Exercise: a hint is to take a compact neighbourhood $ V $ of some $ f\br{z} $ for $ z \in Z $ and use compactness of $ f^{-1}\br{V} $} Hence $ \lambda\br{\OOO_K^\times} \subset \LLL_K^0 $ is discrete.
Finally, $$ \lambda : \JJ_{K, \emptyset}^1 / \OOO_K^\times \twoheadrightarrow \LLL_K^0 / \lambda\br{\OOO_K^\times}, $$ so $ \LLL_K^0 / \lambda\br{\OOO_K^\times} $ is compact, by \ref{thm:7.4}. Thus $ \lambda\br{\OOO_K^\times} $ is a lattice. \lecture{12}{Tuesday}{16/02/21} \item For the general case, the difference is mainly notational. Let $ S_\infty = S \cup \V_{K, \infty} $, so $$ \JJ_{K, S} = \prod_{v \in S_\infty} K_v^\times \times \prod_{v \notin S_\infty} \OOO_v^\times, \qquad \LLL_{K, S} = \prod_{v \mid \infty} \RR \times \prod_{v \in S} \log \q_v\ZZ \cong \RR^{\r_1 + \r_2} \times \ZZ^{\#S}. $$ Let $$ \function[\lambda_S]{\JJ_{K, S}}{\LLL_{K, S}}{\br{x_v}_v}{\br{\log \abs{x_v}_v}_{v \in S_\infty}} $$ be the \textbf{$ S $-logarithm map}, such that $$ \lambda_S\br{\JJ_{K, S}^1} \subset \LLL_{K, S}^0 = \cbr{\br{l_v} \in \LLL_{K, S} \st \sum_v l_v = 0}. $$ Note that $ \LLL_{K, S}^0 \cong \RR^{\r_1 + \r_2 - 1} \times \ZZ^{\#S} $ since $$ \begin{tikzcd} \LLL_{K, S}^0 \arrow[twoheadrightarrow]{r}{\pi_2} & \displaystyle\prod_{v \in S} \log \q_v\ZZ \arrow[cong]{d} \\ & \ZZ^{\#S} \arrow[dashed]{ul} \end{tikzcd} $$ is surjective with kernel $ \RR^{\r_1 + \r_2 - 1} $, so there exists a splitting as $ \ZZ^{\#S} $ is free. Then $$ \ker \lambda_S \cong \cbr{\pm 1}^{\r_1} \times \U\br{1}^{\r_2} \times \prod_{v \nmid \infty} \OOO_v^\times, $$ as before, and $$ \JJ_{K, S} = \prod_{v \mid \infty} \RR_{> 0} \times \prod_{v \in S} \abr{\pi_v} \times \ker \lambda_S \cong \prod_{v \mid \infty} \RR_{> 0} \times \ZZ^{\#S} \times \ker \lambda_S, $$ where $ \pi_v \in K_v^\times $ such that $ v\br{\pi_v} = 1 $, so $ \lambda_S $ is proper and surjective. Then $ \JJ_{K, S} \cap K^\times = \JJ_{K, S}^1 \cap K^\times = \OOO_{K, S}^\times $ is discrete and closed in $ \JJ_{K, S}^1 $. As before, $ \ker \lambda_S \cap \OOO_{K, S}^\times = \mu\br{K} $, since it is discrete and compact, and $ \lambda_S\br{\OOO_{K, S}^\times} \subset \LLL_{K, S}^0 $ is discrete and cocompact. Then prove that if $ H \subset G \cong \RR^m \times \ZZ^{\#S} $ is a discrete and cocompact subgroup then $ H \cong \ZZ^{m + \#S} $. \footnote{Exercise} Then $$ 1 \to \mu\br{K} \to \OOO_{K, S}^\times \to \lambda_S\br{\OOO_{K, S}^\times} \cong \ZZ^{\r_1 + \r_2 - 1 + \#S} \to 0, $$ and so done. \end{itemize} \end{proof} Let $ T \subset \V_K $ be finite, not necessarily containing $ \V_{K, \infty} $. What can we say about the group $$ \cbr{x \in K^\times \st \forall v \notin T, \ \abs{x}_v = 1}? $$ The answer is non-trivial and depends on $ K $. See example sheet. \pagebreak \subsection{Strong approximation theorem} Recall from earlier that weak approximation implies that $ K $ is dense in any finite product of the $ K_v $'s. Also, $ K \hookrightarrow \AA_K $ is discrete. \begin{theorem}[Strong approximation] \label{thm:7.8} Let $ T \subset \V_K $ be finite, and set $$ \AA_K^T = \cbr{x = \br{x_v} \in \prod_{v \notin T} K_v \st \text{for all but finitely many} \ v, \ \abs{x_v}_v \le 1}, $$ so $ \AA_K = \prod_{v \in T} K_v \times \AA_K^T $, with the adelic topology. If $ T \ne \emptyset $, then $ K $ is dense in $ \AA_K^T $. \end{theorem} There are various ways to rewrite this. \begin{itemize} \item If $ T \ne \emptyset $, then $ K + \prod_{v \in T} K_v $ is dense in $ \AA_K $, where $ K \hookrightarrow \AA_K $ is the diagonal inclusion and $ K_v \subset \AA_K $ by $$ y \mapsto \br{x_w}, \qquad x_w = \begin{cases} y & w = v \\ 0 & w \ne v \end{cases}. $$ \end{itemize} It is enough to prove \ref{thm:7.8} for $ T = \cbr{v_0} $. Will actually prove the following.
\begin{itemize} \item Let $ S \subset \V_K $ be finite such that $ v_0 \notin S $, let $ y_v \in K_v $ for all $ v \in S $, and let $ \epsilon > 0 $. Then there exists $ x \in K $ such that \begin{itemize} \item for all $ v \in S $, $ \abs{x - y_v}_v \le \epsilon $, and \item for all $ v \notin S $ such that $ v \ne v_0 $, $ \abs{x}_v \le 1 $. \end{itemize} \end{itemize} Take $ y \in \AA_K $ with component $ y_v $ at $ v \in S $ and zero elsewhere. This is equivalent to strong approximation for $ T = \cbr{v_0} $, by definition of the topology. \begin{proof} Free to enlarge $ S $. Then by the proof of compactness of $ \AA_K / K $, there exists $ R > 0 $ such that if $$ X = \cbr{\br{x_v} \in \AA_K \st \begin{array}{l} \forall v \in S, \ \abs{x_v}_v \le R \\ \forall v \notin S, \ \abs{x_v}_v \le 1 \end{array}}, $$ then $ X + K = \AA_K $. For example, assume $ S \supset \V_{K, \infty} $ and let $ \OOO_K = \bigoplus_i \ZZ e_i $, then $ \AA_K = \bigoplus_i \AA_\QQ e_i $ and $ \AA_\QQ = \sbr{0, 1} \times \widehat{\ZZ} + \QQ $. Claim that there exists $ z \in K \setminus \cbr{0} $ such that $$ \abs{z}_v \le \begin{cases} \dfrac{\epsilon}{R} & v \in S \\ 1 & v \notin S, \ v \ne v_0 \end{cases}. $$ Apply Minkowski \ref{thm:7.3} with \begin{itemize} \item $ d_v = 1 $ for all $ v \notin S \cup \cbr{v_0} $, \item $ d_v \le \epsilon / R $ for all $ v \in S $, and \item $ d_{v_0} > \r_K\br{\prod_{v \in S} d_v}^{-1} $. \end{itemize} This defines a box in $ \AA_K $ whose intersection with $ K $ is not $ \cbr{0} $, since $ \prod_v d_v > \r_K $. Now write $ z^{-1}y = a + t $ for $ a \in X $ and $ t \in K $. Then $ x = zt = y - za $ has $$ \abs{x - y_v}_v = \abs{zt - y_v}_v = \abs{za_v}_v \le \begin{cases} \dfrac{\epsilon}{R} \cdot R = \epsilon & v \in S \\ 1 \cdot 1 = 1 & v \notin S, \ v \ne v_0 \end{cases}, $$ so done. \end{proof} A special case is $ T = \V_{K, \infty} $, so $ \AA_K^T $ are the finite adeles. Then \ref{thm:7.8} says $$ K \hookrightarrow \AA_K^T = \widehat{K} = \widehat{\OOO_K} \otimes_\ZZ \QQ $$ is dense, which is equivalent to the density of $$ \OOO_K \hookrightarrow \widehat{\OOO_K} = \prod_{v \nmid \infty} \OOO_{K_v} = \prod_{v \nmid \infty} \varprojlim_r \OOO_K / \ppp_v^r \cong \varprojlim_{I \subset \OOO_K} \OOO_K / I, $$ by CRT. So strong approximation is a generalisation of CRT. \pagebreak \section{Idele class group and class field theory} \lecture{13}{Thursday}{18/02/21} Recall if $ L = \QQ\br{\zeta_m} $, then there is an isomorphism $$ \function{\Gal\br{L / \QQ}}{\br{\ZZ / m\ZZ}^\times}{\sigma_p}{p \mod m}, \qquad p \nmid m, $$ given by the action on $ \zeta_m $. In particular, $ \sigma_p $ depends only on the congruence class of $ p \mod m $, which implies quadratic reciprocity. As $ \sigma_p $ determines the decomposition of $ \abr{p} $ in $ L $, since $ \f\br{v \mid p} = \ord \D_v = \ord \sigma_p $, this says that the decomposition of $ \abr{p} $ in $ L $ depends only on $ p \mod m $. A consequence is if $ g \in \Gal\br{L / \QQ} $, then there exist infinitely many $ p $ such that $ g = \sigma_p $, by Dirichlet's theorem on primes in arithmetic progressions. The following is a general problem. Let $ L / K $ be a Galois extension of number fields, and let $ v $ be a finite place of $ K $, unramified in $ L $. Then $$ \Sigma_v = \cbr{\sigma_w \st w \in \V_{L, \f}, \ w \mid v} $$ is a conjugacy class in $ G = \Gal\br{L / K} $, and $ \Sigma_v $ describes the decomposition of $ v $ in $ L $. \begin{itemize} \item How does $ \Sigma_v $ depend on $ v $? 
\item Can it be any conjugacy class in $ G $? \end{itemize} For the first question, do not know the answer for general $ L / K $. This is non-abelian class field theory or the Langlands programme. The second question was answered in the 1920s. \begin{theorem*}[Chebotarev density theorem] Let $ C \subset G $ be a conjugacy class. Then there exist infinitely many $ v $ for which $ C = \Sigma_v $. \end{theorem*} \begin{example*} Let $ C = \cbr{1} $. There exist infinitely many $ v $ such that $ \Sigma_v = \cbr{1} $, that is such that $ v $ splits completely in $ L / K $. \end{example*} Class field theory answers the first question completely for $ L / K $ abelian. \subsection{Artin reciprocity law} \begin{theorem*}[Artin reciprocity law] Let $ L / K $ be an abelian extension of number fields. Then there exists a unique continuous homomorphism $$ \Art_{L / K} : \CCC_K \to \Gal\br{L / K}, $$ such that for all $ v \in \V_{K, \f} $ unramified in $ L / K $, $$ \function[\Art_{L / K}]{K_v^\times \hookrightarrow \CCC_K}{\Gal\br{L / K}}{x}{\sigma_v^{-v\br{x}}}. $$ Moreover, $ \Art_{L / K} $ is surjective with kernel $ K^\times \N_{L / K}\br{\JJ_L} $. \end{theorem*} How does this generalise the cyclotomic theory? Since $ \CC^\times $ is connected, the only open subgroup is $ \CC^\times $, and the only open subgroups of $ \RR^\times $ are $ \RR^\times $ and $ \RR_{> 0} $. Then $ \ker \Art_{L / K} $ is open, so contains some $ K^\times U $, where $$ U = \prod_{v \ \text{complex}} \CC^\times \times \prod_{v \ \text{real}} \RR_{> 0} \times \prod_{v \in S} U_v \times \prod_{v \in \V_{K, \f} \setminus S} \OOO_v^\times, \qquad U_v = \cbr{x \in \OOO_v^\times \st v\br{x - 1} \ge m_v}, \qquad m_v > 0, $$ where say $ S $ contains all ramified places. If $ w \notin S $ is unramified, $$ \Art_{L / K} : K^\times\br{\dots, 1, 1, \pi_w^{-1}, 1, 1, \dots} = K^\times\br{\dots, \pi_w, \pi_w, 1, \pi_w, \pi_w, \dots} \mapsto \sigma_w, $$ where $ \pi_w \in \OOO_K $ is a uniformiser at $ w $ such that $ w\br{\pi_w} = 1 $. So if \begin{enumerate} \item $ \sigma\br{\pi_w} > 0 $ for all $ \sigma : K \hookrightarrow \RR $, \item $ v\br{\pi_w - 1} \ge m_v $ for all $ v \in S $, and \item $ \pi_w \in \OOO_v^\times $ for all $ v \notin S $ such that $ v \ne w $, \end{enumerate} which are congruence conditions on $ \pi_w $, then $ \sigma_w = 1 $. In particular, if $ \ppp_w = \abr{\pi_w} $ is principal, then condition $ 3 $ is automatic. So just a congruence condition on $ \pi_w $ modulo some ideal divisible only by primes in $ S $, and positivity. \pagebreak \begin{example*} Let $ L = \QQ\br{\zeta_m} / K = \QQ $. Then $$ \begin{tikzcd} \br{\RR^\times \times \widehat{\QQ}^\times} / \QQ^\times \arrow[cong]{d} & \br{\RR^\times \times \widehat{\ZZ}^\times} / \cbr{\pm 1} \arrow{l}[swap]{\sim} \arrow[cong]{d} & \RR_{> 0} \times \widehat{\ZZ}^\times \arrow{l}[swap]{\sim} \arrow{r} \arrow{d} & \displaystyle\prod_{q \mid m} \ZZ_q^\times \arrow{d} \\ \CCC_\QQ \arrow[dashed]{dr} & \JJ_{\QQ, \emptyset} / \cbr{\pm 1} \arrow{l}{\sim} & \br{\ZZ / m\ZZ}^\times \arrow{dl}{\sim} & \displaystyle\prod_{q \mid m} \br{\ZZ_q / q\ZZ_q}^\times \arrow{l}{\sim} \\ & \Gal\br{L / \QQ} & & \end{tikzcd}. $$ Claim this is $ \Art_{L / \QQ} $. Let $ \QQ^\times\br{\dots, 1, 1, p^{-1}, 1, 1, \dots} = \QQ^\times\br{\dots, p, p, 1, p, p, \dots} \in \CCC_\QQ $ for $ p \nmid m $.
Then $$ \begin{array}{ccccccc} \CCC_\QQ & \longleftarrow & \RR_{> 0} \times \widehat{\ZZ}^\times & \longrightarrow & \br{\ZZ / m\ZZ}^\times & \longrightarrow & \Gal\br{L / \QQ} \\ \QQ^\times\br{\dots, p, p, 1, p, p, \dots} & \longmapsfrom & \br{\dots, p, p, 1, p, p, \dots} & \longmapsto & p \mod m & \longmapsto & \sigma_p \end{array}. $$ So via $ \CCC_\QQ \cong \RR_{> 0} \times \widehat{\ZZ}^\times $, $ \Art_{L / \QQ} $ is just the cyclotomic map. \end{example*} \subsection{Finite quotients of \texorpdfstring{$ \CCC_K $}{idele class group}} \begin{proposition} \label{prop:8.1} Let $ G $ be a discrete group. \begin{enumerate} \item Any continuous homomorphism $ \alpha : \CCC_K \to G $ has finite image. \item There is a bijection $$ \correspondence{\text{continuous homomorphisms} \\ \alpha : \JJ_K \to G}{\text{families} \ \br{\alpha_v : K_v^\times \to G}_{v \in \V_K} \\ \text{such that} \ \alpha_v\br{\OOO_v^\times} = 1 \\ \text{for all but finitely many} \ v \in \V_{K, \f}}. $$ \end{enumerate} \end{proposition} \begin{notation*} $ \alpha_v : K_v^\times \to G $ is \textbf{unramified} if $ \alpha_v\br{\OOO_v^\times} = 1 $. See local class field theory, where $ \OOO_v^\times $ corresponds to the inertia. \end{notation*} \begin{proof} \hfill \begin{enumerate} \item $ \JJ_K \cong \RR_{> 0} \times \JJ_K^1 $, and $ \alpha\br{\RR_{> 0}} = 1 $, since $ \RR_{> 0} $ is connected and $ G $ is discrete, so $ \alpha\br{\CCC_K} = \alpha\br{\CCC_K^1} $, which is compact and discrete so finite. \item The subgroup $$ \bigoplus_v K_v^\times = \cbr{\br{x_v} \st x_v = 1 \ \text{for all but finitely many} \ v} \subset \JJ_K $$ is dense, since $ \bigoplus_v \OOO_v^\times \subset \prod_v \OOO_v^\times $ is dense for the product topology. So a continuous $ \alpha : \JJ_K \to G $ is determined by its restrictions $ \alpha_v = \eval{\alpha}_{K_v^\times} : K_v^\times \to G $. As $ \ker \alpha $ is open, $ \alpha_v\br{\OOO_v^\times} = 1 $ for all but finitely many $ v $. So have $ \cbr{\alpha} \hookrightarrow \cbr{\br{\alpha_v}_v} $. Conversely, if $ \br{\alpha_v : K_v^\times \to G}_v $ is such a family, then $ \alpha\br{\br{x_v}} = \prod_v \alpha_v\br{x_v} $ is a finite product for any $ \br{x_v} \in \JJ_K $, as $ x_v \in \OOO_v^\times $ and $ \alpha_v\br{\OOO_v^\times} = 1 $ for all but finitely many $ v $, and defines a continuous homomorphism $ \alpha : \JJ_K \to G $. \end{enumerate} \end{proof} \lecture{14}{Saturday}{20/02/21} \begin{proposition} \label{prop:8.2} Let $ \alpha, \alpha' : \CCC_K \to G $ be continuous homomorphisms, with $ G $ finite, which are unramified at all $ v \in \V_{K, \f} \setminus S $, where $ S $ is finite. If $ \alpha_v = \alpha_v' $ for all finite $ v \notin S $, that is $ \alpha_v\br{\pi_v} = \alpha_v'\br{\pi_v} $, then $ \alpha = \alpha' $. \end{proposition} \begin{proof} Look at $ \alpha / \alpha' $, so without loss of generality $ \alpha' = 1 $. Then $ \alpha : \CCC_K \to G $ satisfies for all $ v \in \V_{K, \f} \setminus S $, $ \alpha_v = 1 $. Let $ w \in S_\infty = \V_{K, \infty} \cup S $ and $ y \in K_w^\times $. Then by weak approximation, for any $ \epsilon > 0 $, there exists $ x \in K^\times $ such that $ \abs{x - y}_w < \epsilon $ and $ \abs{x - 1}_v < \epsilon $ for all $ v \in S_\infty \setminus \cbr{w} $. Hence, for $ \epsilon $ small enough, $ \alpha_v\br{x} = 1 $ for all $ v \in S_\infty \setminus \cbr{w} $, as $ \ker \alpha_v $ is open, so $ \alpha_v\br{x} = 1 $ for all $ v \ne w $. Since $ \alpha\br{K^\times} = 1 $, $ \alpha_w\br{x} = 1 $, so $ \alpha_w\br{y} = 1 $. So $ \alpha_w = 1 $, so $ \alpha = 1 $.
\end{proof} \pagebreak \subsection{Specific open subgroups of \texorpdfstring{$ \CCC_K $}{idele class group}} \begin{definition*} A \textbf{modulus} is a finite formal sum $$ \mmm = \sum_{v \in \V_K} \m_v\br{v}, \qquad \m_v \ge 0. $$ The \textbf{support} and \textbf{finite support} of $ \mmm $ are $$ \supp \mmm = \cbr{v \in \V_K \st \m_v > 0}, \qquad \supp_\f \mmm = \supp \mmm \cap \V_{K, \f}. $$ We may use also $ \mmm_\f = \sum_{v \in \V_{K, \f}} \m_v\br{v} $, the finite part of $ \mmm $, which one can think of as an ideal of $ \OOO_K $. Define $$ \U_{K, \mmm} = \prod_{v \in \V_K} \U_v^{\m_v}, \qquad K_v^\times \supset \U_v^m = \begin{cases} \OOO_v^\times & v \in \V_{K, \f}, \ m = 0 \\ 1 + \pi_v^m\OOO_v & v \in \V_{K, \f}, \ m > 0 \\ \RR^\times & v \ \text{real}, \ m = 0 \\ \RR_{> 0} & v \ \text{real}, \ m > 0 \\ \CC^\times & v \ \text{complex} \end{cases}. $$ \end{definition*} Note that in the definition of the modulus, we may as well forget about $ v $ complex, and for $ v $ real, take $ \m_v \in \cbr{0, 1} $. Then $ \U_{K, \mmm} \subset \JJ_K $ is an open subgroup, and every open subgroup of $ \JJ_K $ contains some $ \U_{K, \mmm} $. \begin{proposition} $ \CCC_K / \U_{K, \mmm} $ is finite. \end{proposition} \begin{proof} $ \CCC_K \to \CCC_K / \U_{K, \mmm} $ is surjective with discrete image, since $ \U_{K, \mmm} $ is open. So by \ref{prop:8.1}.$ 1 $, the image is finite. \end{proof} So every finite quotient of $ \CCC_K $ is a quotient of some $ \CCC_K / \U_{K, \mmm} $. \begin{definition*} The \textbf{ray class group} of $ K $ modulo $ \mmm $ is $$ \Cl_\mmm\br{K} = \CCC_K / \U_{K, \mmm}. $$ \end{definition*} \begin{example*} If $ \mmm = 0 $, then $ \U_{K, \mmm} = \ker \c $, where $ \c : \JJ_K \to \I\br{K} $ is the content map, and $ \Cl_\mmm\br{K} = \Cl\br{K} $. \end{example*} Now relate to ideals. \begin{notation*} Let $ x \in K^\times $. Write $ x \equiv 1 \mods \mmm $ if \begin{itemize} \item for all $ v \in \supp_\f \mmm $, $ v\br{x - 1} \ge \m_v $, and \item for all real $ v \in \supp \mmm $, $ x \in \br{K_v^\times}^+ = \RR_{> 0} $. \end{itemize} Let \begin{align*} K_\mmm^\times & = \cbr{x \in K^\times \st x \equiv 1 \mods \mmm}, \\ \I_\mmm\br{K} & = \cbr{\text{fractional ideals prime to} \ \supp_\f \mmm} \cong \cbr{\text{free abelian group on} \ \V_{K, \f} \setminus \supp_\f \mmm}, \\ \P_\mmm\br{K} & = \cbr{x\OOO_K \st x \in K_\mmm^\times} \subset \I_\mmm\br{K}. \end{align*} \end{notation*} \begin{theorem} $$ \Cl_\mmm\br{K} \cong \I_\mmm\br{K} / \P_\mmm\br{K}. $$ \end{theorem} \begin{example*} Assume $ K $ has real places, and let $ \mmm = \sum_{v \ \text{real}} \br{v} $. Then $ \I_\mmm\br{K} = \I\br{K} $ and $ \P_\mmm\br{K} $ is the group of principal fractional ideals $ x\OOO_K $ where $ x $ is \textbf{totally positive}, that is for all $ \sigma : K \hookrightarrow \RR $, $ \sigma\br{x} > 0 $. Then $ \Cl_\mmm\br{K} $ is called the \textbf{narrow ideal class group} of $ K $, also written $ \Cl^+\br{K} $. Obviously $ \Cl^+\br{K} \twoheadrightarrow \Cl\br{K} $ with kernel killed by two. \end{example*} The precise statement is the following. \begin{theorem} Let $ S \subset \V_{K, \f} $ be finite, containing $ \supp_\f \mmm $. Then there exists a unique continuous homomorphism $$ \alpha = \br{\alpha_v} : \CCC_K \to \I_\mmm\br{K} / \P_\mmm\br{K}, $$ such that for all $ v \in \V_{K, \f} \setminus S $, $ \alpha_v\br{\OOO_v^\times} = 1 $ and $ \alpha_v\br{\pi_v} = \sbr{\ppp_v^{-1}} $. Moreover, $ \alpha $ induces an isomorphism $$ \CCC_K / \U_{K, \mmm} \xrightarrow{\sim} \I_\mmm\br{K} / \P_\mmm\br{K}.
$$ \end{theorem} \pagebreak \begin{proof} By \ref{prop:8.2}, $ \alpha $ is unique. For existence, let $$ \JJ_{K, \mmm} = \cbr{\br{x_v} \in \JJ_K \st \forall v \in \supp \mmm, \ x_v \in \U_v^{\m_v}}, $$ the open subgroup generated by $ \U_{K, \mmm} $ and $ \cbr{K_v^\times \st v \notin \supp \mmm} $. Then by weak approximation, $ K^\times\JJ_{K, \mmm} = \JJ_K $, and by definition, $ K_\mmm^\times = K^\times \cap \JJ_{K, \mmm} $, so $$ \iota : \CCC_K / \U_{K, \mmm} \xleftarrow{\sim} \JJ_{K, \mmm} / K_\mmm^\times\U_{K, \mmm}. $$ Also, there is an isomorphism $$ \function[\c^S]{\JJ_{K, \mmm} / \U_{K, \mmm}}{\I_\mmm\br{K}}{\br{x_v}}{\prod_{v \in \V_{K, \f}, \ v \notin \supp_\f \mmm} \ppp_v^{v\br{x_v}}}. $$ Then $$ \CCC_K / \U_{K, \mmm} \xleftarrow{\iota} \JJ_{K, \mmm} / K_\mmm^\times\U_{K, \mmm} \xrightarrow{\c^S} \I_\mmm\br{K} / \P_\mmm\br{K}, $$ and this is the map $ x \mapsto \alpha\br{x^{-1}} $. \end{proof} \begin{remark*} The isomorphism $ \CCC_K / \U_{K, \mmm} \to \I_\mmm\br{K} / \P_\mmm\br{K} $ is not induced by the $ S $-content map $ \JJ_K \to \I_\mmm\br{K} $ but only on the subgroup $ \JJ_{K, \mmm} $. Fr\"ohlich called this the \textbf{fundamental mistake of class field theory}. \end{remark*} \begin{example*} Let $ K = \QQ $, let $ m > 1 $, and let $ \mmm = \br{m}\br{\infty} = \sum_{p \mid m} \v_p\br{m}\br{p} + \br{\infty} $. If $ I \in \I_\mmm\br{\QQ} $, then $ I = \br{a / b}\ZZ $ for unique positive coprime $ a, b \in \ZZ $ with $ \br{ab, m} = 1 $. Set $$ \function[\Theta]{\I_\mmm\br{\QQ}}{\br{\ZZ / m\ZZ}^\times}{I}{\dfrac{a}{b} \mod m}. $$ This clearly defines an isomorphism such that $$ \begin{tikzcd} p\ZZ \in \I_\mmm\br{\QQ} / \P_\mmm\br{\QQ} \arrow{r}{\Theta}[swap]{\sim} & \br{\ZZ / m\ZZ}^\times \ni p \mod m \\ \QQ^\times\br{\dots, 1, 1, p^{-1}, 1, 1, \dots} \in \CCC_\QQ \arrow{u}{\alpha} \arrow{r}{\sim} & \RR_{> 0} \times \widehat{\ZZ}^\times \arrow{u} \ni \br{\dots, p, p, 1, p, p, \dots} \end{tikzcd} $$ commutes. \end{example*} This is the reason for using $ \ppp_v^{-1} $ and $ \sigma_v^{-1} $ in the reciprocity law, since it means that for $ \QQ\br{\zeta_m} / \QQ $, we recover the usual map $ \Gal\br{\QQ\br{\zeta_m} / \QQ} \cong \br{\ZZ / m\ZZ}^\times $. Older treatments of class field theory use $ \sigma_v $ and end up with the inverse of the usual map. Another reason is that the inverse $ \Fr_v = \sigma_v^{-1} $, the so-called \textbf{geometric Frobenius}, is what occurs naturally in algebraic geometry. The modern normalisation of class field theory maps a uniformiser at an unramified $ v $ to the geometric Frobenius $ \sigma_v^{-1} $. \subsection{Properties of \texorpdfstring{$ \Art_{L / K} $}{the Artin map}} \lecture{15}{Tuesday}{23/02/21} \begin{corollary}[Uniqueness] $ \Art_{L / K} $ is unique. \end{corollary} \begin{proof} By \ref{prop:8.2}. \end{proof} A consequence is the following: if $ L' / K' $ is an abelian extension, and we have isomorphisms $$ \begin{tikzcd} L \arrow{r}{\widetilde{\tau}}[swap]{\sim} & L' \\ K \arrow[hookrightarrow]{u} \arrow{r}{\sim}[swap]{\tau} & K' \arrow[hookrightarrow]{u} \end{tikzcd}, $$ then we get an isomorphism $$ \function[\tau]{\Gal\br{L / K}}{\Gal\br{L' / K'}}{g}{\widetilde{\tau} \circ g \circ \widetilde{\tau}^{-1}}.
$$ \pagebreak As extensions are abelian, any other $ \widetilde{\tau}' $ with $ \eval{\widetilde{\tau}'}_K = \tau $ is $ \widetilde{\tau}' = \widetilde{\tau} \circ h $ for $ h \in \Gal\br{L / K} $, so $ \widetilde{\tau}' \circ g \circ \widetilde{\tau}'^{-1} = \widetilde{\tau} \circ h \circ g \circ h^{-1} \circ \widetilde{\tau}^{-1} = \widetilde{\tau} \circ g \circ \widetilde{\tau}^{-1} $. So this isomorphism depends only on $ \tau $. Then $$ \begin{tikzcd} \CCC_K \arrow{r}{\Art_{L / K}} \arrow{d}{\sim}[swap]{\tau} & \Gal\br{L / K} \arrow{d}{\tau}[swap]{\sim} \\ \CCC_{K'} \arrow{r}[swap]{\Art_{L' / K'}} & \Gal\br{L' / K'} \end{tikzcd} $$ commutes, by uniqueness. This sort of argument is often called \textbf{transport of structure}. \begin{example*} Suppose $ L / K / F $ is Galois such that $ L / K $ is abelian and $ K / F $ is Galois. Take $ \tau = g \in \Gal\br{K / F} $. As $ L / K $ is abelian, $ \Gal\br{K / F} $ acts by conjugation on $ \Gal\br{L / K} $. Let $ K = K' $ and $ L = L' $. Then \begin{equation} \label{eq:5} \Art_{L / K}\br{gx} = g \circ \Art_{L / K}\br{x} \circ g^{-1}, \qquad g \in \Gal\br{K / F}, \qquad x \in \CCC_K. \end{equation} \end{example*} \begin{proposition}[Norm functoriality] Suppose $ L / K $ and $ L' / K' $ are abelian such that $ L \subset L' $ and $ K \subset K' $. Then $$ \begin{tikzcd} \CCC_{K'} \arrow{r}{\Art_{L' / K'}} \arrow{d}[swap]{\N_{K' / K}} & \Gal\br{L' / K'} \arrow{d}{g \mapsto \eval{g}_L} \\ \CCC_K \arrow{r}[swap]{\Art_{L / K}} & \Gal\br{L / K} \end{tikzcd} $$ commutes. \end{proposition} \begin{proof} It is enough to check for $ \pi_w \in K_w'^\times \subset \CCC_{K'} $ for $ w $ outside a finite set. Assume $ w $ is unramified in $ L' / K' $ such that $ w \mid v \in \V_{K, \f} $ where $ v $ is unramified in $ L / K $. If $ \sigma_w \in \D_w \subset \Gal\br{L' / K'} $, then $$ \eval{\sigma_w}_L = \eval{\br{x \mapsto x^{\q_w}}}_L = \br{x \mapsto x^{\q_v}}^{\f\br{w \mid v}} = \sigma_v^{\f\br{w \mid v}}. $$ If $ \pi_w \in K_w'^\times $ is a uniformiser, then $$ \N_{K_w' / K_v}\br{\pi_w} = u\pi_v^{\f\br{w \mid v}}, \qquad u \in \OOO_{K_v}^\times, $$ since $ \pi_v^{\sbr{K_w' : K_v}} = \N_{K_w' / K_v}\br{\pi_v} $ and $ \pi_v = u'\pi_w^{\e\br{w \mid v}} $ for some $ u' \in \OOO_{K_w'}^\times $. \end{proof} \begin{example*} A special case is $ K' = L = L' $. Then $ 1 = \Art_{L / L}\br{x} = \Art_{L / K}\br{\N_{L / K}\br{x}} $ for $ x \in \JJ_L $, so $$ \N_{L / K}\br{\JJ_L} \subset \ker \Art_{L / K}. $$ \end{example*} By the reciprocity law, there is a map from abelian extensions of $ K $ to finite quotients of $ \CCC_K $. \begin{theorem}[Existence theorem] Let $ U \subset \JJ_K $ be an open subgroup. Then there exists an abelian extension $ L / K $ with $$ \ker \Art_{L / K} = K^\times U. $$ \end{theorem} Combining with the reciprocity law, $$ \varprojlim_{\text{open subgroups} \ U \subset \JJ_K} \JJ_K / K^\times U \xrightarrow{\sim} \Gal\br{K^{\ab} / K}. $$ In particular, if $ \mmm $ is a modulus, and $ U = \U_{K, \mmm} $, there is a corresponding abelian extension of $ K $, called the \textbf{ray class field}. \begin{example*} Let $ K = \QQ $ with $ \mmm = \br{m}\br{\infty} $. Then the ray class field is $ \QQ\br{\zeta_m} $. So should think of ray class fields as analogues of cyclotomic fields. Maybe later will discuss ray class fields for $ \QQ\br{\sqrt{-d}} $, which correspond to elliptic curves. \end{example*} \pagebreak \begin{theorem}[Relation with local class field theory] Let $ L / K $ be abelian, let $ v \in \V_K $, and let $ w \mid v $.
Then $$ \begin{tikzcd} \CCC_K \arrow{r}{\Art_{L / K}} & \Gal\br{L / K} \\ K_v^\times \arrow[hookrightarrow]{u} \arrow{r}[swap]{\psi_v} & \D_v = \Gal\br{L_w / K_v} \arrow[subset]{u} \end{tikzcd} $$ commutes. \end{theorem} Indeed, in the proof of the reciprocity law, it is usual to start with the local Artin maps $ \psi_v $. \begin{example*} Let $ v \mid \infty $. \begin{itemize} \item If $ K_v = L_w $, then $ \psi_v = 1 $. \item If $ K_v = \RR $ and $ L_w \cong \CC $, then $ \psi_v = \sign : \RR^\times \to \cbr{\pm 1} \cong \Gal\br{L_w / K_v} $ with kernel $ \RR_{> 0} = \N_{\CC / \RR}\br{\CC^\times} $. \end{itemize} \end{example*} The $ \br{\psi_v} $ combine to give $$ \begin{tikzcd} \JJ_K / \N_{L / K}\br{\JJ_L} \arrow{r}{\Art_{L / K}} & \Gal\br{L / K} \\ \displaystyle\bigoplus_v K_v^\times / \N_{L_w / K_v}\br{L_w^\times} \arrow{u}{\sim} \arrow{r}[swap]{\sim} & \displaystyle\bigoplus_v \D_v \arrow{u}[swap]{\D_v \subset \Gal\br{L / K}} \end{tikzcd}. $$ So the fact that $ \Art_{L / K}\br{K^\times} = 1 $, the hard part of the reciprocity law, is equivalent to knowing the relations between the various $ \D_v \subset \Gal\br{L / K} $. Why are ideles better than ideals? \begin{itemize} \item Ideals will only tell you about the relations between the $ \D_v $ for $ v $ unramified. \item Ideles are needed to understand ramification properly. \end{itemize} \subsection{Examples} Let $ K $ be arbitrary with modulus $ \mmm = 0 $. Then $ \Cl_\mmm\br{K} = \Cl\br{K} $. By the existence theorem, there is a corresponding abelian extension $ H / K $, the \textbf{Hilbert class field}, with $$ \Art_{H / K} : \Cl\br{K} \xrightarrow{\sim} \Gal\br{H / K}. $$ Then $ H / K $ satisfies the following. \begin{itemize} \item It is abelian. \item For all $ v \in \V_{K, \f} $, it is unramified at $ v $, since $ \OOO_v^\times \subset \U_{K, \mmm} $ for all $ v $. \item At an infinite place $ v $, $ K_v^\times \subset \U_{K, \mmm} $, so the local decomposition group at $ v $ is trivial, that is if $ v $ is a real place of $ K $, then if $ w \mid v $ then $ w $ is also real. \end{itemize} Thus $ H / K $ is unramified at all places of $ K $, and $ H $ is the maximal extension with these properties. \begin{example*} Let $ K = \QQ\br{\sqrt{-23}} $, so $ \OOO_K = \ZZ\sbr{\tfrac{1 + \sqrt{-23}}{2}} $. By a standard computation, $ \Cl\br{K} \cong \ZZ / 3\ZZ $ is generated by $ \sbr{\ppp} $ for $ \ppp = \abr{2, \tfrac{1 + \sqrt{-23}}{2}} $. Let $ \tau \in \Gal\br{K / \QQ} $ be complex conjugation. Then $ \tau\br{\ppp} = \abr{2, \tfrac{1 - \sqrt{-23}}{2}} $ and $ \ppp \cdot \tau\br{\ppp} = \abr{2} $, that is $ \tau\br{\sbr{\ppp}} = \sbr{\ppp}^{-1} $, so $ \tau $ acts as $ -1 $ on $ \Cl\br{K} $. Let $ H $ be the Hilbert class field of $ K $, which is the maximal abelian extension of $ K $ which is unramified at all $ v \in \V_{K, \f} $, that is $ \delta_{H / K} = \OOO_K $. Then $ \sbr{H : K} = 3 $ and $ H / \QQ $ is Galois. By $ \br{\ref{eq:5}} $ above, $ \tau $ acts as $ -1 $ on $ \Gal\br{H / K} $, so $ H / \QQ $ is an $ \SSS_3 $-extension. Show that $ H $ is the splitting field of $ f = T^3 - T + 1 $ with discriminant $ -23 $. \footnote{Exercise} \end{example*} \pagebreak \lecture{16}{Thursday}{25/02/21} The following arose in a research problem. \begin{proposition} There is no $ \SSS_3 $-extension $ L / \QQ $, so Galois with group $ \SSS_3 $, which is unramified outside $ 2, 7, \infty $, with quadratic subfield $ K = \QQ\br{\sqrt{-7}} $ or $ K = \QQ\br{\sqrt{2}} $.
\end{proposition} \begin{proof} Let $$ \Art_{L / K} : \CCC_K \twoheadrightarrow \Gal\br{L / K} \cong \ZZ / 3\ZZ. $$ The condition that $ L / \QQ $ is Galois with group $ \SSS_3 $ is $$ \Art_{L / K}\br{\tau\br{x}} = \Art_{L / K}\br{x^{-1}}, $$ by $ \br{\ref{eq:5}} $, since $ \Gal\br{K / \QQ} = \abr{\tau} $ acts on $ \Gal\br{L / K} $ by conjugation non-trivially. For both $ \QQ\br{\sqrt{-7}} $ and $ \QQ\br{\sqrt{2}} $, $ \Cl\br{K} = 1 $. So $$ \CCC_K \xleftarrow{\sim} \JJ_{K, \emptyset} / \OOO_K^\times = \br{K_\infty^\times \times \widehat{\OOO_K}^\times} / \OOO_K^\times. $$ Then $ \Art_{L / K} : K_\infty^\times = \br{\RR^\times}^{\r_1} \times \br{\CC^\times}^{\r_2} \hookrightarrow \JJ_{K, \emptyset} \to \ZZ / 3\ZZ $ is trivial on $ \CC^\times $ and $ \RR_{> 0} $, and even on $ \RR^\times $, since there is no non-zero continuous homomorphism $ \RR^\times \to \ZZ / 3\ZZ $. So $ \Art_{L / K} $ factors through $ \widehat{\OOO_K}^\times / \OOO_K^\times $, and since $ L / K $ is unramified at $ v \nmid 14 $, factors further by $$ \begin{tikzcd} \CCC_K \cong \JJ_{K, \emptyset} / \OOO_K^\times \arrow{r} \arrow[twoheadrightarrow]{d}[swap]{\Art_{L / K}} & \widehat{\OOO_K}^\times / \OOO_K^\times \arrow{d} \\ \Gal\br{L / K} \cong \ZZ / 3\ZZ & \br{\displaystyle\prod_{v \mid 14} \OOO_v^\times} / \OOO_K^\times \arrow[twoheadrightarrow]{l}{\psi} \end{tikzcd}, $$ since $ \Art_{L / K}\br{\OOO_v^\times} = 1 $ for all $ v \nmid 14 $. Thus \begin{equation} \label{eq:6} \psi \circ \tau = -\psi. \end{equation} \begin{itemize} \item Let $ K = \QQ\br{\sqrt{-7}} $, so $ \OOO_K^\times = \cbr{\pm 1} $. \begin{itemize} \item Since $ -7 \equiv 1 \mod 8 $, $ 2 $ splits in $ K $, so $ \prod_{v \mid 2} \OOO_v^\times = \ZZ_2^\times \times \ZZ_2^\times $ is a pro-$ 2 $ group, so $ \psi\br{\prod_{v \mid 2} \OOO_v^\times} = 0 $. \item $ 7 $ ramifies, so if $ v \mid 7 $, then $ \OOO_v^\times = \FF_7^\times \times \br{1 + \pi_v\OOO_v} $, where $ \FF_7^\times $ is the Teichm\"uller lift and $ 1 + \pi_v\OOO_v $ is a pro-$ 7 $ group. \end{itemize} So $ \psi $ factors through $ \FF_7^\times $, and $ \tau \in \Gal\br{K / \QQ} $ acts trivially on $ \FF_7 $. So by $ \br{\ref{eq:6}} $, there is no possible $ \psi $. There does exist a $ \psi $ with $ \psi \circ \tau = \psi $, unique up to inverse, corresponding to an abelian $ L / \QQ $, which has to be $ \QQ\br{\zeta_7} $. \item Let $ K = \QQ\br{\sqrt{2}} $, so $ \OOO_K^\times = \abr{-1, \epsilon = 1 + \sqrt{2}} $. \begin{itemize} \item $ 2 $ ramifies, so if $ v \mid 2 $, then $ \OOO_v^\times = 1 + \pi_v\OOO_v $ is a pro-$ 2 $ group and $ \psi\br{\OOO_v^\times} = 0 $. \item Since $ 7 = \br{3 + \sqrt{2}}\br{3 - \sqrt{2}} $, $ \prod_{v \mid 7} \OOO_v^\times = \ZZ_7^\times \times \ZZ_7^\times \cong \FF_7^\times \times \FF_7^\times \times \br{1 + 7\ZZ_7}^2 $, where $ 1 + 7\ZZ_7 $ is a pro-$ 7 $ group, so $ \psi\br{1 + 7\ZZ_7} = 0 $. \end{itemize} So $ \psi $ factors through $ \psi : \br{\FF_7^\times \times \FF_7^\times} / \OOO_K^\times \twoheadrightarrow \ZZ / 3\ZZ $. Then $ \tau : \br{x, y} \mapsto \br{y, x} $, so \begin{equation} \label{eq:7} \psi\br{x, x} = 0, \end{equation} by $ \br{\ref{eq:6}} $. Now $$ \epsilon = 1 + \sqrt{2} \equiv \begin{cases} -2 & \mod 3 + \sqrt{2} \\ 4 & \mod 3 - \sqrt{2} \end{cases}, $$ that is $ \psi\br{-2, 4} = 0 $. By this and $ \br{\ref{eq:7}} $, $ \psi = 0 $.
\end{itemize} \end{proof} \pagebreak \subsection{Comparing \texorpdfstring{$ \CCC_K $}{idele class group} and \texorpdfstring{$ \Gal\br{K^{\ab} / K} $}{Galois group of maximal abelian extension}} Fix $ K \subset \overline{\QQ} $. Let $$ \Art_K : \CCC_K \to \Gal\br{K^{\ab} / K} = \varprojlim_{\text{finite abelian} \ K \subset L \subset \overline{\QQ}} \Gal\br{L / K}, $$ where $ K^{\ab} $ is the \textbf{maximal abelian extension} of $ K $ in $ \overline{\QQ} $, the union of all finite abelian $ L / K $, so $ \Gal\br{K^{\ab} / K} $ is profinite. As $ \CCC_K^1 \twoheadrightarrow \Gal\br{L / K} $ for all $ L $ and $ \CCC_K^1 $ is compact, $ \CCC_K^1 \twoheadrightarrow \Gal\br{K^{\ab} / K} $, since the image is dense and compact. The existence theorem is equivalent to the statement that $ \Gal\br{K^{\ab} / K} $ is the maximal profinite quotient of $ \CCC_K $, or of $ \CCC_K^1 $. There is a diagram $$ \begin{tikzcd} 1 \arrow{r} & \JJ_{K, \emptyset} / \OOO_K^\times \arrow{r} \arrow[twoheadrightarrow]{d} & \CCC_K \arrow{r}{\c} \arrow[twoheadrightarrow]{d}[swap]{\Art_K} & \Cl\br{K} \arrow{r} \arrow{d}{\sim} & 1 \\ 1 \arrow{r} & \Gal\br{K^{\ab} / H} \arrow{r} & \Gal\br{K^{\ab} / K} \arrow{r} & \Gal\br{H / K} \arrow{r} & 1 \end{tikzcd}, $$ where $ H $ is the Hilbert class field. What is the kernel of the vertical maps? \begin{itemize} \item If $ K = \QQ $, then $$ \Art_\QQ : \CCC_\QQ \cong \RR_{> 0} \times \widehat{\ZZ}^\times \twoheadrightarrow \widehat{\ZZ}^\times = \Gal\br{\QQ^{\ab} / \QQ}. $$ \item If $ K = \QQ\br{\sqrt{-d}} $, then $ \OOO_K^\times = \mu\br{K} $ is finite, so the maximal profinite quotient is $$ \Art_K : \JJ_{K, \emptyset} / \OOO_K^\times \cong \br{\CC^\times \times \widehat{\OOO_K}^\times} / \mu\br{K} \twoheadrightarrow \widehat{\OOO_K}^\times / \mu\br{K} = \Gal\br{K^{\ab} / H}. $$ \item Let $ K = \QQ\br{\sqrt{2}} $, so $ \Cl\br{K} = 1 $ and $ \OOO_K^\times = \abr{-1, \epsilon = 1 + \sqrt{2}} $. Then $ \N_{K / \QQ}\br{\epsilon} = -1 $ and $ \epsilon $ has signature $ \br{1, -1} $. Let $ \epsilon_+ = \epsilon^2 $ be the least totally positive unit. Then the maximal profinite quotient is $$ \begin{tikzcd}[row sep=tiny] \CCC_K = \JJ_{K, \emptyset} / \OOO_K^\times & \br{\RR_{> 0}^2 \times \widehat{\OOO_K}^\times} / \abr{\epsilon_+} \arrow{l}[swap]{\sim} & \\ \CCC_K^1 = \JJ_{K, \emptyset}^1 / \OOO_K^\times \arrow[subset]{u} & \br{\RR_{> 0} \times \widehat{\OOO_K}^\times} / \abr{\epsilon_+} \arrow{l}{\sim} \arrow[twoheadrightarrow]{r}[swap]{\Art_K^1} & \widehat{\OOO_K}^\times / \overline{\abr{\epsilon_+}} = \Gal\br{K^{\ab} / K} \end{tikzcd}. $$ If $ G = \varprojlim_i G_i $ is a profinite group and $ g \in G $, there exists a unique continuous $ \phi : \widehat{\ZZ} \to G $ such that $ \phi\br{1} = g $. \footnote{Exercise: easy} So have $$ \function{\widehat{\ZZ}}{\overline{\abr{\epsilon_+}} \subset \widehat{\OOO_K}^\times}{1}{\epsilon_+}. $$ One can show that $ \widehat{\ZZ} \xrightarrow{\sim} \overline{\abr{\epsilon_+}} $, so there is an isomorphism $$ \ker \Art_K^1 = \br{\RR_{> 0} \times \overline{\abr{\epsilon_+}}} / \abr{\epsilon_+} \cong \br{\RR \times \widehat{\ZZ}} / \ZZ = \AA_\QQ / \QQ, $$ where $ \AA_\QQ / \QQ $ is compact and connected, that is have $$ 1 \to \AA_\QQ / \QQ \to \CCC_K^1 \to \Gal\br{K^{\ab} / K} \to 1.
$$ \item For general $ K $, what happens is that $$ \begin{tikzcd}[row sep=tiny] 1 \arrow{r} & \CCC_K^0 \arrow{r} \arrow[cong]{d} & \CCC_K \arrow{r}{\Art_K} \arrow{d} & \Gal\br{K^{\ab} / K} \arrow{r} & 1 \\ 1 \arrow{r} & \CCC_K^0 \arrow{r} & \JJ_{K, \emptyset} / \OOO_K^\times \arrow[subset]{u} \arrow{r} & \Gal\br{K^{\ab} / H} \arrow[subset]{u} \arrow{r} \arrow[cong]{d} & 1 \\ & & & \br{\cbr{\pm 1}^{\r_1} \times \widehat{\OOO_K}^\times} / \overline{\OOO_K^\times} & \end{tikzcd}, $$ where the maximal connected subgroup of $ \CCC_K $, the closure of $ \RR_{> 0}^{\r_1} \times \br{\CC^\times}^{\r_2} $, is $$ \CCC_K^0 \cong \RR_{> 0} \times \U\br{1}^{\r_2} \times \br{\AA_\QQ / \QQ}^{\r_1 + \r_2 - 1}. $$ \end{itemize} \pagebreak \section{\texorpdfstring{$ \zeta $}{Zeta}-functions} \subsection{Riemann \texorpdfstring{$ \zeta $}{zeta}-function} \lecture{17}{Saturday}{27/02/21} The \textbf{Riemann $ \zeta $-function} is $$ \zeta\br{s} = \sum_{n \ge 1} \dfrac{1}{n^s} = \prod_p \dfrac{1}{1 - p^{-s}}, \qquad s \in \CC, \qquad \Re s > 1, $$ by unique factorisation in $ \ZZ $. Define $$ \Z\br{s} = \pi^{-\tfrac{s}{2}}\Gamma\br{\dfrac{s}{2}}\zeta\br{s}. $$ \begin{theorem}[Functional equation for Riemann $ \zeta $-function] \label{thm:9.1} $$ \Z\br{s} = \Z\br{1 - s}, $$ with analytic continuation to $ \CC $ except for simple poles at $ s = 0, 1 $ with residues $ -1 $ and $ 1 $ respectively. \end{theorem} \begin{proof} There are three steps. \begin{enumerate}[leftmargin=0.5in, label=Step \arabic*.] \item The \textbf{Mellin transform} of $ \tfrac{1}{2}\br{\Theta\br{y} - 1} $ is $$ \Z\br{2s} = \pi^{-s}\sum_{n \ge 1} \dfrac{1}{n^{2s}}\intd{0}{\infty}{e^{-t}t^{s - 1}}{t} = \intd{0}{\infty}{\sum_{n = 1}^\infty e^{-\pi n^2y}y^{s - 1}}{y} = \intd{0}{\infty}{\dfrac{1}{2}\br{\Theta\br{y} - 1}\dfrac{y^s}{y}}{y}, $$ where $ \Theta $ is the \textbf{theta function} $$ \Theta\br{y} = \sum_{n = -\infty}^\infty e^{-\pi n^2y}. $$ \item If $ f : \RR \to \CC $ is nice, then the \textbf{Poisson summation formula} is $$ \sum_{n = -\infty}^\infty f\br{n} = \sum_{n = -\infty}^\infty \widehat{f}\br{n}, $$ where $ \widehat{f} $ is the \textbf{Fourier transform} $$ \widehat{f}\br{u} = \intd{-\infty}{\infty}{e^{-2\pi iux}f\br{x}}{x}. $$ Take $ f\br{x} = e^{-\pi x^2y} $. Then $ \widehat{f}\br{u} = y^{-1 / 2}e^{-\pi u^2 / y} $, so $ \Theta\br{y} = y^{-1 / 2}\Theta\br{1 / y} $. \item In step $ 1 $, split $$ \intd{0}{\infty}{\dfrac{1}{2}\br{\Theta\br{y} - 1}\dfrac{y^s}{y}}{y} = \intd{1}{\infty}{\dfrac{1}{2}\br{\Theta\br{y} - 1}\dfrac{y^s}{y}}{y} + \intd{0}{1}{\dfrac{1}{2}\br{\Theta\br{y} - 1}\dfrac{y^s}{y}}{y}, $$ and in the second term, use step $ 2 $ to make into $$ \intd{0}{1}{\dfrac{1}{2}\br{\Theta\br{y} - 1}\dfrac{y^s}{y}}{y} = \intd{1}{\infty}{\dfrac{1}{2}\br{\Theta\br{\dfrac{1}{y}} - 1}\dfrac{y^{-s}}{y}}{y}, $$ by $ y \mapsto 1 / y $. Get that $$ \Z\br{2s} = \dfrac{1}{2}\intd{1}{\infty}{\br{\Theta\br{y} - 1}\br{y^s + y^{\tfrac{1}{2} - s}}\dfrac{1}{y}}{y} + \dfrac{1}{2s - 1} - \dfrac{1}{2s}, $$ where the first term is an entire function of $ s $ since $ \Theta\br{y} - 1 \to 0 $ rapidly as $ y \to \infty $, so $ \Z\br{2s} = \Z\br{1 - 2s} $. \end{enumerate} \end{proof} \pagebreak \subsection{Dedekind \texorpdfstring{$ \zeta $}{zeta}-function} Let $ K $ be a number field. The \textbf{Dedekind $ \zeta $-function of $ K $} is $$ \zeta_K\br{s} = \sum_{0 \ne \aaa \subset \OOO_K \ \text{ideals}} \dfrac{1}{\N\br{\aaa}^s}.
$$ \begin{proposition}[Euler product] \label{prop:9.2} $$ \zeta_K\br{s} = \prod_{v \in \V_{K, \f}} \dfrac{1}{1 - \q_v^{-s}}, $$ absolutely convergent for $ \Re s > 1 $. \end{proposition} \begin{proof} Formally, if $ \aaa \subset \OOO_K $ such that $ \aaa = \prod_v \ppp_v^{n_v} $ then $ \N\br{\aaa}^{-s} = \prod_v \q_v^{-n_vs} $, so $$ \zeta_K\br{s} = \prod_v \br{1 + \q_v^{-s} + \dots} = \prod_v \dfrac{1}{1 - \q_v^{-s}}. $$ Now $ \#\cbr{v \mid p} \le n = \sbr{K : \QQ} $, and if $ v \mid p $ then $ \q_v \ge p $, so the product converges by comparison with $ \prod_p \br{1 - p^{-s}}^{-n} = \zeta\br{s}^n $. \end{proof} The $ 1 / \br{1 - \q_v^{-s}} $ are \textbf{Euler factors at $ v $}. Define $$ \Gamma_\RR\br{s} = \pi^{-\tfrac{s}{2}}\Gamma\br{\dfrac{s}{2}}, \qquad \Gamma_\CC\br{s} = 2\br{2\pi}^{-s}\Gamma\br{s}, $$ the \textbf{Euler factors for the infinite places}, and $$ \Z_K\br{s} = \abs{\d_K}^{\tfrac{s}{2}}\Gamma_\RR\br{s}^{\r_1}\Gamma_\CC\br{s}^{\r_2}\zeta_K\br{s}. $$ The following is a generalisation of \ref{thm:9.1}. \begin{theorem} \label{thm:9.3} \hfill \begin{enumerate} \item (Functional equation for Dedekind $ \zeta $-function) $ \Z_K\br{s} $ has an analytic continuation to $ \CC $, apart from simple poles at $ s = 0, 1 $, and $$ \Z_K\br{1 - s} = \Z_K\br{s}. $$ \item (Analytic class number formula) $ \zeta_K\br{s} $ has a zero of order $ r = \r_1 + \r_2 - 1 $ at $ s = 0 $, and \begin{equation} \label{eq:8} \lim_{s \to 0} \dfrac{1}{s^r}\zeta_K\br{s} = -\dfrac{\h_K\R_K}{\w_K}. \end{equation} \end{enumerate} \end{theorem} Here, $ \h_K = \#\Cl\br{K} $ is the class number, $ \w_K = \#\mu\br{K} $ is the number of roots of unity in $ K $, and $ \R_K $ is the \textbf{regulator} of $ K $. If $ \epsilon_1, \dots, \epsilon_r $ are generators for $ \OOO_K^\times / \mu\br{K} \cong \ZZ^r $, by the unit theorem, $ \R_K $ is the absolute value of any $ \br{r \times r} $-minor of the matrix $$ \br{\log \abs{\epsilon_j}_v}_{1 \le j \le r, \ v \in \V_{K, \infty}}. $$ Note that by the product formula, the sum of the columns of this matrix is zero, so minors are equal up to sign. Then $ \R_K \ne 0 $ by the proof of the unit theorem. It is more usual to write $ \br{\ref{eq:8}} $ at $ s = 1 $, but the formula there is more complicated. \begin{example*} If $ K = \QQ $, then $ \zeta\br{0} = -\tfrac{1}{2} $. \end{example*} There are two ways to prove this. \begin{itemize} \item Hecke, using theta functions. \item Tate, using adeles. Generalises much more easily to other $ \L $-functions, such as $ \L $-functions of characters of $ \CCC_K $. \end{itemize} \pagebreak Tate's proof is an adelic version of \ref{thm:9.1}. The idea is to first interpret $ \zeta_K\br{s} $, or $ \Z_K\br{s} $, as an adelic integral. Assuming we know how to integrate on $ \QQ_p $, $$ \intd{\ZZ_p \setminus \cbr{0}}{}{\abs{x}_p^{s - 1}}{x} = \sum_{n \ge 0} \intd{p^n\ZZ_p \setminus p^{n + 1}\ZZ_p}{}{p^{-n\br{s - 1}}}{x} = \sum_{n \ge 0} p^{-n\br{s - 1}}\meas\br{p^n\ZZ_p \setminus p^{n + 1}\ZZ_p}. $$ Then $$ \ZZ_p = \bigsqcup_{a = 0}^{p^n - 1} a + p^n\ZZ_p, \qquad \meas\br{a + p^n\ZZ_p} = \dfrac{1}{p^n}\meas\br{\ZZ_p}, $$ so $$ \intd{\ZZ_p \setminus \cbr{0}}{}{\abs{x}_p^{s - 1}}{x} = \sum_{n \ge 0} p^{-n\br{s - 1}}\br{\dfrac{1}{p^n} - \dfrac{1}{p^{n + 1}}}\meas\br{\ZZ_p} = \br{1 - p^{-1}}\meas\br{\ZZ_p}\dfrac{1}{1 - p^{-s}}, $$ where $ 1 / \br{1 - p^{-s}} $ is the Euler factor at $ p $ in $ \zeta\br{s} $. This suggests that $ \zeta\br{s} $ is a product of $ p $-adic integrals over all $ p $, an adelic integral.
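As a quick sanity check on the computation above (a worked instance which uses only the convention $ \meas\br{\ZZ_p} = 1 $, the normalisation adopted in the next subsection for $ \QQ_p $, where $ \delta = 0 $): \begin{example*} With $ \meas\br{\ZZ_p} = 1 $, $$ \intd{\ZZ_p \setminus \cbr{0}}{}{\abs{x}_p^{s - 1}}{x} = \dfrac{1 - p^{-1}}{1 - p^{-s}}, $$ so dividing by $ 1 - p^{-1} $, the factor later built into the multiplicative Haar measure, leaves exactly the Euler factor $ 1 / \br{1 - p^{-s}} $. For instance, at $ s = 2 $ the integral is $ \br{1 - p^{-1}} / \br{1 - p^{-2}} = p / \br{p + 1} $. \end{example*}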
\begin{itemize} \item The $ \Gamma $-factor will be an integral at an infinite place. \item Have to normalise measure to get $ 1 / \br{1 - p^{-s}} $ for almost all $ p $. \item The functional equation will come from a Fourier transform. \end{itemize} \subsection{Local Fourier analysis} On $ \RR $, $$ \widehat{f}\br{y} = \intd{-\infty}{\infty}{e^{-2\pi ixy}f\br{x}}{x}, $$ which has three ingredients. Define $ \widehat{f} $ replacing $ \RR $ by any local field $ F $, of characteristic zero. \begin{definition*} The \textbf{additive character} is a continuous homomorphism $ 1 \ne \psi : F \to \U\br{1} = \cbr{\abs{z} = 1} \subset \CC^\times $. \begin{itemize} \item If $ F = \RR $, then $ \psi\br{x} = e^{-2\pi ix} $. \item If $ F = \CC $, then $ \psi\br{z} = e^{-2\pi i\Tr_{\CC / \RR}\br{z}} = e^{-2\pi i\br{z + \overline{z}}} $. \item Let $ F / \QQ_p $ be finite. Since $ \QQ_p = \ZZ\sbr{1 / p} + \ZZ_p $, define $$ \function[\psi_p]{\QQ_p / \ZZ_p}{\U\br{1}}{x}{e^{2\pi iy}}, \qquad y \in \ZZ\sbr{\dfrac{1}{p}}, \qquad x - y \in \ZZ_p, $$ which is well-defined. Let $ \psi = \psi_p \circ \Tr_{F / \QQ_p} : F \to \U\br{1} $. \end{itemize} \end{definition*} Why the sign in the case $ F / \RR $? If $ x \in \QQ $, then $ \psi_\infty\br{x}\prod_p \psi_p\br{x} = 1 $. \lecture{18}{Tuesday}{02/03/21} \begin{definition*} The \textbf{Haar measure $ \dF x $} is translation-invariant. \begin{itemize} \item If $ F = \RR $, then $ \dF x $ is the usual Lebesgue measure $ \d x $. \item If $ F = \CC $, then $ \dF z = 2\d x\d y $ for $ z = x + iy $, which is twice the Lebesgue measure. \item Let $ F / \QQ_p $. Our functions will be locally constant, that is sums of multiples of characteristic functions of $ a + \pi^n\OOO_F $ for $ a \in F $ and $ n \in \ZZ $. If $ n \ge 0 $, then $ \OOO_F = \bigsqcup_a a + \pi^n\OOO_F $ is a disjoint union of $ q^n $ cosets, so $$ \meas\br{a + \pi^n\OOO_F} = \meas\br{\pi^n\OOO_F} = \dfrac{1}{q^n}\meas\br{\OOO_F}, $$ and will normalise $ \meas\br{\OOO_F} = q^{-\delta / 2} $ where $ \delta = \delta_{F / \QQ_p} = \v\br{\DDD_{F / \QQ_p}} $, that is $$ \intF{\1_{a + \pi^n\OOO_F}}{x} = \meas\br{a + \pi^n\OOO_F} = q^{-n - \tfrac{\delta}{2}}. $$ \end{itemize} \end{definition*} In each case, $ \dF\br{ax} = \abs{a}_F\dF x $ for $ a \in F^\times $. \pagebreak \begin{definition*} The class of functions to integrate is the \textbf{Schwartz space} $ \SSS\br{F} $. \begin{itemize} \item If $ F = \RR $, then $$ \SSS\br{F} = \cbr{\text{$ \C^\infty $-functions} \ f : F \to \CC \st \forall n \ge 0, \ \forall \alpha \in \NN, \ \lim_{\abs{x} \to \infty} \br{\abs{x}^n\abs{\dod[\alpha]{f}{x}}} = 0}. $$ For example, $ e^{-c\abs{x}^2} $ for $ c > 0 $. \item If $ F = \CC $, then $$ \SSS\br{F} = \cbr{\text{$ \C^\infty $-functions} \ f : F \to \CC \st \forall n \ge 0, \ \forall \alpha \in \NN^2, \ \lim_{\abs{z} \to \infty} \br{\abs{z}^n\abs{\dmd{f}{\alpha}{x}{\alpha_1}{y}{\alpha_2}}} = 0}. $$ \item If $ F / \QQ_p $, then \begin{align*} \SSS\br{F} & = \cbr{\text{locally constant} \ f : F \to \CC \ \text{of compact support}} \\ & = \cbr{\text{span of characteristic functions} \ \1_{a + \pi^n\OOO_F}}. \end{align*} \end{itemize} \end{definition*} If $ f \in \SSS\br{F} $, write $$ \intF{f\br{x}}{x} $$ for the integral. If $ F / \QQ_p $ and $ f = \1_{a + \pi^n\OOO_F} $, then $$ \intF{f\br{x}}{x} = \meas\br{a + \pi^n\OOO_F}, $$ that is $ p $-adic integrals are basically just finite sums. Also write $$ \intF[U]{f\br{x}}{x} = \intF{\1_Uf\br{x}}{x}, $$ for $ U \subset F $ compact open. 
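To make ``finite sums'' concrete, here is a minimal worked example, which assumes nothing beyond the definitions above, taking $ F = \QQ_p $ so that $ \delta = 0 $ and $ \meas\br{\ZZ_p} = 1 $. \begin{example*} $ \1_{\ZZ_p^\times} = \1_{\ZZ_p} - \1_{p\ZZ_p} \in \SSS\br{\QQ_p} $, so $$ \intF{\1_{\ZZ_p^\times}}{x} = \meas\br{\ZZ_p} - \meas\br{p\ZZ_p} = 1 - \dfrac{1}{p}. $$ \end{example*}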
\begin{lemma} \label{lem:9.4} Let $ F / \QQ_p $, and let $ \aaa \subset F $ be a fractional ideal. Then $$ \intF[\aaa]{\psi\br{x}}{x} = \intF{\1_\aaa\psi\br{x}}{x} = \begin{cases} \meas\br{\aaa} & \aaa \subset \DDD_{F / \QQ_p}^{-1} \\ 0 & \text{otherwise} \end{cases}, $$ where $ \1_\aaa\psi \in \SSS\br{F} $. \end{lemma} \begin{proof} \hfill \begin{itemize} \item If $ \aaa \subset \DDD_{F / \QQ_p}^{-1} $, then $ \Tr_{F / \QQ_p}\br{\aaa} \subset \ZZ_p $ so $ \eval{\psi}_\aaa = 1 $, as $ \eval{\psi_p}_{\ZZ_p} = 1 $. \item If $ \aaa \not\subset \DDD_{F / \QQ_p}^{-1} $, there exists $ x \in \aaa $ such that $ \Tr_{F / \QQ_p}\br{x} \notin \ZZ_p $, so $ \psi\br{x} \ne 1 $. As $ x + \aaa = \aaa $, and $ \dF\br{x + y} = \dF y $, $$ \intF[\aaa]{\psi\br{y}}{y} = \intF[\aaa]{\psi\br{x + y}}{y} = \psi\br{x}\intF[\aaa]{\psi\br{y}}{y}, $$ so the integral is zero. \end{itemize} \end{proof} Compare to $$ \sum_{g \in G} \chi\br{g} = \begin{cases} \#G & \chi = 1 \\ 0 & \text{otherwise} \end{cases}, $$ for $ G $ finite abelian and $ \chi : G \to \CC^\times $ a character. \pagebreak \subsection{Local Fourier transform} \begin{definition*} Let $ f \in \SSS\br{F} $. Define the \textbf{Fourier transform} $$ \widehat{f}\br{y} = \intF{\psi\br{xy}f\br{x}}{x}, $$ where $ \psi\br{xy}f\br{x} \in \SSS\br{F} $. \end{definition*} \begin{proposition} \label{prop:9.5} \hfill \begin{enumerate} \item If $ F = \RR $ and $ f\br{x} = e^{-\pi x^2} $, then $ \widehat{f} = f $. \item If $ F = \CC $ and $ f\br{z} = \tfrac{1}{\pi}e^{-2\pi z\overline{z}} $, then $ \widehat{f} = f $. \item If $ F / \QQ_p $ and $ f = \1_{\pi^n\OOO_F} $, then $$ \widehat{f} = q^{-n - \tfrac{\delta}{2}}\1_{\pi^{-n}\DDD_{F / \QQ_p}^{-1}} = q^{-n - \tfrac{\delta}{2}}\1_{\pi^{-n - \delta}\OOO_F}. $$ \end{enumerate} \end{proposition} \begin{proof} \hfill \begin{enumerate} \item Changing the contour of integration, $$ \widehat{f}\br{y} = \intd{-\infty}{\infty}{e^{-2\pi ixy - \pi x^2}}{x} = e^{-\pi y^2}\intd{-\infty}{\infty}{e^{-\pi\br{x + iy}^2}}{x} = e^{-\pi y^2}\intd{-\infty}{\infty}{e^{-\pi x^2}}{x} = e^{-\pi y^2}. $$ \item Exercise. \footnote{Exercise} \item By \ref{lem:9.4}, $$ \widehat{f}\br{y} = \intF[\pi^n\OOO_F]{\psi\br{xy}}{x} = \begin{cases} \meas\br{\pi^n\OOO_F} & y \in \pi^{-n}\DDD_{F / \QQ_p}^{-1} \\ 0 & y \notin \pi^{-n}\DDD_{F / \QQ_p}^{-1} \end{cases}, $$ which gives the answer. \end{enumerate} \end{proof} \begin{fact*} If $ f \in \SSS\br{F} $, then $ \widehat{f} \in \SSS\br{F} $. \begin{itemize} \item For $ F / \RR $, this is standard analysis, using $ \widehat{f^{\br{n}}}\br{y} = \br{2\pi iy}^n\widehat{f}\br{y} $. \item For $ F / \QQ_p $, this is an exercise in sheet $ 3 $. \end{itemize} \end{fact*} \begin{proposition}[Inversion formula] $$ \hathat{f}\br{x} = f\br{-x}. $$ \end{proposition} \begin{proof} \hfill \begin{itemize} \item For $ F = \RR $, this is standard analysis. \item For $ F = \CC $, notice that if $ f\br{z} = f\br{x + iy} = g\br{x, y} $, then $ \widehat{f}\br{w} = \widehat{f}\br{u + iv} = 2\widehat{g}\br{2u, -2v} $ since $ zw + \overline{zw} = 2\br{ux - vy} $, so $ \hathat{f}\br{z} = f\br{-z} $ easily. \item For $ F / \QQ_p $, if $ f = \1_{\OOO_F} $, then $$ \hathat{f} = q^{-\tfrac{\delta}{2}}\widehat{\1_{\DDD_{F / \QQ_p}^{-1}}} = q^{-\tfrac{\delta}{2}}q^{\delta - \tfrac{\delta}{2}}\1_{\OOO_F}, $$ by \ref{prop:9.5}.$ 3 $ twice. \footnote{Exercise: the rest is in example sheet} \end{itemize} \end{proof} This explains the choice of constants in $ \dF x $, a \textbf{self-dual} Haar measure, otherwise we would get $ \hathat{f}\br{x} = cf\br{-x} $.
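As a concrete instance of self-duality, consider the simplest $ p $-adic case; this uses only \ref{prop:9.5}.$ 3 $, with $ F = \QQ_p $, so that $ \delta = 0 $. \begin{example*} For $ F = \QQ_p $ and $ f = \1_{\ZZ_p} $, \ref{prop:9.5}.$ 3 $ with $ n = 0 $ and $ \delta = 0 $ gives $ \widehat{f} = \1_{\ZZ_p} = f $, so $ \hathat{f}\br{x} = f\br{x} = f\br{-x} $, as $ \1_{\ZZ_p} $ is invariant under $ x \mapsto -x $. So $ \1_{\ZZ_p} $ plays the same role for $ \QQ_p $ as $ e^{-\pi x^2} $ does for $ \RR $. \end{example*}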
\pagebreak \begin{lemma} \label{lem:9.7} Let $ c \in F^\times $, and let $ g\br{x} = f\br{cx} $. Then $$ \widehat{g}\br{y} = \abs{c}_F^{-1}\widehat{f}\br{c^{-1}y}. $$ \end{lemma} \begin{proof} By $ x = c^{-1}t $, $$ \widehat{g}\br{y} = \intF{\psi\br{xy}f\br{cx}}{x} = \intF{\psi\br{c^{-1}ty}f\br{t}}{\br{c^{-1}t}} = \abs{c}_F^{-1}\intF{\psi\br{tc^{-1}y}f\br{t}}{t} = \abs{c}_F^{-1}\widehat{f}\br{c^{-1}y}. $$ \end{proof} \subsection{Local \texorpdfstring{$ \zeta $}{zeta}-integrals} \lecture{19}{Thursday}{04/03/21} \begin{definition*} Define the \textbf{Haar measure $ \dF^\times x $ on the multiplicative group $ F^\times $} by $$ \dF^\times x = \begin{cases} \dfrac{1}{\abs{x}_F}\dF x & F / \RR \\ \dfrac{q^{\tfrac{\delta}{2}}}{1 - q^{-1}}\dfrac{1}{\abs{x}_F}\dF x & F / \QQ_p \end{cases}, $$ where $ q $ is the residue field order and $ \delta = \v\br{\DDD_{F / \QQ_p}} $. \end{definition*} Since $ \dF\br{ax} = \abs{a}_F\dF x $, $ \dF^\times\br{ax} = \dF^\times x $. If $ F / \QQ_p $, then $$ \meas\br{\OOO_F^\times} = \intFX[\OOO_F^\times]{}{x} = \dfrac{q^{\tfrac{\delta}{2}}}{1 - q^{-1}}\intF[\OOO_F \setminus \pi\OOO_F]{}{x} = \dfrac{q^{\tfrac{\delta}{2}}}{1 - q^{-1}}\br{q^{-\tfrac{\delta}{2}} - q^{-1 - \tfrac{\delta}{2}}} = 1. $$ This is the reason to normalise in this way. \begin{definition*} Let $ f \in \SSS\br{F} $, and let $ s \in \CC $. Define \textbf{local $ \zeta $-integrals} $$ \zeta\br{f, s} = \intFX{f\br{x}\abs{x}_F^s}{x} = c\lim_{\epsilon \to 0} \intF[\cbr{x \in F \st \abs{x}_F \ge \epsilon}]{f\br{x}\abs{x}_F^{s - 1}}{x}, \qquad c = \begin{cases} 1 & F / \RR \\ \dfrac{q^{\tfrac{\delta}{2}}}{1 - q^{-1}} & F / \QQ_p \end{cases}. $$ \end{definition*} If $ F / \QQ_p $, this is just a finite sum. Since $ f $ is continuous and tends rapidly to zero as $ \abs{x}_F \to \infty $ if $ F / \RR $ and has compact support if $ F / \QQ_p $, the limit exists for $ \Re s > 0 $. \begin{proposition} \label{prop:9.8} \hfill \begin{enumerate} \item If $ F = \RR $ and $ f\br{x} = e^{-\pi x^2} $, then $ \zeta\br{f, s} = \Gamma_\RR\br{s} $. \item If $ F = \CC $ and $ f\br{z} = \tfrac{1}{\pi}e^{-2\pi z\overline{z}} $, then $ \zeta\br{f, s} = \Gamma_\CC\br{s} $. \item If $ F / \QQ_p $ and $ f = \1_{\pi^n\OOO_F} $, then $$ \zeta\br{f, s} = \dfrac{q^{-ns}}{1 - q^{-s}}. $$ \end{enumerate} \end{proposition} Recall $$ \Gamma\br{s} = \intd{0}{\infty}{\dfrac{e^{-t}t^s}{t}}{t}, \qquad \Gamma_\RR\br{s} = \pi^{-\tfrac{s}{2}}\Gamma\br{\dfrac{s}{2}}, \qquad \Gamma_\CC\br{s} = 2\br{2\pi}^{-s}\Gamma\br{s}. $$ \begin{proof} \hfill \begin{enumerate} \item Follows from the definition of $ \Gamma\br{s} $ after a change of variables. \item Follows from the definition of $ \Gamma\br{s} $ after a change of variables and polar coordinates. \pagebreak \item \begin{align*} \zeta\br{\1_{\pi^n\OOO_F}, s} & = \intFX[\pi^n\OOO_F \setminus \cbr{0}]{\abs{x}_F^s}{x} = \sum_{m = n}^\infty \intF[\pi^m\OOO_F \setminus \pi^{m + 1}\OOO_F]{\dfrac{q^{-ms}}{q^{-m}}\dfrac{q^{\tfrac{\delta}{2}}}{1 - q^{-1}}}{x} \\ & = \sum_{m = n}^\infty q^{m\br{1 - s} + \tfrac{\delta}{2}}\dfrac{1}{1 - q^{-1}}\meas\br{\pi^m\OOO_F \setminus \pi^{m + 1}\OOO_F} \\ & = \sum_{m = n}^\infty q^{m\br{1 - s} + \tfrac{\delta}{2}}\dfrac{1}{1 - q^{-1}}q^{-\tfrac{\delta}{2}}\br{\dfrac{1}{q^m} - \dfrac{1}{q^{m + 1}}} = \sum_{m = n}^\infty q^{-ms} = \dfrac{q^{-ns}}{1 - q^{-s}}. \end{align*} \end{enumerate} \end{proof} \begin{example*} $ \zeta\br{\1_{\OOO_F}, s} = 1 / \br{1 - q^{-s}} $.
\end{example*} A variant is to also consider, for a continuous homomorphism $ \chi : F^\times \to \CC^\times $, $$ \zeta\br{f, \chi, s} = \intFX{f\br{x}\chi\br{x}\abs{x}_F^s}{x}, $$ defined as a limit in the same way. \subsection{Global Fourier analysis} Let $ K $ be a number field with completions $ K_v $, and let $ \psi_v : K_v \to \U\br{1} $, $ \dv x $, $ \dv^\times x $, $ \SSS\br{K_v} $, and $ \delta_v $ be the objects defined above for $ F = K_v $. Let $$ \V_{K, \r} = \cbr{v \in \V_{K, \f} \st K_v / \QQ_p \ \text{ramified}} = \cbr{v \in \V_{K, \f} \st \delta_v \ne 0}. $$ Then $$ \AA_K = \bigcup_S \br{\prod_{v \in S} K_v \times \prod_{v \notin S} \OOO_v}, $$ where $ S \subset \V_K $ is finite containing $ \V_{K, \infty} $. \begin{definition*} Let $ f_v \in \SSS\br{K_v} $ for $ v \in \V_K $ such that for all but finitely many $ v \in \V_{K, \f} $, $ f_v = \1_{\OOO_v} $. Then if $ x = \br{x_v} \in \AA_K $, for all but finitely many $ v $, $ f_v\br{x_v} = 1 $. So can define $$ f\br{x} = \prod_{v \in \V_K} f_v\br{x_v}, $$ and write $ f = \prod_v f_v $, or better, $ f = \bigotimes_v f_v $. The \textbf{global Schwartz space} $ \SSS\br{\AA_K} $ is the space of finite linear combinations of $ f $ of this type. \end{definition*} \begin{definition*} Let $ f = \bigotimes_v f_v \in \SSS\br{\AA_K} $ where $ f_v = \1_{\OOO_v} $ for all $ v \notin S $ for a finite set $ S \supset \V_{K, \infty} \cup \V_{K, \r} $. Then $ f = 0 $ outside $ \prod_{v \in S} K_v \times \prod_{v \notin S} \OOO_v $ and can define the \textbf{global integral} $$ \intA{f\br{x}}{x} = \prod_v \intv{f_v\br{x}}{x} = \prod_{v \in S} \intv{f_v\br{x}}{x}, $$ since if $ v \notin S $, $$ \intv{f_v\br{x}}{x} = \intv[\OOO_v]{}{x} = 1. $$ \end{definition*} \begin{definition*} Let the \textbf{global additive character} be $$ \function[\psi_\AA = \prod_v \psi_v]{\AA_K}{\U\br{1}}{\br{x_v}}{\prod_v \psi_v\br{x_v}}, $$ which is a finite product, since for all but finitely many $ v \in \V_{K, \f} $, $ x_v \in \OOO_v $ so $ \psi_v\br{x_v} = \psi_p\br{\Tr_{K_v / \QQ_p}\br{x_v}} = 1 $. \end{definition*} \pagebreak \begin{proposition} $ \psi_\AA $ is continuous, and $ \psi_\AA\br{x} = 1 $ if $ x \in K $. \end{proposition} \begin{proof} Take a finite $ S \supset \V_{K, \infty} $. The restriction of $ \psi_\AA $ to $ \prod_{v \in S} K_v \times \prod_{v \notin S} \OOO_v $ factors through $ \prod_{v \in S} \psi_v : \prod_{v \in S} K_v \to \U\br{1} $, which is continuous. Now $ \psi_\AA\br{x} = \psi_{\AA_\QQ}\br{\Tr_{K / \QQ}\br{x}} $, as $ \Tr_{K / \QQ}\br{x} = \sum_{v \mid p} \Tr_{K_v / \QQ_p}\br{x} $ for all $ p \le \infty $, so it is enough to consider $ K = \QQ $. Write $ x \in \QQ $ as partial fractions $ x = \sum_i y_i / p_i^{k_i} $ for $ y_i \in \ZZ $ and $ k_i \ge 0 $. Then $ \psi_{p_i}\br{x} = e^{2\pi iy_i / p_i^{k_i}} $ as for $ j \ne i $, $ y_j / p_j^{k_j} \in \ZZ_{p_i} $, and $ \psi_p\br{x} = 1 $ if $ p \notin \cbr{p_i} $. Thus $ \prod_{p < \infty} \psi_p\br{x} = e^{2\pi ix} = \psi_\infty\br{x}^{-1} $. \end{proof} \begin{definition*} Define the \textbf{global Fourier transform} of $ f \in \SSS\br{\AA_K} $ as $$ \widehat{f}\br{y} = \intA{\psi_\AA\br{xy}f\br{x}}{x} = \prod_v \widehat{f_v}\br{y_v}, \qquad f = \bigotimes_v f_v. $$ \end{definition*} Note that for all but finitely many $ v $, $ f_v = \1_{\OOO_v} = \widehat{f_v} $. \subsection{Global \texorpdfstring{$ \zeta $}{zeta}-integral} \begin{definition*} Let $ f = \bigotimes_v f_v \in \SSS\br{\AA_K} $.
Define the \textbf{global $ \zeta $-integral} $$ \zeta\br{f, s} = \intJ{f\br{x}\abs{x}_\AA^s}{x} = \prod_{v \in \V_K} \intvX{f_v\br{x}\abs{x}_v^s}{x} = \prod_{v \in \V_K} \zeta\br{f_v, s}, $$ which really is a genuine infinite product. \end{definition*} \lecture{20}{Saturday}{06/03/21} If $ a \in \JJ_K $, then there is an isomorphism $$ \function[a]{\AA_K}{\AA_K}{x}{ax}, $$ so if $ f \in \SSS\br{\AA_K} $ then $ f \circ a \in \SSS\br{\AA_K} $. Then $ \dA\br{ax} = \abs{a}_\AA\dA x $, since this holds locally, and $ \dJ\br{ax} = \dJ x $. \begin{proposition} The product $ \zeta\br{f, s} $ converges absolutely for $ \Re s > 1 $. \end{proposition} \begin{proof} Assume $ f = \bigotimes_v f_v $ such that $ f_v = \1_{\OOO_v} $ for all $ v \notin S $. Then $ \zeta\br{f_v, s} = 1 / \br{1 - \q_v^{-s}} $ for $ v \notin S $, which gives convergence by \ref{prop:9.2}, the product for $ \zeta_K\br{s} $. \end{proof} \begin{theorem}[Functional equation for $ \zeta\br{f, s} $] \label{thm:9.11} $ \zeta\br{f, s} $ has a meromorphic continuation to $ \CC $, with at worst simple poles at $ s = 0, 1 $. Moreover, $$ \zeta\br{f, s} = \zeta\br{\widehat{f}, 1 - s}, $$ with $$ \Res_s \zeta\br{f, s} = \begin{cases} \widehat{f}\br{0}\kappa & s = 1 \\ -f\br{0}\kappa & s = 0 \end{cases}, \qquad \kappa = \meas\br{\CCC_K^1} > 0. $$ \end{theorem} Let $ n = \sbr{K : \QQ} $. Then $$ \function[\i]{\RR_{> 0}}{K_\infty^\times = \prod_{v \mid \infty} K_v^\times \hookrightarrow \JJ_K}{t}{\br{t^{\tfrac{1}{n}}}_v}, $$ so $ \abs{\i\br{t}}_\AA = t $. So there is an isomorphism $$ \function{\RR_{> 0} \times \JJ_K^1}{\JJ_K}{\br{t, x}}{\i\br{t}x}. $$ Write $ t $ in place of $ \i\br{t} $. Use this to define a measure $ \dJI x $ on $ \JJ_K^1 $ such that \begin{equation} \label{eq:9} \intJ{f\br{x}}{x} = \intd{0}{\infty}{\br{\intJI{f\br{tx}}{x}}\dfrac{1}{t}}{t}. \end{equation} \pagebreak The most concrete way to do this is to pick $ \phi : \RR_{> 0} \to \RR $, $ \C^\infty $ of compact support such that $$ \intd{0}{\infty}{\dfrac{\phi\br{t}}{t}}{t} = 1. $$ Given $ f $ on $ \JJ_K^1 $, let $$ \function[\widetilde{f_\phi}]{\JJ_K}{\CC}{tx}{\phi\br{t}f\br{x}}, $$ and define $$ \intJI{f\br{x}}{x} = \intJ{\widetilde{f_\phi}\br{y}}{y}. $$ \begin{lemma} \label{lem:9.12} \hfill \begin{enumerate} \item This is independent of $ \phi $. \item The identity $ \br{\ref{eq:9}} $ holds. \end{enumerate} \end{lemma} \begin{proof} If $ y \in \JJ_K $ such that $ y = tx $ for $ t > 0 $ and $ x \in \JJ_K^1 $, then $ x = y / \abs{y}_\AA $ and $ t = \abs{y}_\AA $. \begin{enumerate} \item Let $ \psi : \RR_{> 0} \to \RR $ be another such function, not to be confused with the additive character. So $ \widetilde{f_\phi}\br{y} = \phi\br{\abs{y}_\AA}f\br{y / \abs{y}_\AA} $. Putting $ s' = \abs{y}_\AA $ and $ y' = sy / s' $, so $ \abs{y'}_\AA = s $, \begin{align*} \intJI{f\br{x}}{x} & = \intd{0}{\infty}{\dfrac{\psi\br{s}}{s}}{s}\intJ{\widetilde{f_\phi}\br{y}}{y} \\ & = \intd{0}{\infty}{\br{\intJ{\psi\br{s}\phi\br{\abs{y}_\AA}f\br{\dfrac{y}{\abs{y}_\AA}}}{y}}\dfrac{1}{s}}{s} \\ & = \intd{0}{\infty}{\br{\intJ{\psi\br{\abs{y'}_\AA}\phi\br{s'}f\br{\dfrac{y'}{\abs{y'}_\AA}}}{y'}}\dfrac{1}{s'}}{s'} \\ & = \intd{0}{\infty}{\dfrac{\phi\br{s'}}{s'}}{s'}\intJ{\widetilde{f_\psi}\br{y}}{y} = \intJI{f\br{x}}{x}. \end{align*} We need to check the homomorphism $$ \function[\lambda]{\RR_{> 0} \times \JJ_K}{\RR_{> 0} \times \JJ_K}{\br{s, y}}{\br{s', y'}} $$ is measure-preserving. Since $ \abs{t}_\AA = t $, $ \lambda^2 : \br{s, y} \mapsto \br{s, y} $, that is $ \lambda^2 = \id $.
The Haar measure is unique up to a constant, so $$ \lambda : \dJ y \times \dfrac{1}{s}\d s \mapsto c\dJ y \times \dfrac{1}{s}\d s, \qquad c > 0, $$ so since $ c^2 = 1 $, $ c = 1 $. If you like, it is easy to reduce to the computation just on $ K_\infty^\times $. \item If $ g_t\br{x} = f\br{tx} $, then $ \widetilde{g_t}\br{y} = \phi\br{\abs{y}_\AA}f\br{ty / \abs{y}_\AA} $, so putting $ s = \abs{y}_\AA $ and $ x = ty / s $, \begin{align*} \intd{0}{\infty}{\br{\intJI{f\br{tx}}{x}}\dfrac{1}{t}}{t} & = \intd{0}{\infty}{\br{\intJ{\phi\br{\abs{y}_\AA}f\br{\dfrac{ty}{\abs{y}_\AA}}}{y}}\dfrac{1}{s}}{s} \\ & = \intd{0}{\infty}{\dfrac{\phi\br{s}}{s}}{s}\intJ{f\br{x}}{x} = \intJ{f\br{x}}{x}. \end{align*} \end{enumerate} \end{proof} So $$ \zeta\br{f, s} = \intd{0}{\infty}{\dfrac{\zeta_t\br{f, s}}{t}}{t}, \qquad \zeta_t\br{f, s} = t^s\intJI{f\br{tx}}{x}. $$ Recall that $ \CCC_K^1 $ is compact. Will show next time that there exists $ E \subset \JJ_K^1 $, the \textbf{fundamental domain}, with $ \meas\br{E} < \infty $ and $ \overline{E} $ compact such that $$ \JJ_K^1 = \bigsqcup_{a \in K^\times} aE. $$ Let $ \kappa = \meas\br{E} $. \pagebreak \begin{proposition}[Functional equation for $ \zeta_t\br{f, s} $] \label{prop:9.13} $$ \zeta_t\br{f, s} + \kappa f\br{0}t^s = \zeta_{t^{-1}}\br{\widehat{f}, 1 - s} + \kappa\widehat{f}\br{0}t^{s - 1}. $$ \end{proposition} This is an analogue of the functional equation of $ \Theta\br{t} = \sum_{n \in \ZZ} e^{-\pi n^2t} $. The proof uses the following. \begin{theorem}[Poisson summation formula] \label{thm:9.14} Let $ f \in \SSS\br{\AA_K} $. Then $$ \sum_{a \in K} f\br{a} = \sum_{a \in K} \widehat{f}\br{a}, $$ and both sums are absolutely convergent. \end{theorem} \begin{corollary} \label{cor:9.15} Let $ x \in \JJ_K $. Then $$ \sum_{a \in K} f\br{xa} = \abs{x}_\AA^{-1}\sum_{a \in K} \widehat{f}\br{x^{-1}a}. $$ \end{corollary} \begin{proof} Apply \ref{thm:9.14} to $ f \circ x $ and use \ref{lem:9.7}. \end{proof} \begin{proof}[Proof of \ref{prop:9.13}] Write the integral over $ \JJ_K^1 $ as an integral over $ E $ of a sum over $ K^\times $. By \ref{cor:9.15}, \begin{align*} \zeta_t\br{f, s} + \kappa f\br{0}t^s & = t^s\intJI[E]{\sum_{a \in K^\times} f\br{atx}}{x} + \kappa f\br{0}t^s = t^s\intJI[E]{\sum_{a \in K} f\br{atx}}{x} \\ & = t^s\intJI[E]{\sum_{a \in K} \abs{tx}_\AA^{-1}\widehat{f}\br{t^{-1}x^{-1}a}}{x} = t^{s - 1}\intJI[E]{\sum_{a \in K^\times} \widehat{f}\br{t^{-1}x^{-1}a}}{x} + \kappa\widehat{f}\br{0}t^{s - 1} \\ & = t^{s - 1}\intJI{\widehat{f}\br{t^{-1}x^{-1}}}{x} + \kappa\widehat{f}\br{0}t^{s - 1} = \zeta_{t^{-1}}\br{\widehat{f}, 1 - s} + \kappa\widehat{f}\br{0}t^{s - 1}, \end{align*} since $ \abs{x}_\AA = 1 $ on $ E $. \end{proof} \begin{proof}[Proof of \ref{thm:9.11}] Now, if $ \Re s > 1 $, \begin{align*} \zeta\br{f, s} = \intd{0}{\infty}{\dfrac{\zeta_t\br{f, s}}{t}}{t} & = \intd{1}{\infty}{\dfrac{\zeta_t\br{f, s}}{t}}{t} + \intd{0}{1}{\dfrac{\zeta_t\br{f, s}}{t}}{t} = \intd{1}{\infty}{\dfrac{\zeta_t\br{f, s} + \zeta_{t^{-1}}\br{f, s}}{t}}{t} \\ & = \intd{1}{\infty}{\dfrac{\zeta_t\br{f, s} + \zeta_t\br{\widehat{f}, 1 - s} - \kappa f\br{0}t^{-s} + \kappa\widehat{f}\br{0}t^{1 - s}}{t}}{t} \\ & = \intd{1}{\infty}{\dfrac{\zeta_t\br{f, s} + \zeta_t\br{\widehat{f}, 1 - s}}{t}}{t} + \kappa\br{\dfrac{\widehat{f}\br{0}}{s - 1} - \dfrac{f\br{0}}{s}}. 
\end{align*}
Say $ f \in \SSS\br{\AA_K} $ is such that $ f = f_\infty f^\infty $ for $ f_\infty = \bigotimes_{v \mid \infty} f_v \in \SSS\br{K_\infty} $ and $ f^\infty = \bigotimes_{v \nmid \infty} f_v \in \SSS\br{\widehat{K}} $, which has compact support. So if $ x \in \JJ_K^1 $ and $ f^\infty\br{x} \ne 0 $, then there exists a finite $ S \subset \V_{K, \f} $ such that if $ v \in \V_{K, \f} \setminus S $ then $ f_v = \1_{\OOO_v} $ so $ \abs{x_v}_v \le 1 $, and if $ v \in S $ then $ \abs{x_v}_v \le c_v $. As $ \prod_v \abs{x_v}_v = \abs{x}_\AA = 1 $, $ \prod_{v \mid \infty} \abs{x_v}_v \ge c' = \br{\prod_{v \nmid \infty} c_v}^{-1} > 0 $, and, for some constant $ c > 0 $ depending only on $ f^\infty $, $$ \intJI{f\br{tx}}{x} \le c\int_{\prod_{v \mid \infty} \abs{x_v}_v \ge c'} f_\infty\br{tx} \d^\times x = c\int_{\prod_{v \mid \infty} \abs{x_v}_v \ge tc'} f_\infty\br{x} \d^\times x \to 0 $$ rapidly as $ t \to \infty $, so $ \zeta_t\br{f, s} $ is rapidly decreasing as $ t \to \infty $. That implies that $$ \intd{1}{\infty}{\dfrac{\zeta_t\br{f, s}}{t}}{t} = \lim_{T \to \infty} \intd{1}{T}{\dfrac{\zeta_t\br{f, s}}{t}}{t}, $$ with uniform limit for $ \sigma_1 \le \Re s \le \sigma_2 $, is an analytic function for all $ s \in \CC $, which gives a meromorphic continuation of $ \zeta\br{f, s} $ with at worst simple poles at $ s = 0, 1 $, and $ \zeta\br{f, s} = \zeta\br{\widehat{f}, 1 - s} $.
\end{proof}
Morally, $ \zeta_t\br{f, s} $ is $ \Theta $ deprived of the constant term.
\pagebreak
\subsection{Proof of Poisson summation formula}
\lecture{21}{Tuesday}{09/03/21}
Start off with the classical Poisson formula.
\begin{itemize}
\item If $ f \in \SSS\br{\RR} $, then $$ \sum_{m \in \ZZ} f\br{m} = \sum_{n \in \ZZ} \widehat{f}\br{n}, $$ since $ g\br{x} = \sum_{m \in \ZZ} f\br{x + m} : \RR / \ZZ \to \CC $ has Fourier expansion $ g\br{x} = \sum_{n \in \ZZ} c_ne^{2\pi inx} $ with $$ c_n = \intd{0}{1}{e^{-2\pi inx}g\br{x}}{x} = \intd{0}{1}{\sum_{m \in \ZZ} e^{-2\pi inx}f\br{x + m}}{x} = \intd{-\infty}{\infty}{e^{-2\pi inx}f\br{x}}{x} = \widehat{f}\br{n}, $$ so $$ \sum_m f\br{m} = g\br{0} = \sum_n c_n = \sum_n \widehat{f}\br{n}. $$ Similarly for $ f \in \SSS\br{\RR^k} $, $$ \sum_{m \in \ZZ^k} f\br{m} = \sum_{n \in \ZZ^k} \widehat{f}\br{n}, $$ by the same proof.
\end{itemize}
One method is abstract Fourier analysis.
\begin{itemize}
\item Let $ G $ be a locally compact abelian group, and let $ H $ be a countable discrete subgroup such that $ G / H $ is compact. If $ f $ is a nice function on $ G $, then $$ \function[\widehat{f}]{\widehat{G} = \Hom_{\cts}\br{G, \U\br{1}}}{\CC}{\chi}{\intd{G}{}{\chi\br{x}^{-1}f\br{x}}{x}}. $$ Then $ \widehat{G / H} $ is discrete, and $$ \sum_{h \in H} f\br{h} = \sum_{\chi \in \widehat{G / H}} \widehat{f}\br{\chi}\meas\br{G / H}^{-1}, $$ with proof the same as for $ \br{\RR, \ZZ} $. Apply with $ G = \AA_K $ and $ H = K $, where $ G \cong \widehat{G} $, via $ \psi_\AA $, and $ \widehat{G / H} \cong H $.
\end{itemize}
The following is a more basic proof.
\begin{proof}[Proof of \ref{thm:9.14}]
\hfill
\begin{itemize}
\item Let $ V $ be a real vector space with $ \dim V < \infty $ and $ \d x $ an invariant measure, let $ \Lambda \subset V $ be a lattice with $ \mu = \meas\br{V / \Lambda} < \infty $, and let $$ V' = \Hom\br{V, \RR} \supset \Lambda' = \Hom\br{\Lambda, \ZZ} = \cbr{y \in V' \st \forall x \in \Lambda, \ \abr{x, y} \in \ZZ}. $$ If $ f \in \SSS\br{V} $, then $ \widehat{f} \in \SSS\br{V'} $ and $$ \widehat{f}\br{y} = \intd{V}{}{e^{-2\pi i\abr{x, y}}f\br{x}}{x}.
$$ Then $$ \sum_{x \in \Lambda} f\br{x} = \mu^{-1}\sum_{y \in \Lambda'} \widehat{f}\br{y}, $$ since scaling $ \d x $, may assume $ \mu = 1 $, then fix $ \ZZ^k \xrightarrow{\sim} \Lambda $, so $ \RR^k \cong V \cong V' $ and this reduces to the previous Poisson summation for $ \br{\RR^k, \ZZ^k} $. \pagebreak \item A special case is a fractional ideal $ \aaa \subset K $. Suppose $ f \in \SSS\br{\AA_K} $ such that $ f = f_\infty \otimes f_\aaa $ for $ f_\infty \in \SSS\br{K_\infty} $ and $ f_\aaa : \widehat{K} \to \CC $ the characteristic function of $ \aaa\widehat{\OOO_K} = \prod_{v \nmid \infty} \aaa\OOO_v \subset \prod_{v \nmid \infty} K_v $. Then $$ \widehat{f} = \widehat{f_\infty} \otimes \abs{\d_K}^{-\tfrac{1}{2}}\N\br{\aaa}^{-1}f_\bbb, \qquad \bbb = \DDD_{K / \QQ}^{-1}\aaa^{-1}, $$ by the local computation of $ \widehat{\1_{\pi^n\OOO_F}} $. Now $ \sigma : \aaa \hookrightarrow K_\infty $. On $ K_\infty $ we have the trace form $ \Tr_{K_\infty / \RR}\br{xy} $ identifying $ K_\infty $ with its dual, and by definition of $ \DDD_{K / \QQ} $, the dual of $ \aaa $ is $ \bbb $. Moreover, the covolume of $ \sigma\br{\aaa} $ is $ \abs{\d_K}^{1 / 2}\N\br{\aaa} $. So $$ \sum_{x \in K} f\br{x} = \sum_{x \in \aaa} f_\infty\br{x} = \abs{\d_K}^{-\tfrac{1}{2}}\N\br{\aaa}^{-1}\sum_{y \in \bbb} \widehat{f_\infty}\br{y} = \sum_{y \in \bbb} \widehat{f}\br{y}, $$ by the Poisson summation for lattices. \item For the general case, every element of $ \SSS\br{\AA_K} $ is a sum of functions $ g\br{x} = f\br{x + a} $ where $ f = f_\infty \otimes f_\aaa $ as above and $ a \in \widehat{K} $. By strong approximation, may assume $ a \in K $. Then $$ \widehat{g}\br{y} = \intA{\psi_\AA\br{xy}f\br{x + a}}{x} = \psi_\AA\br{ay}^{-1}\widehat{f}\br{y}, $$ and by the previous, $$ \sum_{x \in K} g\br{x} = \sum_{x \in K} f\br{x} = \sum_{y \in K} \widehat{f}\br{y} = \sum_{y \in K} \psi_\AA\br{ay}\widehat{g}\br{y} = \sum_{y \in K} \widehat{g}\br{y}, $$ as $ \eval{\psi_\AA}_K = 1 $. \end{itemize} \end{proof} \subsection{Proof of functional equation and analytic class number formula} Now use the functional equation of $ \zeta\br{f, s} $ to deduce the same for $ \zeta_K\br{s} $. \begin{proof}[Proof of \ref{thm:9.3}.$ 1 $] Choose $$ f_v = \begin{cases} e^{-\pi x^2} & v \ \text{real} \\ \dfrac{1}{\pi}e^{-2\pi z\overline{z}} & v \ \text{complex} \\ \1_{\OOO_v} & v \ \text{finite} \end{cases}, \qquad \widehat{f_v} = \begin{cases} e^{-\pi x^2} & v \ \text{real} \\ \dfrac{1}{\pi}e^{-2\pi z\overline{z}} & v \ \text{complex} \\ \q_v^{-\tfrac{\delta_v}{2}}\1_{\DDD_{K_v / \QQ_p}^{-1}} & v \ \text{finite} \end{cases}, $$ by \ref{prop:9.5}. By \ref{prop:9.8}, $$ \zeta\br{f, s} = \Gamma_\RR\br{s}^{\r_1}\Gamma_\CC\br{s}^{\r_2}\prod_{v \nmid \infty} \dfrac{1}{1 - \q_v^{-s}}. $$ If $ v \mid \infty $, then $ \zeta\br{\widehat{f_v}, 1 - s} = \zeta\br{f_v, 1 - s} $. If $ v $ is finite, $$ \zeta\br{\widehat{f_v}, 1 - s} = \q_v^{-\tfrac{\delta_v}{2}}\dfrac{\q_v^{\delta_v\br{1 - s}}}{1 - \q_v^{-\br{1 - s}}} = \q_v^{\delta_v\br{\tfrac{1}{2} - s}}\zeta\br{f_v, 1 - s}. $$ Thus $$ \Z_K\br{s} = \abs{\d_K}^{\tfrac{s}{2}}\zeta\br{f, s} = \abs{\d_K}^{\tfrac{s}{2}}\zeta\br{\widehat{f}, 1 - s} = \abs{\d_K}^{\tfrac{s}{2} + \br{\tfrac{1}{2} - s}}\zeta\br{f, 1 - s} = \Z_K\br{1 - s}, $$ giving all of \ref{thm:9.3}.$ 1 $. \end{proof} \pagebreak For part $ 2 $, have to compute $ \kappa = \meas\br{\CCC_K^1} $. \lecture{22}{Thursday}{11/03/21} \begin{theorem} \label{thm:9.16} $$ \kappa = \dfrac{2^{\r_1}\br{2\pi}^{\r_2}\h_K\R_K}{\w_K}. 
$$
\end{theorem}
\begin{proof}
Replacing $ \JJ_K^1 $ by $ \JJ_K = \JJ_K^1 \times \i\br{\RR_{> 0}} $, by \ref{lem:9.12}.$ 2 $,
\begin{align*}
\meas\br{\CCC_K^1} & = \meas\br{\CCC_K^1 \times \RR_{> 0} / \abr{e}} & \intd{1}{e}{\dfrac{1}{t}}{t} = 1 \\
& = \meas\br{\CCC_K / \abr{\i\br{e}}} & \dJ x = \dJI y \times \dfrac{1}{t}\d t \\
& = \h_K\meas\br{\JJ_{K, \emptyset} / \OOO_K^\times\abr{\i\br{e}}} & 1 \to \JJ_{K, \emptyset} / \OOO_K^\times \to \CCC_K \to \Cl\br{K} \to 1 \\
& = \dfrac{\h_K}{\w_K}\meas\br{\JJ_{K, \emptyset} / \abr{\epsilon_1, \dots, \epsilon_r, \i\br{e}}} & \OOO_K^\times = \mu\br{K} \times \abr{\epsilon_1, \dots, \epsilon_r} \\
& = \dfrac{\h_K}{\w_K}\meas\br{K_\infty^\times / \abr{\epsilon_1, \dots, \epsilon_r, \i\br{e}}} & \meas\br{\widehat{\OOO_K}^\times} = \prod_{v \nmid \infty} \meas\br{\OOO_v^\times} = 1.
\end{align*}
Recall that $ K_\infty^\times = \prod_{v \mid \infty} K_v^\times $.
\begin{itemize}
\item If $ v $ is real, there is an isomorphism $$ \functions{K_v^\times = \RR^\times}{\cbr{\pm 1} \times \RR}{x}{\br{\sign x, \log \abs{x}_v}}{\dv^\times x}{\mu \times \d y}, $$ where $ \mu $ is the counting measure.
\item If $ v $ is complex, there is an isomorphism $$ \functions{K_v^\times \cong \CC^\times}{\U\br{1} \times \RR}{z = re^{i\theta}}{\br{e^{i\theta}, 2\log r}}{\dv^\times z = \dfrac{1}{\abs{z}_v}\d_\CC z = \dfrac{1}{r^2}2r\d r\d\theta}{\d\theta \times \d y}, $$ where $ y = 2\log r $.
\end{itemize}
Then $$ \begin{tikzcd} 1 \arrow{r} & \cbr{\pm 1}^{\r_1} \times \U\br{1}^{\r_2} \arrow{r} \arrow[cong]{d} & K_\infty^\times \arrow{r}{\lambda = \br{\log \abs{\cdot}_v}_v} \arrow{d} & \LLL_K \arrow{r} \arrow{d} & 0 \\ 1 \arrow{r} & \cbr{\pm 1}^{\r_1} \times \U\br{1}^{\r_2} \arrow{r} & K_\infty^\times / \abr{\epsilon_1, \dots, \epsilon_r, \i\br{e}} \arrow{r}[swap]{\lambda} & \LLL_K / \Lambda \arrow{r} & 0 \end{tikzcd}, $$ where $ \Lambda = \abr{\lambda\br{\epsilon_1}, \dots, \lambda\br{\epsilon_r}, \lambda\br{\i\br{e}}} \subset \LLL_K $ is a lattice, by the unit theorem, and $$ \lambda\br{\i\br{e}} = \br{\log \abs{e^{\tfrac{1}{n}}}_v}_v = \br{\dfrac{\e_v}{n}}_v, \qquad \e_v = \begin{cases} 1 & v \ \text{real} \\ 2 & v \ \text{complex} \end{cases}. $$ Then $$ \meas\br{\cbr{\pm 1}^{\r_1} \times \U\br{1}^{\r_2}} = 2^{\r_1}\br{2\pi}^{\r_2}, $$ and $ \meas\br{\LLL_K / \Lambda} $ is the absolute value of the determinant of the $ \br{r + 1} \times \br{r + 1} $ matrix with rows $$ \br{\dfrac{\e_v}{n}, \log \abs{\epsilon_1}_v, \dots, \log \abs{\epsilon_r}_v}, \qquad v \in \V_{K, \infty}. $$ The sum of the rows is $ \br{1, 0, \dots, 0} $, as $ \abs{\epsilon_j}_\AA = 1 $. So the determinant, up to $ \pm 1 $, is any $ \br{r \times r} $-minor of the matrix $ \br{\log \abs{\epsilon_j}_v}_{j, v} $, so $$ \meas\br{\LLL_K / \Lambda} = \R_K. $$
\end{proof}
\pagebreak
\begin{proof}[Proof of \ref{thm:9.3}.$ 2 $]
Since $ f_\CC\br{z} = \tfrac{1}{\pi}e^{-2\pi z\overline{z}} $, $$ -\pi^{-\r_2}\kappa = -f\br{0}\kappa = \Res_{s = 0} \zeta\br{f, s} = \Res_{s = 0} \Z_K\br{s} = \lim_{s \to 0} s\br{\dfrac{2}{s}}^{\r_1 + \r_2}\zeta_K\br{s}, $$ as $ \Gamma_\RR\br{s} \sim 2 / s \sim \Gamma_\CC\br{s} $ since $ \Gamma\br{s} \sim 1 / s $ at $ s = 0 $, so $$ \lim_{s \to 0} s^{-r}\zeta_K\br{s} = -2^{-\r_1}\br{2\pi}^{-\r_2}\kappa = -\dfrac{\h_K\R_K}{\w_K}, \qquad r = \r_1 + \r_2 - 1, $$ by \ref{thm:9.16}.
\end{proof}
\begin{remark*}
A criticism is that this method only tells us about $ \zeta_K\br{s} $, as for almost all $ v $, $ f_v = \1_{\OOO_v} $ and $ \zeta\br{f_v, s} = 1 / \br{1 - \q_v^{-s}} $. Next is to generalise to $ \L $-functions.
\end{remark*}
\subsection{Description of \texorpdfstring{$ E \subset \JJ_K^1 $}{the fundamental domain}}
Having completed the proof, we exhibit an explicit $ E \subset \JJ_K^1 $ such that $$ \JJ_K^1 = \bigsqcup_{a \in K^\times} aE. $$ Let $ y_1, \dots, y_h \in \JJ_K^1 $, where $ h = \h_K = \#\Cl\br{K} $, be coset representatives for $ \JJ_{K, \emptyset}^1 / \OOO_K^\times \subset \CCC_K^1 $. We will find $ E_0 \subset \JJ_{K, \emptyset}^1 $ such that $$ \JJ_{K, \emptyset}^1 = \bigsqcup_{a \in \OOO_K^\times} aE_0. $$ Then $$ E = \bigsqcup_{i = 1}^h y_iE_0 $$ will do. Let $$ \PPP = \cbr{\sum_{j = 1}^r t_j\lambda\br{\epsilon_j} \st t_j \in \intco{0, 1}} \subset \LLL_K^0 $$ be a set of coset representatives for $ \abr{\lambda\br{\epsilon_1}, \dots, \lambda\br{\epsilon_r}} \subset \LLL_K^0 $, so $$ E_1 = \lambda^{-1}\br{\PPP} \times \widehat{\OOO_K}^\times $$ is a set of coset representatives for $ \abr{\epsilon_1, \dots, \epsilon_r} $ in $ K_\infty^{\times, 1} \times \widehat{\OOO_K}^\times = \JJ_{K, \emptyset}^1 $. Let $ v_0 \in \V_{K, \infty} $, assumed complex if $ \w_K > 2 $. Then $$ E_0 = \cbr{x \in E_1 \st \arg x_{v_0} \in \intco{0, \dfrac{2\pi}{\w_K}}}, $$ and it is clear that this works. If $ v_0 $ is real and $ \w_K = 2 $, this says $ x_{v_0} > 0 $.
\pagebreak
\section{\texorpdfstring{$ \L $}{L}-functions}
\begin{example*}
A \textbf{Dirichlet character} is a homomorphism $ \phi : \br{\ZZ / N\ZZ}^\times \to \CC^\times $. The \textbf{Dirichlet $ \L $-series} is $$ \L\br{\phi, s} = \sum_{n \ge 1, \ \br{n, N} = 1} \dfrac{\phi\br{n}}{n^s} = \prod_{p \nmid N} \dfrac{1}{1 - \phi\br{p}p^{-s}}, $$ which occurs in the theorem on primes in arithmetic progressions. Then one gets a continuous $$ \chi : \CCC_\QQ \cong \RR_{> 0} \times \widehat{\ZZ}^\times \to \widehat{\ZZ}^\times \to \prod_{p \mid N} \br{\ZZ_p / N\ZZ_p}^\times \cong \br{\ZZ / N\ZZ}^\times \xrightarrow{\phi} \CC^\times. $$
\end{example*}
\begin{exercise*}
$$ \correspondence{\text{continuous} \ \chi : \CCC_\QQ \to \CC^\times \\ \text{of finite order}}{\text{Dirichlet characters} \ \phi : \br{\ZZ / N\ZZ}^\times \to \CC^\times \\ \text{which are primitive}}, $$ where $ \phi $ is \textbf{primitive} if it does not factor $$ \br{\ZZ / N\ZZ}^\times \xrightarrow{\mod M} \br{\ZZ / M\ZZ}^\times \to \CC^\times, \qquad M \mid N, \qquad M < N. $$
\end{exercise*}
\subsection{Hecke characters}
\begin{definition*}
An \textbf{idele class character}, or \textbf{Hecke character}, of $ K $ is a continuous homomorphism $ \chi : \CCC_K \to \CC^\times $.
\end{definition*}
Note that we do not require $ \abs{\chi} = 1 $. In Tate, these are called \textbf{quasi-characters}.
\begin{example*}
A simple but important example is $$ \chi\br{x} = \abs{x}_\AA^s, \qquad s \in \CC, $$ as $ \abs{K^\times}_\AA = 1 $. For $ K = \QQ $, every Hecke character is $ \abs{\cdot}_\AA^s $ times a finite order $ \chi $. But for $ K \ne \QQ $, there exist lots of other interesting ones.
\end{example*}
\begin{proposition}
\label{prop:10.1}
Let $ G $ be a profinite group. Then any continuous homomorphism $ \chi : G \to \CC^\times $ has open kernel, so finite image, that is it is continuous for the discrete topology on $ \CC^\times $.
\end{proposition}
\begin{proof}
$ \chi\br{G} $ is compact so $ \chi\br{G} \subset \U\br{1} $. Let $$ V = \cbr{e^{i\theta} \in \U\br{1} \st -\dfrac{\pi}{2} < \theta < \dfrac{\pi}{2}} = \U\br{1} \cap \cbr{\Re z > 0}. $$ Then $ \chi^{-1}\br{V} \subset G $ is an open neighbourhood of the identity, so contains an open subgroup $ H \subset G $.
Then $ \chi\br{H} \subset V \subset \U\br{1} $ is a subgroup. But this implies $ \chi\br{H} = 1 $, since if $ 1 \ne z \in \U\br{1} $, some integer power $ z^n $ has $ \Re z^n \le 0 $.
\end{proof}
\lecture{23}{Saturday}{13/03/21}
\begin{corollary}
\label{cor:10.2}
\hfill
\begin{enumerate}
\item Let $ F / \QQ_p $, and let $ \chi : F^\times \to \CC^\times $ be continuous. Then there exists $ n \ge 0 $ such that $ \chi\br{x} = 1 $ for all $ x \in \br{1 + \pi^n\OOO_F} \cap \OOO_F^\times $. The least such $ n $ is the \textbf{conductor} of $ \chi $.
\item Let $ \chi : \JJ_K \to \CC^\times $ be a continuous homomorphism, and let $ \chi_v = \eval{\chi}_{K_v^\times} : K_v^\times \to \CC^\times $. Then,
\begin{enumerate}
\item for all but finitely many $ v \in \V_{K, \f} $, $ \chi_v $ is unramified, that is $ \chi_v\br{\OOO_v^\times} = 1 $, and
\item $ \chi\br{x} = \prod_{v \in \V_K} \chi_v\br{x_v} $, a finite product by $ \br{a} $, and conversely, if $ \br{\chi_v} $ is a family of continuous homomorphisms $ \chi_v : K_v^\times \to \CC^\times $ satisfying $ \br{a} $, their product $ \chi\br{x} = \prod_v \chi_v\br{x_v} $ is a well-defined continuous homomorphism $ \JJ_K \to \CC^\times $.
\end{enumerate}
\end{enumerate}
\end{corollary}
\pagebreak
\begin{proof}
\hfill
\begin{enumerate}
\item Apply \ref{prop:10.1} with $ G = \OOO_F^\times $.
\item \hfill
\begin{enumerate}
\item Apply \ref{prop:10.1} with $ G = \widehat{\OOO_K}^\times \subset \JJ_K $. Then $ \chi = 1 $ on an open subgroup of $ \widehat{\OOO_K}^\times = \prod_{v \nmid \infty} \OOO_v^\times $, so $ \eval{\chi}_{\OOO_v^\times} = 1 $ for all but finitely many $ v \in \V_{K, \f} $.
\item The same as \ref{prop:8.1}.$ 2 $, for $ \JJ_K \to \CC^\times $ discrete.
\end{enumerate}
\end{enumerate}
\end{proof}
So what is a continuous homomorphism $ F^\times \to \CC^\times $?
\begin{itemize}
\item Let $ F / \QQ_p $. If $ \chi : F^\times \to \CC^\times $ is unramified then it factors $$ F^\times \xrightarrow{\abs{\cdot}_F} q^\ZZ \xrightarrow{q \mapsto q^s} \CC^\times, \qquad s \in \CC, $$ unique modulo $ \br{2\pi i / \log q}\ZZ $, that is $ \chi\br{x} = \abs{x}_F^s $. In general, $ \chi_1\br{x} = \chi\br{x} / \chi\br{\pi}^{\v\br{x}} $ factors $$ F^\times \to F^\times / \abr{\pi} \cong \OOO_F^\times \to \CC^\times, $$ which has finite image by \ref{cor:10.2}.$ 1 $, and $ \chi / \chi_1 $ is unramified as $ \eval{\chi}_{\OOO_F^\times} = \eval{\chi_1}_{\OOO_F^\times} $, that is $ \chi = \chi_1\abs{\cdot}_F^s $, where $ \chi_1\br{\pi} = 1 $, so $ \chi_1 $ has finite order.
\item Let $ F / \RR $. Then $$ F^\times = \begin{cases} \cbr{\pm 1} \times \RR_{> 0} & F = \RR \\ \U\br{1} \times \RR_{> 0} & F = \CC \end{cases}, $$ and \footnote{Exercise} $$ \Hom_{\cts}\br{\RR_{> 0}, \CC^\times} = \cbr{x \mapsto x^s \st s \in \CC} \cong \CC. $$ So continuous homomorphisms $ \chi : F^\times \to \CC^\times $ are $$ \chi = \begin{cases} x \mapsto \abs{x}^s \ \text{and} \ x \mapsto \sign x\abs{x}^s & F = \RR \\ z \mapsto \br{\dfrac{z}{\abs{z}^{\tfrac{1}{2}}}}^n\abs{z}^s \ \text{for} \ n \in \ZZ & F = \CC \end{cases}, $$ so $ \chi = \chi_1\abs{\cdot}_F^s $ where $ \eval{\chi_1}_{\RR_{> 0}} = 1 $.
\end{itemize}
Globally, we have the following.
\begin{proposition}
Let $ \chi : \CCC_K \to \CC^\times $. Then there is a unique decomposition $ \chi = \chi_1\abs{\cdot}_\AA^s $ with $ s \in \CC $ and $ \eval{\chi_1}_{\RR_{> 0}} = 1 $. Moreover, $ \chi_1\br{\JJ_K} \subset \U\br{1} $.
\end{proposition}
\begin{proof}
There exists a unique $ s \in \CC $ such that for all $ x \in \RR_{> 0} \subset \JJ_K $, $ \chi\br{x} = \abs{x}^s = \abs{x}_\AA^s $. Then $ \chi_1 = \chi\abs{\cdot}_\AA^{-s} $ is trivial on $ K^\times\RR_{> 0} $. As $ \CCC_K / \RR_{> 0} $ is compact, $ \chi_1\br{\JJ_K} \subset \U\br{1} $.
\end{proof}
The following is the relation between the local $ s_v $ and the global $ s $.
\begin{proposition}
Let $ \chi = \prod_v \chi_v : \CCC_K \to \CC^\times $ such that $ \chi = \chi_1\abs{\cdot}_\AA^s $ and $ \chi_v = \chi_{v, 1}\abs{\cdot}_v^{s_v} $ as above. Then $ \Re s = \Re s_v $ for all $ v $.
\end{proposition}
\begin{proof}
Let $ x \in K_v^\times \subset \JJ_K $. Then as $ \abs{\chi_{v, 1}} = 1 $ and $ \abs{\chi_1} = 1 $, $$ \abs{x}_v^{\Re s_v} = \abs{\chi_v\br{x}} = \abs{\chi\br{x}} = \abs{x}_\AA^{\Re s} = \abs{x}_v^{\Re s}. $$
\end{proof}
Note that if $ s = 0 $, we need not have $ s_v = 0 $: if $ \chi_v $ is unramified, then $ \chi_v\br{\pi_v} = \q_v^{-s_v} $, which is usually $ \ne 1 $.
\pagebreak
\subsection{Hecke \texorpdfstring{$ \L $}{L}-functions}
\begin{definition*}
Let $ \chi = \prod_v \chi_v : \CCC_K \to \CC^\times $ be a Hecke character, and let $$ S = \V_{K, \infty} \cup \cbr{v \in \V_{K, \f} \st \chi_v \ \text{is ramified}}. $$ The \textbf{Hecke $ \L $-series} or \textbf{Hecke $ \L $-function} of $ \chi $ is $$ \L\br{\chi, s} = \prod_{v \notin S} \dfrac{1}{1 - \chi_v\br{\pi_v}\q_v^{-s}}, $$ which does not depend on the choice of $ \pi_v $.
\end{definition*}
\begin{remark*}
\hfill
\begin{itemize}
\item If $ \chi = 1 $, then $ \L\br{\chi, s} = \zeta_K\br{s} $.
\item If $ K = \QQ $ and $ \eval{\chi}_{\RR_{> 0}} = 1 $, that is $ \chi $ is of finite order, then $ \L\br{\chi, s} $ is a Dirichlet $ \L $-series. \footnote{Exercise}
\item If $ t \in \CC $, then $ \L\br{\chi\abs{\cdot}_\AA^t, s} = \L\br{\chi, s + t} $ as $ \abs{\pi_v}_v = \q_v^{-1} $. So there is a redundancy in the definition. We can get all $ \L $-functions if we either
\begin{itemize}
\item restrict to $ s = 0 $, since $ \L\br{\chi, s} = \L\br{\chi\abs{\cdot}_\AA^s, 0} $, or
\item restrict to $ \chi $ with $ \eval{\chi}_{\RR_{> 0}} = 1 $, using $ \L\br{\chi\abs{\cdot}_\AA^t, s} = \L\br{\chi, s + t} $, in which case $ \chi $ is in particular unitary.
\end{itemize}
Both are useful.
\end{itemize}
\end{remark*}
\begin{proposition}
If $ \eval{\chi}_{\RR_{> 0}} = 1 $, and more generally, if $ \abs{\chi} = 1 $, then $ \L\br{\chi, s} $ converges absolutely for $ \Re s > 1 $.
\end{proposition}
\begin{proof}
Since $ \abs{\chi_v\br{\pi_v}} = 1 $, this follows by comparison with $ \zeta_K\br{s} $.
\end{proof}
The following is the main theorem.
\begin{theorem}[Functional equation for Hecke $ \L $-function]
\label{thm:10.6}
Let $ \chi $ be a Hecke character.
\begin{itemize}
\item There exist $ a_v \in \CC $ for $ v \in \V_{K, \infty} $ and $ \epsilon\br{\chi, s} = AB^s $ for some $ A \in \CC^\times $ and $ B > 0 $ such that if $$ \Lambda\br{\chi, s} = \prod_{v \mid \infty} \Gamma_{K_v}\br{s + a_v}\L\br{\chi, s}, $$ then $ \Lambda\br{\chi, s} $ has a meromorphic continuation to $ \CC $, and $$ \Lambda\br{\chi, s} = \epsilon\br{\chi, s}\Lambda\br{\chi^{-1}, 1 - s}. $$ If $ \chi \ne \abs{\cdot}_\AA^t $ for some $ t \in \CC $, then $ \Lambda\br{\chi, s} $ is entire.
\item $$ \epsilon\br{\chi, s} = \prod_v \epsilon_v\br{\chi_v, s}, $$ where the \textbf{local $ \epsilon $-factors} are $ \epsilon_v\br{\chi_v, s} = 1 $ for all but finitely many $ v $, and each $ \epsilon_v\br{\chi_v, s} $ depends only on $ \chi_v $.
\end{itemize}
\end{theorem}
\begin{remark*}
If $ \chi = \abs{\cdot}_\AA^t $, then $ \Lambda\br{\chi, s} = \Z_K\br{s + t} $ and we know the poles and residues.
\end{remark*}
\begin{itemize}[leftmargin=0.5in]
\item[$ K_v = \RR $.] If $ \chi_v = \abs{\cdot}_v^t $, then $ a_v = t $ and $ \epsilon_v\br{\chi_v, s} = 1 $. If $ \chi_v = \sign\abs{\cdot}_v^t $, then $ a_v = t + 1 $ and $ \epsilon_v\br{\chi_v, s} = -i $.
\item[$ K_v = \CC $.] If $ \chi_v = \br{z / \abs{z}_v^{1 / 2}}^n\abs{z}_v^t $ for $ n \in \ZZ $, then $ a_v = t + \abs{n} / 2 $ and $ \epsilon_v\br{\chi_v, s} = i^{-\abs{n}} $.
\pagebreak
\item[$ K_v / \QQ_p $.] If $ \chi_v $ is unramified, $$ \epsilon_v\br{\chi_v, s} = \begin{cases} 1 & K_v / \QQ_p \ \text{is unramified, so} \ \delta_v = 0 \\ \q_v^{\delta_v\br{\tfrac{1}{2} - s}}\chi_v\br{\pi_v}^{\delta_v} & \text{in general} \end{cases}. $$ If $ \chi_v $ is ramified, $$ \epsilon_v\br{\chi_v, s} = \intv[K_v^\times]{\chi_v\br{x}^{-1}\abs{x}_v^{-s}\psi_v\br{x}}{x} = \sum_n \intv[\pi_v^{-n}\OOO_v^\times]{\chi_v\br{x}^{-1}\abs{x}_v^{-s}\psi_v\br{x}}{x}, $$ which is a Gauss sum, and in fact the integral is non-zero only for $ n = \delta_v + m_v $, where $ m_v $ is the conductor of $ \chi_v $.
\end{itemize}
\subsection{Global \texorpdfstring{$ \zeta $}{zeta}-integral}
\lecture{24}{Tuesday}{16/03/21}
\begin{definition*}
Let $ f \in \SSS\br{\AA_K} $. Then $$ \zeta\br{f, \chi, s} = \intJ{f\br{x}\chi\br{x}\abs{x}_\AA^s}{x} = \prod_v \intFX{f_v\br{x}\chi_v\br{x}\abs{x}_v^s}{x} = \prod_v \zeta_v\br{f_v, \chi_v, s}, \qquad f = \bigotimes_v f_v. $$
\end{definition*}
Can restrict to $ s = 0 $ at the cost of changing $ \chi $.
\begin{theorem}[Global functional equation for $ \zeta\br{f, \chi, s} $]
\hfill
\begin{itemize}
\item $$ \zeta\br{f, \chi, s} = \zeta\br{\widehat{f}, \chi^{-1}, 1 - s}, $$ meromorphic on $ \CC $.
\item If $ \chi \ne \abs{\cdot}_\AA^t $ for some $ t \in \CC $, then $ \zeta\br{f, \chi, s} $ is entire, so has no poles.
\end{itemize}
\end{theorem}
\begin{proof}
Modify the proof of \ref{thm:9.11} to include $ \chi $. Without loss of generality, $ \eval{\chi}_{\RR_{> 0}} = 1 $, by changing $ s $. Replace $ \zeta_t\br{f, s} $ by
\begin{align*}
\zeta_t\br{f, \chi, s} & = t^s\intJI{f\br{tx}\chi\br{x}}{x} = t^s\intJI[E]{\sum_{a \in K^\times} f\br{atx}\chi\br{x}}{x} \\
& = t^s\intJI[E]{\sum_{a \in K} f\br{atx}\chi\br{x}}{x} - f\br{0}t^s\intJI[E]{\chi\br{x}}{x},
\end{align*}
as $ \eval{\chi}_{K^\times} = 1 $ and $ \JJ_K^1 = \bigsqcup_{a \in K^\times} aE $.
\begin{itemize}
\item If $ \chi = 1 $, the latter integral is $ \kappa $ as before.
\item If $ \chi \ne 1 $, choosing $ b \in E $ such that $ \chi\br{b} \ne 1 $ and putting $ x \mapsto bx $, the latter integral is zero.
\end{itemize}
Then apply the Poisson summation and the rest of the proof as for \ref{thm:9.11}.
\end{proof}
To get the functional equation for $ \Lambda\br{\chi, s} $, we need a suitable $ f $. The following is the nicest way to see this.
\begin{theorem}[Local functional equation for $ \zeta\br{f, \chi, s} $]
\label{thm:10.8}
Let $ F $ be local, and let $ \chi : F^\times \to \CC^\times $. Then for all $ f \in \SSS\br{F} $, $$ \dfrac{\zeta\br{\widehat{f}, \chi^{-1}, 1 - s}}{\L\br{\chi^{-1}, 1 - s}} = \epsilon\br{\chi, s}\dfrac{\zeta\br{f, \chi, s}}{\L\br{\chi, s}}. $$ Here $ \L $ and $ \epsilon $ are the local factors from above, so for $ F / \RR $, these are $ \Gamma_F\br{s + a_F} $.
\end{theorem}
\begin{proof}[Proof of \ref{thm:10.6}]
Multiplying the local and global functional equations, we get the functional equation for $ \Lambda\br{\chi, s} $.
\end{proof}
\pagebreak
\begin{proposition}
\label{prop:10.9}
Let $ f, g \in \SSS\br{F} $. Then $$ \zeta\br{f, \chi, s}\zeta\br{\widehat{g}, \chi^{-1}, 1 - s} = \zeta\br{\widehat{f}, \chi^{-1}, 1 - s}\zeta\br{g, \chi, s}. $$
\end{proposition}
\begin{proof}
Changing variables $ t' = x $, $ x' = t $, $ y' = ty / x $, so $ x' / y' = x / y $ and $ yt = y't' $,
\begin{align*}
\zeta\br{f, \chi, s}\zeta\br{\widehat{g}, \chi^{-1}, 1 - s} & = \intFX{\intFX{f\br{x}\widehat{g}\br{y}\chi\br{\dfrac{x}{y}}\abs{\dfrac{x}{y}}_F^s\abs{y}_F}{x}}{y} \\
& = c\intFX[F]{\intFX{\intFX{f\br{x}g\br{t}\psi\br{yt}\chi\br{\dfrac{x}{y}}\abs{\dfrac{x}{y}}_F^s\abs{yt}_F}{x}}{y}}{t} \\
& = c\intFX{\intFX{\intFX[F]{f\br{t'}g\br{x'}\psi\br{y't'}\chi\br{\dfrac{x'}{y'}}\abs{\dfrac{x'}{y'}}_F^s\abs{y't'}_F}{t'}}{y'}}{x'} \\
& = \intFX{\intFX{\widehat{f}\br{y'}g\br{x'}\chi\br{\dfrac{x'}{y'}}\abs{\dfrac{x'}{y'}}_F^s\abs{y'}_F}{y'}}{x'} \\
% & = \intFX{\intFX{\intFX{f\br{x}g\br{yt}\psi\br{t}\chi\br{\dfrac{x}{y}}\abs{\dfrac{x}{y}}_F^s\abs{y}_F}{x}}{y}}{t} \\
& = \zeta\br{\widehat{f}, \chi^{-1}, 1 - s}\zeta\br{g, \chi, s}.
\end{align*}
\end{proof}
\begin{proof}[Proof of \ref{thm:10.8}]
\hfill
\begin{itemize}
\item The independence of $ f $ follows by \ref{prop:10.9}.
\item Just have to find a suitable $ f $, depending on $ \chi $, such that we can compute $ \zeta\br{f, \chi, s} $ and $ \zeta\br{\widehat{f}, \chi^{-1}, 1 - s} $. For $ \chi = 1 $ this was done earlier. For general $ \chi $, see example sheet $ 4 $.
\end{itemize}
\end{proof}
A special global case is when $ \L\br{\chi^{-1}, s} = \L\br{\chi, s + t} $, such as $ \chi^2 = 1 $. More generally, there exists $ g \in \Aut\br{K / \QQ} $ such that $ \chi^{-1} = \br{\chi \circ g}\abs{\cdot}_\AA^t $. For an example, see example sheet $ 4 $, question $ 8 $. Then $$ \Lambda\br{\chi, s} = \epsilon\br{\chi, s}\Lambda\br{\chi, 1 - s} = \epsilon\br{\chi, s}\epsilon\br{\chi, 1 - s}\Lambda\br{\chi, s}, $$ that is $ AB^sAB^{1 - s} = 1 $ so $ A^2 = B^{-1} > 0 $, so $$ \epsilon\br{\chi, s} = \w\br{\chi}B^{s - \tfrac{1}{2}}, $$ where $ \w\br{\chi} \in \cbr{\pm 1} $ is the \textbf{root number} and $$ \Lambda\br{\chi, s + \dfrac{1}{2}} = \w\br{\chi}B^s\Lambda\br{\chi, -s + \dfrac{1}{2}}. $$ Thus $ \w\br{\chi} $ determines the parity of the order of $ \Lambda\br{\chi, s} $ at $ s = \tfrac{1}{2} $.
\subsection{Artin \texorpdfstring{$ \L $}{L}-functions*}
Let $ \chi : \CCC_K \to \CC^\times $ be of finite order. Then by class field theory, $ \chi = \theta \circ \Art_{L / K} $ for some abelian $ L / K $ and $ \theta : \Gal\br{L / K} \hookrightarrow \CC^\times $. Then $$ \L\br{\chi, s} = \prod_{v \notin S} \dfrac{1}{1 - \theta\br{\Fr_v}\q_v^{-s}}, $$ where $ \Fr_v $ is the geometric Frobenius. The local factor at $ v \mid \infty $ is
\begin{itemize}
\item $ \Gamma_\CC\br{s} $ if $ v $ is complex, and
\item if $ v $ is real, $ \Gamma_\RR\br{s} $ if $ \theta\br{c} = 1 $ and $ \Gamma_\RR\br{s + 1} $ if $ \theta\br{c} = -1 $, where $ c $ is complex conjugation at $ v $.
\end{itemize}
This suggests trying to define $ \L\br{\rho, s} $ for any representation $ \rho : \Gal\br{L / K} \to \GL V $ for $ L / K $ Galois and $ V \cong \CC^d $. Thinking about $ \rho = \bigoplus_i \theta_i $ leads to the following.
\pagebreak
\begin{definition*}
The \textbf{Artin $ \L $-function} of $ \rho $ is $$ \L\br{\rho, s} = \prod_{v \in \V_{K, \f}} \L_v\br{\rho_v, s}, \qquad \L_v\br{\rho_v, s} = \L_v\br{\eval{\rho}_{\D_v}, s} = \det\br{1 - \rho\br{\Fr_v}\q_v^{-s} \st V^{\rho\br{\I_v}}}^{-1}, $$ which is well-defined on $ V^{\rho\br{\I_v}} $.
\begin{itemize}
\item For $ v $ complex, $ \L_v\br{\rho_v, s} = \Gamma_\CC\br{s}^d $.
\item For $ v $ real, $ \L_v\br{\rho_v, s} = \Gamma_\RR\br{s}^{d_+}\Gamma_\RR\br{s + 1}^{d_-} $, where $ d_\pm = \dim V^{\rho\br{c} = \pm 1} $.
\end{itemize}
\end{definition*}
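\begin{example*}
A standard illustration, not from the lecture: take $ \rho $ the regular representation of $ G = \Gal\br{L / K} $, so $ \rho = \Ind_{\Gal\br{L / L}}^{\Gal\br{L / K}} \1 $, and the following proposition gives $ \L\br{\rho, s} = \zeta_L\br{s} $. Decomposing $ \rho = \bigoplus_i \rho_i^{\oplus \dim \rho_i} $ over the irreducible representations $ \rho_i $ of $ G $, $$ \zeta_L\br{s} = \prod_i \L\br{\rho_i, s}^{\dim \rho_i}. $$ For $ L / K $ quadratic this reads $ \zeta_L\br{s} = \zeta_K\br{s}\L\br{\theta, s} $, where $ \theta $ is the nontrivial character of $ \Gal\br{L / K} $.
\end{example*}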
\begin{proposition}
\hfill
\begin{enumerate}
\item $ \L\br{\rho_1 \oplus \rho_2, s} = \L\br{\rho_1, s}\L\br{\rho_2, s} $.
\item If $ L / K_1 / K $ and $ \rho_1 : \Gal\br{L / K_1} \to \GL V $, then $ \L\br{\rho_1, s} = \L\br{\Ind_{\Gal\br{L / K_1}}^{\Gal\br{L / K}} \rho_1, s} $.
\end{enumerate}
\end{proposition}
\begin{proof}
\hfill
\begin{enumerate}
\item Obvious.
\item It is easy to check locally. At $ v \mid \infty $, this reduces to $ \Gamma_\RR\br{s}\Gamma_\RR\br{s + 1} = \Gamma_\CC\br{s} $, which explains the normalisation of $ \Gamma_\CC\br{s} $.
\end{enumerate}
\end{proof}
\begin{theorem}
$ \Lambda\br{\rho, s} = \prod_v \L\br{\rho_v, s} $ has a meromorphic continuation and a functional equation $$ \Lambda\br{\rho, s} = \epsilon\br{\rho, s}\Lambda\br{\rho^\vee, 1 - s}, $$ where $ \rho^\vee $ is the \textbf{contragredient representation} $ g \mapsto \rho\br{g^{-1}}^\intercal \in \GL V^* $.
\end{theorem}
Proof by reduction to the abelian case.
\begin{theorem}[Brauer]
Let $ G $ be a finite group, and let $ \rho : G \to \GL_d \CC $. Then there exist subgroups $ H_i \subset G $, homomorphisms $ \chi_i : H_i \to \CC^\times $, and integers $ m_i $, such that $$ \Tr \rho = \sum_i m_i\chi_i, $$ that is $$ \rho \oplus \sum_{m_i < 0} -m_i\chi_i = \sum_{m_i \ge 0} m_i\chi_i. $$
\end{theorem}
Then $$ \L\br{\rho, s} = \prod_i \L\br{\chi_i, s}^{m_i}. $$ Some $ m_i $ may be negative, so there is no control over poles.
\begin{conjecture}[Artin conjecture]
If $ \rho $ does not contain the trivial representation, then $ \L\br{\rho, s} $ is entire.
\end{conjecture}
Mostly still unsolved, now viewed as a problem in the Langlands programme, or non-abelian class field theory. The status is
\begin{itemize}
\item true if $ \dim V = 1 $, so Hecke $ \L $-functions, where $ \rho $ is $ \chi : \CCC_K \to \CC^\times $,
\item true if all $ m_i \ge 0 $, such as if $ G $ is a nilpotent group, and
\item true if $ \dim V = 2 $ and either
\begin{itemize}
\item $ \im \rho \subset \GL_2 \CC $ is solvable, using automorphic base change, or
\item $ K $ is totally real and $ \rho\br{c} \sim \twobytwosmall{-1}{0}{0}{1} $ for all complex conjugations $ c \in \Gal\br{L / K} $, using the proof of Serre's conjecture and generalisations to totally real fields, that is lots of automorphic theory, modularity lifting theorems, etc,
\end{itemize}
where $ \rho $ corresponds to an automorphic representation $ \pi $ of $ \GL_d \AA_K $.
\end{itemize}
Ignore the comment in Neukirch's book, where he says the conjecture is true for solvable extensions.
\end{document}
{ "alphanum_fraction": 0.6009429058, "avg_line_length": 61.0840851678, "ext": "tex", "hexsha": "0aa3db4409cdb90448282cfd1845de38004e1548", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0460d7892a9375886a9c69ad79c116c6709fc992", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Multramate/Cam-GANT", "max_forks_repo_path": "Algebraic Number Theory/ANT.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0460d7892a9375886a9c69ad79c116c6709fc992", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Multramate/Cam-GANT", "max_issues_repo_path": "Algebraic Number Theory/ANT.tex", "max_line_length": 1090, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0460d7892a9375886a9c69ad79c116c6709fc992", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Multramate/Cam-GANT", "max_stars_repo_path": "Algebraic Number Theory/ANT.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-03T15:46:22.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-10T07:45:29.000Z", "num_tokens": 70082, "size": 169264 }
\documentclass{llncs} % base font size 12pt, two-sided
% standard packages
% correct encoding for the different compilers
\usepackage{iftex}
\ifPDFTeX
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\else
\ifXeTeX
\usepackage{fontspec}
\else
\usepackage{luatextra}
\fi
\defaultfontfeatures{Ligatures=TeX}
\fi
% English hyphenation
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{cite}
\usepackage{float}
\usepackage{subfig}
\usepackage{changepage}
% include graphics
\usepackage{graphicx}
\graphicspath{{../figures/}}
\usepackage{hyperref}
% depth of the table of contents
\setcounter{tocdepth}{2}
\usepackage{listings}
\usepackage{color}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{frame=tb,
  language=Python,
  aboveskip=3mm,
  belowskip=3mm,
  showstringspaces=false,
  columns=flexible,
  basicstyle={\small\ttfamily},
  numbers=none,
  numberstyle=\tiny\color{gray},
  keywordstyle=\color{blue},
  commentstyle=\color{dkgreen},
  stringstyle=\color{mauve},
  breaklines=true,
  breakatwhitespace=true,
  tabsize=3
}
\begin{document}
\title{Exploration of Abalone game-playing agents}
\author{Ture Claußen, 202132027, \email{[email protected]}}
\authorrunning{T. Claußen}
\institute{Dept. of Software and Computer Engineering, Ajou University}
% here we go
{\def\addcontentsline#1#2#3{}\maketitle} % needed so that the title does not appear in the table of contents
\begin{abstract}
Perfect information games provide a good environment for artificial agents to navigate in, as they have a clear performance measure for comparison with each other and with humans. Their determinism removes some of the engineering problems of agents in the physical world. In the following we implement and compare alpha-beta pruning and Monte Carlo Tree Search for the game Abalone, to come to a conclusion about their resource consumption and performance.
\keywords{AI \and Alpha-beta \and Monte-Carlo-Tree-Search \and Abalone \and Intelligent Agents}
\end{abstract}
\section{Introduction}
Abalone is a fairly new game that was devised in 1987 by Michel Lalet and Laurent Lévi. Nevertheless, with more than four million global sales it has established itself as a classic game \cite{noauthor_abalone_2020}. Abalone is a two-player game consisting of a hexagonal board with 61 fields and 14 marbles for black and white respectively. The abstract nature of the game requires the player to plan ahead and find the right strategy among the plethora of possible moves.

The goal is to create an agent that is up to par with human players and, moreover, has realistic computational requirements and reacts quickly. In the search for the optimal move it is not possible, even for modern computers, to expand all of the possible paths the game could take. Hence, more sophisticated approaches for navigating the state space and evaluating good paths are needed. On the other hand, the game does not have piece-specific rules or long-distance moves, which reduces the amount of domain-specific knowledge (needed, e.g., for chess) that is required to find sensible heuristics.
\subsection{Motivation}
Overall, this degree of complexity makes the game a good project for the design of a game-playing agent, as it is meant to be an opportunity to apply the fundamental principles and algorithms learned in the class, as opposed to being distracted by the engineering aspects.
This matches my personal background on the subject matter, as I have no prior (formal) exposure to the design of artificial intelligence. In addition, this project is created solely for the purpose of this class. Over the course of my current study of applied computer science I have gained versatile proficiency in programming and the handling of data, which will help implement the algorithms efficiently and provide the empirical foundation for the paper. The project will also be valuable training for my upcoming bachelor thesis.
\subsection{Related work}
Considering the existing landscape of publications, there is unquestionably a wide array of papers exploring the application of minimax and alpha-beta pruning to the game of Abalone. Some of the most prominent include:
\begin{enumerate}
\item "Algorithmic fun-abalone" (2002)
Considers foundational heuristics for the game and analyzes minimax and its refinements in the form of (heuristic) alpha-beta pruning. Furthermore it sheds light on the performance differences between those. \cite{aichholzer_algorithmic_2002}
\item "A Simple Intelligent Agent for Playing Abalone Game: ABLA" (2004)
Implementation of a game-playing agent with minimax, alpha-beta pruning and some custom heuristics. The evaluation of the performance is done by comparing the agent to existing software in the form of ABA-PRO and RandomSoft.\cite{ozcan_simple_2004}
\item "Constructing an abalone game-playing agent" (2005)
Provides a very thorough explanation and analysis of the game's fundamentals, such as the state space, rules and positions. In regards to the alpha-beta pruning it also explains strategies for ordering the nodes and performance concerns. \cite{lemmens_constructing_2005}
\item "Implementing a computer player for abalone using alpha-beta and monte-carlo search" (2009)
This master thesis is a very exhaustive analysis of the game, alpha-beta pruning and Monte Carlo tree search, confirming many of the previous results. \cite{chorus_implementing_2009}
\end{enumerate}
These resources give great insight into the classical approaches, but they are lacking certain qualities:
\begin{itemize}
\item Accessible and freely explorable code that underlies the analysis
\item Comparison with modern approaches like Q-Learning that might reduce the resource demand on the client side
\end{itemize}
The proposed project seeks to build upon the given insight to improve upon these missing qualities.
\subsection{Rules}
The goal of the game is to push six of the opponent's marbles off the playing field. The game's starting position is depicted in figure \ref{basics} (a). One, two, or three adjacent marbles (of the player's own color) may be moved in any of the six possible directions during a player's turn. We differentiate between broadside or "side-step" moves and "in-line" moves, depending on how the chain of marbles moves relative to its orientation, which is shown in figure \ref{basics} (b) and (c).
\begin{figure}[!h]
\centering
\subfloat[Starting position]{
\includegraphics[width=3cm, keepaspectratio]{rules_starting_position.png}
}
\hfill
\subfloat["In-line" moves]{
\includegraphics[width=3cm, keepaspectratio]{rules_inline_move.png}
}
\hfill
\subfloat["Side-step" moves]{
\includegraphics[width=3cm, keepaspectratio]{rules_side_step_move.png}
}
\caption{Basic moves \cite{abalone_sa_abalone_nodate}}
\label{basics}
\end{figure}
A move pushing the opponent's marbles is called "sumito" and comes in three variations, as shown in figure \ref{sumito}.
Essentially, the player has to push with superior numbers, and the opponent's marbles must not be blocked from behind. This is the game mechanic that allows for pushing marbles out of the game and winning.
\begin{figure}[!h]
\centering
\subfloat["2-push-1" sumito]{
\includegraphics[width=3cm, keepaspectratio]{rules_2-push-1_sumito.png}
}
\hfill
\subfloat["3-push-1" sumito]{
\includegraphics[width=3cm, keepaspectratio]{rules_3-push-1_sumito.png}
}
\hfill
\subfloat["3-push-2" sumito]{
\includegraphics[width=3cm, keepaspectratio]{rules_3-push-2_sumito.png}
}
\caption{Sumito positions allow pushing the opponent's marbles \cite{abalone_sa_abalone_nodate}}
\label{sumito}
\end{figure}
\section{Project details}
\subsection{Agent design}
Based on the PEAS framework we can analyze the task environment for the agent. \cite[p.107]{russell_artificial_2021}
\begin{description}
\item[Performance measure] Win/loss, number of moves, time to deliberate
\item[Environment] Digital playing board
\item[Actuators] Move marbles, display text to CLI
\item[Sensors] Position of marbles
\end{description}
If we look at the environment more closely, we see that it is fully observable, two-agent, competitive, sequential, static and discrete.
\subsection{Complexity}
An important characteristic of a game environment is its complexity, which can be described along two relevant dimensions.
\paragraph{State space complexity}
The state space is the collection of all possible states the agent can be in.\cite[p. 150]{russell_artificial_2021} For Abalone this means we have to consider all possible board configurations with different numbers of marbles present. Additionally, we would have to correct for duplicates that arise from the symmetries of the board. Ignoring this fact, the following gives a good upper bound:
$$ \sum_{k=8}^{14}\sum_{m=9}^{14}\frac{61!}{k!(61-k)!}\times\frac{(61-k)!}{m!((61-k)-m)!} $$
\paragraph{Game tree complexity}
The game tree defines the dependencies between board positions (nodes) and moves (edges). First we consider the branching factor (how many moves are possible in one position) of the game tree, which is on average 60. We combine that number with the height of the tree to get the total number of leaves. As the length of a game varies greatly, we use the average length of a game, which is 87: $60^{87}$ \cite{lemmens_constructing_2005}
\begin{figure}
\centering
\includegraphics[width=7cm, keepaspectratio]{distribution_of_moves.png}
\caption{Counts of moves available for a random player in 5 games}
\end{figure}
Putting Abalone's complexity in relation to other popular games, its state-space complexity is on the same level as Reversi, whilst its game tree surpasses chess in complexity (c.f. table \ref{complexity_table}).
\begin{table}
\begin{center}
\begin{tabular}{ | c | c | c | }
\hline
Game & state-space complexity (log) & game-tree complexity (log) \\ \hline
Tic-tac-toe & 3 & 5 \\ \hline
Reversi & 28 & 58 \\ \hline
Chess & 46 & 123 \\ \hline
Abalone & 24 & 154 \\ \hline
Go & 172 & 360 \\ \hline
\end{tabular}
\end{center}
\caption{Abalone in comparison with other games \cite{chorus_implementing_2009}}
\label{complexity_table}
\end{table}
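As a quick sanity check, the state-space bound above can be evaluated directly. The following standalone Python snippet is illustrative only and is not part of the agent's code base:
\begin{lstlisting}
from math import comb, log10

# k own marbles (8..14) and m opposing marbles (9..14)
# distributed over the 61 fields of the board
total = sum(comb(61, k) * comb(61 - k, m)
            for k in range(8, 15)
            for m in range(9, 15))

# prints roughly 2.0e+25 (log10 = 25.3); correcting for the twelve
# symmetries of the hexagonal board brings this close to the
# state-space complexity of about 10^24 cited in the table
print("%e, log10 = %.1f" % (total, log10(total)))
\end{lstlisting}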
\section{Algorithm design}
\subsection{Heuristics}
As the size of the game tree is very large, the search on the tree usually does not reach terminal leaves that indicate a clear loss or win. Rather, one has to evaluate the intermediate result of a given position based on a heuristic function. As algorithms like minimax optimize the potential outcome of the next moves based on this function, the heuristics solely determine the performance of such an agent. This function should judge the positions based on expert knowledge of the game to distinguish good from bad moves. A code sketch of the two positional measures defined below is given at the end of this subsection.
\paragraph{Adjacency}
As a majority of marbles is required to push the opponent's marbles, and conversely an equal number of marbles is needed to avoid being pushed, it can be assumed that keeping one's marbles grouped together is a good move. The measure of adjacency is calculated by iterating over all marbles and counting the directly neighboring marbles that have the same color. We sum up these counts for each player and measure the difference:
$$ \text{adjacency} = n_{\text{self}} - n_{\text{opponent}} $$
This puts the two counts into a relation and produces a negative sign when the opponent has the upper hand.
% image on how to calculate
\paragraph{Distance to center}
Marbles that are close to the brink of the board are in danger of being attacked, wherefore it is generally good to place all of the marbles close to the center of the board. For each player's marbles we measure their distance from the center of the board as the smallest number of moves it would take to reach the center (Manhattan distance). Then again we sum up the distances for each player and weigh them against each other to get the final measure:
$$ \text{distance} = d_{\text{self}} - d_{\text{opponent}} $$
Since a small distance of one's own marbles to the center is desirable, this measure later receives a negative weight.

For both measures it is more convenient to represent the internal array indices of the marbles in a different coordinate system that has better mathematical properties. In case of the hexagonal shape of the board, a cube grid or an axial grid is very suitable.
\begin{figure}[!h]
\centering
\subfloat[Axial grid coordinate system]{
\includegraphics[width=5cm, keepaspectratio]{hex_axial_grid.png}
}
\hfill
\subfloat[Cube grid coordinate system]{
\includegraphics[width=5cm, keepaspectratio]{hex_cube_grid.png}
}
\caption{Different options for hexagonal grids \cite{noauthor_red_nodate}}
\label{hex_grids}
\end{figure}
That way we can obtain the neighbors by just incrementing and decrementing our x-, y- and z-components.
\begin{figure}
\begin{lstlisting}
DIRECTIONS = {
    (+1, 0, -1): Direction.NORTH_EAST,
    (+1, -1, 0): Direction.EAST,
    (0, -1, +1): Direction.SOUTH_EAST,
    (-1, 0, +1): Direction.SOUTH_WEST,
    (-1, +1, 0): Direction.WEST,
    (0, 1, -1): Direction.NORTH_WEST
}
\end{lstlisting}
\caption{Movements mapped to the required increments and decrements}
\label{directions}
\end{figure}
The same goes for calculating the distance, which becomes much more intuitive in this coordinate system (figure \ref{distance}).
\begin{figure}
\begin{lstlisting}
def distance(self, other: Cube) -> int:
    return max(abs(self.x - other.x), abs(self.y - other.y), abs(self.z - other.z))
\end{lstlisting}
\caption{Code to calculate the distance between two cube coordinates \cite{noauthor_ture_nodate}}
\label{distance}
\end{figure}
\paragraph{Score}
By comparing the current configuration of the board to following states in the search tree, we can obtain a count of how many marbles were lost and how many were won, and again weigh those against each other.
$$ \text{marbleRatio} = n_{\text{won}} - n_{\text{lost}} $$
\paragraph{Win and loss}
Lastly, as a more definitive measure, we can indicate whether the current state is a terminal state and hence a winning or losing state.
$$ \text{winLoss} = \begin{cases} 1 & \text{if game won} \\ -1 & \text{otherwise} \end{cases} $$
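To make the adjacency and distance measures concrete, the following is a minimal, self-contained sketch of how they can be computed on cube coordinates. It is illustrative only: the tuple-based representation and the function names are assumptions of this sketch, not the actual structure of the agent's code base.
\begin{lstlisting}
# Cube coordinates as (x, y, z) tuples; the same six directions as above
DIRECTIONS = [(+1, 0, -1), (+1, -1, 0), (0, -1, +1),
              (-1, 0, +1), (-1, +1, 0), (0, +1, -1)]

def cube_distance(a, b):
    # identical to the Cube.distance method shown above
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]), abs(a[2] - b[2]))

def adjacency_count(positions):
    # summed over all marbles: the number of same-colored neighbors
    pos = set(positions)
    return sum((x + dx, y + dy, z + dz) in pos
               for (x, y, z) in pos
               for (dx, dy, dz) in DIRECTIONS)

def center_distance(positions):
    # summed distance of all marbles to the center (0, 0, 0)
    return sum(cube_distance(p, (0, 0, 0)) for p in positions)

# the two measures for a given position:
# adjacency = adjacency_count(own) - adjacency_count(opponent)
# distance  = center_distance(own) - center_distance(opponent)
\end{lstlisting}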
\subsection{Alpha-beta-pruning agent}
Alpha-beta pruning is an improvement of the minimax algorithm that tries to eliminate unnecessary traversals down the search tree, which, in the best case, reduces the number of visited nodes from $ O(b^d) $ to $ O(\sqrt{b^d}) $. The implementation of the alpha-beta agent was refined in several ways, described below. The heuristic function was implemented as a linear combination of the above-mentioned heuristics, with the following weights:
\begin{table}
\begin{center}
\begin{tabular}{ | c | c | }
\hline
Heuristic & weight \\ \hline
adjacency & 1 \\ \hline
distance & -1.5 \\ \hline
marbleRatio & 100 \\ \hline
winLoss & 100000 \\ \hline
\end{tabular}
\end{center}
\caption{Weights for the linear combination}
\label{heuristic_table}
\end{table}
\paragraph{Move ordering}
To increase the likelihood of pruning taking place, we need to order the moves such that for the maximizer the best moves come first, and for the minimizer vice versa. What constitutes a good move is determined by the heuristic function. That means our move ordering has to be predictive of the resulting heuristic evaluation of the move. Two different approaches were tested. The first one was based on the following hierarchy:
\begin{itemize}
\item Move capturing marble: +3
\item Move pushing marble: +1
\item Move involving 2/3 marbles: +1/+2
\end{itemize}
Evaluating this function is computationally much less expensive than calculating the full heuristic function, so this approach was tried first (Evaluation 1). In comparison to the algorithm without any ordering, the number of visited nodes could be reduced drastically. Depending on the composition of the heuristic, the node count for this ordering fluctuates significantly. If we compare it to the approach of using the heuristic itself for ordering, we see that the latter decreases the node count even further (c.f. table \ref{node_count}). A possible explanation would be that the more predictive the move ordering is of the final heuristic, the fewer nodes are visited.
\begin{table}
\begin{center}
\begin{tabular}{ | c | c | c | c | c | }
\hline
Depth & Without ordering & Evaluation 1 & Evaluation 2 & $\sqrt{b^d}$ \\ \hline
1 & 45 & 45 & 45 & 8 \\ \hline
2 & 1594 & 304 & 132 & 60 \\ \hline
3 & 9755 & 4971 & 2423 & 464 \\ \hline
4 & 457309 & 94650 & 6918 & 3600 \\ \hline
\end{tabular}
\end{center}
\caption{Nodes visited with/without move ordering and the optimal case}
\label{node_count}
\end{table}
\paragraph{Transposition table}
Due to the nature of Abalone there are multiple ways to reach the same configuration of the board. If we save the final value the algorithm has determined for a state, we can potentially save node visits if we encounter that state again. To encode the board configurations efficiently, Zobrist hashing \cite{noauthor_zobrist_nodate} was used. We hold a table with 9 x 9 entries (only 61 of the 81 are used), and within each entry we hold another 2 cells for the two distinct game pieces (black and white marbles). In each of these cells we store a 64-bit random bitstring. For each marble on the board we use the indices of its position to retrieve a bitstring from our table, and we combine the retrieved bitstrings with XOR to obtain the final hash $h$.
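The scheme can be sketched as follows. This is a minimal illustration of the idea; the names and the use of Python's \texttt{random} module are assumptions of the sketch rather than the agent's actual implementation.
\begin{lstlisting}
import random

random.seed(0)  # fixed seed: the table must stay identical for a whole run

# one 64-bit bitstring per (row, column, piece); the board is stored as
# a 9 x 9 array of which only 61 cells are actually used
PIECES = 2  # black and white marbles
TABLE = [[[random.getrandbits(64) for _ in range(PIECES)]
          for _ in range(9)] for _ in range(9)]

def zobrist_hash(marbles):
    """marbles: iterable of (row, col, piece) triples, piece in {0, 1}."""
    h = 0
    for row, col, piece in marbles:
        h ^= TABLE[row][col][piece]  # XOR in the bitstring of each marble
    return h
\end{lstlisting}
A convenient property of this construction is that the hash can be updated incrementally: applying a move only requires XOR-ing out the bitstrings of the vacated cells and XOR-ing in those of the newly occupied ones.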
\paragraph{Branch cutting}
A problem with Abalone is that for each configuration there are many possible moves (high branching factor), and many of those possible moves are very bad. If we have a sensible move ordering, we can potentially exclude many of the useless moves to obtain a faster agent. For this implementation we only include at most the first 30 moves from the ordered list.
\begin{table}
\begin{center}
\begin{tabular}{ | c | c | c | c | c | }
\hline
Depth & Without TT & With TT & With TT and cutting & $\sqrt{b^d}$ \\ \hline
1 & 45 & 45 & 31 & 8 \\ \hline
2 & 132 & 132 & 90 & 60 \\ \hline
3 & 2432 & 2423 & 1019 & 464 \\ \hline
4 & 6918 & 5829 & 2435 & 3600 \\ \hline
\end{tabular}
\end{center}
\caption{Nodes visited with/without transposition table, branch cutting and the optimal case}
\label{node_count_tt}
\end{table}
\paragraph{Further improvement}
At this point the choice of the library also becomes relevant. Even though the reduction of node visits brings the most noticeable difference, the implementation can make a significant impact as well. There are two components of the game library that are called extremely often and thereby heavily profit from performance improvements. One of them is the routine to generate all possible moves, which is called for each node expansion. By forking the game library \cite{campfireman_campfiremanabalone-boai_2021} and making improvements to that function, the execution time could be reduced to 10\% of its original value. Furthermore, a tradeoff between execution time and memory was made by storing and updating the current positions of the marbles, instead of iterating over the entire board array.
\subsection{Monte Carlo Search agent}
Monte Carlo Tree Search promises to avoid some of the pitfalls of alpha-beta/minimax by allowing for a greater search depth and by not requiring a heuristic function. In its simplest, purest form, moves are evaluated by performing a playout, simulating the game until its end. The playout policy for selecting the moves in the playout is, in its simplest form, random move selection. Due to the large set of possible moves, the selection of random moves as playout policy performs poorly, especially when the number of simulations is limited to a relatively small $n$. For the pure implementation we get about 1000 simulations when the move time is limited to 20 seconds.
\paragraph{UCB}
Whereas in the pure implementation each move gets an equal number of simulations, we can instead select the next node $n$ to expand based on how promising it is (a code sketch of this selection step is given at the end of this subsection). We normalize the utility $U(n)$ by the number of games $N(n)$. By additionally considering how often a node and its parent have already been visited, we make less-explored nodes more likely to be visited in the beginning, before we decide solely based on utility. For this implementation $ C = \sqrt{2} $ was chosen. \cite[p.327 ff]{russell_artificial_2021}
$$ UCB(n) = \frac{U(n)}{N(n)} + C \times \sqrt{\frac{\log{N(Parent(n))}}{N(n)}} $$
\paragraph{Playout policy}
Ideally, we want the players in the simulations to make very good decisions for their moves, such that the playout's result is very meaningful. In order to achieve that, the Evaluation 1 function from above was used to order the moves and expand only the best moves. This comes at a great computational cost, almost halving the number of simulations, but potentially improving their quality.
\paragraph{Other improvements}
Again, by making adjustments to the game library, significant performance improvements could be made, namely by creating a new function for the generation of a random move. Instead of creating all possible moves and then selecting a random one, a random move is generated directly.
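The UCB-based selection step can be sketched as follows. This is a minimal illustration under assumed names (a \texttt{Node} with \texttt{utility}, \texttt{visits}, \texttt{parent} and \texttt{children} attributes); it does not reflect the actual interface of the agent.
\begin{lstlisting}
import math

C = math.sqrt(2)  # exploration constant chosen for this implementation

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.utility = 0.0  # U(n): summed playout results
        self.visits = 0     # N(n): number of playouts through this node

def ucb(node):
    if node.visits == 0:
        return float("inf")  # always try unvisited children first
    exploitation = node.utility / node.visits
    exploration = C * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploitation + exploration

def select_child(node):
    # descend to the most promising child according to UCB
    return max(node.children, key=ucb)
\end{lstlisting}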
\subsection{Algorithm comparison}
The results of different pairings of the algorithms and their variations are as follows.
\begin{table}
\begin{adjustwidth}{-5in}{-5in}% widen the table beyond the text margins
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | c | c | }
\hline
Black player & White player & \small{Marbles lost b} & \small{Marbles lost w} & \small{time p. move b} & \small{time p. move w} & \small{total moves (avg)} & n \\ \hline
AlphaBeta (d=3) & Random & 0.2 & 6.0 & 11.11 & 0.0 & 57.6 & 5 \\ \hline
AlphaBeta (d=4) & Random & 0.0 & 6.0 & 142.8 & 0.0 & 52.4 & 5 \\ \hline
AlphaBeta (d=3) & AlphaBetaFast (d=3) & 6.0 & 5.0 & 10.02 & 4.16 & 92 & 1 \\ \hline
MonteCarloPure (t=20s) & RandomPlayer & 5.0 & 0.0 & 20.33 & 0.0 & 1008.0 & 1 \\ \hline
MonteCarloImproved (t=20s) & RandomPlayer & 0.0 & 6.0 & 20.05 & 0.0 & 306.0 & 1 \\ \hline
MonteCarloImproved (t=20s) & AlphaBetaFast (d=3) & 0.0 & 6.0 & 69.0 & 20.06 & 6.24 & 1 \\ \hline
\end{tabular}
\end{center}
\end{adjustwidth}
\medskip% adds some space after the table
\caption{Face-off results}
\label{faceoff}
\end{table}
\section{Conclusion}
Overall, the implementation of the agents posed a much greater engineering challenge than expected. The basic algorithms were implemented quickly, but tweaking them until they reached acceptable move times required a lot of effort. It is interesting that the basic implementation of the Monte Carlo Search agent performed so poorly. Especially when combined with more modern techniques, the MCTS agent is still extremely promising and worth further investigation.
% references
\bibliographystyle{splncs04.bst}
\bibliography{../ref.bib}
\end{document}
{ "alphanum_fraction": 0.7217567177, "avg_line_length": 57.1391509434, "ext": "tex", "hexsha": "7a761f5e2103a1f06c984ef4bb776e20b5b69c9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3d456165205e3aa03f934433397085d8626e8c97", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "campfireman/abalone", "max_forks_repo_path": "doc/report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3d456165205e3aa03f934433397085d8626e8c97", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "campfireman/abalone", "max_issues_repo_path": "doc/report/report.tex", "max_line_length": 660, "max_stars_count": null, "max_stars_repo_head_hexsha": "3d456165205e3aa03f934433397085d8626e8c97", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "campfireman/abalone", "max_stars_repo_path": "doc/report/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6102, "size": 24227 }
\subsection{Software license compatibility}
To check whether the software licenses of the libraries we have used are compatible with the MIT license in our repository, we used lichen \cite{tool:lichen} to automatically check the licenses of all used libraries. This process is documented in \appendixref{appendix:software-license-check}. The results showed that all the libraries were compatible, except for one whose license the tool could not find. However, checking its repository manually revealed a compatible MIT license.
{ "alphanum_fraction": 0.8235294118, "avg_line_length": 175.6666666667, "ext": "tex", "hexsha": "e87bbc2091eb765d75d88ad5d8a7c56b98f7dfc9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bab4ae77562e3dfec89840da9e601041e534ebb6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Devops-2022-Group-R/itu-minitwit", "max_forks_repo_path": "report/sections/system-perspective/software-licenses.tex", "max_issues_count": 50, "max_issues_repo_head_hexsha": "bab4ae77562e3dfec89840da9e601041e534ebb6", "max_issues_repo_issues_event_max_datetime": "2022-03-30T20:36:38.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-15T16:05:29.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Devops-2022-Group-R/itu-minitwit", "max_issues_repo_path": "report/sections/system-perspective/software-licenses.tex", "max_line_length": 482, "max_stars_count": null, "max_stars_repo_head_hexsha": "bab4ae77562e3dfec89840da9e601041e534ebb6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Devops-2022-Group-R/itu-minitwit", "max_stars_repo_path": "report/sections/system-perspective/software-licenses.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 101, "size": 527 }
\section{Testing}

To achieve a proper software engineering design, I have written test cases to ensure that the analysis runs correctly. However, the approach to testing is somewhat different from that of a normal program. Since \texttt{JsTainter} heavily relies on the \texttt{Jalangi2} framework, it is quite hard to simply import the taint analysis unit and perform unit testing on that unit alone. The reason is that \texttt{Jalangi2} does much of the work for \texttt{JsTainter}, and \texttt{JsTainter} does not work without this framework. Therefore, I have formulated a different way to perform testing: since we can instrument the JavaScript program that the analysis is running on, we can instrument function calls to perform our assertions.

\subsection{Checking Taint State}

\begin{minted}{javascript}
var a; //something whose taint state is to be examined
const assertTaint = "assertTaint";
assertTaint(a, true);
\end{minted}

The code above would simply throw an exception in normal execution. However, if it is instrumented by our analysis program, we can examine the value of the \texttt{function} parameter in the function call instrumentation callback. This value should be a \texttt{function} type variable in the normal case, but if it is a \texttt{string} type variable with value \texttt{"assertTaint"}, then we know we are going to perform an assertion against the taint state of the given variable, instead of executing the function call that would throw the error.

In the \texttt{assertTaint} function, the main goal is to check whether \texttt{shadow(val)} (the actual shadow value) is the same as \texttt{taint} (the expected shadow value). If they are not exactly the same, the assertion fails. The variable \texttt{position} is the position of the instruction, which is printed if the assertion fails and makes debugging more convenient.

\begin{minted}{javascript}
function myAssert(b, position)
{
	if (b !== true)
	{
		Log.log("Assertion failure at " + JSON.stringify(position));
		assert(false);
	}
}
function assertTaint(val, taint, position)
{
	taint = actual(taint); // taint might be wrapped by AnnotatedValue, just in case
	const s = shadow(val);
	myAssert(typeof s === typeof taint, position); // type must be identical
	if (Array.isArray(s))
	{// if shadow value is an array, all elements must be the same
		myAssert(s.length === taint.length, position);
		for (var i = 0; i < s.length; i++)
		{
			myAssert(s[i] === taint[i], position);
		}
	}
	else
	{// for any other case such as the basic-type case, shadows must be equal
		myAssert(s === taint, position);
	}
}
//in the instrumentation callback handler of function call
if (f === 'assertTaint')
{
	assertTaint(args[0], args[1], getPosition(iid));
}
\end{minted}

Note that these two pieces of code are in different files. The first code piece is in the JavaScript file that is going to be analyzed (i.e.\ \texttt{test.js}), while the second code piece is in the file that performs the dynamic taint analysis (i.e.\ \texttt{DynTaintAnalysis.js}). Therefore, even if we use the same \texttt{assertTaint} identifier in both files, there will not be any conflict.
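For illustration, a test case built on this mechanism could look like the following sketch. It is hypothetical: the taint-introducing helper \texttt{source} and the exact shadow values are assumptions made for the example, not code taken from the actual test suite.

\begin{minted}{javascript}
// Hypothetical test-case sketch; `source` is an assumed helper that
// returns its argument marked as tainted by the analysis.
var taintedStr = source("2019");
const assertTaint = "assertTaint";
// For a string, the shadow value is an array with one entry per character.
assertTaint(taintedStr, [true, true, true, true]);
// Arithmetic on a numeric tainted string should propagate the taint.
assertTaint(taintedStr % 7, true);
\end{minted}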
\subsection{Checking Real Value}

Using a similar technique, the real value of a variable can also be checked: \texttt{"assert"} can be used to examine the correctness of the real value of a variable, and it is handled in the same way as \texttt{"assertTaint"}.

\begin{minted}{javascript}
else if (f === "assert")
{
	myAssert(actual(args[0]), getPosition(iid));
	return {result : undefined};
}
\end{minted}

The usage of \texttt{assert} is a little different from that of \texttt{assertTaint}. Because the real value can be accessed directly by the program being analyzed, the comparison can be done in the JavaScript program itself. For example,

\begin{minted}{javascript}
const assert = "assert";
assert(a == 1);
\end{minted}

\subsection{Evaluation on Basic Test Cases}

\subsubsection{Implementation}

To test the correctness of \texttt{JsTainter}, I have written many test cases in the directory \texttt{tests/}. They are run by a simple Python script.

\begin{minted}{python}
from os import system,walk
from re import search
cmd = "node jalangi2/src/js/commands/jalangi.js --inlineIID --inlineSource --analysis jalangi2/src/js/sample_analyses/ChainedAnalyses.js --analysis Utils.js --analysis Log.js --analysis TaintLogic.js --analysis NullBrowser.js --analysis DefaultConfig.js --analysis DynTaintAnalysis.js tests/%s"
i = 0
for root,subdirs,files in walk("./tests/"):
	i += 1
	assert i == 1
	for f in files:
		ret = search("^test[a-zA-Z0-9]+\\.js$", f)
		if ret: # iterate file with format testxxx.js
			print "Testing file: " + f
			ret = system(cmd % f) # execute analysis
			if ret != 0:
				print "Error in file %s" % f
				exit(-1)
\end{minted}

The reason why a regular expression is used to filter file names is that Jalangi2 generates some temporary files in that directory when the analysis is performed, such as \texttt{testxxx\_jalangi\_.js} and \texttt{testxxx\_jalangi\_.json}, and only files with the correct file name format should be analyzed and tested.

\subsubsection{Tests}

There are many test cases, and I will discuss them one by one.

\texttt{testarith.js} is used to test the taint propagation of arithmetic operations, especially when one of the operands is a tainted string. For example, \texttt{(taintedStr + '123' + taintedStr) \% 7} should be tainted, because the result of this operation can be affected by \texttt{taintedStr} if it is a numeric string; whereas \texttt{(taintedStr + '0x' + taintedStr) \% 7} should not be tainted, because it always gives \texttt{NaN} no matter how \texttt{taintedStr} changes.

\texttt{testarithAdd.js} is used to test the correctness of taint propagation of the \texttt{add} operation.

\texttt{testbitoper.js} and \texttt{testshift.js} are used to test the correctness of taint propagation of bit-wise operations; cases where the operands are of types other than number are also considered here.

\texttt{testcharAt.js}, \texttt{testindexOf.js} and \texttt{testsubstr.js} are used to test the correctness of taint propagation of the functions \texttt{String.prototype.charAt}, \texttt{String.prototype.indexOf} and \texttt{String.prototype.substr} respectively; cases where the arguments are of types other than number are also considered here.

\texttt{testconstructor.js} is used to test taint propagation in JavaScript classes. For example, when the argument passed to a constructor is tainted and is used to initialize the member fields, the fields should also be tainted. The \texttt{with} statement is also tested here.

\texttt{testeval.js} is used to test the \texttt{eval} statement. In other words, taint propagation must also work even if the statement that causes the taint propagation is executed using \texttt{eval}.

\texttt{testException.js} is used to test the case where a tainted variable is thrown: the \texttt{catch} statement that receives the thrown variable must also get the tainted value.

\texttt{testfield.js} is used to test the correctness of taint propagation when putting and getting fields.
\texttt{testforinObject.js} is used to test the correctness of the \texttt{for in object} loop, which is handled by the \texttt{analysis.forinObject} instrumentation callback function.

\texttt{testfunc.js} is used to test non-constructor function calls, including anonymous functions.

\texttt{testNumber.js} is used to test the \texttt{Number} function. For example, when a tainted string is cast to a number by the \texttt{Number} function, the return value should be tainted if it is controllable by the tainted argument.

\texttt{testConcat.js} is used to test string concatenation by the operator \texttt{+}.

\section{Evaluation on a Website}

In this section I am going to evaluate my analysis using a web JavaScript program instead of a \texttt{node.js} program.

\subsection{Simple Example}

I have written a simple website that can be used to evaluate the effectiveness of the taint analysis over a JavaScript program with multiple sources and sinks, shown below.

\begin{minted}{html}
<div id="sth"></div>
<script type="text/javascript">
function myclick()
{
	const url = window.location.href;
	const idx = url.indexOf('#')
	var hello, n;
	if (idx === -1)
	{
		hello = "";
	}
	else
	{
		n = url.substr(idx + 1);
		const desc = prompt("Please input the description: ");
		hello = "name:" + unescape(n) + " desc:" + desc;
	}
	const num1 = document.getElementById("text1").value;
	const num2 = document.getElementById("text2").value;
	const sum = Number(num1) + Number(num2);
	hello += " sum:"
	hello += sum
	document.getElementById("sth").innerHTML = hello;
	const req = new XMLHttpRequest();
	req.open("POST", window.location.origin + '/' + n);
	req.send(sum.toString())
}
</script>
<form>
<input type="text" id="text1">
<input type="text" id="text2">
<input type="button" value="click me" onclick="myclick()">
</form>
\end{minted}

The result is printed in the \texttt{console} dialog as a list of JSON strings. In the following section, I will explain the result of the taint analysis and compare it with the real behavior of the JavaScript program, so that the effectiveness of the taint analysis can be evaluated.

Firstly, the URL is fetched, which is partially tainted, and written to the variable \texttt{url}. The \texttt{id} of this source is 0, so the \texttt{taint information variable} for each tainted character is \texttt{1 << 0 == 1} (the \texttt{number} type is used to implement a boolean array). This behavior is recorded properly.

\begin{minted}{json}
{"type":"source","typeOf":"string","file":"91ea3bc2825abb590247bbfa10e8631f.js","pos":[4,17,4,37],"name":"href","id":0}
{"type":"write","typeOf":"string","file":"91ea3bc2825abb590247bbfa10e8631f.js","pos":[4,17,4,37],"name":"url"}
\end{minted}

The file name is an MD5 hash generated by Jalangi2, which is admittedly a bit odd. However, we only need to know that the file represents the JavaScript code in the \texttt{<script>} tag shown above. The value of this property is the same for all JSON results, so to make the report clearer, I will omit this field in the following sections, although it is still present in the actual output.

Then, \texttt{indexOf} is called on \texttt{url} and the return value is tainted, which is then written to the variable \texttt{idx}. This is the intended result, because \texttt{idx} can vary as the user input changes. For example, there could be a query string before the first \texttt{\#}, and its length is dependent on the user, which means the value of \texttt{idx} can be controlled by the user and thus should be tainted.
\begin{minted}{json}
{"type":"read","typeOf":"string","pos":[5,17,5,20],"name":"url"}
{"type":"write","typeOf":"number","pos":[5,17,5,33],"name":"idx"}
\end{minted}

Then, the variable \texttt{idx} is read and used in an \texttt{if} condition, and the corresponding \texttt{log} message is produced, which again is the intended result.

\begin{minted}{json}
{"type":"read","typeOf":"number","pos":[7,9,7,12],"name":"idx"}
{"type":"log","pos":[7,9,7,19],"msg":"Tainted variable false being used in conditional"}
\end{minted}

Then the program goes into the \texttt{else} branch. The function \texttt{substr} is called using \texttt{url} and \texttt{idx}, and its return value, which is tainted, is assigned to the variable \texttt{n}. The corresponding results are still correct.

\begin{minted}{json}
{"type":"read","typeOf":"string","pos":[13,13,13,16],"name":"url"}
{"type":"read","typeOf":"number","pos":[13,24,13,27],"name":"idx"}
{"type":"write","typeOf":"string","pos":[13,13,13,32],"name":"n"}
\end{minted}

At line 14, input is obtained from the \texttt{prompt} function and assigned to the variable \texttt{desc}, and at line 15, the tainted variables \texttt{n} and \texttt{desc} are used to generate a partially tainted string, which is assigned to the variable \texttt{hello}. The \texttt{id} of this source is 1, so the \texttt{taint information variable} for every character is \texttt{1 << 1 == 2}. These behaviors are all recorded properly.

\begin{minted}{json}
{"type":"source","typeOf":"string","pos":[14,22,14,62],"name":"prompt","id":1}
{"type":"write","typeOf":"string","pos":[14,22,14,62],"name":"desc"}
{"type":"read","typeOf":"string","pos":[15,36,15,37],"name":"n"}
{"type":"read","typeOf":"string","pos":[15,52,15,56],"name":"desc"}
{"type":"write","typeOf":"string","pos":[15,17,15,56],"name":"hello"}
\end{minted}

After the \texttt{else} block, input strings are obtained from the two \texttt{input} tags and assigned to the variables \texttt{num1} and \texttt{num2}. There are also \texttt{log} type JSONs in the record, since the \texttt{getElementById} native function is not handled in the \texttt{invokeFun} handler; but this function does not actually need to be handled, so these JSONs can be ignored. In addition, even though the inputs both come from \texttt{<input>} fields, the \texttt{id} numbers being allocated are different, thanks to the \texttt{id} allocator.

\begin{minted}{json}
{"type":"log","pos":[17,18,17,50],"msg":"Unhandled native function getElementById"}
{"type":"source","typeOf":"string","pos":[17,18,17,56],"name":"value","id":2}
{"type":"write","typeOf":"string","pos":[17,18,17,56],"name":"num1"}
{"type":"log","pos":[18,18,18,50],"msg":"Unhandled native function getElementById"}
{"type":"source","typeOf":"string","pos":[18,18,18,56],"name":"value","id":3}
{"type":"write","typeOf":"string","pos":[18,18,18,56],"name":"num2"}
\end{minted}

The variables \texttt{num1} and \texttt{num2} are converted to numbers and used to calculate \texttt{sum}, which is tainted and is concatenated to the string \texttt{hello} in the next step. Note that \texttt{hello += sth} is identical to \texttt{hello = hello + sth}, so a \texttt{read} record on the \texttt{hello} variable is also produced, which is not a mistake.
\begin{minted}{json}
{"type":"read","typeOf":"string","pos":[19,24,19,28],"name":"num1"}
{"type":"read","typeOf":"string","pos":[19,39,19,43],"name":"num2"}
{"type":"write","typeOf":"number","pos":[19,17,19,44],"name":"sum"}
{"type":"read","typeOf":"string","pos":[20,5,20,10],"name":"hello"}
{"type":"write","typeOf":"string","pos":[20,5,20,21],"name":"hello"}
{"type":"read","typeOf":"string","pos":[21,5,21,10],"name":"hello"}
{"type":"read","typeOf":"number","pos":[21,14,21,17],"name":"sum"}
{"type":"write","typeOf":"string","pos":[21,5,21,17],"name":"hello"}
\end{minted}

Here the first sink is detected: the variable \texttt{hello} is written to the \texttt{innerHTML} field of a \texttt{<div>} DOM object. The value being written to the sink and the corresponding shadow value are presented in the JSON. The four \texttt{1}s correspond to \texttt{"2019"}, which comes from the source with \texttt{id==0}, the URL; the three \texttt{2}s correspond to \texttt{"AAA"}, which comes from the source with \texttt{id==1}, the return value of \texttt{prompt}. The two \texttt{12}s are interesting: since \texttt{24} results from adding the values from the two input tags, it can be affected by both the source with \texttt{id==2} and the source with \texttt{id==3}, and \texttt{12} is the result of \texttt{(1<<2) | (1<<3) == 4 | 8 == 12}.

\begin{minted}{json}
{"type":"log","pos":[22,5,22,35],"msg":"Unhandled native function getElementById"}
{"type":"read","typeOf":"string","pos":[22,48,22,53],"name":"hello"}
{"type":"sink","pos":[22,5,22,53],"value":"name:2019 desc:AAA sum:24","shadow":[0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,2,2,2,0,0,0,0,0,12,12],"name":"[object HTMLDivElement].innerHTML"}
\end{minted}

Then, to test a native function sink, \texttt{XMLHttpRequest} is used. The result is also correct.

\begin{minted}{json}
{"type":"log","pos":[23,17,23,37],"msg":"Unhandled native function XMLHttpRequest"}
{"type":"read","typeOf":"string","pos":[24,53,24,54],"name":"n"}
{"type":"sink","pos":[24,5,24,55],"value":["POST","https://www.doc.ic.ac.uk/2019"],"shadow":[[0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1]],"name":"open"}
{"type":"read","typeOf":"number","pos":[25,14,25,17],"name":"sum"}
{"type":"sink","pos":[25,5,25,29],"value":["24"],"shadow":[[12,12]],"name":"send"}
\end{minted}

\section{Weaknesses}

\subsection{Implicit Flow}

To detect implicit information flow, JavaScript must be analyzed from a high-level perspective. However, since I have applied pure dynamic analysis, the analysis can only be performed for each individual JavaScript operation. Therefore, automatic detection of implicit flow is not possible.

\subsection{Unable to Track Taint of Native Object Fields}

In JavaScript, there are some native objects. For example, \texttt{Error} is the object that is used to throw an exception, and can be used in this way:

\begin{minted}{javascript}
try
{
	throw new Error("some message");
}
catch (e)
{
	console.log(e.message);
}
\end{minted}

There might be cases where the string passed into \texttt{Error} is tainted. However, unlike for non-native classes, Jalangi2 cannot instrument the constructor of \texttt{Error}, and is thus unable to track the taint state of the \texttt{message} field. Thus, false negatives would be produced.

\subsection{Unable to Execute Code with Lambda Functions}

This is actually a problem of Jalangi2 rather than of JsTainter. In Jalangi2, lambda expressions are wrongly instrumented: the parameters of a lambda expression are instrumented like variable reads, which makes the resulting JavaScript grammatically invalid.
For example\footnote{\url{https://github.com/Samsung/jalangi2/issues/149}}, when the expression \texttt{const lambda = (a,b) => a+b;} is instrumented, the \texttt{(a,b)} part is instrumented to \texttt{(J\$.R(81, 'a', 'a', 1), J\$.R(89, 'b', 'b', 1))}, which is certainly wrong, because this is not a variable read and should not be modified, just like the \texttt{(a,b)} part in \texttt{function (a,b) \{return a+b\}}.

\subsection{Detectable by the Program being Analyzed}

In some JavaScript programs, anti-debugging techniques are applied to prevent people from reverse engineering the product. For example, a JavaScript program can convert a function to a string and check whether the function has been modified.

\begin{minted}{javascript}
function some_func()
{
	/* code that does not want to be modified by reverse engineer */
}
const correct_crc = 0x708D2F22; /* crc32 value of String(some_func) */
function crc32(str)
{
	/* code that implements crc32 hash algorithm */
}
if (crc32(String(some_func)) != correct_crc)
	throw Error("Hack3r detected!")
\end{minted}

If the \texttt{some\_func} function is instrumented by Jalangi2, the CRC-32 value will become a different one, so an exception will be thrown, which means the behavior of the program becomes different after instrumentation. This is not desirable.

\subsection{Prototype Poisoning}

In the current implementation, prototype poisoning is not handled properly. For example, the behavior of field setting can be redirected to a particular function by using

\begin{minted}{javascript}
Object.defineProperty(SomeClass.prototype, "key",
	{set:function(){console.log("1337")}})
\end{minted}

After this statement is executed, if \texttt{obj} is an instance of \texttt{SomeClass} and \texttt{obj["key"]=1} is executed, instead of the normal field setting, \texttt{function()\{console.log("1337")\}} will be executed, so \texttt{obj["key"]} will still be \texttt{undefined}. In this case, tracking the shadow value of the object in the normal way might cause inaccuracy.
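To make this failure mode concrete, the following self-contained snippet (an illustrative sketch reusing the class and property names from the example above) can be run in any JavaScript engine to observe the behavior:

\begin{minted}{javascript}
// Minimal demonstration of the setter poisoning described above.
function SomeClass() {}
Object.defineProperty(SomeClass.prototype, "key",
	{set: function () { console.log("1337"); }});

var obj = new SomeClass();
obj["key"] = 1;          // runs the poisoned setter: prints "1337"
console.log(obj["key"]); // undefined: no value was ever stored
// A shadow value recorded for obj["key"] by ordinary field-write
// tracking would now disagree with the real program state.
\end{minted}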
{ "alphanum_fraction": 0.7262820186, "avg_line_length": 60.4215384615, "ext": "tex", "hexsha": "a4d60947f1a5bb7fbe8eb6939a89ba0e5684deb1", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-23T03:04:09.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-23T03:04:09.000Z", "max_forks_repo_head_hexsha": "417bf789054dd80b16471d80a99e7159d6a03a88", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Mem2019/JsTainter", "max_forks_repo_path": "docs/Evaluation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "417bf789054dd80b16471d80a99e7159d6a03a88", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Mem2019/JsTainter", "max_issues_repo_path": "docs/Evaluation.tex", "max_line_length": 625, "max_stars_count": 4, "max_stars_repo_head_hexsha": "417bf789054dd80b16471d80a99e7159d6a03a88", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Mem2019/JsTainter", "max_stars_repo_path": "docs/Evaluation.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-09T08:23:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-30T09:23:53.000Z", "num_tokens": 5285, "size": 19637 }
\section*{Statutory Declaration} I declare that I have written this thesis independently, that I have not used any other than the declared resources, and that I have explicitly marked all material which has been quoted either literally or by content from the used sources. Potsdam, \today ~\\ ~\\ ~\\ \sign{Jan Ehmüller}
{ "alphanum_fraction": 0.7763975155, "avg_line_length": 35.7777777778, "ext": "tex", "hexsha": "24fd1ab92ec85e91feb38282eeafb8a5a730373a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "janehmueller/bachelorthesis", "max_forks_repo_path": "sections/statutory_declaration.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "janehmueller/bachelorthesis", "max_issues_repo_path": "sections/statutory_declaration.tex", "max_line_length": 239, "max_stars_count": 1, "max_stars_repo_head_hexsha": "310db25128c34209122a90ef02b8a61370230a2b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "janehmueller/bachelorthesis", "max_stars_repo_path": "sections/statutory_declaration.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-03T19:31:37.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-03T19:31:37.000Z", "num_tokens": 77, "size": 322 }
\documentclass[a4paper,11pt,x11names]{article}
\usepackage{hyperref}
\usepackage{tikz-er2}
\tikzset{every entity/.style={draw=orange, fill=orange!20}}
\tikzset{every attribute/.style={draw=MediumPurple1, fill=MediumPurple1!20}}
\tikzset{every relationship/.style={draw=Chartreuse2, fill=Chartreuse2!20}}

\newcommand{\hmwkTitle}{Assignment\ \# 1 } % Assignment title
\newcommand{\hmwkDueDate}{Friday,\ June \ 19,\ 2015} % Due date
\newcommand{\hmwkClass}{CSCI-585} % Course/class
\newcommand{\hmwkClassTime}{11:00pm} % Class/lecture time
\newcommand{\hmwkAuthorName}{Saket Choudhary} % Your name
\newcommand{\hmwkAuthorID}{2170058637} % Student ID

%----------------------------------------------------------------------------------------
%	TITLE PAGE
%----------------------------------------------------------------------------------------

\title{
\vspace{2in}
\textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
\normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate}\\
%\vspace{0.1in}\large{\textit{\hmwkClassTime}}
\vspace{3in}
}

\author{\textbf{\hmwkAuthorName} \\ \textbf{\hmwkAuthorID} }
\date{} % Insert date here if you want it to appear below your name

\begin{document}
\maketitle

\begin{tikzpicture}[node distance=7em]
\node[entity] (course) {Course};
\node[attribute] (coursename) [left of=course] {\key{Name}} edge (course);
\node[attribute] (greenfee) [below of=course] {GreenFee} edge (course);
\node[attribute] (dollar) [below left of=greenfee] {Dollar} edge (greenfee);
\node[attribute] (cent) [below right of=greenfee] {Cent} edge (greenfee);
\node[relationship] (coursetee) [right of=course] {belongs to} edge node[auto,swap] {1} (course);
\node[weak entity] (tee) [right of=coursetee] {Tee} edge[total] node[auto, swap] {M} (coursetee);
\node[relationship] (roundtess) [above right of=tee] {has} edge[->] node[auto, swap] {1} (tee);
\node[attribute] (courserating) [below left of=tee] {CourseRating} edge (tee);
\node[attribute] (sloperating) [below right of=tee] {SlopeRating} edge (tee);
\node[attribute] (yardage) [right of=tee] {Yardage} edge (tee);
\node[relationship] (golfercourse) [above of=course] {Has home-course} edge [->] node[auto, swap] {1} (course);
\node[entity] (golfer) [above of=golfercourse] {Golfer} edge[total] node[auto, swap] {1} (golfercourse);
\node[attribute] (golferid) [above of=golfer] {\key{GolferID}} edge (golfer);
\node[attribute] (golfername) [left of=golfer] {Name} edge (golfer);
\node[relationship] (golferplays) [right of = golfer] {played by} edge[->] node[auto, swap] {1} (golfer);
\node[weak entity] (round) [above right of = golferplays] {Round} edge[total] (golferplays);
\node[attribute] (score) [right of=round] {Score} edge (round);
\node[attribute] (day) [above right of=round] {Day} edge (round);
\node[derived attribute] (roundcourse) [above left of=round] {Course} edge (round);
\draw[link] (roundtess) edge[total] node[auto, swap] {1} (round);
\end{tikzpicture}

I assume each golfer takes just one course, and that is his `home-course'. The labels on the edges denote the relationship type (1-1, 1-M, M-N). Underlining denotes a primary key. I have also omitted the `membership' relationship between courses and golfers.

\section*{Acknowledgement}
Pavel Calado for the awesome `tikz-er2' package: \url{https://bytebucket.org/pavel_calado/tikz-er2/}

\end{document}
{ "alphanum_fraction": 0.6868656716, "avg_line_length": 50, "ext": "tex", "hexsha": "4272d1b5fdbb063727251ae00d7beeff0d782fde", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-02-10T03:21:09.000Z", "max_forks_repo_forks_event_min_datetime": "2015-09-25T19:06:45.000Z", "max_forks_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "saketkc/hatex", "max_forks_repo_path": "2015_Summer/CSCI-585/Assignments/Assignment1/assignment1.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_issues_repo_issues_event_max_datetime": "2015-09-23T21:21:52.000Z", "max_issues_repo_issues_event_min_datetime": "2015-09-16T23:11:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NeveIsa/hatex", "max_issues_repo_path": "2015_Summer/CSCI-585/Assignments/Assignment1/assignment1_sql/assignment1.tex", "max_line_length": 111, "max_stars_count": 19, "max_stars_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NeveIsa/hatex", "max_stars_repo_path": "2015_Summer/CSCI-585/Assignments/Assignment1/assignment1_sql/assignment1.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-10T03:20:47.000Z", "max_stars_repo_stars_event_min_datetime": "2015-09-10T02:45:33.000Z", "num_tokens": 1087, "size": 3350 }
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{natbib}
\usepackage{graphicx}
\setlength\parindent{0pt}

\title{Retrograde and anterograde linear models of pathological spread along directed structural connectomes}
\author{Eli J. Cornblath}

\begin{document}
\maketitle

\section*{Introduction}

Linear diffusion models are a highly promising tool for investigating the mechanisms of neurodegenerative disease progression, which is thought to be driven by transsynaptic spread throughout structural connectomes \cite{Raj2012,Pandya2017,Pandya2019,Henderson2019,Mezias2020}. There are two possible directions for transsynaptic spread. In retrograde spread, misfolded proteins travel backwards from distal axons towards the soma. In anterograde spread, the opposite process occurs-- misfolded proteins starting in the soma of a neuron travel down the axon to distal regions.\\

In this brief document, I will describe the simple version of the model used in Ref. \cite{Henderson2019} to capture anterograde spread between two brain regions (equivalently, neurons or network nodes). Here, we use a matrix $W$ and the equation
\begin{equation}
\dot{x}=Wx
\label{eq1}
\end{equation}
to instantiate an anterograde spreading process whereby pathology in node $A$ spreads from neuron somas in $A$ along axons that terminate in node $B$. In our model,
\begin{equation}
W=
\begin{bmatrix}
W_{A\rightarrow A} & W_{B\rightarrow A}\\
W_{A\rightarrow B} & W_{B\rightarrow B}
\end{bmatrix},
\label{eq2}
\end{equation}
where the element $W_{A\rightarrow B}$ indicates the strength of the axonal projections initiating from the somas in region $A$ and terminating in region $B$, and $W_{A\rightarrow A} = W_{B\rightarrow B} = 0$. \\

Suppose that at the beginning of model time, we seed 1 unit of pathology in region $A$ and represent it in the vector $x$:
\begin{equation}
x =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\label{eq3}
\end{equation}

\noindent The general form of equation \ref{eq1} is solved by taking the dot product between the $i$th row of $W$ and the column of $x$ to generate $\dot{x}_i$, as in
\begin{equation}
\dot{x} =
\begin{bmatrix}
W_{1,1} & W_{1,2} \\
W_{2,1} & W_{2,2}
\end{bmatrix}
\begin{bmatrix}
x_{1,1} \\
x_{2,1}
\end{bmatrix}
=
\begin{bmatrix}
W_{1,1}x_{1,1} + W_{1,2}x_{2,1} \\
W_{2,1}x_{1,1} + W_{2,2}x_{2,1}
\end{bmatrix}.
\label{eq4}
\end{equation}

\noindent We can substitute our values into equation \ref{eq4} and solve for $\dot{x}$, which yields
\begin{equation}
\dot{x}=
\begin{bmatrix}
W_{A\rightarrow A} & W_{B\rightarrow A}\\
W_{A\rightarrow B} & W_{B\rightarrow B}
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\label{eq5}
\end{equation}
\begin{equation}
=
\begin{bmatrix}
(1\times W_{A\rightarrow A}) + (0\times W_{B\rightarrow A}) \\
(1\times W_{A\rightarrow B}) + (0\times W_{B\rightarrow B})
\end{bmatrix}
=
\begin{bmatrix}
0 \\
W_{A\rightarrow B}
\end{bmatrix}.
\label{eq6}
\end{equation}

In this equation, we now observe that pathology in node $B$, represented by $x_{2,1}$, will change over time at a rate determined by the strength of the axonal projections from neuron somas in $A$ terminating in $B$, which is reflected in $W_{A\rightarrow B}$. This process reflects anterograde spread as intended through the design of $W$ in equation \ref{eq2}, and as we implemented in Ref. \cite{Henderson2019}. \\

Note that in equation \ref{eq1}, we define $\dot{x}$, which is the rate of change of pathology at each node.
However, it is often more intuitive to solve for the amount of pathology at each node, represented by the vector $x$. Integration of equation \ref{eq1} yields
\begin{equation}
x=e^{Wt}x_o
\end{equation}
where $x_o$ is the initial state of $x$ and $e^{Wt}$ is the matrix exponential of $Wt$. For the anterograde example above, if we additionally set $W_{B\rightarrow A}=0$, then $W^2=0$, so $e^{Wt}=I+Wt$ and the seeded pathology grows linearly in node $B$:
\begin{equation*}
x(t) = (I + Wt)
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\begin{bmatrix}
1 \\
W_{A\rightarrow B}t
\end{bmatrix}.
\end{equation*}

\section*{Acknowledgments}
I would like to thank Jason Z. Kim for his input on this document.

\bibliographystyle{plain}
\bibliography{\string ~/Dropbox/Cornblath_Bassett_Projects/library.bib}

\end{document}
{ "alphanum_fraction": 0.7449647533, "avg_line_length": 36.4403669725, "ext": "tex", "hexsha": "a928d1050d2587655dab9a1e6a305918c66fee94", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-07-29T18:22:22.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-29T18:22:22.000Z", "max_forks_repo_head_hexsha": "322398bb37841da47cd0cbd31095188c4327b59b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "thealexrk/Modeling-Tau-Spread", "max_forks_repo_path": "example/example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "322398bb37841da47cd0cbd31095188c4327b59b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "thealexrk/Modeling-Tau-Spread", "max_issues_repo_path": "example/example.tex", "max_line_length": 414, "max_stars_count": null, "max_stars_repo_head_hexsha": "322398bb37841da47cd0cbd31095188c4327b59b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "thealexrk/Modeling-Tau-Spread", "max_stars_repo_path": "example/example.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1231, "size": 3972 }
\section{Related Work}

Secure data enclaves have a long and varied history, starting with air-gapped systems that are physically disconnected from the internet. While there are many examples of data enclaves, to the best of our knowledge none provide the stewardship and policy-based models used in \NAMENS. In this section, we briefly review a range of data enclaves.

NORC~\cite{lane2008using}, a research institution at the University of Chicago, operates a data enclave to support researchers' investigations into programmatic, business, and policy decisions. NORC stores and manages a broad set of sensitive data in their FISMA-compliant data enclave. The enclave was designed with the vision of supporting multi-researcher collaboration via remote access terminals. All microdata are hosted on a secure NORC server. This common approach to developing enclaves ensures security by controlling access and limiting computation to the server. However, it is constrained by the capacity of its storage and computational infrastructure and by the need to host and operate local infrastructure. Moreover, it is typically not well suited for large scale data analysis. Furthermore, users could potentially read microdata through their terminal. Thus, this approach only guarantees the privacy of the bulk of the microdata.

The ICPSR data enclave \cite{icpsr} hosts sensitive datasets for social science research. While the majority of the datasets are public, ICPSR also manages restricted-use datasets, such as crime data, that are protected in data enclaves. They offer two types of data enclave: physical and virtual. The physical data enclave is a single protected server that is disconnected from the internet and is accessible only in person. The virtual enclave is a remote desktop solution that is designed to prevent copying of data. Systems such as these offer security at the cost of accessibility and ease of use while, at the same time, lacking a solution to the fundamental problem of users manually copying sensitive microdata records.

The National Center for Health Statistics (NCHS) Research Data Center (RDC) \cite{cdc} hosts a large collection of restricted datasets. These datasets contain health information and are subject to HIPAA guidelines. The data are accessible on-premise at the NCHS RDC or the Federal Statistical RDC, or in some cases via remote access.
%several datasets are not available for remote access.
The RDC restricts access to direct identifiers such as name and social security number while leaving indirect identifiers such as geography accessible. Even with the constraints on access and limited record accessibility, access to potentially identifying information leaves this system open to linkage attacks. This is mitigated, to some extent, by strong vetting of research proposals.

The NCI Genomic Data Commons \cite{grossman2016toward} hosts several petabytes of genomic data, and provides an on-premise cloud model to enable computation on these sensitive datasets. While the on-premise infrastructure meets compliance requirements, this choice leads to added costs associated with building and maintaining production compute infrastructure---an approach that is unlikely to be broadly available to a wide range of scientists. \NAMENS, and other cloud-based enclaves, benefit significantly from the low-cost cloud resources made possible by providers' economies of scale.
Foster~\cite{foster-stakeholder} reviews approaches to enabling secure analysis of sensitive data, and proposes---but does not describe an implementation of---a cloud architecture with similarities to that described here.
{ "alphanum_fraction": 0.8221976808, "avg_line_length": 72.44, "ext": "tex", "hexsha": "cb57faa6038cf0e7ea469c5cf7cfc3d184d91c1f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f8bbef7b5bf4f85443a7287260c18295d845271e", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "yadudoc/safe_data", "max_forks_repo_path": "relatedwork.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f8bbef7b5bf4f85443a7287260c18295d845271e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "yadudoc/safe_data", "max_issues_repo_path": "relatedwork.tex", "max_line_length": 123, "max_stars_count": null, "max_stars_repo_head_hexsha": "f8bbef7b5bf4f85443a7287260c18295d845271e", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "yadudoc/safe_data", "max_stars_repo_path": "relatedwork.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 735, "size": 3622 }
\documentclass[sigconf]{acmart}\usepackage[]{graphicx}\usepackage[]{color}
%% maxwidth is the original width if it is less than linewidth
%% otherwise use linewidth (to make sure the graphics do not exceed the margin)
\makeatletter
\def\maxwidth{ %
  \ifdim\Gin@nat@width>\linewidth
    \linewidth
  \else
    \Gin@nat@width
  \fi
}
\makeatother
\definecolor{fgcolor}{rgb}{0.345, 0.345, 0.345}
\newcommand{\hlnum}[1]{\textcolor[rgb]{0.686,0.059,0.569}{#1}}%
\newcommand{\hlstr}[1]{\textcolor[rgb]{0.192,0.494,0.8}{#1}}%
\newcommand{\hlcom}[1]{\textcolor[rgb]{0.678,0.584,0.686}{\textit{#1}}}%
\newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}%
\newcommand{\hlstd}[1]{\textcolor[rgb]{0.345,0.345,0.345}{#1}}%
\newcommand{\hlkwa}[1]{\textcolor[rgb]{0.161,0.373,0.58}{\textbf{#1}}}%
\newcommand{\hlkwb}[1]{\textcolor[rgb]{0.69,0.353,0.396}{#1}}%
\newcommand{\hlkwc}[1]{\textcolor[rgb]{0.333,0.667,0.333}{#1}}%
\newcommand{\hlkwd}[1]{\textcolor[rgb]{0.737,0.353,0.396}{\textbf{#1}}}%
\let\hlipl\hlkwb
\usepackage{framed}
\makeatletter
\newenvironment{kframe}{%
\def\at@end@of@kframe{}%
\ifinner\ifhmode%
\def\at@end@of@kframe{\end{minipage}}%
\begin{minipage}{\columnwidth}%
\fi\fi%
\def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep
\colorbox{shadecolor}{##1}\hskip-\fboxsep
% There is no \\@totalrightmargin, so:
\hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}%
\MakeFramed {\advance\hsize-\width
\@totalleftmargin\z@ \linewidth\hsize
\@setminipage}}%
{\par\unskip\endMakeFramed%
\at@end@of@kframe}
\makeatother
\definecolor{shadecolor}{rgb}{.97, .97, .97}
\definecolor{messagecolor}{rgb}{0, 0, 0}
\definecolor{warningcolor}{rgb}{1, 0, 1}
\definecolor{errorcolor}{rgb}{1, 0, 0}
\newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX
\usepackage{alltt}
%%% Local Variables:
%%% ispell-local-dictionary: "english"
%%% End:
\usepackage[utf8]{inputenc}
\usepackage{booktabs} % For formal tables
\usepackage{graphicx}
\usepackage{rotating}
\usepackage{listings}
\definecolor{Gray}{gray}{0.6}

\copyrightyear{2018}
\acmYear{2018}
\setcopyright{acmlicensed}
\acmConference[GECCO '18 Companion]{Genetic and Evolutionary Computation Conference Companion}{July 15--19, 2018}{Kyoto, Japan}
\acmBooktitle{GECCO '18 Companion: Genetic and Evolutionary Computation Conference Companion, July 15--19, 2018, Kyoto, Japan}
\acmPrice{15.00}
\acmDOI{10.1145/3205651.3208273}
\acmISBN{978-1-4503-5764-7/18/07}

\title{Performance improvements of evolutionary algorithms in Perl 6}

\author{Juan-Julián Merelo-Guervós}
\orcid{1234-5678-9012}
\affiliation{%
  \institution{Universidad de Granada}
  \streetaddress{Daniel Saucedo Aranda, s/n}
  \city{Granada}
  \country{Spain}
}
\email{[email protected]}

\author{José-Mario García-Valdez}
\affiliation{%
  \institution{Instituto Tecnológico de Tijuana}
  \streetaddress{Calzada Tecnológico, s/n}
  \city{Tijuana}
  \country{Mexico}
}
\email{[email protected]}

% The default list of authors is too long for headers.
%\renewcommand{\shortauthors}{J. J. Merelo et al.}

\begin{abstract}
Perl 6 is a recently released language that belongs to the Perl family but was actually designed from scratch, not as a refactoring of the Perl 5 codebase. Over the two years since its release, it has increased performance by several orders of magnitude, recently arriving at the point where it can safely be used in production.
In this paper, we are going to compare the historical and current performance of Perl 6 on a single problem, OneMax, to those of other interpreted languages; besides, we will also use implicit concurrency and see what kind of performance and scaling we can expect from it.
\end{abstract}

\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10003752.10003809.10003716.10011136.10011797.10011799</concept_id>
<concept_desc>Theory of computation~Evolutionary algorithms</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010919.10010172</concept_id>
<concept_desc>Computing methodologies~Distributed algorithms</concept_desc>
<concept_significance>300</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Theory of computation~Evolutionary algorithms}
\ccsdesc[300]{Computing methodologies~Distributed algorithms}

\keywords{Benchmarking, computer languages, concurrency, evolutionary algorithms, Perl, Perl 6}

\maketitle

\section{Introduction}

Performance has always been a concern in scientific computing. Generally, you will want to use the fastest language available to be able to run your experiments in as little time as possible. However, while implementation matters \cite{DBLP:conf/iwann/MereloRACML11}, ease of programming, available libraries and a supporting community are sometimes more significant concerns, since in scientific computing the target is to optimize the time to publish the paper, not only the time from pressing {\em Enter} to obtaining the results; that includes the time to get the program written in the first place, as well as the time to process the results. In that sense, interpreted languages such as Python, Perl or JavaScript \cite{fortin2012deap,DBLP:conf/ijcci/FarisAMCM16,DBLP:conf/gecco/GuervosVGES14,perl-ea,hidaka2017development,rivas2014object,ae09} offer fast prototyping, if not the fastest implementation, which usually belongs to compiled languages such as Haskell or Java \cite{DBLP:conf/evoW/MereloCBRGFRV16}. However, as proved in the cited paper, that is not always the case, and new languages deserve a chance to be tested, mainly if they offer functionalities that might make the implementation of evolutionary algorithms faster or more straightforward.

Besides, the performance of a language is not a static thing; while some languages are happy enough with the level they achieve and focus on other functionalities, newer languages focus on performance in every new release, offering improvements of several orders of magnitude. This has been the case with Perl 6 \cite{Tang:2007:PRI:1190216.1190218}, a new, concurrent, dynamic and multi-paradigm language that has been in development since 2000 and was released in December 2015. Since then, it has had a release cycle of one, or sometimes more, releases every month, with a stable release every four months. While initial tests discouraged us from including its figures in the paper where we benchmarked many languages for evolutionary algorithms \cite{DBLP:conf/evoW/GuervosBCRGRVHR17}, the increase in performance has been continuous, as has the implementation of implicit parallelism features. This paper is specifically focused on benchmarking this language for evolutionary algorithms, with the intention of proposing it as production-ready for scientific computing or evolutionary computation experiments.

The rest of the paper is organized as follows.
We will briefly present the state of the art in benchmarking evolutionary algorithms in the next section, followed by the set of experiments used to test the performance in Section \ref{sec:exp}. Results and charts will be presented in Section \ref{sec:res}, and we will close the paper by stating our conclusions.

\section{State of the art}
\label{sec:soa}

As a matter of fact, there is very little scientific literature on Perl 6, much less applied to scientific computing. The paper by Audrey Tang \cite{Tang:2007:PRI:1190216.1190218}, one of the early programmers of Pugs, a Perl 6 compiler written in Haskell, is one of the few we can find. In fact, the paper where she describes the design of the language has had some influence on language design, including the design of Typed Scheme, a functional language \cite{tobin2008design}.

Its sister language, Perl, has been used in evolutionary algorithms for a long time, with an early tool used for optimizing the performance of a network \cite{bucur2016optimizing}. Since the publication of the {\tt Algorithm::Evolutionary} library circa 2002 \cite{ecperl,perl-ea}, it has been applied to many different problems, including solving the MasterMind puzzle \cite{DBLP:journals/evi/Maestro-MontojoSG14}. In fact, its speed at processing evolutionary algorithms has made it a great tool for evolving regular expressions via the DiscoverRegex and GenRegex tools \cite{ruano2018using}, and even for optimizing the yield of analog integrated circuits \cite{guerra2015ocba}.

Perl 5 was a convenient and multi-paradigm, if not particularly groundbreaking, language. Conceptually, you could program an evolutionary algorithm in pretty much the same way you would do it in C or C++, which were at the time much faster languages. The fact that it was used proves that languages for implementing evolutionary algorithms are not chosen purely by their raw speed. However, speed has to be adequate and not vary by orders of magnitude with respect to other, well-established, languages. Even if slower, the trade-off might be interesting if a new language offers new ways of
% might be worthy?
implementing evolutionary algorithms that give you some insight into the inner workings of evolutionary optimization. This is why in this paper we set out to measure the speed of Perl 6 and its evolution, in order to show that the time has come to consider it as a language for evolutionary optimization, given the functional and concurrent facilities it now offers.
% Maybe add a comment about how much different are Perl 5 and Perl 6?
% They are compatible? Can we use parts of Algorithm::Evolutionary?

\section{Experimental setup}
\label{sec:exp}

In this experiment we have used the same operators and data already published in \cite{DBLP:conf/evoW/MereloCBRGFRV16}, that is, crossover, mutation and one-max or count-ones. We have added the Royal Road function \cite{mitchell1992royal}, mainly with the objective of comparing Perl and Perl 6 and their parallel facilities. The functions are well known, and the main objective of these tests was, besides comparing performance side by side, to see how this performance scales to big, and somewhat unrealistic, chromosome sizes. The way particular languages handle data structures means that, sometimes, dealing with bigger sizes is faster than dealing with smaller ones; as a matter of fact, in the above-mentioned paper Java achieved its best performance for the biggest chromosome size.
% Big chromosome -> large chromosome?
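For reference, these fitness functions can be stated compactly. OneMax simply counts the number of ones in a bit string of length $L$; the Royal Road variant benchmarked here, going by the 4-bit-block implementation shown later in the paper, counts the blocks whose bits are all equal:
\begin{equation*}
\mathrm{OneMax}(b_1,\ldots,b_L)=\sum_{i=1}^{L} b_i, \qquad
\mathrm{RR}_4(b_1,\ldots,b_L)=\sum_{j=0}^{L/4-1}\left[\, b_{4j+1}=b_{4j+2}=b_{4j+3}=b_{4j+4} \,\right],
\end{equation*}
where $[\cdot]$ equals 1 when the condition inside holds and 0 otherwise.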
We tested several data structures in Perl 6 and finally chose a vector (or array) of booleans as the fastest one. In fact, the speed of the benchmark is divided into two parts: the speed of randomly generating the vector and the speed of actually counting the number of ones. In this case, generating a vector of {\tt Bool}s was considerably faster than doing the same with integers, although summing them was almost 4 times as slow. That is why we also test a vector of integers in the experiments shown below. These two operations take two lines in Perl 6, as follows.
\begin{lstlisting}[language=Perl]
my $ones = Bool.roll xx $len ;
my $maxones = $ones.sum;
\end{lstlisting}
These two lines show the advantage of this kind of language: the same operation needs several lines and two loops in most other, non-functional, languages. The first one creates an array of random boolean values, generated with {\tt Bool.roll}; {\tt xx} {\em multiplies} it by the length to yield an array of the desired size. The second line just uses the {\tt sum} method, which is a standard method for arrays and can also be applied to arrays of booleans. In Perl 6, there are many possible ways of achieving the same result, but after several measurements we found this was the fastest, even if initially it was much slower than in other languages. Also, as can be seen, Perl 6 uses {\em sigils} for variables, with {\tt \$} being applied to most kinds of containers. {\tt Bool} is a standard type, and {\tt my} is a {\em scope} declaration which can optionally include a type or class declaration. A fuller introduction to the language is outside the scope of this paper, but the interested reader can check the documentation at \url{https://docs.perl6.org} for a tutorial or a more thorough explanation of all its features and capabilities.

The benchmark consists of 100,000 repetitions of the operation, for sizes that increase by a factor of two, starting from 16 and going up, when possible, to 32768. All experiments took place on a desktop computer with 8 cores running Ubuntu 14.04.5. All programs are open source and included in the same GitHub repository that holds this paper, at \url{https://github.com/JJ/perl6eo}. Data from the experiments is also freely available in the same place.

\section{Results and analysis}
\label{sec:res}
%
\begin{figure}[h!tb]
  \centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-perl6-1}
\end{knitrout}
\caption{Plot of time needed to perform 100K OneMax evaluations in several versions of Perl 6, from 2016 to the current one in 2018. Strings have lengths increasing by a factor of two from 16 to $2^{15}$. Please note that $x$ and $y$ both have a logarithmic scale.}
\label{fig:perl6:mo}
\end{figure}
%
The first experiment just measured the speed of the {\sf max-ones} function across releases of Perl 6; Perl 6 has a monthly release schedule, with version numbers corresponding to year and month. The result of this operation is shown in Figure \ref{fig:perl6:mo}, and it clearly shows the increase in speed over time, which amounts to almost one order of magnitude from the first version, whose performance prompted us to exclude it from our initial study, to the current one, which performs much better.
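For context, the timings above can be obtained with a driver along the following lines. This is a minimal sketch under our own assumptions: the loop structure, names and output format are ours, and only the two benchmarked lines come from the actual code.
\begin{lstlisting}[language=Perl]
# Sketch of a benchmark driver: times 100,000 onemax evaluations
# per chromosome length, doubling the length each round.
my $len = 16;
while $len <= 2**15 {
    my $start = now;
    for ^100_000 {
        my $ones = Bool.roll xx $len; # random chromosome
        my $maxones = $ones.sum;      # count the ones
    }
    say "$len: {now - $start} seconds";
    $len *= 2;
}
\end{lstlisting}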
%
\begin{figure*}[h!tb]
  \centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-mo-1}
\end{knitrout}
\caption{Plot of time needed to perform 100K OneMax function evaluations in strings with lengths increasing by a factor of two from 16 to $2^{15}$. Please note that axes $x$ and $y$ both have a logarithmic scale.}
\label{fig:time:mo}
\end{figure*}
%
Despite the improvement, this performance needs to be compared to that of the rest of the languages we tested in the previous paper. We have excluded the fastest, mainly compiled, languages, leaving mostly scripting languages plus some compiled ones. This comparison is shown in Figure \ref{fig:time:mo}.

This chart, besides all the measures already published in the previous paper, includes three versions of the one-max in Perl 6. One is the same as above, which uses a boolean representation for the chromosome bits; the second uses an integer representation for the bits and is listed as {\tt IntVector}. This version needed a bit of hacking, which included generating the bits as Booleans and then transforming them into integer numbers; however, that extra step made it a bit slower than the Boolean version.
%
\begin{figure*}[h!tb]
  \centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-bf-1}
\end{knitrout}
\caption{Plot of time needed to perform mutation on 100K chromosomes with increasing lengths from 16 to $2^{15}$. Please note that $x$ and $y$ both have a logarithmic scale.}
\label{fig:time:bf}
\end{figure*}
%
The third version, listed as {\tt perl6-BitVector-hyper}, shows one of the unique characteristics of Perl 6: implicit parallelism. The {\tt hyper} and {\tt race} methods, applied to vectors, divide the job into different threads, 4 by default, evaluating different parts of the vector in parallel, without affecting the rest of the operation in any way. In the case above, just changing the line to
\begin{lstlisting}[language=Perl]
my $maxones = $ones.race.sum;
\end{lstlisting}
made the sum execute in parallel, improving the performance by roughly the number of threads used by default. We used {\tt race} instead of {\tt hyper} since the latter forces in-order execution; in our case, the order of the sums is not important, and keeping the order makes it a bit slower.

The chart shows that, in fact, Perl 6 is, for this operation, faster than C++ for big sizes, and overall faster than the Lua language, or even than Python for a particular representation. For some sizes, it can also be faster than Common Lisp.
%
\begin{figure*}[h!tb]
  \centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-xo-1}
\end{knitrout}
\caption{Plot of time needed to perform crossover on 100K chromosomes with increasing lengths from 16 to $2^{15}$. Please note that $x$ and $y$ both have a logarithmic scale.}
\label{fig:time:xo}
\end{figure*}
%
In principle, by being faster than more traditional languages, we show here that Perl 6 can be not only convenient in terms of programming ease (just two lines where other languages need many more), but also fast. Let us, however, have a look at the rest of the genetic operations. The very traditional bitflip mutation comparison chart is shown in Figure \ref{fig:time:bf}. The lines used for this operation are shown below.
\begin{lstlisting}[language=perl]
my $position = $range.pick;
@ones[$position] = !@ones[$position];
\end{lstlisting}
In this case we use {\tt pick} to choose a random value in a range, which is the chromosome size, and flip the bit at that random position. This could also be done in a single line, avoiding the {\tt \$position} variable, but we avoided that form in the listing above since it made the operation slightly slower. Besides, we use the {\tt \@} sigil to clearly indicate that we are dealing with a vector.
%
\begin{figure*}[h!tb]
  \centering
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{figure/results-rr-1}
\end{knitrout}
\caption{Plot of time needed to perform 100K royal road functions on chromosomes with increasing lengths from 16 to $2^{15}$. Please note that $x$ and $y$ both have a logarithmic scale.}
\label{fig:time:rr}
\end{figure*}
%
In this case, Perl 6 is considerably fast, although not the fastest, and the time needed is independent of the chromosome length, which is a good trait overall. Once again, it shows a good performance in this operation.

Let us examine the next genetic operator, crossover. The crossover performance comparison chart is shown in Figure \ref{fig:time:xo}. In this case, after initial tests, we went back to testing a different representation: a bit string, that is, a string composed of 0s and 1s. Strings have a different internal representation than vectors, and the operations needed are different. While in the first case we could use this line to perform the crossover:
\begin{lstlisting}[language=perl]
@chromosome2.splice($start,$this-len,
  @chromosome1[$start.. ($start+$this-len)]);
\end{lstlisting}
in the second case we used
\begin{lstlisting}[language=perl]
$chromosome2.substr-rw($start,$this-len ) =
  $chromosome1.substr($start,$this-len);
\end{lstlisting}
thus changing from an array operation to a string operation. We did so after finding a very disappointing performance with the first one, in fact the worst of all languages tested. Using a bit string was not much better, still needing almost double the time of the second-worst language, which in this case is Scala. The fact that these two functional languages show the same disappointing performance, while Scala is usually very fast across applications, points to the fact that we might be taking the wrong, non-functional, approach to this operation in these languages. In fact, changing the line to
\begin{lstlisting}[language=perl]
@chromosome2.splice($start,$this-len,
  @chromosome1.skip($start).head($this-len));
\end{lstlisting}
somewhat improved the performance. In this case we are using functional methods to access different parts of the chromosome. There is around a 20\% improvement over the previous line, but it is still very slow compared to other languages. This proves, anyway, that testing and some help from the community are needed to extract the best performance out of a language; it also shows that idiomatic constructions are in general preferred over generic ones. It always pays to know the language well.

That is also why we tested another function, the well-known Royal Road, which was proposed as an example of a complicated landscape for evolutionary algorithms. It might also be a complicated performance benchmark.
Perl 6 needs a single line to implement this function:
\begin{lstlisting}[language=perl]
my $royal-road= $ones.rotor(4)
  .grep( so (*.all == True|False) ).elems ;
\end{lstlisting}
In this case, we are using several unique Perl 6 features, and doing so in a functional way. For instance, {\tt |} creates {\em Junctions}, and {\tt all} becomes {\tt True} or {\tt False} if all the elements in its 4-element block are. That is a very straightforward, and mathematically correct, way to express the Royal Road function. However, it is still slower than Perl by an order of magnitude, as shown in Figure \ref{fig:time:rr}. In fact, we had to stop the benchmark, since scaling with size was very bad too. That is why we again used the {\tt .race} method, which distributes load among threads. For smaller sizes, the overhead needed to set up the distribution of tasks made it slower, and thus not very convenient for the usual sizes; for bigger sizes, however, it became much faster, by almost an order of magnitude. This shows again that implicit parallelism conveniently allows working with big sets of elements, making the result faster. However, it is still very slow overall. As this feature becomes the target of optimization in subsequent releases of Perl 6, it will probably improve in speed. The implicit parallel facilities of Perl 6 make it possible, however, to optimize at a different level, for instance the population level, which still makes Perl 6 an interesting target for the implementation of evolutionary algorithms. In fact, there are already two implementations available in the Perl 6 module ecosystem. One, by the author of this paper, is {\tt Algorithm::Evolutionary::Simple}, which includes implementations of the operators shown here. The other one, {\tt Algorithm::Genetic}, makes extensive use of Perl 6 functionalities, including roles and {\tt gather/take} loops.

\section{Conclusions}

In this paper, we set out to assess the readiness of Perl 6, a new programming language, for implementing evolutionary algorithms. Traditionally, these tests have been based purely on performance, to the point that the only question asked when a new evolutionary algorithm library is released is: ``Is it faster than Java/C++?''. In this paper we have considered this performance, first historically, from the first releases, and then for the latest releases. Taking into account the improvements in performance experienced over this time, and how seriously performance issues are taken by the developers, we can safely assume that in the medium term Perl 6 will achieve levels of speed comparable with those of other scripting languages, which means that it could be faster than some compiled languages.

On the other hand, a very important consideration is also the facilities that the language offers for the implementation of the most classical evolutionary functions. In this case, Perl 6 offers functional methods that allow the chaining of operations, equivalent to function composition, so that in many cases a single line of chained functions is enough to process chromosomes. In many cases, this idiomatic way of doing those operations will result in a faster operation, since idiomatic constructs are usually optimized in every language. In this sense, using either functional methods or implicitly parallel ones such as {\tt .race} results in improvements in speed, although for the time being, and in general, Perl 6 is still slower than its sister language, Perl.
Putting both things in the balance, the general conclusion is that the time for implementing evolutionary algorithms in Perl 6 has arrived, although there is still some way to go in terms of performance. Closely following the development of the language will let the programmer choose the fastest alternative for the implementation of evolutionary algorithms, which constitutes an interesting and promising line of work. Another line of work will be to use explicit concurrency primitives to implement concurrent evolutionary algorithms. This is something we will explore in a different paper.

\begin{acks}
This paper is part of the open science effort at the University of Granada. It has been written using {\tt knitr}, and its source as well as the data used to create it can be downloaded from \href{https://github.com/JJ/2016-ea-languages-wcci}{the GitHub repository} \url{https://github.com/JJ/2016-ea-languages-wcci/}. This paper has been supported in part by the \href{http://geneura.wordpress.com}{GeNeura Team} and by projects TIN2014-56494-C4-3-P (Spanish Ministry of Economy and Competitiveness) and DeepBio (TIN2017-85727-C4-2-P). We are also deeply grateful to the Perl 6 community, who through the Perl 6 IRC channel and pull requests have helped greatly to improve the code.
\end{acks}

\bibliographystyle{ACM-Reference-Format}
\bibliography{geneura,languages,GA-general}

\end{document}

%%% Local Variables:
%%% ispell-local-dictionary: "english"
%%% hunspell-local-dictionary: "english"
%%% End:
{ "alphanum_fraction": 0.7801257763, "avg_line_length": 44.0636833046, "ext": "tex", "hexsha": "3105b5874a82e98ee381452938bb7fd9cc599158", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2018-04-01T20:33:47.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-23T12:02:59.000Z", "max_forks_repo_head_hexsha": "2a174cdcfeabe4f64a90787f76efbc1abdb6a5a1", "max_forks_repo_licenses": [ "Artistic-2.0" ], "max_forks_repo_name": "JJ/2016-ea-languages-wcci", "max_forks_repo_path": "ea-perls.tex", "max_issues_count": 27, "max_issues_repo_head_hexsha": "2a174cdcfeabe4f64a90787f76efbc1abdb6a5a1", "max_issues_repo_issues_event_max_datetime": "2018-07-14T14:54:19.000Z", "max_issues_repo_issues_event_min_datetime": "2015-11-23T07:10:09.000Z", "max_issues_repo_licenses": [ "Artistic-2.0" ], "max_issues_repo_name": "JJ/2016-ea-languages-wcci", "max_issues_repo_path": "ea-perls.tex", "max_line_length": 130, "max_stars_count": 2, "max_stars_repo_head_hexsha": "2a174cdcfeabe4f64a90787f76efbc1abdb6a5a1", "max_stars_repo_licenses": [ "Artistic-2.0" ], "max_stars_repo_name": "JJ/2016-ea-languages-wcci", "max_stars_repo_path": "ea-perls.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-15T05:14:17.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-28T14:17:18.000Z", "num_tokens": 6654, "size": 25601 }
\chapter{Components} \label{sec:draw} Some sweet pictures! \nomenclature[aA]{$y^+$}{Length in viscous units} ... ... ...
{ "alphanum_fraction": 0.6532258065, "avg_line_length": 12.4, "ext": "tex", "hexsha": "205f140dd979884b0f43dd79a98a797f6ca468de", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5594a5f3d172662b4404d5357d3a28639a0feb43", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "Biles430/Dissertation", "max_forks_repo_path": "appendices/fabricationPictures.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5594a5f3d172662b4404d5357d3a28639a0feb43", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "Biles430/Dissertation", "max_issues_repo_path": "appendices/fabricationPictures.tex", "max_line_length": 49, "max_stars_count": null, "max_stars_repo_head_hexsha": "5594a5f3d172662b4404d5357d3a28639a0feb43", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "Biles430/Dissertation", "max_stars_repo_path": "appendices/fabricationPictures.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 37, "size": 124 }
\documentclass[a4paper,twocolumn,11pt,accepted=2017-05-09]{quantumarticle}
\pdfoutput=1
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{listings}
\usepackage{tikz}
\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{mathtools}
\DeclarePairedDelimiter\bra{\langle}{\rvert}
\DeclarePairedDelimiter\ket{\lvert}{\rangle}
\DeclarePairedDelimiterX\braket[2]{\langle}{\rangle}{#1 \delimsize\vert #2}
\begin{document}
\title{Qurry: A prototype quantum programming language}
\author{Lucas~Saldyt}
\affiliation{Arizona State University}
\email{[email protected]}
\homepage{https://github.com/LSaldyt}
\maketitle

\begin{abstract}
Innovation in near-term quantum programming requires the use of lightweight, transparent abstraction mechanisms. This paper outlines a typeless, dynamic, functional quantum programming language and a surrounding software system that allows for rapid prototyping and evolution of the language. Abstractions are desirable because they allow a programmer to express more computation with less code. Transparency is crucial, especially in quantum computing, because hardware details strongly affect the types of programs that can be written. To some extent, abstraction and transparency oppose one another. However, carefully crafted abstractions can be lightweight enough to preserve transparency. Qurry achieves lightweight abstractions through hierarchical composition of data and of operations.
\end{abstract}

\section{Introduction}

In 1981, Richard Feynman noted that quantum physics appears to be impossible to simulate using a classical computer, but that quantum computers appeared to be perfectly capable of simulating quantum physics \cite{feynman_1981}. Effectively, quantum computation potentially allows new problems to be computed efficiently: in particular, this includes literal simulations of the physical world, but also abstract algorithms that may receive a superpolynomial improvement in time complexity. Stephen Jordan, of Microsoft, keeps a nearly exhaustive list of quantum algorithms and the speedups that they offer \cite{jordan}. According to this list at time of writing, there are thirty-five distinct quantum algorithms which offer a potential superpolynomial speedup. This famously includes Peter Shor's factoring and discrete log algorithms, as well as, fundamentally, quantum simulation \cite{shor, small_molecule_sim, feynman_1981, lanyon2010towards, lloyd2006programming}. Interestingly, many quantum algorithms, such as the Deutsch-Jozsa algorithm, are matched (at least practically, and sometimes theoretically) by probabilistic algorithms, and even some algorithms with superpolynomial speedups are based on older probabilistic versions \cite{deutsch1992rapid}. For instance, machine learning does not appear to have superpolynomial improvements at time of writing, but quantum computers can still be applied to it \cite{biamonte2017quantum}. The most promising application of quantum computing in the near term is in molecular simulation. As the comparisons section will demonstrate, Qurry offers unique programming language features which make implementing each of these algorithms significantly easier.

Currently, quantum programming is at a stage analogous to that of classical programming in the 1960s and early 1970s. A true high-level quantum programming language is still a desideratum.
In the history of classical programming languages, C, while not perfect, filled a major gap that had existed before its time \cite{kernighan2006c}, and quantum computing still lacks a similar language. Importantly, C's evolution was partially driven by the need for a language expressive enough to easily re-code Unix. Quantum languages, of course, are driven by a different motivation, primarily the power that quantum computing adds to the existing classical computing stack. Importantly, quantum programming does not appear to be replacing classical computation, but adding to it. Often quantum computers are used as auxiliary elements to classical computers in hybrid computation \cite{zeng2017first}.

According to the needs of the field, a high-level quantum programming language will likely need to be lightweight, hybridized, and transparent. While myriad other desirable criteria exist, these are certainly among them. Hybridized instruction languages already exist and allow one to leverage the power of existing classical computation \cite{forest, cirq, qasm, pyquil}. However, the ability to program at a higher level of abstraction is desired. It is not that it is particularly hard to write quantum circuits at the gate level, but simply that existing quantum programming languages often force programmers to write redundant, inexpressive code, in the sense that more description is required to get the same level of computation. An obstruction to this is that running circuits on quantum hardware is not trivial, and requires transparency in a given language.

In the design philosophy of C++, Bjarne Stroustrup describes ``lightweight abstractions'', which allow users to easily exploit the full power of their computer without having to write hand-optimized assembly code \cite{stroustrup}: ``The aim [of C++] is to allow a programmer to work at the highest feasible level of abstraction by providing a simple and direct mapping to hardware and zero-overhead abstraction mechanisms''. A zero-overhead abstraction is one which is easier to implement, but performs no differently than code hand-written in a lower-level language (assembly, in the case of C++, and a quantum circuit language in the case of Qurry).

However, some have argued for the importance of hardware, as in Google's phrase ``hardware aware, not hardware agnostic'' \cite{google_cloud}. Many aspects of hardware are particularly important, for instance topology, which will potentially result in a programmer needing to modify a quantum algorithm for it to run on two separate computers. To some extent, this can be handled by existing compilers, but not nearly at the level that classical compilers can \cite{quilc}. Additionally, the error characteristics of a quantum computer are actually a crucial detail, even though quantum programmers might desire to ignore them. In the current state of quantum computing, many details of hardware cannot yet be ignored. However, it is obvious that \emph{eventual} hardware independence is desirable --- consider the power of Java in the classical computing world. Lightweight abstractions are precisely the scaffold that will catalyze this transition. Clojure is another inspiration for Qurry \cite{hickey2008clojure}.

\subsection{Languages}

Qurry is certainly not the first of its kind. Several quantum programming languages exist (\cite{larose2019overview, omer1998procedural} TODO Cite more), but this paper will discuss two leading examples: Q\# and Quipper.

Summarize and discuss Q\# here. TODO.
\cite{svore2018q}

Summarize and discuss Quipper here. TODO. \cite{selinger2006lambda} \cite{selinger2004towards} \cite{quipper} \cite{quipper_guide} \cite{proto_quipper}

\subsection{Background}

The absolute basics of quantum computing are not nearly as intimidating as they are sometimes made out to be. The main requirement is linear algebra, but exposure to complex numbers and probability is also helpful. For a complete overview, see the beginning chapters of ``Quantum Computation and Quantum Information'' \cite{nc}.

Conventionally, quantum data is represented on qubits. When measured, qubits will be in \emph{either} of two states: $\ket{0}$ or $\ket{1}$. However, more generally, qubits are in a combination of these two states, which is known as a superposition. A particular active qubit's state is described by two complex numbers, $\alpha$ and $\beta$, which are collected in a vector. This is written in the simple equation:
$$\ket{\psi} = \alpha\ket{0} + \beta\ket{1}.$$
However, the state vector $(\alpha, \beta)$ is not directly examinable. Instead, when a qubit is measured, one measures $\ket{0}$ with probability $|\alpha|^2$ and $\ket{1}$ with probability $|\beta|^2$; these two probabilities must sum to $1$. For single qubits, a state is evolved in the model simply by multiplying the state vector for a qubit by a $2 \times 2$ unitary matrix, known as a one-qubit gate. This is written as $Av$. For $n$ qubits, the state is simply a complex vector of length $2^n$, and an $n$-qubit gate is a $2^n \times 2^n$ matrix:
$$\ket{\psi} = \sum_{i=0}^{2^n - 1} \alpha_i\ket{s_i}, \quad s_i \in \{0, 1\}^n.$$
Once a given $n$-qubit quantum state is measured, the outcome is a single bit string, $s_i$, with probability $|\alpha_i|^2$. If it is possible to repeat this measurement (by preparing the quantum state multiple times), then a collection of measurements samples a multinomial distribution defined by the states $s_i$ and the probabilities $|\alpha_i|^2$; but this distribution has $2^n$ states, and measuring a quantum state only gives a single sample.

Importantly, past a single qubit, quantum states can be entangled. Qubits are entangled when the measurement outcome of one qubit is correlated with the measurement outcomes of other qubits. The simplest, most famous example of this is the Bell states, also known as EPR pairs. For instance, in one Bell state, two qubits are either measured both as $\ket{0}$ or both as $\ket{1}$ (even when measured independently), but there is no probability for them to differ. This is the basis for all interesting quantum algorithms.

To summarize, superposition is the fundamental state model for quantum computers, but a given quantum state cannot be measured directly, only sampled once. Then, entanglement allows the correlation of measurements, which is a crucial ingredient in any useful quantum algorithm. Any quantum program is simply a combination of linear operators which affect the superposition. Given an initial state, conventionally the all-zeros basis state $\ket{0\dots0}$, a quantum program $P$ operates on the state vector, and then the vector is sampled by measuring a subset of its qubits. Importantly, a quantum program $P$ is itself a linear operator, potentially further composed of other linear operators. The data in a quantum program, at this level of abstraction, will only ever be a vector of qubits, but at higher levels of abstraction, this vector can be subsectioned into semantic datatypes, no differently than how C++'s fundamental memory model is a sequence of bytes.
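To make the state-vector model concrete, the following is a minimal NumPy sketch (illustrative only, and not part of Qurry) that prepares a Bell state by applying a Hadamard and a CNOT to the all-zeros state and then samples measurement outcomes; treating the first qubit as the most significant index is a convention assumed by this sketch.
\begin{lstlisting}[language=Python]
import numpy as np

# Single-qubit Hadamard and two-qubit CNOT as unitary matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, the all-zeros basis state
state = np.zeros(4)
state[0] = 1.0

# Apply H to qubit 0 (tensored with identity on qubit 1), then CNOT
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

# Outcome i (a bit string) is sampled with probability |alpha_i|^2
probs = np.abs(state) ** 2
samples = np.random.choice(4, size=10, p=probs)
print([format(s, '02b') for s in samples])  # only '00' and '11' appear
\end{lstlisting}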
At the same time, in hybrid classical-quantum computation, classical control can set up, run, and measure the outcomes of quantum sub-programs. In this way, some quantum program $P$ will be embedded in a hybrid program.

\subsection{Circuit Languages}

At time of writing, Rigetti pyquil contains the following quantum gates and operations.

Single-qubit gates and operations:
\begin{itemize}
\item RESET, I, X, Y, Z, H, S, T
\end{itemize}
Qubit gates taking an angle as the first parameter and a qubit as the second:
\begin{itemize}
\item RX, RY, RZ, PHASE
\end{itemize}
Swap operators, where each takes two qubits, and PSWAP takes an additional angle as a first argument:
\begin{itemize}
\item SWAP, ISWAP, and PSWAP
\end{itemize}
Controlled operators:
\begin{itemize}
\item CZ, CNOT
\item CSWAP
\item CPHASE00, CPHASE01, CPHASE10, CPHASE
\end{itemize}
And of course the hybrid measurement instruction, which takes a qubit as the first argument and a classical register as the second:
\begin{itemize}
\item MEASURE
\end{itemize}
pyquil also contains the following classical operations:
\begin{itemize}
\item TRUE, FALSE, NOT, NEG
\item AND, OR, MOVE, EXCHANGE, IOR, XOR
\item ADD, SUB, MUL, DIV
\item EQ, GT, GE, LE, LT
\item LOAD
\item STORE
\item CONVERT
\end{itemize}
Most importantly, though, QUIL actually already has a notion of higher-level functions, which it calls ``Modifiers''. These are the following:
\begin{itemize}
\item DAGGER
\item CONTROLLED
\end{itemize}

\subsection{Classical Probabilistic Languages}

Classical probabilistic programming languages are a recent innovation from the MIT cognitive science community \cite{goodman2012church, carpenter2017stan}. Essentially, they create a way for non-expert programmers to access the power of Bayesian inference. Users can create simple probabilistic models in standard code and then run them through an expert-created inference backend. Notably, this has resulted in dramatically reduced code complexity, with a famous case where a 50-line probabilistic program could compete with traditional approaches to face recognition \cite{50-lines}. Additionally, classical probabilistic programming has outperformed many other AI techniques \cite{lake2015human}. These languages are successful because they package powerful inference algorithms and intuitive, simple modeling into an understandable framework. Similar success has also been seen, for instance, with frameworks such as Tensorflow, Keras, or Edward \cite{abadi2016tensorflow}. Qurry is inspired by the effects these languages have had, and in fact there is some overlap between quantum programming and classical probabilistic languages. For instance, quantum Bayesian inference has been conceptualized since the 1970s--1990s \cite{tucci1995quantum}. At time of writing, this has been distilled into packages such as Bayesforge \cite{bayesforge, przewikezlikowski2019support}.

\section{Features}

Creating abstractions in quantum programming languages comes down to the creation of higher-order functions and higher-order datatypes. Language features that allow the creation and composition of both higher-order functions and higher-order datatypes set Qurry apart from lower-level circuit languages.

\subsection{Higher Order Functions}

The simplest illustration of a higher-order function is trivial, but surprisingly absent from existing quantum languages: it is the tensor operator ($U^{\otimes n}$), known to functional programmers as the higher-order function \emph{map}.
Many quantum algorithms will begin with a change of basis, effectively a state preparation. Commonly, this is to the Hadamard basis, and is done by applying the Hadamard operator to a block of qubits. In a conventional circuit language, this is done as follows:
\begin{lstlisting}
H 0
H 1
...
H n
\end{lstlisting}
However, textbooks and papers will write this as $H^{\otimes n}$, and in Qurry, it can be written:
\begin{lstlisting}
(define workspace (block n qubit))
(map H workspace)
\end{lstlisting}
Implicitly, this example also introduces Qurry's assignment statement \emph{define}, and its representation for qubit arrays, the \emph{block} command, which takes a size and a type, allocates space, and handles mapping to actual qubit indices. More impressively, Qurry supports automatic currying of functions:
\begin{lstlisting}
(define initialize_basis (map H))
(define workspace (block n qubit))
(initialize_basis workspace)
\end{lstlisting}
Of course, Qurry utilizes the two existing higher-order functions defined by QUIL: DAGGER and, more importantly, CONTROLLED. For brevity, Qurry renames CONTROLLED to CU. Effectively, this allows the creation of arbitrary controlled operators, for instance the redefinition of CNOT:
\begin{lstlisting}
(define custom_cnot (CU X))
(H 0)
(custom_cnot 0 1)
\end{lstlisting}
Qurry calls functions like CU controlled higher-level operators. This list includes:
\begin{itemize}
\item CU
\item CNU
\item Cascade, CascadeU
\item ReverseCascade, ReverseCascadeU
\item Collect, CollectU
\item Expand, ExpandU
\item SimU
\end{itemize}
Interestingly, many of these are defined through composition of simpler gates. For instance, CNU takes a block of control qubits and a block of work qubits, entangles pairs of control and work qubits, and finally entangles a target qubit by which a unitary is controlled. Essentially, each of these operators performs some control operations (through a composition of CNOT gates, conventionally), and then performs an arbitrary unitary operation which is controlled by the result of the previous control operations. (This section will contain circuits and more in-depth explanations, but these are currently commented out.)
%[TODO: Circuit Diagram]
%A ``Cascade" is simply a chain of shifted CNOT gates, and a ``CascadeU" is simply a ``Cascade" which in turn controls the operation of a unitary gate.
%[Circuit Diagram]
%ReverseCascade:
%[Circuit Diagram]
%Now consider the circuit used for the simulation of a hamiltonian ():
%[Circuit Diagram]
%Clearly, this circuit contains repeated information, which can be abstracted into the form of another function: ``SimU", which in turn is a "collect" operation, an "expand" operation, and a controlled unitary in between them.
%[Circuit Diagram]
Lastly, Qurry implements lambda expressions, which allow arbitrary re-use and composition of other operators, and natively support currying. Since the $\lambda$ character is unavailable on most conventional keyboards, lambda expressions are denoted with an $l$, and can be used to create named functions, such as:
\begin{lstlisting}
(define create-bell
  (l (a b)
    ((H a)
     (CNOT a b))))
\end{lstlisting}
And of course they support currying:
\begin{lstlisting}
(define partial
  ((l (a b c)
    ((X a)
     (X b)
     (CCNOT a b c))) 0 1))
(partial 2)
\end{lstlisting}
Obviously, Qurry's Lisp interface will have plenty of parentheses.

\subsection{Higher Order Datatypes}

Qurry's memory model is simple: an array of $n$ qubits and $m$ classical bits.
However, in programmer-space, these arrays are cut up and defined using semantic datatypes. The previous section discussed the \emph{block} type and the \emph{define} command. The \emph{block} command will automatically select appropriate qubits in the array and map to these when used. In terms of datatypes, not much more is needed, except for the \emph{datatype} command, which mimics C++'s \emph{struct} or \emph{class}. For simplicity, elegance, and robustness, Qurry does not implement encapsulation or inheritance, but instead uses public access by default (in the spirit of Python, since after all, Qurry has a Python interface), and relies on composition instead of inheritance. A \emph{datatype} is nothing more than a contiguous collection of other Qurry datatypes, with names for each field. Like \emph{block} objects, \emph{datatype}s automatically map to qubits and bits in Qurry's memory model. These can be \emph{block}s, single qubits, single bits, and other defined \emph{datatype} objects, allowing for recursive types. Fields within a datatype are simply accessed with the dot operator:
% TODO: should functions be possible as datatype fields?
\begin{lstlisting}
(datatype entanglion
  (a qubit)
  (b qubit))
(define e (entanglion))
(H e.a)
(CNOT e.a e.b)
\end{lstlisting}
Recursively composed higher-order datatypes, in combination with recursively composed higher-order functions, are the foundation for creating a more abstract programming language.

\subsection{Other Features}

Lastly, Qurry provides rudimentary classical constructs meant to complement the hybrid model used by QUIL:
\begin{itemize}
\item cond
\item do
\end{itemize}

\section{Comparisons}

\includegraphics{../examples/diagrams/simu.pdf}
\includegraphics{../examples/diagrams/cnu.pdf}
% Draw examples from Nielsen and Chuang, and the general literature.
%
% Simply change (if (condition) (branch) (branch))
% into
% ``
% Condition
% Measure []
% Jump label
% branch
% branch etc
% ``
%
% Curry also supports variable naming, blocks of qubits, classical callbacks, imports, \dots
% Curry can be called as a library and operated from python
%
% There are some easy targets for providing abstraction: common things like functions, conditionals, loops, integer data types, and so on.
% However, let's jump into the quantum/probabilistic side of things.
%
% Models will fundamentally be composed of, generally, wave functions: Superpositions over all possible states.
% First, consider modeling a classical distribution.
% We can successfully produce sampleable classical distributions on a quantum computer.
% For instance, consider the following model from the Church programming language tutorial.
% This code is specifying a probabilistic grammar for simple sentences about cooking.
% % ```scheme % (define (transition nonterminal) % (case nonterminal % (('D) (multinomial(list (list (terminal 'the)) % (list (terminal 'a))) % (list (/ 1 2) (/ 1 2)))) % (('N) (multinomial (list (list (terminal 'chef)) % (list (terminal 'soup)) % (list (terminal 'omelet))) % (list (/ 1 3) (/ 1 3) (/ 1 3)))) % (('V) (multinomial (list (list (terminal 'cooks)) % (list (terminal 'works))) % (list (/ 1 2) (/ 1 2)))) % (('A) (multinomial (list (list (terminal 'diligently))) % (list (/ 1 1)))) % (('AP) (multinomial (list (list 'A)) % (list (/ 1 1)))) % (('NP) (multinomial (list (list 'D 'N)) % (list (/ 1 1)))) % (('VP) (multinomial (list (list 'V 'AP) % (list 'V 'NP)) % (list (/ 1 2) (/ 1 2)))) % (('S) (multinomial (list (list 'NP 'VP)) % (list (/ 1 1)))) % (else 'error))) % ``` % % More succinctly, this is specifying the following (toy) language model: % ```scheme % D(eterminer): (uniform 'the' 'a') % N(oun): (uniform 'chef' 'omelet' 'soup') % V(erb): (uniform 'cooks' 'works') % A(dverb): (uniform 'diligently') % AP(Adverb Phrase): (uniform A) % NP(Noun Phrase): (D, N) % VP(Verb Phrase): (uniform (V AP) (V NP)) % S(entence): (NP, VP) % ``` % % To make things even simpler, let's first just consider modeling a randomly sampled Noun-Phrase (which is the first part in sampling a full toy sentence). % The noun-phrase is a concatenation of a determiner and a noun. In our toy example, we have two determiners and three nouns, both uniformly sampled, which makes for a total of six options with equal probability. % So, we'll need three qubits to model this. Curry has builtins for these distributions. % ```scheme % (def determiner-qubit 0) % (def noun-qubits 1 2) % (bernoulli 0.5 determiner-qubit) % (multinomial 0.33 0.33 0.34 noun-qubits) % ``` % % The output is the following (using a local simulator): % ``` % grid {curry}: ./compile examples/test.lisp % % [['def', 'determiner-qubit', '0'], % ['def', 'noun-qubits', '1', '2'], % ['bernoulli', '0.5', 'determiner-qubit'], % ['multinomial', '0.33', '0.33', '0.34', 'noun-qubits']] % % {'000': 0.17, '001': 0.16, '010': 0.17, '011': 0.17, '100': 0.16, '101': 0.16} % % 277.4035930633545 ms simulated runtime % ``` % % In our output, the rightmost bit is representing the determiner, and the other two bits are representing the noun. % So the output is: % ```python3 % {'the chef' : 1/6, 'a chef' : 1/6, 'the omelet' : 1/6, 'a omelet' : 1/6, 'the soup' : 1/6, 'a soup' : 1/6} % ``` % Now, let's consider the rest of the model. % When we sample a Verb Phrase, it contains recursive elements. % So, it will branch (with equal probabilities) to either (V AP) or (V NP). % Before diving in, let's look at branching in quantum computers. % % Consider preparing a bell state: % ``` % (h 0) % (cnot 0 1) % ``` % And distinguish this from the following, which will produce the same classical measurements, but no entanglement (because the state of the first qubit is known before producing the state in the second qubit). % In this case, the state 01 is possible, because the first qubit may be measured in the 1 state, and the second qubit is unprepared, and in the zero state. % ``` % (bernoulli 0.5 0) % (measure 0 0) % (if 0 (x 1) (nop)) % ``` % % So, when creating a probabilistic model which branches, we distinguish between these two types of branching, because only one truly creates an entangled state. % However, this makes representing information slightly more difficult, because we will not know which bits correspond to which states (unless we encode this, which we will). 
\section{Software Ecosystem}

In addition to being a prototype quantum programming language, Qurry defines a software stack surrounding the language, which is intended to make development more pleasant. For instance, this software stack makes it exceptionally easy to add new language features and libraries to Qurry. This allows one to rapidly test new ideas in quantum programming and let the language evolve on its own, as opposed to architecting a top-down ``perfect'' language.

\section{Standard Library}

Qurry contains mechanisms which enable easy inclusion of Qurry code in the form of libraries. As an example, Qurry's standard library is implemented in this fashion. Explain how the statistics library can be easily implemented. At time of writing, Qurry contains the following constructs:
\begin{itemize}
\item gaussian
\item bernoulli
\item multinomial
\item uniform
\end{itemize}
As in a classical probabilistic programming language, these enable the creation of classical probabilistic states, which can then be used in quantum programs. For instance, it is possible to create a multi-dimensional Gaussian distribution, and then entangle an auxiliary qubit with the state of the Gaussian distribution.

\section{Extension}

Making additions to Qurry is particularly easy. For instance, the \emph{map} feature is defined using the following Python code:
\begin{lstlisting}[language=Python]
from ..compiler.utils import named_uuid

def map(operator, blockname, kernel=None):
    '''
    Apply a single-qubit operator to every qubit in a block
    (map H blocka)
    '''
    try:
        block = kernel.definitions[blockname]
    except KeyError:
        raise ValueError('The block {} is not defined'.format(blockname))
    return '\n'.join('{} {}'.format(operator, i)
                     for i in range(block.start, block.end + 1))
\end{lstlisting}

\section{Statistical Libraries}

Since quantum computers are simply special probabilistic computers, Qurry also attempts to provide a classical statistical library for high-level modeling. This is particularly useful in the same way that a classical probabilistic programming language is, namely for modeling anything statistical, and especially for Bayesian machine learning. For instance, R. Tucci and H. Dekant's group has shown uses for this through their software, Bayesforge [TODO: Cite]. Qurry includes simple statistical packages for creating states, but no inference engine. [However, Qurry might allow one to interface with Bayesforge.]

\section{Conclusion}

In the creation of Qurry and its corresponding framework, it is hoped that this will aid the development of quantum algorithms, as algorithm designers will have a new, richer, more abstract vocabulary with which to express themselves. To recap, this goal is approached in the following four ways. By introducing lightweight abstractions from the C++ school of thought, efficient and transparent programming interfaces are created. Through specialized libraries, Qurry can remain a general language while still offering powerful sub-frameworks for specific tasks. With functional programming paradigms, Qurry can move towards higher levels of abstraction as the semantics of quantum programming become better understood. Lastly, by creating a rapid prototyping framework, new language features can be developed in a bottom-up style, which will allow Qurry to be developed naturally, instead of artificially.

\section{Appendix one}
Appendix content

\section*{Acknowledgment}
The author would like to thank Dr. Will Zeng of Rigetti Computing, an organizer of the Unitary Fund, Dr.
Ajay Bansal of Arizona State University, PLoS and other donors to the Unitary Fund, and ASU's FURI program.

\newpage
\bibliographystyle{unsrt}
\bibliography{sources}
% % \section{Sectioning and equations} % Sections, subsections, subsubsections, and paragraphs should be typeset with the standard LaTeX commands. % You can use the standard commands for equations. % \begin{align} % \label{emc} % E &= m\,c^2\\ % a^2 + b^2 &= c^2\\ % H\,|\psi\rangle &= E\,|\psi\rangle\\ % (\openone \otimes A)\,(B \otimes \openone) &= A \otimes B % \end{align} % For multi-line equations \texttt{align} is \href{http://tex.stackexchange.com/questions/196/eqnarray-vs-align}{preferable} over \texttt{eqnarray}. % Please refrain from using the latter. % For complex equations you may want to consider using the \texttt{IEEEeqnarray} environment from the \texttt{IEEEtrantools} package. % Whether you prefer to refer to equations as Eq.~\eqref{emc}, Equation~\ref{emc}, or just \eqref{emc} is up to you, but please be consistent and use the \texttt{\textbackslash{}eqref\{\dots\}} command instead of writing \texttt{(\textbackslash{}ref\{\dots\})}. % As a courtesy for your readers and referees, please suppress equation numbers only if there is a specific reason to do so, to not make it unnecessarily difficult to refer to individual results and steps in derivations. % % \paragraph{Paragraphs} % The paragraph is the smallest unit of sectioning. % Feel free to end the paragraph title with a full stop if you find this appropriate. % % \subsection{References and footnotes} % \label{sec:subsec1} % Footnotes\footnote{Only use footnotes when appropriate.} appear in the bottom of the page. % Please do not mix them with your references. % % Citations to other works should appear in the References section at the end of the work. % % \begin{theorem}[DOI links are required] % Important: As Quantum is a member of Crossref, all references to works that have a DOI must be hyperlinked according to the DOI. Those links must start with \texttt{https://doi.org/} (preferred), or \texttt{http://dx.doi.org/}. Direct links to the website of the publisher are not sufficient. % \end{theorem} % % This can be achieved in several ways, depending on how you are formatting your bibliography. % Suppose the DOI of an article \cite{examplecitation} that you want to cite is \texttt{10.22331/idonotexist}. % If you are formatting your bibliography manually, you can cite this work using the following in your \texttt{thebibliography} environment: % % \begin{theorem}[One citation per bibitem] % Important: If you are formatting your bibliography manually, please do not group multiple citations into one \texttt{\textbackslash{}bibitem}. % Having to search through multiple references to find the cited result makes your work less accessible for authors and grouping references can screw up our automatic extraction of citations. % \end{theorem} % % We encourage the use of BibTeX to generate your bibliography from the BibTeX meta-data provided by publishers. % For DOI linking to work, the BibTeX file must contain the \texttt{doi} field as for example in: % in the preamble of your document and then use the \texttt{plainnat} citation style by including your BibTeX bibliography \texttt{mybibliography.bib} where you want the bibliography to appear as follows: % You then have to upload the .bbl file along with the other source files when submitting to the arXiv. % Due to incompatibilities between different BibLaTeX versions we unfortunately cannot recommend this option \cite{biblatexsubmittingtothearxiv}. 
% % The quantumarticle class automatically detects that the \texttt{biblatex} package was loaded, sets the default option \texttt{doi=true} to include the DOI in the bibliography, and declares a suitable field format to make it a hyperlink. % Due to issues with \texttt{biber} we recommend to use the \texttt{bibtex} backend of \texttt{biblatex}. % % More information on how to get DOI links in your document can be found on StackExchange \cite{howtogetdoilinksinbibliography,automaticallyaddingdoifieldstoahandmadebibliography}. % Feel free to change the appearance of citations in any way you like by using a different \texttt{bibliographystyle} or via the advanced mechanisms provided by BibLaTeX. % The only two requirements are that citations must uniquely identify the cited work and that they must contain a DOI hyperlink whenever possible. % % \begin{theorem}[Use \texttt{\textbackslash pdfoutput=1}] % In order to get correct line breaks within hyperlinks and to make sure the arXiv produces a PDF as output, please add the line % within the first 5 lines of your main LaTeX file, as suggested by the arXiv \cite{arxivpdfoutput}. % \end{theorem} % % \section{Plots} % \label{sec:plots} % Quantum provides a \href{https://jupyter.org/}{Jupyter notebook} based on the widely used \href{https://matplotlib.org/}{matplotlib} library that greatly simplifies the creation of plots that integrate seamlessly with the \texttt{quantumarticle} document class. This is intended as a service to the authors, is \textit{not} mandatory, and currently in beta stage. You can download the \href{https://raw.githubusercontent.com/quantum-journal/quantum-journal/master/quantum-plots.ipynb}{quantum-plots.ipynb} notebook and accompanying \href{https://raw.githubusercontent.com/quantum-journal/quantum-journal/master/quantum-plots.mplstyle}{quantum-plots.mplstyle} file from the \href{https://github.com/quantum-journal/quantum-journal}{quantumarticle GitHub repository}. You only need to specify the font size and paper format that were passed as options to \texttt{quantumarticle} to get plots with fitting font sizes and dimensions. % We strongly encourage authors to use the vector based PDF format for plots. % % \begin{theorem}[Be mindful of the colorblind] % About 4\% of the worlds population are affected by some form of color vision deficiency or color blindness. % Please make sure that your plots and figures can still be understood when printed in gray scale and avoid the simultaneous use of red and green, as the inability to distinguish these two colors is the most widespread form of color vision deficiency. % \end{theorem} % % \section{Summary section} % Longer articles should include a section that, early on, explains the main results, their limitations, and assumptions. % This section can be used to, for example, present the main theorem, or provide a summary of the results for a wider audience. % % \section{Extra packages} % Quantum encourages you to load the following extra packages: % If you do not load the \texttt{hyperref} package, quantumarticle automatically loads it for you. % Packages that change font settings, such as \texttt{times} or \texttt{helvet} should be avoided. % % \section{Wide equations} % Very wide equations can be shown expanding over both columns using the \texttt{widetext} environment. % In \texttt{onecolumn} mode, the \texttt{widetext} environment has no effect. 
% \begin{widetext} % \begin{equation} % |\mathrm{AME}(n=6,q=5)\rangle=\sum_{i,j,k=0}^4 |i,j,k,i+j+k,i+2j+3k,i+3j+4k\rangle % \end{equation} % \end{widetext} % To enable this feature in \texttt{twocolumn} mode, \texttt{quantumarticle} relies on the package \texttt{ltxgrid}. % Unfortunately this package has a bug that leads to a sub-optimal placement of extremely long footnotes. % % \section{Title information} % \label{sec:title-information} % You can provide information on authors and affiliations in the common format also used by \texttt{revtex}: % \title{Title} % \author{Author 1} % \author{Author 2} % \affiliation{Affiliation 1} % \author{Author 3} % \affiliation{Affiliation 2} % \author{Author 4} % \affiliation{Affiliation 1} % \affiliation{Affiliation 3} % In this example affiliation 1 will be associated with authors 1, 2, and 4, affiliation 2 with author 3 and affiliation 3 with author 4. % Repeated affiliations are automatically recognized and typeset in \texttt{superscriptaddress} style. % Alternatively you can use a format similar to that of the \texttt{authblk} package and the \texttt{elsarticle} document class to specify the same affiliation relations as follows: % \title{Title} % \author[1]{Author 1} % \author[1]{Author 2} % \author[2]{Author 3} % \author[1,3]{Author 4} % \affil[1]{Affiliation 1} % \affil[2]{Affiliation 1} % \affil[3]{Affiliation 1} % % \section{LyX layout} % \label{sec:lyx-layout} % % The quantumarticle document class comes bundled with a \href{https://raw.githubusercontent.com/quantum-journal/quantum-journal/master/quantum-lyx-template.lyx}{LyX layout} that allows to typeset manuscripts with the LyX document processor instead of directly writing LaTeX code. Please be aware that this is a beta feature that might not receive the same level of support as the quantumarticle document class itself. % % \section{Version} % \label{sec:version} % This is quantumarticle version v\quantumarticleversion. % % \bibliographystyle{plain} % \begin{thebibliography}{9} % \bibitem{examplecitation} % Name Surname, % \href{https://doi.org/10.22331/ % idonotexist}{Quantum % \textbf{123}, 123456 (1916).} % % \bibitem{biblatexsubmittingtothearxiv} % StackExchange discussion on \href{http://tex.stackexchange.com/questions/26990/biblatex-submitting-to-the-arxiv}{``Biblatex: submitting to the arXiv'' (2017-01-10)} % % \bibitem{arxivpdfoutput} % Help article published by the arXiv on \href{https://arxiv.org/help/submit_tex}{``Considerations for TeX Submissions'' (2017-01-10)} % % \bibitem{howtogetdoilinksinbibliography} % StackExchange discussion on \href{http://tex.stackexchange.com/questions/3802/how-to-get-doi-links-in-bibliography}{``How to get DOI links in bibliography'' (2016-11-18)} % % \bibitem{automaticallyaddingdoifieldstoahandmadebibliography} % StackExchange discussion on \href{http://tex.stackexchange.com/questions/6810/automatically-adding-doi-fields-to-a-hand-made-bibliography}{``Automatically adding DOI fields to a hand-made bibliography'' (2016-11-18)} % \end{thebibliography} % % % % \onecolumn\newpage % \appendix % % \section{First section of the appendix} % Quantum allows the usage of appendices. % % \subsection{Subsection} % Ideally, the command \texttt{\textbackslash{}appendix} should be put before the appendices to get appropriate section numbering. % The appendices are then numbered alphabetically, with numeric (sub)subsection numbering. % Equations continue to be numbered sequentially. 
\end{document}
{ "alphanum_fraction": 0.7483367278, "avg_line_length": 62.3272214386, "ext": "tex", "hexsha": "c5883a20ed98afc9453447b599080035e1359e70", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2019-12-26T18:01:51.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-28T01:27:49.000Z", "max_forks_repo_head_hexsha": "9004a396ec2e351aa143a10a53156649a6747343", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "LSaldyt/Qurry", "max_forks_repo_path": "paper/paper.tex", "max_issues_count": 33, "max_issues_repo_head_hexsha": "9004a396ec2e351aa143a10a53156649a6747343", "max_issues_repo_issues_event_max_datetime": "2019-09-23T23:44:37.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-09T09:46:44.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "LSaldyt/Qurry", "max_issues_repo_path": "paper/paper.tex", "max_line_length": 932, "max_stars_count": 11, "max_stars_repo_head_hexsha": "9004a396ec2e351aa143a10a53156649a6747343", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "LSaldyt/curry", "max_stars_repo_path": "paper/paper.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-08T03:04:03.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-28T17:08:23.000Z", "num_tokens": 10690, "size": 44190 }
\section{Modification API Reference}
\label{sec-modification-api}

This section describes the modification interface of PatchAPI. While PatchAPI's main goal is to allow users to insert new code into a program, a secondary goal is to allow safe modification of the original program code as well. To modify the binary, a user interacts with the \code{PatchModifier} class to manipulate a PatchAPI CFG. CFG modifications are then instantiated as new code by PatchAPI. For example, if PatchAPI is being used as part of Dyninst, executing a \code{finalizeInsertionSet} will generate modified code.

The three key benefits of the PatchAPI modification interface are abstraction, safety, and interactivity. We use the CFG as a mechanism for transforming binaries in a platform-independent way that requires no instruction-level knowledge by the user. These transformations are limited to ensure that the CFG can always be used to instantiate code, and thus the user can avoid unintended side effects of modification. Finally, modifications to the CFG are represented in that CFG, allowing users to iteratively combine multiple CFG transformations to achieve their goals.

Since these operations modify the CFG, they may invalidate any analyses the user has performed over the CFG. We suggest that users take advantage of the callback interface described in Section \ref{sec-3.2.7} to update any such analysis information.

The PatchAPI modification capabilities are currently in beta; if you experience any problems or bugs, please contact \code{[email protected]}.

Many of these methods return a boolean type; true indicates a successful operation, and false indicates a failure. For methods that return a pointer, a \code{NULL} return value indicates a failure.

\begin{apient}
bool redirect(PatchEdge *edge, PatchBlock *target);
\end{apient}
\apidesc{Redirects the edge specified by \code{edge} to a new target specified by \code{target}. In the current implementation, the edge may not be indirect.}

\begin{apient}
PatchBlock *split(PatchBlock *orig, Address addr, bool trust = false, Address newlast = (Address) -1);
\end{apient}
\apidesc{Splits the block specified by \code{orig}, creating a new block starting at \code{addr}. If \code{trust} is true, we do not verify that \code{addr} is a valid instruction address; this may be useful to reduce overhead. If \code{newlast} is not -1, we use it as the last instruction address of the first block. All Points are updated to belong to the appropriate block. The second block is returned.}

\begin{apient}
bool remove(std::vector<PatchBlock *> &blocks, bool force = true);
\end{apient}
\apidesc{Removes the blocks specified by \code{blocks} from the CFG. If \code{force} is true, blocks are removed even if they have incoming edges; this may leave the CFG in an unsafe state but may be useful for reducing overhead.}

\begin{apient}
bool remove(PatchFunction *func);
\end{apient}
\apidesc{Removes \code{func} and all of its non-shared blocks from the CFG; any shared blocks remain.}

\begin{apient}
class InsertedCode {
  typedef boost::shared_ptr<...> Ptr;
  PatchBlock *entry();
  const std::vector<PatchEdge *> &exits();
  const std::set<PatchBlock *> &blocks();
}
InsertedCode::Ptr insert(PatchObject *obj, SnippetPtr snip, Point *point);
InsertedCode::Ptr insert(PatchObject *obj, void *start, unsigned size);
\end{apient}
\apidesc{Methods for inserting new code into a CFG.
The \code{InsertedCode} structure represents a CFG subgraph generated by inserting new code; the graph has a single entry point and multiple exits, represented by edges to the sink node. The first \code{insert} call takes a PatchAPI Snippet structure and a Point that is used to generate that Snippet; the point is only passed through to the snippet code generator and thus may be \code{NULL} if the snippet does not use Point information. The second \code{insert} call takes a raw code buffer.}
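As an illustration of how these calls compose, the following sketch splits a block at a given address and redirects an incoming edge to the newly created second half. This is a sketch under stated assumptions, not a definitive example: it assumes the methods are exposed as static members of \code{PatchModifier}, and the helper name and setup (obtaining \code{edge}, \code{orig}, and \code{addr}) are hypothetical.

\begin{apient}
#include "PatchModifier.h"
#include "PatchCFG.h"

using namespace Dyninst;
using namespace PatchAPI;

// Hypothetical helper: make 'edge' enter 'orig' at 'addr' by splitting
// 'orig' and redirecting the edge to the block that starts at 'addr'.
bool redirectIntoBlock(PatchEdge *edge, PatchBlock *orig, Address addr) {
    // Split 'orig' at 'addr'; the returned block starts at 'addr'.
    PatchBlock *second = PatchModifier::split(orig, addr);
    if (!second) return false;   // NULL indicates the split failed

    // Redirect the edge to the new block; recall that indirect
    // edges are not supported by redirect().
    return PatchModifier::redirect(edge, second);
}
\end{apient}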
{ "alphanum_fraction": 0.7694991278, "avg_line_length": 41.8020833333, "ext": "tex", "hexsha": "c5efea62c7496a880bffc2d0135dc171f530bfdb", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2021-10-14T10:17:39.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-04T03:44:22.000Z", "max_forks_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Vtech181/Path_Armor", "max_forks_repo_path": "Dyninst-8.2.1/patchAPI/doc/section/5_api_modification.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Vtech181/Path_Armor", "max_issues_repo_path": "Dyninst-8.2.1/patchAPI/doc/section/5_api_modification.tex", "max_line_length": 74, "max_stars_count": 47, "max_stars_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Vtech181/Path_Armor", "max_stars_repo_path": "Dyninst-8.2.1/patchAPI/doc/section/5_api_modification.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T11:23:59.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-14T23:12:32.000Z", "num_tokens": 975, "size": 4013 }
\section{Built-in Module \sectcode{termios}} \bimodindex{termios} \indexii{Posix}{I/O control} \indexii{tty}{I/O control} \renewcommand{\indexsubitem}{(in module termios)} This module provides an interface to the Posix calls for tty I/O control. For a complete description of these calls, see the Posix or \UNIX{} manual pages. It is only available for those \UNIX{} versions that support Posix \code{termios} style tty I/O control (and then only if configured at installation time). All functions in this module take a file descriptor \var{fd} as their first argument. This must be an integer file descriptor, such as returned by \code{sys.stdin.fileno()}. This module should be used in conjunction with the \code{TERMIOS} module, which defines the relevant symbolic constants (see the next section). The module defines the following functions: \begin{funcdesc}{tcgetattr}{fd} Return a list containing the tty attributes for file descriptor \var{fd}, as follows: \code{[\var{iflag}, \var{oflag}, \var{cflag}, \var{lflag}, \var{ispeed}, \var{ospeed}, \var{cc}]} where \var{cc} is a list of the tty special characters (each a string of length 1, except the items with indices \code{VMIN} and \code{VTIME}, which are integers when these fields are defined). The interpretation of the flags and the speeds as well as the indexing in the \var{cc} array must be done using the symbolic constants defined in the \code{TERMIOS} module. \end{funcdesc} \begin{funcdesc}{tcsetattr}{fd\, when\, attributes} Set the tty attributes for file descriptor \var{fd} from the \var{attributes}, which is a list like the one returned by \code{tcgetattr()}. The \var{when} argument determines when the attributes are changed: \code{TERMIOS.TCSANOW} to change immediately, \code{TERMIOS.TCSADRAIN} to change after transmitting all queued output, or \code{TERMIOS.TCSAFLUSH} to change after transmitting all queued output and discarding all queued input. \end{funcdesc} \begin{funcdesc}{tcsendbreak}{fd\, duration} Send a break on file descriptor \var{fd}. A zero \var{duration} sends a break for 0.25--0.5 seconds; a nonzero \var{duration} has a system dependent meaning. \end{funcdesc} \begin{funcdesc}{tcdrain}{fd} Wait until all output written to file descriptor \var{fd} has been transmitted. \end{funcdesc} \begin{funcdesc}{tcflush}{fd\, queue} Discard queued data on file descriptor \var{fd}. The \var{queue} selector specifies which queue: \code{TERMIOS.TCIFLUSH} for the input queue, \code{TERMIOS.TCOFLUSH} for the output queue, or \code{TERMIOS.TCIOFLUSH} for both queues. \end{funcdesc} \begin{funcdesc}{tcflow}{fd\, action} Suspend or resume input or output on file descriptor \var{fd}. The \var{action} argument can be \code{TERMIOS.TCOOFF} to suspend output, \code{TERMIOS.TCOON} to restart output, \code{TERMIOS.TCIOFF} to suspend input, or \code{TERMIOS.TCION} to restart input. \end{funcdesc} \subsection{Example} \nodename{termios Example} Here's a function that prompts for a password with echoing turned off. 
Note the technique using a separate \code{termios.tcgetattr()} call and a \code{try {\ldots} finally} statement to ensure that the old tty attributes are restored exactly no matter what happens: \begin{verbatim} def getpass(prompt = "Password: "): import termios, TERMIOS, sys fd = sys.stdin.fileno() old = termios.tcgetattr(fd) new = termios.tcgetattr(fd) new[3] = new[3] & ~TERMIOS.ECHO # lflags try: termios.tcsetattr(fd, TERMIOS.TCSADRAIN, new) passwd = raw_input(prompt) finally: termios.tcsetattr(fd, TERMIOS.TCSADRAIN, old) return passwd \end{verbatim} \section{Standard Module \sectcode{TERMIOS}} \stmodindex{TERMIOS} \indexii{Posix}{I/O control} \indexii{tty}{I/O control} \renewcommand{\indexsubitem}{(in module TERMIOS)} This module defines the symbolic constants required to use the \code{termios} module (see the previous section). See the Posix or \UNIX{} manual pages (or the source) for a list of those constants. Note: this module resides in a system-dependent subdirectory of the Python library directory. You may have to generate it for your particular system using the script \file{Tools/scripts/h2py.py}.
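Here's a small additional sketch (illustrative only, not part of the module's interface description) showing the two modules used together to discard type-ahead before a prompt and to suspend and resume output; it assumes standard input is connected to a tty:

\begin{verbatim}
import termios, TERMIOS, sys

fd = sys.stdin.fileno()
# Throw away any input typed ahead of the prompt
termios.tcflush(fd, TERMIOS.TCIFLUSH)
# Suspend output, then restart it
termios.tcflow(fd, TERMIOS.TCOOFF)
termios.tcflow(fd, TERMIOS.TCON if 0 else TERMIOS.TCION) if 0 else termios.tcflow(fd, TERMIOS.TCOON)
answer = raw_input("Continue? ")
\end{verbatim}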
{ "alphanum_fraction": 0.7548509229, "avg_line_length": 38.7706422018, "ext": "tex", "hexsha": "e55aab4119fe15b0d092133febb7ebfc5be2b7d5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_forks_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_forks_repo_name": "AtjonTV/Python-1.4", "max_forks_repo_path": "Doc/libtermios.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_issues_repo_name": "AtjonTV/Python-1.4", "max_issues_repo_path": "Doc/libtermios.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_stars_repo_licenses": [ "Unlicense", "TCL", "DOC", "AAL", "X11" ], "max_stars_repo_name": "AtjonTV/Python-1.4", "max_stars_repo_path": "Doc/libtermios.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1191, "size": 4226 }
\label{problem_definition}
Clustering is the task of gathering items in such a way that elements belonging to the same group (the \emph{cluster}) are more similar to each other than to those assigned to the other groups.\\
More formally, the input consists of:
\begin{itemize}
  \item $X = \{x_0, \dots ,x_n\}$, the initial set of elements.
  \item $d: X \times X \to \mathbb{R}$, a \emph{metric} measuring the distance (dissimilarity) between two elements.
\end{itemize}
The final goal is to find the cluster configuration
\begin{equation*}
  C = \left\{ c_1, \dots , c_m \right\} \mid \bigcup_{c \in C} c = X
\end{equation*}
partitioning $X$ into $m$ clusters, minimizing the intra-cluster distance (the dual problem of maximizing the inter-cluster distance):
\begin{equation}
  \underset{C}{\mathrm{argmin}} \sum_{c \in C} \sum_{\substack{x_i, x_j \in c \\ i < j}} d(x_i,x_j)
\end{equation}

\subsection*{Challenges}
The concept of clustering is simple and powerful; moreover, its versatility and generality are at once its greatest strength and the source of many of its difficulties.

%% Metrics identification
Since the only strict requirement is the definition of a metric over the domain of $X$, clustering is applied to a wide variety of problems. Clearly, each domain offers different optimization opportunities and particular challenges. In particular, the choice of the metric heavily influences the quality of the final outcome. As a result, even the medical~\cite{siless2013comparison}, mechanical engineering~\cite{wilding2011clustering} and mobile networks~\cite{cheng2009stability} literatures feature studies that address this particular challenge by suggesting highly specialized distance functions.

%% Inter/Intra cluster distance measure
Once the proper metric is identified, the next major influencing factors are the mathematical definitions of ``intra-cluster'' and ``inter-cluster'' distances. These vary considerably across implementations, leading to completely different clustering configurations. For example, three methods are widely used when performing agglomerative clustering to define the distance between two clusters: the average, the minimum, and the maximum distance. The average approach uses the barycenters, while the minimum (maximum) relies upon the minimal (maximal) distance between any two points belonging to different clusters.

\subsection*{Choosing the $k$ parameter}
The first main issue of the k-means algorithm described in Section \ref{related} is the choice of the number of clusters the dataset has to be divided into. The original k-means does not address this problem at all, so some heuristic has to be applied. One possibility is to have deep knowledge of the structure underlying the dataset, and thus an idea of the target number of groups. On the other hand, especially in exploratory data analysis, this value cannot be known in advance. Hence, the most common solution is to run the algorithm with increasing values of $k$ and to keep the value that produces the best-quality clusters. Choosing the right value for this parameter is crucial, since a wrong one may instruct the algorithm to collapse entities that are actually very far from each other (or vice versa).

\subsection*{Positioning the initial centroids}
Assuming an optimal value for $k$ has been found, the other big problem is how to position the $k$ initial centroids. Given enough time, the k-means algorithm will always converge; however, it may converge to a local minimum, and this is highly dependent on the initialization of the centroids. This bootstrapping phase can be addressed in various ways.
For example, the \emph{scikit-learn}\footnote{http://scikit-learn.org} Python library by default uses an approach, known as k-means++, that positions the initial centroids so that they are (generally) distant from each other. This procedure provides provably better results than random initialization~\cite{arthur2007k}. Using a proper and efficient bootstrapping heuristic is very important, since misplacing the initial centroids may prevent the algorithm from finding the real clusters underlying the data.
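As a minimal sketch of the two choices discussed above (the value of $k$ and the centroid initialization), one can scan increasing values of $k$ with k-means++ initialization and keep the value with the best cluster quality. The snippet below uses scikit-learn's \texttt{KMeans} and the silhouette score; the random dataset is a placeholder for the data at hand:

\begin{verbatim}
# Sketch: pick k by scanning candidates and scoring each clustering.
# X is assumed to be an (n_samples, n_features) numeric array.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.rand(200, 2)          # placeholder dataset

best_k, best_score = None, -1.0
for k in range(2, 11):
    model = KMeans(n_clusters=k, init="k-means++", n_init=10,
                   random_state=0).fit(X)
    score = silhouette_score(X, model.labels_)  # higher is better
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)
\end{verbatim}

The silhouette score is only one of many cluster-quality criteria; any of the intra/inter-cluster measures discussed above could be substituted.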
{ "alphanum_fraction": 0.7886038481, "avg_line_length": 48.843373494, "ext": "tex", "hexsha": "a788832b9bf3d088a1e480a571c0199c6d20e188", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7ca4654fd72b2b279d77f803d35db9196b4a0a82", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "GianlucaBortoli/enhanced-clustering", "max_forks_repo_path": "report/sections/problem.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7ca4654fd72b2b279d77f803d35db9196b4a0a82", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "GianlucaBortoli/enhanced-clustering", "max_issues_repo_path": "report/sections/problem.tex", "max_line_length": 95, "max_stars_count": null, "max_stars_repo_head_hexsha": "7ca4654fd72b2b279d77f803d35db9196b4a0a82", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "GianlucaBortoli/enhanced-clustering", "max_stars_repo_path": "report/sections/problem.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 928, "size": 4054 }
\section{Conclusion}
In this study we used the \gls{esom} called \gls{temoa} to find the optimal size of a nuclear reactor for the \gls{uiuc} microgrid.
We first showed that \gls{temoa} gave realistic results that matched predictions from both \gls{icap} and the \gls{uiuc} Master Plan \cite{isee_illinois_2015, affiliated_engineers_inc_utilities_2015}.
Then we considered three scenarios that introduced nuclear capacity to \gls{uiuc}.
The first two scenarios did not constrain the size of the nuclear reactor; they satisfied the carbon constraints, and exceeded the steam and electricity demand requirements, by building more nuclear capacity than required.
The \gls{uiuc} Master Plan found that the goals outlined in \gls{icap} could not be achieved with \gls{uiuc}'s current energy mix, which we corroborated in our business-as-usual scenario.
We showed in Scenario 3 that the \gls{icap} goals could be met for the next decade by adding a modest capacity for nuclear energy production.
The assumptions of the model used in this study include contributions from renewables, but exclude requirements of zero growth, improvements in building efficiency, and other offsets.
This gives \gls{uiuc} the flexibility to continue growing while reducing carbon emissions in other areas.
The breakdown of carbon offsets shown in Figure \ref{fig:icap_emissions} is improved by adding nuclear power to the energy mix.
Finally, importing electricity drove the campus carbon emissions in every scenario we examined.
If \gls{uiuc} is serious about decarbonizing by 2050, the University must stop buying electricity from MISO, unless energy production throughout MISO also becomes carbon free.
Besides producing emissions-free electricity and steam, nuclear power can benefit campuses, like \gls{uiuc}, in many ways.
Future work will explore how nuclear power can help decarbonize campus transportation, examine the role of energy storage, and peer further into the future.
{ "alphanum_fraction": 0.813740458, "avg_line_length": 63.3870967742, "ext": "tex", "hexsha": "2ae3632e510c052c053673d82c855af4cb76ed91", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-01-01T07:54:44.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-01T23:09:27.000Z", "max_forks_repo_head_hexsha": "d63ee7711c7f5e4bd88b89dabd4140c562ac32e7", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "yardasol/pride", "max_forks_repo_path": "publications/papers/optimal-sizing-paper/conclusion.tex", "max_issues_count": 110, "max_issues_repo_head_hexsha": "d63ee7711c7f5e4bd88b89dabd4140c562ac32e7", "max_issues_repo_issues_event_max_datetime": "2021-03-24T20:44:53.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-03T17:26:50.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "yardasol/pride", "max_issues_repo_path": "publications/papers/optimal-sizing-paper/conclusion.tex", "max_line_length": 98, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d63ee7711c7f5e4bd88b89dabd4140c562ac32e7", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "yardasol/pride", "max_stars_repo_path": "publications/papers/optimal-sizing-paper/conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-25T03:17:58.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-17T22:38:04.000Z", "num_tokens": 455, "size": 1965 }
\chapter{Advanced Simplifying Methods}\label{ch07}

\section{Quine-McCluskey Simplification Method}

\subsection{Introduction}

\marginpar{This method was developed by W.V. Quine and Edward J. McCluskey and is sometimes called the method of prime implicants.}
When a Boolean equation involves five or more variables, it becomes very difficult to solve using standard algebra techniques or Karnaugh maps; however, the Quine-McCluskey algorithm can be used to solve these types of Boolean equations. The Quine-McCluskey method is based upon a simple Boolean algebra principle: if two expressions differ by only a single variable and its complement, then those two expressions can be combined:

\begin{align}
\label{ASM:eq:quine-mccluskey_combining_complements}
ABC+ABC' &= AB
\end{align}

The Quine-McCluskey method looks for expressions that differ by only a single variable and combines them. Then it looks at the combined expressions to find those that differ by a single variable and combines them. The process continues until there are no expressions remaining to be combined.

\subsection{Example One}
\label{ASM:subsec:quine-mccluskey_ex_1}

\subsubsection{Step 1: Create the Implicants}
\label{ASM:subsubsec:quine-mccluskey_ex_1_step_1}

Equation \ref{ASM:eq:qm_ex_1} is the Sigma representation of a Boolean equation.

\begin{align}
\label{ASM:eq:qm_ex_1}
f(A,B,C,D)=\sum(0,1,2,5,6,7,9,10,11,14)
\end{align}

Truth Table \ref{ASM:tab:qm_ex_1_minterm_table} shows the input variables for the \emph{True} minterm values.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccccc}
\rowcolor{black!75}
\head{Minterm} & \head{A} & \head{B} & \head{C} & \head{D} \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 \\
2 & 0 & 0 & 1 & 0 \\
5 & 0 & 1 & 0 & 1 \\
6 & 0 & 1 & 1 & 0 \\
7 & 0 & 1 & 1 & 1 \\
9 & 1 & 0 & 0 & 1 \\
10 & 1 & 0 & 1 & 0 \\
11 & 1 & 0 & 1 & 1 \\
14 & 1 & 1 & 1 & 0
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 1: Minterm Table}
\label{ASM:tab:qm_ex_1_minterm_table}
\end{table}

To simplify this equation, the minterms that evaluate to \emph{True} (as listed above) are first placed in a minterm table so that they form sections that are easy to combine. Each section contains only the minterms that have the same number of ones. Thus, the first section contains all minterms with zero ones, the second section contains the minterms with a single one, and so forth. Truth Table \ref{ASM:tab:qm_ex_1_rearranged_table} shows the minterms rearranged appropriately.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
% \rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccc}
\rowcolor{black!75}
\head{Number of 1's} & \head{Minterm} & \head{Binary} \\
0 & 0 & 0000 \\
\hline
\multirow{2}{*}{1} & 1 & 0001 \\
& 2 & 0010 \\
\hline
\multirow{4}{*}{2} & 5 & 0101 \\
& 6 & 0110 \\
& 9 & 1001 \\
& 10 & 1010 \\
\hline
\multirow{3}{*}{3} & 7 & 0111 \\
& 11 & 1011 \\
& 14 & 1110 \\
\hline
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 1: Rearranged Table}
\label{ASM:tab:qm_ex_1_rearranged_table}
\end{table}

Start combining minterms with other minterms to create Size Two Implicants (called that since each implicant combines two minterms), but only those terms that vary by a single binary digit can be combined.
When two minterms are combined, the binary digit that is different between the minterms is replaced by a dash, indicating that the digit does not matter. For example, $ 0000 $ and $ 0001 $ can be combined to form $ 000- $. The table is modified to add a Size Two Implicant column that indicates all of the combined terms. Note that every minterm must be compared to every other minterm so all possible implicants are formed. This is easier than it sounds, though, since terms in section one must be compared only with section two, then those in section two are compared with section three, and so forth, since each section differs from the next by a single binary digit. The Size Two Implicant column contains the combined binary form along with the numbers of the minterms used to create that implicant. It is also important to mark all minterms that are used to create the Size Two Implicants since allowance must be made for any not combined. Therefore, in the following table, as a minterm is used it is also struck through. Table \ref{ASM:tab:quine-mccluskey_ex_1_size_2_implicants} shows the Size Two Implicants that were found. \begin{table}[H] \sffamily \newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}} \begin{center} % \rowcolors{2}{gray!10}{white} % Color every other line a light gray \begin{tabular}{ccc|l} \rowcolor{black!75} \head{1's} & \head{Mntrm} & \head{Bin} & \head{Size 2} \\ 0 & \sout{0} & 0000 & 000- (0,1) \\ \cline{1-3} \multirow{2}{*}{1} & \sout{1} & 0001 & 00-0 (0,2) \\ & \sout{2} & 0010 & 0-01 (1,5) \\ \cline{1-3} \multirow{4}{*}{2} & \sout{5} & 0101 & -001 (1,9) \\ & \sout{6} & 0110 & 0-10 (2,6) \\ & \sout{9} & 1001 & -010 (2,10) \\ & \sout{10} & 1010 & 01-1 (5,7) \\ \cline{1-3} \multirow{3}{*}{3} & \sout{7} & 0111 & 011- (6,7) \\ & \sout{11} & 1011 & -110 (6,14) \\ & \sout{14} & 1110 & 10-1 (9,11) \\ \cline{1-3} & & & 101- (10,11) \\ & & & 1-10 (10,14) \\ \hline \end{tabular} \end{center} \caption{Quine-McCluskey Ex 1: Size 2 Implicants} \label{ASM:tab:quine-mccluskey_ex_1_size_2_implicants} \end{table} All of the Size Two Implicants can now be combined to form Size Four Implicants (those that combine a total of four minterms). Again, it is essential to only combine those with only a single binary digit difference. For this step, the dash can be considered the same as a single binary digit, as long as it is in the same place for both implicants. Thus, $ -010 $ and $ -110 $ can be combined to $ --10 $, but $ -010 $ and $ 0-00 $ cannot be combined since the dash is in different places in those numbers. It helps to match up the dashes first and then look at the binary digits. Again, as the various size-two implicants are used they are marked; but notice that a single size-four implicant actually combines four size-two implicants. Table \ref{ASM:tab:quine-mccluskey_ex_1_size_4_implicants} shows the Size Four Implicants. 
\begin{table}[H] \sffamily \newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}} \begin{center} % \rowcolors{2}{gray!10}{white} % Color every other line a light gray \begin{tabular}{ccc|l|l} \rowcolor{black!75} \head{1's} & \head{Mntrm} & \head{Bin} & \head{Size 2} & \head{Size 4} \\ 0 & \sout{0} & 0000 & 000- (0,1) & --10 (2,10,6,14) \\ \cline{1-3} \multirow{2}{*}{1} & \sout{1} & 0001 & 00-0 (0,2) & \\ & \sout{2} & 0010 & 0-01 (1,5) & \\ \cline{1-3} \multirow{4}{*}{2} & \sout{5} & 0101 & -001 (1,9) & \\ & \sout{6} & 0110 & \sout{0-10 (2,6)} & \\ & \sout{9} & 1001 & \sout{-010 (2,10)} & \\ & \sout{10} & 1010 & 01-1 (5,7) & \\ \cline{1-3} \multirow{3}{*}{3} & \sout{7} & 0111 & 011- (6,7) & \\ & \sout{11} & 1011 & \sout{-110 (6,14)} & \\ & \sout{14} & 1110 & 10-1 (9,11) & \\ \cline{1-3} & & & 101- (10,11) & \\ & & & \sout{1-10 (10,14)} & \\ \hline \end{tabular} \end{center} \caption{Quine-McCluskey Ex 1: Size 4 Implicants} \label{ASM:tab:quine-mccluskey_ex_1_size_4_implicants} \end{table} None of the terms can be combined any further. All of the minterms or implicants that are not marked are \emph{Prime Implicants}. In the table above, for example, the Size Two Implicant $ 000- $ is a Prime Implicant. The Prime Implicants will be placed in a chart and further processed in the next step. \subsubsection{Step 2: The Prime Implicant Table} \label{ASM:subsubsec:quine-mccluskey_ex_1_step_2} A \emph{Prime Implicant Table} can now be constructed, as in Table \ref{ASM:tab:qm_ex_1_prime_implicants}. The prime implicants are listed down the left side of the table, the decimal equivalent of the minterms goes across the top, and the Boolean representation of the prime implicants is listed down the right side of the table. \begin{table}[H] \sffamily \newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}} \begin{center} \rowcolors{2}{gray!10}{white} % Color every other line a light gray \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lccccccccccc} \rowcolor{black!75} & \head{0} & \head{1} & \head{2} & \head{5} & \head{6} & \head{7} & \head{9} & \head{10} & \head{11} & \head{14} & \\ % 0 1 2 5 6 7 9 10 11 14 $ 000-\;(0,1) $ & X & X & & & & & & & & & $ A'B'C' $ \\ $ 00-0\;(0,2) $ & X & & X & & & & & & & & $ A'B'D' $ \\ $ 0-01\;(1,5) $ & & X & & X & & & & & & & $ A'C'D $ \\ $ -001\;(1,9) $ & & X & & & & & X & & & & $ B'C'D $ \\ $ 01-1\;(5,7) $ & & & & X & & X & & & & & $ A'BD $ \\ $ 011-\;(6,7) $ & & & & & X & X & & & & & $ A'BC $ \\ $ 10-1\;(9,11) $ & & & & & & & X & & X & & $ AB'D $ \\ $ 101-\;(10,11) $ & & & & & & & & X & X & & $ AB'C $ \\ $ --10\;(2,10,6,14) $ & & & X & & X & & & X & & X & $ CD' $ \\ \hline \end{tabular} \end{adjustbox} \end{center} \caption{Quine-McCluskey Ex 1: Prime Implicants} \label{ASM:tab:qm_ex_1_prime_implicants} \end{table} An \emph{X} marks the intersection where each minterm (on the top row) is used to form one of the prime implicants (in the left column). Thus, minterm $ 0 $ (or $ 0000 $) is used to form the prime implicant $ 000- (0,1) $ in row one and $ 00-0 (0,2) $ in row two. The Essential Prime Implicants can be found by looking for columns that contain only one \emph{X}. The column for minterm $ 14 $ has only one \emph{X}, in the last row, $ --10 (2,10,6,14) $; thus, it is an Essential Prime Implicant. That means that the term in the right column for the last row, $ CD' $, must appear in the final simplified equation. However, that term also covers the columns for $ 2 $, $ 6 $, and $ 10 $; so they can be removed from the table. 
The Prime Implicant table is then simplified to Table \ref{ASM:tab:qm_ex_1_1st_iteration}.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccc}
\rowcolor{black!75}
& \head{0} & \head{1} & \head{5} & \head{7} & \head{9} & \head{11} & \\
% 0 1 5 7 9 11
$ 000-\;(0,1) $ & X & X & & & & & $ A'B'C' $ \\
$ 00-0\;(0,2) $ & X & & & & & & $ A'B'D' $ \\
$ 0-01\;(1,5) $ & & X & X & & & & $ A'C'D $ \\
$ -001\;(1,9) $ & & X & & & X & & $ B'C'D $ \\
$ 01-1\;(5,7) $ & & & X & X & & & $ A'BD $ \\
$ 011-\;(6,7) $ & & & & X & & & $ A'BC $ \\
$ 10-1\;(9,11) $ & & & & & X & X & $ AB'D $ \\
$ 101-\;(10,11) $ & & & & & & X & $ AB'C $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 1: 1st Iteration}
\label{ASM:tab:qm_ex_1_1st_iteration}
\end{table}

The various rows can now be combined in any order the designer desires. For example, if row $ 10-1 (9,11) $ is selected as a required implicant in the solution, then minterms $ 9 $ and $ 11 $ are accounted for in the final equation, which means that all \emph{X} marks in those columns can be removed. When that is done, rows $ 101- (10,11) $ and $ 10-1 (9,11) $ no longer have any marks in the table, and they can be removed. Table \ref{ASM:tab:qm_ex_1_2nd_iteration} shows the next iteration of this solution.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccc}
\rowcolor{black!75}
& \head{0} & \head{1} & \head{5} & \head{7} & \\
% 0 1 5 7
$ 000-\;(0,1) $ & X & X & & & $ A'B'C' $ \\
$ 00-0\;(0,2) $ & X & & & & $ A'B'D' $ \\
$ 0-01\;(1,5) $ & & X & X & & $ A'C'D $ \\
$ -001\;(1,9) $ & & X & & & $ B'C'D $ \\
$ 01-1\;(5,7) $ & & & X & X & $ A'BD $ \\
$ 011-\;(6,7) $ & & & & X & $ A'BC $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 1: 2nd Iteration}
\label{ASM:tab:qm_ex_1_2nd_iteration}
\end{table}

The designer next selects $ 01-1 (5,7) $, $ A'BD $, as a required implicant. That will include minterms $ 5 $ and $ 7 $, and those columns may be removed along with rows $ 01-1 (5,7) $, $ A'BD $, and $ 011- (6,7) $, $ A'BC $, as shown in Table \ref{ASM:tab:qm_ex_1_3rd_iteration}.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccc}
\rowcolor{black!75}
& \head{0} & \head{1} & \\
% 0 1
$ 000-\;(0,1) $ & X & X & $ A'B'C' $ \\
$ 00-0\;(0,2) $ & X & & $ A'B'D' $ \\
$ 0-01\;(1,5) $ & & X & $ A'C'D $ \\
$ -001\;(1,9) $ & & X & $ B'C'D $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 1: 3rd Iteration}
\label{ASM:tab:qm_ex_1_3rd_iteration}
\end{table}

The last two minterms ($ 0 $ and $ 1 $) can be covered by the implicant $ 000- (0,1) $, and that also eliminates the last three rows in the chart. The original Boolean expression, then, has been simplified from ten minterms to Equation \ref{ASM:eq:qm_ex_1_solution}.

\begin{align}
\label{ASM:eq:qm_ex_1_solution}
A'B'C'+A'BD+AB'D+CD' = Y
\end{align}
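The mechanical nature of Step 1 also makes it a good candidate for automation. The following is a minimal Python sketch of one combining round, offered only as an illustration of the rule described above; it assumes terms are given as bit strings over \texttt{0}, \texttt{1}, and \texttt{-}, and the helper names are arbitrary:

\begin{verbatim}
# One round of Quine-McCluskey combining. Two terms merge when they
# differ in exactly one position, and never at a dash.
def combine(a, b):
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i+1:]
    return None

def combine_round(terms):
    merged, used = set(), set()
    for a in terms:
        for b in terms:
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
    # Anything never used in a merge is a prime implicant.
    primes = [t for t in terms if t not in used]
    return sorted(merged), primes

minterms = ['0000', '0001', '0010', '0101', '0110',
            '0111', '1001', '1010', '1011', '1110']
size2, primes = combine_round(minterms)
print(size2)    # the twelve size-two implicants tabulated above
\end{verbatim}

Repeating \texttt{combine\_round} on its own output until no new merges occur collects every Prime Implicant needed for Step 2.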
\subsection{Example Two}
\label{ASM:subsec:quine-mccluskey_ex_2}

\subsubsection{Step 1: Create the Implicants}
\label{ASM:subsubsec:quine-mccluskey_ex_2_step_1}

Equation \ref{ASM:eq:qm_ex_2} is the Sigma representation of a Boolean equation.

\begin{align}
\label{ASM:eq:qm_ex_2}
f(A,B,C,D,E,F)=\sum(0,1,8,9,12,13,14,15,32,33,37,39,48,56)
\end{align}

Truth Table \ref{ASM:tab:qm_ex_2_minterm_table} shows the \emph{True} minterm values.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccccccc}
\rowcolor{black!75}
\head{Minterm} & \head{A} & \head{B} & \head{C} & \head{D} & \head{E} & \head{F} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 \\
8 & 0 & 0 & 1 & 0 & 0 & 0 \\
9 & 0 & 0 & 1 & 0 & 0 & 1 \\
12 & 0 & 0 & 1 & 1 & 0 & 0 \\
13 & 0 & 0 & 1 & 1 & 0 & 1 \\
14 & 0 & 0 & 1 & 1 & 1 & 0 \\
15 & 0 & 0 & 1 & 1 & 1 & 1 \\
32 & 1 & 0 & 0 & 0 & 0 & 0 \\
33 & 1 & 0 & 0 & 0 & 0 & 1 \\
37 & 1 & 0 & 0 & 1 & 0 & 1 \\
39 & 1 & 0 & 0 & 1 & 1 & 1 \\
48 & 1 & 1 & 0 & 0 & 0 & 0 \\
56 & 1 & 1 & 1 & 0 & 0 & 0 \\
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 2: Minterm Table}
\label{ASM:tab:qm_ex_2_minterm_table}
\end{table}

To simplify this equation, the minterms that evaluate to \emph{True} are placed in a minterm table so that they form sections that are easy to combine. Each section contains only the minterms that have the same number of ones. Thus, the first section contains all minterms with zero ones, the second section contains the minterms with a single one, and so forth. Table \ref{ASM:tab:qm_ex_2_rearranged_table} shows the rearranged truth table.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
% \rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccc}
\rowcolor{black!75}
\head{Number of 1's} & \head{Minterm} & \head{Binary} \\
0 & 0 & 000000 \\
\hline
\multirow{3}{*}{1} & 1 & 000001 \\
& 8 & 001000 \\
& 32 & 100000 \\
\hline
\multirow{4}{*}{2} & 9 & 001001 \\
& 12 & 001100 \\
& 33 & 100001 \\
& 48 & 110000 \\
\hline
\multirow{4}{*}{3} & 13 & 001101 \\
& 14 & 001110 \\
& 37 & 100101 \\
& 56 & 111000 \\
\hline
\multirow{2}{*}{4} & 15 & 001111 \\
& 39 & 100111 \\
\hline
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 2: Rearranged Table}
\label{ASM:tab:qm_ex_2_rearranged_table}
\end{table}

Start combining minterms with other minterms to create Size Two Implicants, as in Table \ref{ASM:tab:qm_ex_2_size_2_implicants}.
\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
% \rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccc|c}
\rowcolor{black!75}
\head{1's} & \head{Mntrm} & \head{Bin} & \head{Size 2} \\
0 & \sout{0} & 000000 & 00000- (0,1) \\
\cline{1-3}
\multirow{3}{*}{1} & \sout{1} & 000001 & -00000 (0,32) \\
& \sout{8} & 001000 & 00-000 (0,8) \\
& \sout{32} & 100000 & -00001 (1,33) \\
\cline{1-3}
\multirow{4}{*}{2} & \sout{9} & 001001 & 00-001 (1,9) \\
& \sout{12} & 001100 & 10000- (32,33) \\
& \sout{33} & 100001 & 1-0000 (32,48) \\
& \sout{48} & 110000 & 00100- (8,9) \\
\cline{1-3}
\multirow{4}{*}{3} & \sout{13} & 001101 & 001-00 (8,12) \\
& \sout{14} & 001110 & 100-01 (33,37) \\
& \sout{37} & 100101 & 001-01 (9,13) \\
& \sout{56} & 111000 & 00110- (12,13) \\
\cline{1-3}
\multirow{2}{*}{4} & \sout{15} & 001111 & 0011-0 (12,14) \\
& \sout{39} & 100111 & 11-000 (48,56) \\
& & & 1001-1 (37,39) \\
& & & 0011-1 (13,15) \\
& & & 00111- (14,15) \\
\hline
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 2: Size Two Implicants}
\label{ASM:tab:qm_ex_2_size_2_implicants}
\end{table}

All of the Size Two Implicants can now be combined to form Size Four Implicants, as in Table \ref{ASM:tab:qm_ex_2_size_4_implicants}.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
% \rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccc|c|c}
\rowcolor{black!75}
\head{1's} & \head{Mntrm} & \head{Bin} & \head{Size 2} & \head{Size 4} \\
0 & \sout{0} & 000000 & \sout{00000- (0,1)} & -0000- (0,1,32,33) \\
\cline{1-3}
\multirow{3}{*}{1} & \sout{1} & 000001 & \sout{-00000 (0,32)} & 00-00- (0,1,8,9)\\
& \sout{8} & 001000 & \sout{00-000 (0,8)} & 001-0- (8,9,12,13)\\
& \sout{32} & 100000 & \sout{-00001 (1,33)} & 0011-- (12,13,14,15)\\
\cline{1-3}
\multirow{4}{*}{2} & \sout{9} & 001001 & \sout{00-001 (1,9)} & \\
& \sout{12} & 001100 & \sout{10000- (32,33)} & \\
& \sout{33} & 100001 & 1-0000 (32,48) & \\
& \sout{48} & 110000 & \sout{00100- (8,9)} & \\
\cline{1-3}
\multirow{4}{*}{3} & \sout{13} & 001101 & \sout{001-00 (8,12)} & \\
& \sout{14} & 001110 & 100-01 (33,37) & \\
& \sout{37} & 100101 & \sout{001-01 (9,13)} & \\
& \sout{56} & 111000 & \sout{00110- (12,13)} & \\
\cline{1-3}
\multirow{2}{*}{4} & \sout{15} & 001111 & \sout{0011-0 (12,14)} & \\
& \sout{39} & 100111 & 11-000 (48,56) & \\
& & & 1001-1 (37,39) & \\
& & & \sout{0011-1 (13,15)} & \\
& & & \sout{00111- (14,15)} & \\
\hline
\end{tabular}
\end{center}
\caption{Quine-McCluskey Ex 2: Size 4 Implicants}
\label{ASM:tab:qm_ex_2_size_4_implicants}
\end{table}

None of the terms can be combined any further. All of the minterms or implicants that are not struck through are \emph{Prime Implicants}. In the table above, for example, $ 1-0000 $ is a Prime Implicant. The Prime Implicants are next placed in a table and further processed.

\subsubsection{Step 2: The Prime Implicant Table}
\label{ASM:subsubsec:quine-mccluskey_ex_2_step_2}

A \emph{Prime Implicant Table} can now be constructed, as in Table \ref{ASM:tab:qm_ex_2_prime_implicants}. The prime implicants are listed down the left side of the table, the decimal equivalent of the minterms goes across the top, and the Boolean representation of the prime implicants is listed down the right side of the table.
\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccccccccccc}
\rowcolor{black!75}
& \head{0} & \head{1} & \head{8} & \head{9} & \head{12} & \head{13} & \head{14} & \head{15} & \head{32} & \head{33} & \head{37} & \head{39} & \head{48} & \head{56} & \\
% 0 1 8 9 12 13 14 15 32 33 37 39 48 56
$ 11-000\;(48,56) $ & & & & & & & & & & & & & X & X & $ ABD'E'F' $ \\
$ 00-00-\;(0,1,8,9) $ & X & X & X & X & & & & & & & & & & & $ A'B'D'E' $ \\
$ 1001-1\;(37,39) $ & & & & & & & & & & & X & X & & & $ AB'C'DF $ \\
$ 1-0000\;(32,48) $ & & & & & & & & & X & & & & X & & $ AC'D'E'F' $ \\
$ 0011--\;(12,13,14,15) $ & & & & & X & X & X & X & & & & & & & $ A'B'CD $ \\
$ -0000-\;(0,1,32,33) $ & X & X & & & & & & & X & X & & & & & $ B'C'D'E' $ \\
$ 001-0-\;(8,9,12,13) $ & & & X & X & X & X & & & & & & & & & $ A'B'CE' $ \\
$ 100-01\;(33,37) $ & & & & & & & & & & X & X & & & & $ AB'C'E'F $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 2: Prime Implicants}
\label{ASM:tab:qm_ex_2_prime_implicants}
\end{table}

In the above table, there are four columns that contain only one \emph{X}: $ 14 $, $ 15 $, $ 39 $, and $ 56 $. The rows that intersect the columns at that mark are \emph{Essential Prime Implicants}, and their Boolean Expressions must appear in the final equation. Therefore, the final equation will contain, at a minimum: $ A'B'CD $ (row $ 5 $, covers minterms $ 14 $ and $ 15 $), $ AB'C'DF $ (row $ 3 $, covers minterm $ 39 $), and $ ABD'E'F' $ (row $ 1 $, covers minterm $ 56 $). Since those expressions are in the final equation, the rows that contain those expressions can be removed from the chart in order to make further analysis less confusing.

Also, because the rows with Essential Prime Implicants are contained in the final equation, other minterms marked by those rows are covered and need no further consideration. For example, minterm $ 48 $ is covered by row one (used for minterm $ 56 $), so column $ 48 $ can be removed from the table. In a similar fashion, columns $ 12 $, $ 13 $, and $ 37 $ are covered by other minterms, so they can be removed from the table. Table \ref{ASM:tab:qm_ex_2_1st_iteration} shows the next iteration of this process.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccc}
\rowcolor{black!75}
& \head{0} & \head{1} & \head{8} & \head{9} & \head{32} & \head{33} & \\
% 0 1 8 9 32 33
$ 00-00-\;(0,1,8,9) $ & X & X & X & X & & & $ A'B'D'E' $ \\
$ 1-0000\;(32,48) $ & & & & & X & & $ AC'D'E'F' $ \\
$ -0000-\;(0,1,32,33) $ & X & X & & & X & X & $ B'C'D'E' $ \\
$ 001-0-\;(8,9,12,13) $ & & & X & X & & & $ A'B'CE' $ \\
$ 100-01\;(33,37) $ & & & & & & X & $ AB'C'E'F $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 2: 1st Iteration}
\label{ASM:tab:qm_ex_2_1st_iteration}
\end{table}

The circuit designer can select the next term to include in the final equation from any of the five rows still remaining in the chart; however, the first term ($ 00-00- $, or $ A'B'D'E' $) would eliminate four columns, so that would be a logical next choice.
When that term is selected for the final equation, then row one, $ 00-00- $, can be removed from the chart, and columns $ 0 $, $ 1 $, $ 8 $, and $ 9 $ can be removed since those minterms are covered. The minterms marked for row $ 001-0- (8,9,12,13) $ are also covered, so this row can be removed. Table \ref{ASM:tab:qm_ex_2_2nd_iteration} shows the next iteration.

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccc}
\rowcolor{black!75}
& \head{32} & \head{33} & \\
% 32 33
$ 1-0000\;(32,48) $ & X & & $ AC'D'E'F' $ \\
$ -0000-\;(0,1,32,33) $ & X & X & $ B'C'D'E' $ \\
$ 100-01\;(33,37) $ & & X & $ AB'C'E'F $ \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Quine-McCluskey Ex 2: 2nd Iteration}
\label{ASM:tab:qm_ex_2_2nd_iteration}
\end{table}

For the next simplification, row $ -0000- $ is selected since that would also cover the minterms that are marked for all remaining rows. Thus, the expression $ B'C'D'E' $ will become part of the final equation. When the analysis is completed, the original equation (\ref{ASM:eq:qm_ex_2}), which contained $ 14 $ minterms, is simplified into Equation \ref{ASM:eq:qm_ex_2_solution}, which contains only five terms.

\begin{align}
\label{ASM:eq:qm_ex_2_solution}
ABD'E'F'+A'B'D'E'+AB'C'DF+A'B'CD+B'C'D'E' = Y
\end{align}

\subsection{Summary}
\label{ASM:subsec:quine-mccluskey_summary}

While the Quine–McCluskey method is useful for large Boolean expressions containing multiple inputs, it is also tedious and prone to error when done by hand. Also, there are some Boolean expressions (called ``Cyclic'' and ``Semi-Cyclic'' Primes) that do not reduce using this method. Finally, both Karnaugh maps and Quine-McCluskey methods become very complex when more than one output is required of a circuit. Fortunately, many automated tools are available to simplify Boolean expressions using advanced mathematical techniques.

\subsection{Practice Problems}
\label{ASM:subsec:quine-mccluskey_practice_problems}

The following problems are presented as practice for using the Quine-McCluskey method to simplify a Boolean expression. Note: designers can select different Prime Implicants so the simplified expression could vary from what is presented below.
\begin{table}[H]
\sffamily
\begin{center}
\begin{tabular}{c c p{6cm} }
\multirow{2}{*}{\textbf{1}} & Expression & $ f(A,B,C,D) = \sum(0,1,2,5,6,7,9,10,11,14) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ A'B'C'+A'BD+AB'D+CD' $ \\
\hline
\multirow{2}{*}{\textbf{2}} & Expression & $ f(A,B,C,D) = \sum(0,1,2,3,6,7,8,9,14,15) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ A'C+BC+B'C' $ \\
\hline
\multirow{2}{*}{\textbf{3}} & Expression & $ f(A,B,C,D) = \sum(1,5,7,8,9,10,11,13,15) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ C'D+AB'+BD $ \\
\hline
\multirow{2}{*}{\textbf{4}} & Expression & $ f(A,B,C,D,E) = \sum(0,4,8,9,10,11,12,13,14,15,16,20,24,28) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ A'B+D'E' $ \\
\end{tabular}
\end{center}
\caption{Quine-McCluskey Practice Problems}
\label{ASM:tab:quine-mccluskey_practice_problems}
\end{table}

%***************************************************************************
% Section: Automated Tools
%***************************************************************************
\clearpage\section{Automated Tools}
\label{ASM:sec:automated_tools}

\subsection{Introduction}
\label{ASM:subsec:introduction_to_automated_tools}

There are numerous automated tools available to aid in simplifying complex Boolean equations. Many of the tools are quite expensive and intended for professionals working full time in large companies; but others are inexpensive, or even free of charge, and are more than adequate for student use. This topic introduces one such free tool: \ac{KARMA}.

\subsection{KARMA}
\label{ASM:subsec:karma}

\subsubsection{Introduction}
\label{ASM:subsubsec:introduction_to_karma}

\ac{KARMA} is a free Java-based tool designed to help simplify Boolean expressions. Both an online and a downloadable version of \ac{KARMA} are available. The application can be found at: \url{http://goo.gl/8Lmx5v}. Note: The version of \ac{KARMA} used for this text is 3.62. A newer version may be available, but the instructions presented here use only the base functions and will likely be applicable even in an updated version of the software.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_01}
\caption{Karma Initial Screen}
\label{fig:07_01}
\end{figure}

The right side of the screen contains a row of tools available in \ac{KARMA} and the main part of the screen is a canvas where most of the work is done. The following tools are available:

\begin{itemize}
\item Logic2Logic. Converts between two different logical representations of data; for example, a Truth Table can be converted to Boolean expressions.
\item Logic Equivalence. Compares two functions and determines if they are equivalent; for example, a truth table can be compared to a SOP expression to see if they are the same.
\item Logic Probability. Calculates the probability of any one outcome for a given Boolean expression.
\item Karnaugh Map. Analyzes a Karnaugh map and returns the Minimized Expression.
\item KM Teaching Mode. Provides drill and practice with Karnaugh maps; for example, finding adjacent minterms on a 6-variable map.
\item SOP and POS. Finds the SOP and POS expressions for a given function.
\item Exclusive-OR. Uses XOR gates to simplify an expression.
\item Multiplexer-Based. Realizes a function using multiplexers.
\item Factorization. Factors Boolean expressions.
\item About. Information about \ac{KARMA}.
\end{itemize}

For this lesson, only the Karnaugh Map analyzer will be used, and the initial screen for that function is shown below.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_02}
\caption{Karma K-Map Analyzer}
\label{fig:07_02}
\end{figure}

\subsubsection{Data Entry}
\label{ASM:subsubsec:karma_data_entry}

When using \ac{KARMA}, the first step is to input some sort of information about the circuit to be analyzed. That information can be entered in several different formats, but a truth table or a Boolean expression would best match this book. To enter the initial data, click the \emph{Load Function} button at the top of the canvas.

By default, the Load Function window opens blank. In the lower left corner of the Load Function window, the Source Format for the input data can be selected. There is a template available for each of the different source formats, and that template can be used to help with data entry. The best way to work with \ac{KARMA} is to click the ``Templates'' button and then select the data format of interest. Figure \ref{fig:07_03} shows the ``Expression 1'' template.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_03}
\caption{Karma Expression One Loaded}
\label{fig:07_03}
\end{figure}

The designer would replace the ``inputs'' and ``onset'' lines with information for the circuit being simplified. Once the source data are entered into this window, click the \emph{Load} button at the bottom of the window to load the data into \ac{KARMA}.

\subsubsection{Data Source Formats}
\label{ASM:subsubsec:karma_data_source_formats}

\ac{KARMA} works with input data in any of six different formats: Boolean Expression, Truth Table, Integer, Minterms, \ac{BLIF}, and \ac{BDD}. \ac{BLIF} and \ac{BDD} are programming tools that are beyond the scope of this book and will not be covered.
%TODO: maybe expand the book to include BLIF and BDD.

\paragraph{Expression}
\label{ASM:para:karma_expression}

Boolean expressions can be defined in \ac{KARMA} using the following format.

{\small
\begin{verbatim}
#Sample Expression
(!x1*!x2*!x4)+(!x1*x2*!x3)+(x1*!x4*!x5)+(x1*x3*x4)
\end{verbatim}
}

Notes:

\begin{itemize}
\item Any line that starts with a hash mark (``\#'') is a comment and will be ignored by \ac{KARMA}.
\item ``Not'' is indicated by a leading exclamation mark. Thus $ !x1 $ is the same as $ X1' $.
\item All operations are explicit. In real-number algebra the phrase ``AB'' is understood to be ``A*B.'' However, in \ac{KARMA}, since variable names can be more than one character long, all operations must be explicitly stated. \textsf{AND} is indicated by an asterisk and \textsf{OR} is indicated by a plus sign.
\item No space is left between operations.
\end{itemize}

\paragraph{Truth Table}
\label{ASM:para:karma_truth_table}

A truth table can be defined in \ac{KARMA} using the following format.

\begin{verbatim}
#Sample Truth Table
inputs -> X, Y, Z
000: 1
001: 1
010: 0
011: 0
100: 0
101: 1
110: 0
111: 1
\end{verbatim}

Notes:

\begin{itemize}
\item Any line that starts with a hash mark (``\#'') is a comment and will be ignored by \ac{KARMA}.
\item The various inputs are named before they are used. In the example, there are three inputs: $ X $, $ Y $, and $ Z $.
\item Each row in the truth table is shown, along with the output expected. So, in the example above, an input of $ 000 $ should yield an output of $ 1 $.
\item An output of ``-'' is permitted and means ``don't care.''
\end{itemize}

\paragraph{Integer}
\label{ASM:para:karma_integer}

In \ac{KARMA}, an integer can be used to define the outputs of the truth table, so it is ``shorthand'' for an entire truth table input. Here is an example of the ``integer'' type input.

\begin{verbatim}
#Sample Integer Input
inputs -> A, B, C, D
onset -> E81A base 16
\end{verbatim}

Notes:

\begin{itemize}
\item Any line that starts with a hash mark (``\#'') is a comment and will be ignored by \ac{KARMA}.
\item Input variables are defined first. In this example, there are four inputs: $ A $, $ B $, $ C $, and $ D $.
\item The ``onset'' line indicates what combinations of inputs should yield a \emph{True} on a truth table. In the example, the number $ E81A $ is a hexadecimal number that is written like this in binary:
\end{itemize}

\begin{verbatim}
1110 1000 0001 1010
   E    8    1    A
\end{verbatim}

The least significant bit of the binary number, $ 0 $ in this example, corresponds to the output of the first row in the truth table; thus, it is false. Each bit to the left of the least significant bit corresponds to the next row, counting from $ 0000 $ to $ 1111 $. Here is the truth table generated by the hexadecimal integer $ E81A $:

\begin{table}[H]
\sffamily
\newcommand{\head}[1]{\textcolor{white}{\textbf{#1}}}
\begin{center}
\rowcolors{2}{gray!10}{white} % Color every other line a light gray
\begin{tabular}{ccccc}
\rowcolor{black!75}
\multicolumn{4}{c}{\head{Inputs}} & \head{Output} \\
A & B & C & D & Y \\
\hline
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 \\
1 & 1 & 1 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 \\
\end{tabular}
\end{center}
\caption{Truth Table for KARMA}
\label{03:tab:truth_table_for_karma}
\end{table}

The ``Output'' column contains the binary integer $ 1110\;1000\;0001\;1010 $ (or $ E81A_{16} $) from bottom to top.

\paragraph{Minterms}
\label{ASM:para:karma_terms}

Data input can be defined by using the minterms for the Boolean expression. Following is an example minterm input.

\begin{verbatim}
#Sample Minterms
inputs -> A, B, C, D
onset -> 0, 1, 2, 3, 5, 10
\end{verbatim}

\begin{itemize}
\item Any line that starts with a hash mark (``\#'') is a comment and will be ignored by \ac{KARMA}.
\item The inputs, $ A $, $ B $, $ C $, and $ D $, are defined first.
\item The ``onset'' line indicates the minterm numbers that yield a \emph{True} output.
\item This is similar to a \ac{SOP} expression, and the digits in that expression could be directly entered on the onset line. For example, the onset line above would have been generated from the Sigma expression in Equation \ref{ASM:eq:KARMA Input}.
\end{itemize}

\begin{align}
\label{ASM:eq:KARMA Input}
f(A,B,C,D) &= \sum(0,1,2,3,5,10)
\end{align}

\subsubsection{Truth Table and K-Map Input}
\label{ASM:subsubsec:karma_truth_table_and_kmap_input}

While \ac{KARMA} will accept a number of different input methods, as described above, one of the easiest to use is the Truth Table, and the related Karnaugh Map, and these are displayed by default when the Karnaugh Map function is selected.
The value of any of the cells in the \emph{Out} column in the Truth Table, or cells in the Karnaugh Map, can be cycled through $ 0 $, $ 1 $, and ``don't care'' (indicated by a dash) on each click of the mouse in the cell. The Truth Table and Karnaugh Map are synchronized as cells are clicked.

The number of input variables can be adjusted by changing the \emph{Var} setting at the top of the screen. Also, the placement of those variables on the Karnaugh Map can be adjusted as desired.

\subsubsection{Solution}
\label{ASM:subsubsec:karma_solution}

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_04}
\caption{Karma Solution}
\label{fig:07_04}
\end{figure}

To simplify the Karnaugh Map, click the \emph{Minimize} button. A number of windows will pop up (illustrated in Figure \ref{fig:07_04}), each showing the circuit simplification in a slightly different way. Note: the following Boolean expression was entered to generate the illustrated simplification: $ A'C + A'B + AB'C' + B'C'D' $.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_05}
\caption{Karma Minimized Solution}
\label{fig:07_05}
\end{figure}

\marginpar{KARMA includes parentheses for clarity, but the groups are obvious when the expression is written in normal Boolean form.}
In this solution, a \emph{NOT} term is identified by a leading exclamation point; thus, the minimized expression is: $ A'D' + A'B + AB'C' + A'C $.

\paragraph{BDDeiro}
\label{ASM:para:karma_bddeiro}

The BDDeiro window is a visualization of a \ac{BDD}, which graphically represents the solution to a logic network.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_06}
\caption{Karma BDDeiro Map}
\label{fig:07_06}
\end{figure}

In a \ac{BDD}, each circle represents an input and the squares at the bottom of the diagram represent the two possible outputs: \emph{False} and \emph{True}. The lines are paths from the inputs to either a \emph{False} or \emph{True} output; thus, a truth table can be viewed graphically. A \ac{BDD} is useful because it provides a compact, visual representation of a Boolean expression, and for any given Boolean expression (with a fixed variable ordering) there is one, and only one, reduced \ac{BDD} representing it. One disadvantage to using a \ac{BDD} is its size: there are potentially two nodes for each input (except the first), and that can lead to a very large diagram.

Figure \ref{fig:07_06} is actually a ``Reduced Order'' \ac{BDD} and a number of nodes and paths have been consolidated to make the diagram as simple as possible. The top node represents the start of the decision diagram: input $ a $. If that input is \emph{False}, then follow the dotted line down to node $ d $. (Note: the lines are color-coded to aid in their use; \emph{False} lines are blue and \emph{True} lines are red.) If node $ d $ is false, then that leads directly to output \emph{True}. Thus $ A'D' $ gives a \emph{True} output, and that is one of the minimized solutions. To follow one other path, if $ a $ is \emph{True} (follow the solid line down and right), $ b $ is \emph{False}, and $ c $ is \emph{False}, the output is \emph{True}. Thus, $ AB'C' $ is \emph{True}. In a similar way, all four \emph{True} outputs, and three false outputs, can be traced from the top to bottom of the diagram.

\paragraph{Quine-McCluskey}
\label{ASM:para:quine-mccluskey}

\ac{KARMA} includes the complete Quine-McCluskey solution data.
Several tables display the various Implicants and show how they are derived.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_07}
\caption{Karma Quine-McCluskey Solution}
\label{fig:07_07}
\end{figure}

\ac{KARMA} also displays the Covering Table for a Quine-McCluskey solution.

\begin{figure}[H]
\centering
\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/07_08}
\caption{Karma Quine-McCluskey Covering Table}
\label{fig:07_08}
\end{figure}

Each of the minterms (down the left column) can be turned on or off by clicking on it. The smaller blue dots in the table indicate prime implicants and the larger red dots (if any) indicate essential prime implicants. Because this table is interactive, various different solutions can be attempted by clicking some of the colored dots to achieve the best possible simplification.

\subsubsection{Practice Problems}
\label{ASM:subsubsec:karma_practice_problems}

The following problems are presented as practice for using \ac{KARMA} to simplify a Boolean expression. Note: designers can select different Prime Implicants so the simplified expression could vary from what is presented below.

\begin{table}[H]
\sffamily
\begin{center}
\begin{tabular}{c c p{6cm} }
\multirow{2}{*}{\textbf{1}} & Expression & $ f(A,B,C,D) = \sum(5,6,7,9,10,11,13,14) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ BC'D+A'BC+ACD'+AB'D $ \\
\hline
\multirow{2}{*}{\textbf{2}} & Expression & $ A'BC'D+A'BCD'+A'BCD+AB'C'D+AB'CD'+AB'CD+ABC'D+ABCD' $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ BC'D+A'BC+ACD'+AB'D $ \\
\hline
\multirow{2}{*}{\textbf{3}} & Expression & A 4-variable Karnaugh Map where cells 5, 6, 7, 9, and 10 are True and 13, 14 are ``Don't Care'' \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ BC'D+AC'D+A'BC+ACD' $ \\
\hline
\multirow{2}{*}{\textbf{4}} & Expression & $ f(A,B,C,D,E) = \sum(0, 3, 4, 12, 13, 14, 15, 24, 25, 28, 29, 30) $ \\
& \cellcolor{gray!10} Simplified & \cellcolor{gray!10} $ ABD'+A'B'C'DE+BCE'+A'BC+A'B'D'E' $ \\
\end{tabular}
\end{center}
\caption{KARMA Practice Problems}
\label{ASM:tab:karma_practice_problems}
\end{table}
{ "alphanum_fraction": 0.6260898876, "avg_line_length": 48.7938596491, "ext": "tex", "hexsha": "a0b2c255cb9623105f49853c2b13e86ae78677b1", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-02-20T06:06:00.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-20T17:30:54.000Z", "max_forks_repo_head_hexsha": "e5d78f880e0bae66b33823ee22c68ed16696af3c", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "grself/CIS221_Text", "max_forks_repo_path": "Chapters/07_Adv_Simp_Methods.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "e5d78f880e0bae66b33823ee22c68ed16696af3c", "max_issues_repo_issues_event_max_datetime": "2021-05-09T19:18:59.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-09T19:18:59.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "grself/CIS221_Text", "max_issues_repo_path": "Chapters/07_Adv_Simp_Methods.tex", "max_line_length": 1340, "max_stars_count": 10, "max_stars_repo_head_hexsha": "e5d78f880e0bae66b33823ee22c68ed16696af3c", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "grself/CIS221_Text", "max_stars_repo_path": "Chapters/07_Adv_Simp_Methods.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-06T17:09:02.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-10T15:35:42.000Z", "num_tokens": 15824, "size": 44500 }
\chapter[Hirability and Educational Prestige]{Hirability and Educational Prestige}

\section{Introduction}

The accredited degree is an established means to individual-level employability, but the proliferation of the degree is associated with a variety of well-understood issues. These issues include the student debt crisis, skill gaps, grade inflation, and low social return.
% and contribution to lack of diversity in particular labor markets.
% above, or
% and systematic demographic change to industrial labor. (which many perceive as problematic for a diverse labor pool)
Alternative credentials, or non-accredited credentials, are a broad category of offerings that exhibit greater variation in intensity, price, and outcomes\cite{urdan_2020}. Alternative credentials are often a signal of niche skills and expertise in a particular job family. These characteristics combine the benefit of potentially high value addition to the labor market with the cost of a value calculation problem shared by potential employers and education consumers. This paper seeks to reduce the general difficulty of credential value calculation by testing a method of value normalization with heuristics to identify those credentials likely to yield meaningful benefits to the typical job search.

This paper tests the lens of prestige as a tool to normalize value across accredited and alternative credentials. This study leverages an original questionnaire to identify prestige levels of various credentials. This paper tests the composite hypothesis that some level of prestige allows an alternative credential to compete with traditional credentials for employment. Several specific lines of evidence are required to support the composite hypothesis. Statistical evidence must demonstrate significant positive effects for accreditation and prestige on hirability. The effect size for prestige must be sufficiently large to dominate the accreditation effect over the attainable range. The questionnaire allows a prestige response on a 10-point scale, so the attainable range is from 1 to 10. A vignette analysis can test whether a dominant range for prestige exists within the attainable window. An ideal result would further show that one or more actual alternative credentials fall into this dominant range.

The motivation for the lens of prestige extends from the academic work in education economics and the economics of social norms. Education economics provides two mainstream accounts of the value of a degree. One account is the human capital model, and the other is the signaling model. The human capital model explains that improved labor outcomes result from skills gained by a student in the course of education. Stakeholders of various kinds prefer alternative credentials to the traditional degree for the attainment of specific technical skills\cite{craig2018new}. For this reason, many college graduates supplement their degrees with alternative credentials. Some alternative learning providers specifically target this market with a special kind of alternative education called last-mile training. This presents an explanatory problem for the human capital model. If better labor outcomes arise from skill enhancement, then alternatively educated individuals should enjoy better wages, employment rates, and so on, compared to college graduates; in practice, this advantage is not generally observed. The signaling model holds that credentials signal a basket of applicant qualities that employers value.
Proponents of the signaling model commonly argue that the college degree
signals intelligence, work ethic, and conformity\cite{caplan_2012}.
The signaling model offers an explanation for the correlation between weak labor outcomes and alternative credentials,
even if alternative credentials endow students with better skills.
The explanation is that the alternative credential signals an offsetting deficit of some kind.
This paper treats prestige as a signal rather than a matter of human capital.
This paper prefers the signaling approach because it allows a direct investigation of prestige effects
with minimal theoretical baggage and without a need to test student skill.

In a broad review of economics and norm types,
hiring decisions exist within what Elster would identify as work norms\cite{elster1989social}.
Elster supports a rational model of work norms,
with the caveat that social interactions may involve unobserved emotional effects.
Similarly, the rational model used in this paper may not extrapolate with accuracy into abnormal emotional situations.
This paper will also make use of the distinction between social and legal norms provided by Elster.
Rivera is one scholar within the economics of work norms
who has recently operationalized social norms as prestige\cite{rivera2016pedigree}.
Rivera finds that prestige is important in her analysis,
but her analytical scope focuses on traditional education and a few specific industries, including health and law.
The current paper extends the analysis of prestige and hiring norms
across many industries and to include alternative credentials.
\section{Description of Data and Methodology}

This paper investigates an original set of online questionnaire responses ($n = 454$).
Responses are cross-sectional data obtained in March of 2021.
Respondents are United States citizens at or over the age of eighteen.
Qualified respondents participated in the survey through the Amazon Mechanical Turk platform.
Appendix B contains the wording and response options for each question.
Appendix B also contains the wording for a priming message presented at the start of the survey.
The priming message lays out the definition of alternative credentials used in this study.
The message also provides several concrete examples of alternative credentials,
including ``a Certified Project Manager certification, a portfolio of work,
a Khan Academy profile, or a Nanodegree from Udacity.''

The dependent variable of interest is called hirability.
This variable measures individual response on a 10-point scale to the statement,
``For many professions, alternative credentials can qualify a person for an entry-level position.''
The questionnaire is composed of three sections.
The first section collects respondent characteristics and baseline hirability.
The second section collects prestige responses with respect to nine real-world learning providers.
The third section collects hirability and prestige responses with respect to eight vignette learning providers.

Investigation of the first section of the questionnaire uses ordinary least squares analysis.
Vignette data is analyzed as a panel in mixed models with individual random effects.
The vignette model allows comparison between prestige and accreditation coefficients.
Vignette analysis encounters a practical utility problem
in that the schools are only vignettes rather than actual learning providers.
A comparison of descriptive statistics across vignettes and actual schools addresses this concern.

Half of the respondents randomly received an informational message about the nine real-world learning providers.
Appendix B includes the wording of this message.
The message provides rating data from two leading credential aggregator websites.
University ratings are US News ranking information for the 2021 school year.
Course Report provides the rating data for so-called coding bootcamps as of December 2020.
As an aside, inaccurate credential category labels contribute to the knowledge
and value calculation problems that inhibit social adoption.
Coding bootcamps focus on roles in the information technology industry,
but these roles are much broader in scope than the category label implies.
Moreover, the information technology industry is special in that it cuts across all other industries.
Much of the academic, policy, and industry discussion on coding bootcamps misses
that these institutions provide credentials that potentially compete
with university degrees in nearly any subject.
For example, General Assembly is one of the particular coding bootcamps investigated in this study.
General Assembly provides credentials for user experience design,
a set of skills involving market research and applied technical art skills.
General Assembly also provides credentials for product management.
Product management is a job family that competes for labor among business degree graduates.
The data science credential provides skills that compete with accredited labor
in mathematics, statistics, economics,
and even subjects in the hard sciences like computational biology.
Finally, there are credentials that relate to software development
and compete with accredited degrees in computer science.

Respondent characteristics are categorical variables.
Hirability and prestige are 10-point Likert-type responses.
Prestige takes a second representation as a stipulated boolean.
Stipulating prestige enables the application of results to a real job search.
If stipulated prestige is highly correlated with prestige response,
and if prestige response is correlated with improved hirability,
then the selection criteria for stipulated prestige can be applied
in an actual job search to potentially improve outcomes.

To illustrate the method of two-way prestige validation,
suppose that a vignette school is stipulated as high prestige.
This situation is represented in regression as a dummy variable
for stipulated high prestige with a value of true.
The respondent reads that the vignette school is known to be prestigious.
After reading this, the respondent provides a prestige response rating on a 10-point scale.
Investigation of all responses allows an analyst to determine an average prestige response level
which is associated with the stipulated high prestige criteria.
To preview results, stipulated high prestige turns out to be strongly correlated with high prestige response.
Interestingly, there are cases where a respondent gives a low response rating to, for example,
the University of Chicago, a school with high stipulated prestige based on aggregator website ratings.
This result indicates the importance of some analysis that accounts for individual effects.
Two-way representation of prestige enables the application of findings to an actual job search.
In an actual job search, individuals can easily access aggregator website data.
In the real world, an individual cannot readily access questionnaire results for many credentials.
Results from this paper include the identification of rules of thumb
that a person can use to identify actual learning providers as high prestige.
To ensure clarity of results, stipulated prestige always refers to the dummy variable,
and prestige response refers to the 10-point measure.
The vignette section and the section on actual schools use stipulated prestige.

All other variables are either 10-point Likert-type responses or categorical variables\footnote{
	It is an accepted practice to treat Likert-type responses as either categorical or continuous for regression analysis.
	Jaccard and Wan provide support for continuous analysis of Likert-type data.
	They note that severe departures from the assumptions on cardinality
	``do not seem to affect Type I and Type II errors dramatically,''
	particularly when the Likert scale is five or more points\cite{jaccard1996lisrel}.
	This paper treats responses on a 10-point scale as continuous.
}.
Categorical variables are exclusively respondent characteristics.
Four other respondent measures are Likert-type responses.
Vignette responses include responses for hirability and prestige,
while actual schools only receive responses for hirability.

Respondent characteristics include eight standard controls and four questions unique to this study.
The eight standard controls include age, gender, ethnicity, income, level of education,
employment status, industry of occupation, and state of residence.
A unique question on work norms records whether the respondent tends
``to work more closely with coworkers at your company or customers and external business partners.''
The motivation for this question is to test whether prestige
disproportionately impacts roles that are outward or client-facing.
Respondents are also directly asked whether they ``prefer to hire or work with a person
that has a college degree rather than a person that holds a reputable certification or non-college credential.''
Another unique control is support for online education.
This control allows analysis to separate hirability effects due to online education preference
from hirability effects due to unaccredited education preference.
In practice, many alternative credentials involve online learning,
but accredited learning is also increasingly taking place online.

The fourth control is expected conventionality.
This variable measures whether the respondent believes that
``It will soon become common for high school graduates
to obtain alternative credentials instead of going to college.''
This is a useful correction variable for two reasons.
First, it separates willingness to hire based on respondent preference
from indirect willingness to hire based on perceived social norms.
Individual preferences and social norms are certainly correlated,
but the correlation is small enough that failure to separate these effects
leads to nontrivial statistical noise.
Second, surveys sometimes overreport demand effects
because of the lack of a cost constraint on respondent expression.
This bias is sometimes called budget constraint bias
or omitted budget constraint bias\cite{ahlheim1998contingent, pachali2020omitted}.
Without a cost constraint, respondents tend to exaggerate demand responses like the willingness to hire.
Budget constraint bias affects both hirability and expected conventionality,
so conventionality operates in part as a bias control.

Vignette question formatting follows Atzm{\"u}ller and Steiner\cite{atzmuller2010experimental}.
Each vignette stipulates whether a school is accredited,
whether the respondent should imagine the school as impressive,
and whether the respondent should imagine that other people consider the school impressive.
Each stipulated factor can take a value of true or false, resulting in eight vignette questions.

This study uses multiple regression and descriptive statistics to generate results.
Multiple regression is conducted using ordinary least squares (OLS) for baseline hirability analysis,
while linear mixed models (LMMs) are used for vignette analysis.
An OLS specification of vignette data is inappropriate
because repeated measures of hirability from a single participant
introduce an individual-level bias into the resulting coefficients.
LMMs are able to account for these individual-level effects.
Following Magezi\cite{magezi2015linear}, linear mixed models in this paper
use a within-participant random factor, or individual random effects,
to correct for individual-level repeated measures bias.
LMMs yield linear coefficients, so the interpretation of LMM coefficients is similar to OLS.
One difference of note is that adjusted r-squared is not available for an LMM.
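To make the mixed specification concrete, the following is a minimal sketch
of a linear mixed model with an individual random intercept;
the notation is introduced here for illustration and is not drawn from the analytical code:
\begin{equation*}
H_{ik} = \mathbf{x}_{ik}^{\top} \beta + u_i + \varepsilon_{ik},
\qquad u_i \sim N(0, \sigma_u^2),
\qquad \varepsilon_{ik} \sim N(0, \sigma_\varepsilon^2),
\end{equation*}
where $i$ indexes respondents, $k$ indexes vignette questions,
and the random intercept $u_i$ absorbs the individual-level effects
that repeated measures from the same respondent would otherwise push into the coefficient estimates.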
\section{Results}

Results ($n = 454$) indicate that accredited degrees are generally higher in prestige
compared to alternative credentials.
Alternative credentials are meaningfully associated with hirability,
and in certain situations, they are preferred to accredited degrees.
Competitive status indicates that a credential is correlated with hirability
to a similar or greater extent compared to an accredited degree.
Results provide evidence for three cases in which alternative credentials are competitive.

First, specific alternative credentials are of particularly high prestige.
This study finds that a credential from Google is sufficiently prestigious
to be competitive without a requirement of supplementary conditions.

Second, some individuals award prestige preferentially to alternative learning providers.
In a comparison among nine actual learning providers in this study,
71 percent of respondents prefer at least one alternative credential to at least one university degree.
The proportion increases to about 75 percent when respondents view rating data
from the online review aggregators Course Report and US News.

Third, certain independent factors in hiring decision models
support the hirability of alternatively credentialed job candidates.
Industry and state effects are two such compensating factors
that can add up to overcome the average preference for accredited labor
over alternatively credentialed labor.

Baseline hirability is the institution-agnostic hirability measure.
The mean response for baseline hirability is 7.58 on a 10-point scale, and the median response is 8.
Table \ref{tab:desc_stats} gives average hirability and prestige for segments of respondents of interest.
Four basic results in the table are worth noting.
First, stipulated prestige always moves with prestige response as expected.
Second, as expected, the hirability and prestige effects for accredited schools
are generally higher than those for non-accredited schools.
Third, the difference in average hirability between high and low prestige providers
is more than twice the difference in hirability between accredited and unaccredited providers.
This supports the possibility of an actual competitive alternative credential
in the attainable range of prestige.
The fourth result is an initial attempt at a prestige rule of thumb.
For both vignette and actual schools, if a school can obtain a prestige score of 7 or more,
it will be at least as prestigious as the average accredited school.

\begin{table}
	\caption{Average Hirability and Prestige}
	\resizebox{\columnwidth}{!}{
		\input{./figures-and-tables/table-prestige-summary-stats.tex}
	}
	\tableSpace
	\label{tab:desc_stats}
\end{table}

Google is the only unaccredited learning provider to achieve a competitive status on the basis of this initial rule.
The mean prestige response for Google was 7.10, and the median response was 7.
Two lower bars for competitive status are interesting.
First, an alternative provider can be described as moderately competitive
if it fails to beat the average university but succeeds in beating at least one university on average.
The lowest average prestige response for an accredited university is 6.34 for the University of Nebraska.
Second, an alternative provider can be described as weakly competitive
if it fails to beat any university on average,
but it succeeds in beating at least one university in a significant percentage of individual responses.
No alternative credentials investigated in this study meet the criteria for moderate competitiveness.
App Academy, General Assembly, and Google are the three alternative learning providers with stipulated high prestige.
All stipulated high prestige learning providers are at least weakly competitive.

When asked directly, 41.6 percent of respondents indicated that they would not prefer
to work with a person that holds an accredited credential instead of
``a person that holds a reputable certification or non-college credential.''
When examining prestige response instead of asking directly,
over 70 percent of respondents reveal a preference for at least one actual alternative credential
over at least one university credential.
Over half of respondents preferred at least one actual alternative credential with stipulated high prestige
to at least one university credential with stipulated high prestige.
After excluding Google, over one-quarter of respondents continue to prefer
at least one actual alternative credential with stipulated high prestige
to at least one university credential with stipulated high prestige.

Zety is an online platform that facilitates job search.
Zety reports that one in six job applicants in the United States receives an interview,
and the average conversion rate from interview to offer was 19.78 percent in 2016\cite{turczynski_2021}.
Assuming rejections are independent enables the naive estimate
that most job searches consist of at least four interviews\footnote{
	Four independent games that each include an eighty percent chance of rejection
	yield $0.8^4 = 0.4096$.
	The associated probability of having at least one offer result from four interviews
	would be about $1 - 0.41 = 0.59$, or 59 percent, which is more likely than not.
} and dozens of applications.
Given the rates at which respondents prefer alternative credentials to accredited degrees,
a job search of typical length is likely to include several applications
and at least one interview with one or more employers that would prefer
an alternative credential with stipulated high prestige to an accredited degree.
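The footnote's arithmetic generalizes in a way worth making explicit.
With a per-interview offer probability $p$ and independent outcomes,
the chance that a search with $n$ interviews yields at least one offer is
\begin{equation*}
P(\text{at least one offer}) = 1 - (1 - p)^{n},
\end{equation*}
so with $p \approx 0.2$, four interviews give $1 - 0.8^4 \approx 0.59$
and seven interviews give $1 - 0.8^7 \approx 0.79$.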
\input{./figures-and-tables/table-regressions.tex}

Table \ref{tab:table_regs} gives three models.
The first model is an ordinary least squares model of baseline hirability.
Backward elimination to the point of adjusted r-squared maximization yields Model 1.
Adding factors of accreditation and prestige to Model 1,
then adapting the model to a linear mixed model (LMM), yields Model 2.
Model 3 results from additional backward elimination on Model 2.
Four individuals that completed the first section of the questionnaire did not complete the entire questionnaire.
The remaining 450 respondents each report hirability for the eight vignette schools,
yielding 3,600 observations for the mixed models.
Because LMM does not permit computation of r-squared,
the termination criterion for the factor elimination process in Model 3
was to retain all factors with a p-value under 0.5.
This is a permissive criterion intended to guard against overfitting.
The logical basis for this rule is that each observed effect is more likely to exist
than to not exist when $p < 0.5$.
Despite the permissive criterion, only one insignificant factor, for income, remains in Model 3.

Model 2 and Model 3 have one other interesting difference.
Model 3 includes the boolean for whether a school was stipulated as high prestige.
For vignette schools with high prestige, the participant viewed two statements about the vignette.
The questionnaire instructs the participant to imagine a school they consider to be impressive.
The questionnaire also instructs the participant to imagine that other people consider the school to be impressive.
This situation is technically equivalent to an interaction of the two subcomponents.
Because Model 2 includes both stipulated high prestige subcomponents and the accreditation dummy,
including high prestige would generate perfect multicollinearity.
Backward elimination of Model 2 drops the factor for own stipulated prestige,
so the subsequent insertion of high prestige is nonproblematic.

Model 3 is the preferred model.
Prestige and accreditation effects are positive and significant.
These two effects also interact with a significant and negative coefficient.
The values of these coefficients of interest are consistent across Model 2 and Model 3.
The coefficient on the dummy variable for accreditation is about two and a half times larger
than the coefficient on the prestige response, but the average prestige response is near seven.
This indicates that the prestige response explains a larger share of hirability variance
compared to accreditation.
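As a back-of-the-envelope illustration using the Model 3 coefficients
reported in equation \ref{eq1} below
(the arithmetic here is illustrative, not an additional estimate),
a prestige response at the mean of about seven contributes roughly
\begin{equation*}
0.53 \times 7 \approx 3.7
\end{equation*}
points of hirability for an unaccredited school,
or about $(0.53 - 0.10) \times 7 \approx 3.0$ points net of the interaction for an accredited school;
either contribution is well above the $1.27$ points from the accreditation dummy itself.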
An application of Model 3 is another approach to the identification of competitive alternative credentials.
Hold factors other than accreditation and prestige constant.
Let the hirability level of school $k$ be called $H_k$.
Let $X_{ka}$ be accreditation status, $X_{kp}$ the prestige response,
$X_{kh}$ the dummy for stipulated high prestige,
and $X_{ko}$ the dummy for whether other people consider the school prestigious.
Let $H_1$ be an unaccredited school with high stipulated prestige.
Let $H_2$ be an accredited school without high stipulated prestige.
Let $X_{2p} = 6.49$, which is the prestige response equal to the average for an accredited vignette,
as reported in Table \ref{tab:desc_stats}.
This system of equations is described in equations \ref{eq1} through \ref{eq5}:
\begin{subequations}
	\begin{equation}
	H_k = 1.27X_{ka} - \num{0.1}X_{ka}X_{kp} + 0.53X_{kp} + 0.14X_{kh} + 0.59X_{ko}
	\label{eq1}
	\end{equation}
	\begin{equation}
	H_1 = 0.53X_{kp} + 0.14 + 0.59
	\label{eq2}
	\end{equation}
	\begin{equation}
	H_2 = 1.27 - \num{0.1}(6.49) + 0.53(6.49)
	\label{eq3}
	\end{equation}
	\begin{equation}
	X_{kp} = (1.27 - \num{0.1}(6.49) + 0.53(6.49) - 0.14 - 0.59) / 0.53
	\label{eq4}
	\end{equation}
	\begin{equation}
	X_{kp} \approx 6.28
	\label{eq5}
	\end{equation}
\end{subequations}
Equation \ref{eq5} indicates that an alternative credential with stipulated high prestige
and a prestige response of 6.28 or higher
is approximately competitive with the average accredited vignette.
Table \ref{tab:desc_stats} indicates that the prestige response for the average vignette school is 6.21.
This is a significant difference compared to the average actual school prestige response of 6.50.
Coincidentally, compensating for this gap either additively ($6.28 + 0.29$)
or proportionally ($6.28 \times 6.50 / 6.21$) yields 6.57 in both cases.
This prestige requirement exceeds the low bar set by comparison to the University of Nebraska.
Google remains the only alternative provider to obtain general competitive status
without the presence of other preferential factors.
App Academy and General Assembly both have average prestige responses close to 5.8.
Models reveal several situations in which other factors overcome this deficit,
but many of these offsetting factors are difficult to determine and leverage prior to a hiring decision.
The California state effect is an interesting exception that an actual job search could exploit.

Alternative credentials provide a potential source of diverse labor to employers.
Interestingly, neither ethnicity nor gender was significantly associated with hirability.
There is little evidence for the thesis that client-facing roles
preferentially benefit from credential prestige or accreditation.
Respondent client exposure on the job was associated with a slightly larger baseline willingness
to hire an alternatively educated candidate.
The extent of client contact was insignificant in mixed models.

\begin{figure}[h!]
\centering
\begin{tikzpicture}[element/.style={minimum width=1.75cm, minimum height=0.85cm}]
	\node (n1) {\includegraphics[width=1\textwidth]{./figures-and-tables/context-graph-massaged.png}};
\end{tikzpicture}
\caption{Prestige Response Distribution for Actual Schools}
\figSpace
\label{fig:var_results}
\end{figure}

Finally, Figure \ref{fig:var_results} visualizes the prestige response distribution for actual schools.
The four subplots describe whether a respondent randomly received information from review site aggregators
and how they evaluated credential accreditation.
Exposure to aggregated review information is associated with fewer responses
at the positive and negative extrema of the response distribution
for accredited and unaccredited schools.
On average, alternative education prestige rose, and accredited education prestige declined,
when a respondent received review aggregator site information.

\section{Conclusions}

This study hypothesized that some level of prestige allows an alternative credential
to compete with traditional credentials for employment.
Results provide evidence in favor of this hypothesis.
Regression results show meaningful positive correlations of prestige and accreditation with hirability.
A range of hirability responses that includes the average response and some below-average responses
finds a dominant explanation in prestige effects over accreditation alone.
While prestige explains a larger share of hirability variance than accreditation,
accreditation robustly maintains a meaningful effect on its own.
The robust importance of accreditation indicates that arbitrary improvements
to alternative credential quality and social acceptability
are not likely to displace the higher education system in expectation.

This study began with the assertion that alternative credentials
are a source of unexploited technical value.
The study validated a partial explanation from prestige as a representation of social norms.
The introduction noted an important distinction between legal and social norms from Elster.
By elimination, legal norm change is an important candidate to allow alternative credentials
the opportunity to fully outcompete the hirability effects of accreditation.

In 2012, The Heritage Foundation called for two policy changes that are worth considering.
First, the Foundation proposed that the government should directly accredit courses
rather than organizations\cite{burke2012accreditation}.
Second, the Foundation called for a decoupling of accreditation and federal funding.
An additional option would be to replace legal requirements for formal education with skill assessments.
With a legal requirement that prefers skills to degrees,
the public sector gains the ability to transfer formal accreditation duties
to a market model with no loss of labor quality control.

There are several reasons to be pessimistic about the feasibility of these policy changes.
Reductions to education spending are unpopular with voters in the United States.
Over ninety percent of K-12 students in the United States attended a public school in 2016\cite{us2019digest},
and there is a systematized pipeline from public school to the traditional university system.
Education represents an example of an entangled political economy\cite{wagner2014entangled}.
Robust political economy points out additional reasons to doubt rapid innovation in this space\cite{boettke2004liberalism}.
Reduced political entanglement is associated with the absence of compulsory education.
However, once compulsory laws exist, their elimination also appears intractable.
The removal of compulsory education is a qualitative change that does not appear any less subject
to the path dependency, lock-in, ratchet, and other effects
that inhibit contraction in the quantitative process of appropriations.

An interesting alternative to formal legislative change
is the emerging model of public-private partnerships in education.
In 2013, Georgia Tech formally partnered with Udacity
to produce an accredited online graduate degree in Computer Science\cite{empson_2013}.
Udacity was able to facilitate an improved online learning experience at scale with an affordable price.
Georgia Tech offered branding, legitimacy, and accreditation,
which supported a higher price point compared to the other offerings from Udacity.
In other cases, the hybridization of traditional and alternative education is indirect and informal.
Prior learning assessments and portfolio reviews are two of many processes
by which a university can award credit to a student
without formal requirements connected to the source of student learning\cite{conrad2008building}.
University support for prior learning is an implementation pattern for course-level accreditation
that does not require legislative action.
Formal and informal partnerships between traditional and alternative institutions
can yield increased market surplus for producers and consumers.

Finally, this paper evaluated practical alternative credential selection strategies.
One strategy is to leverage credentials from industry leaders.
In this study, Google represented an alternative learning provider that is also an industry leader.
Fortune 50 membership is the rule of thumb used in this study to select an industry-leading firm.
A credential from Google was the only alternative credential
to be identified as generally competitive with an accredited degree.
The second strategy is to use credential review aggregator sites to identify high prestige credentials.
This paper used Course Report as an aggregator to search for alternative credentials.
App Academy and General Assembly were identified by applying search criteria
that include a rating of 4.25 or better on a 5-point scale and a minimum of four hundred reviews.
The combination of results with information on typical job search length from Zety indicated that these credentials provide meaningful job search benefits, albeit with significantly less efficacy than an accredited degree or a credential from Google.
%%%%%% .* ------------------------------------------------------------------
% .* \nr{} User's Guide mfc
% .* Copyright (c) IBM Corporation 1996, 2000. All Rights Reserved.
% .* ------------------------------------------------------------------
\index{compiling, \nr{} programs}
\index{using the translator, as a Compiler}
\chapter{Using the translator}
\index{using the translator}
This section of the document tells you how to use the translator package.
The \nr{} translator may be used as a compiler or as an interpreter
(or it can do both in a single run, so parsing and syntax checking are only carried out once).
It can also be used as simply a syntax checker.
When used as a compiler, the intermediate Java source code may be retained, if desired.
Automatic formatting and the inclusion of comments from the \nr{} source code are also options.

\section{Using the translator as a compiler}
The installation instructions for the \nr{} translator describe how to use the package
to compile and run a simple \nr{} program (\emph{hello.nrx}).
When using the translator in this way (as a compiler),
the translator parses and checks the \nr{} source code,
and if no errors were found then generates Java source code.
This Java code is then compiled into bytecodes (\emph{.class} files) using a Java compiler,
in a process called AOT (ahead-of-time) compilation.
By default, the \emph{javac} compiler in the Java toolkit is used.
This section explains more of the options available to you when using the translator as a compiler.

\index{command, for compiling}
\index{\nr{}C, class}
The translator is invoked by running a Java program (class) which is called
\begin{verbatim}
org.netrexx.process.NetRexxC
\end{verbatim}
(\code{\nr{}C}, for short). This can be run by using the Java interpreter, for example, with the command:
\begin{verbatim}
java org.netrexx.process.NetRexxC
\end{verbatim}
\index{scripts, \nr{}C}
\index{\nr{}C, scripts}
\index{scripts, nrc}
\index{nrc scripts}
or by using a system-specific script (such as \emph{\nr{}C.cmd} or \emph{nrc.bat}).
In either case, the compiler invocation is followed by one or more file specifications
(these are the names of the files containing the \nr{} source code for the programs to be compiled).
\index{file specifications}
File specifications may include a path;
if no path is given, then \nr{}C will look in the current (working) directory for the file.
\nr{}C will add the extension \emph{.nrx} to input program names (file specifications)
if no extension was given.
So, for example, to compile \emph{hello.nrx} in the current directory, you could use any of:
\begin{verbatim}
java org.netrexx.process.NetRexxC hello
java org.netrexx.process.NetRexxC hello.nrx
NetRexxC hello.nrx
nrc hello
\end{verbatim}
(the first two should always work; the last two require that the system-specific script be available).
The resulting \emph{.class} file is placed in the current directory,
and the \emph{.crossref} (cross-reference) file is placed in the same directory as the source file
(if there are any variables and the compilation has no errors).
Here is an example of compiling two programs,
one of which is in the directory \emph{d:\textbackslash myprograms}:
\begin{verbatim}
nrc hello d:\myprograms\test2.nrx
\end{verbatim}
In this case, again, the \emph{.class} file for each program is placed in the current directory.
Note that when more than one program is specified,
they are all compiled within the same class context.
That is, they can see the classes, properties, and methods of the other programs being compiled,
much as though they were all in one file\footnote{The programs do, however, maintain their independence
(that is, they may have different \textbf{options}, \textbf{import}, and \textbf{package} instructions).}.
This allows mutually interdependent programs and classes to be compiled in a single operation.
Note that if you use the \textbf{package} instruction,
you should also read the more detailed \emph{Compiling multiple programs and using packages} section below.

\index{completion codes, from translator}
\index{return codes, from translator}
On completion, the \nr{}C class will exit with one of three return values:
0 if the compilation of all programs was successful,
1 if there were one or more warnings but no errors,
and 2 if there were one or more errors.
The result can be forced to 0 for warnings only with the \emph{-warnexit0} option.

\index{option words}
\index{flags}
As well as file names, you can also specify various option words,
which are distinguished by the word being prefixed with \emph{-}.
These flagged words (or flags) may be any of the option words
allowed on the \nr{} \textbf{options} instruction
(see the \nr{} language documentation, and the paragraph below).
These option words can be freely mixed with file specifications.
To see a full list of options, execute the \nr{}C command with the \emph{--help} option
and without specifying any files.
As the help output states, all options may be given the prefix \emph{no} for the inverse effect.

\subsection{Options}
\index{compiling,options}
Here are some examples:
\begin{verbatim}
java org.netrexx.process.NetRexxC hello -keep -strictargs
java org.netrexx.process.NetRexxC -keep hello wordclock
java org.netrexx.process.NetRexxC hello wordclock -nocompile
nrc hello
nrc hello.nrx
nrc -run hello
nrc -run Spectrum -keep
nrc hello -binary -verbose1
nrc hello -noconsole -savelog -format -keep
\end{verbatim}
Option words may be specified in lowercase, mixed case, or uppercase.
File specifications are platform-dependent and may be case sensitive,
though \nr{}C will always prefer an exact case match over a mismatch.

\textbf{Note:} The \emph{-run} option is implemented by a script
(such as \emph{nrc.bat} or \emph{\nr{}C.cmd}), not by the translator;
some scripts (such as the \emph{.bat} scripts) may require that \emph{-run}
be the first word of the command arguments, and/or be in lowercase.
They may also require that only the name of the file be given if the \emph{-run} option is used.
Check the commentary at the beginning of the script for details.

\section{Compiling multiple programs and using packages}
\index{compiling,multiple programs}
When you specify more than one program for \nr{}C to compile,
they are all compiled within the same class context:
that is, they can see the classes, properties, and methods of the other programs being compiled,
much as though they were all in one file.
This allows mutually interdependent programs and classes to be compiled in a single operation.
For example, consider the following two programs
(assumed to be in your current directory, as the files \emph{X.nrx} and \emph{Y.nrx}):
\begin{lstlisting}[label=dependencies,caption=Dependencies]
/* X.nrx */
class X
  why=Y null

/* Y.nrx */
class Y
  exe=X null
\end{lstlisting}
Each contains a reference to the other, so neither can be compiled in isolation.
However, if you compile them together, using the command:
\begin{verbatim}
nrc X Y
\end{verbatim}
the cross-references will be resolved correctly.
The total elapsed time will be significantly less, too,
as the classes on the CLASSPATH need to be located only once,
and the class files used by the \nr{}C compiler or the programs themselves
will also only be loaded (and JIT-compiled) once.

\index{projects, compiling}
\index{packages, compiling}
\index{compiling,packages}
This example works as you would expect for programs that are not in packages.
There is a restriction, though, if the classes you are compiling \emph{are} in packages
(that is, they include a \textbf{package} instruction).
\nr{}C uses either the \emph{javac} compiler or the Eclipse batch compiler \emph{ecj}
to generate the \emph{.class} files, and for mutually-dependent files like these,
both require the source files to be on the Java \emph{CLASSPATH},
in the sub-directory described by the \textbf{package} instruction.

So, for example, if your project is based on the tree \texttt{D:\textbackslash myproject}
and the two programs above specified a package, thus:
\begin{lstlisting}[label=packagedep,caption=Package Dependencies]
/* X.nrx */
package foo.bar
class X
  why=Y null

/* Y.nrx */
package foo.bar
class Y
  exe=X null
\end{lstlisting}
then:
\begin{enumerate}
\item You should put these source files in the directory
\emph{D:\textbackslash myproject\textbackslash foo\textbackslash bar}.
\item The directory \emph{D:\textbackslash myproject} should appear in your CLASSPATH setting
(if you don't do this, \emph{javac} will complain that it cannot find one or other of the classes).
\item You should then make the current directory
\emph{D:\textbackslash myproject\textbackslash foo\textbackslash bar}
and compile the programs using the command \emph{nrc X Y}, as above;
a concrete sketch of this session follows below.
\end{enumerate}
With this procedure, you should end up with the \emph{.class} files
in the same directory as the \emph{.nrx} (source) files,
and therefore also on the CLASSPATH and immediately usable by other packages.
In general, this arrangement is recommended whenever you are writing programs that reside in packages.

\textbf{Notes:}
\begin{enumerate}
\item When \emph{javac} is used to generate the \emph{.class} files,
no new \emph{.class} files will be created if any of the programs being compiled together had errors -
this avoids accidentally generating mixtures of new and old \emph{.class} files
that cannot work with each other.
\item If a class is abstract or is an adapter class,
then it should be placed in the list before any classes that extend it
(as otherwise any automatically generated methods will not be visible to the subclasses).
\end{enumerate}
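As a concrete sketch of the package procedure on Windows
(the project path is the example one from above;
the equivalent \texttt{export CLASSPATH=...} and forward-slash forms apply on Unix-like systems):
\begin{verbatim}
cd D:\myproject\foo\bar
set CLASSPATH=D:\myproject;%CLASSPATH%
nrc X Y
\end{verbatim}
After a successful run, \emph{X.class} and \emph{Y.class} sit next to
\emph{X.nrx} and \emph{Y.nrx},
inside the \emph{foo\textbackslash bar} sub-directory
that matches the \textbf{package} instruction.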
\index{build systems}
\chapter{Using build systems - ANT}
\index{ant}
From the command line, different build systems can be used to build an entire project in one go.
This chapter explains how to use ANT, one of the early Java cross-platform build tools.
With \emph{ant}, the specification for the build needs to be provided in an \emph{.xml} file;
the default is \emph{build.xml}.
\nr{} itself is built using \emph{ant}; its build.xml can be checked out in the git repository.
Two scenarios for building with \emph{ant} are described in the following sections.

Unlike \emph{make}, ant does not work with command lines but with specialized Java tasks,
which makes this build system platform independent.
A special \nr{} ant task (written in \nr{}) is packaged in the NetRexxC.jar and NetRexxF.jar files;
it needs to be specified in the build file, and the small ant-netrexx.jar file can also be used.
The official Apache package for \emph{ant} has the original \nr{} optional task written in the Java language;
this can be used, but it is not up to date with the RexxLA version.
Note that when building \nr{} from source, there are two bootstrapping situations:
\nr{} is written in itself, and it is built using the optional \nr{} \emph{ant} task,
written in \nr{}, using \emph{ant}.

\section{In-source, no packages}
In this scenario, the build is in-source;
this means the program source files and the class files are interspersed in the same directory.
This is often the case with small projects that only have a few source files and no package structure.
This situation enables a very small buildfile,
with only a handful of build goals in it (prepare, compile, and clean),
identified by \texttt{<target>} XML tags.
In this case, the compile goal is the default,
as indicated by the \texttt{default=} attribute on the \texttt{<project>} tag.
We also need to include a \texttt{<taskdef>} tag for ant to find the \nr{} task.
Also, we assume that the environment settings for the current user are in effect,
notably the one for \texttt{CLASSPATH}.
Larger projects will probably package their own libraries
and possibly need to specify build- and runtime classpaths; these are not needed here.
\lstinputlisting[label=in-source-build,caption=In-source build]{../../examples/ant-task/in-source-build/build.xml}
This build process will be run when the user enters the \texttt{ant} command,
and the result is a number of class files, if there are no errors.
In case of errors, no class files are produced.
On subsequent runs, only the classes whose source files are newer than the class files will be compiled;
this makes for an efficient build process.

\section{With package structure}
For a slightly larger project, which has its own package structure,
we can use a slightly more complicated build file that will serve many projects of this kind.
In this scenario, the source files are in a \emph{src} directory,
and the class files will be compiled to a file system directory structure based on the package names.
As an example, if the file \emph{hello.nrx} is in a \emph{src} subdirectory of the project,
and its package name is org.rexxla.examples, the \emph{hello.class} file will end up in the subdirectory
\texttt{<project>/war/WEB-INF/classes/org/rexxla/examples/}.
For universal usability, e.g.\ in a JEE webserver such as Tomcat, Jetty or JBoss, we use the WAR file structure, as is the standard for these application servers. Next to the environment, we define two properties for the \nr{} optional \emph{ant} task: we tell it to generate Java source files ('keepasjava'), and to replace Java source that is already there without asking.

\lstinputlisting[label=with-packages-build,caption=With-packages build]{../../examples/ant-task/with-packages/build.xml}

In an analogous way, we compile any sources that a larger project might have in .java files with the \emph{javac} task. In the \emph{libs} target we create the output directories as indicated in the standard. The compile task then translates the .nrx source files to the \emph{respective} files in the target directories by using a compile and a copy task. This enables us to have the same package structure in the source and target directories, which then are ready to be compressed - and packaged - into a .war file, which is a standard \emph{web archive}, with the \texttt{ant war} command. The \emph{clean} task deletes the whole directory tree that starts with \texttt{war}, which is a very efficient way to clean out all built objects (except the compressed war file itself).
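To round off this chapter, the general skeleton of a \nr{} buildfile is sketched below. This is an illustration only, not one of the shipped example files: in particular, the \texttt{classname} on the \texttt{<taskdef>} tag is an assumption here, so check the jar file you actually use for the exact class name.

\begin{lstlisting}[label=buildfile-sketch,caption=Sketch of a minimal buildfile]
<project name="hello" default="compile" basedir=".">
  <!-- make the NetRexx compile task known to ant;
       the classname below is an assumption - check your jar -->
  <taskdef name="netrexxc"
           classname="org.netrexx.ant.NetRexxC"
           classpath="NetRexxC.jar"/>
  <target name="compile">
    <netrexxc srcdir="." destdir="." replace="true"/>
  </target>
  <target name="clean">
    <delete>
      <fileset dir="." includes="*.class"/>
    </delete>
  </target>
</project>
\end{lstlisting}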
{ "alphanum_fraction": 0.7545526945, "avg_line_length": 43.9044289044, "ext": "tex", "hexsha": "8c85c0298ebe486040549d8d861fd92b1eb8b464", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_forks_repo_licenses": [ "ICU" ], "max_forks_repo_name": "RexxLA/NetRexx", "max_forks_repo_path": "documentation/pg/nrucomp.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_issues_repo_issues_event_max_datetime": "2022-02-01T16:14:50.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-24T12:13:53.000Z", "max_issues_repo_licenses": [ "ICU" ], "max_issues_repo_name": "RexxLA/NetRexx", "max_issues_repo_path": "documentation/pg/nrucomp.tex", "max_line_length": 143, "max_stars_count": null, "max_stars_repo_head_hexsha": "ec27b6e3f908fbc50cb6dc54696daea68ae59103", "max_stars_repo_licenses": [ "ICU" ], "max_stars_repo_name": "RexxLA/NetRexx", "max_stars_repo_path": "documentation/pg/nrucomp.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4738, "size": 18835 }
\section{Jaql}
Jaql~\cite{jaqlWebsite} is a high-level scripting language for the JavaScript Object Notation (JSON). It is able to run on Hadoop and break most requests down into Map/Reduce tasks. Jaql heavily borrows from SQL, XQuery, LISP, Pig Latin, JavaScript and Unix Pipes.~\cite{jaqlOverview}

Developed mainly inside IBM and with a quiet mailing list, Jaql currently faces a serious lack of documentation and community. The documentation found online is outdated and incomplete in most cases. In addition, Jaql was undergoing major changes at the time of writing this paper.

Even though a strict Map/Reduce pattern is available, there is no need to write in it. At the point of execution (i.e.\ the end of a query statement) Jaql transforms the parsed statement into another, equivalent but optimised statement. This step is comparable to a query optimiser in modern database management systems. The optimised query can in turn be transformed back to Jaql code, which is useful for debugging.

Jaql can be extended with user-defined functions, written either in Java or in Jaql itself. However, it is not possible to use Jaql as a general-purpose programming language, as it is not Turing-complete: it lacks both recursion and a universal loop function.

The current Jaql implementation features three modes to run in: in stand-alone mode Hadoop is not used at all and the jobs are not split into Map/Reduce tasks. When using Jaql with Hadoop a so-called mini-cluster can be used, which is managed by Jaql and runs all tasks on one computer in the same process (with one thread per Map/Reduce task). The last option is running Jaql on a traditional, external Hadoop cluster.
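To give a flavour of the language, a minimal query in Jaql's pipe syntax could look as follows; the input data and field names are hypothetical, illustrating the \texttt{read}, \texttt{filter}, \texttt{transform} and \texttt{write} operators:

\begin{verbatim}
// read JSON records from HDFS, keep the cheap books,
// project two fields and write the result back to HDFS
read(hdfs("books"))
  -> filter $.price < 50
  -> transform { title: $.title, price: $.price }
  -> write(hdfs("cheapBooks"));
\end{verbatim}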
{ "alphanum_fraction": 0.8020334928, "avg_line_length": 66.88, "ext": "tex", "hexsha": "94dd166967bd694c13e6fc711defd1c3f6005b20", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-05-02T22:18:22.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-07T14:02:25.000Z", "max_forks_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "rkh/hadoop-scripting", "max_forks_repo_path": "paper/jaql.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "rkh/hadoop-scripting", "max_issues_repo_path": "paper/jaql.tex", "max_line_length": 162, "max_stars_count": 3, "max_stars_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "rkh/hadoop-scripting", "max_stars_repo_path": "paper/jaql.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-09T17:42:38.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-15T00:29:12.000Z", "num_tokens": 378, "size": 1672 }
\documentclass[12pt,letterpaper]{ctexart}
\usepackage{fullpage}
\usepackage[top=2cm, bottom=4.5cm, left=2.5cm, right=2.5cm]{geometry}
\usepackage{amsmath,amsthm,amsfonts,amssymb,amscd}
\usepackage{lastpage}
\usepackage{enumerate}
\usepackage[binary-units=true]{siunitx}
\usepackage{fancyhdr}
\usepackage{mathrsfs}
\usepackage{xcolor}
\usepackage{graphicx} %package for inserting images
\usepackage{float} %package for controlling figure float placement
\usepackage{subfigure} %package for subfigures when inserting multiple images
\usepackage{listings}
\usepackage{afterpage}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=cyan,
    linkbordercolor={0 0 1}
}
\newcommand\blankpage{%
\null
\thispagestyle{empty}%
\addtocounter{page}{-1}%
\newpage
}
\renewcommand\lstlistingname{Algorithm}
\renewcommand\lstlistlistingname{Algorithms}
\def\lstlistingautorefname{Alg.}
\lstdefinestyle{Python}{
language = Python,
frame = lines,
basicstyle = \footnotesize,
keywordstyle = \color{blue},
stringstyle = \color{green},
commentstyle = \color{red}\ttfamily
}
\setlength{\parindent}{0.0in}
\setlength{\parskip}{0.05in}
% Edit these as appropriate
\newcommand\course{CS305}
\newcommand\hwnumber{2} % <-- homework number
\newcommand\NetIDa{11711918} % <-- NetID of person #1
\newcommand\NetIDb{吴烨昌} % <-- NetID of person #2 (Comment this line out for problem sets)
\pagestyle{fancyplain}
\headheight 35pt
\lhead{\NetIDa}
\lhead{\NetIDa\\\NetIDb} % <-- Comment this line out for problem sets (make sure you are person #1)
\chead{\textbf{\Large Homework \hwnumber}}
\rhead{\course \\ \today}
\lfoot{}
\cfoot{}
\rfoot{\small\thepage}
\headsep 1.5em
\begin{document}
\section*{Problem 1}
{\bf Description}
List the four broad classes of services that a transport protocol can provide. For each of the service classes, indicate if either UDP or TCP (or both) provides such a service.
{\bf Solution}
\begin{itemize}
\item Reliable transport: TCP
\item Flow control: TCP
\item Congestion control: TCP
\item Throughput guarantee: neither
\end{itemize}
\newpage
\section*{Problem 2}
{\bf Description}
Suppose within your Web browser you click on a link to obtain a Web page. The IP address for the associated URL is not cached in your local host, so a DNS lookup is necessary to obtain the IP address. Suppose that $n$ DNS servers are visited before your host receives the IP address from DNS; the successive visits incur an RTT of $\text{RTT}_1, \dots, \text{RTT}_n$. Further suppose that the Web page is associated with one HTML file, and that the HTML file references eight very small objects on the same server. Let $\text{RTT}_0$ denote the RTT between the local host and the server containing these objects. Assuming zero transmission time for the objects, please calculate the time that elapses from when the client clicks on the link for:
\begin{enumerate}
\item Non-persistent HTTP with no parallel TCP connections?
\item Non-persistent HTTP with the browser configured for 5 parallel connections?
\item Persistent HTTP?
\end{enumerate}
{\bf Solution}
The time elapsed to get the HTML file:
$$ T_0 = \underbrace{\text{RTT}_{1} + \dots + \text{RTT}_{n}}_{\text{Query $n$ DNS servers to get target IP}} + \underbrace{\text{RTT}_0}_{\text{Set up TCP connection}} + \underbrace{\text{RTT}_0}_{\text{Get HTML file}} $$
\begin{enumerate}
\item Non-persistent HTTP with no parallel TCP connections?
$$ T_1 = T_0 + 8 \times (\text{RTT}_{0} + \text{RTT}_{0}) = \text{RTT}_{1} + \dots + \text{RTT}_{n} + 18 \times \text{RTT}_{0} $$
\item Non-persistent HTTP with the browser configured for 5 parallel connections?
$$ T_2 = T_0 + 2 \times (\text{RTT}_{0} + \text{RTT}_{0}) = \text{RTT}_{1} + \dots + \text{RTT}_{n} + 6 \times \text{RTT}_{0} $$
Here the eight objects require $\lceil 8/5 \rceil = 2$ rounds of parallel fetches, each round costing one TCP setup plus one request, i.e.\ $2 \times \text{RTT}_0$.
\item Persistent HTTP?
$$ T_3 = T_0 + 8 \times \text{RTT}_{0} = \text{RTT}_{1} + \dots + \text{RTT}_{n} + 10 \times \text{RTT}_{0} $$
With a persistent (non-pipelined) connection, each object costs one $\text{RTT}_0$.
\end{enumerate}
\section*{Problem 3}
{\bf Description}
Consider distributing a file of $F$ bits to $N$ peers using a client-server architecture. Assume a fluid model where the server can simultaneously transmit to multiple peers, transmitting to each peer at different rates, as long as the combined rate does not exceed $u_s$.
\begin{enumerate}
\item Suppose that $ \frac{u_s}{N} \leq d_{min}$. Specify a distribution scheme that has a distribution time of $\frac{NF}{u_s}$.
\item Suppose that $ \frac{u_s}{N} \geq d_{min}$. Specify a distribution scheme that has a distribution time of $\frac{F}{d_{min}}$.
\item Conclude that the minimum distribution time is in general given by $max \{\frac{NF}{u_s}, \frac{F}{d_{min}}\}$.
\end{enumerate}
{\bf Solution}
\begin{enumerate}
\item The server sends the file to the $N$ clients in parallel, transmitting to every client at rate $\frac{u_s}{N}$ ($\leq d_{min}$). All clients finish receiving at the same time, so the overall distribution time is $\frac{F}{\frac{u_s}{N}} = \frac{NF}{u_s}$.
\item The server sends the file to the $N$ clients in parallel, transmitting to every client at rate $d_{min}$; the combined rate $N d_{min}$ does not exceed $u_s$ because $\frac{u_s}{N} \geq d_{min}$. All clients finish receiving at the same time, so the overall distribution time is $\frac{F}{d_{min}}$.
\item If $\frac{u_s}{N} \leq d_{min}$, then $\frac{F}{d_{min}} \leq \frac{NF}{u_s}$ and $max \{\frac{NF}{u_s}, \frac{F}{d_{min}}\} = \frac{NF}{u_s}$, which is the same result as in {\bf 1}. If $\frac{u_s}{N} \geq d_{min}$, then $\frac{F}{d_{min}} \geq \frac{NF}{u_s}$ and $max \{\frac{NF}{u_s}, \frac{F}{d_{min}}\} = \frac{F}{d_{min}}$, which is the same result as in {\bf 2}. So the minimum distribution time can in general be given by $max \{\frac{NF}{u_s}, \frac{F}{d_{min}}\}$.
\end{enumerate}
\newpage
\section*{Problem 4}
{\bf Description}
Consider distributing a file of $F$ bits to $N$ peers using a P2P architecture. Assume a fluid model. For simplicity assume that $d_{min}$ is very large, so that peer download bandwidth is never a bottleneck.
\begin{enumerate}
\item Suppose that $u_s \leq \frac{u_s + u_1 + \dots + u_N}{N}$. Specify a distribution scheme that has a distribution time of $\frac{F}{u_s}$.
\item Suppose that $u_s \geq \frac{u_s + u_1 + \dots + u_N}{N}$. Specify a distribution scheme that has a distribution time of $\frac{NF}{u_s + u_1 + \dots + u_N}$.
\item Conclude that the minimum distribution time is in general given by $max \{\frac{F}{u_s}, \frac{NF}{u_s + u_1 + \dots + u_N}\}$.
\end{enumerate}
{\bf Solution}
Throughout, let $u = u_1 + \dots + u_N$ denote the aggregate peer upload capacity.
\begin{enumerate}
\item The server splits the file into $N$ slices and sends the $i^{th}$ part to the $i^{th}$ peer ($i = 1 \dots N$). The server transmits the $i^{th}$ part at rate $r_i = \frac{u_i}{u}u_s$, so the total server rate is $r_1 + \dots + r_N = u_s$. Peer $i$ forwards its received part to the other $N - 1$ peers at aggregated rate $R_i = (N - 1) \frac{u_i}{u} u_s = u_i \frac{u_s}{u/(N - 1)} \leq u_i$, since $u_s \leq \frac{u_s + u}{N}$ implies $u_s \leq \frac{u}{N - 1}$. Every peer therefore receives the entire file at combined rate $u_s$, and the overall distribution time is $\frac{F}{u_s}$.
\item The server splits the file into $N + 1$ slices; it sends the $i^{th}$ part to the $i^{th}$ peer ($i = 1 \dots N$) to be relayed, and sends the $(N+1)^{th}$ part directly to every peer. The server transmits the relayed parts at rates $r_i = \frac{u_i}{N - 1}$, and the direct part to each peer at rate $r_{N+1} = \frac{1}{N}\left(u_s - \frac{u}{N - 1}\right)$, which is non-negative because $u_s \geq \frac{u_s + u}{N}$ implies $u_s \geq \frac{u}{N - 1}$. The total server rate is then $r_1 + \dots + r_N + N r_{N+1} = \frac{u}{N - 1} + u_s - \frac{u}{N - 1} = u_s$. Peer $i$ forwards its relayed part to the other $N - 1$ peers at aggregated rate $(N - 1)\frac{u_i}{N - 1} = u_i$, so each peer receives data at aggregated rate $R_i = \frac{u}{N - 1} + \frac{u_s - \frac{u}{N - 1}}{N} = \frac{u_s + u}{N}$. The overall distribution time is $\frac{F}{\frac{u_s + u}{N}} = \frac{NF}{u_s + u} = \frac{NF}{u_s + u_1 + \dots + u_N}$.
\item If $u_s \leq \frac{u_s + u_1 + \dots + u_N}{N}$, then $\frac{F}{u_s} \geq \frac{NF}{u_s + u_1 + \dots + u_N}$ and $max \{\frac{F}{u_s}, \frac{NF}{u_s + u_1 + \dots + u_N}\} = \frac{F}{u_s}$, which is the same result as in {\bf 1}. If $u_s \geq \frac{u_s + u_1 + \dots + u_N}{N}$, then $\frac{F}{u_s} \leq \frac{NF}{u_s + u_1 + \dots + u_N}$ and $max \{\frac{F}{u_s}, \frac{NF}{u_s + u_1 + \dots + u_N}\} = \frac{NF}{u_s + u_1 + \dots + u_N}$, which is the same result as in {\bf 2}. So the minimum distribution time can in general be given by $max \{\frac{F}{u_s}, \frac{NF}{u_s + u_1 + \dots + u_N}\}$.
\end{enumerate}
\section*{Problem 5}
{\bf Description}
Consider a DASH system for which there are $N$ video versions (at $N$ different rates and qualities) and $N$ audio versions (at $N$ different rates and qualities). Suppose we want to allow the player to choose at any time any of the $N$ video versions and any of the $N$ audio versions.
\begin{enumerate}
\item If we create files so that the audio is mixed in with the video, so that the server sends only one media stream at a given time, how many files will the server need to store (each with a different URL)?
\item If the server instead sends the audio and video streams separately and has the client synchronize the streams, how many files will the server need to store?
\end{enumerate}
{\bf Solution}
\begin{enumerate}
\item $N^2$ (one combined file for each of the $N \times N$ video/audio pairings)
\item $2N$ (the $N$ video files plus the $N$ audio files)
\end{enumerate}
\end{document}
{ "alphanum_fraction": 0.6886935764, "avg_line_length": 42.2752293578, "ext": "tex", "hexsha": "2a107620ac33a003e3801ea67dd47d56705cc618", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-10-10T08:56:57.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-09T15:41:26.000Z", "max_forks_repo_head_hexsha": "86d83787aa577b8f2d66b5410e73102411c45e46", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Wycers/Codelib", "max_forks_repo_path": "CS305/homework2/index.tex", "max_issues_count": 28, "max_issues_repo_head_hexsha": "86d83787aa577b8f2d66b5410e73102411c45e46", "max_issues_repo_issues_event_max_datetime": "2022-02-26T18:50:00.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-04T23:47:22.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Wycers/Codelib", "max_issues_repo_path": "CS305/homework2/index.tex", "max_line_length": 702, "max_stars_count": 22, "max_stars_repo_head_hexsha": "86d83787aa577b8f2d66b5410e73102411c45e46", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Wycers/Codelib", "max_stars_repo_path": "CS305/homework2/index.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-12T02:12:19.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-07T06:55:10.000Z", "num_tokens": 2970, "size": 9216 }
The Adaptive Time Step (ATS) utility for the TDIS Package can be activated by specifying the ATS6 option in the TDIS input file. If activated, \mf will read ATS input according to the following description. The adaptive time step utility is activated for any stress periods that are listed in the PERIODDATA block below. If a stress period is adaptive, then the \texttt{nstp} and \texttt{tsmult} parameters in the TDIS input file have no effect on time step progression; instead, the ATS settings specified for the period are used to control the time step progression. The ATS implementation in \mf is patterned after the approach implemented in MODFLOW-USG.

There are two fundamental parts to the ATS utility. The first is the capability to handle failure of a solution to converge. If ATS is active for a stress period in which the solution fails to converge, then the program will continue to try smaller time steps until convergence is achieved or the length of the time step reaches the lower allowable limit (\texttt{dtmin}). Once this lower limit on the time step is reached, the program will follow the established logic for non-adaptive time steps. That is, the program will either stop and write concluding information, or the program will continue to the next time step if the CONTINUE option is specified in the simulation name file.

The second fundamental part of the ATS utility is dynamic adjustment of the time step size according to simulation behavior. The ATS utility in \mf has been implemented in a generic and modular manner in which any model, exchange, or solution can submit a preferred time step length to be used in determining the next step. The ATS utility will proceed with the smallest time step submitted by these different simulation components. In the present implementation, the numerical solution will submit a preferred time step length based on the convergence pattern for the previous time step. If the numerical solution is relatively easy (as measured by the number of outer iterations), then the length of the next time step will increase by a factor of the \texttt{dtadj} variable. Conversely, if the solution is difficult to obtain, then the length of the next time step will decrease, by dividing the previous time step length by the \texttt{dtadj} variable.

In the present ATS implementation, time series variables are interpolated based on the starting and ending times of the time step. If solution failure was encountered and a time step is retried with a smaller time step size, time series variables are re-interpolated for the shortened time step. In most cases, this is the intended behavior; however, if time series contain a much finer level of temporal detail, then this additional detail could exacerbate convergence problems.

A limitation of the present ATS implementation is that there is no way to explicitly specify times within a stress period for saving output. Output can be obtained at the end of a period, and within a period according to the Output Control time step settings. For example, the Output Control settings allow for printing and saving based on the FIRST, LAST, FREQUENCY, and STEPS options, but these are based on time steps, the lengths of which are adaptive and not necessarily known before the simulation. Thus, there is no way to request output at specific times within a stress period managed by ATS. If observations are used for models and packages, observations are written for every time step.
For automated parameter estimation applications, additional post-processing of output files may be required in order to align simulated values with measurements. \vspace{5mm} \subsection{Structure of Blocks} %\lstinputlisting[style=blockdefinition]{./mf6ivar/tex/sim-ats-options.dat} \lstinputlisting[style=blockdefinition]{./mf6ivar/tex/utl-ats-dimensions.dat} \lstinputlisting[style=blockdefinition]{./mf6ivar/tex/utl-ats-perioddata.dat} \vspace{5mm} \subsection{Explanation of Variables} \begin{description} \input{./mf6ivar/tex/utl-ats-desc.tex} \end{description} \vspace{5mm} \subsection{Example Input File} \lstinputlisting[style=inputfile]{./mf6ivar/examples/utl-ats-example.dat}
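In summary, the step-length adjustment described in this chapter behaves roughly as sketched below. This is a schematic summary rather than the exact program logic; $\Delta t_n$ is the length of time step $n$, $k_n$ is its outer iteration count, and \texttt{dtmin}, \texttt{dtmax}, and \texttt{dtadj} are the PERIODDATA variables:

\begin{displaymath}
\Delta t_{n+1} = \min\left(\max\left(\Delta t_n^{*},\ \mathtt{dtmin}\right),\ \mathtt{dtmax}\right),
\qquad
\Delta t_n^{*} =
\begin{cases}
\Delta t_n \cdot \mathtt{dtadj} & k_n \text{ small (easy convergence)} \\
\Delta t_n / \mathtt{dtadj} & k_n \text{ large (difficult convergence)} \\
\Delta t_n & \text{otherwise.}
\end{cases}
\end{displaymath}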
{ "alphanum_fraction": 0.8056411472, "avg_line_length": 145.4827586207, "ext": "tex", "hexsha": "94f818064c4a617984afa7bf83076c4a520b5909", "lang": "TeX", "max_forks_count": 87, "max_forks_repo_forks_event_max_datetime": "2022-03-30T05:31:40.000Z", "max_forks_repo_forks_event_min_datetime": "2017-12-13T21:40:39.000Z", "max_forks_repo_head_hexsha": "83ac72ee3b6f580aaffef6352cf15c1697d3ce66", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "scharlton2/modflow6", "max_forks_repo_path": "doc/mf6io/utl_ats.tex", "max_issues_count": 331, "max_issues_repo_head_hexsha": "83ac72ee3b6f580aaffef6352cf15c1697d3ce66", "max_issues_repo_issues_event_max_datetime": "2022-03-29T05:57:00.000Z", "max_issues_repo_issues_event_min_datetime": "2018-01-10T21:22:48.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "scharlton2/modflow6", "max_issues_repo_path": "doc/mf6io/utl_ats.tex", "max_line_length": 966, "max_stars_count": 102, "max_stars_repo_head_hexsha": "83ac72ee3b6f580aaffef6352cf15c1697d3ce66", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "scharlton2/modflow6", "max_stars_repo_path": "doc/mf6io/utl_ats.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T01:47:28.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-19T09:56:38.000Z", "num_tokens": 893, "size": 4219 }
\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\usepackage{graphicx}
\usepackage{lastpage}
\usepackage[parfill]{parskip}
\usepackage{xcolor}
\usepackage{geometry}
\usepackage{algorithm}
\usepackage{algpseudocode}
\makeatletter
\renewcommand{\ALG@beginalgorithmic}{\scriptsize}
\makeatother
\graphicspath{ {./report_images/} }
\fancypagestyle{style1}{
\lhead{Control \#1924744}
\rhead{Page \thepage \hspace{1pt} of \pageref{LastPage}}
\cfoot{}
}
\fancypagestyle{style2}{
\lhead{Control \#1924744}
\rhead{Page \thepage \hspace{1pt}}
\cfoot{}
}
\begin{document}
\begin{titlepage}
\title{Developing an Aerial Disaster \\* Relief Response System}
\author{Control \#1924744}
\date{28th January 2019}
\maketitle
\pagenumbering{gobble}
\end{titlepage}
\newpage
\pagenumbering{roman}
\pagestyle{style2}
\section*{\hfil MEMO\hfil}
\hrulefill
\bf{Date:} \normalfont{28th January 2019}\\*\\*
\bf{To:} \normalfont{Ms. Smith (CEO)}\\*\\*
\bf{From:} \normalfont{\#1924744}\\*\\*
\bf{Subject:} \normalfont{Puerto Rico Disaster Model}\\*\\*
{\color{black}\hrule}
In this memorandum, we present to you our model of a drone-based disaster response system capable of dealing with the Puerto Rico hurricane scenario. After careful consideration of the demands of the situation, we have selected a suitable fleet of drones and a set of medical package configurations that may be employed in response to such an emergency. Below, we outline the main components of our model and summarize the results of its performance.

Our first challenge was determining the ideal packing configurations for ISO containers. We treated this as an optimization problem. Through doing so, we were able to devise three different algorithms (In-Fitter, Cuboid Reduction Method and RatioCheck) that we used together to figure out how to pack supplies in the most efficient way possible.

Our model demanded the use of three ISO containers to supply our medical centres. To select the drones suitable for delivery we prioritized their maximum range over the delivery time. We concluded that given a 24 hour delivery deadline, speed was a less important factor. Using our algorithms, we concluded that we would be able to supply our medical centres adequately. The time frames generated ranged from 8 months for a container supplying several medical centres to 8 years for one medical centre. Furthermore, this strategy allowed us to map 100\% of the major road networks linking the medical centres using disposable drones.

While this model is not perfect, when compared to data from the 2017 Puerto Rico hurricane scenario, it is evident that the time frame of support provided is certainly sufficient for most medical centres. Therefore, we conclude that while one must initially invest more in allocating three ISO containers to the disaster area, the corresponding payoff largely merits the price. We use this as justification to present our design recommendations to you in the report that follows.
\newpage
\tableofcontents
\newpage
\newpage
\pagenumbering{arabic}
\pagestyle{style1}
\section{Introduction}
\subsection{Background}
Puerto Rico is a small US territory situated on the 18th parallel. It has a population of approximately 3.29 million, concentrated mainly around the coast, with San Juan being its most populated area$^{1}$. Puerto Rico's tropical climate is starkly divided between the northern two thirds and the southern third of the island.
The northern side experiences much more humid weather than the southern side and is the area we are most concerned with.\\* Puerto Rico's annual rainfall also differs greatly between the eastern side, where the Sierra de Luquillo rainforest is located, and the western side of the island. May to November is generally considered to be hurricane season in Puerto Rico while December to March is known as the dry season$^{2}$. In recent years climate change has caused an acceleration of storms in the tropical belt and poses a serious threat to the future prosperity of Puerto Rico. Efforts are ongoing to combat this problem but critics have been outspoken about the lack of focused effort to address it$^{3}$.
\begin{figure}[h]
\centering
\includegraphics[scale =0.5]{Rainfall}
\caption{Mean Annual Precipitation 1971-2000 (Source: USGS)}
\label{rainfall}
\end{figure}
Hurricane Maria struck Puerto Rico on September 20th 2017 and destroyed Puerto Rico's electrical and communication grid. With over 2900 fatalities, it quickly became the worst natural disaster ever to hit Puerto Rico.$^4$
\subsection{Problem Restatement}
As asked by HELP.INC, we were tasked with developing a DroneGo drone fleet that could help deal with future disasters in Puerto Rico by analyzing the 2017 hurricane. Our task is divided into two main objectives:
\begin{itemize}
\item[-]Delivering required medical packages to the associated medical centres each day.
\item[-]Assessing the major highways and roads that link these centres for ground route planning.
\end{itemize}
In order to achieve our first objective we decided to start from the bottom and make our way up. That is to say, we began by seeing how to fit medical packages into cargo bay containers; following this, we moved on to seeing which CB-MP (cargo bay to medical package) combination would suit each medical centre's daily needs. Moving up the ladder, we ranked our drones by maximum range, and so on.
\\*One major problem appeared to be where to leave a container and how to pack items into a container; we recognized the latter as an optimization problem, specifically a 'bin packing problem'$^5$.
\section{Terminology}
Throughout the paper acronyms and numbers may be used to abbreviate repeated words and terms. While these are usually explained elsewhere they can also be found here for convenience.
\begin{center}
\begin{tabular}{ |c|l| }
\hline
\bf{Acronym} & \bf{Explanation} \\\hline
CB & Drone Cargo Bay \\
MP or M & Medical Package \\
MC & Medical Centre\\
C & Container \\
FOV & Field of View \\
\hline
\end{tabular}
\end{center}
The following medical centres were also represented using numbers in the maps found in subsequent sections.
\begin{center}
\begin{tabular}{ |c|l| }
\hline
Number & Medical Centre \\\hline
1 & Caribbean Medical Centre, Fajardo \\
2 & Hospital HIMA, San Pablo \\
3 & Hospital Pavia Santurce, San Juan \\
4 & Puerto Rico Children's Hospital, Bayamon \\
5 & Hospital Pavia Arecibo, Arecibo \\
\hline
\end{tabular}
\end{center}
\section{The Assumptions}
The following core assumptions were made before starting our first model. These were necessary to fully understand the strategy we would need to develop to distribute medical supplies and survey roads:
\begin{itemize}
\item[-]Each drone must return to a container after completing one or more deliveries. This is because we assume drones must be recharged/restocked before setting out again.
\item[-]Drones can only be used once a day.
Drone LiPo batteries are some of the slowest-charging batteries around$^{6}$, and the size of the drone suggests recharging will take an entire day.
\item[-]Drones could not be recharged at medical centres. Initially we considered charging them in the centres, but after research discovered that many hospitals in the 2017 crisis were without power or generator fuel.
\item[-]The contents of every container will not be damaged or suffer from any accidental malfunction.
\item[-]Major roads and highways can be approximated as straight lines or a zigzag of lines when needed. This is justified as small road deviations will only be slightly longer than straight lines.
\item[-]Drones are given special permission to fly in airport airspace. This is because drone delivery in San Juan would otherwise be impossible, as drones would have to break FAA regulations$^7$.
\item[-]Drones are assumed to be unable to glide without power. If drones could glide then only one container would be theoretically necessary; however, this is too unlikely.$^{9}$
\item[-]Items are packed into the container in such a way that removing the necessary items will not cause issues or delays.
\item[-]The drone's CB is included in the drone's container dimensions.
\end{itemize}
\section{The Ideal Setup}
In order to understand which drone was suitable to use in deliveries, it was necessary to begin with the core fundamentals of how a cargo bay would store medical packages. This was recognized as a bin packing problem; however, we focused solely on packing MP1s into both CBs.
\subsection{Algorithm 1: In-fitter Algorithm}
We developed our own algorithm known as the 'In-fitter' algorithm that would determine the best packing configuration of one type of MP into a container.
\begin{algorithm}
\caption{'In-fitter' Algorithm }
\label{array-sum}
\begin{algorithmic}[1]
\Procedure{infitter}{$box\textsubscript{1}$, $box\textsubscript{2}$}
\State $ParameterOne$: $box\textsubscript{1}$ Dimensions of the bigger box (Array) $[L, W, H]$
\State $ParameterTwo$: $box\textsubscript{2}$ Dimensions of the smaller box (Array) $[L, W, H]$
\State $Output$: The largest number of $box\textsubscript{2}$'s that will fit into $box\textsubscript{1}$
\State
\State $perms \leftarrow [1,2,3;1,3,2;2,1,3;2,3,1;3,1,2;3,2,1]$ \Comment{permutations of box orientation}
\State $amountFit \leftarrow [6] $
\For {$i = 1$ to $6$ }
\State $boxL \leftarrow box\textsubscript{2}[perms[i][1]] $ \Comment{get current permutation configuration}
\State $boxW \leftarrow box\textsubscript{2}[perms[i][2]] $
\State $boxH \leftarrow box\textsubscript{2}[perms[i][3]] $
\State
\State $amountL \leftarrow floor( box\textsubscript{1}[1]/boxL )$
\State $amountW \leftarrow floor( box\textsubscript{1}[2]/boxW )$
\State $amountH \leftarrow floor( box\textsubscript{1}[3]/boxH )$
\State $amountFit[i] \leftarrow (amountL * amountW * amountH)$
\EndFor
\State Return $findLargest(amountFit)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{P-CB (Package to Cargo Bay) Configuration}
Since $MP_2$ and $MP_3$ can each fit once inside $MP_1$, \bf{we can set a lower bound} \normalfont{on how many MPs can be stored in each CB.}\\* Using the INFITTER algorithm we saw that 2 $MP_1$ can be held inside a CB1 and 12 $MP_1$ inside a CB2.
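As a cross-check, the In-fitter idea is easily transcribed into a few lines of Python. This is our illustrative sketch (the actual models were written in MATLAB), and the cargo bay and MP1 dimensions in the example calls are the inch measurements we assume from the problem statement; the results agree with the counts quoted above.

\begin{verbatim}
from itertools import permutations

def infitter(big, small):
    # Largest number of identical `small` boxes [L, W, H] that fit into
    # `big` when packed axis-aligned in a regular grid, trying all six
    # orientations of the small box.
    best = 0
    for l, w, h in permutations(small):
        best = max(best, (big[0] // l) * (big[1] // w) * (big[2] // h))
    return best

print(infitter([8, 10, 14], [14, 7, 5]))    # CB1 -> 2 MP1 packages
print(infitter([24, 20, 20], [14, 7, 5]))   # CB2 -> 12 MP1 packages
\end{verbatim}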
The table below illustrates this further:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Cargo Bay & Smallest Possible Linear Combination of $MP_1, MP_2$ and $MP_3$\\\hline
1 & $aMP_1 + bMP_2 = 2$ \\
2 & $aMP_1 + bMP_2 + cMP_3 = 12$ \\
\hline
\end{tabular}
\end{center}
As seen in the table, CB1 is limited to only sending combinations of $MP_1$ and $MP_2$, while CB2 can have combinations of all three.
\subsection{CB Combinations For Medical Centres}
Here we look at how many deliveries would be required to each medical centre depending on the drone cargo bay type.
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
MC & Daily Need & CB1 & CB2 \\\hline
1 & $(M_1,M_3)$ & $(M_1,M_3)$ & $(M_1,M_3)$ \\
2 & $(M_1,M_1,M_3)$ & $(M_1),(M_1,M_3)$ & $(M_1,M_1,M_3)$ \\
3 & $(M_1,M_2)$ & $(M_1,M_2)$ & $ (M_1,M_2)$ \\
4 & $(M_1,M_1,M_2,M_3,M_3)$ & $(M_1,M_2),(M_1,M_3),(M_3)$ & $(M_1,M_1,M_2,M_3,M_3)$ \\
5 & $(M_1)$ & $(M_1)$ & $(M_1)$ \\
\hline
\end{tabular}
\end{center}
Looking at the table we can see that 3 and 5 both have available CB's that match their daily needs (CB1 and CB2 respectively). For the others, it is harder to immediately see which configuration will suit them.\\* It is now important to determine a strategy by which these MP's will be delivered to each location.
\section{Containers and Locations}
\subsection{Eliminating Unnecessary Drones}
Looking at the requirements of each medical centre, it is obvious that the most important factor is the range a drone can travel rather than its speed. This is because the benefit of a drone that arrives 40 minutes earlier is trivial when operating on a 24 hour delivery deadline. We then proceeded to rank drones in terms of their distance as well as whether they were a CB1 or CB2 type drone. Drones A and H were immediately discarded as they were either completely useless for the task required or simply very inefficient.
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
Drone & Distance (Km) & CB \\\hline
B & 24.4 & 1 \\
C & 17.1 & 2 \\
D & 7.9 & 1 \\
E & 6.5 & 2 \\
F & 14.4 & 2 \\
G & 7.5 & 2 \\
\hline
\end{tabular}
\end{center}
Looking at the table we can see that the best CB1 drone is drone B; likewise, the best CB2 drone is drone C. In second place are drones D and F respectively.
\subsection{Drone Flight Radius}
Before developing a configuration for each container it was important to see where containers could be placed regardless of their contents. Using simple geometry and a generated map of Puerto Rico we could instantly visualize the radial distance a drone could travel from a medical centre.\\* The regions where these circles intersected told us where we could place a container. This allowed us to immediately discard any unsuitable area and focus on where the circle boundaries overlapped.
\\*We also imposed a few local assumptions to realistically reflect each drone's performance.
\begin{itemize}
\item[-]Drones would have a 5 percent range reduction due to carrying a large payload.
\item[-]Drones would have an additional range reduction from time used climbing to 400 feet (121m) and descending back down.
400 feet is the FAA drone height limit.$^7$
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[scale =0.15]{CB1}
\caption{CB1 drone radii around each MC}
\label{cb1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale =0.15]{CB2}
\caption{CB2 drone radii around each MC}
\label{cb2}
\end{figure}
\newpage
\subsection{Number of Containers To Use}
Ideally a minimum number of containers should be used to appropriately distribute resources in order to minimize costs. However, looking at the diagrams above we can see that MC1 and MC5 are completely separated from any other MC regardless of which drone we use$^{*}$. This means that \bf{we must require all three containers} \normalfont{to deliver medical packages to each medical centre. Since we need three containers we will call them C1, C2 and C3 to save time. Each container will serve the following MCs:}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Container & Medical Centres Served \\\hline
C1 & MC1 \\
C2 & MC2, MC3, MC4 \\
C3 & MC5 \\
\hline
\end{tabular}
\end{center}
One \bf{key assumption} \normalfont{here is that containers can be placed anywhere we require}. One justification could be that the DroneGo fleet serves as an emergency backup similar to a bomb shelter or food warehouse. A second justification could be that containers are airdropped in safely. We can now look at each individual container and determine the packing configuration we want to serve their respective MCs.\\*
\\** \footnotesize{B can technically service MC1, MC2 and MC3; however, it would then not service MC4.}\normalsize
\section{Container Packing Strategy}
In order to solve the problem of packing 3 unique MPs and one or more drones into a container, we decided to research bin packing algorithms. However, we were unable to find an existing algorithm that satisfactorily answered this question under these conditions.
\\*We therefore resorted to developing our own algorithm, which we called the \bf{'Cuboid Reduction Method'} \normalfont{, that would be able to efficiently pack different medical packages into the same container.}
\subsection{Cuboid Reduction Method (CRM)}
Our algorithm known as the Cuboid Reduction Method relies on dividing our container into X cuboids of equal dimensions. We then pack each cuboid with only one type of medical package to avoid the multi-box problem (packing three different boxes into a container).\\* In order to make sure we have the correct balance of each MP, we look at the ratio of the MPs to each other and assign the same ratio of cuboids to each MP.\\* The INFITTER algorithm from Section 2 is then used to see how many MPs of the same type will fit inside the associated cuboid. (E.g.\ a container is split into 10 equally sized cuboids; the associated medical centre requires a 1:1 ratio of MP1 and MP2 packages, so we assign 5 cuboids to MP1 and 5 cuboids to MP2.) Note: in cases where cuboids could not be distributed perfectly, the closest ratio was used.
\\* The CRM was used on all containers to pack our supplies.
\subsection{Improving the CRM}
While the cuboid reduction method is useful in assigning a balanced ratio of cuboids for each type of MP, it does have a serious flaw. By assigning the number of cuboids that satisfy the MP1 : MP2 : MP3 ratio, we ignore the fact that a cuboid will fit many more MP2s or MP3s than MP1s.
This is because they are so much smaller than an MP1.\\* When running our model initially for C2 with a required ratio of $5:2:3$, we noticed that our results were giving us quantities of $(1296\ MP_1, 1620\ MP_2, 990\ MP_3)$, which go completely against the ratios required for each MP! In order to solve this we created an algorithm called the RatioCheck algorithm that allocated cuboids to have the closest correct ratios.
\subsubsection{RatioCheck Algorithm}
The RatioCheck algorithm would view the MP1:MP2:MP3 ratio required and find the best balanced ratio of cuboids to satisfy this.
\begin{algorithm}
\caption{MED Pack cuboid Ratio Calculator}
\begin{algorithmic}[1]
\Procedure{ratiocalculator}{$cuboidAmt$, $dailyReq$, $cuboidDim$, $medDim$}
\State $ParameterOne$: The amount of Cuboids available
\State $ParameterTwo$: The daily requirement of Medical Packages $[MED 1,MED 2,MED 3]$
\State $ParameterThree$: An array of dimensions of the cuboid
\State $ParameterFour$: A 2d array of medical package dimensions
\State $Output$: An array with the required amount of cuboids for each Medical Package
\State
\State $ratios \leftarrow [0,0,0]$
\State $order \leftarrow [0,0,0]$
\State
\State $total \leftarrow 0$
\For{$i = 1$ to $3$}
\State $total \leftarrow (total + dailyReq[i])$
\EndFor
\State
\For{$i = 1$ to $3$}
\State $percentage \leftarrow (dailyReq[i]/total)$
\State $tempRatio \leftarrow (percentage * cuboidAmt)$
\State $order[i] \leftarrow (tempRatio - floor(tempRatio))$ \Comment{track the fractional part lost by rounding}
\State $ratios[i] \leftarrow floor(tempRatio) $
\EndFor
\State
\State $total \leftarrow 0$
\For{$i = 1$ to $3$}
\State $total \leftarrow (total + ratios[i])$
\EndFor
\State
\State $cuboidsLeft \leftarrow (cuboidAmt - total)$
\State $highestPriority \leftarrow order[1]$
\State $place \leftarrow 1$
\For{$i = 2$ to $3$} \Comment{find which med. pack. was most affected by the rounding above and give it priority}
\If{$highestPriority< order[i]$}
\State $highestPriority \leftarrow order[i]$
\State $place \leftarrow i$
\EndIf
\EndFor
\State
\State $ratios(place) \leftarrow ratios(place) + cuboidsLeft$ \Comment{dedicate excess cuboid(s) to the prioritised package}
\State Return $ratios$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\newpage
\section{Mapping Roads}
In order to balance delivering supplies with performing road reconnaissance, we first calculated the maximum packing capabilities of C1, C2 and C3. Since C1 and C3 only serve MC1 and MC5, they can contain a huge amount of supplies. This led us to decide to allocate an additional drone to C1, C2 and C3 for purely recon activities. This way, one drone will deliver the daily MPs to the MC while another drone scans the different roads in the area.\\*\\*
There are a few caveats to this procedure:
\begin{itemize}
\item Firstly, using a drone to scan roads means that extra space must be used in the container to allocate it.
\item Secondly, a drone will travel along approximate straight lines in order to minimize fuel waste. Determining how many straight lines to approximate a road by was done later, once the exact coordinates of C1, C2 and C3 were decided.
\end{itemize}
While C1 and C3 could be dropped anywhere within the red circles of Fig 2, since the recon drone is designed to purely assess roads, it would be ideal to drop a container on the intersection between the circle's circumference and a major road.
This would allow our delivery drone to safely deliver supplies while having the recon drone start immediately on a major highway rather than waste fuel going towards one.
\subsection{Approximating Road Distances}
By using several linear approximations to the roads we could reduce them to measurable quantities. The following map shows the major roads of Puerto Rico and each medical centre, as well as approximations for each road.
\begin{figure}[h]
\centering
\includegraphics[scale =0.5]{ConnectedLineMap}
\caption{Linear Approximation vs Real Roads (Source; Google Maps)}
\label{road-approx}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale =0.15]{CircleRoadMap}
\caption{Road Map and Delivery Drone Radii}
\label{road-approx}
\end{figure}
\newpage
In Fig 4 the blue lines represent the actual major roads which we wish to follow. The purple line shows our approximated path that we wish the drone to follow. The red pins mark where the hospitals are.\\* Since the drones will operate at an elevation of 400 feet (121m), we will have a radial field of view of 692 feet (211m). This was based on commercial drones, which have a FOV of up to $120^{\circ}$.$^{8}$
\\*It is a consequence of approximations that there will be times when the road will fall out of the drone's FOV; in order to minimize this error we added more straight lines between roads until the error was negligible.
\\*The following table demonstrates the error percentage between actual road and approximated road total distance.\\*
\begin{center}
\begin{tabular}{ |c|c|c|c|}
\hline
Road & Approx Distance (Km) & Actual Distance (Km) & Error Percentage \\\hline
MC1-MC2 & 66.5 & 68 & 2.21\% \\
MC1-MC3 & 58 & 58.1 & 0.17\% \\
MC2-MC3 & 31.7 & 32 & 0.94\% \\
MC3-MC4 & 13.4 & 13 & 3.04\% \\
MC4-MC5 & 70.1 & 71 & 1.27\% \\
\hline
\end{tabular}
\end{center}
We could also argue that heavy road damage will be visible for large stretches of the road, so we could predict whether unseen segments are damaged based on previous parts of the road.
\newpage
In Fig 5 we further approximate our roads and place them into our generated map of Puerto Rico with the radial circles that correspond to the specific drone range of each MC. This way we will place our containers right on the intersection of the delivery drone radius with the major roads. While we have some leeway on choosing container coordinates for C2 and C3, we decided to pick the following coordinates:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Container & Container Coordinates (Long and Lat) \\\hline
C1 & -65.88 18.37 \\
C2 & -66.04 18.32 \\
C3 & -66.5 18.44 \\
\hline
\end{tabular}
\end{center}
\newpage
\subsection{Road Recon Model (RRM)}
Our Road Recon Model was designed to maximize reconnaissance range, i.e.\ to map as much road length as possible. With our container coordinates, we decided to use drone B for recon as it has the furthest range of 24.4 km. Plotting the distance along each road from our container coordinates, we were able to obtain the following map. This gives us a clear view of how far a recon drone would be able to travel down any road and back.
\begin{figure}[h]
\centering
\includegraphics[scale =0.18]{NewRoadNoCircles}
\caption{Road Map}
\label{road-approx}
\end{figure}
The yellow lines represent the distance a recon drone will travel from the container location, which is marked blue. Looking at this we can immediately see that the major roads between MC1, MC2, MC3 and MC4 are (almost) completely covered except for a small region beside MC1.
Likewise the yellow region from C3 to MC5 almost completely reaches MC5 but stops just short. This is still a very large region of road that the drones can scan, and demonstrates the power of aerial surveillance for ground-based route planning.
\newpage
\section{Applying Model to 2017 Crisis}
Benchmarking is an important step in testing any product or service, and our model is no different. By comparing our approach with the actual approach used, we can see if our model holds any actual value in assisting in the delivery of supplies to centres.
\subsection{2017 Crisis Official Figures}
When Hurricane Maria struck Puerto Rico, the entire island's electrical grid was wiped out. It took 7 months before electricity was restored to 97.7\% of the population.$^8$ Using this figure we can test our model to see if we can provide supplies to the medical centres for up to and (if necessary) over 7 months. Road damage to Puerto Rico during the hurricane was also extensive, but figures are much harder to find due to the slow progress of repairs. We will assume that road repairs will take at least 7 months if not more. Major highways in Puerto Rico were damaged as well during the disaster.
\subsection{Our Strategy}
Our first and simplest strategy of utilizing our model to deal with the 2017 Puerto Rico crisis was as follows:
\begin{itemize}
\item[-]Drop or have all three containers at their respective locations as specified in (7.1)
\item[-]Implement the RRM and CRM to scan roads and pack supplies for each container
\end{itemize}
\subsubsection{Drone Package Configuration and Schedule}
The following configurations were tabulated for each medical centre.

Container 1\\*\\*
\begin{tabular}{ |c|c|c| }
\hline
Drone & Payload & Route \\\hline
F & $MP_1,MP_3$ & MC1 \\
B & NA & RECON \\
\hline
\end{tabular}

Container 2\\*\\*
\begin{tabular}{ |c|c|c| }
\hline
Drone & Payload & Route \\\hline
F & $MP_1,MP_1,MP_3$ & MC2 \\
F & $MP_1,MP_2$ & MC3 \\
F & $MP_1,MP_1,MP_2,MP_3,MP_3$ & MC4 \\
B & NA & RECON \\
\hline
\end{tabular}

Container 3\\*\\*
\begin{tabular}{ |c|c|c| }
\hline
Drone & Payload & Route \\\hline
F & $MP_1$ & MC5 \\
B & NA & RECON \\
\hline
\end{tabular}
\subsubsection{Results}
We obtained the following data for how long supplies would last for each container area:
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& C1 & C2 & C3 \\\hline
$MP_1$ & 1458 & 1458 & 2916 \\
$MP_2$ & 0 & 1215 & 0 \\
$MP_3$ & 1782 & 792 & 0 \\
Recon Drone & B & B & B \\
Delivery Drone & B & 3F & B \\
Days & 1458 (4.0 years) & 264 (8.6 months) & 2916 (7.9 years) \\
\hline
\end{tabular}
\end{center}
As we can see, this strategy successfully provides medical packages for each medical centre for months or even years. Since 7 months is the time it took to restore power, this supply window is sufficient. This strategy also allows us to cover roughly 60\% of the major road network between the medical centres.
\subsection{Optimizing this Strategy}
In this alternative strategy we use the fact that C1 and C3 will last several years longer than C2. We also use the fact that the one-way range of a drone is twice the effective range of a drone that must return to a container. Using these facts we came up with the idea of fitting one extra recon drone to each of C1 and C3. These drones' purpose would be to travel as far as possible in order to scan as much road as possible without returning.\\* We calculated that with this strategy we could cover 100\% of the road network between all the medical centres while only losing approximately 400 days for C1 and approximately 300 days for C3.
Considering that this still means C1 and C3 can last years more than C2, we concluded this was worthwhile. The only drawback to this is that an expensive drone has been lost somewhere down the road, but this cost is small in light of the cost of road repairs.
\newpage
\section{Conclusions}
To conclude, our model allowed us to deal with the Puerto Rico crisis to a very satisfactory degree. We were able to map 100\% of the major road network between each medical centre and provide supplies that would last months or even years. Medical centres 1 and 5 were able to be supplied for years, and the others could last just more than 8 months. One drawback with our CRM was that we did not implement existing bin packing algorithms that would have surely given us the optimal packing. However, optimal packing was not necessary given that all 3 containers were used. Using all three containers was completely necessary due to the distance between medical centres.
\\* With further time we would have researched and tried to discover more optimal bin packing algorithms, as well as strategies for placing containers regardless of where the medical centres were located. The CRM could also have been improved by designing the ratio calculator to be more efficient in distributing packages.
\newpage
\section{References}
\begin{enumerate}
\item CIA. 2019. Central America :: Puerto Rico — The World Factbook. [ONLINE] Available at: https://www.cia.gov/library/publications/the-world-factbook/geos/rq.html. [Accessed 26 January 2019]
\item USGS. 2016. USGS CFWSC - Climate of Puerto Rico. [ONLINE] Available at: https://pr.water.usgs.gov/drought/climate.html. [Accessed 26 January 2019]
\item Ezcurra, Rivera-Collazo. 2017. An assessment of the impacts of climate change on Puerto Rico's Cultural Heritage with a case study on sea-level rise. [ONLINE] \\*Available at: https://www.sciencedirect.com/science/article/pii/S1296207417306441. [Accessed 26 January 2019]
\item FEMA. 2019. Hurricane Maria | FEMA.gov. [ONLINE] Available at: https://www.fema.gov/hurricane-maria. [Accessed 27 January 2019].
\item Nikos Drakos. 1996. Bin Packing. [ONLINE] \\*Available at: https://www8.cs.umu.se/kurser/TDBA77/VT06/algorithms/\\*BOOK/BOOK5/NODE192.HTM. [Accessed 26 January 2019]
\item Brian Schneider. 2018. A Guide to Understanding LiPo Batteries. [ONLINE] Available at: https://rogershobbycenter.com/lipoguide/. [Accessed 26 January 2019]
\item FAA. 2019. eCFR; Code of Federal Regulations. [ONLINE] Available at: https://www.ecfr.gov/cgi-bin/text-idx?SID=dc908fb739912b0e6dcb7d7d88cfe6a7\&mc\\*=true\&node=pt14.2.107\&rgn=div5. [Accessed 27 January 2019]
\item Softonic. 2018. ProMark Camera Drone B07B9 - Get Now. [ONLINE] Available at: https://en.softonic.com/solutions/electronics/promark-camera-drone-b07b9. [Accessed 26 January 2019]
\item Popular Science. 2019. Gliding Algorithm Lets Drones Surf The Winds For Hours. [ONLINE] Available at: https://www.popsci.com/new-software-lets-drones-surf-winds-for-hours. [Accessed 28 January 2019]
\end{enumerate}
\newpage
\section{Appendix}
Here is extra pseudo-code for the CRM for the interested reader. All models were generated in MATLAB.
\begin{algorithm}
\caption{Cuboid Reduction Method}
\begin{algorithmic}[1]
\Procedure{crm}{$droneAmount$, $dailyRequirement$}
\State $ParameterOne$: The amount of drones needed for the container
\State $ParameterTwo$: The daily requirement of Medical Packages $[MED 1,MED 2,MED 3]$
\State $Output$: The amount of days a container of supplies can last
\State
\State $cuboid \leftarrow [46, 46, 47]$ \Comment{The dimensions of our cuboid}
\State $medPs \leftarrow [14, 7, 5 ; 5, 8, 5 ; 12, 7, 4]$ \Comment{2d Array of Medical Package Dimensions}
\State $cuboidsLeft \leftarrow 20 - droneAmount$ \Comment{Amount of cuboids left after we pack in drones}
\State
\State $packRatios \leftarrow RATIOCALCULATOR(cuboidsLeft, dailyRequirement, cuboid, medPs)$
\State
\State $packAmount \leftarrow [0,0,0] $
\For {$i = 1$ to $3$ }
\State $packAmount[i] \leftarrow ( INFITTER(cuboid, medPs[i] ) * packRatios[i] )$
\EndFor
\State
\State Return $DAYCALCULATOR(dailyRequirement, packAmount)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{document}
{ "alphanum_fraction": 0.7569485883, "avg_line_length": 52.1983606557, "ext": "tex", "hexsha": "8a3428d2cbed9cb9f0f455753cb3c35f4df43b24", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a398f0ba8784eab9fffc200584df84c01c750d48", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dastronmighty/MCM-2019", "max_forks_repo_path": "Super_Drone_Fleet.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a398f0ba8784eab9fffc200584df84c01c750d48", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dastronmighty/MCM-2019", "max_issues_repo_path": "Super_Drone_Fleet.tex", "max_line_length": 547, "max_stars_count": null, "max_stars_repo_head_hexsha": "a398f0ba8784eab9fffc200584df84c01c750d48", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dastronmighty/MCM-2019", "max_stars_repo_path": "Super_Drone_Fleet.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8537, "size": 31841 }
\documentclass[11pt]{article}
\usepackage[left=1in, right=1in, top=1in, bottom=1in]{geometry}
\usepackage[T1]{fontenc}
\usepackage{stix}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{bm}
\usepackage{natbib}
\usepackage[section]{placeins}
\author{Wen Yan}
\newcommand{\AVE}[1]{\ensuremath{\langle {#1} \rangle}}
\newcommand{\ABS}[1]{\ensuremath{\lvert {#1} \rvert}}
\newcommand{\dpone}[2]{\ensuremath{\displaystyle\frac{\partial {#1}}{\partial {#2}}}}
\newcommand{\dptwo}[2]{\ensuremath{\displaystyle\frac{\partial^2 {#1}}{\partial {#2}^2}}}
\newcommand{\dpn}[3]{\ensuremath{\displaystyle\frac{\partial^{#1} {#2}}{\partial {#3}^{#1}}}}
\newcommand{\bA}{\ensuremath{\bm{A}}}
\newcommand{\bB}{\ensuremath{\bm{B}}}
\newcommand{\bC}{\ensuremath{\bm{C}}}
\newcommand{\bD}{\ensuremath{\bm{D}}}
\newcommand{\bE}{\ensuremath{\bm{E}}}
\newcommand{\bF}{\ensuremath{\bm{F}}}
\newcommand{\bG}{\ensuremath{\bm{G}}}
\newcommand{\bH}{\ensuremath{\bm{H}}}
\newcommand{\bI}{\ensuremath{\bm{I}}}
\newcommand{\bJ}{\ensuremath{\bm{J}}}
\newcommand{\bK}{\ensuremath{\bm{K}}}
\newcommand{\bL}{\ensuremath{\bm{L}}}
\newcommand{\bM}{\ensuremath{\bm{M}}}
\newcommand{\bN}{\ensuremath{\bm{N}}}
\newcommand{\bO}{\ensuremath{\bm{O}}}
\newcommand{\bP}{\ensuremath{\bm{P}}}
\newcommand{\bQ}{\ensuremath{\bm{Q}}}
\newcommand{\bR}{\ensuremath{\bm{R}}}
\newcommand{\bS}{\ensuremath{\bm{S}}}
\newcommand{\bT}{\ensuremath{\bm{T}}}
\newcommand{\bU}{\ensuremath{\bm{U}}}
\newcommand{\bV}{\ensuremath{\bm{V}}}
\newcommand{\bW}{\ensuremath{\bm{W}}}
\newcommand{\bX}{\ensuremath{\bm{X}}}
\newcommand{\bY}{\ensuremath{\bm{Y}}}
\newcommand{\bZ}{\ensuremath{\bm{Z}}}
\newcommand{\ba}{\ensuremath{\bm{a}}}
\newcommand{\bb}{\ensuremath{\bm{b}}}
\newcommand{\bc}{\ensuremath{\bm{c}}}
\newcommand{\bd}{\ensuremath{\bm{d}}}
\newcommand{\be}{\ensuremath{\bm{e}}}
\newcommand{\bff}{\ensuremath{\bm{f}}}
\newcommand{\bg}{\ensuremath{\bm{g}}}
\newcommand{\bh}{\ensuremath{\bm{h}}}
\newcommand{\bi}{\ensuremath{\bm{i}}}
\newcommand{\bj}{\ensuremath{\bm{j}}}
\newcommand{\bk}{\ensuremath{\bm{k}}}
\newcommand{\bl}{\ensuremath{\bm{l}}}
\newcommand{\bmm}{\ensuremath{\bm{m}}}
\newcommand{\bn}{\ensuremath{\bm{n}}}
\newcommand{\bo}{\ensuremath{\bm{o}}}
\newcommand{\bp}{\ensuremath{\bm{p}}}
\newcommand{\bq}{\ensuremath{\bm{q}}}
\newcommand{\br}{\ensuremath{\bm{r}}}
\newcommand{\bs}{\ensuremath{\bm{s}}}
\newcommand{\bt}{\ensuremath{\bm{t}}}
\newcommand{\bu}{\ensuremath{\bm{u}}}
\newcommand{\bv}{\ensuremath{\bm{v}}}
\newcommand{\bw}{\ensuremath{\bm{w}}}
\newcommand{\bx}{\ensuremath{\bm{x}}}
\newcommand{\by}{\ensuremath{\bm{y}}}
\newcommand{\bz}{\ensuremath{\bm{z}}}
\newcommand{\bsigma}{\ensuremath{\bm{\sigma}}}
\begin{document}
\title{Singularity solutions in Stokes flow}
\maketitle
\section{Introduction}
The purpose of this document is to clear up the chaotic mess in the literature about Stokes flow singularities, including names, signs, indices, and notation. This document is mostly based on the following materials:
\begin{itemize}
\item 1. Blake, J. R. \& Chwang, A. T. Fundamental singularities of viscous flow. J Eng Math 8, 23-29 (1974).
\item 2. Durlofsky, L., Brady, J. F. \& Bossis, G. Dynamic Simulation of Hydrodynamically Interacting Particles. Journal of Fluid Mechanics 180, 21-49 (1987).
\item 3. Kim, S. \& Karrila, S. J. Microhydrodynamics: Principles and Selected Applications. (Courier Corporation, 2005).
\item STFMM3DLib document
\end{itemize}
\textbf{Important Remark: }The force, torque, and stresslet in this document are all applied to the fluid.
\section{Definitions}
The Stokes equation:
\begin{align}
  \nabla\cdot\bsigma = -\nabla p + \mu \nabla^2 \bu &= -\bF \delta(\bx-\bx_F) \\
  \nabla\cdot\bu &=0
\end{align}
Define the vector $\br$ pointing from source to target, and the Oseen--Burgers tensor $G_{ij}$ together with its derivatives:
\begin{align}
  G_{ij} &= \frac{\delta_{ij}}{r} + \frac{r_ir_j}{r^3}\\
  G_{ij,k} &= \left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right] \\
  \nabla^2 G_{ij} &= 2\frac{\delta_{ij}}{r^3} - 6\frac{r_ir_j}{r^5} = -2\nabla\nabla \frac{1}{r}
\end{align}
Note that $G_{ij,k}$ contains a symmetric and an anti-symmetric part in the indices $j,k$. We need this in the derivations for the doublet, stresslet, and rotlet.
In the next section we consider a sphere with radius $a$, and consider the force exerted by this sphere on the fluid. Let $\bn$ denote the unit normal vector pointing outward from the sphere. We have:
\begin{align}
  F_i &= \int \sigma_{ij} n_j dS \\
  L_i &= \int \epsilon_{ijk} y_j \sigma_{kl} n_l dS \\
  D_{ij} &= \int \sigma_{ik} n_k y_j dS \\
  S_{ij} &= \frac{1}{2}\left(D_{ij}+D_{ji}\right) - \frac{1}{3}D_{kk}\delta_{ij}\\
  T_{ij} &= \frac{1}{2}\left(D_{ij}-D_{ji}\right)
\end{align}
$S_{ij}$ is symmetric, and $T_{ij}$ is anti-symmetric. Both of them are trace-free tensors. Further, $D_{ij},S_{ij},T_{ij}$ refer to force strength with index $i$ and direction with index $j$.

\section{Propagators for a sphere of finite radius $a$}
This part is taken from Durlofsky et al., 1987.
\begin{align}
  u_i &= \frac{1}{8\pi\mu} \left(1+\frac{1}{6}a^2\nabla^2 \right) G_{ij}F_j + \frac{1}{8\pi\mu} R_{ij} L_j + \frac{1}{8\pi\mu} \left(1+\frac{1}{10}a^2\nabla^2 \right) K_{ijk}S_{jk}
\end{align}
Here
$$ K_{ijk} = \frac{1}{2}\left(G_{ij,k}+G_{ik,j}\right) = \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} . $$
$$ R_{ij} = \frac{\epsilon_{ijk}r_k}{r^3}.$$
Since $S_{jk}$ is trace-free by definition, $S_{jk}\delta_{jk}=0$, so the flow disturbance can also be written as:
\begin{align}
  \left(1+\frac{1}{10}a^2\nabla^2 \right)\left(- \frac{3r_ir_jr_k}{r^5}\right)S_{jk}
\end{align}
The torque $L_j$ is related to the rotlet $T_{ij}$ as:
\begin{align}
  T_{ij}&=\frac{1}{2}\epsilon_{ijk}L_k\\
  L_i &= \epsilon_{ijk} T_{jk}
\end{align}

\subsection{Fax\'en's laws}
\begin{align}
  U_i - u_i^\infty &= \frac{F_i}{6\pi\mu a} + \left(1+\frac{1}{6}a^2\nabla^2\right)u_i' \\
  \Omega_i - \Omega_i^\infty &= \frac{L_i}{8\pi\mu a^3} + \frac{1}{2}\epsilon_{ijk}\nabla_j u_k' \\
  -E_{ij}^\infty &= \frac{S_{ij}}{\frac{20}{3}\pi\mu a^3} +\left(1+\frac{1}{10}a^2\nabla^2\right) e_{ij}'
\end{align}

\section{Limit of point singularities: $a\to 0$}
\subsection{Velocity}
This is equivalent to setting $a=0$ in Eq.~(11), but here we rederive it to make things clear. We start from the point-multipole expansion.
The force term is straightforward and the doublet term is:
\begin{align}
  u_i &= \frac{1}{8\pi\mu} G_{ij,k} D_{jk} \\
   &= \frac{1}{8\pi\mu} \left(\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right]\right)D_{jk}
\end{align}
The propagators for $S_{ij}$ and $T_{ij}$ can be constructed as follows:
\begin{align}
  \frac{1}{2} u_i &= \frac{1}{8\pi\mu} \frac{1}{2} D_{jk} G_{ij,k} \\
  \frac{1}{2} u_i &= \frac{1}{8\pi\mu} \frac{1}{2} D_{kj} G_{ik,j}
\end{align}
Add them together, utilizing that $G_{ij,k}$ contains a symmetric and an anti-symmetric part:
\begin{align}
  u_i &= \frac{1}{8\pi\mu} \frac{1}{2}\left(D_{jk}+D_{kj}\right)\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \frac{1}{8\pi\mu} \frac{1}{2}\left(D_{jk}-D_{kj}\right) \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right]\\
  &= \frac{1}{8\pi\mu}S_{jk}\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \frac{1}{8\pi\mu}\left( \frac{1}{3} D_{ii}\delta_{jk} \right)\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \frac{1}{8\pi\mu}T_{jk} \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right]
\end{align}
It is clear that the trace-dependent term is zero:
\begin{align}
  \delta_{jk} \left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] = 3\frac{r_i}{r^3} - 3\frac{r_ir_jr_j}{r^5} = 0
\end{align}
$S_{jk}$ is trace-free so $S_{jk}\delta_{jk}=0$. Therefore:
\begin{align}
  u_i = \frac{1}{8\pi\mu}S_{jk}\left[ - \frac{3r_ir_jr_k}{r^5} \right] + \frac{1}{8\pi\mu}T_{jk} \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right]
\end{align}
Further utilizing the relation between the torque $L_j$ and the rotlet $T_{jk}$, we have:
\begin{align}
  T_{jk} \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right] &= \frac{1}{2}\epsilon_{jkl}L_l \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right] = \frac{1}{2} L_l \left(\epsilon_{jil}r_j - \epsilon_{ikl}r_k\right) \frac{1}{r^3} \\
  &= L_l \frac{\epsilon_{ilj}r_j}{r^3}
\end{align}
We get back to the same equation by setting $a=0$ in Eq.~(11).

\textbf{Remark: } The finite-size factor $a^2\nabla^2$ appears for the stresslet $S_{jk}$ but not for the torque $L_j$ in Eq.~(11); this is due to the fact that $\nabla^2 L_l \frac{\epsilon_{ilj}r_j}{r^3} =0 $.

\textbf{Remark 2:} The equation in Blake 1971 and STFMM3D for $D_{jk}$ seems to be off by a negative sign.

\subsection{Pressure}
\begin{align}
  p &= \frac{1}{4\pi} \frac{r_k}{r^3} F_k + \frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5} + \frac{\delta_{jk}}{r^3}\right)D_{jk} \\
  & = \frac{1}{4\pi} \frac{r_k}{r^3} F_k + \frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5}\right)S_{jk}\\
  \dpone{p}{x_i} &= \frac{1}{4\pi}\frac{r^2 F_i - 3r_ir_kF_k}{r^5} + \frac{1}{4\pi} \left(15\frac{r_ir_jr_k}{r^7}-3\frac{\delta_{ij}r_k+\delta_{ik}r_j+\delta_{jk}r_i}{r^5}\right) D_{jk} \\
  &= \frac{1}{4\pi}\frac{r^2F_i - 3r_ir_kF_k}{r^5} + \frac{1}{4\pi} \left(15\frac{r_ir_jr_k}{r^7}-3\frac{\delta_{ij}r_k+\delta_{ik}r_j}{r^5}\right) S_{jk}
\end{align}
The simplifications hold since $S_{jk}$ is symmetric and trace-free. The torque/rotlet does not contribute to the pressure disturbance.
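For a concrete numerical check, the combined point-singularity velocity above can be coded up directly. The following C sketch (our own illustration; the function name and argument conventions are not from any particular library) sums the Stokeslet, rotlet (given the torque $L$), and stresslet contributions for a trace-free $S_{jk}$, with $\br$ pointing from the source to the target:
\begin{verbatim}
#include <math.h>

/* Velocity disturbance u at source-to-target separation r due to a point
 * force F, point torque L, and trace-free stresslet S (a -> 0 limit). */
void point_singularity_velocity(const double r[3], const double F[3],
                                const double L[3], const double S[3][3],
                                double mu, double u[3])
{
    const double pi  = 3.14159265358979323846;
    const double rn  = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
    const double r3  = rn*rn*rn, r5 = r3*rn*rn;
    const double pre = 1.0/(8.0*pi*mu);
    /* epsilon_{ilj} L_l r_j = (L x r)_i, the rotlet term */
    const double Lxr[3] = { L[1]*r[2] - L[2]*r[1],
                            L[2]*r[0] - L[0]*r[2],
                            L[0]*r[1] - L[1]*r[0] };
    const double Fr = F[0]*r[0] + F[1]*r[1] + F[2]*r[2];
    double rSr = 0.0;
    for (int j = 0; j < 3; ++j)
        for (int k = 0; k < 3; ++k)
            rSr += r[j]*S[j][k]*r[k];                   /* r_j S_{jk} r_k */
    for (int i = 0; i < 3; ++i) {
        const double stokeslet = F[i]/rn + Fr*r[i]/r3;  /* G_{ij} F_j    */
        const double rotlet    = Lxr[i]/r3;             /* rotlet term   */
        const double stresslet = -3.0*r[i]*rSr/r5;      /* stresslet     */
        u[i] = pre*(stokeslet + rotlet + stresslet);
    }
}
\end{verbatim}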
\subsection{Double layer potential}
The following two combinations satisfy the Stokes equation, for trace-free $S_{jk}$ and general $D_{jk}$, respectively:
\begin{align}
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5}\right)S_{jk}, \quad u_i = \frac{1}{8\pi\mu}\left[ - \frac{3r_ir_jr_k}{r^5} \right] S_{jk} \\
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5} + \frac{\delta_{jk}}{r^3}\right)D_{jk} ,\quad u_i = \frac{1}{8\pi\mu} \left(\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right] + \left[\frac{\delta_{ik}r_j}{r^3} -\frac{\delta_{ij}r_k}{r^3} \right]\right)D_{jk}
\end{align}
For the $S_{jk}$ combination, substituting into the Stokes equation gives:
\begin{align}
  -\dpone{p}{x_i} + \mu\nabla^2 u_i = -\frac{3}{4\pi}\frac{r_i}{r^5} S_{jj}
\end{align}
Because $S_{jk}$ is trace-free by definition, $S_{jj}=0$ and the Stokes equation is satisfied. Note that the satisfaction of the Stokes equation does not require the symmetry of $S_{jk}$. For the $D_{jk}$ combination, the residual is identically zero, so the Stokes equation is satisfied for arbitrary $D_{jk}$.

\textbf{Important: } In boundary integral theory the double layer kernel $\frac{1}{8\pi\mu}S_{jk}\left[ - \frac{3r_ir_jr_k}{r^5} \right]$ is used with an arbitrary source $v_kg_j$, where the trace-free condition is not necessarily satisfied. In this case, the pressure calculated by $\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5}\right)v_kg_j$ is not correct. For arbitrary $D_{jk}$, including $D_{jk}=v_kg_j$, the following combination of pressure and velocity kernels still satisfies the Stokes equation:
\begin{align}\label{eq:doublelayerfmm}
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5} + \frac{\delta_{jk}}{r^3}\right)D_{jk}, \quad u_i = \frac{1}{8\pi\mu}\left[ - \frac{3r_ir_jr_k}{r^5} \right] D_{jk}
\end{align}
This is the correct formula which should be used in the double layer boundary integral equations (cf.\ the STFMMLIB documentation).

\subsection{Derivatives of double layer potential}
Gradient and Laplacian, where $\mu\nabla^2 \bu = \nabla p$ due to the Stokes equation:
\begin{align}
  \left[\frac{r_ir_jr_k}{r^5} \right]_{,l} &= \delta_{il}r_jr_k\frac{1}{r^5} + \delta_{jl}r_ir_k\frac{1}{r^5} + \delta_{kl}r_ir_j\frac{1}{r^5} + r_ir_jr_k\left(-5\frac{r_l}{r^7}\right) \\
  \left[\frac{r_ir_jr_k}{r^5} \right]_{,ll} &= 2\left[\delta_{ij}r_k\frac{1}{r^5}+\delta_{ik}r_j\frac{1}{r^5}+\delta_{jk}r_i\frac{1}{r^5}\right]-10\left[\frac{1}{r^7}r_ir_jr_k\right]
\end{align}

\section{Discussion}
\textbf{1. } The two propagators $\left[ \frac{\delta_{jk}r_i}{r^3} - \frac{3r_ir_jr_k}{r^5} \right]$ and $\left[ - \frac{3r_ir_jr_k}{r^5} \right]$ are used interchangeably in the literature, but they are equivalent only for trace-free tensors.

\textbf{2.} The stress generated by a point force is
$$\sigma_{ij} = -p\delta_{ij} + \mu \left(u_{i,j}+u_{j,i}\right) = -\frac{3}{4\pi}\frac{r_ir_jr_k}{r^5}F_k. $$

\textbf{3. } Pay attention to the prefactors. The prefactors in STFMM3D and Blake 1971 for the stress propagators to velocity and pressure seem to be both off by a factor of 2 and a negative sign.

\section{FMM}
In KIFMM the single-layer kernel can be used for the M2M, M2L, and L2L operations of both the single layer and double layer potentials. However, the double layer potential Eq.~\ref{eq:doublelayerfmm} cannot be expressed by single layers if $D_{jk}$ is not trace-free. In this case, we could set:
\begin{align}
  D_{jk} = S_{jk} + \frac{1}{3}D_{mm}\delta_{jk}
\end{align}
where $S_{jk}$ is trace-free and $D_{mm}$ is the trace.
In this way, Eq.~\ref{eq:doublelayerfmm} is converted to:
\begin{align}
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5} + \frac{\delta_{jk}}{r^3}\right)\left(S_{jk} + \frac{1}{3}D_{mm}\delta_{jk}\right), \quad u_i = \frac{1}{8\pi\mu}\left[ - \frac{3r_ir_jr_k}{r^5} \right] \left(S_{jk} + \frac{1}{3}D_{mm}\delta_{jk}\right)
\end{align}
It simplifies to:
\begin{align}
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5}\right) S_{jk}, \quad u_i = \frac{1}{8\pi\mu}\left[ - \frac{3r_ir_jr_k}{r^5} \right]S_{jk} + \frac{1}{8\pi\mu}\left(-\frac{r_i}{r^3}\right)D_{mm}
\end{align}
The $S_{jk}$ part can be described by single layer sources, but the $D_{mm}$ term cannot. Therefore we should extend the single layer kernel from 3 to 4 dimensions: $(F_x,F_y,F_z,D_{mm})$. After the extension, the single layer kernel is:
\begin{align}\label{eq:singlelayerfmm}
  p=\frac{1}{4\pi} \frac{r_k}{r^3} F_k,\quad u_i = \frac{1}{8\pi\mu}\left(\frac{r^2\delta_{ij}+r_ir_j}{r^3} F_j -\frac{r_i}{r^3}D_{mm}\right)
\end{align}
Note that the trace $D_{mm}$ of the double layer does not generate pressure. Therefore the pressure gradient is the same as that given in Section 4.2. The Laplacian of $u_i$ due to the extra $D_{mm}$ term is also zero. The velocity gradient due to $D_{mm}$ is:
\begin{align}
  \dpone{u_i}{x_j} = \frac{1}{8\pi\mu}\left(\frac{3r_ir_j}{r^5}-\frac{\delta_{ij}}{r^3}\right)D_{mm}
\end{align}
The traction is then computed based on this.

\section{Summary: what exactly is computed in the PVFMM wrapper}
The Stokes single layer kernel is $4\times4$, from $(F_x,F_y,F_z,D_{mm})$ to $(p,u_x,u_y,u_z)$, as shown in Eq.~\ref{eq:singlelayerfmm}:
\begin{align}
  p=\frac{1}{4\pi} \frac{r_k}{r^3} F_k,\quad u_i = \frac{1}{8\pi\mu}\left(\frac{r^2\delta_{ij}+r_ir_j}{r^3} F_j -\frac{r_i}{r^3}D_{mm}\right)
\end{align}
The double layer kernel is $9\times4$, from $(D_{xx},D_{xy},D_{xz},D_{yx},D_{yy},D_{yz},D_{zx},D_{zy},D_{zz})$ to $(p,u_x,u_y,u_z)$, as shown in Eq.~\ref{eq:doublelayerfmm}:
\begin{align}
  p& =\frac{1}{4\pi}\left(-3\frac{r_jr_k}{r^5} + \frac{\delta_{jk}}{r^3}\right)D_{jk}, \quad u_i = \frac{1}{8\pi\mu}\left[ - \frac{3r_ir_jr_k}{r^5} \right] D_{jk}
\end{align}
$D_{jk}$ here is arbitrary: it is limited neither to trace-free cases nor to cases where $D_{jk}=v_kg_j$. The gradient, Laplacian, and traction are computed with derivatives of these.

\end{document}
{ "alphanum_fraction": 0.6724876655, "avg_line_length": 55.2114695341, "ext": "tex", "hexsha": "b2f14cad9b80621b0922079efdbcb5a37ea91ab1", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:07:05.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-19T18:16:30.000Z", "max_forks_repo_head_hexsha": "e562b7b1d5f8c9b63472ad568a3185e3e4612157", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "benlandrum/STKFMM", "max_forks_repo_path": "Note/StokesSingularity.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "e562b7b1d5f8c9b63472ad568a3185e3e4612157", "max_issues_repo_issues_event_max_datetime": "2022-03-14T13:38:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-08T00:51:57.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "benlandrum/STKFMM", "max_issues_repo_path": "Note/StokesSingularity.tex", "max_line_length": 353, "max_stars_count": 9, "max_stars_repo_head_hexsha": "e562b7b1d5f8c9b63472ad568a3185e3e4612157", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "benlandrum/STKFMM", "max_stars_repo_path": "Note/StokesSingularity.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-02T20:05:34.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-25T14:52:53.000Z", "num_tokens": 6145, "size": 15404 }
\clearpage \subsection{Comments} % (fold) \label{sub:comments} A program's source code contains instructions for the actions the computer must perform. However, this code is written and maintained by people. It is often useful to be able to place comments in the code to help someone reading that code understand how the code works or what it is trying to achieve. This text is not something that should be translated into machine code. Programming languages support the ability for programmers to embed \emph{comments} into the source code that are ignored by the compiler. \bigskip \mynote{ \begin{itemize} \item It is good practice to place a comment at the top of your code explaining what the program does. \item Comments should be included to help other people read your code. You will also find these comments useful when you return to your code after a long break. \item Make your comments meaningful, try to capture your intentions and ideas. \item Comments have no impact on the output produced by the compiler. \end{itemize} } % subsection comments (end)
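For example, the short C program below (written for this section rather than taken from any larger project) shows comments in use: a block comment at the top explains what the program does, and line comments capture the intent of individual statements. None of this text is translated into machine code.

\begin{verbatim}
/*
 * greeting.c
 * Reads the user's name and prints a personalised greeting.
 */
#include <stdio.h>

int main(void)
{
    char name[40];              /* buffer for the user's name */

    printf("Please enter your name: ");
    scanf("%39s", name);        /* read at most 39 characters */

    printf("Hello, %s!\n", name);
    return 0;
}
\end{verbatim}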
{ "alphanum_fraction": 0.7884972171, "avg_line_length": 53.9, "ext": "tex", "hexsha": "3abeebb096d726d6901bffe7816a897e90b16f93", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_path": "topics/program-creation/concepts/comments.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_path": "topics/program-creation/concepts/comments.tex", "max_line_length": 373, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_path": "topics/program-creation/concepts/comments.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "num_tokens": 232, "size": 1078 }
\section{Introduction}
A \textbf{support vector machine} (SVM, also called a support vector network) is a kind of binary classification model in supervised learning. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary classifier.
\par From simple to complex, there are three levels of SVM: the linear SVM in the linearly separable case, the linear SVM, and the non-linear SVM.
\begin{itemize}
\item Linear SVM in the linearly separable case: when the training data are linearly separable, we can use hard margin maximization to learn a binary linear classifier.
\item Linear SVM: when the training data are approximately linearly separable, we can use soft margin maximization to learn a binary linear classifier.
\item Non-linear SVM: when the training data are not linearly separable, we can use kernel methods together with soft margin maximization to learn a binary non-linear classifier.
\end{itemize}

\section{Linear SVM in linearly separable case}
\paragraph{Linearly separable} Let $\displaystyle X_{0}$ and $\displaystyle X_{1}$ be two sets of points in an $n$-dimensional Euclidean space. Then $\displaystyle X_{0}$ and $\displaystyle X_{1}$ are linearly separable if there exists an $n$-dimensional real vector $\omega$ and a real value $b$, such that every point $\displaystyle x\in X_{0}$ satisfies $\displaystyle\omega\cdot x+b>0$ and every point $\displaystyle x\in X_{1}$ satisfies $\displaystyle \omega\cdot x+b<0$.
\\
\\We are given a training dataset of $\displaystyle N$ points of the form
$$(x_{1},y_{1}),\cdots,(x_{N},y_{N})$$
where $\displaystyle x_{i}\in \mathbf{R}^n,y_{i}\in \{-1,1\}$. $\displaystyle y_{i}$ is a label that indicates which category $\displaystyle x_{i}$ belongs to, and we suppose that our training data are linearly separable.
\par We want to find the ``maximum-margin hyperplane'' that divides the group of points $x_{i}$ for which $\displaystyle y_{i}=1$ from the group of points for which $\displaystyle y_{i}=-1$, which is defined so that the distance between the hyperplane and the nearest point $\displaystyle x_{i}$ from either group is maximized.
\par Any hyperplane can be written as the set of points $\displaystyle x$ satisfying $\displaystyle \omega\cdot x+b=0$, where $\displaystyle \omega$ is the (not necessarily normalized) normal vector to the hyperplane. The parameter $\displaystyle\ \frac{b}{\|\omega\|}\ $ determines the offset of the hyperplane from the origin along the normal vector $\displaystyle\omega$.

\subsection{Hard margin}
If the training data are linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the ``margin'', and the maximum-margin hyperplane is the hyperplane that lies halfway between them.
\noindent
\includegraphics[height=5cm]{margin}
\paragraph{Functional margin} Given a training dataset $T$ and hyperplane $\displaystyle(\omega,b)$, we define the functional margin of hyperplane $\displaystyle(\omega,b)$ $w.r.t.$ data $\displaystyle(x_{i},y_{i})$ as
\begin{equation}
\widehat{\gamma_{i}}=y_{i}(\omega\cdot x_{i}+b)
\end{equation}
and the functional margin of hyperplane $\displaystyle(\omega,b)$ $w.r.t.$ dataset $T$ as
\begin{equation}
\widehat{\gamma}=\min_{i=1,\cdots,N}\widehat{\gamma_{i}}
\end{equation}
\paragraph{Geometric margin} Given a training dataset $T$ and hyperplane $(\omega,b)$, we define the geometric margin of hyperplane $(\omega,b)$ $w.r.t.$ data $(x_{i},y_{i})$ as
\begin{equation}
\displaystyle\gamma_{i}=\frac{\widehat{\gamma_{i}}}{\|\omega\|}
\end{equation}
and the geometric margin of hyperplane $(\omega,b)$ $w.r.t.$ dataset $T$ as
\begin{equation}
\displaystyle\gamma=\min_{i=1,\cdots,N}\gamma_{i}=\frac{\widehat{\gamma}}{\|\omega\|}
\end{equation}
\\Notice that if we change $(\omega,b)$ into $(\lambda\omega,\lambda b)$, the hyperplane and the geometric margin remain unchanged, but the functional margin $\displaystyle\widehat{\gamma}$ changes into $\displaystyle\lambda\widehat{\gamma}$. In fact, what we want to maximize is the geometric margin, but we can use the freedom in the functional margin to simplify the optimization problem.

\subsection{Hard margin maximization method}
The basic idea of the linear SVM is to maximize the geometric margin, i.e., to find the maximum margin classifier. The problem can be described as follows
\begin{equation}
\begin{split}
&\max_{\bm\omega,b}\ \ \gamma\\
&s.t.\ \min\limits_{i=1,2,\cdots,N}\ y_{i}(\bm\omega\bm\cdot\bm x_{i}+b)>0\\
\end{split}
\end{equation}
Notice two things below:
\begin{enumerate}
\item $\gamma(\lambda\bm\omega,\lambda b)=\gamma(\bm\omega,b)$
\item Given $(\bm\omega,b)$, there always exists a $\lambda$ such that $\widehat{\gamma}(\lambda\bm\omega,\lambda b)=1$
\end{enumerate}
So if we add an extra constraint $\ \widehat{\gamma}(\bm\omega,b)=1$, it won't affect the result of (5). Then problem (5) turns out to be
\begin{equation}
\begin{split}
&\max_{\omega,b}\ \ \frac{1}{\|\bm\omega\|}\\
&s.t.\ \min\limits_{i=1,2,\cdots,N}\ y_{i}(\bm\omega\bm\cdot\bm x_{i}+b)=1\\
\end{split}
\end{equation}
In order to change (6) into a convex problem, we expand the constraint set to a convex set by adding the points $(\bm\omega,b)$ which satisfy $\widehat{\gamma}(\bm\omega,b)>1$.
\\Suppose $\widehat{\gamma}(\bm\omega_{0},b_{0})>1$, and take $\displaystyle\bm\omega_{1}=\frac{\bm\omega_{0}}{\widehat{\gamma_{0}}},\ b_{1}=\frac{b_{0}}{\widehat{\gamma_{0}}}$; then we have
$$\displaystyle\widehat{\gamma}(\bm\omega_{1},b_{1})=1,\ \ \frac{1}{\|\bm\omega_{1}\|}=\frac{\widehat{\gamma_{0}}}{\|\bm\omega_{0}\|}>\frac{1}{\|\bm\omega_{0}\|}$$
so we can see that this operation won't affect the result of (6).
Meanwhile, since maximizing $\displaystyle\frac{1}{\|\bm\omega\|}$ is equivalent to minimizing $\displaystyle\frac{1}{2}\|\bm\omega\|^2$, our problem can finally be written as follows
\begin{equation}
\begin{split}
&\min_{\bm\omega,b}\ \frac{1}{2}\|\bm\omega\|^{2}\\
&s.t.\ \ y_{i}(\bm\omega\bm\cdot\bm x_{i}+b)\geq 1,\ i=1,2,\cdots ,N
\end{split}
\end{equation}
\begin{algorithm}
\caption{Hard margin maximization method}
\textbf{Input:} linearly separable dataset $T=\{(x_{1},y_{1}),\cdots,(x_{N},y_{N})\},x_{i}\in\chi=\textbf{R}^n ,y_{i}\in\Upsilon=\{-1,1\},i=1,2,\cdots,N\\$
\textbf{Output:} maximum margin hyperplane $(\omega,b)$ and classification decision function $f(x)=sign(\omega\cdot x+b)$
\begin{algorithmic}[1]
\State Solve problem (7) and get the optimal solution $(\omega^*,b^*)$
\State Get the maximum margin hyperplane $\omega^*\cdot x+b^*=0$ and the classification decision function $f(x)=sign(\omega^*\cdot x+b^*)$
\end{algorithmic}
\end{algorithm}
\begin{theorem}
If the training dataset $T$ is linearly separable, the solution of problem (7) exists and is unique; in other words, the maximum margin hyperplane exists uniquely.
\end{theorem}
\paragraph{Support vector} If the training dataset $T$ is linearly separable, then $x$ is a support vector precisely when $\ y(\omega\cdot x+b)=1$.

\subsection{Dual method}
In order to solve problem (7), we can consider it as a primal problem and use Lagrange duality theory to get its dual problem. We can then solve the dual problem to get the solution of the primal problem.
\paragraph{Remark} Here we introduce several results from convex optimization before going further into the dual method.
\\Consider a convex optimization problem:
\begin{equation}
\begin{split}
&\min_{x\in\textbf{R}^n} f(x)\\
&s.t.\ c_{i}(x)\leq 0,\ i=1,2,\cdots,k;\ h_{j}(x)=0,\ j=1,2,\cdots,l
\end{split}
\end{equation}
\\We call this problem a primal problem. Its generalized Lagrange function is
\begin{equation}
L(x,\alpha,\beta)=f(x)+\sum\limits_{i=1}^{k}\alpha_{i}c_{i}(x)+\sum\limits_{j=1}^{l}\beta_{j}h_{j}(x)
\end{equation}
Here, $x=(x^{(1)},x^{(2)},\cdots,x^{(n)})\in\ \mathbf{R}^n$, $\alpha,\beta$ are the Lagrange multipliers, and $\alpha_{i}\geq 0$.
\\Consider the problem
\begin{equation}
d^*=\min\limits_{x}\max\limits_{\alpha,\beta;\alpha\geq 0}\ \ L(x,\alpha,\beta)
\end{equation}
We can easily prove that problem $(10)$ is equivalent to problem $(8)$. Meanwhile, the dual problem of $(10)$ is defined as
\begin{equation}
p^*=\max\limits_{\alpha,\beta;\alpha\geq 0}\min\limits_{x}\ \ L(x,\alpha,\beta)
\end{equation}
We have the following theorem.
\begin{theorem}[KKT]
Suppose that $f(x)$ and $c_{i}(x)$ are convex functions and $h_{j}(x)$ is an affine function; meanwhile, suppose the inequality constraints $c_{i}(x)$ are strictly feasible, which means $\exists x$ such that $c_{i}(x)<0$ for all $i$.
\begin{itemize}
\item $\exists\ x^*,\bm\alpha^*,\bm\beta^*$ such that $x^*$ is the solution of the primal problem $(10)$, $\bm\alpha^*,\bm\beta^*$ is the solution of the dual problem $(11)$, and $d^*=p^*=L(x^*,\bm\alpha^*,\bm\beta^*)$.
\item $x^*$ is the solution of the primal problem $(10)$ and $\bm\alpha^*,\bm\beta^*$ is the solution of the dual problem $(11)$ if and only if $x^*,\bm\alpha^*,\bm\beta^*$ satisfy:
\begin{equation}
\left\{
\begin{split}
&\nabla_{x} L(x^*,\bm \alpha^*, \bm \beta^*)&=0\\
&\alpha_i^* c_i(x^*)&=0\\
& c_i(x^*)&\leq 0\\
&\alpha_i^* &\geq 0\\
&h_j(x^*) &= 0\\
\end{split}
\right.
\end{equation}
\end{itemize}
\end{theorem}
\noindent
\\
\\Now we can go back to the SVM problem.
\\In the SVM, we have
$$x=(\omega,b),\ f(x)=\frac{1}{2}\|\omega\|^{2},\ k=N,\ c_{i}(x)=1-y_{i}(\omega\cdot x_{i}+b),\ h_{j}(x)=0$$
First, we construct the generalized Lagrange function according to the constraints in problem (7). Introduce the Lagrange multipliers $\alpha_{i}\geq 0,\ i=1,2,\cdots,N$ and define the Lagrange function:
\begin{equation}
L(\omega,b,\alpha)=\frac{1}{2}\|\omega\|^2-\sum\limits_{i=1}^{N}\alpha_{i}y_{i}(\omega\cdot x_{i}+b)+\sum\limits_{i=1}^{N}\alpha_{i}
\end{equation}
We can easily prove that the primal problem $\min\limits_{\omega,b}\max\limits_{\alpha;\alpha_{i}\geq 0}L(\omega,b,\alpha)$ is equivalent to problem $(7)$, and its dual problem is $\max\limits_{\alpha;\alpha_{i}\geq 0}\min\limits_{\omega,b}L(\omega,b,\alpha)$. Next, we rewrite the form of the dual problem.
\\Consider $\min\limits_{\omega,b}L(\omega,b,\alpha)$ and set
\begin{equation}
\begin{split}
&\nabla_{\omega} L(\omega, b, \bm \alpha)=\omega-\sum\limits_{i=1}^{N}\alpha_{i}y_{i}x_{i}=0\\
&\nabla_{b} L(\omega, b, \bm \alpha)=\sum\limits_{i=1}^{N}\alpha_{i}y_{i}=0
\end{split}
\end{equation}
Applying this result to equation (13), we can compute to obtain
\begin{equation}
L(\omega,b,\bm\alpha)=-\frac{1}{2}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}\cdot x_{j})+\sum\limits_{i=1}^{N}\alpha_{i}
\end{equation}
So
\begin{equation}
\min\limits_{\omega,b}L(\omega,b,\bm\alpha)=-\frac{1}{2}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}\cdot x_{j})+\sum\limits_{i=1}^{N}\alpha_{i}
\end{equation}
Then we have
\begin{equation}
\max\limits_{\alpha;\alpha_{i}\geq 0}\min\limits_{\omega,b}L(\omega,b,\bm\alpha)=\max\limits_{\bm\alpha;\alpha_{i}\geq 0}-\frac{1}{2}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}\cdot x_{j})+\sum\limits_{i=1}^{N}\alpha_{i}
\end{equation}
Finally, we obtain an optimization problem in $\bm\alpha$
\begin{equation}
\begin{split}
&\max_{\alpha}\ -\frac{1}{2}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}\cdot x_{j})+\sum\limits_{i=1}^{N}\alpha_{i}\\
&s.t.\ \sum\limits_{i=1}^{N}\alpha_{i}y_{i}=0,\ \alpha_{i}\geq 0,\ i=1,2,\cdots,N
\end{split}
\end{equation}
Problem (7) satisfies the conditions of Theorem 2, so there exist $(\omega^*,b^*,\bm\alpha^*)$ such that $(\omega^*,b^*)$ is the solution of the primal problem (10) and $\bm\alpha^*$ is the solution of the dual problem (11). For a linearly separable training dataset, suppose that $\bm\alpha^*$ is the solution of problem $(18)$; we can then obtain the solution $(\omega^*,b^*)$ of problem $(7)$ from $\bm\alpha^*$.
\begin{theorem}
Suppose that $\bm\alpha^*$ is the solution of problem (18). Then there exists a subscript $j$ such that $\alpha_{j}^*>0$, and we can obtain the solution $(\omega^*,b^*)$ of problem $(7)$ from the following equations:
\begin{equation}
\begin{split}
&\omega^*=\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}x_{i}
\\&b^*=y_{j}-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x_{j})
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
According to \textbf{Theorem 2}, the KKT conditions are satisfied,
\\so we have
\begin{equation}
\begin{split}
&\nabla_{\omega} L(\omega^*, b^*, \bm \alpha^*)=\omega^*-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}x_{i}=0\\
&\nabla_{b} L(\omega^*, b^*, \bm \alpha^*)=-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}=0\\
&\alpha_{i}^*(y_{i}(\omega^*\cdot x_{i}+b^*)-1)= 0,\ i=1,2,\cdots,N\\
&y_{i}(\omega^*\cdot x_{i}+b^*)-1\geqslant 0,\ i=1,2,\cdots,N\\
&\alpha_{i}^*\geqslant 0,\ i=1,2,\cdots,N\\
\end{split}
\end{equation}
So
\begin{equation}
\omega^*=\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}x_{i}
\end{equation}
Suppose that $\bm\alpha^*=0$; then $\omega^*=0$, which is obviously not the solution of the primal problem. This contradicts Theorem 2.
\\So there exists a subscript $j$ such that $\alpha_{j}^*>0$.
\\For this $j$, we have
\begin{equation}
y_{j}(\omega^*\cdot x_{j}+b^*)-1=0
\end{equation}
\\Note that this also means $x_{j}$ must be a support vector.
\\Substituting $(21)$ into $(22)$, and noticing that $y_{j}^2=1$, we have
\begin{equation}
b^*=y_{j}-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x_{j})
\end{equation}
\end{proof}
From this theorem we can see that the maximum margin hyperplane can be described by
\begin{equation}
\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x)+b^*=0
\end{equation}
and the classification decision function can be described by
\begin{equation}
f(x)=sign(\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x)+b^*)
\end{equation}
%\begin{algorithm}
%\caption{dual method}
%\textbf{Input:} linearly separable dataset $T={(x_{1},y_{1}),\cdots,(x_{N},y_{N})},x_{i}\in\chi=\textbf{R}^n ,y_{i}\in\Upsilon=\{-1,1\},i=1,2,\cdots,N\\$
%\textbf{Output:} maximum margin hyperplane $(\omega,b)$ and classification decision function $f(x)=sign(\omega^Tx+b)$
%\begin{algorithmic}[1]
%\State Solve problem (21),get optimal solution $\alpha^*$
%\State Compute
%$$\omega^*=\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}x_{i}$$
%Choose one of the positive component $\alpha_{j}^*$ of $\alpha^*$,compute
%$$b^*=y_{j}-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x_{j})$$
%\State get maximum margin hyperplane $\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x)+b^*=0$ and classification decision function
%$f(x)=sign(\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}(x_{i}\cdot x)+b^*)$
%\end{algorithmic}
%\end{algorithm}

\section{Soft margin maximization}
Given a training dataset of $\displaystyle N$ points of the form
$$(x_{1},y_{1}),\cdots,(x_{N},y_{N})$$
where $\displaystyle x_{i}\in \mathbf{R}^n,y_{i}\in \{-1,1\}$. $\displaystyle y_{i}$ is a category label of $\displaystyle x_{i}$. We suppose that our training data are not linearly separable, but that we can get a linearly separable set by removing a few outliers. In this situation, we still want to get a linear classifier. The difference is that we need to find a balance between maximizing the ``margin'' and minimizing the classifier model bias.\\
We use the hinge loss function to describe the classifier model bias.
\begin{equation}
L(y(\omega\cdot x+b))=[1-y(\omega\cdot x+b)]_{+}=\max(0,1-y(\omega\cdot x+b))
\end{equation}
That is to say, when $(x_{i},y_{i})$ is correctly classified by $(\omega,b)$ and the functional margin $y_{i}(\omega\cdot x_{i}+b)\ge 1$, the loss is 0; otherwise, the loss is $1-y_{i}(\omega\cdot x_{i}+b)$. Now we can describe the problem as follows:
\begin{equation}
\min_{\bm\omega,b}\ \ \sum_{i=1}^{N}[1-y_{i}(\bm\omega\cdot\bm x_{i}+b)]_{+}+\lambda\|\bm\omega\|^2
\end{equation}
$\lambda$ is a problem-dependent parameter which balances the margin maximization against the bias minimization.
\begin{theorem}
Problem (27) is equivalent to the following problem:
\begin{equation}
\begin{split}
&\min_{\bm\omega,b,\bm\xi}\ \ \frac{1}{2}\|\bm\omega\|^2+C\sum_{i=1}^{N}\xi_{i}\\
&s.t.\ \ y_{i}(\bm\omega\cdot\bm x_{i}+b)+\xi_{i}\geqslant 1,\ \xi_{i}\geqslant 0,\ i=1,2,\cdots,N
\end{split}
\end{equation}
\end{theorem}
\noindent Taking $\displaystyle C=\frac{1}{2\lambda}$, the equivalence is obvious. Notice that problem (28) is the same kind of problem as problem (8), so we can use Lagrange duality as we did in hard margin maximization.\\
\\
The generalized Lagrange function of (28) is:
\begin{equation}
L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)\equiv \frac{1}{2}\|\omega\|^2+C\sum_{i=1}^{N}\xi_{i}-\sum_{i=1}^{N}\alpha_{i}(y_{i}(\bm\omega\cdot \bm x_{i}+b)+\xi_{i}-1)-\sum_{i=1}^{N}\mu_{i}\xi_{i}
\end{equation}
The primal problem of (28) is:
\begin{equation}
d^*=\min_{\bm\omega,b,\bm\xi}\ \ \max_{\bm\alpha,\bm\mu;\bm\alpha\geqslant 0,\bm\mu\geqslant 0}\ \ L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)
\end{equation}
The dual problem of (28) is:
\begin{equation}
p^*=\max_{\bm\alpha,\bm\mu;\bm\alpha\geqslant 0,\bm\mu\geqslant 0}\ \ \min_{\bm\omega,b,\bm\xi}\ \ L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)
\end{equation}
Similar to hard margin maximization, our goal is to find $(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)$ such that $\bm\omega^*,b^*,\bm\xi^*$ is the solution of the primal problem $(30)$, $\bm\alpha^*,\bm\mu^*$ is the solution of the dual problem $(31)$, and $d^*=p^*=L(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)$.\\
\\
According to \textbf{Theorem 2 (KKT)}, we can add the following extra constraints to the problem without losing the optimal points $(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)$:
\begin{equation}
\begin{split}
&\nabla_{\bm\omega} L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)=\bm\omega-\sum\limits_{i=1}^{N}\alpha_{i}y_{i}\bm{x_{i}}=0\\
&\nabla_{b} L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)=-\sum\limits_{i=1}^{N}\alpha_{i}y_{i}=0\\
&\nabla_{\xi_{i}} L(\bm\omega,b,\bm\xi,\bm\alpha,\bm\mu)=C-\alpha_{i}-\mu_{i}=0
\end{split}
\end{equation}
Applying (32) to problem (31), we can simplify (31) to:
\begin{equation}
\begin{split}
&\min_{\bm\alpha}\ \ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}(\bm{x_{i}}\cdot\bm{x_{j}})-\sum_{i=1}^{N}\alpha_{i}\\
&s.t.\ \ \sum_{i=1}^{N}\alpha_{i}y_{i}=0,\ 0 \leqslant \alpha_{i} \leqslant C,\ i=1,2,\cdots,N
\end{split}
\end{equation}
\begin{theorem}
Suppose $\bm\alpha^*=(\alpha_{1}^*,\alpha_{2}^*,\cdots,\alpha_{N}^*)^{T}$ is a solution of the dual problem (33).
If there is a component $\alpha_{j}^*$ of $\bm\alpha^*$ satisfying $0<\alpha_{j}^*<C$, then we can get a solution $\bm\omega^*,b^*$ of the primal problem (28) as:
\begin{equation}
\bm\omega^{*}=\sum_{i=1}^{N}\alpha_{i}^*y_{i}\bm x_{i}
\end{equation}
\begin{equation}
b^*=y_{j}-\sum_{i=1}^{N}y_{i}\alpha_{i}^*(\bm x_{i}\bm\cdot\bm x_{j})
\end{equation}
\end{theorem}
\begin{proof}
According to \textbf{Theorem 2 (KKT)}, we have
\begin{equation}
\nabla_{\bm\omega} L(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)=\bm\omega^*-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}\bm{x_{i}}=0
\end{equation}
\begin{equation}
\nabla_{b} L(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)=-\sum\limits_{i=1}^{N}\alpha_{i}^*y_{i}=0
\end{equation}
\begin{equation}
\nabla_{\xi_{i}} L(\bm\omega^*,b^*,\bm\xi^*,\bm\alpha^*,\bm\mu^*)=C-\alpha_{i}^*-\mu_{i}^*=0
\end{equation}
\begin{equation}
\alpha_{i}^*(y_{i}(\bm\omega^*\bm\cdot \bm x_{i}+b^*)-1+\xi_{i}^*)= 0
\end{equation}
\begin{equation}
\mu_{i}^*\xi_{i}^*=0
\end{equation}
\begin{equation}
y_{i}(\bm\omega^*\bm\cdot \bm x_{i}+b^*)-1+\xi_{i}^*\geqslant 0
\end{equation}
\begin{equation}
\xi_{i}^*\geqslant 0
\end{equation}
\begin{equation}
\alpha^*\geqslant 0
\end{equation}
\begin{equation}
\mu^*\geqslant 0
\end{equation}
From equation (36), we directly get result (34). From (38)--(40) we know that if there exists an $\alpha_{j}^*$ such that $0<\alpha_{j}^*<C$, then $\mu_{j}^*=C-\alpha_{j}^*>0$, hence $\xi_{j}^*=0$ and
$$y_{j}(\bm\omega^*\bm\cdot \bm x_{j}+b^*)-1=0$$
which is equivalent to result (35).
\end{proof}
\paragraph{Support vectors of soft margin} In the soft margin maximization method, we say a data point $(x_{i},y_{i})$ is a support vector if the solution $\bm\alpha^*=(\alpha_{1}^*,\alpha_{2}^*,\cdots,\alpha_{N}^*)^T$ of the dual problem (33) satisfies $\alpha_{i}^*>0$.\\
\includegraphics[height=5cm]{support}
\begin{itemize}
\item If $\alpha_{i}^*<C$, we have $\xi_{i}=0$, and the support vector $x_{i}$ lies exactly on the margin boundary.
\item If $\alpha_{i}^*=C$:
\begin{itemize}
\item $0<\xi_{i}<1$: $x_{i}$ is correctly classified and lies between the separating hyperplane and the margin boundary hyperplane.
\item $\xi_{i}=1$: $x_{i}$ lies exactly on the separating hyperplane.
\item $\xi_{i}>1$: $x_{i}$ is wrongly classified.
\end{itemize}
\end{itemize}

\section{Kernel methods}
\subsection{Nonlinearly separable case}
Generally speaking, we often cannot find a good linear classifier, because real datasets usually do not have a linear structure. In this section, we deal with the nonlinearly separable case, which means that our dataset can be correctly classified by an $(n-1)$-dimensional hypersurface.\\
\\
Our main idea is to use a nonlinear mapping from the input space to a feature space such that the image set is approximately linearly separable in the feature space. Here we give a simple example, as in the picture below.\\
\\
Suppose that the input space (left picture) is $\bm\chi\subset \mathbf{R^2},\ x=(x^{(1)},x^{(2)})^T\in \bm\chi$, and the image space is $\bm Z\subset \mathbf{R^2},\ z=(z^{(1)},z^{(2)})^T\in \bm Z$.\\
Define a mapping $\bm\phi:\bm\chi\rightarrow\bm Z$ as: $$z=\bm\phi(x)=((x^{(1)})^2,(x^{(2)})^2)$$
\includegraphics[width=10cm]{feature}\\
We can see that the input space (left) can be separated by an ellipse $w_{1}(x^{(1)})^2+w_{2}(x^{(2)})^2+b=0$, which becomes a straight line $w_{1}z^{(1)}+w_{2}z^{(2)}+b=0$ after mapping to the feature space (right). In this way, we transform the nonlinearly separable case in the input space into a linearly separable case in the feature space.
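As a small numerical check of this example, the following C sketch (our own code, with illustrative names) evaluates both $\bm\phi(x)\cdot\bm\phi(z)$ and the induced kernel $k(x,z)=(x^{(1)}z^{(1)})^2+(x^{(2)}z^{(2)})^2$ and confirms that they agree; this previews the kernel functions introduced in the next subsection:
\begin{verbatim}
#include <stdio.h>

/* Feature map phi(x) = (x1^2, x2^2) from the example above. */
static void phi(const double x[2], double z[2])
{
    z[0] = x[0]*x[0];
    z[1] = x[1]*x[1];
}

/* The kernel induced by phi: k(x,z) = phi(x).phi(z)
 * = (x1 z1)^2 + (x2 z2)^2, evaluated without forming phi. */
static double k(const double x[2], const double z[2])
{
    double a = x[0]*z[0], b = x[1]*z[1];
    return a*a + b*b;
}

int main(void)
{
    double x[2] = {1.0, 2.0}, z[2] = {3.0, -1.0};
    double px[2], pz[2];

    phi(x, px);
    phi(z, pz);
    /* Both printed numbers agree (13): the kernel reproduces the inner
     * product in feature space without computing phi explicitly. */
    printf("phi(x).phi(z) = %f\n", px[0]*pz[0] + px[1]*pz[1]);
    printf("k(x,z)        = %f\n", k(x, z));
    return 0;
}
\end{verbatim}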
\subsection{Kernel function}
We can see from the dual problem of the linear SVM that both the objective function and the classifier decision function involve the input data points only through inner products. When we use the linear SVM in the feature space, $x_{i}\cdot x_{j}$ is replaced by $\bm\phi(x_{i})\cdot \bm\phi(x_{j})$, which means we only need to be concerned with the inner products between image points in the feature space.\\
\paragraph{\textbf{Kernel function}} Suppose $\bm\chi$ is the input space (a subset of $\mathbf{R^n}$ or a discrete set) and $\mathscr{F}$ is the feature space (a Hilbert space). If there exists a mapping
$$\bm\phi:\bm\chi\rightarrow\mathscr{F}$$
such that for all $x,z\in \bm\chi$ the function $k:\bm\chi\times\bm\chi\rightarrow\mathbf{R}$ satisfies
$$k(x,z)=\bm\phi(x)\cdot\bm\phi(z)$$
then we say $k(x,z)$ is a kernel function and $\bm\phi$ is a feature mapping.\\
\\
It is obvious that one feature mapping induces exactly one kernel function, but one kernel function may be induced by many different feature mappings. In the actual learning process, we only need to find a proper kernel function instead of a feature mapping. That is because, on the one hand, the kernel function itself is enough to determine the final classifier; on the other hand, it is much easier and quicker to compute $k(x,z)$ than the inner product $\bm\phi(x)\cdot\bm\phi(z)$ in $\mathscr{F}$, which is usually high-dimensional.\\
\paragraph{Gram matrix} Given a function $k:\bm\chi\times\bm\chi\rightarrow\mathbf{R}$ and inputs $x_{1},\cdots,x_{n}\in \bm\chi$, the $n\times n$ matrix
$$K:=(k(x_{i},x_{j}))_{ij}$$
is called the Gram matrix of $k$ with respect to $x_{1},\cdots,x_{n}$.
\paragraph{Positive definite kernel} Let $\bm\chi$ be a nonempty set. A function $k:\bm\chi\times\bm\chi\rightarrow\mathbf{R}$ which for all $n\in \mathbf{N},\ x_{i}\in \bm\chi, i=1,\cdots,n$ gives rise to a positive definite Gram matrix is called a $positive\ definite\ kernel$.
\begin{theorem}
A function
$$k:\bm\chi\times\bm\chi\rightarrow\mathbf{R}$$
which is either continuous or has a finite domain, can be decomposed
$$k(x,z)=\bm\phi(x)\cdot\bm\phi(z)$$
into a feature map $\bm\phi$ into a Hilbert space $\mathscr{F}$ applied to both its arguments followed by the evaluation of the inner product in $\mathscr{F}$ if and only if it is a positive definite kernel.
\end{theorem}
\begin{proof}
The ``only if'' implication is trivial. We will mainly show the reverse implication.\\
Assuming $k$ is a positive definite kernel, we proceed to construct a feature mapping $\bm\phi$ into a Hilbert space for which $k$ is the kernel.
\begin{enumerate}
\item Define the feature mapping $\bm\phi$ and construct a vector space $\bm F$.\\
We define
$$\bm\phi:x\longmapsto k(\bm x,\cdot)$$
According to this mapping, we span the image set into a vector space
$$\bm{F}=\{\sum_{i=1}^{l}\alpha_{i}k(\bm x_{i},\cdot):l\in\mathbf{N},\ \bm x_{i}\in\bm\chi,\ \alpha_{i}\in\mathbf{R},\ i=1,\cdots,l \}$$
\item Make $\bm{F}$ an inner product space by defining an inner product on $\bm F$.\\
We define a binary function ``$\bm{\cdot}$'' on $\bm{F}$: for any $f,g\in \bm F$
$$ f(\cdot)=\sum_{i=1}^{m}\alpha_{i}k(x_{i},\cdot) $$
$$ g(\cdot)=\sum_{j=1}^{n}\beta_{j}k(x_{j},\cdot) $$
$$ f\bm{\cdot} g=\sum_{i=1}^{m}\sum_{j=1}^{n}\alpha_{i}\beta_{j}k(x_{i},x_{j}) $$
The bilinearity and symmetry are easy to prove. Now we show that this function is positive definite.
$$ f\bm\cdot f=\sum_{i,j=1}^{m}\alpha_{i}\alpha_{j}k(x_{i},x_{j})=\bm\alpha^T K\bm\alpha $$
Because the Gram matrix $K$ is positive semi-definite, we have
$$f\bm\cdot f\geqslant 0,\ \forall\ f\in \bm{F}$$\\
Utilizing this result, we can prove the Cauchy--Schwarz inequality
\begin{equation}
(f\bm\cdot g)^2\leqslant(f\bm\cdot f)(g\bm\cdot g)
\end{equation}
Notice that
\begin{equation}
f\bm\cdot k(x,\cdot)=\sum_{i=1}^{m}\alpha_{i}k(x_{i},x)=f(x)
\end{equation}
Taking $g(\cdot)=k(x,\cdot)$ in (45), we have
$$ (f\bm\cdot k(x,\cdot))^2=|f(x)|^2\leqslant(f\bm\cdot f)k(x,x) $$
So when $f\bm\cdot f=0$, we have $f(x)\equiv 0$. In summary, the binary function ``$\bm\cdot$'' satisfies bilinearity, symmetry, and positive definiteness, which means ``$\bm\cdot$'' is an inner product on $\bm{F}$. The vector space $\bm{F}$ with the inner product ``$\bm\cdot$'' is now an inner product space.
\item Completing the inner product space $\bm{F}$, we get a Hilbert space $\mathscr{F}$.\\
This Hilbert space is called a $\textbf{reproducing\ kernel\ Hilbert\ space\ (RKHS)}$ because it satisfies equation (46), which is called the $reproducing\ property$.
\end{enumerate}
\end{proof}
\noindent\\
Back to the SVM problem. When we choose a kernel function $K$, the objective function of the dual problem turns out to be:
$$ W(\bm\alpha)=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_{i}\alpha_{j}y_{i}y_{j}K(x_{i},x_{j})-\sum_{i=1}^{N}\alpha_{i} $$
The decision function changes into:
$$ f(x)=sign(\ \sum_{i=1}^{N}\alpha_{i}^*y_{i}K(x_{i},x)+b^*\ ) $$
\subsection{Common kernel functions}
\begin{itemize}
\item \textbf{Polynomial kernel function}:
\begin{equation}
K(\bm x,\bm z)=(\bm x\cdot \bm z+1)^{p}
\end{equation}
\item \textbf{Gaussian kernel function}:
\begin{equation}
K(\bm x,\bm z)=\exp\left(-\frac{\|\bm x-\bm z\|^2}{2\sigma^2}\right)
\end{equation}
\item \textbf{Sigmoid kernel function}:
\begin{equation}
K(\bm x,\bm z)=\tanh(\bm x\cdot \bm z+r)
\end{equation}
\end{itemize}
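As a reference sketch (our own naming, not taken from any SVM library), the kernels above and the resulting decision function can be implemented directly; the training quantities $\alpha_{i}^*$, $y_{i}$, and $b^*$ are assumed to come from solving the dual problem (33):
\begin{verbatim}
#include <math.h>

/* Polynomial kernel (x.z + 1)^p, Eq. (47). */
double kernel_poly(const double *x, const double *z, int n, int p)
{
    double dot = 0.0;
    for (int i = 0; i < n; ++i) dot += x[i]*z[i];
    return pow(dot + 1.0, (double)p);
}

/* Gaussian kernel exp(-||x-z||^2/(2 sigma^2)), Eq. (48). */
double kernel_gauss(const double *x, const double *z, int n, double sigma)
{
    double d2 = 0.0;
    for (int i = 0; i < n; ++i) { double d = x[i]-z[i]; d2 += d*d; }
    return exp(-d2/(2.0*sigma*sigma));
}

/* Decision function f(x) = sign( sum_i alpha_i y_i K(x_i,x) + b ),
 * here instantiated with the Gaussian kernel. X holds the N training
 * points row by row (N x n); alpha, y, b come from training. */
int svm_predict(const double *X, const double *alpha, const double *y,
                int N, int n, double b, double sigma, const double *x)
{
    double s = b;
    for (int i = 0; i < N; ++i)
        s += alpha[i]*y[i]*kernel_gauss(&X[i*n], x, n, sigma);
    return (s >= 0.0) ? 1 : -1;
}
\end{verbatim}
%\end{document}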
{ "alphanum_fraction": 0.6727285745, "avg_line_length": 61.8008849558, "ext": "tex", "hexsha": "e89c803d5009a84fd788a24012cd903d7ee0df87", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_path": "6DL/SVM.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_path": "6DL/SVM.tex", "max_line_length": 533, "max_stars_count": null, "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_path": "6DL/SVM.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9879, "size": 27934 }
%
% set up Oct. 2013
%
\section{Calculating $(00|r_{12}|00)^{(m)}$ integrals}
%
%
The $(00|r_{12}|00)^{(m)}$ type of integral is fundamental for the recursive relation calculation. This type of integral needs to be calculated directly from the incomplete Gamma function (see \ref{nuclear_attraction_direct_int_eq:20} and \ref{OS_ERI_complementary_result}):
\begin{equation}
(0_{A}0_{B}|r_{12}|0_{C}0_{D})^{(m)} =
2\left( \frac{\rho}{\pi}\right)^{\frac{1}{2}}(0_{A}|0_{B})
(0_{C}|0_{D})\int^{1}_{0} t^{2m}
e^{-(\rho|PQ|^{2})t^{2}} dt \label{fm_ssssm_eq:0}
\end{equation}
where the arguments are:
\begin{align}
\overrightarrow{P} &= \frac{\alpha \overrightarrow{A} +
\beta \overrightarrow{B}}{\alpha + \beta} \nonumber \\
\overrightarrow{Q} &= \frac{\gamma \overrightarrow{C} +
\delta \overrightarrow{D}}{\gamma + \delta} \nonumber \\
\rho &= \frac{(\alpha+\beta)(\gamma + \delta)}
{(\alpha+\beta)+(\gamma + \delta)}
\end{align}
The integral in \ref{fm_ssssm_eq:0} can be calculated directly through the incomplete Gamma function. However, such an approach is expensive, because the $(00|r_{12}|00)^{(m)}$ is inside the primitive Gaussian loop. Suppose we have 1000 basis functions, each with a contraction degree of 3; then for producing the normal four-center ERI integrals, the number of $(00|r_{12}|00)^{(m)}$ evaluations for each $m$ can be approximated as $\dfrac{1}{8}\times 1000^{4}\times 3^{4} = 1.0125\times 10^{13}$! Therefore, even a small increase in the cost of calculating $(00|r_{12}|00)^{(m)}$ leads to a dramatic increase in the cost of the whole integral calculation. Hence it is necessary to have a simpler way to calculate the $(00|r_{12}|00)^{(m)}$ integrals.

There is a large literature discussing this integral calculation (see \cite{harris1983sssm, gill1991two} etc.\ and the papers cited by them). The discussion made in this section is mainly based on the results in the above references.

\subsection{$f_{m}(t)$}
\label{fmt_function_assessment}
%
%
Firstly, let us discuss the integral inside \ref{fm_ssssm_eq:0}:
\begin{equation}\label{fm_ssssm_fmt_eq:1}
f_{m}(t) = \frac{2}{\sqrt{\pi}}\int^{1}_{0} u^{2m} e^{-tu^{2}} du
\end{equation}
When $m=0$, this function reduces to the error function:
\begin{equation}
\begin{split}
f_{0}(t) &= \frac{2}{\sqrt{\pi}}\int^{1}_{0} e^{-tu^{2}} du \\
&= t^{-\frac{1}{2}}\frac{2}{\sqrt{\pi}}\int^{1}_{0} e^{-(\sqrt{t}u)^{2}} d (\sqrt{t}u) \\
&= t^{-\frac{1}{2}}\frac{2}{\sqrt{\pi}}\int^{\sqrt{t}}_{0} e^{-x^{2}} dx \\
&= t^{-\frac{1}{2}} \mathrm{erf}(t^{\frac{1}{2}})
\end{split}
\label{fm_ssssm_fmt_eq:2}
\end{equation}
The error function converges quickly (its implementation in the standard C++ library is several hundred times faster than the incomplete gamma function defined in the Boost library), and it is available in the standard C++ library\footnote{Note that the standard \texttt{erf} function in the C library already includes the $\dfrac{2}{\sqrt{\pi}}$ factor.}.
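In code this is a one-liner; the following C sketch (assuming the C99 \texttt{erf} from \texttt{math.h}) evaluates Eq.~\ref{fm_ssssm_fmt_eq:2}, guarding the removable singularity at $t=0$, where the limit is $f_{0}(0)=\frac{2}{\sqrt{\pi}}$:
\begin{verbatim}
#include <math.h>

/* f_0(t) = erf(sqrt(t))/sqrt(t); for t -> 0 the limit is 2/sqrt(pi). */
double f0(double t)
{
    if (t < 1.0e-14) return 1.12837916709551257390; /* 2/sqrt(pi) */
    const double st = sqrt(t);
    return erf(st)/st;
}
\end{verbatim}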
For $m>0$, by integration by parts it is easy to find a recursive relation for computing $f_{m}(t)$:
\begin{equation}
\begin{split}
f_{m}(t) &= \frac{2}{\sqrt{\pi}}\int^{1}_{0} u^{2m} e^{-tu^{2}} du \\
&= -\frac{1}{t\sqrt{\pi}}\int^{1}_{0} u^{2m-1} e^{-tu^{2}} d(-tu^{2}) \\
&= -\frac{1}{t\sqrt{\pi}}\int^{1}_{0} u^{2m-1} d\left( e^{-tu^{2}} \right) \\
&= -\left.\frac{1}{t\sqrt{\pi}}u^{2m-1} e^{-tu^{2}}\right|^{1}_{0} +
\frac{2m-1}{t\sqrt{\pi}}\int^{1}_{0} u^{2m-2} e^{-tu^{2}} du \\
&= -\frac{1}{t\sqrt{\pi}}e^{-t} + \frac{2m-1}{2t} f_{m-1}(t) \Rightarrow \\
&= \frac{1}{2t}\left( (2m-1)f_{m-1}(t) - \frac{2}{\sqrt{\pi}}e^{-t}\right)
\end{split}
\label{fm_ssssm_fmt_eq:3}
\end{equation}
This recursion provides the easiest way to compute $f_{m}(t)$. However, if we use it to compute $f_{m}(t)$ directly upward from $f_{0}(t)$, it is easy to see that the error propagates very quickly (see the example in \ref{error_propagation_numerical} for more details). Therefore, directly using \ref{fm_ssssm_fmt_eq:3} upward from $f_{0}(t)$ is not practical.

The error propagation in this example arises because $f_{m}(t) > f_{m+1}(t)$\footnote{This inequality can easily be derived from the Cauchy--Schwarz inequality.}. It is easy to see that $f_{m}(t)$ is always larger than 0, and as $m$ grows larger, the difference between $f_{m}(t)$ and $f_{m+1}(t)$ gets smaller, so the subtraction in \ref{fm_ssssm_fmt_eq:3} cancels nearly equal quantities and loses significant digits. This is the reason why the error starts propagating when we calculate $f_{m+1}(t)$ from $f_{m}(t)$.

However, the upward recursion may still be applicable for some combinations of $m$ and $t$, as long as the error stays within some small range. To test this idea, we made an investigation over all $m$ and $t$ combinations within an error limit of $1.0\times 10^{-10}$. The results indicate that for $m=1$ to $m=10$ and $t>1$, by using this ``up'' recursion we can get accurate results (within an error of $1.0\times 10^{-10}$) by climbing from the error function\footnote{For detailed testing, we studied $t$ from $1.0$ to $30.0$ (when $t>30.0$ we can use a simpler form to express $f_{m}(t)$; see the discussion below) with a step length of $1.0\times 10^{-6}$ for $m=1$ to $m=10$. All results were compared with the ones calculated from the incomplete gamma function in the Boost library, in terms of an error limit of $1.0\times 10^{-10}$.}.

On the other hand, this circumstance indicates that the reverse recursive calculation is well suited, that is, computing $f_{m}(t)$ from $f_{m+1}(t)$. It is easy to show that with the ``downward recursive relation'' the computation of $f_{m}(t)$ never loses accuracy\footnote{We tested the cases for $m=1$ to $m=10$ and $t$ ranging from $0.0$ to $30.0$ with the same step length etc.\ as shown above.}. Therefore, in real applications we will use the following expression:
\begin{equation} \label{fm_ssssm_fmt_eq:4}
f_{m}(t) = \frac{1}{2m+1}\left( 2tf_{m+1}(t) + \frac{2}{\sqrt{\pi}}e^{-t}\right)
\end{equation}
We have tested $m=0$ to $m=30$ for $t$ ranging from $0.001$ to $30.00$ (with a step width of 0.001 as well); by using the downward recursion, all results are accurate within the error range of $10^{-10}$. Therefore, the remaining problem is how to compute $f_{m_{max}}(t)$. The easiest way to calculate it is perhaps through its series expansion.
In the paper of Harris\cite{harris1983sssm}, equation 9 provides a series expansion:
\begin{equation} \label{fm_ssssm_fmt_eq:5}
f_{m}(t) = \frac{2}{\sqrt{\pi}}e^{-t}\sum_{k=0}^{\infty}\frac{(2m-1)!!}{(2m+2k+1)!!}
(2t)^{k}
\end{equation}
We can see that for small $t$ (say $t\leq 1$), this expression converges very quickly. For larger $t$, it obviously needs more terms to converge\footnote{We have tested the $m=1$ to $m=10$ cases for $t\leq 1$ under the same testing conditions mentioned above. All results reach good accuracy, within an error range of $10^{-10}$, with a 12-term expansion.}. However, for large $t$ the integral $f_{m}(t)$ approaches zero faster than for small $t$. The paper \cite{harris1983sssm} gives another series expansion that makes this clear:
\begin{equation}
\begin{split}
f_{m}(t) &= \frac{(2m-1)!!}{(2t)^{m}t^{1/2}}-\frac{2}{\sqrt{\pi}}\frac{e^{-t}}{2t}
\left( 1+\frac{2m-1}{2t} \right. \\
&+ \left. \frac{(2m-1)(2m-3)}{(2t)^{2}} + \frac{(2m-1)(2m-3)(2m-5)}{(2t)^{3}}+
\cdots \right)
\end{split}
\label{fm_ssssm_fmt_eq:6}
\end{equation}
It is easy to see that when $t>25$, the factor $\dfrac{2}{\sqrt{\pi}}\dfrac{e^{-t}}{2t}$ is less than $10^{-12}$, so we can always omit the series inside \ref{fm_ssssm_fmt_eq:6}. Thus, when $t\geq 25$, $f_{m}(t)$ can be expressed as:
\begin{equation} \label{fm_ssssm_fmt_eq:7}
f_{m}(t) = \frac{(2m-1)!!}{(2t)^{m}t^{1/2}}
\end{equation}
Now let us summarize how to calculate $f_{m}(t)$:
\begin{enumerate}
\item if $M_{max} = 0$, use the erf function;
\item if $M_{max} \geq 1$ and $M_{max} \leq 10$:
\begin{enumerate}
\item if $t\leq 1$, calculate $f_{M_{max}}(t)$ by using the power series in \ref{fm_ssssm_fmt_eq:5}
with 12 terms, then use the downward recursion in \ref{fm_ssssm_fmt_eq:4} for
the rest of the $f_{m}(t)$;
\item if $t>1$ and $t<25$, calculate $f_{0}(t)$ with the error function and use the upward
recursion in \ref{fm_ssssm_fmt_eq:3} to derive the other $f_{m}(t)$;
\item if $t\geq 25$, calculate $f_{M_{max}}(t)$ with \ref{fm_ssssm_fmt_eq:7} and use the downward
recursion in \ref{fm_ssssm_fmt_eq:4} to derive the other $f_{m}(t)$
\end{enumerate}
\item if $M_{max} \geq 11$:
\begin{enumerate}
\item if $t<25$, calculate $f_{M_{max}}(t)$ from the incomplete gamma function (for example,
from the Boost library), and use the downward recursion in \ref{fm_ssssm_fmt_eq:4}
to derive the other $f_{m}(t)$;
\item if $t\geq 25$, calculate $f_{M_{max}}(t)$ with \ref{fm_ssssm_fmt_eq:7} and use the downward
recursion in \ref{fm_ssssm_fmt_eq:4} to derive the other $f_{m}(t)$
\end{enumerate}
\end{enumerate}
We note that with this arrangement, all S, P, and D shell integrals (up to the second derivative order) can be derived without the incomplete gamma function, which is expensive. For $M_{max}\geq 11$, the downward recursion guarantees that only one $f_{m}(t)$ must be evaluated directly, which also saves a lot of time. A C sketch of this strategy is given below.
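The following C sketch (our own code; the constants and cutoffs follow the summary above) implements the $M_{max}\leq 10$ branches. The $M_{max}\geq 11$, $t<25$ case would delegate $f_{M_{max}}(t)$ to an external incomplete gamma routine (e.g.\ from the Boost library) and is omitted here; likewise, for $M_{max}=0$ with $t\leq 1$ the sketch simply reuses the series branch, which is equally accurate there:
\begin{verbatim}
#include <math.h>

#define TWO_OVER_SQRT_PI 1.12837916709551257390

/* Fill f[0..mmax] with f_m(t), assuming mmax <= 10 so that no
 * incomplete gamma call is needed. */
void compute_fm(int mmax, double t, double f[])
{
    const double expt = TWO_OVER_SQRT_PI * exp(-t); /* (2/sqrt(pi)) e^-t */
    int m, k;

    if (t >= 25.0) {
        /* large-t form f_m = (2m-1)!!/((2t)^m sqrt(t)), then go down */
        double fm = 1.0 / sqrt(t);
        for (m = 1; m <= mmax; ++m) fm *= (2*m - 1) / (2.0*t);
        f[mmax] = fm;
        for (m = mmax - 1; m >= 0; --m)
            f[m] = (2.0*t*f[m+1] + expt) / (2*m + 1);
    } else if (t <= 1.0) {
        /* 12-term power series for f_mmax, then downward recursion;
         * term_0 = 1/(2m+1), term_{k+1} = term_k * 2t/(2m+2k+3)    */
        double term = 1.0 / (2*mmax + 1), sum = 0.0;
        for (k = 0; k < 12; ++k) {
            sum += term;
            term *= 2.0*t / (2*mmax + 2*k + 3);
        }
        f[mmax] = expt * sum;
        for (m = mmax - 1; m >= 0; --m)
            f[m] = (2.0*t*f[m+1] + expt) / (2*m + 1);
    } else {
        /* 1 < t < 25: start from erf and climb upward */
        const double st = sqrt(t);
        f[0] = erf(st) / st;
        for (m = 1; m <= mmax; ++m)
            f[m] = ((2*m - 1)*f[m-1] - expt) / (2.0*t);
    }
}
\end{verbatim}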
\subsection{Compute Bottom Integrals}
%
%
Now let us use the above results and consider the full expression of \ref{fm_ssssm_eq:0}:
\begin{equation}
(00|r_{12}|00)^{(m)} = O\rho^{\frac{1}{2}}f_{m}(T)
\label{fm_ssssm_eq:1}
\end{equation}
where the parameters are given as\footnote{``O'' here indicates overlap integrals.}:
\begin{equation}
\begin{split}
\sigma_{P} &= \frac{1}{\alpha + \beta} \\
\bm{P} &=(\alpha\bm{A} + \beta\bm{B})\sigma_{P} \\
O_{P} &= (\pi\sigma_{P})^{\frac{3}{2}}e^{-\alpha\beta\sigma_{P}
|\bm{A}-\bm{B}|^{2}} \\
\sigma_{Q} &= \frac{1}{\gamma + \delta} \\
\bm{Q} &= (\gamma\bm{C} + \delta\bm{D})\sigma_{Q} \\
O_{Q} &= (\pi\sigma_{Q})^{\frac{3}{2}}e^{-\gamma\delta\sigma_{Q}
|\bm{C}-\bm{D}|^{2}} \\
O &= O_{P}O_{Q} \\
\rho &= \frac{1}{\sigma_{P} + \sigma_{Q}} \\
R &= |\bm{P}-\bm{Q}| \\
T &= \rho R^{2}
\end{split}
\label{fm_ssssm_eq:2}
\end{equation}
Here all of the parameters except $\rho$, $O$, $R$, and $T$ can be pre-computed outside the integral loop. For $m=0$, the $(00|r_{12}|00)^{(0)}$ is:
\begin{equation}
\begin{split}
(00|r_{12}|00)^{(0)} &= O\rho^{\frac{1}{2}}f_{0}(T) \\
&= O\rho^{\frac{1}{2}} T^{-\frac{1}{2}} \mathrm{erf}(T^{\frac{1}{2}}) \\
&= \frac{O}{R}\mathrm{erf}(T^{\frac{1}{2}})
\end{split}
\label{fm_ssssm_eq:3}
\end{equation}
For the $m>0$ case, the downward recursion is:
\begin{equation}
\begin{split}
f_{m}(T) &=\frac{1}{2m+1}\left( 2Tf_{m+1}(T) + \frac{2}{\sqrt{\pi}}e^{-T}\right)
\Rightarrow \\
(00|r_{12}|00)^{(m)} &= \frac{1}{2m+1}
\left( 2T(00|r_{12}|00)^{(m+1)} + \frac{2}{\sqrt{\pi}}O\rho^{\frac{1}{2}}e^{-T}
\right)
\end{split}
\label{fm_ssssm_eq:4}
\end{equation}
Here the term $\frac{2}{\sqrt{\pi}}O\rho^{\frac{1}{2}}e^{-T}$ needs to be computed in the integral loop (but only once!). The upward recursion is:
\begin{equation}
\begin{split}
f_{m}(T) &= \frac{1}{2T}\left( (2m-1)f_{m-1}(T) -
\frac{2}{\sqrt{\pi}}e^{-T}\right) \Rightarrow \\
(00|r_{12}|00)^{(m)} &= \frac{1}{2T}\left( (2m-1)(00|r_{12}|00)^{(m-1)} -
\frac{2}{\sqrt{\pi}}O\rho^{\frac{1}{2}}e^{-T}\right)
\end{split}
\label{fm_ssssm_eq:5}
\end{equation}
The power series becomes:
\begin{equation}
(00|r_{12}|00)^{(m)} = \frac{2}{\sqrt{\pi}}O\rho^{\frac{1}{2}}e^{-T}
\sum_{k=0}^{\infty}\frac{(2m-1)!!}{(2m+2k+1)!!}(2T)^{k}
\label{fm_ssssm_eq:6}
\end{equation}
and \ref{fm_ssssm_fmt_eq:7} becomes:
\begin{equation} \label{fm_ssssm_eq:7}
(00|r_{12}|00)^{(m)} = O\frac{(2m-1)!!}{(2T)^{m}R}
\end{equation}
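Putting the pieces together, a minimal C sketch of the bottom-integral assembly could look as follows (our own naming, using the \texttt{compute\_fm} sketch above; in a production code $\sigma_{P}$, $\bm{P}$, $O_{P}$ and their $Q$-side analogues would be pre-computed outside the primitive loop, as noted above):
\begin{verbatim}
#include <math.h>

void compute_fm(int mmax, double t, double f[]); /* sketch given earlier */

/* Assemble (00|r12|00)^(m), m = 0..mmax, from the primitive exponents
 * alpha, beta, gamma, delta and the centers A, B, C, D:
 * ssss[m] = O rho^{1/2} f_m(T). */
void ssss_integrals(double alpha, double beta,
                    const double A[3], const double B[3],
                    double gamma, double delta,
                    const double C[3], const double D[3],
                    int mmax, double ssss[])
{
    const double pi = 3.14159265358979323846;
    const double sigmaP = 1.0/(alpha + beta);
    const double sigmaQ = 1.0/(gamma + delta);
    double P[3], Q[3], AB2 = 0.0, CD2 = 0.0, R2 = 0.0;
    int i, m;

    for (i = 0; i < 3; ++i) {
        P[i] = (alpha*A[i] + beta*B[i])*sigmaP;
        Q[i] = (gamma*C[i] + delta*D[i])*sigmaQ;
        AB2 += (A[i]-B[i])*(A[i]-B[i]);
        CD2 += (C[i]-D[i])*(C[i]-D[i]);
        R2  += (P[i]-Q[i])*(P[i]-Q[i]);
    }
    {
        const double OP  = pow(pi*sigmaP, 1.5)*exp(-alpha*beta*sigmaP*AB2);
        const double OQ  = pow(pi*sigmaQ, 1.5)*exp(-gamma*delta*sigmaQ*CD2);
        const double rho = 1.0/(sigmaP + sigmaQ);
        const double T   = rho*R2;
        double f[32];                       /* assumes mmax < 32 */

        compute_fm(mmax, T, f);
        for (m = 0; m <= mmax; ++m)
            ssss[m] = OP*OQ*sqrt(rho)*f[m]; /* O rho^{1/2} f_m(T) */
    }
}
\end{verbatim}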
{ "alphanum_fraction": 0.652856299, "avg_line_length": 45.0608365019, "ext": "tex", "hexsha": "b711dac5b991d34fbcdc52e3a5dd021e0c76c060", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "murfreesboro/fenglai-note", "max_forks_repo_path": "algorithm/technic/integral/fm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "murfreesboro/fenglai-note", "max_issues_repo_path": "algorithm/technic/integral/fm.tex", "max_line_length": 97, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "murfreesboro/fenglai-note", "max_stars_repo_path": "algorithm/technic/integral/fm.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-16T07:23:48.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-16T07:23:48.000Z", "num_tokens": 4322, "size": 11851 }
\section{CvAux}

\ifC
\section{Stereo Correspondence Functions}

\cvCPyFunc{FindStereoCorrespondence}

Calculates disparity for a stereo-pair.

\cvdefC{
cvFindStereoCorrespondence(
\par const CvArr* leftImage, \par const CvArr* rightImage,
\par int mode, \par CvArr* depthImage,
\par int maxDisparity,
\par double param1, \par double param2, \par double param3,
\par double param4, \par double param5 );
}

\begin{description}
\cvarg{leftImage}{Left image of the stereo pair, a rectified, grayscale, 8-bit image.}
\cvarg{rightImage}{Right image of the stereo pair, a rectified, grayscale, 8-bit image.}
\cvarg{mode}{Algorithm used to find a disparity (now only CV\_DISPARITY\_BIRCHFIELD is supported).}
\cvarg{depthImage}{Destination depth image, a grayscale, 8-bit image that codes the scaled disparity, so that the zero disparity (corresponding to the points that are very far from the cameras) maps to 0, and the maximum disparity maps to 255.}
\cvarg{maxDisparity}{Maximum possible disparity. The closer the objects are to the camera, the larger this value should be. Very large values slow down the process significantly.}
\cvarg{param1, param2, param3, param4, param5}{The parameters of the algorithm. param1 is the constant occlusion penalty, param2 is the constant match reward, param3 defines a highly reliable region (set of contiguous pixels whose reliability is at least param3), param4 defines a moderately reliable region, and param5 defines a slightly reliable region. If a parameter is omitted, its default value is used. In Birchfield's algorithm param1 = 25, param2 = 5, param3 = 12, param4 = 15, and param5 = 25 (these values have been taken from "Depth Discontinuities by Pixel-to-Pixel Stereo", Stanford University Technical Report STAN-CS-TR-96-1573, July 1996).}
\end{description}

The function \texttt{cvFindStereoCorrespondence} calculates a disparity map for two rectified grayscale images.

Example: Calculating disparity for a pair of 8-bit color images

\begin{lstlisting}
/*--------------------------------------------------------------------------*/
IplImage* srcLeft = cvLoadImage("left.jpg",1);
IplImage* srcRight = cvLoadImage("right.jpg",1);
IplImage* leftImage = cvCreateImage(cvGetSize(srcLeft), IPL_DEPTH_8U, 1);
IplImage* rightImage = cvCreateImage(cvGetSize(srcRight), IPL_DEPTH_8U, 1);
IplImage* depthImage = cvCreateImage(cvGetSize(srcRight), IPL_DEPTH_8U, 1);

cvCvtColor(srcLeft, leftImage, CV_BGR2GRAY);
cvCvtColor(srcRight, rightImage, CV_BGR2GRAY);

cvFindStereoCorrespondence(leftImage, rightImage, CV_DISPARITY_BIRCHFIELD,
                           depthImage, 50, 15, 3, 6, 8, 15);
/*--------------------------------------------------------------------------*/
\end{lstlisting}

Here is an example stereo pair that can be used to test the example:

\includegraphics{pics/left.jpg}

\includegraphics{pics/right.jpg}

\section{View Morphing Functions}

\cvCPyFunc{MakeScanlines}

Calculates the coordinates of scanlines for two cameras using a fundamental matrix.
\cvdefC{
void cvMakeScanlines(
\par const CvMatrix3* matrix,
\par CvSize img\_size,
\par int* scanlines1,
\par int* scanlines2,
\par int* lengths1,
\par int* lengths2,
\par int* line\_count );
}

\begin{description}
\cvarg{matrix}{Fundamental matrix.}
\cvarg{img\_size}{Size of the image.}
\cvarg{scanlines1}{Pointer to the array of calculated scanlines of the first image.}
\cvarg{scanlines2}{Pointer to the array of calculated scanlines of the second image.}
\cvarg{lengths1}{Pointer to the array of calculated lengths (in pixels) of the first image scanlines.}
\cvarg{lengths2}{Pointer to the array of calculated lengths (in pixels) of the second image scanlines.}
\cvarg{line\_count}{Pointer to the variable that stores the number of scanlines.}
\end{description}

The function \texttt{cvMakeScanlines} finds the coordinates of scanlines for two images and returns the number of scanlines in \texttt{line\_count}. If the pointer \texttt{scanlines1} or \texttt{scanlines2} is equal to zero, the function does nothing except calculate the number of scanlines.

\cvCPyFunc{PreWarpImage}

Rectifies an image.

\cvdefC{
void cvPreWarpImage(
\par int line\_count,
\par IplImage* img,
\par uchar* dst,
\par int* dst\_nums,
\par int* scanlines );
}

\begin{description}
\cvarg{line\_count}{Number of scanlines for the image.}
\cvarg{img}{Image to prewarp.}
\cvarg{dst}{Buffer that receives the data of the prewarped image.}
\cvarg{dst\_nums}{Pointer to the array of the lengths of the scanlines.}
\cvarg{scanlines}{Pointer to the array of the coordinates of the scanlines.}
\end{description}

The function \texttt{cvPreWarpImage} rectifies an image so that the scanlines in the rectified image are horizontal. The output buffer of size \texttt{max(width, height)*line\_count*3} must be allocated before calling the function.

\cvCPyFunc{FindRuns}

Retrieves the scanlines from a rectified image and breaks them down into runs.

\cvdefC{
void cvFindRuns(
\par int line\_count,
\par uchar* prewarp1,
\par uchar* prewarp2,
\par int* line\_lengths1,
\par int* line\_lengths2,
\par int* runs1,
\par int* runs2,
\par int* num\_runs1,
\par int* num\_runs2 );
}

\begin{description}
\cvarg{line\_count}{Number of scanlines.}
\cvarg{prewarp1}{Prewarp data of the first image.}
\cvarg{prewarp2}{Prewarp data of the second image.}
\cvarg{line\_lengths1}{Array of the lengths of the scanlines in the first image.}
\cvarg{line\_lengths2}{Array of the lengths of the scanlines in the second image.}
\cvarg{runs1}{Array of the runs in each scanline in the first image.}
\cvarg{runs2}{Array of the runs in each scanline in the second image.}
\cvarg{num\_runs1}{Array of the number of runs in each scanline in the first image.}
\cvarg{num\_runs2}{Array of the number of runs in each scanline in the second image.}
\end{description}

The function \texttt{cvFindRuns} retrieves scanlines from the rectified image and breaks each scanline down into several runs, that is, series of pixels of almost the same brightness.

\cvCPyFunc{DynamicCorrespondMulti}

Finds the correspondence between two sets of runs of two warped images.
\cvdefC{
void cvDynamicCorrespondMulti(
\par int line\_count,
\par int* first,
\par int* first\_runs,
\par int* second,
\par int* second\_runs,
\par int* first\_corr,
\par int* second\_corr );
}

\begin{description}
\cvarg{line\_count}{Number of scanlines.}
\cvarg{first}{Array of the runs in the first image.}
\cvarg{first\_runs}{Array of the number of runs in each scanline of the first image.}
\cvarg{second}{Array of the runs in the second image.}
\cvarg{second\_runs}{Array of the number of runs in each scanline of the second image.}
\cvarg{first\_corr}{Pointer to the array of the correspondence information found for the first runs.}
\cvarg{second\_corr}{Pointer to the array of the correspondence information found for the second runs.}
\end{description}

The function \texttt{cvDynamicCorrespondMulti} finds the correspondence between two sets of runs of two images. Memory must be allocated before calling this function. The memory size for one array of correspondence information is \texttt{max(width, height)*numscanlines*3*sizeof(int)}.

\cvCPyFunc{MakeAlphaScanlines}

Calculates the coordinates of the scanlines in an image from a virtual camera.

\cvdefC{
void cvMakeAlphaScanlines(
\par int* scanlines1,
\par int* scanlines2,
\par int* scanlinesA,
\par int* lengths,
\par int line\_count,
\par float alpha );
}

\begin{description}
\cvarg{scanlines1}{Pointer to the array of the first scanlines.}
\cvarg{scanlines2}{Pointer to the array of the second scanlines.}
\cvarg{scanlinesA}{Pointer to the array of the scanlines found in the virtual image.}
\cvarg{lengths}{Pointer to the array of the lengths of the scanlines found in the virtual image.}
\cvarg{line\_count}{Number of scanlines.}
\cvarg{alpha}{Position of virtual camera \texttt{(0.0 - 1.0)}.}
\end{description}

The function \texttt{cvMakeAlphaScanlines} finds the coordinates of the scanlines for the virtual camera with the given camera position.

Memory must be allocated before calling this function. The memory size for the array of correspondence runs is \texttt{numscanlines*2*4*sizeof(int)}. The memory size for the array of the scanline lengths is \texttt{numscanlines*2*4*sizeof(int)}.
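Taken together, \texttt{cvMakeScanlines}, \texttt{cvPreWarpImage}, \texttt{cvFindRuns} and \texttt{cvDynamicCorrespondMulti} form the setup phase of the view-morphing pipeline. The following hedged C fragment sketches one plausible calling sequence; \texttt{fundMatrix}, \texttt{leftImage} and \texttt{rightImage} are assumed to exist, and the allocation sizes marked as assumptions are not fixed by the documentation above.

\begin{lstlisting}
/* Illustrative fragment only; error handling and free() are omitted. */
CvSize size = cvGetSize(leftImage);
int line_count = 0;

/* With zero scanline pointers the call only computes line_count. */
cvMakeScanlines(&fundMatrix, size, 0, 0, 0, 0, &line_count);

/* Scanline/length array sizes follow the convention quoted for
 * cvMakeAlphaScanlines; treated here as an assumption. */
int* scanlines1 = (int*)malloc(line_count * 2 * 4 * sizeof(int));
int* scanlines2 = (int*)malloc(line_count * 2 * 4 * sizeof(int));
int* lengths1 = (int*)malloc(line_count * 2 * 4 * sizeof(int));
int* lengths2 = (int*)malloc(line_count * 2 * 4 * sizeof(int));
cvMakeScanlines(&fundMatrix, size, scanlines1, scanlines2,
                lengths1, lengths2, &line_count);

/* Prewarp buffers: max(width, height) * line_count * 3 bytes. */
int side = size.width > size.height ? size.width : size.height;
uchar* prewarp1 = (uchar*)malloc(side * line_count * 3);
uchar* prewarp2 = (uchar*)malloc(side * line_count * 3);
cvPreWarpImage(line_count, leftImage, prewarp1, lengths1, scanlines1);
cvPreWarpImage(line_count, rightImage, prewarp2, lengths2, scanlines2);

/* Run arrays: a generous upper bound (assumption); correspondence
 * arrays use the documented max(width,height)*numscanlines*3 ints. */
int* runs1 = (int*)malloc(side * line_count * 3 * sizeof(int));
int* runs2 = (int*)malloc(side * line_count * 3 * sizeof(int));
int* num_runs1 = (int*)malloc(line_count * sizeof(int));
int* num_runs2 = (int*)malloc(line_count * sizeof(int));
cvFindRuns(line_count, prewarp1, prewarp2, lengths1, lengths2,
           runs1, runs2, num_runs1, num_runs2);

int* corr1 = (int*)malloc(side * line_count * 3 * sizeof(int));
int* corr2 = (int*)malloc(side * line_count * 3 * sizeof(int));
cvDynamicCorrespondMulti(line_count, runs1, num_runs1,
                         runs2, num_runs2, corr1, corr2);
\end{lstlisting}

\cvCPyFunc{MorphEpilinesMulti}

Morphs two pre-warped images using information about their stereo correspondence.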
\cvdefC{
void cvMorphEpilinesMulti(
\par int line\_count,
\par uchar* first\_pix,
\par int* first\_num,
\par uchar* second\_pix,
\par int* second\_num,
\par uchar* dst\_pix,
\par int* dst\_num,
\par float alpha,
\par int* first,
\par int* first\_runs,
\par int* second,
\par int* second\_runs,
\par int* first\_corr,
\par int* second\_corr );
}

\begin{description}
\cvarg{line\_count}{Number of scanlines in the prewarp image.}
\cvarg{first\_pix}{Pointer to the first prewarp image.}
\cvarg{first\_num}{Pointer to the array of the number of points in each scanline in the first image.}
\cvarg{second\_pix}{Pointer to the second prewarp image.}
\cvarg{second\_num}{Pointer to the array of the number of points in each scanline in the second image.}
\cvarg{dst\_pix}{Pointer to the resulting morphed warped image.}
\cvarg{dst\_num}{Pointer to the array of the number of points in each line.}
\cvarg{alpha}{Virtual camera position \texttt{(0.0 - 1.0)}.}
\cvarg{first}{First sequence of runs.}
\cvarg{first\_runs}{Pointer to the number of runs in each scanline in the first image.}
\cvarg{second}{Second sequence of runs.}
\cvarg{second\_runs}{Pointer to the number of runs in each scanline in the second image.}
\cvarg{first\_corr}{Pointer to the array of the correspondence information found for the first runs.}
\cvarg{second\_corr}{Pointer to the array of the correspondence information found for the second runs.}
\end{description}

The function \texttt{cvMorphEpilinesMulti} morphs two pre-warped images using information about the correspondence between the scanlines of the two images.

\cvCPyFunc{PostWarpImage}

Warps a rectified, morphed image back.

\cvdefC{
void cvPostWarpImage(
\par int line\_count,
\par uchar* src,
\par int* src\_nums,
\par IplImage* img,
\par int* scanlines );
}

\begin{description}
\cvarg{line\_count}{Number of scanlines.}
\cvarg{src}{Pointer to the data of the prewarped virtual image.}
\cvarg{src\_nums}{Pointer to the array of the lengths of the scanlines in the image.}
\cvarg{img}{Resulting unwarped image.}
\cvarg{scanlines}{Pointer to the array of the scanlines data.}
\end{description}

The function \texttt{cvPostWarpImage} warps the resultant image from the virtual camera by storing its rows across the scanlines whose coordinates are calculated by \cross{MakeAlphaScanlines}.

\cvCPyFunc{DeleteMoire}

Deletes moire in a given image.

\cvdefC{
void cvDeleteMoire( IplImage* img );
}

\begin{description}
\cvarg{img}{Image.}
\end{description}

The function \texttt{cvDeleteMoire} deletes moire from the given image. The post-warped image may have black (uncovered) points because of possible holes between neighboring scanlines. The function deletes moire (black pixels) from the image by substituting neighboring pixels for the black pixels. If all the scanlines are horizontal, the function may be omitted.

\section{3D Tracking Functions}

% XXX Weird URL Formatting, /../?
This section discusses functions for tracking objects in 3d space using a stereo camera. Besides the C API, there is the DirectShow filter

\href{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTrackerFilter.htm}{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTrackerFilter.htm}

and the wrapper application
\href{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTracker.htm}{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTracker.htm}.

\href{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTrackerTesting.htm}{http://opencvlibrary.sourceforge.net/../appPage/3dTracker/3dTrackerTesting.htm}
contains a description of how to test the filter on sample data.

\cvCPyFunc{3dTrackerCalibrateCameras}
% XXX URL Formatting

Simultaneously determines the position and orientation of multiple cameras.

\cvdefC{
CvBool cv3dTrackerCalibrateCameras(\par int num\_cameras,
\par const Cv3dTrackerCameraIntrinsics camera\_intrinsics[],
\par CvSize checkerboard\_size,
\par IplImage *samples[],
\par Cv3dTrackerCameraInfo camera\_info[]);
}

\begin{description}
\cvarg{num\_cameras}{the number of cameras to calibrate. This is the size of each of the three array parameters.}
\cvarg{camera\_intrinsics}{camera intrinsics for each camera, as determined by \cross{CalibFilter}.}
\cvarg{checkerboard\_size}{the width and height (in number of squares) of the checkerboard.}
\cvarg{samples}{images from each camera, with a view of the checkerboard.}
\cvarg{camera\_info}{filled in with the results of the camera calibration. This is passed into \cross{3dTrackerLocateObjects} to do tracking.}
\end{description}

The function \texttt{cv3dTrackerCalibrateCameras} searches for a checkerboard of the specified size in each of the images. For each image in which it finds the checkerboard, it fills in the corresponding slot in \texttt{camera\_info} with the position and orientation of the camera relative to the checkerboard and sets the \texttt{valid} flag. If it finds the checkerboard in all the images, it returns true; otherwise it returns false.

This function does not change the members of the \texttt{camera\_info} array that correspond to images in which the checkerboard was not found. This allows you to calibrate each camera independently, instead of simultaneously. To accomplish this, do the following (see the sketch after this list):
\begin{enumerate}
\item Clear all the \texttt{valid} flags before calling this function the first time;
\item Call this function with each set of images;
\item Check all the \texttt{valid} flags after each call. When all the \texttt{valid} flags are set, calibration is complete.
\end{enumerate}

Note that this method works well only if the checkerboard is rigidly mounted; if it is handheld, all the cameras should be calibrated simultaneously to get an accurate result. To ensure that all cameras are calibrated simultaneously, ignore the \texttt{valid} flags and use the return value to decide when calibration is complete.
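The independent-calibration procedure above might look as follows in code. This is a hedged sketch based only on the description given here: the \texttt{valid} member access follows the flag mentioned above, while \texttt{grab\_checkerboard\_views} and the checkerboard dimensions are hypothetical.

\begin{lstlisting}
enum { NUM_CAMERAS = 2 };
Cv3dTrackerCameraIntrinsics intrinsics[NUM_CAMERAS]; /* from CalibFilter */
Cv3dTrackerCameraInfo info[NUM_CAMERAS];
IplImage* samples[NUM_CAMERAS];

/* Step 1: clear all valid flags before the first call. */
for (int i = 0; i < NUM_CAMERAS; i++)
    info[i].valid = 0;

/* Steps 2-3: keep calling with fresh image sets until every
 * camera has been calibrated at least once. */
int done = 0;
while (!done) {
    grab_checkerboard_views(samples, NUM_CAMERAS);   /* hypothetical */
    cv3dTrackerCalibrateCameras(NUM_CAMERAS, intrinsics,
                                cvSize(6, 8), samples, info);
    done = 1;
    for (int i = 0; i < NUM_CAMERAS; i++)
        if (!info[i].valid)
            done = 0;
}
\end{lstlisting}

\cvCPyFunc{3dTrackerLocateObjects}

Determines the 3d location of tracked objects.

\cvdefC{
int cv3dTrackerLocateObjects(\par int num\_cameras,
\par int num\_objects,
\par const Cv3dTrackerCameraInfo camera\_info[],
\par const Cv3dTracker2dTrackedObject tracking\_info[],
\par Cv3dTrackerTrackedObject tracked\_objects[]);
}

\begin{description}
\cvarg{num\_cameras}{the number of cameras.}
\cvarg{num\_objects}{the maximum number of objects found by any camera. (This is also the maximum number of objects returned in \texttt{tracked\_objects}.)}
\cvarg{camera\_info}{camera position and location information for each camera, as determined by \newline \cross{3dTrackerCalibrateCameras}.}
\cvarg{tracking\_info}{the 2d position of each object as seen by each camera.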
Although this is specified as a one-dimensional array, it is actually a two-dimensional array: \texttt{const \newline Cv3dTracker2dTrackedObject tracking\_info[num\_cameras][num\_objects]}. The \texttt{id} field of any unused slots must be -1. Ids need not be ordered or consecutive.}
\cvarg{tracked\_objects}{filled in with the results.}
\end{description}

The function \texttt{cv3dTrackerLocateObjects} determines the 3d position of tracked objects based on the 2d tracking information from multiple cameras and the camera position and orientation information computed by \cross{3dTrackerCalibrateCameras}. It locates any objects with the same \texttt{id} that are tracked by more than one camera. It fills in the \texttt{tracked\_objects} array and returns the number of objects located. The \texttt{id} fields of any unused slots in \texttt{tracked\_objects} are set to -1.

\section{Eigen Objects (PCA) Functions}

The functions described in this section perform PCA and compression for a set of 8-bit images that may not fit into memory all together. If your data fits into memory and the vectors are not 8-bit (or you want a simpler interface), use \cross{CalcCovarMatrix}, \cross{SVD} and \cross{GEMM} to do PCA.

\cvCPyFunc{CalcCovarMatrixEx}

Calculates the covariance matrix for a group of input objects.

\cvdefC{
void cvCalcCovarMatrixEx(
\par int object\_count,
\par void* input,
\par int io\_flags,
\par int iobuf\_size,
\par uchar* buffer,
\par void* userdata,
\par IplImage* avg,
\par float* covar\_matrix );
}

\begin{description}
\cvarg{object\_count}{Number of source objects.}
\cvarg{input}{Pointer either to the array of \texttt{IplImage} input objects or to the read callback function according to the value of the parameter \texttt{io\_flags}.}
\cvarg{io\_flags}{Input/output flags.}
\cvarg{iobuf\_size}{Input/output buffer size.}
\cvarg{buffer}{Pointer to the input/output buffer.}
\cvarg{userdata}{Pointer to the structure that contains all necessary data for the callback functions.}
\cvarg{avg}{Averaged object.}
\cvarg{covar\_matrix}{Covariance matrix. An output parameter; must be allocated before the call.}
\end{description}

The function \texttt{cvCalcCovarMatrixEx} calculates a covariance matrix of the input objects group using a previously calculated averaged object. Depending on the \texttt{io\_flags} parameter it may be used either in direct access or callback mode. If \texttt{io\_flags} is not \texttt{CV\_EIGOBJ\_NO\_CALLBACK}, the buffer must be allocated before calling the function.

\cvCPyFunc{CalcEigenObjects}

Calculates the orthonormal eigen basis and the averaged object for a group of input objects.

\cvdefC{
void cvCalcEigenObjects(
\par int nObjects,
\par void* input,
\par void* output,
\par int ioFlags,
\par int ioBufSize,
\par void* userData,
\par CvTermCriteria* calcLimit,
\par IplImage* avg,
\par float* eigVals );
}

\begin{description}
\cvarg{nObjects}{Number of source objects.}
\cvarg{input}{Pointer either to the array of \texttt{IplImage} input objects or to the read callback function according to the value of the parameter \texttt{ioFlags}.}
\cvarg{output}{Pointer either to the array of eigen objects or to the write callback function according to the value of the parameter \texttt{ioFlags}.}
\cvarg{ioFlags}{Input/output flags.}
\cvarg{ioBufSize}{Input/output buffer size in bytes.
The size is zero if unknown.}
\cvarg{userData}{Pointer to the structure that contains all of the necessary data for the callback functions.}
\cvarg{calcLimit}{Criteria that determine when to stop the calculation of eigen objects.}
\cvarg{avg}{Averaged object.}
\cvarg{eigVals}{Pointer to the eigenvalues array in descending order; may be \texttt{NULL}.}
\end{description}

The function \texttt{cvCalcEigenObjects} calculates the orthonormal eigen basis and the averaged object for a group of input objects. Depending on the \texttt{ioFlags} parameter it may be used either in direct access or callback mode. Depending on the parameter \texttt{calcLimit}, calculations are finished either after the first \texttt{calcLimit.max\_iter} dominating eigen objects are retrieved or if the ratio of the current eigenvalue to the largest eigenvalue comes down to the \texttt{calcLimit.epsilon} threshold. The value \texttt{calcLimit->type} must be \texttt{CV\_TERMCRIT\_NUMB}, \texttt{CV\_TERMCRIT\_EPS}, or \texttt{CV\_TERMCRIT\_NUMB | CV\_TERMCRIT\_EPS}. The function returns the real values \texttt{calcLimit->max\_iter} and \texttt{calcLimit->epsilon}.

The function also calculates the averaged object, which must be created previously. Calculated eigen objects are arranged according to the corresponding eigenvalues in descending order.

The parameter \texttt{eigVals} may be equal to \texttt{NULL} if eigenvalues are not needed.

The function \texttt{cvCalcEigenObjects} uses the function \cross{cvCalcCovarMatrixEx}.

\cvCPyFunc{CalcDecompCoeff}

Calculates the decomposition coefficient of an input object.

\cvdefC{
double cvCalcDecompCoeff(
\par IplImage* obj,
\par IplImage* eigObj,
\par IplImage* avg );
}

\begin{description}
\cvarg{obj}{Input object.}
\cvarg{eigObj}{Eigen object.}
\cvarg{avg}{Averaged object.}
\end{description}

The function \texttt{cvCalcDecompCoeff} calculates one decomposition coefficient of the input object using the previously calculated eigen object and the averaged object.

\cvCPyFunc{EigenDecomposite}

Calculates all of the decomposition coefficients for an input object.

\cvdefC{
void cvEigenDecomposite(
\par IplImage* obj,
\par int eigenvec\_count,
\par void* eigInput,
\par int ioFlags,
\par void* userData,
\par IplImage* avg,
\par float* coeffs );
}

\begin{description}
\cvarg{obj}{Input object.}
\cvarg{eigenvec\_count}{Number of eigen objects.}
\cvarg{eigInput}{Pointer either to the array of \texttt{IplImage} input objects or to the read callback function according to the value of the parameter \texttt{ioFlags}.}
\cvarg{ioFlags}{Input/output flags.}
\cvarg{userData}{Pointer to the structure that contains all of the necessary data for the callback functions.}
\cvarg{avg}{Averaged object.}
\cvarg{coeffs}{Calculated coefficients; an output parameter.}
\end{description}

The function \texttt{cvEigenDecomposite} calculates all of the decomposition coefficients for the input object using the previously calculated eigen objects basis and the averaged object. Depending on the \texttt{ioFlags} parameter it may be used either in direct access or callback mode.
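As a concrete illustration of the direct-access mode, here is a hedged C sketch that builds an eigen basis from a set of images and decomposes a probe image over it. The object counts, names, and the allocation of the floating-point eigen images (omitted for brevity) are illustrative assumptions, not part of the API.

\begin{lstlisting}
/* Illustrative sketch only; allocation of faces[], eigens[], avg and
 * probe is omitted. Eigen objects and avg are assumed to be 32-bit
 * floating-point images of the same size as the 8-bit inputs. */
enum { N_OBJECTS = 20, N_EIGENS = N_OBJECTS - 1 };
IplImage* faces[N_OBJECTS];   /* 8-bit input objects */
IplImage* eigens[N_EIGENS];   /* computed eigen objects */
IplImage* avg;                /* computed averaged object */
IplImage* probe;              /* object to decompose (assumed) */
float eigVals[N_EIGENS], coeffs[N_EIGENS];

CvTermCriteria limit = cvTermCriteria(CV_TERMCRIT_NUMB | CV_TERMCRIT_EPS,
                                      N_EIGENS, 0.01);
cvCalcEigenObjects(N_OBJECTS, (void*)faces, (void*)eigens,
                   CV_EIGOBJ_NO_CALLBACK, 0, 0, &limit, avg, eigVals);

/* All decomposition coefficients of the probe image at once. */
cvEigenDecomposite(probe, N_EIGENS, (void*)eigens,
                   CV_EIGOBJ_NO_CALLBACK, 0, avg, coeffs);
\end{lstlisting}

\cvCPyFunc{EigenProjection}

Calculates the object projection into the eigen sub-space.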
\cvdefC{
void cvEigenProjection(
\par void* input\_vecs,
\par int eigenvec\_count,
\par int io\_flags,
\par void* userdata,
\par float* coeffs,
\par IplImage* avg,
\par IplImage* proj );
}

\begin{description}
\cvarg{input\_vecs}{Pointer to either an array of \texttt{IplImage} input objects or to a callback function, depending on \texttt{io\_flags}.}
\cvarg{eigenvec\_count}{Number of eigenvectors.}
\cvarg{io\_flags}{Input/output flags; see \cross{cvCalcEigenObjects}.}
\cvarg{userdata}{Pointer to the structure that contains all of the necessary data for the callback functions.}
\cvarg{coeffs}{Previously calculated decomposition coefficients.}
\cvarg{avg}{Average vector, calculated by \cross{cvCalcEigenObjects}.}
\cvarg{proj}{Projection to the eigen sub-space.}
\end{description}

The function \texttt{cvEigenProjection} calculates an object projection to the eigen sub-space or, in other words, restores an object using the previously calculated eigen objects basis, the averaged object, and the decomposition coefficients of the restored object. Depending on the \texttt{io\_flags} parameter it may be used either in direct access or callback mode.

\section{Embedded Hidden Markov Models Functions}

In order to support embedded models, the user must define structures to represent a 1D HMM and a 2D embedded HMM model.

\cvCPyFunc{CvHMM}

Embedded HMM Structure.

\cvdefC{typedef struct \_CvEHMM}
\begin{lstlisting}
{
    int level;
    int num_states;
    float* transP;
    float** obsProb;
    union
    {
        CvEHMMState* state;
        struct _CvEHMM* ehmm;
    } u;
} CvEHMM;
\end{lstlisting}

\begin{description}
\cvarg{level}{Level of the embedded HMM. In a 2D HMM there are two types of HMM: one external and several embedded. The external HMM has \texttt{level == 1}; embedded HMMs have \texttt{level == 0}.}
\cvarg{num\_states}{Number of states in 1D HMM.}
\cvarg{transP}{State-to-state transition probability, a square matrix of size \texttt{num\_states} \( \times \) \texttt{num\_states}.}
\cvarg{obsProb}{Observation probability matrix.}
\cvarg{state}{Array of HMM states. For the last-level HMM, that is, an HMM without embedded HMMs, HMM states are real.}
\cvarg{ehmm}{Array of embedded HMMs. If the HMM is not last-level, then its HMM states are not real and they are HMMs.}
\end{description}

For representation of observations the following structure is defined:

\cvCPyFunc{CvImgObsInfo}

Image Observation Structure.

\cvdefC{
typedef struct CvImgObsInfo
}
\begin{lstlisting}
{
    int obs_x;
    int obs_y;
    int obs_size;
    float** obs;
    int* state;
    int* mix;
} CvImgObsInfo;
\end{lstlisting}

\begin{description}
\cvarg{obs\_x}{Number of observations in the horizontal direction.}
\cvarg{obs\_y}{Number of observations in the vertical direction.}
\cvarg{obs\_size}{Length of each observation vector.}
\cvarg{obs}{Pointer to the observation vectors stored consecutively. The number of vectors is \texttt{obs\_x*obs\_y}.}
\cvarg{state}{Array of indices of states, assigned to every observation vector.}
\cvarg{mix}{Index of mixture component, corresponding to the observation vector within an assigned state.}
\end{description}

\cvCPyFunc{Create2DHMM}

Creates a 2D, embedded HMM.

\cvdefC{
CvEHMM* cvCreate2DHMM( int* stateNumber, int* numMix, int obsSize );
}

\begin{description}
\cvarg{stateNumber}{Array, the first element of which specifies the number of superstates in the HMM. All of the subsequent elements specify the number of states in every embedded HMM, corresponding to each superstate.
So, the length of the array is \texttt{stateNumber[0]+1}.}
\cvarg{numMix}{Array with the numbers of Gaussian mixture components for each internal state. The number of elements in the array is equal to the number of internal states in the HMM, that is, superstates are not counted here.}
\cvarg{obsSize}{Size of the observation vectors to be used with the created HMM.}
\end{description}

The function \texttt{cvCreate2DHMM} returns the created structure of the type \cross{CvEHMM} with the specified parameters.

\cvCPyFunc{Release2DHMM}

Releases a 2D, embedded HMM.

\cvdefC{
void cvRelease2DHMM(CvEHMM** hmm );
}

\begin{description}
\cvarg{hmm}{Address of the pointer to the HMM to be released.}
\end{description}

The function \texttt{cvRelease2DHMM} frees all the memory used by the HMM and clears the pointer to the HMM.

\cvCPyFunc{CreateObsInfo}

Creates a structure to store image observation vectors.

\cvdefC{
CvImgObsInfo* cvCreateObsInfo( CvSize numObs, int obsSize );
}

\begin{description}
\cvarg{numObs}{Numbers of observations in the horizontal and vertical directions. For the given image and scheme of extracting observations the parameter can be computed via the macro \texttt{CV\_COUNT\_OBS( roi, dctSize, delta, numObs )}, where \texttt{roi, dctSize, delta, numObs} are pointers to structures of the type \cross{CvSize}. Here \texttt{roi} is the size of the observed region of interest of the image, and \texttt{numObs} is the output parameter of the macro.}
\cvarg{obsSize}{Size of the observation vectors to be stored in the structure.}
\end{description}

The function \texttt{cvCreateObsInfo} creates new structures to store image observation vectors. For definitions of the parameters \texttt{roi, dctSize}, and \texttt{delta} see the specification of the function \texttt{cvImgToObs\_DCT}.

\cvCPyFunc{ReleaseObsInfo}

Releases the observation vector structures.

\cvdefC{
void cvReleaseObsInfo( CvImgObsInfo** obsInfo );
}

\begin{description}
\cvarg{obsInfo}{Address of the pointer to the structure \cross{CvImgObsInfo}.}
\end{description}

The function \texttt{cvReleaseObsInfo} frees all of the memory used by the observations and clears the pointer to the structure \cross{CvImgObsInfo}.

\cvCPyFunc{ImgToObs\_DCT}

Extracts observation vectors from an image.

\cvdefC{
void cvImgToObs\_DCT(
\par IplImage* image,
\par float* obs,
\par CvSize dctSize,
\par CvSize obsSize,
\par CvSize delta );
}

\begin{description}
\cvarg{image}{Input image.}
\cvarg{obs}{Pointer to the consecutively stored observation vectors.}
\cvarg{dctSize}{Size of the image blocks for which the DCT (Discrete Cosine Transform) coefficients are to be computed.}
\cvarg{obsSize}{Number of the lowest DCT coefficients in the horizontal and vertical directions to be put into the observation vector.}
\cvarg{delta}{Shift in pixels between two consecutive image blocks in the horizontal and vertical directions.}
\end{description}

The function \texttt{cvImgToObs\_DCT} extracts observation vectors, that is, DCT coefficients, from the image. The user must pass \texttt{obsInfo.obs} as the parameter \texttt{obs} to use this function with other HMM functions and use the structure \texttt{obsInfo} of the \cross{CvImgObsInfo} type.

Calculating observations for an HMM:

\begin{lstlisting}
CvImgObsInfo* obs_info;
...
cvImgToObs_DCT( image, obs_info->obs, //!!!
                dctSize, obsSize, delta );
\end{lstlisting}

\cvCPyFunc{UniformImgSegm}

Performs uniform segmentation of image observations using HMM states.
\cvdefC{
void cvUniformImgSegm( CvImgObsInfo* obsInfo, CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfo}{Observation structures.}
\cvarg{hmm}{HMM structure.}
\end{description}

The function \texttt{cvUniformImgSegm} segments image observations using HMM states uniformly (see \textcolor{blue}{\underline{Initial Segmentation for 2D Embedded HMM}} below for a 2D embedded HMM with 5 superstates and 3, 6, 6, 6, 3 internal states of the corresponding superstates).

\textcolor{blue}{Initial Segmentation for 2D Embedded HMM}

\includegraphics{pics/face.png}

\cvCPyFunc{InitMixSegm}

Segments all observations within every internal state of HMM using state mixture components.

\cvdefC{
void cvInitMixSegm(
\par CvImgObsInfo** obsInfoArray,
\par int numImg,
\par CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfoArray}{Array of pointers to the observation structures.}
\cvarg{numImg}{Length of the above array.}
\cvarg{hmm}{HMM.}
\end{description}

The function \texttt{cvInitMixSegm} takes a group of observations from several training images already segmented by states and splits a set of observation vectors within every internal HMM state into as many clusters as the number of mixture components in the state.

\cvCPyFunc{EstimateHMMStateParams}

Estimates all of the parameters of every HMM state.

\cvdefC{
void cvEstimateHMMStateParams(
\par CvImgObsInfo** obsInfoArray,
\par int numImg,
\par CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfoArray}{Array of pointers to the observation structures.}
\cvarg{numImg}{Length of the array.}
\cvarg{hmm}{HMM.}
\end{description}

The function \texttt{cvEstimateHMMStateParams} computes all inner parameters of every HMM state, including Gaussian means, variances, and so forth.

\cvCPyFunc{EstimateTransProb}

Computes transition probability matrices for the embedded HMM.

\cvdefC{
void cvEstimateTransProb(
\par CvImgObsInfo** obsInfoArray,
\par int numImg,
\par CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfoArray}{Array of pointers to the observation structures.}
\cvarg{numImg}{Length of the above array.}
\cvarg{hmm}{HMM.}
\end{description}

The function \texttt{cvEstimateTransProb} uses the current segmentation of image observations to compute the transition probability matrices for all embedded and external HMMs.

\cvCPyFunc{EstimateObsProb}

Computes the probability of every observation of several images.

\cvdefC{
void cvEstimateObsProb( CvImgObsInfo* obsInfo, CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfo}{Observation structure.}
\cvarg{hmm}{HMM structure.}
\end{description}

The function \texttt{cvEstimateObsProb} computes the Gaussian probabilities of each observation to occur in each of the internal HMM states.

\cvCPyFunc{EViterbi}

Executes the Viterbi algorithm for the embedded HMM.

\cvdefC{
float cvEViterbi( CvImgObsInfo* obsInfo, CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfo}{Observation structure.}
\cvarg{hmm}{HMM structure.}
\end{description}

The function \texttt{cvEViterbi} executes the Viterbi algorithm for the embedded HMM. The Viterbi algorithm evaluates the likelihood of the best match between the given image observations and the given HMM and performs segmentation of image observations by HMM states. The segmentation is done on the basis of the match found.

\cvCPyFunc{MixSegmL2}

Segments the observations from all of the training images using the mixture components of the newly assigned states.
\cvdefC{
void cvMixSegmL2(
\par CvImgObsInfo** obsInfoArray,
\par int numImg,
\par CvEHMM* hmm );
}

\begin{description}
\cvarg{obsInfoArray}{Array of pointers to the observation structures.}
\cvarg{numImg}{Length of the array.}
\cvarg{hmm}{HMM.}
\end{description}

The function \texttt{cvMixSegmL2} segments the observations from all of the training images using the mixture components of the states newly assigned by the Viterbi algorithm. The function uses the Euclidean distance to group vectors around the existing mixture centers.
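Taken together, the functions in this section form the classical embedded-HMM training loop. The following hedged C sketch shows one plausible iteration order, based only on the descriptions above; the mixture counts, the stopping rule and the variables \texttt{obsSize}, \texttt{numImg}, \texttt{maxIter} and \texttt{obsInfoArray} are illustrative assumptions. The state layout matches the face example mentioned under \texttt{cvUniformImgSegm}: 5 superstates with 3, 6, 6, 6, 3 internal states.

\begin{lstlisting}
/* obsSize, numImg, maxIter and obsInfoArray[] are assumed set up. */
int stateNumber[] = { 5, 3, 6, 6, 6, 3 };  /* 5 superstates        */
int numMix[3 + 6 + 6 + 6 + 3];             /* one entry per state  */
for (int i = 0; i < 24; i++)
    numMix[i] = 3;                         /* 3 mixtures (assumed) */

CvEHMM* hmm = cvCreate2DHMM(stateNumber, numMix, obsSize);

/* Initial uniform segmentation and mixture initialization. */
for (int i = 0; i < numImg; i++)
    cvUniformImgSegm(obsInfoArray[i], hmm);
cvInitMixSegm(obsInfoArray, numImg, hmm);

/* Iterative re-estimation; the stopping rule is illustrative. */
for (int iter = 0; iter < maxIter; iter++) {
    cvEstimateHMMStateParams(obsInfoArray, numImg, hmm);
    cvEstimateTransProb(obsInfoArray, numImg, hmm);

    float likelihood = 0.f;
    for (int i = 0; i < numImg; i++) {
        cvEstimateObsProb(obsInfoArray[i], hmm);
        likelihood += cvEViterbi(obsInfoArray[i], hmm);
    }
    cvMixSegmL2(obsInfoArray, numImg, hmm);
    /* ...stop when `likelihood` no longer improves... */
}

cvRelease2DHMM(&hmm);
\end{lstlisting}
\fi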
{ "alphanum_fraction": 0.7596145213, "avg_line_length": 46.1505524862, "ext": "tex", "hexsha": "7a4701373e4f6c1f78956124725a75d16edcdcf6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eirTony/INDI1", "max_forks_repo_path": "to/lang/OpenCV-2.2.0/doc/CvAux.tex", "max_issues_count": 14, "max_issues_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_issues_repo_issues_event_max_datetime": "2016-12-10T07:24:15.000Z", "max_issues_repo_issues_event_min_datetime": "2016-11-24T10:46:39.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eirTony/INDI1", "max_issues_repo_path": "to/lang/OpenCV-2.2.0/doc/CvAux.tex", "max_line_length": 769, "max_stars_count": null, "max_stars_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eirTony/INDI1", "max_stars_repo_path": "to/lang/OpenCV-2.2.0/doc/CvAux.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8772, "size": 33413 }
\subsection{Category adjunctions}\label{subsec:category_adjunctions}

\begin{remark}\label{rem:adjoint_functors}\cite{StanfordPlato:category_theory}
When the functor \( G: \cat{D} \to \cat{C} \) is left adjoint to \( F: \cat{C} \to \cat{D} \) and \( F \) is not invertible, then \( G \) finds, for every object in \( \cat{C} \), a \enquote{generalized inverse} under \( F \) that tries to \enquote{act the same} with respect to morphisms.

\Fullref{def:category_adjunction} contains two equivalent definitions of an adjunction, and \fullref{rem:universal_mapping_property} describes how they can be characterized via universal mapping properties.
\end{remark}

\begin{definition}\label{def:category_adjunction}\mcite[sec. 2.2]{Leinster2016Basic}
An \term{adjunction} between the \hyperref[def:category]{categories} \( \cat{C} \) and \( \cat{D} \) can be defined in several equivalent ways. Let \( F: \cat{C} \to \cat{D} \) and \( G: \cat{D} \to \cat{C} \) be arbitrary functors. In all the cases below, if there exists an adjunction between \( F \) and \( G \), we say that \( F \) is \term{left adjoint} to \( G \) and, correspondingly, that \( G \) is \term{right adjoint} to \( F \). A conventional notation for adjoint functors is \( F \dashv G \).

\begin{thmenum}
\thmitem{def:category_adjunction/hom} A \term{hom-adjunction} is a triple \( (F, G, \varphi) \), where \( \varphi \) is a \hyperref[thm:natural_isomorphism]{natural isomorphism}
\begin{equation}\label{eq:def:category_adjunction/hom}
\varphi: \cat{D}(F(\anon*), \anon*) \Rightarrow \cat{C}(\anon*, G(\anon*)).
\end{equation}
The functors
\begin{align*}
&\cat{D}(F(\anon*), \anon*): \cat{C}^{\opcat} \times \cat{D} \to \cat{Set}, \\
&\cat{C}(\anon*, G(\anon*)): \cat{C}^{\opcat} \times \cat{D} \to \cat{Set}
\end{align*}
are straightforward modifications of the \hyperref[eq:def:hom_functor/binary]{binary hom-functors} on \( \cat{D} \) and \( \cat{C} \).

Naturality of \( \varphi \) in this case means that, for every two morphisms \( f: B \to A \) in \( \cat{C} \) and \( g: X \to Y \) in \( \cat{D} \), the following diagram commutes:
\begin{equation}\label{eq:def:category_adjunction/varphi_nat}
\begin{aligned}
\includegraphics[page=1]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}

\thmitem{def:category_adjunction/unit_counit} A \term{unit-counit adjunction} is a quadruple \( (F, G, \eta, \varepsilon) \), where
\begin{equation}\label{eq:def:category_adjunction/unit_counit/signature}
\begin{aligned}
\eta &: \id_{\cat{C}} \Rightarrow G \bincirc F, \\
\varepsilon &: F \bincirc G \Rightarrow \id_{\cat{D}}
\end{aligned}
\end{equation}
are natural transformations satisfying the condition that, for any pair of objects \( A \) in \( \cat{C} \) and \( Y \) in \( \cat{D} \), the following triangle diagrams commute:

\begin{minipage}{0.43\textwidth}
\begin{equation}\label{eq:def:category_adjunction/d_triangle}
\begin{aligned}
\includegraphics[page=2]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}
\end{minipage}
\hfill
\begin{minipage}{0.44\textwidth}
\raggedright
\begin{equation}\label{eq:def:category_adjunction/c_triangle}
\begin{aligned}
\includegraphics[page=3]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}
\end{minipage}
\smallskip

Note that an adjunction is not an \hyperref[def:category_equivalence]{equivalence}; the two notions simply share a common setup.
Similarly to \hyperref[def:category_equivalence]{equivalence}, we call the \hyperref[def:natural_transformation]{natural transformation} \( \eta \) the \term{unit} of the adjunction and \( \varepsilon \) the \term{counit}.
\end{thmenum}
\end{definition}
\begin{defproof}
\ImplicationSubProof{def:category_adjunction/hom}{def:category_adjunction/unit_counit} Let \( (F, G, \varphi) \) be a hom-adjunction.

For every morphism \( f: B \to A \) in \( \cat{C} \), from the naturality of \( \varphi \) we have
\begin{equation}\label{eq:def:category_adjunction/varphi_eta}
\begin{aligned}
\includegraphics[page=4]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}

Since \( \varphi_{A,F(B)} \) is a morphism in \( \cat{Set} \), it is a function, and we can apply it in order to define the family
\begin{equation*}
\begin{aligned}
&\eta: \id_{\cat{C}} \Rightarrow G \bincirc F, \\
&\eta_A \coloneqq \varphi_{A,F(A)}(\id_{F(A)}).
\end{aligned}
\end{equation*}

We must show that \( \eta \) is a natural transformation. On the diagram \eqref{eq:def:category_adjunction/varphi_eta}, we can start in the top left corner with \( F(\id_A) \) and in the top right corner with \( F(\id_B) \) and reach the middle. We obtain that
\begin{equation*}
\cat{C}(f, [G \bincirc F](\id_A))\parens[\Big]{ \underbrace{\varphi_{A, F(A)}(\id_{F(A)})}_{\eta_A} } = \eta_A \bincirc f
\end{equation*}
and
\begin{equation*}
\cat{C}(\id_B, [G \bincirc F](f))\parens[\Big]{ \underbrace{\varphi_{B, F(B)}(\id_{F(B)})}_{\eta_B} } = [G \bincirc F](f) \bincirc \eta_B
\end{equation*}
are equal. That is, the following diagram commutes:
\begin{equation}\label{eq:def:category_adjunction/eta_nat}
\begin{aligned}
\includegraphics[page=5]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}

In order to define the natural transformation \( \varepsilon: F \bincirc G \Rightarrow \id_{\cat{D}} \), we use the inverse transformation \( \varphi^{-1} \). For every morphism \( g: X \to Y \) in \( \cat{D} \), we have
\begin{equation}\label{eq:def:category_adjunction/varphi_varepsilon}
\begin{aligned}
\includegraphics[page=6]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}
Thus, we define the family
\begin{equation*}
\begin{aligned}
&\varepsilon: F \bincirc G \Rightarrow \id_{\cat{D}}, \\
&\varepsilon_X \coloneqq \varphi_{G(X),X}^{-1}(\id_{G(X)}).
\end{aligned}
\end{equation*}
We can prove that \( \varepsilon \) is a natural transformation analogously to how we proved it for \( \eta \), and we will skip the details.

We will now show that the triangle diagram \eqref{eq:def:category_adjunction/d_triangle} commutes. Consider the morphism \( (\eta_A, F(\id_A)) \) in \( \cat{C}^{\opcat} \times \cat{D} \). Applying the functors \( \cat{D}(F(\anon*), \anon*) \) and \( \cat{C}(\anon*, G(\anon*)) \) to this morphism and using the naturality of \( \varphi \), we obtain
\begin{equation}\label{eq:def:category_adjunction/d_triangle_proof}
\begin{aligned}
\includegraphics[page=7]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}
Note that \( \varepsilon_{F(A)} \) is a member of \( \cat{D}([F \bincirc G \bincirc F](A), F(A)) \).
Composing the functions in \eqref{eq:def:category_adjunction/d_triangle_proof} in one direction, we obtain
\begin{balign*}
&\phantom{{}={}} \varphi_{A,F(A)}^{-1} \parens[\Bigg]{ \cat{C}\parens[\Big]{ \eta_A, [G \bincirc F](\id_A) } \parens[\Big]{ \varphi_{[G \bincirc F](A),F(A)} (\varepsilon_{F(A)}) } } = \\
&= \varphi_{A,F(A)}^{-1} \parens[\Bigg]{ \parens[\Big]{ \varphi_{[G \bincirc F](A),F(A)} (\varepsilon_{F(A)}) } \bincirc \eta_A } = \\
&= \varphi_{A,F(A)}^{-1} \parens[\Big]{ \id_{[G \bincirc F](A)} \bincirc \eta_A } = \\
&= \id_{F(A)}.
\end{balign*}

Composing the functions in \eqref{eq:def:category_adjunction/d_triangle_proof} in the other direction, we obtain
\begin{equation*}
\cat{D}\parens[\Big]{ F(\eta_A), F(\id_A) } (\varepsilon_{F(A)}) = \varepsilon_{F(A)} \bincirc F(\eta_A).
\end{equation*}
Therefore,
\begin{equation*}
\id_{F(A)} = \varepsilon_{F(A)} \bincirc F(\eta_A),
\end{equation*}
and thus \eqref{eq:def:category_adjunction/d_triangle} commutes. We can similarly prove that \eqref{eq:def:category_adjunction/c_triangle} commutes.

Therefore, \( (F, G, \eta, \varepsilon) \) is a unit-counit adjunction.

\ImplicationSubProof{def:category_adjunction/unit_counit}{def:category_adjunction/hom} Let \( (F, G, \eta, \varepsilon) \) be a unit-counit adjunction. For every pair of objects \( A \in \cat{C} \) and \( X \in \cat{D} \), define the functions
\begin{equation*}
\begin{aligned}
&\varphi_{A,X}: \cat{D}(F(A), X) \to \cat{C}(A, G(X)) \\
&\varphi_{A,X}(g) \coloneqq G(g) \bincirc \eta_A
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
&\psi_{A,X}: \cat{C}(A, G(X)) \to \cat{D}(F(A), X) \\
&\psi_{A,X}(f) \coloneqq \varepsilon_X \bincirc F(f).
\end{aligned}
\end{equation*}

From the naturality of \( \varepsilon \) and from \eqref{eq:def:category_adjunction/d_triangle} it follows that the following diagram commutes:
\begin{equation}\label{eq:def:category_adjunction/varphi_inverse_def}
\begin{aligned}
\includegraphics[page=8]{output/def__category_adjunction.pdf}
\end{aligned}
\end{equation}
Therefore,
\begin{equation*}
g = \varepsilon_X \bincirc \underbrace{[F \bincirc G](g) \bincirc F(\eta_A)}_{F(\varphi_{A,X}(g))} = [\psi_{A,X} \bincirc \varphi_{A,X}](g)
\end{equation*}
and thus \( \psi_{A,X} \) is a left inverse of \( \varphi_{A,X} \). Analogously, the naturality of \( \eta \) and the triangle \eqref{eq:def:category_adjunction/c_triangle} give, for every \( f: A \to G(X) \),
\begin{equation*}
[\varphi_{A,X} \bincirc \psi_{A,X}](f) = G(\varepsilon_X) \bincirc [G \bincirc F](f) \bincirc \eta_A = G(\varepsilon_X) \bincirc \eta_{G(X)} \bincirc f = f,
\end{equation*}
hence \( \psi_{A,X} \) is also a right inverse and \( \varphi_{A,X} \) is invertible.

Since we have already shown that each component \( \varphi_{A,X} \) is a bijective function, it remains to verify the naturality of \( \varphi \) in order to show that it is a natural isomorphism.

Let \( f: B \to A \) be a morphism in \( \cat{C} \) and \( g: X \to Y \) be a morphism in \( \cat{D} \). Fix some morphism \( s: F(A) \to X \). Composing the functions of \eqref{eq:def:category_adjunction/varphi_nat} in one direction, we obtain
\begin{equation}\label{eq:def:category_adjunction/varphi_nat_chase_right}
\varphi_{B, Y}\parens[\Big]{ \cat{D}(F(f), g)(s) } = \varphi_{B, Y}\parens[\Big]{ g \bincirc s \bincirc F(f) } = G(g) \bincirc G(s) \bincirc [G \bincirc F](f) \bincirc \eta_B.
\end{equation}
In the other direction, we have
\begin{equation}\label{eq:def:category_adjunction/varphi_nat_chase_down}
\cat{C}(f, G(g))\parens[\Big]{ \varphi_{A, X}(s) } = \cat{C}(f, G(g))\parens[\Big]{ G(s) \bincirc \eta_A } = G(g) \bincirc G(s) \bincirc \eta_A \bincirc f.
\end{equation}

From the naturality of \( \eta \), we have that \eqref{eq:def:category_adjunction/eta_nat} commutes and hence
\begin{equation*}
\eta_A \bincirc f = [G \bincirc F](f) \bincirc \eta_B.
\end{equation*}
Therefore, \eqref{eq:def:category_adjunction/varphi_nat_chase_right} and \eqref{eq:def:category_adjunction/varphi_nat_chase_down} are equal and, thus, \eqref{eq:def:category_adjunction/varphi_nat} also commutes. This proves the naturality of \( \varphi \).
\end{defproof}

\begin{proposition}\label{thm:category_adjunction_duality}
The functor \( F: \cat{C} \to \cat{D} \) is \hyperref[def:category_adjunction]{left adjoint} to \( G: \cat{D} \to \cat{C} \) if and only if the \hyperref[def:dual_functor]{dual functor} \( F^{\opcat} \) is right adjoint to \( G^{\opcat} \). This is part of the duality principles listed in \fullref{thm:categorical_principle_of_duality}.
\end{proposition}
\begin{proof}
\begin{equation*}
\cat{C^{\opcat}}(G^{\opcat}(X), A) = \cat{C}(A, G(X)) \cong \cat{D}(F(A), X) = \cat{D^{\opcat}}(X, F^{\opcat}(A)).
\end{equation*}
\end{proof}

\begin{definition}\label{def:concrete_category}\mcite[26]{MacLane1994}
A \term{concrete category} is a pair \( (\cat{C}, U) \), where \( \cat{C} \) is a category and \( U: \cat{C} \to \cat{Set} \) is a \hyperref[def:functor_invertibility/faithful]{faithful functor} that gives us a set for any object of \( \cat{C} \). More generally, a \( \cat{D} \)-concrete category is a pair \( (\cat{C}, U) \), where \( U: \cat{C} \to \cat{D} \).

In the context of a concrete category, we call \( U \) a \term{forgetful functor} and any \hyperref[def:category_adjunction]{left adjoint} to \( U \) functor a \term{free functor}. According to Jean-Pierre Marquis in \cite{StanfordPlato:category_theory}, the motivation for this terminology is that free functors build objects that are free from additional restrictions. We list several examples in \fullref{ex:def:category_adjunction}.

The forgetful functor is usually clear from the context, and we identify a concrete category \( (\cat{C}, U) \) with its underlying category \( \cat{C} \). The corresponding free functor, however, often requires a nontrivial but straightforward construction.
\end{definition}

\begin{example}\label{ex:def:category_adjunction}
We list some examples of \hyperref[def:category_adjunction]{category adjunctions}. Note that only some of them are commonly referred to as \enquote{free}.
\begin{thmenum}
\thmitem{ex:def:category_adjunction/set_top} Perhaps the simplest meaningful example of an adjunction is the \hyperref[def:standard_topologies/discrete]{discrete topology} functor \( D: \cat{Set} \to \cat{Top} \), which is left adjoint to the forgetful functor \( U: \cat{Top} \to \cat{Set} \) that maps a small \hyperref[def:topological_space]{topological space} \( (\mscrX, \mscrT) \) to its underlying set \( \mscrX \).

Given a set \( A \) and a topological space \( (\mscrX, \mscrT) \), every function \( s: A \to \mscrX \) is \hyperref[def:global_continuity]{continuous} when \( A \) is endowed with the discrete topology. Conversely, every continuous function is obviously a \hyperref[def:function]{function}. It follows that there is an equality
\begin{equation*}
\cat{Top}\parens[\Big]{ \underbrace{(A, \pow(A))}_{D(A)}, (\mscrX, \mscrT) } = \cat{Set}\parens[\Big]{ A, \mscrX }.
\end{equation*}
Therefore, \( (D, U, \id) \) is a hom-adjunction. Furthermore, \( (D, U, \id, \id) \) is a unit-counit adjunction.
\thmitem{ex:def:category_adjunction/top_set} The \hyperref[def:standard_topologies/discrete]{indiscrete topology} functor \( I: \cat{Set} \to \cat{Top} \) is right adjoint to the same forgetful functor \( U: \cat{Top} \to \cat{Set} \), again with identities for all natural transformations of the adjunction. Therefore, we have
\begin{equation*}
D \dashv U \dashv I.
\end{equation*}

\thmitem{ex:def:category_adjunction/set_cat} We discussed in \fullref{ex:discrete_category_adjunction} the \hyperref[def:discrete_category]{discrete category} functor \( D: \cat{Set} \to \cat{Cat} \). We showed in \fullref{ex:set_discr_cat_isomorphism} that, when restricted to the subcategory \( \cat{DiscrCat} \) rather than \( \cat{Cat} \), \( D \) is an inverse to the forgetful functor \( U \). In the general case, however, this is an adjunction rather than an isomorphism. More precisely, \( D \) is left adjoint to \( U \).

Note that for any functor \( F: \cat{C} \to \cat{D} \), we have \( U(F) \coloneqq F\restr_{\obj(\cat{C})} \). Thus, \( U \) is not only a functor in \( [\cat{Cat}, \cat{Set}] \); it also induces a natural isomorphism between the functors \( \cat{Cat}(D(\anon*), \cat{\anon*}) \) and \( \cat{Set}(\anon*, U(\anon*)) \).

Indeed, fix a small category \( \cat{C} \) and a set \( A \). From our discussion in \fullref{ex:set_discr_cat_isomorphism} it is obvious that the restriction
\begin{equation*}
U: \cat{Cat}(D(A), \cat{C}) \to \cat{Set}(A, U(\cat{C}))
\end{equation*}
is a bijective function.

In order to verify the naturality of the transformation induced by \( U \), we must show that, for any function \( f: B \to A \) and functor \( F: \cat{C} \to \cat{D} \), the following diagram commutes:
\begin{equation}\label{eq:ex:def:category_adjunction/set_cat/u_nat}
\begin{aligned}
\includegraphics[page=1]{output/ex__def__category_adjunction.pdf}
\end{aligned}
\end{equation}
The commutativity of \eqref{eq:ex:def:category_adjunction/set_cat/u_nat} follows from the following: for every functor \( S: D(A) \to \cat{C} \) we have
\begin{equation*}
U(F \bincirc S \bincirc D(f)) = U(F) \bincirc U(S) \bincirc U(D(f)) = U(F) \bincirc U(S) \bincirc f.
\end{equation*}
Therefore, \( (D, U, U) \) is a \hyperref[def:category_adjunction/hom]{hom-adjunction}.

We can also explicitly define a unit-counit adjunction. The unit \( \eta: \id_{\cat{Set}} \Rightarrow U \bincirc D \) is simply the identity. The counit is slightly more interesting. Given a small category \( \cat{C} \), applying \( D \bincirc U \) gives us the subcategory consisting only of the objects and identity morphisms of \( \cat{C} \). Then the counit \( \varepsilon: D \bincirc U \Rightarrow \id_{\cat{Cat}} \) is simply the inclusion functor \( \Iota \) from this subcategory to \( \cat{C} \). The triangles
\begin{equation}\label{eq:ex:def:category_adjunction/set_cat/triangles}
\begin{aligned}
\includegraphics[page=2]{output/ex__def__category_adjunction.pdf}
\quad\quad
\includegraphics[page=3]{output/ex__def__category_adjunction.pdf}
\end{aligned}
\end{equation}
corresponding to \eqref{eq:def:category_adjunction/d_triangle} and \eqref{eq:def:category_adjunction/c_triangle} obviously commute. The quadruple \( (D, U, \eta, \varepsilon) \) is a \hyperref[def:category_adjunction/unit_counit]{unit-counit adjunction}.

\thmitem{ex:def:category_adjunction/quiv_cat} The left adjoint of the forgetful functor \( U: \cat{Cat} \to \cat{Quiv} \) is the free category functor defined in \fullref{def:quiver_free_category}.
We denote this functor by \( F: \cat{Quiv} \to \cat{Cat} \). We can define the family of functions
\begin{equation}\label{eq:ex:def:category_adjunction/quiv_cat/varphi_family}
\begin{aligned}
&\varphi: \cat{Cat}(F(\anon*), \cat{\anon*}) \Rightarrow \cat{Quiv}(\anon*, U(\cat{\anon*})), \\
&\varphi_{Q, \cat{C}}(S) \coloneqq \parens[\Big]{ v \mapsto S(v), a \mapsto S(\iota(a)) }.
\end{aligned}
\end{equation}
For every functor \( S: F(Q) \to \cat{C} \), \( \varphi_{Q, \cat{C}} \) defines a quiver homomorphism that restricts \( S \) to \hyperref[def:quiver_path]{quiver paths} containing only one arc. Formally, \( \iota \) is the canonical embedding
\begin{equation*}
\begin{aligned}
&\iota: Q \to [U \bincirc F](Q) \\
&\iota_V(v) \coloneqq v, &\iota_A(a) \coloneqq (h(a), a).
\end{aligned}
\end{equation*}
We will later see that \( \iota \) is the unit of a unit-counit adjunction.

Now, from \eqref{eq:def:quiver_free_category/functor_from_homomorphism}, it is clear that the free category functor \( F \), when restricted to the set of quiver homomorphisms \( \cat{Quiv}(Q, U(\cat{C})) \), is the two-sided inverse of \( \varphi_{Q, \cat{C}} \).

We will show that \( \varphi \) is a natural transformation. Fix a functor \( G: \cat{C} \to \cat{D} \) and a homomorphism \( (g_V, g_A): Q \to R \). We must show that the following diagram commutes:
\begin{equation}\label{eq:ex:def:category_adjunction/quiv_cat/varphi_nat}
\begin{aligned}
\includegraphics[page=4]{output/ex__def__category_adjunction.pdf}
\end{aligned}
\end{equation}
That is, for every functor \( S: F(R) \to \cat{C} \), we must show
\begin{equation*}
\varphi_{Q, \cat{D}}(G \bincirc S \bincirc F(g_V, g_A)) = U(G) \bincirc \varphi_{R, \cat{C}}(S) \bincirc (g_V, g_A).
\end{equation*}
This is also clear from \eqref{eq:def:quiver_free_category/functor_from_homomorphism}.

Therefore, \( (F, U, \varphi) \) is a \hyperref[def:category_adjunction/hom]{hom-adjunction}. Furthermore, the canonical embedding \( \iota \) defined above, when parameterized by \( Q \), is the unit of a unit-counit adjunction.

The counit \( \varepsilon: F \bincirc U \Rightarrow \id_{\cat{Cat}} \) is more involved. As discussed in \fullref{def:quiver_free_category}, for every finite path \( p \) in the quiver \( U(\cat{C}) \) with arcs \( a_1, \ldots, a_n \), the counit component \( \varepsilon_{\cat{C}} \) simply \enquote{evaluates} \( p \) as
\begin{equation*}
a_n \bincirc a_{n-1} \bincirc \cdots \bincirc a_1.
\end{equation*}
Since the embedding only produces paths with a single arc, the adjunction triangles commute:
\begin{equation}\label{eq:ex:def:category_adjunction/quiv_cat/triangles}
\begin{aligned}
\includegraphics[page=5]{output/ex__def__category_adjunction.pdf}
\quad\quad
\includegraphics[page=6]{output/ex__def__category_adjunction.pdf}
\end{aligned}
\end{equation}

\thmitem{ex:def:category_adjunction/multgph_quiv} In \fullref{def:quiver/forgetful}, we have defined the \hyperref[def:concrete_category]{forgetful functor} \( U: \hyperref[def:category_of_small_quivers]{\cat{Quiv}} \to \hyperref[def:undirected_multigraph]{\cat{MultGph}} \). Given a \hyperref[def:grothendieck_universe]{Grothendieck universe} \( \mscrU \) and a choice function \( c \) for the family of two-element sets in \( \mscrU \), we have an orientation functor \( O_c: \cat{MultGph} \to \cat{Quiv} \) defined in \fullref{def:multigraph_orientation}. It may seem that, for a fixed choice function, \( O_c \) is \hyperref[def:category_adjunction]{left adjoint} to \( U \).
This is not true, however, as shown in \cref{fig:ex:def:category_adjunction/multgph_quiv}.
\begin{figure}
\hfill
\includegraphics[page=7]{output/ex__def__category_adjunction.pdf}
\hfill
\includegraphics[page=8]{output/ex__def__category_adjunction.pdf}
\hfill\hfill
\caption{Two undirected homomorphisms from \( G \) to \( U(Q) \), denoted using dashed lines, only one of which is a quiver homomorphism from \( O_c(G) \) to \( Q \)}
\label{fig:ex:def:category_adjunction/multgph_quiv}
\end{figure}
\end{thmenum}
\end{example}

\begin{definition}\label{def:adjoint_equivalence}
We call the quadruple \( (F, G, \eta, \varepsilon) \) with signature \eqref{eq:def:category_equivalence/signature} an \term{adjoint equivalence} if it is both an \hyperref[def:category_adjunction]{adjunction} and an \hyperref[def:category_equivalence]{equivalence}.
\end{definition}

\begin{proposition}\label{thm:adjoint_equivalence}
Let \( (F, G, \eta, \varepsilon) \) be a \hyperref[def:category_equivalence]{category equivalence} between \( \cat{C} \) and \( \cat{D} \). There exists a natural isomorphism \( \zeta: \id_{\cat{C}} \Rightarrow G \bincirc F \) such that \( (F, G, \zeta, \varepsilon) \) is an \hyperref[def:adjoint_equivalence]{adjoint equivalence}.
\end{proposition}
\begin{proof}
From \fullref{thm:equivalence_induces_fully_faithful_and_essentially_surjective_functor} it follows that \( F \) is fully faithful and essentially surjective. We will now use the same trick as in the end of the proof of \fullref{thm:fully_faithful_and_essentially_surjective_functor_induces_equivalence} to define \( \zeta \).

Since \( F \) is fully faithful, there is a bijective function
\begin{equation*}
\varphi: \cat{D}\parens[\Big]{ F(A), [F \bincirc G \bincirc F](A) } \to \cat{C}\parens[\Big]{ A, [G \bincirc F](A) }.
\end{equation*}
Hence, we can define
\begin{equation*}
\begin{aligned}
&\zeta: \id_{\cat{C}} \Rightarrow G \bincirc F, \\
&\zeta_A \coloneqq \varphi(\varepsilon_{F(A)}^{-1})
\end{aligned}
\end{equation*}
so that \( F(\zeta_A) = \varepsilon_{F(A)}^{-1} \). By \fullref{thm:def:functor_invertibility/properties/fully_faithful_reflects_isomorphisms}, \( \zeta_A \) is also an isomorphism.

As in \fullref{thm:fully_faithful_and_essentially_surjective_functor_induces_equivalence}, we use \fullref{thm:commutative_diagrams_preserved_and_reflected} and the naturality of \( \varepsilon \) to prove that \eqref{eq:thm:fully_faithful_and_essentially_surjective_functor_induces_equivalence/varepsilon_source_nat} implies \eqref{eq:thm:fully_faithful_and_essentially_surjective_functor_induces_equivalence/varepsilon_image_nat} (with \( \eta \) replaced by \( \zeta \)).

Therefore, \( \zeta \) is a natural isomorphism and the quadruple \( (F, G, \zeta, \varepsilon) \) is an equivalence of categories.
\end{proof}

\begin{proposition}\label{thm:functor_adjoint_uniqueness}
If a functor has two \hyperref[def:category_adjunction]{left adjoints} (resp. right adjoints), then there exists a unique natural isomorphism between them. We say that left adjoints (resp. right adjoints) are unique up to a natural isomorphism.
\end{proposition}
\begin{proof}
We will first prove the statement for left adjoints. Suppose that \( (F', G, \eta', \varepsilon') \) and \( (F^\dprime, G, \eta^\dprime, \varepsilon^\dprime) \) are two unit-counit adjunctions.
\SubProof{Proof of existence of isomorphism}\mcite{MathSE:left_adjoint_uniqueness} We can utilize the naturality of the units \( \eta' \) and \( \eta^\dprime \) and counits \( \varepsilon' \) and \( \varepsilon^\dprime \) to show that the following diagram commutes: \begin{equation}\label{eq:thm:functor_adjoint_uniqueness/existence} \begin{aligned} \includegraphics[page=1]{output/thm__functor_adjoint_uniqueness.pdf} \end{aligned} \end{equation} By the commuting triangle \eqref{eq:def:category_adjunction/d_triangle}, all paths from \( F'(A) \) to \( F'(A) \) above are identities. The bottom-most path in \eqref{eq:thm:functor_adjoint_uniqueness/existence} justifies defining the natural transformation \begin{equation*} \begin{aligned} &\alpha: F' \Rightarrow F^\dprime \\ &\alpha_A \coloneqq \varepsilon'_{F^\dprime(A)} \bincirc F'(\eta_A^\dprime). \end{aligned} \end{equation*} Then \( \alpha_A \) is an isomorphism for every object \( A \) in \( \cat{C} \) with inverse \( \varepsilon_{F'(A)}^\dprime \bincirc F^\dprime(\eta_A') \). Therefore, it is a natural isomorphism from \( F' \) to \( F^\dprime \). \SubProof{Proof of uniqueness of isomorphism} Suppose that \( \beta: F' \Rightarrow F^\dprime \) is another natural isomorphism. Then, by the commuting triangle \eqref{eq:def:category_adjunction/d_triangle}, the following diagram also commutes: \begin{equation}\label{eq:thm:functor_adjoint_uniqueness/uniqueness} \begin{aligned} \includegraphics[page=2]{output/thm__functor_adjoint_uniqueness.pdf} \end{aligned} \end{equation} Therefore, \begin{equation*} \beta_A = \varepsilon^\dprime_{F^\dprime(A)} \bincirc F^\dprime(\eta^\dprime_A) \bincirc \alpha_A \reloset {\eqref{eq:def:category_adjunction/d_triangle}} = \alpha_A. \end{equation*} This finishes the proof for left adjoints. The statement for right adjoints is \hyperref[thm:categorical_principle_of_duality]{dual}. If \( G' \) and \( G^\dprime \) are two right adjoints to \( F \), then by \fullref{thm:category_adjunction_duality}, \( G'^{\opcat} \) and \( {G^\dprime}^{\opcat} \) are left adjoints of \( F^{\opcat} \) and are thus isomorphic. Then by \fullref{thm:morphism_invertibility_duality}, \( G' \) and \( G^\dprime \) are also isomorphic. \end{proof} \begin{proposition}\label{thm:universal_objects_as_adjunctions} Fix a category \( \cat{C} \). We can characterize the universal objects in \( \cat{C} \) from \fullref{def:universal_objects} via adjunctions with the \hyperref[def:universal_categories]{terminal category} \( \cat{1} \). Let \( \Delta^{\cat{1}}: \cat{C} \to \cat{1} \) be the \hyperref[def:diagonal_functor]{constant functor} into \( \cat{1} \). \begin{thmenum} \thmitem{thm:universal_objects_as_adjunctions/initial} The object \( I \) of \( \cat{C} \) is \hyperref[def:universal_objects]{initial} if and only if it is (the unique value of) a functor \( \cat{1} \to \cat{C} \) that is left adjoint to \( \Delta^{\cat{1}} \). In particular, the uniqueness proved in \fullref{thm:def:universal_objects/properties/initial} follows from \fullref{thm:functor_adjoint_uniqueness}. \thmitem{thm:universal_objects_as_adjunctions/terminal} \hyperref[thm:categorical_principle_of_duality]{Dually}, the \hyperref[def:universal_objects]{terminal objects} are exactly the (unique values of) functors \( \cat{1} \to \cat{C} \) that are right adjoint to \( \Delta^{\cat{1}} \). \end{thmenum} \end{proposition} \begin{proof} We will only prove \fullref{thm:universal_objects_as_adjunctions/initial} since the other case is \hyperref[thm:categorical_principle_of_duality]{dual}. \SufficiencySubProof Let \( I \) be an initial object in \( \cat{C} \).
We can then regard it as a functor \( F: \cat{1} \to \cat{C} \). We define the natural transformations \begin{equation*} \begin{aligned} &\eta: \id_{\cat{1}} \Rightarrow \Delta^{\cat{1}} \bincirc F \\ &\eta_{\cat{0}} \coloneqq \id_{\cat{0}} \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &\varepsilon: F \bincirc \Delta^{\cat{1}} \Rightarrow \id_{\cat{C}} \\ &\varepsilon_A \T{is the unique morphism} I \to A. \end{aligned} \end{equation*} Since \( I \) has a unique morphism into any other object of \( \cat{C} \), for every morphism \( f: A \to B \), the following diagram commutes: \begin{equation}\label{eq:thm:universal_objects_as_adjunctions/sufficiency_nat} \begin{aligned} \includegraphics[page=1]{output/thm__universal_objects_as_adjunctions.pdf} \end{aligned} \end{equation} It follows that both \( \eta \) and \( \varepsilon \) are natural transformations. Furthermore, they trivially satisfy the triangle diagrams \eqref{eq:def:category_adjunction/d_triangle} and \eqref{eq:def:category_adjunction/c_triangle}. Hence, \( (F, \Delta^{\cat{1}}, \eta, \varepsilon) \) is a \hyperref[def:category_adjunction/unit_counit]{unit-counit adjunction}. \NecessitySubProof Conversely, suppose that \( (F, \Delta^{\cat{1}}, \eta, \varepsilon) \) is a unit-counit adjunction. Let \( I \coloneqq F(\cat{0}) \). Then \( \varepsilon_A \) is a morphism from \( I \) to \( A \). By \eqref{eq:def:category_adjunction/d_triangle}, \( \varepsilon_I = \id_I \) since the following diagram commutes: \begin{equation}\label{eq:thm:universal_objects_as_adjunctions/d_triangle} \begin{aligned} \includegraphics[page=2]{output/thm__universal_objects_as_adjunctions.pdf} \end{aligned} \end{equation} Suppose that \( \zeta \) is another morphism from \( I \) to \( A \). The naturality of \( \varepsilon \) implies that, for the morphism \( \zeta: I \to A \), the following diagram commutes: \begin{equation}\label{eq:thm:universal_objects_as_adjunctions/necessity_nat} \begin{aligned} \includegraphics[page=3]{output/thm__universal_objects_as_adjunctions.pdf} \end{aligned} \end{equation} The upper left triangle in \eqref{eq:thm:universal_objects_as_adjunctions/necessity_nat} is \eqref{eq:thm:universal_objects_as_adjunctions/d_triangle}. We conclude that \( \zeta = \varepsilon_A \) and, since \( A \) was arbitrary, that \( \varepsilon_A \) is the unique morphism from \( I \) to \( A \). Hence, \( I \) is initial. \end{proof} \begin{remark}\label{rem:left_and_right_adjoint_not_equivalence} We discussed in \fullref{ex:def:universal_objects/grp} that \enquote{the} trivial group \( \set{ e } \) is a \hyperref[def:universal_objects/zero]{zero object} of \( \cat{Grp} \). By \fullref{thm:universal_objects_as_adjunctions}, this object induces a functor that is both a left adjoint and a right adjoint of \( \Delta^{\cat{1}} \). Nevertheless, the categories \( \cat{Grp} \) and \( \cat{1} \) are not \hyperref[def:category_equivalence]{equivalent}. \end{remark} \begin{remark}\label{rem:universal_mapping_property} We will now regard adjoint functors as a way to \enquote{construct} new objects. Let \( (F, G, \iota, \pi) \) be a \hyperref[def:category_adjunction/unit_counit]{unit-counit adjunction} between the categories \( \cat{C} \) and \( \cat{D} \).
In the current context, especially in connection with \hyperref[def:category_of_cones/limit]{limits} and \hyperref[def:category_of_cones/colimit]{colimits}, we will call the components of the counit \( \pi: F \bincirc G \Rightarrow \id_{\cat{D}} \) --- \term{projections}, and the components of the unit \( \iota: \id_{\cat{C}} \Rightarrow G \bincirc F \) --- \term{coprojections}. Take objects \( A \) in \( \cat{C} \) and \( X \) in \( \cat{D} \) and a morphism \( f: A \to G(X) \). We want to obtain a morphism \( \widetilde{f}: F(A) \to X \) for which the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/c_triangle} \begin{aligned} \includegraphics[page=1]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} From the naturality of \( \iota \) and from the triangle diagram \eqref{eq:def:category_adjunction/c_triangle} it follows that the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/f_tilde_existence} \begin{aligned} \includegraphics[page=2]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} It is clear from \eqref{eq:rem:universal_mapping_property/f_tilde_existence} that \begin{equation*} G(\widetilde{f}) = G(\pi_X) \bincirc [G \bincirc F](f) = G(\pi_X \bincirc F(f)). \end{equation*} Furthermore, \( \widetilde{f} \) is uniquely determined. From the naturality of \( \pi \) and the triangle diagram \eqref{eq:def:category_adjunction/d_triangle} it follows that the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/f_tilde_uniquness} \begin{aligned} \includegraphics[page=3]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} Therefore, \begin{equation*} \widetilde{f} = \pi_X \bincirc F(f). \end{equation*} Taking into account that the functor \( F \) itself is unique up to a unique natural isomorphism, as per \fullref{thm:functor_adjoint_uniqueness}, we have proved the following statement: \begin{displayquote} For every object \( A \) in \( \cat{C} \), there exist an object \( F(A) \) in \( \cat{D} \) and a canonical coprojection map \( \iota_A: A \to [G \bincirc F](A) \), unique up to a unique isomorphism, satisfying the following property, called a \term{universal mapping property}: \begin{displayquote} For every object \( X \) in \( \cat{D} \) and every map \( f: A \to G(X) \) in \( \cat{C} \), there exists a unique map \( \widetilde{f}: F(A) \to X \) in \( \cat{D} \) such that the diagram \eqref{eq:rem:universal_mapping_property/c_triangle} commutes. \end{displayquote} \end{displayquote} Intuitively, this universal mapping property states that any map (morphism) with domain \( A \) in \( \cat{C} \) can be transformed into a map with domain \( F(A) \) in \( \cat{D} \). The statement becomes more meaningful when we regard \( G: \cat{D} \to \cat{C} \) as a \hyperref[def:concrete_category]{forgetful functor}. In this case, every object of \( \cat{D} \) is regarded as an object of \( \cat{C} \), and we write \( X \) rather than \( G(X) \).
The universal mapping property then becomes: \begin{displayquote} For every object \( A \) in \( \cat{C} \), there exist an object \( F(A) \) in \( \cat{D} \) and a canonical coprojection map \( \iota_A: A \to F(A) \), unique up to a unique isomorphism, satisfying the following universal mapping property: \begin{displayquote} For every object \( X \) in \( \cat{D} \) and every map \( f: A \to X \) in \( \cat{C} \), there exists a unique map \( \widetilde{f}: F(A) \to X \) in \( \cat{D} \) such that the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/c_triangle_forgetful} \begin{aligned} \includegraphics[page=4]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} \end{displayquote} \end{displayquote} In \fullref{def:concrete_category} we mentioned that we will call the left adjoint of a forgetful functor a free functor. Universal mapping properties allow characterizing certain \enquote{free constructions}, such as the free groups defined in \fullref{def:free_group}, without explicitly building a free functor and proving that it is a left adjoint. Indeed, for every suitable object and map, the universal mapping property explicitly provides the bijections \( \varphi \) of a hom-adjunction, and the commutativity of the triangle \eqref{eq:rem:universal_mapping_property/c_triangle} ensures that \( \varphi \) is natural. Universal mapping properties of this form are used for \hyperref[def:category_of_cones/colimit]{colimits} --- see \fullref{rem:limit_universal_mapping_property}. Of course, there is a \hyperref[thm:categorical_principle_of_duality]{dual} universal mapping property: \begin{displayquote} For every object \( X \) in \( \cat{D} \) there exist an object \( G(X) \) in \( \cat{C} \) and a canonical projection map \( \pi_X: [F \bincirc G](X) \to X \), unique up to a unique isomorphism, satisfying the following property, called a \term{universal mapping property}: \begin{displayquote} For every object \( A \) in \( \cat{C} \) and every map \( g: F(A) \to X \) in \( \cat{D} \), there exists a unique map \( \widetilde{g}: A \to G(X) \) in \( \cat{C} \) such that the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/d_triangle} \begin{aligned} \includegraphics[page=5]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} \end{displayquote} \end{displayquote} In this case, we can regard \( F \) as a forgetful functor and \( G \) as a dual, \enquote{cofree}, functor to obtain the following: \begin{displayquote} For every object \( X \) in \( \cat{D} \) there exist an object \( G(X) \) in \( \cat{C} \) and a canonical projection map \( \pi_X: G(X) \to X \), unique up to a unique isomorphism, satisfying the following property, called a \term{universal mapping property}: \begin{displayquote} For every object \( A \) in \( \cat{C} \) and every map \( g: A \to X \) in \( \cat{D} \), there exists a unique map \( \widetilde{g}: A \to G(X) \) in \( \cat{C} \) such that the following diagram commutes: \begin{equation}\label{eq:rem:universal_mapping_property/d_triangle_forgetful} \begin{aligned} \includegraphics[page=6]{output/rem__universal_mapping_property.pdf} \end{aligned} \end{equation} \end{displayquote} \end{displayquote} Universal mapping properties of this form are used for \hyperref[def:category_of_cones/limit]{limits} --- see \fullref{rem:limit_universal_mapping_property}. \end{remark}
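\begin{example}\label{ex:rem:universal_mapping_property/free_grp}
As a concrete instance of \fullref{rem:universal_mapping_property} (a standard specialization, stated here only for illustration), take \( \cat{C} \coloneqq \cat{Set} \) and \( \cat{D} \coloneqq \cat{Grp} \), and let \( G \) be the forgetful functor. The first universal mapping property then becomes the familiar property of the \hyperref[def:free_group]{free group} \( F(A) \): for every set \( A \) and every function \( f: A \to X \) into a group \( X \), there exists a unique group homomorphism \( \widetilde{f}: F(A) \to X \) such that
\begin{equation*}
  \widetilde{f} \bincirc \iota_A = f,
\end{equation*}
where the coprojection \( \iota_A: A \to F(A) \) sends each element of \( A \) to the corresponding single-letter word.
\end{example}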
{ "alphanum_fraction": 0.6884089794, "avg_line_length": 62.7406807131, "ext": "tex", "hexsha": "ae97dfda213b7d71e419557ea030f965a8666790", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "v--/notebook", "max_forks_repo_path": "src/category_adjunctions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "v--/notebook", "max_issues_repo_path": "src/category_adjunctions.tex", "max_line_length": 617, "max_stars_count": null, "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "v--/notebook", "max_stars_repo_path": "src/category_adjunctions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12390, "size": 38711 }
\documentclass[a4paper, 12pt]{article} \usepackage[utf8]{inputenc} \usepackage[american]{babel} \usepackage[margin=1in]{geometry} \usepackage{mathtools} \usepackage{fancyhdr} \setlength{\headheight}{15.2pt} \pagestyle{fancy} \lhead[]{Joseph Petitti} \rhead[]{Homework 1} \chead[]{Database Systems II} \begin{document} \section*{Problem 1} \subsection*{Q1} \[ \text{Disk capacity} = 10 \text{ surfaces} \times 8000 \text{ tracks} \times 208 \text{ sectors} \times 512 \text{ Bytes} \] \[ \text{Disk capacity} = 8519680000 \text{ B} = 8320000 \text{ KB} = 8125 \text{ MB} = \textbf{7.935 GB} \] \subsection*{Q2} \[ 8519680000 \text{ B} \div 8192 \text{ Bytes per block} = \textbf{1040000 blocks} \] \subsection*{Q3} \begin{table}[h] \centering \begin{tabular}{l c c c c c c r} & Seek time & & Rotational latency & & Transfer time & & Total \\ \hline Minimum & 0 ms & + & 0 ms & + & 0.8 ms & = & \textbf{0.8 ms} \\ Maximum & 17 ms & + & 11.1 ms & + & 0.8 ms & = & \textbf{28.9 ms} \\ Average & 9 ms & + & 5.6 ms & + & 0.8 ms & = & \textbf{15.4 ms} \\ \end{tabular} \end{table} \subsection*{Q4} \begin{itemize} \item $ 8192 \text{ Bytes per block} \div 128 \text{ Bytes per record} = \textbf{64 records per block} $ \item $ 100000 \text{ records} \div 64 \text{ records per block} = \textbf{1563 blocks} $ \item $ 1563 \text{ blocks} \times 16 \text{ sectors per block} = \textbf{25008 sectors} $ \end{itemize} \subsection*{Q5} \[ 5.6 \text{ ms (initial half rotation)} + 1.2 \text{ ms (seek time)} + 8 \text{ ms (transfer time)} = \textbf{14.8 ms} \] \subsection*{Q6} \[ 208 \text{ sectors per track} \div 16 \text{ sectors per block} = 13 \text{ blocks per track} \] \[ 13 \text{ blocks per track} \times 10 \text{ surfaces} = \textbf{130 blocks per cylinder} \] \subsection*{Q7} The most efficient way to store blocks in a file, in order to speed up the sequential read of that file, is to start with $B_1$, $B_2$, ... $B_{10}$ aligned under each other on all ten surfaces of the innermost cylinder. This way, once the disk arms are in position the first ten blocks can all be read in 0.8 ms. Then the next ten blocks should be in the next sixteen sectors per surface in the direction of rotation on the same cylinder. This should continue until all 130 blocks on the innermost cylinder are filled. Then, the next ten blocks should be written on the second-to-innermost cylinder approximately 10\% of the circumference of the disk away from where $B_{130}$ was written. Once the disk arms are done reading $B_{130}$, they need to move out to the second-to-innermost track, which takes about 1.002 ms. In this time, the disk will have rotated by about 9.027\%, so if $B_{131}$ starts about 10\% of the disk circumference away from $B_{130}$ the disk arms will arrive at the right track just in time to start reading it. Continue in the same spiral pattern out from the center of the disk until $B_{\text{last}}$. The average time to read this file is, in milliseconds: \[ 14.6 + \frac{0.8 n}{10} + \left ( \left \lfloor{\frac{n}{130}}\right \rfloor \times 1.002 \right ) \] where $n$ is the number of blocks in the file. The 14.6 ms accounts for the initial 5.6 ms of rotational latency and 9 ms of seek time. The time to read each block is 0.8 ms, multiplied by $n$ divided by 10 (because ten blocks are read at a time). The final component is the floor of $ \frac{n}{130} $, which is the number of cylinders necessary to store the file, multiplied by the 1.002 ms it takes to seek to the next cylinder while reading.
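As a quick sanity check (not part of the required solution), the formula can be evaluated with a few lines of C; the value printed for $n = 1563$ matches the computation below.

\begin{verbatim}
/* sanity check for the average read time formula above */
#include <stdio.h>

double read_ms(int n)
{
    return 14.6               /* initial seek + half rotation      */
         + 0.8 * n / 10.0     /* 0.8 ms per ten-block stripe       */
         + (n / 130) * 1.002; /* integer division = floor(n / 130) */
}

int main(void)
{
    printf("%.1f ms\n", read_ms(1563)); /* prints 151.7 ms */
    return 0;
}
\end{verbatim}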
A file with 100,000 records, with each record being 128 bytes, would have 1,563 blocks. This file's average read time would be: \[ 14.6 + \frac{(0.8 \times 1563)}{10} + \left ( \left \lfloor{\frac{1563}{130}}\right \rfloor \times 1.002 \right ) = \textbf{151.7 ms} \] \section*{Problem 2} \begin{table}[h] \centering \begin{tabular}{ l c | l c } \multicolumn{2}{c}{4-byte} & \multicolumn{2}{c}{8-byte} \\ Field & Index & Field & Index \\ \hline Header & 0 & Header & 0 \\ ID & 8 & ID & 8 \\ Name & 12 & Name & 16 \\ Age & 40 & Age & 48 \\ DoB & 44 & DoB & 56 \\ Gender & 56 & Gender & 72 \\ Address & 60 & Address & 80 \\ State & 120 & State & 144 \end{tabular} \end{table} \subsection*{Q1} Each record would be 128 bytes. \subsection*{Q2} Each record would be 152 bytes. \subsection*{Q3} \subsubsection*{4-Byte Boundaries} \[ 4096 \text{ B} = 64 \text{ B} + ( 128 \text{ B} \times n ) \] \[ n = \textbf{31 records} \] \subsubsection*{8-Byte Boundaries} \[ 4096 \text{ B} = 64 \text{ B} + ( 152 \text{ B} \times n ) \] \[ n = \textbf{26 records} \] \end{document}
{ "alphanum_fraction": 0.6629094003, "avg_line_length": 30.335483871, "ext": "tex", "hexsha": "2b5aeadaa6bd547e8bfa134c57423669269542a5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "75694613f3eb939edfac07617dc60cbe1b629919", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jojonium/CS-4432-Database-Systems-II", "max_forks_repo_path": "homework/hw1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "75694613f3eb939edfac07617dc60cbe1b629919", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jojonium/CS-4432-Database-Systems-II", "max_issues_repo_path": "homework/hw1.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "75694613f3eb939edfac07617dc60cbe1b629919", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jojonium/CS-4432-Database-Systems-II", "max_stars_repo_path": "homework/hw1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1633, "size": 4702 }
\documentclass[a4paper]{article} %% Language and font encodings \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[T1]{fontenc} %% Sets page size and margins \usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} %% Useful packages \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \title{Provisioning Jig} \author{Jonathan Whitaker} \begin{document} \maketitle % \begin{abstract} % Your abstract. % \end{abstract} \section{Introduction} The goal of this project was to update and improve the provisioning jig used in the white lab to program and update the debuggers and micros used in various 'Intro to Micros' courses. This involved creating a new version, without the need for a dedicated PC to run the ST-Link update utility and with a smaller, more robust form factor. In this report I'll detail the state of the project when I began (including instructions on running with the original setup should the new version fail or break), the changes and improvements made, how each section works and some recommendations going forward. \section{Where we are} \section{Where we were} ... \section{The current setup} \subsection{Overview} \subsection{The LCD Board} \subsection{The Main Board} The main board has spring-loaded contacts and female headers to receive a blank target or debugger board, with or without the header pins soldered on. An ST-Link is connected to one USB port, and is used to flash both the target and the debugger. SWCLK is connected to both the target and the debugger contacts, but SWDIO and power are only connected to one or the other, depending on the position of the switch. The debugger socket is also connected up to the second USB port to allow for firmware updating. For debugger boards with header pins already soldered, two wires hang off the side of the board, to be connected to the test points for flashing. Purple is SWCLK (TP2) and green is SWDIO (TP1). On the board, black wires are ground, orange are VTarget and yellow are 5V or VDebug. A third row of pin headers (closest to the edge of the board) above where the switch connects runs to the LCD Board, providing power to that board (5V) and allowing the Pi to read the position of the switch. A rough schematic is attached in Appendix X. \subsection{The Code} \subsection{The Firmware Update Process} Without the right firmware, the debugger board is nothing more than an STM32F1 and some jellybeans. What turns it into a fancy debugging tool is ST's proprietary firmware. We have (thanks to an NDA and a great deal of stress) access to a hex file from ST so that we can make our own ST-Links. Once this is flashed, the firmware needs to be uploaded with the update utility that ST provides publicly on their website. Sadly, this isn't available for the Raspberry Pi architecture, hence the need for a PC in the old jig. To get around this, I reverse engineered the firmware update process by inspecting USB traffic while the update utility was running and decompiling their executable. Some of the early work was done by Taylor Killian, and extended by XXX [REFS], but neither went very deep since they just used the provided utility to flash their modified firmware. Here's how it works: the firmware files are stored within the updater's JAR file, but encrypted to protect ST's business. When the updater is run, the relevant firmware file is decrypted (the key is XXX).
The debugger sends a (possibly unique) key, which is encrypted with a different key (XXX), and then those encrypted bytes are used as a key to re-encrypt the firmware, which is then segmented and sent to the device. The USB protocol for loading the encrypted firmware is as follows (a rough sketch of this loop in code follows the list): \begin{itemize} \item The host tells the debugger to enter firmware update mode (command XXX) and writes to a couple of memory locations. \item The host sends the command F308, and receives 20 bytes in response (of these, X are the device ID and Y are (apparently) dynamically generated somehow). The first four and last 12 of these bytes are what we encrypt and use as the key to encrypt the firmware before we send it. \item The firmware is sent in 1024-byte chunks. For each chunk, the host needs to tell the chip where to write the data. First, it sends a command that looks something like XXXXXXXXXXXX. AAA is a checksum, BB is a counter (which cycles through 02, 03 and 04) and C... Next, it sends a packet of the form 21XXXXXXXX. This corresponds to the address XXXXXXXX. The debugger then knows that the next packet it receives will need to be written to said address. \item In between all this, the host uses the command F303XXX to check if the debugger is ready to receive the next commands. A response of XXXXX indicates the device is ready, whereas XXXXXX means it is still busy. \item Once all the firmware chunks have been sent, a version string is encrypted and sent in the same way; the firmware version is then read back and the process is complete. \end{itemize} For a deeper look at how this works, inspect the USB logs and code in the github folder. ...
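To make the chunk-upload loop concrete, here is a rough sketch in C using libusb-1.0. It is only a sketch: the endpoint numbers, the 16-byte command padding and the address byte order are assumptions on my part, and every value that is redacted (XXX) above is a placeholder here too, so pull the real bytes from the USB logs before trusting it.

\begin{verbatim}
/* Sketch of the firmware upload loop described above (libusb-1.0).
 * Placeholders stand in for the bytes redacted (XXX) in the text.
 * Build with: gcc -c sketch.c $(pkg-config --cflags libusb-1.0) */
#include <libusb-1.0/libusb.h>
#include <string.h>

#define EP_OUT 0x02 /* assumed bulk endpoints; verify with lsusb -v */
#define EP_IN  0x81

static int xfer(libusb_device_handle *h, unsigned char ep,
                unsigned char *buf, int len)
{
    int moved = 0;
    return libusb_bulk_transfer(h, ep, buf, len, &moved, 1000);
}

static int wait_ready(libusb_device_handle *h)
{
    unsigned char cmd[16] = { 0xF3, 0x03 }; /* F303 + redacted bytes */
    unsigned char status[2];
    do {
        if (xfer(h, EP_OUT, cmd, sizeof cmd)) return -1;
        if (xfer(h, EP_IN, status, sizeof status)) return -1;
    } while (0 /* placeholder: loop while status matches the "busy"
                  pattern recorded in the USB logs */);
    return 0;
}

int upload_firmware(libusb_device_handle *h,
                    unsigned char *fw, int fwlen, unsigned int base)
{
    unsigned char cmd[16], blob[20];

    /* F308: fetch the 20-byte blob; bytes 0-3 and 8-19 are encrypted
       and become the session key for the firmware (done elsewhere). */
    memset(cmd, 0, sizeof cmd);
    cmd[0] = 0xF3; cmd[1] = 0x08;
    if (xfer(h, EP_OUT, cmd, sizeof cmd)) return -1;
    if (xfer(h, EP_IN, blob, sizeof blob)) return -1;

    for (int off = 0; off < fwlen; off += 1024) {
        int chunk = (fwlen - off < 1024) ? fwlen - off : 1024;

        /* the checksum/counter command (the AAA/BB packet) would go
           here; its exact bytes are redacted in the text above */

        /* 0x21 + write address; little-endian order is a guess */
        unsigned int addr = base + off;
        memset(cmd, 0, sizeof cmd);
        cmd[0] = 0x21;
        cmd[1] = addr & 0xFF;
        cmd[2] = (addr >> 8) & 0xFF;
        cmd[3] = (addr >> 16) & 0xFF;
        cmd[4] = (addr >> 24) & 0xFF;
        if (xfer(h, EP_OUT, cmd, sizeof cmd)) return -1;

        /* the (re-encrypted) chunk itself, then poll F303 */
        if (xfer(h, EP_OUT, fw + off, chunk)) return -1;
        if (wait_ready(h)) return -1;
    }
    return 0;
}
\end{verbatim}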
\end{document}
{ "alphanum_fraction": 0.77332705, "avg_line_length": 57.7414965986, "ext": "tex", "hexsha": "59024092cf82e70183d975e49ee2745f42f41dab", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d2b8b5738651b9f2885beebe4b50b2cd9d49e973", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "UCT-White-Lab/provisioning-jig", "max_forks_repo_path": "report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d2b8b5738651b9f2885beebe4b50b2cd9d49e973", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "UCT-White-Lab/provisioning-jig", "max_issues_repo_path": "report.tex", "max_line_length": 1039, "max_stars_count": null, "max_stars_repo_head_hexsha": "d2b8b5738651b9f2885beebe4b50b2cd9d49e973", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "UCT-White-Lab/provisioning-jig", "max_stars_repo_path": "report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2074, "size": 8488 }
\subsection{Subdifferentials}\label{subsec:subdifferentials} Let \( X \) be a Hausdorff \hyperref[def:topological_vector_space]{topological vector space}, let \( D \subseteq X \) be an open set and \( f: D \to \BbbR \) be any function. \begin{definition}\label{def:subdifferentials} We fix a point \( x \in D \). We define different types of \term{subgradients} and \term{subdifferentials}. Subgradients are linear functionals \( x^* \in X^* \) that approximate \( f \) at the point \( x \) in a certain way, and a subdifferential is the set of all subgradients of a given type. \begin{thmenum} \thmitem{def:subdifferentials/convex}\mcite[59]{Clarke2013}We say that \( x^* \in X^* \) is a \term{subgradient of \( f \) at \( x \)} if for every \( y \in D \) we have \begin{equation*} f(y) - f(x) \geq \inprod {x^*} {y - x}. \end{equation*} The \term{subdifferential of \( f \) at \( x \)} is denoted by \( \partial f(x) \) and is also sometimes called the \term{convex subdifferential} because of \fullref{thm:convex_iff_subdifferential_nonempty}. \thmitem{def:subdifferentials/clarke}\mcite[def. 10.3]{Clarke2013}We say that \( x^* \in X^* \) is a \term{Clarke (generalized) subgradient of \( f \) at \( x \)} if for every direction \( h \in X \) we have \begin{equation*} f^\circ(x)(h) \geq \inprod {x^*} h, \end{equation*} where \( f^\circ(x)(h) \) is the generalized Clarke \hyperref[def:nonsmooth_derivatives/clarke]{derivative}. The \term{subdifferential of \( f \) at \( x \)} is denoted by \( \partial_C f(x) \). Confusingly, the Clarke subdifferential is called the \enquote{generalized gradient} by Clarke himself, with no special name for the Clarke subgradients. See \fullref{subsec:clarke_gradients} for properties of these subgradients. \thmitem{def:subdifferentials/proximal}\mcite[227]{Clarke2013}We say that \( x^* \in X^* \) is a \term{proximal subgradient of \( f \) at \( x \)} if there exist \( \sigma > 0 \) and a neighborhood \( V \subseteq X \) of \( x \) such that for every \( y \in D \cap V \) we have \begin{equation*} f(y) - f(x) + \sigma \norm{y - x}^2 \geq \inprod {x^*} {y - x}. \end{equation*} The \term{proximal subdifferential of \( f \) at \( x \)} is denoted by \( \partial_P f(x) \). \thmitem{def:subdifferentials/limiting}\mcite[def. 11.10]{Clarke2013}Suppose the following are satisfied: \begin{enumerate} \item \( \{ x_n \}_n \subseteq D \) is a sequence of points converging to \( x \) \item \( f(x_n) \to f(x) \) (redundant if \( f \) is continuous) \item \( x_n^* \) is a proximal subgradient for \( f \) at \( x_n \) for every \( n \in \BbbZ_{>0} \). \end{enumerate} If the limit \( x^* \coloneqq \lim_n x_n^* \) exists and is a continuous linear functional, we call \( x^* \) a \term{limiting subgradient of \( f \) at \( x \)}. The \term{limiting subdifferential of \( f \) at \( x \)} is denoted by \( \partial_L f(x) \). \end{thmenum} \end{definition}
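\begin{example}\label{ex:def:subdifferentials/abs}
As a standard illustration of \fullref{def:subdifferentials/convex} (not taken from the cited sources), consider \( f(x) \coloneqq |x| \) on \( X = D = \BbbR \). A number \( x^* \) is a subgradient of \( f \) at \( 0 \) if and only if \( |y| \geq x^* y \) for every \( y \in \BbbR \), which holds precisely when \( |x^*| \leq 1 \). Therefore
\begin{equation*}
  \partial f(0) = [-1, 1],
\end{equation*}
while at every \( x \neq 0 \) the function is differentiable and \( \partial f(x) = \set{ \operatorname{sgn}(x) } \).
\end{example}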
{ "alphanum_fraction": 0.6464179597, "avg_line_length": 67.3111111111, "ext": "tex", "hexsha": "a5bd03fb0a9b1edcd48a6fdfdbcdcf6495acb5f8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "v--/anthology", "max_forks_repo_path": "src/subdifferentials.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "v--/anthology", "max_issues_repo_path": "src/subdifferentials.tex", "max_line_length": 297, "max_stars_count": null, "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "v--/anthology", "max_stars_repo_path": "src/subdifferentials.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1035, "size": 3029 }
\par \section{Prototypes and descriptions of {\tt Chv} methods} \label{section:Chv:proto} \par This section contains brief descriptions, including prototypes, of all methods that belong to the {\tt Chv} object. \par \subsection{Basic methods} \label{subsection:Chv:proto:basics} \par As usual, there are four basic methods to support object creation, setting default fields, clearing any allocated data, and freeing the object. \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} Chv * Chv_new ( void ) ; \end{verbatim} \index{Chv_new@{\tt Chv\_new()}} This method simply allocates storage for the {\tt Chv} structure and then sets the default fields by a call to {\tt Chv\_setDefaultFields()}. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_setDefaultFields ( Chv *chv ) ; \end{verbatim} \index{Chv_setDefaultFields@{\tt Chv\_setDefaultFields()}} The structure's fields are set to default values: {\tt id} = {\tt -1}, {\tt nD} = {\tt nL} = {\tt nU} = 0, {\tt type} = {\tt SPOOLES\_REAL}, {\tt symflag} = {\tt SPOOLES\_SYMMETRIC}, and {\tt rowind} = {\tt colind} = {\tt entries} = {\tt next} = {\tt NULL}. The {\tt wrkDV} object has its default fields set via a call to {\tt DV\_setDefaultFields()}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_clearData ( Chv *chv ) ; \end{verbatim} \index{Chv_clearData@{\tt Chv\_clearData()}} This method clears the object and frees any owned data by invoking the {\tt \_clearData()} methods for its internal {\tt DV} object. There is a concluding call to {\tt Chv\_setDefaultFields()}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_free ( Chv *chv ) ; \end{verbatim} \index{Chv_free@{\tt Chv\_free()}} This method releases any storage by a call to {\tt Chv\_clearData()} and then frees the space for {\tt chv}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Instance methods} \label{subsection:Chv:proto:instance} \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_id ( Chv *chv ) ; \end{verbatim} \index{Chv_id@{\tt Chv\_id()}} This method returns the {\it id} of the object. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_type ( Chv *chv ) ; \end{verbatim} \index{Chv_type@{\tt Chv\_type()}} This method returns the {\it type} of the object. \begin{itemize} \item {\tt SPOOLES\_REAL} $\Longrightarrow$ real entries \item {\tt SPOOLES\_COMPLEX} $\Longrightarrow$ complex entries \end{itemize} \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned.
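\par
As an informal illustration (not part of the library's documentation proper), the snippet below sketches a typical create/initialize/query/free cycle, combining the basic methods above with {\tt Chv\_init()} from Section~\ref{subsection:Chv:proto:initial}; the dimensions are arbitrary.
\begin{verbatim}
Chv *chv = Chv_new() ;
Chv_init(chv, 0, 4, 2, 2, SPOOLES_REAL, SPOOLES_SYMMETRIC) ;
if ( Chv_type(chv) == SPOOLES_REAL ) {
   /* the real entry methods described below may be used */
}
Chv_free(chv) ;
\end{verbatim}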
%----------------------------------------------------------------------- \item \begin{verbatim} int Chv_symmetryFlag ( Chv *chv ) ; \end{verbatim} \index{Chv_symmetryFlag@{\tt Chv\_symmetryFlag()}} This method returns the {\it symmetry flag} of the object. \begin{itemize} \item {\tt SPOOLES\_SYMMETRIC} $\Longrightarrow$ symmetric entries, i.e., $a_{i,j} = a_{j,i}$. \item {\tt SPOOLES\_HERMITIAN} $\Longrightarrow$ hermitian entries, i.e., $a_{i,j} = \overline{a_{j,i}}$. \item {\tt SPOOLES\_NONSYMMETRIC} $\Longrightarrow$ nonsymmetric entries. \end{itemize} \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_dimensions ( Chv *chv, int *pnD, int *pnL, int *pnU ) ; \end{verbatim} \index{Chv_dimensions@{\tt Chv\_dimensions()}} This method fills {\tt *pnD}, {\tt *pnL} and {\tt *pnU} with {\tt nD}, {\tt nL} and {\tt nU}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_rowIndices ( Chv *chv, int *pnrow, int **prowind ) ; \end{verbatim} \index{Chv_rowIndices@{\tt Chv\_rowIndices()}} This method fills {\tt *pnrow} with the number of rows ({\tt nD + nL}) and {\tt *prowind} with a pointer to the row indices. \par \noindent {\it Error checking:} If {\tt chv}, {\tt pnrow} or {\tt prowind} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_columnIndices ( Chv *chv, int *pncol, int **pcolind ) ; \end{verbatim} \index{Chv_columnIndices@{\tt Chv\_columnIndices()}} This method fills {\tt *pncol} with the number of columns ({\tt nD + nU}) and {\tt *pcolind} with a pointer to the column indices. \par \noindent {\it Error checking:} If {\tt chv}, {\tt pncol} or {\tt pcolind} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_nent ( Chv *chv ) ; \end{verbatim} \index{Chv_nent@{\tt Chv\_nent()}} This method returns the number of matrix entries that the object contains. Note, for a complex chevron, this is the number of {\it double precision complex} entries, equal to one half the number of double precision entries that are stored. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} double * Chv_entries ( Chv *chv ) ; \end{verbatim} \index{Chv_entries@{\tt Chv\_entries()}} This method returns the {\it entries} field of the object, a pointer to the base location of the double precision array that stores the entries (two consecutive doubles per entry in the complex case). \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} double * Chv_diagLocation ( Chv *chv, int ichv ) ; \end{verbatim} \index{Chv_diagLocation@{\tt Chv\_diagLocation()}} This method returns a pointer to the address of the entry in the {\tt ichv}'th diagonal location. For a real chevron, to find the entry {\tt k} places to the right of the diagonal entry, add {\tt k} to the address.
To find an entry {\tt k} places below the diagonal entry, subtract {\tt k} from the address. For a complex chevron, to find the entry {\tt k} places to the right of the diagonal entry, add {\tt 2*k} to the address. To find an entry {\tt k} places below the diagonal entry, subtract {\tt 2*k} from the address. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void * Chv_workspace ( Chv *chv ) ; \end{verbatim} \index{Chv_workspace@{\tt Chv\_workspace()}} This method returns a pointer to the base address of the workspace. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_realEntry ( Chv *chv, int irow, int jcol, double *pValue ) ; \end{verbatim} \index{Chv_realEntry@{\tt Chv\_realEntry()}} This method fills {\tt *pValue} with the entry in row {\tt irow} and column {\tt jcol}. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$. \par \noindent {\it Error checking:} If {\tt chv} or {\tt pValue} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_locationOfRealEntry ( Chv *chv, int irow, int jcol, double **ppValue ) ; \end{verbatim} \index{Chv_locationOfRealEntry@{\tt Chv\_locationOfRealEntry()}} This method fills {\tt *ppValue} with a pointer to the entry in row {\tt irow} and column {\tt jcol}. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$. \par \noindent {\it Error checking:} If {\tt chv} or {\tt ppValue} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_setRealEntry ( Chv *chv, int irow, int jcol, double value ) ; \end{verbatim} \index{Chv_setRealEntry@{\tt Chv\_setRealEntry()}} This method sets the entry in row {\tt irow} and column {\tt jcol} to be {\tt value}. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_complexEntry ( Chv *chv, int irow, int jcol, double *pReal, double *pImag ) ; \end{verbatim} \index{Chv_complexEntry@{\tt Chv\_complexEntry()}} This method fills {\tt *pReal} with the real part and {\tt *pImag} with the imaginary part of the entry in row {\tt irow} and column {\tt jcol}. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$.
\par \noindent {\it Error checking:} If {\tt chv}, {\tt pReal} or {\tt pImag} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_locationOfComplexEntry ( Chv *chv, int irow, int jcol, double **ppReal, double **ppImag ) ; \end{verbatim} \index{Chv_locationOfComplexEntry@{\tt Chv\_locationOfComplexEntry()}} This method fills {\tt *ppReal} with a pointer to the real part and {\tt *ppImag} with a pointer to the imaginary part of the entry in row {\tt irow} and column {\tt jcol}. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$. \par \noindent {\it Error checking:} If {\tt chv}, {\tt ppReal} or {\tt ppImag} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_setComplexEntry ( Chv *chv, int irow, int jcol, double real, double imag ) ; \end{verbatim} \index{Chv_setComplexEntry@{\tt Chv\_setComplexEntry()}} This method sets the real and imaginary parts of the entry in row {\tt irow} and column {\tt jcol} to be {\tt real} and {\tt imag}, respectively. Note, {\tt irow} and {\tt jcol} are {\it local} indices, i.e., $0 \le \mbox{\tt irow} < \mbox{\tt nD} + \mbox{\tt nL}$ and $0 \le \mbox{\tt jcol} < \mbox{\tt nD} + \mbox{\tt nU}$. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, or if {\tt irow} or {\tt jcol} is out of range, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Initialization methods} \label{subsection:Chv:proto:initial} \par There are three initializer methods. \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_init( Chv *chv, int id, int nD, int nL, int nU, int type, int symflag ) ; \end{verbatim} \index{Chv_init@{\tt Chv\_init()}} This is the initializer method used when the {\tt Chv} object is to use its owned workspace to store indices and entries. The number of indices and entries is computed, the workspace is set up via calls to {\tt Chv\_nbytesNeeded()} and {\tt Chv\_setNbytesInWorkspace()}, and the scalars, pointers and buffer are set up via a call to {\tt Chv\_setFields()}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, or if ${\tt nD} \le 0$, or if {\tt nL} or ${\tt nU} < 0$, or if {\tt type} is not {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX}, or if {\tt symflag} is not {\tt SPOOLES\_SYMMETRIC}, {\tt SPOOLES\_HERMITIAN} or {\tt SPOOLES\_NONSYMMETRIC}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_initWithPointers ( Chv *chv, int id, int nD, int nL, int nU, int type, int symflag, int *rowind, int *colind, double *entries ) ; \end{verbatim} \index{Chv_initWithPointers@{\tt Chv\_initWithPointers()}} This initializer method is used when the {\tt Chv} object does not own the storage for its indices and entries, but points into some other storage.
\par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, or if ${\tt nD} \le 0$, or if {\tt nL} or ${\tt nU} < 0$, or if {\tt type} is not {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX}, or if {\tt symflag} is not {\tt SPOOLES\_SYMMETRIC}, {\tt SPOOLES\_HERMITIAN} or {\tt SPOOLES\_NONSYMMETRIC}, or if {\tt entries} or {\tt colind} is {\tt NULL}, or if {\tt symflag = SPOOLES\_NONSYMMETRIC} and {\tt rowind} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_initFromBuffer ( Chv *chv ) ; \end{verbatim} \index{Chv_initFromBuffer@{\tt Chv\_initFromBuffer()}} This initializer method is used to set the scalar and pointer fields when the object's buffer is already preloaded. This functionality is used in the MPI factorization where a {\tt Chv} object is sent and received; more precisely, the workspace buffer owned by the {\tt Chv} object is sent and received. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL}, an error message is printed and zero is returned. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Search methods} \label{subsection:Chv:proto:search} \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_maxabsInDiagonal11 ( Chv *chv, int mark[], int tag, double *pmaxval ) ; \end{verbatim} \index{Chv_maxabsInDiagonal11@{\tt Chv\_maxabsInDiagonal11()}} This method returns the location of the first tagged element with the largest magnitude in the diagonal of the (1,1) block. Element {\tt jj} must have {\tt mark[jj] = tag} to be eligible. Its magnitude is returned in {\tt *pmaxval}. Note, if the chevron is complex, the location is in terms of the complex entries, not in the real entries, i.e., if {\tt kk = Chv\_maxabsInDiagonal11(chv,...)}, then the complex entry is found in {\tt chv->entries[2*kk:2*kk+1]}. \par \noindent {\it Error checking:} If {\tt chv}, {\tt mark} or {\tt pmaxval} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_maxabsInRow11 ( Chv *chv, int irow, int colmark[], int tag, double *pmaxval ) ; \end{verbatim} \index{Chv_maxabsInRow11@{\tt Chv\_maxabsInRow11()}} This method returns the location of the first element with the largest magnitude in row {\tt irow} of the (1,1) block. Element {\tt jj} must have {\tt colmark[jj] = tag} to be eligible. Its magnitude is returned in {\tt *pmaxval}. Note, if the chevron is complex, the location is in terms of the complex entries, not in the real entries, i.e., if {\tt kk = Chv\_maxabsInRow11(chv,...)}, then the complex entry is found in {\tt chv->entries[2*kk:2*kk+1]}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL} or {\tt irow} is not in {\tt [0,n1-1]}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_maxabsInColumn11 ( Chv *chv, int jcol, int rowmark[], int tag, double *pmaxval ) ; \end{verbatim} \index{Chv_maxabsInColumn11@{\tt Chv\_maxabsInColumn11()}} This method returns the location of the first element with the largest magnitude in column {\tt jcol} of the (1,1) block.
Element {\tt jj} must have {\tt rowmark[jj] = tag} to be eligible. Its magnitude is returned in {\tt *pmaxval}. Note, if the chevron is complex, the location is in terms of the complex entries, not in the real entries, i.e., if {\tt kk = Chv\_maxabsInColumn11(chv,...)}, then the complex entry is found in {\tt chv->entries[2*kk:2*kk+1]}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL} or {\tt jcol} is not in {\tt [0,n1-1]}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_maxabsInRow ( Chv *chv, int irow, int colmark[], int tag, double *pmaxval ) ; \end{verbatim} \index{Chv_maxabsInRow@{\tt Chv\_maxabsInRow()}} This method returns the location of the first element with the largest magnitude in row {\tt irow}. Element {\tt jj} must have {\tt colmark[jj] = tag} to be eligible. Its magnitude is returned in {\tt *pmaxval}. Note, if the chevron is complex, the location is in terms of the complex entries, not in the real entries, i.e., if {\tt kk = Chv\_maxabsInRow(chv,...)}, then the complex entry is found in {\tt chv->entries[2*kk:2*kk+1]}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL} or {\tt irow} is not in {\tt [0,n1-1]}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_maxabsInColumn ( Chv *chv, int jcol, int rowmark[], int tag, double *pmaxval ) ; \end{verbatim} \index{Chv_maxabsInColumn@{\tt Chv\_maxabsInColumn()}} This method returns the location of the first element with the largest magnitude in column {\tt jcol}. Element {\tt jj} must have {\tt rowmark[jj] = tag} to be eligible. Its magnitude is returned in {\tt *pmaxval}. Note, if the chevron is complex, the location is in terms of the complex entries, not in the real entries, i.e., if {\tt kk = Chv\_maxabsInColumn(chv,...)}, then the complex entry is found in {\tt chv->entries[2*kk:2*kk+1]}. \par \noindent {\it Error checking:} If {\tt chv} is {\tt NULL} or {\tt jcol} is not in {\tt [0,n1-1]}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} double Chv_quasimax ( Chv *chv, int rowmark[], int colmark[] int tag, int *pirow, int *pjcol ) ; \end{verbatim} \index{Chv_quasimax@{\tt Chv\_quasimax()}} This method searches for a {\it quasimax} entry in the $(1,1)$ block, an entry $a_{i,j}$ that has the largest magnitude among the tagged entries in row $i$ and column $j$. An entry $a_{i,j}$ is {\it tagged} when {\tt rowmark[i] = tag} and {\tt colmark[j] = tag}. On return, {\tt *pirow} is filled with the row id and {\tt *pjcol} is filled with the column id of the quasimax entry. The return value is the magnitude of the entry. \par \noindent {\it Error checking:} If {\tt chv}, {\tt rowmark}, {\tt colmark}, {\tt pirow} or {\tt pjcol} is {\tt NULL}, an error message is printed and the program exits. %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_fastBunchParlettPivot ( Chv *chv, int mark[], int tag, int *pirow, int *pjcol ) ; \end{verbatim} \index{Chv_fastBunchParlettPivot@{\tt Chv\_fastBunchParlettPivot()}} This method is used only for a symmetric or Hermitian object and finds a $1 \times 1$ or $2 \times 2$ pivot that is suitable for elimination. Only pivots from the $(1,1)$ block can be chosen.
A diagonal element $a_{r,r}$ with maximum magnitude is first found using the {\tt Chv\_maxabsInDiagonal11()} method. We then find the element $a_{r,k}$ in that row that has maximum magnitude. If $|a_{r,r}| > 0.6404 |a_{r,k}|$ then we accept the $1 \times 1$ pivot element. Otherwise we look for an offdiagonal element that is largest in its row and column and return it as a $2 \times 2$ pivot. \par \noindent {\it Error checking:} If {\tt chv}, {\tt mark}, {\tt pirow} or {\tt pjcol} is {\tt NULL}, an error message is printed and the method returns. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Pivot methods} \label{subsection:Chv:proto:pivot} \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} int Chv_findPivot ( Chv *chv, DV *workDV, double tau, int ndelay, int *pirow, int *pjcol, int *pntest ) ; \end{verbatim} \index{Chv_findPivot@{\tt Chv\_findPivot()}} This method finds and tests a pivot such that, if it were used at the next elimination step, each entry in $L$ and $U$ would have magnitude less than or equal to {\tt tau}. The {\tt workDV} object is used for workspace; it is resized as necessary. The {\tt ndelay} parameter allows one to specify the number of leading rows and columns to ignore, useful when delayed rows and columns have been placed in the leading portion of the chevron. The {\tt pirow}, {\tt pjcol} and {\tt pntest} addresses are filled with the pivot row, pivot column, and number of pivot tests performed to find the pivot. If no pivot was found, {\tt pirow} and {\tt pjcol} are filled with {\tt -1}. The return value is the size of the pivot. If the chevron is symmetric, we can find a $1 \times 1$ or $2 \times 2$ pivot. If the chevron is nonsymmetric, we only find a $1 \times 1$ pivot. A return value of zero means that no pivot was found. \par \noindent {\it Error checking:} If {\tt chv}, {\tt workDV}, {\tt pirow}, {\tt pjcol} or {\tt pntest} is {\tt NULL}, or if ${\tt tau} < 1.0$, or if ${\tt ndelay} < 0$, an error message is printed and the program exits. %----------------------------------------------------------------------- \end{enumerate} \par \subsection{Update methods} \label{subsection:Chv:proto:updates} \par %======================================================================= \begin{enumerate} %----------------------------------------------------------------------- \item \begin{verbatim} void Chv_updateS ( Chv *chv, SubMtx *mtxD, SubMtx *mtxU, DV *tempDV ) ; void Chv_updateH ( Chv *chv, SubMtx *mtxD, SubMtx *mtxU, DV *tempDV ) ; void Chv_updateN ( Chv *chv, SubMtx *mtxL, SubMtx *mtxD, SubMtx *mtxU, DV *tempDV ) ; \end{verbatim} \index{Chv_updateS@{\tt Chv\_updateS()}} \index{Chv_updateH@{\tt Chv\_updateH()}} \index{Chv_updateN@{\tt Chv\_updateN()}} These methods perform an update to a chevron during the factorization. For a symmetric chevron, we compute \begin{eqnarray*} T_{J \cap \bnd{I},J \cap \bnd{I}} & := & T_{J \cap \bnd{I},J \cap \bnd{I}} - U_{I,J \cap \bnd{I}}^T D_{I,I} U_{I, J \cap \bnd{I}} \\ T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}} & := & T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}} - U_{I,J \cap \bnd{I}}^T D_{I,I} U_{I, \bnd{J} \cap \bnd{I}} \end{eqnarray*} where $D$ is diagonal or block diagonal with $1 \times 1$ and/or symmetric $2 \times 2$ pivots. $U$ is stored by sparse or dense columns.
For a Hermitian chevron, we compute
\begin{eqnarray*}
T_{J \cap \bnd{I},J \cap \bnd{I}}
& := & T_{J \cap \bnd{I},J \cap \bnd{I}}
     - U_{I,J \cap \bnd{I}}^H D_{I,I} U_{I, J \cap \bnd{I}} \\
T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}}
& := & T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}}
     - U_{I,J \cap \bnd{I}}^H D_{I,I} U_{I, \bnd{J} \cap \bnd{I}}
\end{eqnarray*}
where $D$ is diagonal or block diagonal with $1 \times 1$ and/or
Hermitian $2 \times 2$ pivots.
$U$ is stored by sparse or dense columns.
For a nonsymmetric chevron, we compute
\begin{eqnarray*}
T_{J \cap \bnd{I},J \cap \bnd{I}}
& := & T_{J \cap \bnd{I},J \cap \bnd{I}}
     - L_{J \cap \bnd{I},I} D_{I,I} U_{I, J \cap \bnd{I}} \\
T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}}
& := & T_{J \cap \bnd{I},\bnd{J} \cap \bnd{I}}
     - L_{J \cap \bnd{I},I} D_{I,I} U_{I, \bnd{J} \cap \bnd{I}} \\
T_{\bnd{J} \cap \bnd{I},J \cap \bnd{I}}
& := & T_{\bnd{J} \cap \bnd{I},J \cap \bnd{I}}
     - L_{\bnd{J} \cap \bnd{I},I} D_{I,I} U_{I, J \cap \bnd{I}}
\end{eqnarray*}
where $D$ is diagonal, $L$ is stored by sparse or dense rows, and $U$
is stored by sparse or dense columns.
{\tt tempDV} is a temporary working vector whose storage is resized
as necessary.
\par \noindent
{\it Error checking:}
If {\tt chv}, {\tt mtxL}, {\tt mtxD}, {\tt mtxU} or {\tt tempDV} is
{\tt NULL}, an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Assembly methods}
\label{subsection:Chv:proto:assembly}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_addChevron ( Chv *chv, double alpha[], int ichv,
                      int chvsize, int chvind[], double chvent[] ) ;
\end{verbatim}
\index{Chv_addChevron@{\tt Chv\_addChevron()}}
This method is used to assemble entries from the matrix pencil
$A + \sigma B$ into the block chevron object.
Typically the entries from $A$ or $B$ will come from an {\tt InpMtx}
object, one of whose modes of storage is by single {\tt chevrons}.
The value {\tt ichv} is the row and column location of the diagonal
entry.
The indices found in {\tt chvind[]} are {\it offsets}.
Let {\tt off = chvind[ii]} be the offset for one of the chevron's
entries.
If $\mbox{\tt off} \ge 0$, then the entry is found in location
{\tt (ichv, ichv+off)} of the matrix.
If $\mbox{\tt off} < 0$, then the entry is found in location
{\tt (ichv-off, ichv)} of the matrix.
The value(s) in {\tt alpha[]} form a scalar used to scale the entire
chevron for its assembly.
A call to assemble entries in $A$ (from the pencil $A + \sigma B$)
would have {\tt alpha[] = (1.0,0.0)};
to assemble entries in $B$ (from the pencil $A + \sigma B$) would
have $\mbox{\tt alpha[]} = (Real(\sigma),Imag(\sigma))$.
\par \noindent
{\it Error checking:}
If {\tt chv}, {\tt chvind}, {\tt chvent} or {\tt alpha} is
{\tt NULL}, or if {\tt ichv} or {\tt chvsize} are less than zero,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_assembleChv ( Chv *chvJ, Chv *chvI ) ;
\end{verbatim}
\index{Chv_assembleChv@{\tt Chv\_assembleChv()}}
This method is used to assemble entries from one {\tt Chv} object
into another.
Its application arises during a factorization with pivoting:
postponed entries from the children are stored in the {\tt chvI}
object and need to be assembled into the final working front, along
with all updates from the descendants (which are stored in the
{\tt chvJ} object).
Note, the row and column indices of {\tt chvI} {\it must nest} with
those of {\tt chvJ}.
\par \noindent
{\it Error checking:}
If {\tt chvI} or {\tt chvJ} is {\tt NULL},
or if their {\tt symflag} fields are not identical,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_assemblePostponedData ( Chv *newchv, Chv *oldchv,
                                Chv *firstchild ) ;
\end{verbatim}
\index{Chv_assemblePostponedData@{\tt Chv\_assemblePostponedData()}}
This method is used to assemble a {\tt Chv} object for a front
({\tt oldchv}), along with any postponed data from the children (the
objects are held in a list whose head is {\tt firstchild}), into a
{\tt Chv} object {\tt newchv}.
The return value is the number of delayed rows and columns from the
children fronts which are found in the leading rows and columns of
the chevron.
\par \noindent
{\it Error checking:}
If {\tt newchv}, {\tt oldchv} or {\tt firstchild} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Factorization methods}
\label{subsection:Chv:proto:factor}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_factorWithPivoting ( Chv *chv, int ndelay, int pivotflag,
                             IV *pivotsizesIV, DV *workDV,
                             double tau, int *pntest ) ;
\end{verbatim}
\index{Chv_factorWithPivoting@{\tt Chv\_factorWithPivoting()}}
This method factors a front using pivoting for numerical stability.
The number of rows and columns that have been delayed (assembled from
the children) is given by {\tt ndelay}; this allows the method that
finds the pivots to skip over these rows and columns since no pivot
can be found there.
When pivoting is enabled ({\tt pivotflag} is
{\tt SPOOLES\_PIVOTING}), the {\tt workDV} object used during the
search process for pivots must be non-{\tt NULL},
{\tt tau} is the upper bound on factor entries,
and {\tt pivotsizesIV} must be non-{\tt NULL} when the front is
symmetric or Hermitian.
The value at address {\tt pntest} is incremented by the number of
pivot tests made by the {\tt Chv\_findPivot()} method.
The return value is the number of eliminated rows and columns.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL}, or if {\tt pivotflag} is not valid,
or if {\tt ndelay} is negative,
or if {\tt pivotflag == SPOOLES\_PIVOTING} and {\tt workDV} is
{\tt NULL} or {\tt tau} is less than {\tt 1.0},
or if the chevron is symmetric or Hermitian,
{\tt pivotflag == SPOOLES\_PIVOTING} and {\tt pivotsizesIV} is
{\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_factorWithNoPivoting ( Chv *chv, PatchAndGoInfo *info ) ;
\end{verbatim}
\index{Chv_factorWithNoPivoting@{\tt Chv\_factorWithNoPivoting()}}
This method factors a front without using pivoting for numerical
stability.
It does support ``patch-and-go'' functionality: if the diagonal entry
that is to be eliminated is small or zero, some action can be taken.
The return value is the number of eliminated rows and columns.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_r1upd ( Chv *chv ) ;
\end{verbatim}
\index{Chv_r1upd@{\tt Chv\_r1upd()}}
This method is used during the factorization of a front; it performs
a rank-one update of the chevron.
The return value is {\tt 1} if the pivot is nonzero, {\tt 0}
otherwise.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_r2upd ( Chv *chv ) ;
\end{verbatim}
\index{Chv_r2upd@{\tt Chv\_r2upd()}}
This method is used during the factorization of a front; it performs
a rank-two update of the chevron.
The return value is {\tt 1} if the pivot is nonsingular, {\tt 0}
otherwise.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL}, or if the chevron is nonsymmetric,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_maxabsInChevron ( Chv *chv, int ichv,
                           double *pdiagmaxabs, double *prowmaxabs,
                           double *pcolmaxabs ) ;
\end{verbatim}
\index{Chv_maxabsInChevron@{\tt Chv\_maxabsInChevron()}}
This method is used during the factorization of a front with a
``patch-and-go'' strategy.
On return, {\tt *pdiagmaxabs} contains the magnitude of the diagonal
entry for the chevron,
{\tt *prowmaxabs} contains the maximum magnitude of the entries in
the row of the chevron,
and {\tt *pcolmaxabs} contains the maximum magnitude of the entries
in the column of the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv}, {\tt pdiagmaxabs}, {\tt prowmaxabs} or
{\tt pcolmaxabs} is {\tt NULL}, or if {\tt ichv} is out of range,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_zeroOffdiagonalOfChevron ( Chv *chv, int ichv ) ;
\end{verbatim}
\index{Chv_zeroOffdiagonalOfChevron@{\tt Chv\_zeroOffdiagonalOfChevron()}}
This method is used during the factorization of a front with a
``patch-and-go'' strategy.
On return, the offdiagonal entries of chevron {\tt ichv} have been
set to zero.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL}, or if {\tt ichv} is out of range,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Copy methods}
\label{subsection:Chv:proto:copy}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_countEntries ( Chv *chv, int npivot, int pivotsizes[],
                       int countflag ) ;
\end{verbatim}
\index{Chv_countEntries@{\tt Chv\_countEntries()}}
This method counts the number of entries in the chevron specified by
{\tt countflag}, which has the following meaning.
\begin{itemize}
\item
{\tt CHV\_STRICT\_LOWER} $\Longrightarrow$ count strict lower entries
\item
{\tt CHV\_DIAGONAL} $\Longrightarrow$ count diagonal entries
\item
{\tt CHV\_STRICT\_UPPER} $\Longrightarrow$ count strict upper entries
\item
{\tt CHV\_STRICT\_LOWER\_11} $\Longrightarrow$ count strict lower
entries in the (1,1) block
\item
{\tt CHV\_LOWER\_21} $\Longrightarrow$ count lower entries in the
(2,1) block
\item
{\tt CHV\_STRICT\_UPPER\_11} $\Longrightarrow$ count strict upper
entries in the (1,1) block
\item
{\tt CHV\_UPPER\_12} $\Longrightarrow$ count upper entries in the
(1,2) block
\end{itemize}
This method is used to compute the necessary storage to store a
chevron as a dense front.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL} or if {\tt countflag} is not valid,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_countBigEntries ( Chv *chv, int npivot, int pivotsizes[],
                          int countflag, double droptol ) ;
\end{verbatim}
\index{Chv_countBigEntries@{\tt Chv\_countBigEntries()}}
This method counts the number of entries in the chevron that are
larger in magnitude than {\tt droptol}.
{\tt countflag} has the following meaning.
\begin{itemize}
\item
{\tt CHV\_STRICT\_LOWER} $\Longrightarrow$ count strict lower entries
\item
{\tt CHV\_STRICT\_UPPER} $\Longrightarrow$ count strict upper entries
\item
{\tt CHV\_STRICT\_LOWER\_11} $\Longrightarrow$ count strict lower
entries in the (1,1) block
\item
{\tt CHV\_LOWER\_21} $\Longrightarrow$ count lower entries in the
(2,1) block
\item
{\tt CHV\_STRICT\_UPPER\_11} $\Longrightarrow$ count strict upper
entries in the (1,1) block
\item
{\tt CHV\_UPPER\_12} $\Longrightarrow$ count upper entries in the
(1,2) block
\end{itemize}
This method is used to compute the necessary storage to store a
chevron as a sparse front.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL} or if {\tt countflag} is not valid,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_copyEntriesToVector ( Chv *chv, int npivot,
                              int pivotsizes[], int length,
                              double dvec[], int copyflag,
                              int storeflag ) ;
\end{verbatim}
\index{Chv_copyEntriesToVector@{\tt Chv\_copyEntriesToVector()}}
This method copies some entries of the chevron object into a double
precision vector.
This method is called after a front has been factored and is used to
store the factor entries into the storage for the factor matrix.
If the front is nonsymmetric, the front contains entries of $L$, $D$
and $U$, where $D$ is diagonal.
If the front is symmetric or Hermitian, the front contains entries of
$D$ and $U$, and $D$ is diagonal if {\tt pivotsizes} is {\tt NULL} or
may contain a mixture of $1 \times 1$ and $2 \times 2$ pivots
otherwise.
{\tt copyflag} has the following meaning.
\begin{itemize}
\item
{\tt CHV\_STRICT\_LOWER} $\Longrightarrow$ copy strict lower entries
\item
{\tt CHV\_DIAGONAL} $\Longrightarrow$ copy diagonal entries
\item
{\tt CHV\_STRICT\_UPPER} $\Longrightarrow$ copy strict upper entries
\item
{\tt CHV\_STRICT\_LOWER\_11} $\Longrightarrow$ copy strict lower
entries in the (1,1) block
\item
{\tt CHV\_LOWER\_21} $\Longrightarrow$ copy lower entries in the
(2,1) block
\item
{\tt CHV\_STRICT\_UPPER\_11} $\Longrightarrow$ copy strict upper
entries in the (1,1) block
\item
{\tt CHV\_UPPER\_12} $\Longrightarrow$ copy upper entries in the
(1,2) block
\end{itemize}
\par
%% The {\tt DFrontMtx} object presently stores the entries in $U$
%% by columns and the entries in $L$ by rows.
%% This allows us to use dot product kernels during the factorization.
%% On other architectures it may be more efficient to have {\tt axpy}
%% kernels, in which the entries in $U$ would be stored by rows and
%% the entries in $L$ stored by columns.
%% This method supports both formats, where
If {\tt storeflag} is {\tt CHV\_BY\_ROWS}, the entries are stored by
rows and if {\tt storeflag} is {\tt CHV\_BY\_COLUMNS}, the entries
are stored by columns.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt dvec} is {\tt NULL},
or if {\tt length} is less than the number of entries to be copied,
or if {\tt copyflag} or {\tt storeflag} is not valid,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_copyBigEntriesToVector ( Chv *chv, int npivot,
                                 int pivotsizes[], int sizes[],
                                 int ivec[], double dvec[],
                                 int copyflag, int storeflag,
                                 double droptol ) ;
\end{verbatim}
\index{Chv_copyBigEntriesToVector@{\tt Chv\_copyBigEntriesToVector()}}
This method also copies some entries of the chevron object into a
double precision vector, but only those entries whose magnitude is
greater than or equal to {\tt droptol} are copied.
This method is called after a front has been factored and is used to
store the factor entries of large magnitude into the storage for the
factor matrix.
If the front is nonsymmetric, the front contains entries of $L$, $D$
and $U$, where $D$ is diagonal.
If the front is symmetric, the front contains entries of $D$ and $U$,
and $D$ is diagonal if {\tt pivotsizes} is {\tt NULL} or may contain
a mixture of $1 \times 1$ and $2 \times 2$ pivots otherwise.
{\tt copyflag} has the following meaning.
\begin{itemize}
\item
{\tt CHV\_STRICT\_LOWER} $\Longrightarrow$ copy strict lower entries
\item
{\tt CHV\_STRICT\_UPPER} $\Longrightarrow$ copy strict upper entries
\item
{\tt CHV\_STRICT\_LOWER\_11} $\Longrightarrow$ copy strict lower
entries in the (1,1) block
\item
{\tt CHV\_LOWER\_21} $\Longrightarrow$ copy lower entries in the
(2,1) block
\item
{\tt CHV\_STRICT\_UPPER\_11} $\Longrightarrow$ copy strict upper
entries in the (1,1) block
\item
{\tt CHV\_UPPER\_12} $\Longrightarrow$ copy upper entries in the
(1,2) block
\end{itemize}
\par
% The {\tt DFrontMtx} object presently stores the entries in $U$
% by columns and the entries in $L$ by rows.
% This allows us to use dot product kernels during the factorization.
% On other architectures it may be more efficient to have {\tt axpy}
% kernels, in which the entries in $U$ would be stored by rows and
% the entries in $L$ stored by columns.
% This method supports both formats, where
If {\tt storeflag} is {\tt CHV\_BY\_ROWS}, the entries are stored by
rows and if {\tt storeflag} is {\tt CHV\_BY\_COLUMNS}, the entries
are stored by columns.
\par
When we store the large entries in the columns of $U$,
{\tt sizes[jcol]} contains the number of large entries in column
{\tt jcol}.
The vectors {\tt ivec[]} and {\tt dvec[]} contain the row indices and
the entries that are stored.
When we store the large entries in the rows of $L$,
{\tt sizes[irow]} contains the number of large entries in row
{\tt irow}.
The vectors {\tt ivec[]} and {\tt dvec[]} contain the column indices
and the entries that are stored.
Presently there is no checking that {\tt sizes[]}, {\tt ivec[]} and
{\tt dvec[]} are large enough to store the sizes, indices and
entries.
The large entry count can be obtained using the method
{\tt Chv\_countBigEntries()}.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt dvec} is {\tt NULL},
or if {\tt copyflag} or {\tt storeflag} is not valid,
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_copyTrailingPortion ( Chv *chvI, Chv *chvJ, int offset ) ;
\end{verbatim}
\index{Chv_copyTrailingPortion@{\tt Chv\_copyTrailingPortion()}}
This method copies the trailing portion of {\tt chvJ} into
{\tt chvI}.
The first {\tt offset} chevrons are not copied; the remainder are
copied.
This method is used to extract the delayed entries from a front which
has been factored.
\par \noindent
{\it Error checking:}
If {\tt chvI} or {\tt chvJ} is {\tt NULL},
or if ${\tt offset} < 0$ or {\tt offset} is greater than the number
of chevrons in {\tt chvJ},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Swap methods}
\label{subsection:Chv:proto:swap}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_swapRows ( Chv *chv, int irow, int jrow ) ;
\end{verbatim}
\index{Chv_swapRows@{\tt Chv\_swapRows()}}
This method swaps rows {\tt irow} and {\tt jrow} of the chevron.
Both row indices must be less than the width {\tt nD} of the chevron.
The row ids of the two rows are also swapped.
If the chevron is symmetric, then the method
{\tt Chv\_swapRowsAndColumns()} is called.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL} or if {\tt irow} or {\tt jrow} are less
than 0 or greater than or equal to {\tt nD},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_swapColumns ( Chv *chv, int icol, int jcol ) ;
\end{verbatim}
\index{Chv_swapColumns@{\tt Chv\_swapColumns()}}
This method swaps columns {\tt icol} and {\tt jcol} of the chevron.
Both column indices must be less than the width {\tt nD} of the
chevron.
The column ids of the two columns are also swapped.
If the chevron is symmetric, then the method
{\tt Chv\_swapRowsAndColumns()} is called.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL} or if {\tt icol} or {\tt jcol} are less
than 0 or greater than or equal to {\tt nD},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_swapRowsAndColumns ( Chv *chv, int ii, int jj ) ;
\end{verbatim}
\index{Chv_swapRowsAndColumns@{\tt Chv\_swapRowsAndColumns()}}
This method swaps rows and columns {\tt ii} and {\tt jj} of the
chevron.
Both indices must be less than the width {\tt nD} of the chevron.
The row and/or column ids are also swapped.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL} or if {\tt ii} or {\tt jj} are less than 0
or greater than or equal to {\tt nD},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Utility methods}
\label{subsection:Chv:proto:utilities}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_nbytesNeeded ( int nD, int nL, int nU, int type,
                       int symflag ) ;
\end{verbatim}
\index{Chv_nbytesNeeded@{\tt Chv\_nbytesNeeded()}}
This method returns the number of bytes necessary to store an object
with the given dimensions and type in its workspace.
\par \noindent
{\it Error checking:}
If {\tt nD}, {\tt nL}, or {\tt nU} is less than zero,
or if {\tt type} or {\tt symflag} are not valid,
% or if {\tt type} is not {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX},
% or if {\tt symflag} is not {\tt SPOOLES\_SYMMETRIC},
% {\tt SPOOLES\_HERMITIAN} or {\tt SPOOLES\_NONSYMMETRIC},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int Chv_nbytesInWorkspace ( Chv *chv ) ;
\end{verbatim}
\index{Chv_nbytesInWorkspace@{\tt Chv\_nbytesInWorkspace()}}
This method returns the number of bytes in the workspace.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_setNbytesInWorkspace ( Chv *chv, int nbytes ) ;
\end{verbatim}
\index{Chv_setNbytesInWorkspace@{\tt Chv\_setNbytesInWorkspace()}}
This method sets the number of bytes in the workspace.
If {\tt nbytes} is less than the number of bytes presently in the
workspace, the workspace is not shrunk.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_setFields ( Chv *chv, int id, int nD, int nL, int nU,
                     int type, int symflag ) ;
\end{verbatim}
\index{Chv_setFields@{\tt Chv\_setFields()}}
This method sets the scalar fields and the {\tt rowind},
{\tt colind} and {\tt entries} pointers.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL}, or if ${\tt nD} \le 0$,
or if {\tt nL} or {\tt nU} are less than zero,
or if {\tt type} or {\tt symflag} are not valid,
% or if {\tt type} is not {\tt SPOOLES\_REAL} or {\tt SPOOLES\_COMPLEX},
% or if {\tt symflag} is not {\tt SPOOLES\_SYMMETRIC},
% {\tt SPOOLES\_HERMITIAN} or {\tt SPOOLES\_NONSYMMETRIC},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_shift ( Chv *chv, int shift ) ;
\end{verbatim}
\index{Chv_shift@{\tt Chv\_shift()}}
This method is used to shift the base of the entries and adjust the
dimensions of the {\tt Chv} object.
If {\tt shift} is positive, the first {\tt shift} chevrons are
removed from the chevron.
If {\tt shift} is negative, the previous $|{\tt shift}|$ chevrons are
prepended to the chevron.
This is a dangerous method as it changes the state of the object.
We use it during the factorization of a front, where one {\tt Chv}
object points to the entire chevron in order to swap rows and
columns, while another chevron points to the uneliminated rows and
columns of the front.
It is the latter chevron that is shifted during the factorization.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_fill11block ( Chv *chv, A2 *mtx ) ;
\end{verbatim}
\index{Chv_fill11block@{\tt Chv\_fill11block()}}
This method is used to fill an {\tt A2} dense matrix object with the
entries in the $(1,1)$ block of the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_fill12block ( Chv *chv, A2 *mtx ) ;
\end{verbatim}
\index{Chv_fill12block@{\tt Chv\_fill12block()}}
This method is used to fill an {\tt A2} dense matrix object with the
entries in the $(1,2)$ block of the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_fill21block ( Chv *chv, A2 *mtx ) ;
\end{verbatim}
\index{Chv_fill21block@{\tt Chv\_fill21block()}}
This method is used to fill an {\tt A2} dense matrix object with the
entries in the $(2,1)$ block of the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt mtx} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double Chv_maxabs ( Chv *chv ) ;
\end{verbatim}
\index{Chv_maxabs@{\tt Chv\_maxabs()}}
This method returns the magnitude of the entry of largest magnitude
in the object.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double Chv_frobNorm ( Chv *chv ) ;
\end{verbatim}
\index{Chv_frobNorm@{\tt Chv\_frobNorm()}}
This method returns the Frobenius norm of the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_sub ( Chv *chvJ, Chv *chvI ) ;
\end{verbatim}
\index{Chv_sub@{\tt Chv\_sub()}}
This method subtracts {\tt chvI} from {\tt chvJ}.
\par \noindent
{\it Error checking:}
If {\tt chvJ} or {\tt chvI} is {\tt NULL},
or if their dimensions are not the same,
or if either of their {\tt entries} fields are {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_zero ( Chv *chv ) ;
\end{verbatim}
\index{Chv_zero@{\tt Chv\_zero()}}
This method zeroes the entries in the chevron.
\par \noindent
{\it Error checking:}
If {\tt chv} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{IO methods}
\label{subsection:Chv:proto:IO}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_writeForHumanEye ( Chv *chv, FILE *fp ) ;
\end{verbatim}
\index{Chv_writeForHumanEye@{\tt Chv\_writeForHumanEye()}}
\par
This method writes a {\tt Chv} object to a file in an easily readable
format.
\par \noindent
{\it Error checking:}
If {\tt chv} or {\tt fp} are {\tt NULL},
an error message is printed and the method returns.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void Chv_writeForMatlab ( Chv *chv, char *chvname, FILE *fp ) ;
\end{verbatim}
\index{Chv_writeForMatlab@{\tt Chv\_writeForMatlab()}}
\par
This method writes a {\tt Chv} object to a file in a Matlab format.
For a real chevron, a sample line is
\begin{verbatim}
a(10,5) = -1.550328201511e-01 ;
\end{verbatim}
where {\tt chvname} = {\tt "a"}.
For a complex chevron, a sample line is
\begin{verbatim}
a(10,5) = -1.550328201511e-01 + 1.848033378871e+00*i;
\end{verbatim}
where {\tt chvname} = {\tt "a"}.
The matrix indices come from the {\tt rowind[]} and {\tt colind[]}
vectors, and are incremented by one to follow the Matlab and FORTRAN
convention.
\par \noindent
{\it Error checking:}
If {\tt chv}, {\tt chvname} or {\tt fp} are {\tt NULL},
an error message is printed and the method returns.
%-----------------------------------------------------------------------
\end{enumerate}
\par
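\par
To make the calling sequences above concrete, the fragment below
sketches how the pivot, swap and update methods might fit together
inside a single elimination step of a symmetric front.
This is an illustrative sketch, not code taken from the library:
the header name, the driver function and its bookkeeping are the
author's assumptions, and only the prototypes documented in this
section are used.
\begin{verbatim}
#include "Chv.h"   /* illustrative header name */

/* Schematic driver for one elimination step of a symmetric front.
   chv and workDV are assumed to be created and initialized elsewhere.
   Index bookkeeping after the first swap is elided for clarity. */
int eliminateOneStep ( Chv *chv, DV *workDV, double tau ) {
   int irow, jcol, ntest = 0, pivotsize ;

   /* search for a 1 x 1 or 2 x 2 pivot bounded by tau (tau >= 1.0) */
   pivotsize = Chv_findPivot(chv, workDV, tau, 0, &irow, &jcol, &ntest) ;
   if ( pivotsize == 1 ) {
      /* move the 1 x 1 pivot to the leading position and eliminate */
      Chv_swapRowsAndColumns(chv, 0, irow) ;
      return Chv_r1upd(chv) ;
   } else if ( pivotsize == 2 ) {
      /* move the 2 x 2 pivot to the two leading positions and eliminate */
      Chv_swapRowsAndColumns(chv, 0, irow) ;
      Chv_swapRowsAndColumns(chv, 1, jcol) ;
      return Chv_r2upd(chv) ;
   }
   return 0 ; /* no pivot found; rows and columns must be postponed */
}
\end{verbatim}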
{ "alphanum_fraction": 0.6216662645, "avg_line_length": 42.2927786499, "ext": "tex", "hexsha": "dab4c71577bf8a7c5215c46ac0f07a5283641a8f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/Chv/doc/proto.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/Chv/doc/proto.tex", "max_line_length": 82, "max_stars_count": null, "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/Chv/doc/proto.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 15084, "size": 53881 }
\section{Canonical Transformations}
In the Hamiltonian formalism, $p$ and $q$ are on an equal footing.
What that means is we can make coordinate transformations that mix up the $p$ and $q$,
\begin{align}
q_i &\to Q_i = Q_i\left( q, p, t \right)\\
p_i &\to P_i = P_i\left( q, p, t \right)
\end{align}
But there's a catch: not all such transformations are allowed.
The only transformations that are allowed are those that leave Hamilton's equations invariant, so that in general
\begin{align}
H(q, p) \to K(Q, P, t)
\end{align}
and what we need is for
\begin{align}
\frac{\partial K}{\partial P_i} &= \dot{Q}_i
\end{align}
and
\begin{align}
\frac{\partial K}{\partial Q_i} &= -\dot{P}_i
\end{align}
So you're allowed a much bigger set of transformations than in the Lagrangian formalism, but the condition is that they still have to respect Hamilton's equations.
\begin{question}
How are $q$ and $\dot{q}$ not on an equal footing in the Lagrangian formalism?
\end{question}
If you know $q(t)$, you know $\dot{q}$ from the derivative.
You have a second-order differential equation for $q(t)$.
Here, you get two first-order equations for $p$ and $q$ separately.
You can vary $p$ and $q$ independently to get the equations, but you can't do that in the Lagrangian formalism.
So anyway, the allowed transformations shouldn't change the form of Hamilton's equations.
Transformations that leave the form of Hamilton's equations invariant are called canonical transformations.
Unfortunately, this is a huge topic, quite a complicated one, and we don't have enough time to go into great detail.
So I will look at a subset of these transformations: restricted canonical transformations.
These are the most important subset, but if you look in the book, you'll see a more extended discussion.
The main difference is that the ones we're looking at do not involve time: they mix $p$ and $q$ but do not depend explicitly on $t$.
For those, there's a simplification, namely $H=K$.
That's not obvious yet, but it's part of the bigger formalism.
\section{Restricted Canonical Transformations}
So they are
\begin{align}
q_i &\to Q_i(q, p)\\
p_i &\to P_i(q, p)
\end{align}
and the $H$ and $K$ are the same,
\begin{align}
H(q, p) = K(Q, P)
\end{align}
This is what defines the restricted set of canonical transformations.
We begin by writing Hamilton's equations in a symmetric form.
You put the $p$'s and $q$'s into a single vector,
\begin{align}
\vec{z} &=
\begin{pmatrix}
q_1\\
q_2\\
\vdots\\
q_n\\
p_1\\
p_2\\
\vdots\\
p_n
\end{pmatrix}
\end{align}
So we just defined this giant vector with $2n$ components, all the $p$'s and the $q$'s together.
And then there's a standard matrix.
We introduce a $2n\times 2n$ matrix called $\hat{J}$, which is of this form,
\begin{align}
\hat{J} &=
\begin{pmatrix}
0_{n\times n} & I_{n\times n}\\
- I_{n\times n} & 0_{n\times n}\\
\end{pmatrix}
\end{align}
where $0_{n\times n}$ denotes an $n\times n$ matrix of zeros, and $I_{n\times n}$ is the $n\times n$ identity matrix.
It has the funny property that if you take the transpose you get $-1$ times itself.
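As a quick aside that isn't spelled out in lecture, block multiplication gives the basic identities
\begin{align}
\hat{J}^T = -\hat{J},
\qquad
\hat{J}^2 = -I_{2n\times 2n},
\qquad
\hat{J}^{-1} = \hat{J}^T
\end{align}
which are worth keeping in mind for the manipulations below.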
In this notation, Hamilton's equations are
\begin{align}
\dot{\vec{z}} &= \hat{J} \frac{\partial H}{\partial \vec{z}}
\end{align}
or alternatively
\begin{align}
\dot{z}_i &= \sum_{j} \hat{J}_{ij} \frac{\partial H}{\partial z_j}
\end{align}
This is just a rewrite.
So now we make our transformation from $z=(q_i, p_i)$ to $w=(Q_i, P_i)$.
In other words, we're going from
\begin{align}
z_i &\to w_i = w_i(z)
\end{align}
Let's keep going.
These $w$'s satisfy an equation just like the equation for the $\dot{z}$.
By the chain rule,
\begin{align}
\dot{w}_i &= \sum_{j} \frac{\partial w_i}{\partial z_j} \dot{z}_j
\end{align}
but this $\dot{z}_j$ satisfies
\begin{align}
\dot{z}_j &= \sum_{k} \hat{J}_{jk} \frac{\partial H}{\partial z_k}
\end{align}
so the equation becomes
\begin{align}
\dot{w}_i &= \sum_{j,k} \frac{\partial w_i}{\partial z_j} \hat{J}_{jk} \frac{\partial H}{\partial z_k}
\end{align}
but this last derivative can be written
\begin{align}
\frac{\partial H}{\partial z_k} &= \sum_{l} \frac{\partial H}{\partial w_l} \frac{\partial w_l}{\partial z_k}
\end{align}
so putting it together
\begin{align}
\dot{w}_i &= \sum_{l} \left( \sum_{jk} \frac{\partial w_i}{\partial z_j} \hat{J}_{jk} \frac{\partial w_l}{\partial z_k} \right) \frac{\partial H}{\partial w_l}
\end{align}
and you'll see why I put the brackets like that in a minute.
This expression in the brackets is
\begin{align}
\sum_{jk} \frac{\partial w_i}{\partial z_j} \hat{J}_{jk} \frac{\partial w_l}{\partial z_k} &= \left( J \hat{J} J^T \right)_{il}
\end{align}
where $J$ here is the Jacobian matrix of the transformation.
When we transform from the $z$'s to the $w$'s, there's a Jacobian matrix associated with that transformation: the matrix whose elements are
\begin{align}
J_{ij} &= \frac{\partial w_i}{\partial z_j}
\end{align}
And you can see that with this definition, this thing in the brackets is indeed just $\left( J \hat{J} J^T \right)_{il}$, since the last factor is $\partial w_l/\partial z_k = (J^T)_{kl}$.
So this equation can then be written as
\begin{align}
\dot{\vec{w}} &= \left( J \hat{J} J^T \right) \frac{\partial H}{\partial \vec{w}}
\end{align}
This comes once you recognize that the expression in brackets can be written in terms of the Jacobian.
The condition for the transformation to be canonical is that this equation should have the same form as the equation
\begin{align}
\dot{\vec{z}} &= \hat{J} \frac{\partial H}{\partial \vec{z}}
\end{align}
That is, the equation for $\dot{w}$ should have the same form as the equation for $\dot{z}$.
So the condition for Hamilton's equations to take the same form is that
\begin{align}
\boxed{J \hat{J} J^T = \hat{J}}
\end{align}
If this condition is satisfied then we are done.
This is the condition you have to remember.
It's easy in the exam to get a question that asks to show that a transformation is canonical.
Then you just calculate the Jacobian matrix and compute this product to check that you get $\hat{J}$ back.
If this condition holds, then we say that the Jacobian is \emph{symplectic}.
It's a technical term.
If you study group theory, you'll find that the matrices that satisfy this equation form a group, called the symplectic group.
If you study classical mechanics in detail, a large part of it is understanding properties of the symplectic group.
This is one of the fundamental classical continuous groups.
\begin{question}
Is this the same as the canonical group?
\end{question}
No.
Symplectic is not the same as canonical.
This symplectic group is the real symplectic group.
It's called $Sp(2n,\mathbb{R})$.
But in quantum theory the transformation also has to be unitary, and it's complex: in QM you have the extra condition of unitarity, and you're allowed to have a complex map.
So things can be symplectic without being canonical.
This $\hat{J}$ matrix shows up everywhere beyond classical mechanics in the mathematics literature.
Unfortunately, everyone calls it $J$, without the hat, but the Jacobian is also called $J$, so I have to choose which one gets the hat.
So we found this equation.
We now show that this condition is equivalent to the requirement that the new coordinates satisfy
\begin{align}
\left\{ Q_i, Q_j \right\} &= \left\{ P_i, P_j \right\} = 0\\
\left\{ Q_i, P_j \right\} &= \delta_{ij}
\end{align}
The claim is that if the new coordinates satisfy these Poisson brackets, then you are guaranteed the symplectic condition.
So let's show that.
This is just an exercise in linear algebra.
The Jacobian matrix by definition is
\begin{align}
\frac{\partial w_i}{\partial z_j} &= J_{ij}\\
&=
\begin{pmatrix}
\partial Q_i/ \partial q_j & \partial Q_i / \partial p_j\\
\partial P_i/ \partial q_j & \partial P_i / \partial p_j\\
\end{pmatrix}
\end{align}
What we're doing is evaluating this condition, but in terms of the $Q$'s and $P$'s rather than in $z$-space.
Just blindly plugging it in, what we have is
\begin{align}
\left( J \hat{J} J^T \right)_{il}
&=
\sum_{j} \sum_{k}
\frac{\partial w_i}{\partial z_j} \hat{J}_{jk} \frac{\partial w_l}{\partial z_k}\\
&=
\begin{pmatrix}
\frac{\partial Q_i}{\partial q_j} & \frac{\partial Q_i}{\partial p_j}\\
\frac{\partial P_i}{\partial q_j} & \frac{\partial P_i}{\partial p_j}\\
\end{pmatrix}
\begin{pmatrix}
0 & \delta_{jk}\\
- \delta_{jk} & 0
\end{pmatrix}
\begin{pmatrix}
\frac{\partial Q_l}{\partial q_k} & \frac{\partial P_l}{\partial q_k}\\
\frac{\partial Q_l}{\partial p_k} & \frac{\partial P_l}{\partial p_k}
\end{pmatrix}
\end{align}
and as an exercise in linear algebra, if you multiply the last two factors, you get
\begin{align}
\begin{pmatrix}
0 & \delta_{jk}\\
- \delta_{jk} & 0
\end{pmatrix}
\begin{pmatrix}
\frac{\partial Q_l}{\partial q_k} & \frac{\partial P_l}{\partial q_k}\\
\frac{\partial Q_l}{\partial p_k} & \frac{\partial P_l}{\partial p_k}
\end{pmatrix}
&=
\begin{pmatrix}
\frac{\partial Q_l}{\partial p_j} & \frac{\partial P_l}{\partial p_j}\\
-\frac{\partial Q_l}{\partial q_j} & -\frac{\partial P_l}{\partial q_j}
\end{pmatrix}
\end{align}
If you do the full multiplication, I claim that this is what you're going to find:
\begin{align}
\left(J \hat{J} J^T\right)_{il} &=
\begin{pmatrix}
\left\{ Q_i, Q_l \right\} & \left\{ Q_i, P_l \right\}\\
\left\{ P_i, Q_l \right\} & \left\{ P_i, P_l \right\}
\end{pmatrix}
\end{align}
and so if this is really equal to $\hat{J}_{il}$, then we get the conditions
\begin{align}
\left\{ Q_i, Q_l \right\} &= \left\{ P_i, P_l \right\} = 0
\end{align}
and
\begin{align}
\left\{ Q_i, P_l \right\} &= \delta_{il}
\end{align}
I just want you to convince yourself that the $il$-th element corresponds to that block.
What it is depends on which block you're in.
\begin{question}
If we actually set out to compute the Jacobian, we could just do it element by element, right?
\end{question}
I just wanted to show that the condition $J \hat{J} J^T = \hat{J}$ is equivalent to $\left\{ Q_i, Q_l \right\} = \left\{ P_i, P_l \right\} = 0$ and $\left\{ Q_i, P_l \right\} = \delta_{il}$.
The two are completely equivalent mathematically.
In the exam you could show either.
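As a one-line example (a standard check, not done in lecture): take the exchange transformation
\begin{align}
Q = p, \qquad P = -q .
\end{align}
Its Jacobian is $J = \hat{J}$ itself, and since $\hat{J}\hat{J}^T = I$ we get $J \hat{J} J^T = \hat{J}$, so the transformation is canonical.
Equivalently, in terms of Poisson brackets,
\begin{align}
\left\{ Q, P \right\}
= \frac{\partial Q}{\partial q}\frac{\partial P}{\partial p}
- \frac{\partial Q}{\partial p}\frac{\partial P}{\partial q}
= 0 - (1)(-1) = 1 .
\end{align}
So, up to a sign, you can swap what you call position and what you call momentum.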
\section{Poisson Bracket Invariance}
There's one more thing I want to show.
The Poisson bracket is invariant under a canonical transformation.
Let me explain what I mean by this.
Suppose you have a transformation
\begin{align}
q &\to Q(q, p)\\
p &\to P(q, p)
\end{align}
What I mean by invariance is that the Poisson bracket is the same in the old coordinates and the new coordinates:
\begin{align}
\left\{ f, g \right\}
&= \sum_{i} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial g}{\partial q_i} \frac{\partial f}{\partial p_i} \right)
= \sum_{i} \left( \frac{\partial f}{\partial Q_i} \frac{\partial g}{\partial P_i} - \frac{\partial g}{\partial Q_i} \frac{\partial f}{\partial P_i} \right)
\end{align}
That means, if you have a Poisson bracket, you can evaluate it in any coordinates you like, provided they are related by a canonical transformation.
Let's prove this.
This is one of those things where it proves useful to be good at linear algebra.
To prove this, note that
\begin{align}
\left\{ f, g \right\} &= \sum_i \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right)\\
&= \sum_{i} \sum_{j} \left( \frac{\partial f}{\partial z_i} \hat{J}_{ij} \frac{\partial g}{\partial z_j} \right)
\end{align}
So we're going to expand it out and show the claim that they are equal.
Under $z\to w(z)$, we have
\begin{align}
\frac{\partial f}{\partial z_i} &= \sum_{j} \left( \frac{\partial f}{\partial w_j} \right) \underbrace{\left( \frac{\partial w_j}{\partial z_i} \right)}_{J_{ji}}
\end{align}
where we notice the second factor is just the Jacobian matrix element.
Then we have
\begin{align}
\left\{ f, g \right\} &= \sum_i \sum_j \sum_k \sum_l \frac{\partial f}{\partial w_k} J_{ki} \hat{J}_{ij} J_{lj} \frac{\partial g}{\partial w_l}\\
&= \sum_k \sum_l \frac{\partial f}{\partial w_k} \left( J \hat{J} J^T \right)_{kl} \frac{\partial g}{\partial w_l}
\end{align}
But because the transformation is canonical,
\begin{align}
\left( J \hat{J} J^T \right)_{kl} &= \hat{J}_{kl}
\end{align}
we find
\begin{align}
\left\{ f, g \right\} &= \sum_{k} \sum_{l} \frac{\partial f}{\partial w_k} \hat{J}_{kl} \frac{\partial g}{\partial w_l}
\end{align}
and we can immediately go back to the definition of $\hat{J}$ and see
\begin{align}
\left\{ f, g \right\} &= \sum_i \left( \frac{\partial f}{\partial Q_i} \frac{\partial g}{\partial P_i} - \frac{\partial g}{\partial Q_i} \frac{\partial f}{\partial P_i} \right)
\end{align}
and we are done.
So the Poisson bracket can be evaluated in any coordinate system, provided they are related by a canonical transformation.
It's a huge subject.
There is a chapter in Goldstein dedicated to canonical transformations.
The current edition is quite a bit better than the earlier ones.
These notes are mostly taken from David Tong's lectures, but you can find this in Goldstein as well; not in the first edition, but it is in the second.
Now I'm going to start a new topic.
\section{Action-Angle Variables}
Consider a one-dimensional system $H(q, p)$ without explicit time-dependence.
Then the total energy is conserved.
We assume that the motion is bounded, so there exist some $q_1$, $q_2$ such that
\begin{align}
q_1 \le q \le q_2
\end{align}
So then imagine an arbitrary potential.
The system undergoes periodic motion, with turning points $q_1$ and $q_2$.
Transform from $(p, q)$ to new variables $(I, \theta)$ which have the property that $H$ is independent of $\theta$, so the Hamiltonian is a function of $I$ alone,
\begin{align}
H = H(I).
\end{align}
In these variables, from Hamilton's equations
\begin{align}
-\frac{dI}{dt} &= \frac{\partial H}{\partial \theta} = 0
\end{align}
which means that $I$ is a constant of the motion.
Then what about the other Hamilton's equation?
\begin{align}
\frac{d\theta}{dt} &= \frac{\partial H}{\partial I}
\end{align}
which is some function of $I$.
Remember $H$ does not contain $t$ and it does not contain $\theta$, and $I$ we just showed to be a constant.
So this is also a constant.
So the equations of motion are very simple.
The second equation can be integrated to find that $\theta$ is just a constant times time, and the first equation just tells you $I$ is a constant.
By convention, $I$ and $\theta$ are normalized such that $\dot{\theta} = \omega$, where $\omega$ is the angular frequency of the oscillation.
$I$ is called the \emph{action variable} and $\theta$ is called the \emph{angle variable}.
So $I$ is like the momentum and $\theta$ is like the position.
The point is that in terms of the action-angle variables, the problem is very simple: $I$ is a constant, and $\theta$ integrates to $\theta=\omega t$.
That's easy.
The hard work is figuring out the relation between $(I, \theta)$ and the original $(q, p)$.
If you can find the action-angle variables, you can trivially solve the problem; the challenge is finding them.
I'm going to show you how to do this for a special case and then we'll talk about how to potentially generalize it.
Almost all the solvable problems there are (the Kepler problem, the harmonic oscillator, etc.) can be reduced to action-angle variables.
But there are chaotic systems for which there are no such variables.
In more complicated systems, you usually start from a limit that does have action-angle variables, and then you perturb.
So you might not have exact action-angle variables, but you start from a point where you do.
In this case, they exist because there is a constant of motion, the conserved energy, and you will see it is related in a trivial way to the action variable.
Once you have an action variable, that's all you need.
If you have a system with enough conserved quantities (for one degree of freedom, the conserved energy is enough), there is a theorem which says you do have action-angle variables.
But if energy is not conserved, you may not have action-angle variables in general.
Consider
\begin{align}
H &= \frac{p^2}{2m} + V(q)
\end{align}
If $I$ is a constant of the motion, it must be some function of the energy,
\begin{align}
H &= H(I) = E
\end{align}
Then
\begin{align}
\dot{\theta} &= \frac{\partial H}{\partial I} = \frac{d E}{dI} = \omega
\end{align}
This is because $H$ does not depend on $\theta$.
Remember that we normalized the action-angle variables so that this $\omega$ is the angular frequency of oscillation.
So this is the setting.
For this Hamiltonian, we have to find the action variable, which is a constant of the motion, some function of the energy, and it has the property that $\frac{dE}{dI} = \omega$.
The claim is the following.
The correct choice of $I$ is
\begin{align}
I &= \frac{1}{2\pi} \oint p\, dq
\end{align}
where the integral is the area in phase space enclosed by one orbit.
In other words, in physical space, this thing is just bouncing back and forth between $q_1$ and $q_2$.
Let's think about what this means in phase space.
There is a turning point $q_2$ and there's a turning point $q_1$.
The horizontal axis is $q$ and the vertical axis is $p$.
The system traces out an orbit in phase space.
This has a reflection symmetry about the $q$ axis.
This is because the system passes each $q$ with a positive value of $p$ and comes back with a negative value of $p$, since the square root in the equation below gives two solutions.
So this thing is the enclosed area, which is obviously a function of the energy.
With more energy you can go further apart.
So this is a function of the energy and this gives a relation between the energy and $I$.
This is just a claim; we haven't proved it yet.
Okay, so let's now try to prove this claim.
To prove this, we need to show that
\begin{align}
\frac{d}{dE}\left( \oint p\, dq \right) = \frac{2\pi}{\omega}
\end{align}
where $\omega$ is the frequency of oscillation.
Alright, so this is what we're going to show.
Now we have a bit of work.
How do we evaluate the area of this curve for a given $E$?
\begin{align}
p &= \sqrt{2m\left( E - V(q) \right)}
\end{align}
As $E$ is changed, two things happen.
Firstly, obviously the value of $p$ at every point $q$ is altered,
\begin{align}
p &\to p + \left( \frac{\partial p}{\partial E}\right)_q \Delta E
\end{align}
By convention the square root is always positive.
So if we change the energy, at every point inside this integral, the value of $p$ is going to change for a given $q$.
Secondly, the end points $q_1$ and $q_2$ are shifted.
That's because if you change $E$, your $q_1$ and $q_2$ are not the same now.
We're going to see how much the integral changes if we change $E$: its value changes because the integrand is changing at every point, and because the end points are shifting.
We're going to account for both these effects.
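As a sanity check of this claim (a standard example, not worked in this lecture), take the harmonic oscillator
\begin{align}
H &= \frac{p^2}{2m} + \frac{1}{2} m \omega^2 q^2 = E .
\end{align}
The orbit in phase space is an ellipse with semi-axes $\sqrt{2E/(m\omega^2)}$ in $q$ and $\sqrt{2mE}$ in $p$, so
\begin{align}
I &= \frac{1}{2\pi} \oint p\, dq
   = \frac{1}{2\pi} \, \pi \sqrt{\frac{2E}{m\omega^2}} \sqrt{2mE}
   = \frac{E}{\omega} ,
\end{align}
and indeed $dE/dI = \omega$, the frequency of oscillation.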
{ "alphanum_fraction": 0.6812451134, "avg_line_length": 28.4617524339, "ext": "tex", "hexsha": "f8381f528a049c06f429864035476ac1c17a4bbd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ehua7365/umdphysnotes", "max_forks_repo_path": "phys610/lecture19.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_path": "phys610/lecture19.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_path": "phys610/lecture19.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "num_tokens": 6090, "size": 20464 }
\chapter{Introduction} \label{c:intro} Attention plays an important role in human vision. For example, when we look at an image, our eye movements comprise a succession of {\em fixations} (repetitive positioning of eyes to parts of the image) and {\em saccades} (rapid eye jump). Those parts of the image that cause eye fixations and capture primary attention are called {\em regions of interest} (ROIs). Studies in visual attention and eye movement have shown that humans generally only attend to a few ROIs. Detecting these visually attentive regions in images is challenging but useful in many multimedia applications, such as automatic thumbnail cropping, object recognition, content-based image retrieval, adaptive image compression and automatic browsing in small-screen devices. Many algorithms have been proposed for automatic ROI detection in images. Unfortunately, these methods were often evaluated only on specific and small data sets that are not publicly available. The lack of published {\em benchmarks} makes experiments non-repeatable and quantitative evaluation difficult. However, as recommended by the latest ACM SIGMM retreat, repeatable experiments using published benchmarks are important for advancing the multimedia research field~\cite{Rowe:2005:ASR}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{kl} \caption{kl-distance} \label{kl} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{lcc} \hline & {\small Itti's method} & {\small Fuzzy growing} \\ \hline {\small Precision} & 0.4475 & 0.4506 \\ {\small Recall} & 0.5515 & 0.5542 \\ \hline \end{tabular} \caption[Evaluation of FOA sets]{\small Evaluation of FOA sets. } \label{t:FOA} \end{center} \end{table}
{ "alphanum_fraction": 0.7585626053, "avg_line_length": 37.1041666667, "ext": "tex", "hexsha": "d5e55ca89578f05f754f7bfd0f43d27605e318d0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kancheng/latex_practice_gather", "max_forks_repo_path": "ntu-thesis-1.3-testing-by-kan/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kancheng/latex_practice_gather", "max_issues_repo_path": "ntu-thesis-1.3-testing-by-kan/introduction.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "4f6e5631748669557ec47d81520f54548c5f02f5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kancheng/latex_practice_gather", "max_stars_repo_path": "ntu-thesis-1.3-testing-by-kan/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 441, "size": 1781 }
\documentclass{ltxdoc}
\usepackage[english]{babel}
\usepackage{hyperref}
\newcommand\authormail[1]{\footnote{\textless\url{#1}\textgreater}}
\ifdefined\HCode
\renewcommand\authormail[1]{\space\textless\Link[#1]{}{}#1\EndLink\textgreater}
\fi
\usepackage{fontspec}
\setmainfont{TeX Gyre Schola}
% \setmonofont[Scale=MatchLowercase]{Inconsolatazi4}
\IfFontExistsTF{Noto Sans Mono Regular}{%
\setmonofont[Scale=MatchLowercase]{Noto Sans Mono Regular}
}{\setmonofont{NotoMono-Regular.ttf}}
\usepackage{upquote}
\usepackage{microtype}
\usepackage[hybrid]{markdown}
\usepackage{luacode}
\title{The \texttt{Lua-UCA} library}
\author{Michal Hoftich\authormail{[email protected]}}
\date{Version \version\\\gitdate}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
\markdownInput{README.md}
\section{Available Languages}
The \texttt{lua-uca-languages} library provides the following languages:
\bgroup\ttfamily
\begin{luacode*}
-- get the list of currently supported languages directly from the library
local l = {}
local languages = require "lua-uca-languages"
for lang, _ in pairs(languages) do
  l[#l+1] = lang:gsub("_", '\\_')
end
table.sort(l)
tex.print(table.concat(l, ", "))
\end{luacode*}
\egroup
If you want to request a language not listed here, or if you have
created support code for one, please contact the package author by
mail or using the issue tracker on the package's GitHub page.
\markdownInput{HACKING.md}
\section{License}
\markdownInput{LICENSE}
\markdownInput{CHANGELOG.md}
\end{document}
{ "alphanum_fraction": 0.7705767984, "avg_line_length": 25.2950819672, "ext": "tex", "hexsha": "64b34335c224cb8cbd93a2863c0fee1e4ebc303c", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-11-26T12:51:08.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-26T12:51:08.000Z", "max_forks_repo_head_hexsha": "1b6f888481729859f88847189707996bdbf5827c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "michal-h21/lua-uca", "max_forks_repo_path": "lua-uca-doc.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "1b6f888481729859f88847189707996bdbf5827c", "max_issues_repo_issues_event_max_datetime": "2020-06-03T13:44:43.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-19T22:29:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "michal-h21/lua-uca", "max_issues_repo_path": "lua-uca-doc.tex", "max_line_length": 80, "max_stars_count": 2, "max_stars_repo_head_hexsha": "1b6f888481729859f88847189707996bdbf5827c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "michal-h21/lua-uca", "max_stars_repo_path": "lua-uca-doc.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-18T19:53:30.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-03T09:55:26.000Z", "num_tokens": 470, "size": 1543 }
\chapter{Json}
\subsection{Selectors for Json}
If you are familiar with CSS, then you know what selectors are.
Consider the following document:
\begin{verbatim}
{
  "title": "Java 4-ever",
  "url": "http://www.youtube.com/watch?v=H7QVITAWdBQ",
  "actors": [
    { "name": "Scala Johansson",     "character": "A" },
    { "name": "William Windows",     "character": "B" },
    { "name": "Eddie Larrison",      "character": "C" },
    { "name": "Mona Lisa Harddrive", "character": "D" },
    { "name": "Lenny Linux",         "character": "C (Young)" }
  ]
}
\end{verbatim}
With the JSON Pointer specification, information can be retrieved
with a pointer such as \verb+/actors/1/name+ (see the examples
below).
In reality this is not very elegant...
See some valid criticism at
\href{http://susanpotter.net/blogs/software/2011/07/why-json-pointer-falls-short/}{json pointer vs xpath}.
Although there are valid arguments for using an XPath-style syntax,
I am not convinced people are willing to turn the clock back and
start using XPath.
See RFC 6901, \url{https://tools.ietf.org/html/rfc6901}.
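For example, against the document above, standard RFC 6901 pointers
resolve as follows (note that array indices are zero-based):
\begin{verbatim}
/title          ->  "Java 4-ever"
/actors/0/name  ->  "Scala Johansson"
/actors/1/name  ->  "William Windows"
\end{verbatim}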
{ "alphanum_fraction": 0.6026431718, "avg_line_length": 23.1632653061, "ext": "tex", "hexsha": "5d5af39c9ffe200ba462afb493df0c656ab3ca00", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "10c6b5661fbbc971d798fc748f9f0d1d3982591f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "yannisl/gotex", "max_forks_repo_path": "sections/json.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "10c6b5661fbbc971d798fc748f9f0d1d3982591f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "yannisl/gotex", "max_issues_repo_path": "sections/json.tex", "max_line_length": 247, "max_stars_count": 1, "max_stars_repo_head_hexsha": "10c6b5661fbbc971d798fc748f9f0d1d3982591f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "yannisl/gotex", "max_stars_repo_path": "sections/json.tex", "max_stars_repo_stars_event_max_datetime": "2016-09-15T18:52:41.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-15T18:52:41.000Z", "num_tokens": 301, "size": 1135 }
\subsection{Qualitative} \label{subsec:appendix-experiments-qualitative} \begin{figure*} \centering \vspace{-0.5cm} % W %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaai}\W} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \BSDS \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/bsds500/w/cropped/w_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sbd/w/cropped/w_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \Fash \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/fash/w/cropped/w_132_contours} \end{subfigure} % EAMS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{ai}\EAMS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \begin{center} \BSDS \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/bsds500/eams/cropped/eams_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sbd/eams/cropped/eams_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \begin{center} \Fash \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/fash/eams/cropped/eams_132_contours} \end{subfigure}\\ % NC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\NC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/nc/cropped/nc_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/nc/cropped/nc_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/nc/cropped/nc_132_contours} \end{subfigure} % FH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\FH} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/fh/cropped/fh_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/fh/cropped/fh_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/fh/cropped/fh_132_contours} \end{subfigure}\\ % RW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\RW} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/rw/cropped/rw_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/rw/cropped/rw_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/rw/cropped/rw_132_contours} \end{subfigure} % QS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\QS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/qs/cropped/qs_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} 
\includegraphics[height=1.65cm]{pictures/sbd/qs/cropped/qs_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/qs/cropped/qs_132_contours} \end{subfigure}\\ % PF %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PF} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/pf/cropped/pf_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/pf/cropped/pf_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/pf/cropped/pf_132_contours} \end{subfigure} % TP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\TP} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/tp/cropped/tp_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/tp/cropped/tp_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/tp/cropped/tp_132_contours} \end{subfigure}\\ % CIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CIS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/cis/cropped/cis_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/cis/cropped/cis_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/cis/cropped/cis_132_contours} \end{subfigure} % SLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\SLIC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/slic/cropped/slic_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/slic/cropped/slic_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/slic/cropped/slic_132_contours} \end{subfigure}\\ % CRS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CRS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/crs/cropped/crs_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/crs/cropped/crs_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/crs/cropped/crs_132_contours} \end{subfigure} % ERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\ERS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ers/cropped/ers_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/ers/cropped/ers_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} 
\includegraphics[height=1.65cm]{pictures/fash/ers/cropped/ers_132_contours} \end{subfigure}\\ % PB %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PB} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/pb/cropped/pb_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/pb/cropped/pb_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/pb/cropped/pb_132_contours} \end{subfigure} % SEEDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\SEEDS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/seeds/cropped/seeds_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/seeds/cropped/seeds_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/seeds/cropped/seeds_132_contours} \end{subfigure}\\ % TPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\TPS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/tps/cropped/tps_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/tps/cropped/tps_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/tps/cropped/tps_132_contours} \end{subfigure} % VC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\VC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/vc/cropped/vc_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/vc/cropped/vc_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/vc/cropped/vc_132_contours} \end{subfigure}\\ % CCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CCS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ccs/cropped/ccs_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/ccs/cropped/ccs_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/ccs/cropped/ccs_132_contours} \end{subfigure} % CW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\CW} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/cw/cropped/cw_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/cw/cropped/cw_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/cw/cropped/cw_132_contours} \end{subfigure}\\ % ERGC 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ERGC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/ergc/cropped/ergc_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/ergc/cropped/ergc_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/ergc/cropped/ergc_132_contours} \end{subfigure} % MSS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\MSS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/mss/cropped/mss_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/mss/cropped/mss_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/mss/cropped/mss_132_contours} \end{subfigure}\\ % preSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\preSLIC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/preslic/cropped/preslic_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/preslic/cropped/preslic_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/preslic/cropped/preslic_132_contours} \end{subfigure} % WP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\WP} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/wp/cropped/wp_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/wp/cropped/wp_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/wp/cropped/wp_132_contours} \end{subfigure}\\ % ETPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ETPS} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/etps/cropped/etps_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/etps/cropped/etps_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/etps/cropped/etps_132_contours} \end{subfigure} % LSC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\LSC} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \includegraphics[height=1.65cm]{pictures/bsds500/lsc/cropped/lsc_208078_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sbd/lsc/cropped/lsc_6000067_contours} \end{subfigure} \begin{subfigure}[b]{0.10\textwidth} \includegraphics[height=1.65cm]{pictures/fash/lsc/cropped/lsc_132_contours} \end{subfigure}\\ % POISE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{a}\POISE}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
  \includegraphics[height=1.65cm]{pictures/bsds500/poise/cropped/poise_208078_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \includegraphics[height=1.65cm]{pictures/sbd/poise/cropped/poise_6000067_contours}
\end{subfigure}
\begin{subfigure}[b]{0.10\textwidth}
  \includegraphics[height=1.65cm]{pictures/fash/poise/cropped/poise_132_contours}
\end{subfigure}
% SEAW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{ai}\SEAW}
\end{subfigure}
\begin{subfigure}[b]{0.16\textwidth}
  \includegraphics[height=1.65cm]{pictures/bsds500/seaw/cropped/seaw_208078_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \includegraphics[height=1.65cm]{pictures/sbd/seaw/cropped/seaw_6000067_contours}
\end{subfigure}
\begin{subfigure}[b]{0.10\textwidth}
  \includegraphics[height=1.65cm]{pictures/fash/seaw/cropped/seaw_132_contours}
\end{subfigure}
\caption{Qualitative results on the \BSDS, \SBD and \Fash datasets; excerpts from the images in Figure \ref{fig:datasets} are shown for $\K \approx 1200$, in the upper left corner, and $\K \approx 3600$, in the lower right corner. Superpixel boundaries are depicted in black. We observe that with higher $\K$ both boundary adherence and compactness increase, even for algorithms not offering a compactness parameter. \textbf{Best viewed in color.}}
\label{fig:appendix-experiments-qualitative-bsds500-sbd-fash}
\end{figure*}
\def\NYUCroppedScale{0.18}
\def\SUNRGBDCroppedScale{0.14}
\begin{figure*}
\centering
% NC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{aaa}\NC}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/nc/cropped/nc_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/nc/cropped/nc_00000977_contours_scaled}
\end{subfigure}
% RW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{aaa}\RW}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/rw/cropped/rw_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/rw/cropped/rw_00000977_contours_scaled}
\end{subfigure}
% SEAW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{ai}\SEAW}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/seaw/cropped/seaw_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \begin{center}
    \NYU
  \end{center}
  \vskip -6px
  \includegraphics[height=1.65cm]{pictures/nyuv2/seaw/cropped/seaw_00000977_contours_scaled}
\end{subfigure}\\[4px]
% W %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{aaai}\W}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
\begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/w/cropped/w_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/w/cropped/w_00007477_contours} \end{subfigure} % EAMS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{ai}\EAMS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/eams/cropped/eams_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/eams/cropped/eams_00007477_contours} \end{subfigure} % FH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\FH} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \begin{center} \NYU \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/nyuv2/fh/cropped/fh_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \begin{center} \SUNRGBD \end{center} \vskip -6px \includegraphics[height=1.65cm]{pictures/sunrgbd/fh/cropped/fh_00007477_contours} \end{subfigure}\\ % QS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\QS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/qs/cropped/qs_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/qs/cropped/qs_00007477_contours} \end{subfigure} % PF %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PF} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/pf/cropped/pf_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/pf/cropped/pf_00007477_contours} \end{subfigure} % TP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\TP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/tp/cropped/tp_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/tp/cropped/tp_00007477_contours} \end{subfigure}\\ % CIS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\CIS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/cis/cropped/cis_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/cis/cropped/cis_00007477_contours} \end{subfigure} % SLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\SLIC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/slic/cropped/slic_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} 
\includegraphics[height=1.65cm]{pictures/sunrgbd/slic/cropped/slic_00007477_contours} \end{subfigure} % CRS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CRS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/crs/cropped/crs_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/crs/cropped/crs_00007477_contours} \end{subfigure}\\ % ERS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\ERS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ers/cropped/ers_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ers/cropped/ers_00007477_contours} \end{subfigure} % PB %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\PB} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/pb/cropped/pb_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/pb/cropped/pb_00007477_contours} \end{subfigure} % DASP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\DASP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/dasp/cropped/dasp_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/dasp/cropped/dasp_00007477_contours} \end{subfigure}\\ % SEEDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\SEEDS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/seeds/cropped/seeds_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/seeds/cropped/seeds_00007477_contours} \end{subfigure} % TPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\TPS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/tps/cropped/tps_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/tps/cropped/tps_00007477_contours} \end{subfigure} % VC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaai}\VC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/vc/cropped/vc_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/vc/cropped/vc_00007477_contours} \end{subfigure}\\ % CCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CCS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ccs/cropped/ccs_00001297_contours} 
\end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ccs/cropped/ccs_00007477_contours} \end{subfigure} % VCCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\VCCS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/vccs/cropped/vccs_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/vccs/cropped/vccs_00007477_contours} \end{subfigure} % CW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\CW} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/cw/cropped/cw_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/cw/cropped/cw_00007477_contours} \end{subfigure}\\ % ERGC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ERGC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/ergc/cropped/ergc_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/ergc/cropped/ergc_00007477_contours} \end{subfigure} % MSS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\MSS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/mss/cropped/mss_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/mss/cropped/mss_00007477_contours} \end{subfigure} % preSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\preSLIC} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/preslic/cropped/preslic_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/preslic/cropped/preslic_00007477_contours} \end{subfigure}\\ % WP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\WP} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} \includegraphics[height=1.65cm]{pictures/nyuv2/wp/cropped/wp_00001297_contours} \end{subfigure} \begin{subfigure}[b]{0.129\textwidth} \includegraphics[height=1.65cm]{pictures/sunrgbd/wp/cropped/wp_00007477_contours} \end{subfigure} % LRW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\begin{subfigure}[b]{0.02\textwidth} % \rotatebox{90}{\small\LRW} %\end{subfigure} %\begin{subfigure}[b]{0.1375\textwidth} % \includegraphics[height=1.65cm]{pictures/nyuv2/lrw/cropped/lrw_00001297_contours} %\end{subfigure} %\begin{subfigure}[b]{0.129\textwidth} % \includegraphics[height=1.65cm]{pictures/nyuv2/lrw/cropped/lrw_00001297_contours} %\end{subfigure}\\ % ETPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ETPS} \end{subfigure} \begin{subfigure}[b]{0.1375\textwidth} 
\includegraphics[height=1.65cm]{pictures/nyuv2/etps/cropped/etps_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \includegraphics[height=1.65cm]{pictures/sunrgbd/etps/cropped/etps_00007477_contours}
\end{subfigure}
% LSC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{aaa}\LSC}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \includegraphics[height=1.65cm]{pictures/nyuv2/lsc/cropped/lsc_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \includegraphics[height=1.65cm]{pictures/sunrgbd/lsc/cropped/lsc_00007477_contours}
\end{subfigure}\\
% POISE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{a}\POISE}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \includegraphics[height=1.65cm]{pictures/nyuv2/poise/cropped/poise_00001297_contours}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \includegraphics[height=1.65cm]{pictures/sunrgbd/poise/cropped/poise_00007477_contours}
\end{subfigure}
% Padding
\begin{subfigure}[b]{0.02\textwidth}
  \hphantom{aa}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaii}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaa}
\end{subfigure}
% Padding
\begin{subfigure}[b]{0.02\textwidth}
  \hphantom{aa}
\end{subfigure}
\begin{subfigure}[b]{0.1375\textwidth}
  \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaii}
\end{subfigure}
\begin{subfigure}[b]{0.129\textwidth}
  \hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaa}
\end{subfigure}
\caption{Qualitative results on the \NYU and \SUNRGBD datasets; excerpts from the images in Figure \ref{fig:datasets} are shown for $\K \approx 1200$, in the upper left corner, and $\K \approx 3600$, in the lower right corner. Superpixel boundaries are depicted in black. \NC, \RW and \SEAW could not be evaluated on the \SUNRGBD dataset due to excessive memory usage of the corresponding MATLAB implementations. Therefore, results on the \NYU dataset are shown.
\textbf{Best viewed in color.} } \label{fig:appendix-experiments-qualitative-nyuv2-sunrgbd} \end{figure*} \begin{figure} \centering % SLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\SLIC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/1/cropped/slic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/10/cropped/slic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/slic/score/80/cropped/slic_35028_contours} \end{subfigure}\\ % vlSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\vlSLIC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vlslic/score/1/cropped/vlslic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vlslic/score/10/cropped/vlslic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vlslic/score/80/cropped/vlslic_35028_contours} \end{subfigure}\\ % CRS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\CRS} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.001/cropped/crs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.01/cropped/crs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/crs/score/0.1/cropped/crs_35028_contours} \end{subfigure}\\ % reSEEDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\reSEEDS} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/reseeds/score/0.0/cropped/reseeds_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/reseeds/score/0.25/cropped/reseeds_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/reseeds/score/0.5/cropped/reseeds_35028_contours} \end{subfigure}\\ % VC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\VC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vc/score/10/cropped/vc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vc/score/25/cropped/vc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/vc/score/100/cropped/vc_35028_contours} \end{subfigure}\\ % CCS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} 
\rotatebox{90}{\small\hphantom{aai}\CCS} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ccs/score/25/cropped/ccs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ccs/score/100/cropped/ccs_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ccs/score/500/cropped/ccs_35028_contours} \end{subfigure}\\ % CW %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\CW} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/cw/score/0.01/cropped/cw_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/cw/score/0.1/cropped/cw_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/cw/score/1/cropped/cw_35028_contours} \end{subfigure}\\ % preSLIC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{a}\preSLIC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/preslic/score/5/cropped/preslic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/preslic/score/20/cropped/preslic_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/preslic/score/80/cropped/preslic_35028_contours} \end{subfigure}\\ % WP %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aaa}\WP} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/wp/score/1/cropped/wp_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/wp/score/5/cropped/wp_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/wp/score/25/cropped/wp_35028_contours} \end{subfigure}\\ % ERGC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aa}\ERGC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ergc/score/0/cropped/ergc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ergc/score/5/cropped/ergc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/ergc/score/50/cropped/ergc_35028_contours} \end{subfigure}\\ % LSC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{subfigure}[b]{0.02\textwidth} \rotatebox{90}{\small\hphantom{aai}\LSC} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} \includegraphics[height=1.525cm]{pictures/compactness/bsds500/lsc/score/0/cropped/lsc_35028_contours} \end{subfigure} \begin{subfigure}[b]{0.141\textwidth} 
\includegraphics[height=1.525cm]{pictures/compactness/bsds500/lsc/score/0.1/cropped/lsc_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
  \includegraphics[height=1.525cm]{pictures/compactness/bsds500/lsc/score/0.25/cropped/lsc_35028_contours}
\end{subfigure}\\
% ETPS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subfigure}[b]{0.02\textwidth}
  \rotatebox{90}{\small\hphantom{aa}\ETPS}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
  \includegraphics[height=1.525cm]{pictures/compactness/bsds500/etps/score/0.01/cropped/etps_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
  \includegraphics[height=1.525cm]{pictures/compactness/bsds500/etps/score/0.1/cropped/etps_35028_contours}
\end{subfigure}
\begin{subfigure}[b]{0.141\textwidth}
  \includegraphics[height=1.525cm]{pictures/compactness/bsds500/etps/score/1/cropped/etps_35028_contours}
\end{subfigure}
\caption{The influence of a low (left) and a high (right) compactness parameter, demonstrated on the caterpillar image from the \BSDS dataset for $\K \approx 400$. Superpixel boundaries are depicted in black. For all shown algorithms, the compactness parameter allows one to gradually trade boundary adherence for compactness. \textbf{Best viewed in color.}}
\label{fig:appendix-experiments-qualitative-compactness}
\end{figure}

We briefly discuss visual quality on additional examples provided in Figures \ref{fig:appendix-experiments-qualitative-bsds500-sbd-fash} and \ref{fig:appendix-experiments-qualitative-nyuv2-sunrgbd}. Additionally, Figure \ref{fig:appendix-experiments-qualitative-compactness} shows the influence of the compactness parameter on superpixel algorithms not discussed in Section \ref{subsec:experiments-qualitative}.

Most algorithms exhibit good boundary adherence, especially for large \K. In contrast to the discussion in Section \ref{subsec:experiments-qualitative}, which focuses on qualitative results with $\K \approx 400$ and $\K \approx 1200$, Figures \ref{fig:appendix-experiments-qualitative-bsds500-sbd-fash} and \ref{fig:appendix-experiments-qualitative-nyuv2-sunrgbd} also show results for $\K \approx 3600$. We observe that with rising \K, most algorithms exhibit better boundary adherence. Exceptions are, again, easily identified: \FH, \QS, \CIS, \PF, \PB, \TPS and \SEAW. Still, due to the higher \K, the effect of missed image boundaries is not as serious as with fewer superpixels. Overall, the remaining algorithms show good boundary adherence, especially for high \K.

Compactness increases with higher \K; still, a compactness parameter is beneficial. While for higher \K superpixels tend to be more compact in general, the influence of parameter optimization with respect to \Rec and \UE is still visible, even for algorithms providing a compactness parameter. For example, \ERGC or \ETPS exhibit more irregular superpixels compared to \SLIC or \CCS. Complementing this discussion, Figure \ref{fig:appendix-experiments-qualitative-compactness} shows the influence of the compactness parameter for those algorithms with a compactness parameter that were not discussed in detail in Section \ref{subsec:experiments-qualitative}. It can be seen that a compactness parameter allows one to gradually trade boundary adherence for compactness in all of the presented cases. However, higher \K also induces higher compactness for algorithms not providing a compactness parameter, such as \CIS, \RW, \W or \MSS, to name only a few examples.
Overall, higher \K induces both better boundary adherence and higher compactness, independent of whether a compactness parameter is involved.
{ "alphanum_fraction": 0.6500204666, "avg_line_length": 46.2884210526, "ext": "tex", "hexsha": "40214d2439f231055987130787a8d7c35b661600", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "davidstutz/cviu2018-superpixels", "max_forks_repo_path": "paper/appendix/experiments-qualitative.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "davidstutz/cviu2018-superpixels", "max_issues_repo_path": "paper/appendix/experiments-qualitative.tex", "max_line_length": 121, "max_stars_count": null, "max_stars_repo_head_hexsha": "83e0db95cff91fee26ea04d5ecdb221d441e940b", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "davidstutz/cviu2018-superpixels", "max_stars_repo_path": "paper/appendix/experiments-qualitative.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14558, "size": 43974 }
\section*{Acknowledgement}
\addcontentsline{toc}{section}{Acknowledgement}
The quantum circuits in this document were drawn with \texttt{quantikz}~\cite{Quantikz}.
{ "alphanum_fraction": 0.8159509202, "avg_line_length": 40.75, "ext": "tex", "hexsha": "b00c858477971f8970e42a79164495b3ee6af782", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "kurabirko/phys400", "max_forks_repo_path": "documents/proposal/content/acknowledgement.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "kurabirko/phys400", "max_issues_repo_path": "documents/proposal/content/acknowledgement.tex", "max_line_length": 87, "max_stars_count": null, "max_stars_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "kurabirko/phys400", "max_stars_repo_path": "documents/proposal/content/acknowledgement.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 42, "size": 163 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode$for(hyperrefoptions)$,$hyperrefoptions$$endfor$}{hyperref} \PassOptionsToPackage{hyphens}{url} $if(colorlinks)$ \PassOptionsToPackage{dvipsnames,svgnames,x11names}{xcolor} $endif$ % % \documentclass[ % $if(fontsize)$ % $fontsize$, % $endif$ % $if(papersize)$ % $papersize$paper, % $endif$ % $for(classoption)$ % $classoption$$sep$, % $endfor$ % ]{$documentclass$} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % begin adapted from mla-tex package % (fold) \documentclass[12pt]{article} %% MLA requires 8.5x11 (letterpaper) and 1in margins on all sides. \usepackage[letterpaper]{geometry} \geometry{ top=1.0in, bottom=1.0in, left=1.0in, right=1.0in } %% Package fancyhdr allows customizing the headers and footers. %% Setting the pagestyle is required for the customized %% headers/footers to be used. \fancyhf{} removes the default contents %% of the headers and footers, leaving them blank. \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} % https://tex.stackexchange.com/q/528358 \setlength\headheight{15pt} %% Put the author's last name and the page number in the %% upper-right-hand corner. \rhead{\ifno{headername}{\thepage}{\get{headername}~\thepage}} \if@nofirstpagenumber \fancypagestyle{blank}{ \fancyhf{} } \thispagestyle{blank} \fi %% Remove the horizontal rule that is usually displayed just below the %% page header. \renewcommand*{\headrulewidth}{0pt} %% Set the appropriate font (Tinos or Times New Roman). % Load New TX if not using OpenType-compatible engine \iftutex \usepackage{fontspec} \setmainfont{Times New Roman} \else \RequirePackage[T1]{fontenc} \RequirePackage{newtxtext} \fi %% Use package ragged2e to inhibit justification. Vanilla %% \raggedright screws up paragraph indents. \usepackage{ragged2e} \setlength\RaggedRightParindent\parindent \RaggedRight %% MLA requires exactly 0.5in paragraph indents. \setlength{\parindent}{0.5in} %% MLA also says that every paragraph should be indented, including %% the first paragraph of a section. \usepackage{indentfirst} %% Make a new version of the {center} environment that doesn't add %% extra spacing. \newenvironment {centered} {\parskip=0pt\nopagebreak\centering} {\par\noindent\ignorespacesafterend} %% Everyone loves double-spacing. \usepackage{setspace} \setstretch{2} % Messy header stuff to follow... \newcommand*{\newfield}[1]{% \unset{#1}% \expandafter\newcommand\csname #1\endcsname[1]{% \expandafter\def\csname value#1\endcsname{##1}}% } \newcommand*{\renewfield}[1]{% \unset{#1}% \expandafter\renewcommand\csname #1\endcsname[1]{% \expandafter\def\csname value#1\endcsname{##1}}% } \newcommand*{\get}[1]{\csname value#1\endcsname} \newcommand{\ifno}[3]{% \expandafter\ifdefempty\csname value#1\endcsname{#2}{#3}% } \newcommand*{\unset}[1]{% \expandafter\def\csname value#1\endcsname{\textbackslash #1\{?\}}% } %% Fields used in header. \newfield{fullname} \newfield{secondfullname} \newfield{lastname} \newfield{headername} \newfield{professor} \newfield{class} \newfield{postal} \newfield{email} \newfield{telephone} \renewfield{date} \renewfield{title} % %% Default values. $if(date)$\date{$date$}$else$\date{\Today}$endif$ %% format the date \usepackage[en-GB]{datetime2} \DTMlangsetup[en-GB]{ord=omit} %% Define a general command for inserting MLA-style headers. 
\newenvironment{header}{ \begingroup% \rmfamily% \fontsize{12}{2}% \setlength{\parindent}{0pt} }{% \endgroup% } %% And a convenience function for the most common case. \newcommand*{\makeheader}{% \begin{header} \ifno{fullname}{}{ \get{fullname} \par } \ifno{professor}{}{ \get{professor} \par } \ifno{class}{}{ \get{class} \par } \ifno{postal}{}{ \get{postal} \par } \ifno{email}{}{ \get{email} \par } \ifno{telephone}{}{ \get{telephone} \par } \ifno{date}{}{ \get{date} \par } \end{header}% \begin{centered} \get{title} \ifno{secondfullname}{}{ \par \get{secondfullname} } \end{centered}% } \newcommand*{\mlatitlepage}{% \setcounter{page}{0} \thispagestyle{empty} \hspace{0pt} \vfill \begin{centered} \get{title} \par\mbox{ }\par\mbox{ }\par \ifno{fullname}{}{ \get{fullname} \par\mbox{ }\par } \ifno{professor}{}{ \get{professor} \par } \ifno{class}{}{ \get{class} \par } \ifno{postal}{}{ \get{postal} \par } \ifno{email}{}{ \get{email} \par } \ifno{telephone}{}{ \get{telephone} \par } \ifno{date}{}{ \mbox{ }\par\get{date} \par } \end{centered}% \vfill \hspace{0pt} \newpage% \begin{centered} \get{title} \end{centered}% } % Reformatting section headers, etc. \makeatletter \renewcommand \thesection{\@arabic\c@section.} \renewcommand\thesubsection{\thesection\@arabic\c@subsection} \renewcommand \section{\@startsection% {section} {1} {\z@}% {\z@}% {\lineskip}% {\normalfont}} \renewcommand \subsection{\@startsection% {subsection} {2} {\z@}% {\z@}% {\lineskip}% {\normalfont}} \renewcommand\subsubsection{\@startsection% {subsubsection} {3} {\z@}% {\z@}% {\lineskip}% {\normalfont}} \renewcommand \paragraph{\@startsection% {paragraph} {4} {\z@}% {\z@}% {\lineskip}% {\normalfont}} \renewcommand \subparagraph{\@startsection% {subparagraph} {5} {\parindent}% {\z@}% {\lineskip}% {\normalfont}} %% Formatting section headings % \def\section{\@startsection{section}{1}{\z@}{-5.25ex plus -1ex minus % -.2ex}{1.5ex plus .2ex}{\center}} % \def\thesection{\arabic{section}.} \makeatother % end adapted from mla-tex package % (end) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ $if(title-meta)$ pdftitle={$title-meta$}, $endif$ $if(author-meta)$ pdfauthor={$author-meta$}, $endif$ $if(lang)$ pdflang={$lang$}, $endif$ $if(subject)$ pdfsubject={$subject$}, $endif$ $if(keywords)$ pdfkeywords={$for(keywords)$$keywords$$sep$, $endfor$}, $endif$ $if(colorlinks)$ colorlinks=true, linkcolor={$if(linkcolor)$$linkcolor$$else$Maroon$endif$}, filecolor={$if(filecolor)$$filecolor$$else$Maroon$endif$}, citecolor={$if(citecolor)$$citecolor$$else$Blue$endif$}, urlcolor={$if(urlcolor)$$urlcolor$$else$Blue$endif$}, $else$ hidelinks, $endif$ pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[american]{babel} \usepackage{csquotes} \usepackage[style=mla]{biblatex} \addbibresource{$bibliography$} $if(lastname)$\headername{$lastname$}$else$\headername{$title$}$endif$ $if(professor)$\professor{$professor$}$else$\professor{}$endif$ $if(class)$\class{$class$}$else$\class{}$endif$ $if(postal)$\postal{$postal$}$else$\postal{}$endif$ $if(email)$\email{$email$}$else$\email{}$endif$ $if(telephone)$\telephone{$telephone$}$else$\telephone{}$endif$ $if(anonymous)$\headername{$title$}$endif$ 
$if(anonymous)$\renewcommand{\makeheader}{\mlatitlepage}$endif$ $if(author)$ \fullname{$author$} $endif$ $if(repeatname)$\secondfullname{$author$}$else$\secondfullname{}$endif$ $if(title)$ \title{$title$$if(thanks)$\thanks{$thanks$}$endif$} $endif$ % \author{$for(author)$$author$$sep$ \and $endfor$} $if(highlighting-macros)$ $highlighting-macros$ $endif$ \newcommand*{\mlaworkscited}{ \begin{centered}Works Cited\end{centered} \printbibliography[heading=none] } \begin{document} $if(titlepage)$ \mlatitlepage $else$ \makeheader $endif$ $if(abstract)$ \begin{abstract} $abstract$ \end{abstract} $endif$ $for(include-before)$ $include-before$ $endfor$ $body$ \newpage \mlaworkscited \end{document}
{ "alphanum_fraction": 0.6555637799, "avg_line_length": 22.9631728045, "ext": "tex", "hexsha": "72210953b66bef023119de789e2ec066447df4cc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jmclawson/rmd4mla", "max_forks_repo_path": "inst/rmarkdown/templates/mlatemplate.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065", "max_issues_repo_issues_event_max_datetime": "2022-03-13T15:14:30.000Z", "max_issues_repo_issues_event_min_datetime": "2022-03-13T15:14:30.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jmclawson/rmd2mla", "max_issues_repo_path": "inst/rmarkdown/templates/mlatemplate.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jmclawson/rmd2mla", "max_stars_repo_path": "inst/rmarkdown/templates/mlatemplate.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2626, "size": 8106 }
\chapter{Figures}
\label{chap:fig}
\lipsum
\begin{figure}
    \centering
    \subfloat[$f_w$ is over-fitting.]{\input{figures/appendix/over_fitting_yes.tex} \label{of:yes}}
    \hfil
    \subfloat[$f_w$ is generalising.]{\input{figures/appendix/over_fitting_no.tex} \label{of:no}}
    \caption{A beautiful illustration of over-fitting, drawn with Mathcha (\url{https://www.mathcha.io/}).}
    \label{fitting}
\end{figure}
{ "alphanum_fraction": 0.7018779343, "avg_line_length": 25.0588235294, "ext": "tex", "hexsha": "eb6035c398a1723b32ab807b3efaa4d977a18393", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-12-02T08:30:10.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-02T08:30:10.000Z", "max_forks_repo_head_hexsha": "eaf67661bb6af685bae6c69150c06406c37d0248", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "guiguem/my_phd_template", "max_forks_repo_path": "chapters/appendix/appendix_A.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eaf67661bb6af685bae6c69150c06406c37d0248", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "guiguem/my_phd_template", "max_issues_repo_path": "chapters/appendix/appendix_A.tex", "max_line_length": 101, "max_stars_count": 5, "max_stars_repo_head_hexsha": "eaf67661bb6af685bae6c69150c06406c37d0248", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "guiguem/my_phd_template", "max_stars_repo_path": "chapters/appendix/appendix_A.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-30T07:39:52.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-19T09:11:34.000Z", "num_tokens": 142, "size": 426 }
%!TEX root = ../report.tex
\section*{Cloning the repository to work offline}
In this assignment, the \texttt{mysql-server} repository on GitHub is used. The link to the repository is as follows: \url{https://github.com/mysql/mysql-server}. In order to work quickly and efficiently, the remote repository has to be copied to the local machine using git's \texttt{clone} command. The command and its result are shown as follows:\\
\lstinputlisting[caption=Command to clone mysql-server repository., label=code:clone, numbers=left, language=bash]{./content/listing/git-clone.txt}
The operation above took around 10 minutes. The time required depends largely on the speed of the Internet connection. After this operation, all subsequent operations are performed locally, which saves a considerable amount of time.

\section*{How many versions are in the repository?}
There are \textbf{118,088} commits since the repository started. This value is derived from the following command in the terminal:
\begin{lstlisting}[language=bash,caption=Command to get commit count., label=code:count]
$ git rev-list HEAD --count
118088
\end{lstlisting}
As the git repository is quite big (almost 1~GB of data), the above-mentioned operation took a couple of seconds to complete.

\section*{When was the first one committed?}
The first commit was made on \textbf{Monday, July 31st 2000}. This information was extracted from the following command:
\begin{lstlisting}[language=bash,caption=Command to get first commit., label=code:firstCommit]
$ git log --reverse
commit 7eec25e393727b16bb916b50d82b0aa3084e065c
Author: [email protected] <>
Date:   Mon Jul 31 21:10:05 2000 +0200

    Initial repository create
\end{lstlisting}
The command above shows the git log in reverse order, i.e. starting from the very beginning.

\section*{When was the last one committed?}
The last commit was made by Rick Hillegas on \textbf{Friday, July 17th 2015}, as can be seen in \autoref{code:lastCommit} below.
\begin{lstlisting}[language=bash,caption=Command to get last commit., label=code:lastCommit]
$ git log -1
commit a2757a60a7527407d08115e44e889a25f22c96c6
Author: Rick Hillegas <[email protected]>
Date:   Fri Jul 17 12:15:05 2015 -0700

    Bug#21450084 LET JSON_INSERT() INSERT INTO THE MIDDLE OF JSON ARRAYS.
    (cherry picked from commit 27368364403e241c86650f635c72664b4c11d7ff)
\end{lstlisting}
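The queries above can also be scripted rather than typed interactively. The listing below is a minimal sketch in Go (the language choice is purely illustrative) that reproduces the commit count and the first commit date by shelling out to git; it assumes the clone from \autoref{code:clone} lives in \texttt{./mysql-server}.
\begin{lstlisting}[caption=Scripting the same queries in Go., label=code:goStats]
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// gitOutput runs a git command inside the cloned repository and
// returns its trimmed standard output.
func gitOutput(repoDir string, args ...string) (string, error) {
    cmd := exec.Command("git", args...)
    cmd.Dir = repoDir
    out, err := cmd.Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    repo := "./mysql-server" // assumed path of the local clone

    count, err := gitOutput(repo, "rev-list", "--count", "HEAD")
    if err != nil {
        panic(err)
    }
    fmt.Println("commits:", count)

    // %ad prints the author date; --reverse starts at the first commit.
    first, err := gitOutput(repo, "log", "--reverse", "--format=%ad", "--date=short")
    if err != nil {
        panic(err)
    }
    fmt.Println("first commit date:", strings.SplitN(first, "\n", 2)[0])
}
\end{lstlisting}
Note that \texttt{git log --reverse --format=\%ad} prints one date per commit, so only the first line of the output is kept.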
\documentclass[11pt,twocolumn,letterpaper]{article}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{pifont}
\newcommand{\cmark}{\ding{51}}%
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[breaklinks=true,bookmarks=false]{hyperref}
\usepackage{enumitem}

\cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{****} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

\begin{document}

%%%%%%%%% TITLE
\title{Challenge Data ``POSOS: Predict the expected answer"}

\author{Romain Vial\\
\texttt{[email protected]}}

\maketitle

\section{Introduction}
In France, more than 11,000 drugs are currently commercialized. Not only patients but also healthcare professionals struggle with their usage. More importantly, drug misuse could be responsible for more than 144,000 hospitalizations every year in France, and more than 1.5 million in the USA.

What is particularly interesting to understand about drug queries is what information people actually expect as an answer. This could be, for instance, side effects, drug composition or contraindications. While there is a limited number of question types, the challenge lies in the diversity and variability of the questions asked.

The goal of the POSOS challenge is to predict the intent associated with a given question. Questions are classified according to a list of 51 different anonymized categories. To solve this problem, we explored different classical text classification methods and combined them to provide an acceptable answer.

In the following sections, we will first focus on understanding the data. Then, we will describe the proposed algorithm. Finally, we will present our results and ranking.

\section{Data Exploration}
First, we will explore and understand the data before doing any further processing. The dataset consists of 10,063 questions: 8,028 questions with their corresponding intents are used for training, and 2,035 questions without intents are used for testing.

\subsection{Class Imbalance}
The questions are distributed among 51 anonymized intents or categories. Figure~\ref{fig:intent_hist} shows the intent distribution over the training set. One can see that the dataset is heavily imbalanced. The class \texttt{28} accounts for more than 22\% of the dataset while around 30 classes account for less than 1\% each. We can assume that the test set has the same distribution. However, such an imbalanced distribution can harm classification results.

\begin{figure}
    \centering
    \includegraphics[width=0.9\linewidth]{images/intent_hist}
    \caption{Intent distribution over the training set. In red, the corresponding uniform distribution. Best viewed in colors.}
    \label{fig:intent_hist}
\end{figure}

\subsection{Dealing with Drug Names}
\label{sec:drug-names}
Most of the questions in the dataset contain several drug names. Our assumption is that, for this intent classification task, the presence and count of drug names are more important than the names themselves. For instance, in the sentence ``Par quoi remplacer Aerius pendant la grossesse ?", the name ``Aerius" is not that important for classification as long as we can recognize that the sentence contains one drug name. We could even replace the drug name by a token, ``Par quoi remplacer <MED> pendant la grossesse ?", without losing too much information.
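As an illustration of this token substitution, a minimal sketch is given below (Python; the drug lexicon excerpt and the tokenization are hypothetical simplifications, not our exact pipeline):

\begin{verbatim}
import re

# Hypothetical excerpt of the drug lexicon
# (the full list is described below).
DRUG_NAMES = {"aerius", "doliprane"}

def replace_drugs(question, numbered=False):
    """Replace known drug names by <MED>
    (or numbered <MEDi>) tokens."""
    out, count = [], 0
    for tok in re.findall(r"\w+|\S", question):
        if tok.lower() in DRUG_NAMES:
            out.append("<MED%d>" % count
                       if numbered else "<MED>")
            count += 1
        else:
            out.append(tok)
    return " ".join(out)

print(replace_drugs(
    "Par quoi remplacer Aerius "
    "pendant la grossesse ?"))
# Par quoi remplacer <MED> pendant la grossesse ?
\end{verbatim}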
In this project, we used three strategies to deal with these drug names:
\begin{enumerate}
    \item adding a numerical feature containing the number of occurrences of drug names in the question;
    \item replacing all occurrences of drug names by a token <MED>;
    \item replacing all occurrences of drug names by a numbered token, e.g. if two drugs appear in the question, one will be replaced by <MED0> and the other one by <MED1>.
\end{enumerate}
The extraction of drug names from a given question is done by looking for any occurrence of a known drug commercialized in France. The list of drugs commercialized in France is openly available on the ANSM website\footnote{ANSM website: \texttt{www.agence-prd.ansm.sante.fr}}.

\section{Proposed Approach}
In the following section, we present the methods used to represent and classify the questions.

\subsection{Feature Representation}
% Meaningful features
\subsubsection{TF-IDF}
A common representation for textual data is the so-called Bag-of-Words (BoW) representation, where each sentence is represented as a histogram of term occurrences. Hence, each individual token is treated as a feature. However, in a large text corpus some words will be over-represented, such as ``the" or ``a" in English, thus carrying little information about the actual category of the sentence. Furthermore, these words will disturb the statistics of rarer but more discriminative words.

Tf-idf, term frequency times inverse document frequency, is a reweighting scheme that reflects how important a word is to a sentence or document. The simplest choice for the term frequency is the raw count of a term in a document, i.e. the BoW frequency. The inverse document frequency is then a measure of how much information a word carries: a word that is rare in the whole corpus will probably provide more information than a very common word. A simple choice for the idf term is the logarithm of the total number of documents divided by the number of documents containing the term. This leads to the following formula:
\begin{align*}
\text{tf-idf}(t,d) &= \text{tf}(t, d) \times \text{idf}(t)\\
&= f_{t,d} \times \log\left(\frac{N}{n_{t}}\right)
\end{align*}
where $f_{t,d}$ is the frequency of term $t$ in document $d$, $N$ is the total number of documents and $n_{t}$ is the number of documents containing the term $t$.

\subsubsection{Doc2Vec}
Doc2Vec \cite{le2014distributed} is a distributed representation of documents, built as an extension of the more classical Word2Vec \cite{mikolov2013distributed}. Its idea is to embed documents into a high-dimensional space where similar documents tend to be close to each other. Contrary to the tf-idf approach, the distance between two document vectors carries a meaning. The document vectors are obtained after an unsupervised training phase where the model tries to predict the next word in a sentence given its context and a document vector. Such a representation has proven useful in sentiment analysis and other classification tasks.

\subsection{Classification Method}
\subsubsection{Support Vector Machine}
The Support Vector Machine (SVM) is a classical method for supervised classification, first proposed and described in \cite{Cortes1995}. The idea is to embed the input data into a high-dimensional space where the data become linearly separable. In our case, as our data already live in a high-dimensional space, we only used a simple linear kernel.
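For reference, the tf-idf representation and a linear SVM can be combined in a few lines; a minimal sketch is given below (Python with scikit-learn; the questions and labels are placeholders, not our actual data or configuration):

\begin{verbatim}
from sklearn.feature_extraction.text \
    import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data.
questions = [
    "Par quoi remplacer <MED> "
    "pendant la grossesse ?",
    "Quels sont les effets "
    "secondaires de <MED> ?",
]
intents = [28, 3]

model = make_pipeline(TfidfVectorizer(),
                      LinearSVC())
model.fit(questions, intents)
print(model.predict(
    ["Composition de <MED> ?"]))
\end{verbatim}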
\subsubsection{Xgboost}
Xgboost \cite{chen2016xgboost} is a boosting method based on an ensemble of classification trees. Contrary to SVM, the model is trained in an additive manner: at each iteration of the training process, it adds the tree that most improves the model according to a loss function and a regularization term.

\subsubsection{Ensembling}
Combining the outputs of different classifiers into a new one to boost predictive performance is called ensembling. In practice, an ensemble model yields better results when there is significant diversity among the models.

\section{Experiments}
In order to assess the interest of our method, we performed several quantitative and qualitative experiments. In the following, the training set is separated into two splits: 80\% for training and 20\% for validation.

\subsection{Quantitative Experiments}
Table~\ref{tab:experiments} shows the training and validation accuracy according to different representation and classification models. When two classification models are checked, it means that we use an ensemble of the two models. We use a simple ensemble strategy which consists in outputting a weighted average of the SVM and Xgboost class probabilities.

One can see that the Doc2Vec approach results in a much lower accuracy than the tf-idf approach. Some insights about this result can be found in Sec.~\ref{sec:qualitative-exp}. The tf-idf approach looks more robust for this classification task. We can also observe that, in both cases, ensembling the results of SVM and Xgboost leads to a slight improvement in accuracy.

According to these results, we chose to submit our tf-idf ensembling approach in order to evaluate it on the test set. We obtained an accuracy of 68.60\%, which corresponds to the 8th position (out of 19), including the POSOS benchmark entry.

\begin{table}[t!]
    \centering
    \begin{tabular}{|c|c||c|c||c|c|}
        \hline
        tf-idf & Doc2Vec & SVM & XGB & train & val\\
        \hline
        \hline
        \cmark & & \cmark & & 99.41 & 65.91 \\
        \hline
        \cmark & & & \cmark & 94.11 & 60.50 \\
        \hline
        \cmark & & \cmark & \cmark & 99.37 & \textbf{66.71} \\
        \hline
        \hline
        & \cmark & \cmark & & 84.28 & 51.84 \\
        \hline
        & \cmark & & \cmark & 98.65 & 50.18 \\
        \hline
        & \cmark & \cmark & \cmark & 93.98 & 52.03 \\
        \hline
    \end{tabular}
    \caption{Train and val accuracy according to different representation and classification models.}
    \label{tab:experiments}
\end{table}

\subsection{Qualitative Experiments}
\label{sec:qualitative-exp}
One benefit of using Doc2Vec as a distributed representation for our data is the ability to plot and explore the embedding space. Figure~\ref{fig:embedding} shows the Doc2Vec embedding of the training split. One can see that the classes are not well separated, which explains the poor results obtained with this method compared with the classical tf-idf approach.

\begin{figure}
    \centering
    \includegraphics[width=0.95\linewidth]{images/embedding}
    \caption{T-SNE representation of the 50-dimensional Doc2Vec embedding of the training split. The color of a point represents the intent of the corresponding question. Best viewed in colors.}
    \label{fig:embedding}
\end{figure}

\section{Conclusion}
% Normalization, Med embedding, CNN/RNN
In this report, we tried several data representations along with supervised classifiers to recognize intent in medical questions.
We were not able to reach Posos's state-of-the-art accuracy of 84.69\%, but we obtained a reasonable result of around 69\% without using advanced methods such as Convolutional Neural Networks.

In the following, we describe some possible improvements to our method that could boost the results:
\begin{itemize}
    \item We observed that many sentences contain spelling errors or unnormalized French, such as punctuation repetitions or phonetic spelling. Such words are usually harmful as they are counted as out-of-vocabulary tokens. This could be one possible reason for the poor results of the Doc2Vec approach. A normalization method could be implemented following noisy-channel spelling correction approaches as proposed by \cite{kernighan1990spelling}.
    \item Our treatment of drug names is very simple: names are either replaced by a common token or counted in an additional feature (cf. Sec.~\ref{sec:drug-names}). Another possible method would have been to train drug embeddings. For instance, by looking at which drugs co-occur in the same question or context, we could probably have extracted interesting insights to improve the overall representation of a question.
    \item Convolutional and Recurrent Neural Networks have recently proven their superior capability to capture semantic and syntactic features in natural language. Combining them with well-trained word vectors, possibly specialized on a medical corpus, would probably boost our classification accuracy.
\end{itemize}

{\small
\bibliographystyle{ieee}
\bibliography{egbib}
}

\end{document}
%#########################################################
\chapter{\textit{In-vivo} Measurements of Strain Rate Tensor in Human Muscle}
\label{ch: VEPC}
%#########################################################
The study of human muscle function requires a noninvasive method that can provide a velocity map. One of the well-established MR techniques is Velocity Encoded Phase-Contrast (VEPC) imaging~\cite{Morse:1970ux, Pelc:1991vr}.
%-new paragraph-%

%-new paragraph-%
This chapter gives a quantitative overview and implementation details of the phase-contrast technique for velocity measurements, and describes the application of this MRI pulse sequence to track changes in strain rate patterns during isometric contraction of the human calf muscle caused by disuse atrophy and aging.
%Sections~\ref{sec: SR_ULLS}~and~\ref{sec: SR_SHEAR} include the results presented at the 25th Annual Meeting of Society of Magnetic Resonance in Medicine in Honolulu, HI in 2017 and published in 2018~\cite{Malis:2018fr, Sinha:2018bb}.

%=========================================================
\section{Velocity Encoded Phase-Contrast Imaging}
%=========================================================
The VEPC imaging method is based on detecting phase changes in the transverse magnetization of the tissue as it moves along a magnetic field gradient.

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Velocity Measurements}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In a strong magnetic field, the precession of a nucleus is described by the Larmor frequency (Equation~\ref{eq: Quantum Larmor frequency}). For protons at the location described by the radius vector $\mathbf{r}$, moving at a constant velocity $\mathbf{v}$ in a strong magnetic field $B_0$ with inhomogeneity $\Delta B_0$, the total magnetic field is:
%.........................................................
\begin{equation}\label{eq: field B vepc}
B=B_0+\Delta B_0 + \mathbf{G} \cdot(\mathbf r + \mathbf v t)
\end{equation}
%.........................................................
Since the phase is simply $\phi=\int\omega \, dt$, the phase of moving protons is:
%.........................................................
\begin{equation}\label{eq: phas vepc }
\phi = \gamma \int dt [B_0+\Delta B_0] + \gamma\int dt \mathbf{G} \cdot \mathbf{r} + \gamma\int dt\,\mathbf{G} \cdot \mathbf v t
\end{equation}
%.........................................................
Only moving spins are of interest; to eliminate the first two integrals in the equation above, two acquisitions, each with a pair of bipolar gradient pulses of opposite polarity, are used. It is important to note that the only contribution to the phase of the MR signal comes from the velocity component along the direction of the bipolar gradient, so only one direction can be encoded with a single measurement. Assuming that the velocity is constant during the echo time, and choosing rectangular gradient pulses of duration $T$ and absolute amplitude $G$, applied one immediately after the other, one can write the following set of equations for the phases $\phi_1$ and $\phi_2$:
%.........................................................
\begin{equation}\label{eq: phase vepc}
\begin{split}
\phi_1 &{} = \gamma \int\limits_{0}^{T} dt [B_0+\Delta B_0+\mathbf{G} \cdot (\mathbf r + \mathbf v t)] + \gamma \int\limits_{T}^{2T} dt [B_0+\Delta B_0-\mathbf{G} \cdot (\mathbf{r} + \mathbf v t)] =\\
&{} =2\gamma T[B_0+\Delta B_0] - \gamma Gv T^2\\[10pt]
\phi_2 &{} =2\gamma T[B_0+\Delta B_0] + \gamma G v T^2
\end{split}
\end{equation}
%.........................................................
The time integrals of the flow-encoding (motion-sensitizing) gradients in the phase expressions (Equation~\ref{eq: phase vepc}) are also known as the first gradient magnetic moments ($M_1$). The bipolar shape of each gradient pair cancels the position-dependent term (static spins), while subtracting the two acquisitions removes the initial phase (background), giving the phase difference:
%.........................................................
\begin{equation}\label{eq: phase difference vepc}
\Delta\phi=\phi_2-\phi_1=2\gamma v G T^2=\gamma v \Delta M_1
\end{equation}
%.........................................................
An important result of Equation~\ref{eq: phase difference vepc} is that the phase difference is directly proportional to the tissue velocity, with the difference in the first gradient moments as the proportionality factor. Therefore, the velocity can be calculated from the measured phase difference. To avoid ambiguity in the velocity calculations due to phase wrapping, the phase difference $\Delta\phi$ must lie within the range $\left[ -\pi, \pi \right]$, which sets the limit for the maximum and minimum velocity that can be measured. This condition is satisfied by the choice of the \textit{venc} parameter:
%.........................................................
\begin{equation}\label{eq: venc}
venc = \frac{\pi}{\gamma \Delta M_1}
\end{equation}
%.........................................................
The \textit{venc} value is determined by the amplitude and duration of the flow-encoding gradients and is selected to match the expected range of velocities. Noise in the phase images is inversely proportional to the signal-to-noise ratio (SNR) of the corresponding magnitude images~\cite{Pelc:1991vr}. Thus, the noise in the velocity images is given by:
%.........................................................
\begin{equation}\label{eq: SNR}
\sigma_v = \frac{2}{\pi} \frac{venc}{\mathrm{SNR}}
\end{equation}
%.........................................................
For optimal noise performance, the \textit{venc} value should always be selected as small as possible while still covering the expected velocity range.

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Pulse Sequence Implementation}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The majority of VEPC imaging sequences are based on fast RF-spoiled gradient echo sequences and use flow-compensated gradients. A simple technique with two bipolar gradients per set (Figure~\ref{fig: FlowComp}a) requires six data acquisitions to measure velocities in all three orthogonal directions. Implementation of flow-compensated gradients (Figure~\ref{fig: FlowComp}b) reduces the number of data acquisitions to four.
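Given \textit{venc}, the reconstruction of a velocity map from two phase images reduces to a subtraction and a single scaling (Equations~\ref{eq: phase difference vepc} and~\ref{eq: venc}). A minimal NumPy sketch is given below (the array contents are placeholders; the actual analysis pipeline in this work was implemented in MATLAB, as noted later in this chapter):

\begin{verbatim}
import numpy as np

venc = 10.0  # cm/s, expected velocity range

# Placeholder phase images (radians): flow-compensated
# reference and one flow-encoded acquisition.
phi_ref = np.random.uniform(-np.pi, np.pi, (256, 192))
phi_enc = np.random.uniform(-np.pi, np.pi, (256, 192))

# Phase difference, wrapped back into [-pi, pi].
dphi = np.angle(np.exp(1j * (phi_enc - phi_ref)))

# Velocity map in cm/s: dphi = +/-pi maps to +/-venc.
velocity = venc * dphi / np.pi
\end{verbatim}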
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.85\textwidth]{Figures/VEPC_FlowCompensations.pdf}
\caption[Flow encoding and flow compensating gradient schemes]{Flow encoding (a) and flow compensating (b) gradient schemes with phase plots of zero (red) and first (blue) gradient magnetic moments.}
\label{fig: FlowComp}
\end{figure}
%*********************************************************
The underlying idea of the flow-compensated gradient scheme is to first obtain a reference scan with zero first gradient moment. Velocity encoding is then performed in subsequent scans with bipolar pulses, and the flow-compensated reference scan is simply subtracted from all three velocity-encoded scans.
%-new paragraph-%

%-new paragraph-%
Another important aspect of VEPC imaging is gating. During one RF excitation only a limited number of \mbox{\textit{k-}space} lines can be collected, so both the data acquisition and the motion have to be repeated. Two gating strategies exist: retrospective (images are acquired continuously and later reshuffled according to the recorded subject output) and prospective (the scan is initiated by an external trigger programmed to respond to an increase in the subject's force). In my studies the prospective gating technique is utilized: each acquisition is initiated by an external trigger device that is part of the force measuring and guiding system. Figure~\ref{fig: VEPC} shows the VEPC sequence layout. It consists of four blocks: the scan is initiated by the trigger, reference data are acquired first, and then bipolar gradients for each of the three directions are applied.

%*********************************************************
\begin{sidewaysfigure}
\centering
\vspace{+0.2cm}
\centering
\includegraphics[scale =1]{Figures/VEPC.pdf}
\caption[Layout of the velocity encoded phase-contrast MR pulse sequence]{Layout of the velocity encoded phase-contrast MR pulse sequence.}
\label{fig: VEPC}
\end{sidewaysfigure}
%*********************************************************

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Experimental Setup}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The experimental setup for the dynamic imaging experiment is shown in Figure~\ref{fig: VEPCSetup}. The subject lay supine, feet first, with the dominant leg placed in a specially designed foot-pedal device. While collecting MR data during isometric muscle contraction, it is important to ensure consistency of motion. Therefore, the subject was provided with graphic feedback of the actual force generated by pressing against the strain gauge sensor. The force curve was plotted in real time, superposed on the desired force curve, to facilitate consistent contractions. The strain gauge sensor was embedded inside a carbon fiber-reinforced polymer sheet connected via an optical fiber cable to a Fabry-P\'erot optical strain measurement system (Luna Innovations, VA, USA) that converts the displacement to an analogue voltage. The signal was then digitized, low-pass filtered, and differentiated to produce the trigger for the scanner. All the signals were collected using a Data Acquisition (DAQ) device (National Instruments, TX, USA) connected to a computer.
%-new paragraph-%

%-new paragraph-%
The DAQ device was programmed in the LabVIEW system-design platform (National Instruments, TX, USA) to record the trigger and force signals at a $\SI{200}{\hertz}$ sampling rate~\cite{LabView}.
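As an illustration of this trigger logic, a force trace can be low-pass filtered and differentiated in a few lines. A minimal NumPy/SciPy sketch is given below (the synthetic force trace, cutoff frequency and threshold are hypothetical values, not the exact settings of the acquisition system):

\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                       # Hz, force sampling rate
t = np.arange(0, 3.0, 1.0 / fs)  # one ~3 s cycle

# Synthetic force trace: smooth ramp plus noise.
force = np.maximum(0.0, np.sin(np.pi * t / 3.0))
force += 0.01 * np.random.randn(t.size)

# Low-pass filter (hypothetical 10 Hz cutoff),
# then differentiate.
b, a = butter(4, 10.0 / (fs / 2.0))
dforce = np.gradient(filtfilt(b, a, force), 1.0 / fs)

# Trigger fires when the force rises faster than
# a (hypothetical) threshold.
trigger_idx = int(np.argmax(dforce > 0.5))
print("trigger at t = %.3f s" % t[trigger_idx])
\end{verbatim}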
The force signal and the desired force curve were projected on the screen along with a target line set at 35\% of the maximum voluntary isometric contraction (MVIC). MVIC was determined for each subject as the best of three trials recorded prior to MR imaging. During the subsequent execution of the $\sim70$ contraction-relaxation cycles, the force signal was recorded and converted to units of force $\left[ \SI{}{\newton}\right]$ based on a system calibration performed using disc weights.

%*********************************************************
\begin{figure}[!htb]
\centering
\vspace{+0.2cm}
\centering
\includegraphics[width=\textwidth]{Figures/Setup.pdf}
\caption[Dynamic MR imaging experiment setup]{Dynamic MR imaging experiment setup.}
\label{fig: VEPCSetup}
\end{figure}
%*********************************************************

%=========================================================
\section{Changes in Strain Rate Indices due to Loss of Muscle Force Following Disuse Atrophy}
\label{sec: SR_ULLS}
%=========================================================
It is well established that chronic unloading of muscle results in rapid skeletal muscle atrophy and is accompanied by a significant loss in the capacity to generate force during contraction~\cite{RNS1}. Atrophy (reduction in myofiber and whole muscle size) is the result of a decrease in protein synthesis and an increase in protein degradation that ultimately leads to a loss of contractile proteins, organelles, nuclei, and cytoplasm~\cite{RNS2, RNS3}. Narici and Cerretelli used plaster-cast-induced immobilization in human subjects to create a model of unilateral disuse atrophy~\cite{RNS4} and demonstrated that, in disuse atrophy, both fiber length and pennation angle decrease, suggesting a loss of sarcomeres in series and in parallel, respectively. Muscle remodeling (measured by changes in fiber length, pennation angle and muscle thickness) with inactivity is an extremely fast process, occurring within even $7-8$ days of inactivity~\cite{RNS5, RNS6, RNS7}. De Boer~et~al. employed the Unilateral Limb Suspension (ULLS) model to unload the human knee extensors and reported, after $14$ days of suspension, a decrease in Focal Adhesion Kinase content ($-20\%$) and activity ($-30\%$), associated with a $50\%$ fall in muscle protein synthesis and a $5\%$ decrease in quadriceps muscle anatomical cross-sectional area (ACSA)~\cite{RNS8}. A particularly noteworthy observation of the above study, as in most unloading studies, is that the decrease in muscle force exceeded that of muscle size, with the loss of quadriceps force after $2$ weeks of ULLS being greater than $\sim 3$-fold that of muscle ACSA, even after accounting for changes in muscle activation~\cite{RNS8}. Although some of this phenomenon may be partly explained by a decrease in single fiber specific tension (force per unit ACSA of single fibres)~\cite{RNS9, RNS10}, recent evidence suggests that changes in the extracellular matrix (ECM) can substantially contribute to this disproportionate loss of force~\cite{RNIZhang}. This is because force transmission occurs both longitudinally along the muscle fiber and laterally through the adjacent ECM and muscle fibers to the epimysium of the skeletal muscle~\cite{RNS12, RNS13}, and impairment of lateral force transmission can account for up to $50\%$ of force loss in dystrophic mice and very old rats~\cite{RNIRamaswamy}.
Several reports based on physiologically based computational models have identified shear strain in the ECM as the mechanism by which force is transmitted laterally~\cite{RNS15}. Measurement of shear strain may thus allow an indirect assessment of lateral transmission of force (LTF).
%-new paragraph-%

%-new paragraph-%
Velocity encoded magnetic resonance (MR) imaging is a convenient method to map tissue motion in all three directions, and the Strain Rate (SR) can be directly extracted from the acquired dynamic MR images. Strain is a measure of tissue deformation with respect to a reference state and requires tissue tracking. Strain Rate, on the other hand, i.e. the instantaneous change of strain with time, describes the rate of regional deformation and does not require three-dimensional tracking or a reference state. SR tensor mapping provides important information on both the magnitude and the orientation of the rate of deformation. The strain rate is conveniently represented as a tensor (the current analysis is performed on a 2D image, resulting in a $2 \times 2$ tensor), where the terms along the diagonal are the normal strain rate magnitudes in two orthogonal directions and the off-diagonal terms are the shear strain rate terms. The normal strain rate measures the amount of deformation parallel to a given line, while the shear strain rate measures the amount of deformation perpendicular to a given line. A positive SR indicates a local expansion while a negative SR indicates a local contraction. A previous study used dynamic MR imaging to investigate age-related changes in muscle mechanical properties and contractility by mapping strain rate tensors derived from velocity encoded images~\cite{RNS16}. The main findings of that aging study indicated that the mechanical properties of the extracellular matrix play a role in determining the change in strain rate and its spatial patterns in the aging muscle. The ULLS model of atrophy and muscle force loss is another perturbation of the normal muscle that has similarities to aging muscle but important differences as well. While the aging model represents a chronic state, the ULLS model represents a transient state in which muscle function, ceteris paribus, will be determined by the timing of the loss of contractile tissue together with the remodeling of passive structures (connective tissue) within the muscle, which undergo important qualitative and quantitative changes. There are known structural and material changes with ULLS-induced acute atrophy in muscle~\cite{RNS8, RNS17, RNS18}; thus indices derived from the SR tensor could potentially be used as surrogate imaging biomarkers of these changes. The focus of this study is to explore ($i$) the changes induced by disuse atrophy in parameters derived from the SR tensor, including normal and shear strain rates, and ($ii$) the relationship of the changes in SR parameters to the force loss from disuse atrophy. The specific hypothesis was that SR parameters potentially affected by ECM remodeling (SR-fiber angle, shear strain) would be significantly related to force loss in disuse atrophy. The hypothesis was tested on the medial gastrocnemius (MG) during sub-maximal isometric contraction.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Methods}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
%---------------------------------------------------------
\subsubsection{Ethical approval-subjects}
%---------------------------------------------------------
Seven subjects (2 female, 5 male) were included in this study after written informed consent had been obtained (age $29.1 \pm 5.7$ years, body mass $75.4 \pm \SI{22.7}{\kilogram}$, height $168.1~\pm~\SI{7.4}{\centi\meter}$). The criterion for inclusion was that subjects should be moderately active. Subjects participating in competitive sports, as well as those with any surgical procedures performed on the lower leg, were excluded. The study was carried out under the approval of the Medical Research Ethics Board of UC~San~Diego and conformed to all standards for the use of human subjects in research as outlined in the Declaration of Helsinki.

%---------------------------------------------------------
\subsubsection{Study design}
%---------------------------------------------------------
The effect of chronic unloading and rehabilitation on the force production capability and strain distribution patterns of the MG muscle was assessed by comparing the baseline (pre) to immediately after 4 weeks of limb suspension (post). During the suspension period, subject compliance with the protocol was monitored at 2 weeks to check for loss in force production as well as muscle atrophy (MRI morphological scan). In addition, a wireless activity tracker was integrated into the crutches for independent confirmation of compliance; the subject was not informed of the tracker to ensure that it was not removed or tampered with to simulate crutch usage. After the 4-week suspension, subjects were required to attend physical rehabilitation sessions. A final imaging study at the end of the rehabilitation period (4 weeks) was performed to confirm that the muscle had recovered to baseline status.

%`````````````````````````````````````````````````````````
\textit{Unilateral Limb Suspension (ULLS):}
%`````````````````````````````````````````````````````````
Muscle atrophy was induced in the non-dominant leg with 4 weeks of chronic unloading using the ULLS model~\cite{RNS19}, which has been used extensively in many earlier studies. Subjects identified the dominant leg as the one they preferentially used to regain balance from a jostle. The non-dominant leg was the left leg for all subjects in this study. The ULLS protocol allowed the subjects a reasonable amount of freedom to carry out their daily activities as well as driving, since the dominant leg (right in this study) was not unloaded. A crutch was used to prevent the foot (of the left leg) from touching the ground. The right foot was raised with a $\SI{5}{\centi\meter}$ sole on the shoe to further minimize accidental loading of the foot.

%`````````````````````````````````````````````````````````
\textit{MR imaging:}
%`````````````````````````````````````````````````````````
MR imaging was performed on a $\SI{1.5}{\tesla}$ Signa HDx MR scanner (GE Medical Systems, WI, USA), with the subject lying supine, feet first, with the left leg (i.e., the non-dominant leg to be imaged) placed in a cast. An optical fiber pressure transducer was glued to the sole of the cast, which was firmly anchored to the radio frequency coil by means of Velcro straps.
Images were acquired during sub-maximal, isometric contraction at $35\%$ of the individual maximum voluntary isometric contraction (MVIC). Image acquisition was completed in $\sim 70$ cycles. The MR images used in this report include high-resolution water-saturated oblique sagittal fast spin echo images of the MG (echo time (TE):~$\SI{12.9}{\milli\second}$, repetition time (TR):~$\SI{92.5}{\milli\second}$, signal averages (NEX):~$4$, flip angle (FA):~$\SI{20}{\degree}$, slice thickness/gap:~$3/\SI{0}{\milli\meter}$, field of view (FOV):~$30 \times 22.5 \; \SI{}{\centi\meter^2}$, matrix:~$512 \times 384$). This sequence provides a high tissue contrast for the fascicles against the background of suppressed muscle signal and was used to locate fascicle end points. The orientation that best depicted the fascicles was selected for the Velocity Encoded Phase-Contrast (VEPC) scans. Oblique sagittal slices were obtained with the following acquisition parameters: TE:~$\SI{7.7}{\milli\second}$, TR:~$\SI{16.4}{\milli\second}$, NEX:~$2$, FA:~$\SI{20}{\degree}$, slice thickness:~$\SI{5}{\milli\meter}$, gap:~0, FOV:~$30 \times 22.5 \; \SI{}{\centi\meter^2}$ (partial phase FOV:~$0.75$), $256 \times 192$ acquisition matrix (lower resolution in the phase direction), 4 views per segment (VPS), $5-7$ slices, $22$ phases, $\SI{10}{\centi\meter/\second}$ three-directional velocity encoding. This resulted in 72 repetitions [(192 (phase encode) $\times\, 2$ (averages) $\times\, 0.75$ (phase FOV))/4 (VPS) $= 72$] for each slice acquisition. The temporal resolution is calculated as: $16.4(\mathrm{TR})\cdot 4(\mathrm{VPS})\cdot 4$(velocity encoding directions)$/2$(view sharing)$= \SI{131}{\milli\second}$. Twenty-two phases were collected within each isometric contraction-relaxation cycle~of $\approx\SI{3}{\second}$~($22\cdot \SI{131}{\milli\second} = \SI{2.88}{\second}$).

%`````````````````````````````````````````````````````````
\textit{Force measurements:}
%`````````````````````````````````````````````````````````
The MVIC of the plantarflexor muscles was determined for each subject prior to MR imaging. For this purpose, the ankle was fixed in a neutral position ($\SI{90}{\degree}$ angle between the axis of the foot and the shank). The best of three trials was used to set the target level of force for subsequent image acquisition (35\% of MVIC). During the subsequent execution of the $\sim70$ contraction-relaxation cycles, torques were recorded at a sampling frequency of $\SI{200}{\hertz}$ and then averaged over repeated cycles to produce curves of mean force.

%`````````````````````````````````````````````````````````
\textit{Morphological quantification and normalized force of the Triceps Surae (TS) muscles:}
%````````````````````````````````````````````````````````
TS force per unit anatomical cross-sectional area (F/ACSA) was determined as the ratio of the plantarflexor MVIC to the TS ACSA, represented by the sum of the maximum ACSA of the MG, lateral gastrocnemius (LG) and soleus (SOL). Maximum ACSA was defined as the maximum value of ACSA selected from all the axial images for each muscle. In order to estimate the maximum force acting along the TS tendon, the force recorded by the force transducer was divided by the Achilles tendon moment arm corresponding to a $\SI{90}{\degree}$ angle between the axis of the foot and the shank, as detailed earlier~\cite{RNS20}.
In brief, a sagittal MR image of the lower leg and foot was used to identify the joint (ankle) center of rotation as well as the Achilles tendon line of action (the latter marked as a straight line along the center of the tendon). The perpendicular distance of the joint center to the line of action was measured as the Achilles tendon moment arm~\cite{RNS21}. The Physiological Cross Sectional Area (PCSA) of the MG was computed as the ratio of the volume of the MG to the average fiber length. Fiber lengths were computed by identifying fascicles in the MG seen in the fast spin echo images in the mid-MG region.

%---------------------------------------------------------
\subsubsection{Image analysis}
%---------------------------------------------------------
%`````````````````````````````````````````````````````````
\textit{Strain Rate (SR) calculation:}
%````````````````````````````````````````````````````````
Phase images were corrected for phase shading artifacts and denoised with an anisotropic diffusion filter. The SR tensor was calculated in the following steps: ($i$) the tensor $\mathbf{L}$ was computed from the spatial derivatives of the velocity images, where $u$ and $v$ represent the $x$ and $y$ components of the velocity vector:
%.........................................................
\begin{equation}\label{eq: StrainRate2d}
\mathbf{L} =
\left [
\begin{matrix}
\dfrac{\partial{u}}{\partial{x}} & \dfrac{\partial{v}}{\partial{x}} \\[8pt]
\dfrac{\partial{u}}{\partial{y}} & \dfrac{\partial{v}}{\partial{y}} \\
\end{matrix}
\right]
\end{equation}
%.........................................................
($ii$) The symmetric part of the SR tensor was then calculated as $0.5(\mathbf{L}+\mathbf{L}^\top)$. The SR tensor was then diagonalized to obtain the eigenvalues and eigenvectors. The eigenvalues were sorted into positive and negative values at each voxel $( \text{reported in } \SI{}{\per\milli\second})$ and, with their corresponding eigenvectors, stored as separate images. Deformation of tissues within the muscle can be represented along 3 principal SR directions: $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$ and $SR_{\mathrm{out-plane}}$. $SR_{\mathrm{fiber}}$ denotes deformation approximately along the muscle fiber long axis, which is negative (NEV) during muscle fiber shortening (phases $1-11$) and positive (PEV) during relaxation (phases $12-22$). It is important to note that it is the presence of shear forces that causes the axis of $SR_{\mathrm{fiber}}$ to be rotated away from the muscle fiber long axis; this angle is referred to as the SR-fiber angle. $SR_{\mathrm{in-plane}}$, by contrast, is defined as deformation in a direction approximately orthogonal to the fiber and lying in the imaging plane, and is characterized by eigenvalues with a sign opposite to that of $SR_{\mathrm{fiber}}$. $SR_{\mathrm{out-plane}}$, the SR in the fiber cross-section perpendicular to the imaging plane (i.e., the plane of the muscle fibers), was derived from the SR measured in the other two directions. It is computed based on the assumption of volume incompressibility of muscle tissue: a local longitudinal contraction along the muscle will be accompanied by a local radial expansion in the plane perpendicular to the fiber. It is fairly well accepted that muscle, because of its high fluid content, is incompressible~\cite{RNS22, RNS23}. For a 3D volume that is incompressible, the sum of the three strain rates should be zero.
Here, only the 2D tensor can be calculated, so the sum of the two measured eigenvalues was used to infer the magnitude of the third eigenvalue as:
%.........................................................
\begin{equation}\label{eq: SR through plane}
SR_{\mathrm{out-plane}} = - \left( SR_{\mathrm{fiber}} + SR_{\mathrm{in-plane}} \right)
\end{equation}
%.........................................................
The deformation in the fiber cross-section can range from symmetric, through moderately and severely asymmetric with greater deformation in-plane, to moderately and severely asymmetric with greater deformation out-of-plane; a schematic is shown in~\cite{RNS16}.

%`````````````````````````````````````````````````````````
\textit{Muscle fiber tracking:}
%````````````````````````````````````````````````````````
In the absence of shear forces, the principal SR directions would be strictly determined by the orientation of the muscle fibers (i.e. the principal axis of contraction would be parallel to the muscle fiber orientation). However, $SR_\mathrm{fiber}$ is typically rotated away from the fiber long axis; that is, in addition to normal deformation, there is also shear deformation, requiring the principal strains to be related to the coordinate system defined by the muscle fibers. To determine this coordinate system, the origin and insertion points of muscle fibers were identified at seven locations along the muscle length on the sagittal-plane fast spin-echo images and transferred to the first frame of the dynamic images. Briefly, the origin and insertion points of the muscle were identified on water-suppressed sagittal images in which the fascicles appeared with high contrast against a background of dark muscle tissue~\cite{RNS24}. The oblique sagittal images were acquired so that the muscle fibers (and fascicles) lay in the imaging plane. A muscle physiologist identified the fascicle end points on the deep and superficial aponeuroses. A line connecting these end points provided a good approximation to a muscle fiber. The initial identification of the fascicles was corrected on the magnitude images of the VEPC acquisition, since small mismatches in image geometry were found between the fast spin-echo and the VEPC images. The VEPC cines were used to track the coordinates of these points across the contraction-relaxation cycles. The muscle fiber orientation with respect to the positive $x$-axis was calculated at each phase of the dynamic cycle. The seven locations were grouped into three groups (2 distal, 3 middle, 2 proximal) and averaged to represent the typical MG behavior in these regions. The angle subtended by $SR_\mathrm{fiber}$ and the muscle fiber direction was determined and is referred to, in the rest of the chapter, as the SR-fiber angle.

%`````````````````````````````````````````````````````````
\textit{Strain in the fiber basis:}
%````````````````````````````````````````````````````````
To quantify shear strains, the SR in the principal axis basis needs to be projected onto the muscle fiber, which is accomplished by rotating the SR tensor into the fiber basis. Consider the 2D configuration of the SR eigenvalues and the fiber shown in the schematic (Figure~\ref{fig: SR1_1}), where $f$ is the fiber direction, $c$ is the fiber in-plane cross-section direction, NEV and PEV are the negative and positive principal strains respectively (represented by green arrows) and $\theta$ is the SR-fiber angle.
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[scale=0.63]{Figures/SR2dSchematic.pdf}
\caption[Schematic of strain rate in principal axes and shear strain in muscle during isometric contraction.]{The relative orientation of the 2D principal axes of strain rate ($SR_\mathrm{fiber}$, $SR_\mathrm{in-plane}$) compared to that of the muscle fiber and cross-section ($SR_{ff}$, $SR_{cc}$) in isometric contraction (a). Shear strain and its origin are schematically shown in (b).}
\label{fig: SR1_1}
\end{figure}
%*********************************************************
In order to obtain the strain tensor in the fiber basis ($SR_{\mathrm{fiber-basis(fb)}}$), the SR tensor in the principal axis frame is rotated by $\theta$ to obtain:
%.........................................................
\begin{equation}\label{eq: SR fiber basis}
SR_{\mathrm{fb}} =
\left [
\begin{matrix}
SR_{ff} & SR_{fc}\\
SR_{cf} & SR_{cc}\\
\end{matrix}
\right]
\end{equation}
%.........................................................
where $SR_{ff}$ is the normal strain along the fiber, $SR_{cc}$ is the normal strain in the fiber cross-section, and $SR_{fc}$ $(=SR_{cf})$ is the shear strain. In the fiber basis, the shear strain $SR_{fc}$ is given by:
%.........................................................
\begin{equation}\label{eq: SR shear}
SR_{fc}=\dfrac{\mathrm{PEV}-\mathrm{NEV}}{2} \sin{2\theta}
\end{equation}
%.........................................................
Figure~\ref{fig: SR1_1}b illustrates the origin of the shear strain in the endomysium: as the muscle fiber contracts, the endomysial end attached to the fiber deforms while the remote end remains fixed. This is consistent with a computational model that explored force transmission pathways~\cite{RNS15}. The adopted convention is that the shear strain rate is positive when the shear angle is acute (the shear angle is shown in Figure~\ref{fig: SR1_1}b). In addition to the fiber basis, the SR tensor in the principal axis frame is rotated by $\pi/4$ to obtain the maximum shear strain $(SR_{fc\_\,\mathrm{max}})$.

%`````````````````````````````````````````````````````````
\textit{Region of interest (ROI) measurements:}
%````````````````````````````````````````````````````````
For each subject, the entire length of the MG was divided into three regions based on the distance from the most distal point of the muscle: bottom $25\%$ (distal), middle $50\%$ (middle), top $25\%$ (proximal). Regional analysis of the scalar indices derived from the SR tensor was performed on regions of interest (ROIs) selected on the magnitude images in the three regions. Five to seven oblique-sagittal slices were acquired in each subject to cover the entire width of the MG; the average over all slices for each region is reported in the rest of the section (an average over slices was used to increase the statistical power). For each slice, the size of the ROI was set at $7 \times 7$ voxels for the proximal and middle regions and at $5 \times 10$ voxels for the distal region to accommodate the muscle taper. The ROI size was determined from empirical examination of the biggest ROI that could be placed within the region boundaries while avoiding the low intensity fat layers that run along the fascicles.
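Bringing together the SR calculation steps described above (Equations~\ref{eq: StrainRate2d}--\ref{eq: SR shear}), the per-voxel computation reduces to a symmetrization, an eigendecomposition and a rotation. A minimal NumPy sketch is given below (the velocity arrays and voxel spacing are placeholders; the actual analysis framework was implemented in MATLAB, as noted below):

\begin{verbatim}
import numpy as np

# Placeholder in-plane velocity components (cm/s)
# on the image grid, and a voxel spacing in cm.
u = np.random.randn(64, 64)
v = np.random.randn(64, 64)
dx = dy = 0.1

# Spatial derivatives for the tensor L.
du_dy, du_dx = np.gradient(u, dy, dx)
dv_dy, dv_dx = np.gradient(v, dy, dx)

def strain_rate_at(i, j, theta):
    L = np.array([[du_dx[i, j], dv_dx[i, j]],
                  [du_dy[i, j], dv_dy[i, j]]])
    SR = 0.5 * (L + L.T)           # symmetric part
    evals, _ = np.linalg.eigh(SR)  # ascending order
    nev, pev = evals[0], evals[1]
    out_plane = -(nev + pev)       # incompressibility
    shear = 0.5 * (pev - nev) * np.sin(2.0 * theta)
    return nev, pev, out_plane, shear

print(strain_rate_at(32, 32, np.deg2rad(30.0)))
\end{verbatim}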
In order to ensure that the same anatomic region was reported, each pixel in the ROI was tracked (with respect to the first frame) to locate the new pixel positions in successive frames, creating a frame-based ROI. Tracking was performed in 2D using the in-plane velocity information. The position in a subsequent frame was calculated based on the velocity information in the current frame. This allowed automated placement of an ROI in each frame that moved synchronously with the underlying anatomy~\cite{RNS16}. ROIs changed both location and shape (5 to $20\%$ in successive frames) but the number of points was kept constant to ensure that the average was based on the same number of points per frame. It is important to note that, due to force losses consequent to the intervention, MR images were acquired at the same relative but lower absolute force levels post-suspension. To avoid bias related to different absolute force levels, average ROI values of the SR indices were extracted at the maximum force level the subjects generated in the post-suspension state. The force recording was at a much higher frequency than the MRI temporal resolution; the force corresponding to each MR temporal frame was extracted with the assumption that the 600 force points and 22 MR frames were uniformly spaced in the contraction cycle of 3 seconds. The quantitative analysis framework was developed in MATLAB (The MathWorks Inc., MA, USA); all the codes are publicly available~\cite{2DSR}.

%`````````````````````````````````````````````````````````
\textit{Statistical analyses:}
%````````````````````````````````````````````````````````
The outcome variables of the analysis are the eigenvalues of the SR tensor $(SR_{\mathrm{fiber}},\, SR_{\mathrm{in-plane}},\, SR_{\mathrm{out-plane}})$, the angle subtended by the principal axis of contraction and the fiber direction (SR-fiber angle), the 2D SR components in the fiber basis $(SR_{ff},\, SR_{cc},\, SR_{fc})$ and the maximum shear strain, $SR_{fc\_\,\mathrm{max}}$. Prior to the analysis of significance, normality of the data was tested using the Shapiro-Wilk test and by visual inspection of Q-Q plots. The quantile-quantile (Q-Q) plot is an exploratory graphical device used to check the validity of a distributional assumption for a data set; in the current analysis it was used to check for normality (Gaussian distribution). If the data can be represented by a normal distribution, then it is valid to employ parametric analysis such as ANOVA. Only moderate deviations from normality were found in several data groups, thus the differences between the pre- and post-ULLS groups (termed time) and muscle regions, as well as potential interaction effects (time*region), were assessed using repeated measures two-way analysis of variance (ANOVA)~\cite{RNS25, RNS26, RNS27}. In case of significant ANOVA results for the factor ``region", post hoc paired-sample t-tests were performed using \v{S}idak's correction of \textit{p-}values. Data are reported as mean $\pm$ standard deviation (SD). For all tests, the level of significance was set at $p = 0.05$. The statistical analyses were carried out using SPSS (IBM Corporation, Chicago, IL).
%-new paragraph-%

%-new paragraph-%
Univariate and stepwise multivariable linear regression was performed to identify predictors (strain rate and morphological parameters of the Triceps Surae) of force and of force change with disuse atrophy. In the multivariable analysis, only independent variables were retained.
For any two dependent variables, the one with the higher correlation (beta value) was retained in the multivariable analysis. Parameters excluded from the multivariable analysis were the SR components in the fiber basis, which correlated significantly with the SR in the principal axis basis and with $SR_{fc\_\,\mathrm{max}}$, as well as ACSA and PCSA, which correlated significantly with the volume of the muscles.

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Results}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The average changes (over all 7 subjects) in the volume of the MG, LG, and SOL muscles post-suspension are $-9.6 \pm 4.5 \%$, $-11.1 \pm 7.4\%$ and $-7.4 \pm 5.9\%$, respectively, while the average change in MVIC is $-32.6 \pm 24.7\%$; both the force and the morphological changes (volume and ACSA) are significant (Table~\ref{tab: SR1_1}).

%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Force and morphological parameters for individual muscles of the plantarflexors pre- and post-suspension]{Force and morphological parameters for individual muscles of the plantarflexors pre- and post-suspension.}
\label{tab: SR1_1}
\begin{center}
\begin{threeparttable}
\begin{tabular}{@{}llrrc@{}}
\toprule[1pt]\midrule[0.3pt]
 & & \multicolumn{1}{c}{Pre-ULLS} & \multicolumn{1}{c}{Post-ULLS} & $p$ \\
\midrule
MVIC\tnote{$\dagger$} & $\left[\SI{}{\newton}\right]$ & 339.4 $\pm$ 91.5 & 225.6 $\pm$ 113.1 & 0.013 \\[6pt]
$V_{\mathrm{MG}}$\tnote{$\dagger$} & $\left[\SI{}{\centi\meter^3}\right]$ & 190.1 $\pm$ 69.2 & 172.3 $\pm$ 64.7 & 0.001 \\[6pt]
$V_{\mathrm{LG}}$\tnote{$\dagger$} & $\left[\SI{}{\centi\meter^3}\right]$ & 103.5 $\pm$ 45.3 & 91.4 $\pm$ 40.3 & 0.008 \\[6pt]
$V_{\mathrm{SOL}}$\tnote{$\dagger$} & $\left[\SI{}{\centi\meter^3}\right]$ & 448.2 $\pm$ 171.4 & 420.4 $\pm$ 177.9 & 0.006 \\[6pt]
$\mathrm{ACSA}_{\mathrm{MG}}$\tnote{$\dagger$} & $\left[\SI{}{\centi\meter^2}\right]$ & 15.4 $\pm$ 6.1 & 14.2 $\pm$ 6.1 & 0.005 \\[6pt]
$\mathrm{ACSA}_{\mathrm{LG}}$\tnote{$\dagger$} & $\left[\SI{}{\centi\meter^2}\right]$ & 9.4 $\pm$ 4.9 & 8.4 $\pm$ 4.3 & 0.020 \\[6pt]
$\mathrm{ACSA}_{\mathrm{SOL}}$ & $\left[\SI{}{\centi\meter^2}\right]$ & 25.4 $\pm$ 9.7 & 24.8 $\pm$ 10.6 & 0.291 \\[4pt]
$\mathrm{PCSA}_{\mathrm{MG}}$ & $\left[\SI{}{\centi\meter^2}\right]$ & 74.4 $\pm$ 21.5 & 57.1 $\pm$ 15.3 & 0.327 \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\begin{tablenotes}[flushleft]\footnotesize
\item[$\dagger$] significant difference between pre- and post-suspension
\end{tablenotes}
\end{threeparttable}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
As reported in earlier studies, the force (MVIC) loss is approximately 3-fold greater than the average volume loss and thus cannot be completely explained by muscle atrophy alone (change in muscle volume). Further, force normalized to cross-sectional area showed a reduction of $28.7 \pm 24.6\%$ post-suspension, suggesting that changes in area (atrophy) cannot explain all of the loss in MVIC for these muscles.
%-new paragraph-%

%-new paragraph-%
The main change visualized in the eigenvalue maps is the decrease in $SR_{\mathrm{in-plane}}$ and the increase in $SR_{\mathrm{out-plane}}$ on unloading (Figure~\ref{fig: SR1_2}).
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/ULLSColormaps.pdf}
\caption[Strain rate tensor eigenvalue colormaps pre- and post-suspension]{Strain rate tensor eigenvalue colormaps corresponding to one subject at baseline (pre-suspension) and immediately after the 4 weeks of unloading (post-suspension) during isometric contraction at the peak of the contraction phase. The MG muscle is outlined in white.}
\label{fig: SR1_2}
\end{figure}
%*********************************************************
The temporal variation of the regional SR-fiber angles and the $SR_\mathrm{out-plane}$ eigenvalue with isometric contraction is shown in Figure~\ref{fig: SR1_3} for pre- and post-suspension (to maintain continuity in the plot, the angle plotted is the angle NEV makes with the muscle fiber), while the temporal variation in NEV and PEV is shown in Figure~\ref{fig: SR1_6}.

%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/ULLS_fiberAngle.pdf}
\caption[The temporal variation of the SR-fiber angle and $SR_\mathrm{out-plane}$ with isometric contraction for the pre- and post-suspension cohorts]{The temporal variation of the SR-fiber angle and $SR_\mathrm{out-plane}$ with isometric contraction for the pre- and post-suspension cohorts for three regions (proximal, middle, and distal). The values are averaged over all subjects. The abrupt change in the SR-fiber angle occurs when the cycle switches to relaxation at frame 11.}
\label{fig: SR1_3}
\end{figure}
%*********************************************************

%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.95\textwidth]{Figures/ULLS_SRfiber.pdf}
\caption[The temporal variation of the SR eigenvalues and normal strain $SR_{cc}$ with isometric contraction for the pre- and post-suspension cohorts for three regions]{The temporal variation of the SR eigenvalues and normal strain $SR_{cc}$ with isometric contraction for the pre- and post-suspension cohorts for three regions (proximal, middle, and distal). The values shown in this plot are the average over all subjects pre- and post-suspension.}
\label{fig: SR1_6}
\end{figure}
%*********************************************************
The plots confirm the SR eigenvalue maps (Figure~\ref{fig: SR1_2}) in that the changes post-suspension are seen primarily in $SR_{\mathrm{in-plane}}$ (in Figure~\ref{fig: SR1_6}, compare the second peak in the NEV plot and the first peak in the PEV plot pre- and post-suspension; these peaks correspond to $SR_{\mathrm{in-plane}}$). Further, there is an increase in the SR-fiber angle post-suspension at the peak of the contraction phase. This increase in the SR-fiber angle post-suspension is shown in Figure~\ref{fig: SR1_5} for one subject in the zoomed portion of the MG, where the fibers (fascicles manually tracked as solid black lines) are superposed on the SR streamlines.
%*********************************************************
%\begin{sidewaysfigure}
\begin{figure}[htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/ULLS_Streamlines.pdf}
%\captionsetup{width=8in}
\caption[Streamlines of strain rate tensor negative eigenvectors for one slice of the same subject pre- and post-suspension at the peak of contraction phase]{Streamlines of eigenvectors corresponding to negative eigenvalues (NEV = $SR_\mathrm{fiber}$ during contraction) overlaid on fiber directions (in black) for one slice of the same subject pre- and post-suspension at the peak of the contraction phase.}
\label{fig: SR1_5}
\end{figure}
%\end{sidewaysfigure}
%*********************************************************
Figures~\ref{fig: SR1_6} and~\ref{fig: SR1_4} are plots of the temporal variation of the strain rates in the fiber basis. As in the principal axes basis, the largest changes are seen in the deformation rate in the fiber cross-section (plot of $SR_{cc}$ in Figure~\ref{fig: SR1_4}).
%-new paragraph-%

%-new paragraph-%
The values of the SR indices in the principal axes and in the fiber basis were extracted at the force value corresponding to the post-suspension value of each subject (i.e., each subject was compared between the pre- and post-suspension states at the (lower) post-suspension force level) (Table~\ref{tab: SR1_2}).
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/ULLS_SRff.pdf}
\caption[The temporal variation of the strain rate indices in the muscle fiber basis with isometric contractions for the pre- and post-suspension cohorts in three regions]{The temporal variation of the strain rate indices in the muscle fiber basis with isometric contractions for the pre- and post-suspension cohorts in three regions (proximal, middle, and distal). The values shown in this plot are the average over all subjects pre- and post-suspension.}
\label{fig: SR1_4}
\end{figure}
%*********************************************************
$SR_{\mathrm{in-plane}}$ in the pre-ULLS cohort was significantly larger than in the post-ULLS cohort ($F(1, 6) = 17.734$, $p = 0.006$). A significant difference in $SR_{\mathrm{in-plane}}$ was also found between muscle regions ($F(2,12)=12.090$, $p = 0.001$). Follow-up post hoc tests revealed that $SR_{\mathrm{in-plane}}$ was larger in the distal compared to the proximal ($p = 0.029$) and middle regions ($p = 0.034$). The SR-fiber angle was significantly larger post-ULLS ($F(1,6) = 9.435$, $p = 0.022$). Significant time (pre, post)*region (proximal, middle, distal) interaction effects were found for $SR_{\mathrm{out-plane}}$ ($F(2, 12) = 8.484$, $p=0.005$). For the components of strain rate in the fiber basis, a trend toward significance was observed between pre- and post-suspension in $SR_{cc}$ ($F(1,6) = 4.536$, $p=0.077$), which reflects the changes seen in $SR_{\mathrm{in-plane}}$ in the principal axes basis. Though the normal and shear strain rates (in the fiber basis or the maximum shear basis) decreased in the post-suspension subjects (Table~\ref{tab: SR1_2}), no significant differences were detected. Significant regional differences were observed in the shear strain rate ($SR_{fc}$) ($F(2,12) = 19.924$, $p = 0.004$) with significantly larger shear strain rates in the distal ($p = 0.013$) and middle ($p = 0.009$) as compared to the proximal muscle region.
Similar regional differences were seen in $SR_{fc\_\,\mathrm{max}}$ ($F(2,12) = 10.537$, $p = 0.012$) with significantly larger shear strain rates in the distal ($p = 0.041$) and middle ($p = 0.023$) compared to the proximal region. No significant interaction effects of time (pre, post)*region (proximal, middle, distal) were found. The results of the univariate~/~multivariable regression analysis for force and for force changes are summarized in Tables~\ref{tab: SR1_3} and~\ref{tab: SR1_4}. For the univariate analysis, the absolute value of $\beta$ refers to the correlation coefficient for a given predictor, and a significant \textit{p}-value ($p < 0.05$) associated with a predictor indicates that the null hypothesis that beta is zero for that variable is rejected.
%=========================================================
\begin{landscape}
\centering
\begin{table}[!h]
\vspace{+0.2cm}
\caption[Strain rate indices for pre- and post-suspension computed at the same force level of the contraction phase]{Strain rate tensor components in the principal axis basis, in the muscle fiber basis, and in the maximum shear strain basis for pre- and post-suspension computed at the same force level of the contraction phase.}
\label{tab: SR1_2}
\begin{center}
\begin{threeparttable}
\begin{tabular}{@{}lllrrr@{}}
\toprule[1pt]\midrule[0.3pt]
\multicolumn{2}{c}{\multirow{2}{*}{SR indices}} & \multirow{2}{*}{ULLS} & \multicolumn{3}{c}{region} \\
\cmidrule(lr){4-6}
\multicolumn{2}{c}{} & & \multicolumn{1}{c}{proximal} & \multicolumn{1}{c}{middle} & \multicolumn{1}{c}{distal} \\
\cmidrule(){1-6}
\multirow{2}{*}{$SR_{\mathrm{fiber}}$} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $-448.48 \pm 219.93$ & $-478.67 \pm 228.88$ & $-496.81 \pm 212.96$ \\
& & Post- & $-218.70 \pm 138.91$ & $-295.21 \pm 223.73$ & $-423.47 \pm 362.89$ \\ [6pt]
\multirow{2}{*}{$SR_{\mathrm{in-plane}}$\tnote{1,2,3}} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $259.11 \pm 79.04$ & $377.76 \pm 114.69$ & $492.04 \pm 249.22$ \\
& & Post- & $120.36 \pm 77.36$ & $161.17 \pm 88.21$ & $177.66 \pm 63.40$ \\ [6pt]
\multirow{2}{*}{$SR_{\mathrm{out-plane}}$\tnote{4}} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $189.37 \pm 201.99$ & $100.91 \pm 199.69$ & $4.77 \pm 205.97$ \\
& & Post- & $98.34 \pm 180.06$ & $134.03 \pm 245.76$ & $245.81 \pm 359.86$ \\ [6pt]
\multirow{2}{*}{SR-fiber angle\tnote{1}} & \multirow{2}{*}{$\left[\SI{}{\degree}\right]$} & Pre- & $27.09 \pm 5.67$ & $27.14 \pm 7.93$ & $32.45 \pm 8.85$ \\
& & Post- & $45.3 \pm 18.19$ & $39.37 \pm 15.78$ & $44.58 \pm 13.37$ \\ [6pt]
\multirow{2}{*}{$SR_{ff}$} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $-287.19 \pm 207.43$ & $-304.56 \pm 214.32$ & $-234.99 \pm 81.65$ \\
& & Post- & $-132.67 \pm 104.11$ & $-155.42 \pm 93.40$ & $-166.9 \pm 145.68$ \\ [6pt]
\multirow{2}{*}{$SR_{cc}$} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $112.56 \pm 86.14$ & $184.14 \pm 134.39$ & $192.76 \pm 187.55$ \\
& & Post- & $-0.39 \pm 147.18$ & $5.51 \pm 274.19$ & $-156.89 \pm 299.33$ \\ [6pt]
\multirow{2}{*}{$SR_{fc}$\tnote{2,5}} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $148.81 \pm 145.57$ & $229.88 \pm 206.9$ & $294.36 \pm 237.35$ \\
& & Post- & $93.34 \pm 60.97$ & $187.99 \pm 126.94$ & $287.92 \pm 198.53$ \\ [6pt]
\multirow{2}{*}{$SR_{fc\_\,\mathrm{max}}$\tnote{2,5}} & \multirow{2}{*}{$\left[ \SI{}{\per\milli\second}\right]$} & Pre- & $-295.75 \pm 181.69$ & $-389.03 \pm 229.29$ & $-409.13 \pm 249.11$ \\
& & Post- & $-198.38 \pm 80.26$ & $-257.19 \pm 112.06$ & $-369.34 \pm 188.3$ \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\begin{tablenotes}[flushleft]\footnotesize
\item[1] significant difference between pre- and post-suspension
\item[2] significant difference between proximal and distal regions
\item[3] significant difference between middle and distal regions
\item[4] significant interaction time*region
\item[5] significant difference between proximal and middle regions
\end{tablenotes}
\end{threeparttable}
\end{center}
\vspace{-0.2cm}
\end{table}
\end{landscape}
%=========================================================
For the multivariable analysis, the R value is the multiple correlation coefficient and is a measure of the quality of the prediction of force; the value of $0.844$ (Table~\ref{tab: SR1_3}) indicates a good level of prediction. The beta values provide the relative weight of each predictor in the multivariable regression, and a significant \textit{p}-value ($p < 0.05$) associated with a predictor indicates that the null hypothesis for that variable is rejected. Stepwise multivariable analysis selected $SR_{\mathrm{in-plane}}$, the SR-fiber angle, and $V_{\mathrm{MG}}$ as significant predictors of force ($R=0.844$, $F=31.257$, $p<0.001$). Due to the small number of subjects for force change (7 subjects), the univariate analysis was not extended to a multivariable analysis for the prediction of force change. Univariate analysis identified $SR_{\mathrm{fiber}}$ and the maximum shear strain $(SR_{fc\_\,\mathrm{max}})$ as significantly associated with the change in force with unloading (Table~\ref{tab: SR1_4}).
%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Univariate and multivariable linear regression analysis of morphological and strain rate indices to the force output]{Univariate and multivariable linear regression analysis of morphological and strain rate indices to the force output (MVIC).}
\label{tab: SR1_3}
\begin{center}
\begin{tabular}{@{}lrrrrrr@{}}
\toprule[1pt]\midrule[0.3pt]
&& \multicolumn{2}{c}{Univariate} & & \multicolumn{2}{c}{Multivariable} \\
\cmidrule(lr){3-4} \cmidrule(lr){6-7}
&& \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} & & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} \\
\midrule
$SR_{\mathrm{fiber}}$ & & $-0.124$ & 0.433 & & & \\ [2pt]
$SR_{\mathrm{in-plane}}$ & & 0.426 & 0.005 & & 0.277 & 0.007 \\ [2pt]
$SR_{\mathrm{out-plane}}$ & & $-0.129$ & 0.222 & & & \\ [2pt]
SR-fiber angle & & $-0.528$ & $<0.001$ & & $-0.299$ & 0.004 \\ [2pt]
$SR_{fc\_\,\mathrm{max}}$ & & $-0.098$ & 0.537 & & & \\ [2pt]
$V_{\mathrm{MG}}$ & & 0.692 & $<0.001$ & & 0.629 & $<0.001$ \\ [2pt]
$V_{\mathrm{LG}}$ & & 0.690 & $<0.001$ & & & \\ [2pt]
$V_{\mathrm{SOL}}$ & & 0.655 & $<0.001$ & & & \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Univariate linear regression analysis of changes in morphological and strain rate indices to the change in force output after unloading]{Univariate linear regression analysis of changes in morphological and strain rate indices to the change in force output (MVIC) after unloading.}
\label{tab: SR1_4}
\begin{center}
\begin{tabular}{@{}lrrr@{}}
\toprule[1pt]\midrule[0.3pt]
& & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} \\
\midrule
$SR_{\mathrm{fiber}}$ & & $-0.732$ & $<0.001$ \\ [2pt]
$SR_{\mathrm{in-plane}}$ & & 0.468 & 0.032 \\ [2pt]
$SR_{\mathrm{out-plane}}$ & & 0.582 & 0.006 \\ [2pt]
SR-fiber angle & & $-0.242$ & 0.290 \\ [2pt]
$SR_{fc\_\,\mathrm{max}}$ & & $-0.721$ & $<0.001$ \\ [2pt]
$V_{\mathrm{MG}}$ & & 0.286 & 0.217 \\ [2pt]
$V_{\mathrm{LG}}$ & & 0.095 & 0.578 \\ [2pt]
$V_{\mathrm{SOL}}$ & & 0.384 & 0.085 \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Discussion and Conclusion}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this study, an exploratory analysis of indices derived from the SR tensor was performed to determine any limb suspension related changes, with the anticipation that these indices can be related to ECM remodeling. It is to be noted that all of the SR analysis was conducted at the same force level for pre- and post-suspension (that corresponding to the post-suspension force) and hence not at the peak force for the pre-suspension data. However, this ensured that comparisons were not biased by different force levels for the pre- and post-suspension data. Further, a force loss of $\sim 30\%$ (from the suspension intervention) shifted the analysis point from the peak force to close to the beginning of the plateau level, so that the pre-suspension values were not acquired too early in the contraction cycle. While not reported here, the data were also analyzed at the peak force (corresponding to the peak in $SR_{\mathrm{fiber}}$ during the contraction phase of the isometric cycle) and showed the same trends and significant differences between pre- and post-intervention as the analysis at the same force level. It should be noted that comparison at the peak force corresponds to analysis at the same \%MVIC, in contrast to the same force.
%-new paragraph-%

%-new paragraph-%
$SR_{\mathrm{fiber}}$ and $SR_{ff}$ are the normal strain rates in the direction closest to the muscle fiber in the two bases. Though both strain rates decreased post-suspension, the differences were not significant (Table~\ref{tab: SR1_2}). This is in contrast to findings in the aging study, where younger cohorts showed significantly higher strain rates than older cohorts during isometric contraction~\cite{RNS16}. One of the hypotheses advanced to explain the age related decrease in strain rate was an increase in muscle stiffness with age arising from the remodeling of the extracellular matrix. Computational models predict that a stiffer extracellular matrix will result in reduced fascicle strain as well as reduced force output~\cite{RNS28}. It is likely that the extensive ECM remodeling in the aging atrophy model, which results in increased stiffness of the matrix, is not present in the disuse atrophy model investigated herein, or that the increase in ECM stiffness is not as pronounced. However, there is increasing evidence that changes in the ECM play an important role in disuse-muscle atrophy. This is because disruption of muscle cell adhesion to the extracellular matrix leads to muscle wasting~\cite{RNS29, RNS30}. For instance, degradation of basal membrane components by matrix metalloproteases (MMPs) and alterations in mRNA and protein for extracellular matrix components have been found in disuse-muscle atrophy~\cite{RNS29, RNS30}.
%-new paragraph-%

%-new paragraph-%
Significant decreases in $SR_{\mathrm{in-plane}}$ are seen post-suspension (Table~\ref{tab: SR1_2}).
The changes in $SR_{\mathrm{in-plane}}$ reflect the changes in deformation asymmetry in the fiber cross-section. In general, the reduction in $SR_{\mathrm{in-plane}}$ indicates that the asymmetry of deformation in the fiber cross-section is reduced post-suspension. While deformation asymmetry is most pronounced in the distal regions in both pre- and post-suspension, the axis of greater deformation changes from in-plane (pre) to out-plane (post) in the distal regions. Similar to $SR_{\mathrm{in-plane}}$, $SR_{cc}$, which is the in-plane deformation in the fiber basis, is reduced post-suspension with a trend toward significance (Table~\ref{tab: SR1_2}). Asymmetry in deformation in the fiber cross-section has been reported in prior studies~\cite{RNS16, RNS31, RNS32}, and one hypothesis for asymmetric deformation is that constraints (to deformation) are introduced by a specific orientation of tensile material (e.g., costameres). Costameres can be described in simple terms as a protein assembly that anchors myofibrils to the sarcolemma and to the extracellular matrix, and thus acts as an important bridge between the muscle's contractile and other structural components~\cite{RNS33}. Prior studies using the ULLS model identified a large decrease in costameric proteins, such as focal adhesion kinase, which could potentially contribute to the disproportionate loss of muscle force ($30\%$) compared to the decrease in quadriceps muscle CSA ($\sim 5\%$)~\cite{RNS8, RNS17}. This is because costameres provide the key functional role of adhesion between adjacent muscle fibers and between muscle fibers and the surrounding connective tissue~\cite{RNS34}. An earlier modeling paper predicted that when there is a strongly anisotropic constraint (which was postulated to arise from a specific structural arrangement of costameres), the force output may increase by a factor of two~\cite{RNS28}. Conversely, it is reasonable to speculate that when the anisotropy in stiffness decreases (implied by the decrease in asymmetry of deformation), it could account for the loss of force seen post-suspension.
%-new paragraph-%

%-new paragraph-%
Regional differences in $SR_{\mathrm{in-plane}}$ eigenvalues were seen, with the distal regions showing the highest strain rates. Spatial heterogeneity of $SR_{\mathrm{in-plane}}$ and $SR_{\mathrm{out-plane}}$ is related to the regional variation of the asymmetry of deformation in the fiber cross-section. The increasing deformation asymmetry at the distal end in the pre-suspension cohort may be related to the fiber packing density along the muscle length, which increases from the proximal to the distal regions. Earlier studies have also reported regional heterogeneity in strain in the calf muscles~\cite{RNS24} and in the biceps brachii~\cite{RNS35}. The reduced asymmetry even in the distal regions for the post-suspension cohort (Table~\ref{tab: SR1_2}) could arise from a combination of fiber atrophy and alterations in the ECM.
%-new paragraph-%

%-new paragraph-%
The deviation (non-zero SR-fiber angle, Table~\ref{tab: SR1_2}) of the principal axis of contraction from the muscle fiber orientation is seen in both pre- and post-suspension, and this angle increases significantly post-suspension. Deviation of the principal axis of strain from that of the fiber has been observed in other muscles such as the anterior tibialis~\cite{RNS31}, the biceps brachii~\cite{RNS35}, and the myocardium~\cite{RNS36}, and this deviation has been attributed to shearing between muscle fibers~\cite{RNS37, RNS38}.
The larger SR-fiber angle post-suspension may be related to an increase in $SR_{\mathrm{fiber}}$ heterogeneity (Table~\ref{tab: SR1_2} shows that the $SR_{\mathrm{fiber}}$ values in the distal regions are higher by $50\%$ compared to the proximal regions in the post-suspension cohort; the equivalent value in the pre-suspension data is $5\%$). It is likely that, in the post-suspension state, the much higher relative strain rates in the distal regions result in larger SR-fiber angles through the shearing of the endomysium.
%-new paragraph-%

%-new paragraph-%
Shear strain rates (schematic shown in Figure~\ref{fig: SR1_1}b) are related to SR-fiber angles, and Equation~\ref{eq: SR shear} shows the relationship between the two variables (the shear strain rate increases with the SR-fiber angle). However, it can be seen from Table~\ref{tab: SR1_2} that the shear strain rates $(SR_{fc}\text{ and }SR_{fc\_\,\mathrm{max}})$ calculated from the eigenvalues and the SR-fiber angle decrease in all three regions post-suspension. This decrease $(\text{in }SR_{fc} \text{ and }SR_{fc\_\,\mathrm{max}})$ arises essentially from the decrease in the $SR_{\mathrm{fiber}}$ and $SR_{\mathrm{in-plane}}$ values post-suspension. Computational models predict that, in the prematurely terminating fiber, shearing of the endomysium is the most likely pathway for lateral transmission of the force produced by the non-spanning fibers~\cite{RNS15}. The observed reduction in shear strain rate (in the fiber basis or the maximum shear strain basis) with suspension did not reach significance, but it may be speculated that the changes in $SR_{fc}/SR_{fc\_\,\mathrm{max}}$ may translate to a reduction in the lateral transmission of force (LTF). This reduction in LTF may then account for the loss in total force that is not explained by muscle atrophy, activation, and specific tension changes with unloading.
%-new paragraph-%

%-new paragraph-%
The identification of $SR_{\mathrm{in-plane}}$, the SR-fiber angle, and $V_{\mathrm{MG}}$ in stepwise multivariable regression as significant predictors of force has interesting physiological implications. The volume of the MG muscle may reflect the size and number of muscle fibers as well as the extent of connective tissue and fatty infiltration. While aging has been shown to lead to an increase in fatty infiltration as well as an increase in the width of connective tissue~\cite{RNS39}, preliminary results on suspension induced disuse atrophy from our group did not reveal a change in fat or connective tissue content. Thus, it is plausible to infer that volume differences in the MG within the cohort of pre- and post-suspension subjects reflect the size and number of muscle fibers, and since the size~/~number of muscle fibers determines contractility, it is anticipated to be a predictor of force. The relationship of $SR_{\mathrm{in-plane}}$ and the SR-fiber angle to force is more complex: $SR_{\mathrm{in-plane}}$ reflects the constraints (in the extracellular matrix) that modulate the extent of in-plane deformation, which, as detailed earlier, may have an effect on the total force produced. The SR-fiber angle is related to shear in the endomysium (Figure~\ref{fig: SR1_1}). Since $SR_{\mathrm{in-plane}}$ and the SR-fiber angle are parameters that are influenced by the ECM, their relationship to force highlights the role of the ECM in the force output. In univariate analysis, the significant predictors of force change with unloading are the SR eigenvalues and $SR_{fc}/SR_{fc\_\,\mathrm{max}}$.
$SR_{\mathrm{fiber}}$ is a measure of contractility in the fiber direction, and a decrease in its magnitude is anticipated to correlate with force loss. $SR_{\mathrm{in-plane}}$ and $SR_{\mathrm{out-plane}}$ reflect the deformation asymmetry in the fiber cross-section and are influenced by changes in the ECM. $SR_{fc}/SR_{fc\_\,\mathrm{max}}$, the shear strain rate, is also influenced by ECM remodeling and, further, is postulated as the mechanism of lateral transmission of force; a decrease in the shear strain rate may potentially reduce LTF. While changes in the ECM with unloading have been reported in prior studies~\cite{RNS13}, and it has been postulated that this remodeling may result in modulating the lateral force transmission pathway leading to a loss of force output~\cite{RNIRamaswamy}, this is the first study to show that force changes (\textit{in-vivo} and in human subjects) on unloading are related to changes in shear strain. It should be noted that there are other determinants that will also contribute to force loss with unloading, but in this exploratory work on a small cohort, the intent was to extract the contributions of the strain rates in the MG and the morphological parameters of the plantarflexors to changes in force production.
%-new paragraph-%

%-new paragraph-%
The limitations of this study are as follows. The number of subjects is small; however, this was still sufficient to find significant differences in some of the SR indices during isometric contraction between the pre- and post-suspension cohorts and to detect significant regional differences. Muscle fibers and their directions were determined indirectly, and the accuracy of the fiber orientation depends on the accuracy of locating fascicles in the magnitude images. On the other hand, diffusion tensor imaging provides fiber directions directly. However, it should be noted that the method used in the current study is the only way to track muscle fibers over a large region of interest through the dynamic cycle. Finally, noise in the imaging data can cause error propagation in the calculated indices, especially as the computation involves image gradients, which amplify noise.
%-new paragraph-%

%-new paragraph-%
The study confirms the overarching hypothesis that some of the indices derived from the strain rate tensor show significant changes after unloading and that force loss in disuse atrophy can be predicted by SR parameters that are known to be influenced by ECM remodeling. The two indices that showed significant changes were $SR_{\mathrm{in-plane}}$ and the SR-fiber angle; both of these parameters are influenced by the ECM. Further, $SR_{\mathrm{in-plane}}$, the SR-fiber angle, and $V_{\mathrm{MG}}$ were identified from a multivariable regression analysis as predictors of force. $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$, $SR_{\mathrm{out-plane}}$, and $SR_{fc}/SR_{fc\_\,\mathrm{max}}$ were significant predictors of force change in univariate analysis. $SR_{fc}/SR_{fc\_\,\mathrm{max}}$ reflects changes in the ECM and, importantly, the study shows that the shear strain rate (and potentially the lateral transmission of force) is a predictor of force loss with atrophy. The role of the ECM (and its remodeling) is increasingly being identified in several musculoskeletal disease states including muscular dystrophies, diabetes, and aging~\cite{RNS40}.
Lateral transmission of force may be impacted by changes in the ECM, and techniques to non-invasively monitor ECM remodeling and its functional consequences may allow accurate diagnosis and tracking of these disease states, tailoring of rehabilitative paradigms, and monitoring of the efficacy of drug interventions.
%=========================================================
\section{Shear Strain Rate From Velocity Encoded Phase-Contrast Imaging to Study Effects of Aging in Medial Gastrocnemius Muscle}
\label{sec: SR_SHEAR}
%=========================================================
The loss of muscle force that occurs with aging is disproportionately greater than the loss of muscle mass (atrophy) and is still not completely understood~\cite{RNSS6}. Neural and contractile contributions to age related loss of muscle force have been explored extensively~\cite{RNS13}. One potential determinant of age related loss of muscle force that has not been investigated in depth is the remodeling of the extracellular matrix (ECM) and its functional consequences~\cite{RNSS8}. Several studies have emphasized the importance of force transmission pathways: longitudinal transmission of force via the myotendinous junction and lateral transmission of force mediated by the extracellular matrix via myofascial pathways~\cite{RNSS8, RNIRamaswamy}. Impairment in lateral transmission pathways with age has been demonstrated in aging rodents and has been linked to the remodeling of the ECM~\cite{RNIRamaswamy}. The shearing of the ECM has been postulated to be the underlying mechanism of lateral transmission of force; thus, measurement of the shear strain could potentially be a surrogate assessment of lateral transmission of force. It should be emphasized, however, that the role of the ECM may not be limited to modulating the lateral transmission of force; it may be involved in a more general force transmission mechanism termed "myofascial force transmission"~\cite{RNSS4}.
%-new paragraph-%

%-new paragraph-%
This study extends an earlier work that explored the changes in muscle strain rate with age, where the analysis was limited to normal strain rates in the principal basis~\cite{RNS16}. The earlier study identified the strain rate components in the principal basis that significantly differed between young and old subjects. This study is focused on: ($i$) establishing the repeatability limits of strain rate components in human subjects, ($ii$) evaluating strain rate tensor components in the different bases to monitor age related differences in muscle deformation, and ($iii$) identifying the relationship between the SR parameters and muscle force in a cohort of young and senior subjects. The analysis was performed on the medial gastrocnemius under isometric contraction.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Methods}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
%---------------------------------------------------------
\subsubsection{Subjects}
%---------------------------------------------------------
The study was approved by the Medical Research Ethics Board of UC San Diego and conformed to the standards in the Declaration of Helsinki on the use of human subjects in research.
Six young ($26.1 \pm 2.3$ years old, height: $158.6 \pm \SI{5.6}{\centi\meter}$, mass: $50.8 \pm \SI{3.7}{\kilogram}$) and six senior ($76.7 \pm 8.3$ years old, height: $153.0 \pm \SI{2.0}{\centi\meter}$, mass: $57.4 \pm \SI{4.3}{\kilogram}$) healthy, moderately active, female subjects were included in this study after informed consent~\cite{RNS16}. Two subjects (one young: 25 years, one old: 65 years) were scanned on three separate days to determine the coefficient of variation (CV) and the repeatability coefficients (RC) of the SR components.
%---------------------------------------------------------
\subsubsection{MR imaging}
%---------------------------------------------------------
MR imaging was performed on a $\SI{1.5}{\tesla}$ Signa HDx MR scanner (GE Medical Systems, WI, USA) using the same protocol as described in Section~\ref{sec: SR_ULLS}. Images were acquired during sub-maximal, isometric plantarflexion contraction at $35\%$ of the individual maximum voluntary isometric contraction (MVIC); this level was chosen so that all subjects could comply with the imaging protocol (72 contractions~/~cycle). MR imaging included high-resolution water saturated oblique sagittal fast spin echo (FSE) images of the medial gastrocnemius (MG), in which the muscle tissue (water) signal is suppressed while the fascicles (fat) appear hyperintense. The slice (oblique sagittal) that best depicted the fascicles was selected for the Velocity Encoded Phase-Contrast (VEPC) scan~\cite{RNS16}. Subjects were provided real-time visual feedback of the generated force, superposed on the target force curve, to facilitate consistent contractions.
%---------------------------------------------------------
\subsubsection{Force measurements}
%---------------------------------------------------------
MVIC was determined for each subject as the best of three trials recorded prior to MRI~\cite{RNS16, RNSS10}. Torques were recorded during acquisition at a sampling frequency of $\SI{200}{\hertz}$. Muscle force was computed by dividing the measured torque by the Achilles tendon moment arm length.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Computation of the Strain Rate Tensor}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
%---------------------------------------------------------
\subsubsection{Strain rate (SR) calculation in the principal basis}
%---------------------------------------------------------
The ($2 \times 2$) SR tensor was calculated from the spatial gradient of the velocity images (velocities validated using calibrated flow phantoms~\cite{RNSS11}) and then diagonalized to obtain the eigenvalues ($SR_\mathrm{fiber}$, $SR_\mathrm{in-plane}$) and eigenvectors. $SR_\mathrm{fiber}$ denotes deformation in a direction closer to the muscle fiber axis (than the orthogonal SR component) and is negative during muscle fiber shortening and positive during relaxation. $SR_\mathrm{in-plane}$ denotes deformation in the muscle fiber cross-section and is positive during muscle fiber shortening and negative during relaxation. In this study, only a $2 \times 2$ SR tensor is computed, since a single velocity encoded slice (selected such that the MG muscle fibers lie in the plane of the image) was acquired, precluding the computation of the $z$-derivative of velocity required for the complete $3 \times 3$ tensor.
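As an illustration of this step, the following minimal sketch (added for exposition; it is not the published analysis code, and the function name, array layout, and pixel spacings are hypothetical) forms the $2 \times 2$ SR tensor as the symmetric part of the in-plane velocity gradient and diagonalizes it per voxel.
\begin{lstlisting}[language=Python, caption={Illustrative per-voxel computation of the 2 x 2 SR tensor (hypothetical sketch)}, label={lst:sr2x2-sketch}]
import numpy as np

def strain_rate_2x2(vx, vy, dx, dy):
    """vx, vy: 2D in-plane velocity components of one dynamic frame;
    dx, dy: pixel spacings. Returns per-voxel eigenvalues/eigenvectors."""
    # Velocity gradient: np.gradient differentiates along axis 0 (y)
    # first, then axis 1 (x), with the matching spacings.
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)

    # Strain rate tensor = symmetric part of the velocity gradient.
    sr_xy = 0.5 * (dvx_dy + dvy_dx)
    sr = np.stack([np.stack([dvx_dx, sr_xy], axis=-1),
                   np.stack([sr_xy, dvy_dy], axis=-1)], axis=-2)

    # Diagonalize each 2x2 tensor; eigh returns ascending eigenvalues,
    # so during the contraction phase index 0 (negative) plays the role
    # of SR_fiber and index 1 (positive) that of SR_in-plane.
    eigvals, eigvecs = np.linalg.eigh(sr)
    return eigvals, eigvecs
\end{lstlisting}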
%---------------------------------------------------------
\subsubsection{Muscle fiber tracking}
%---------------------------------------------------------
Muscle fibers (end points on the aponeuroses) were located on the fast spin-echo images in the distal, middle, and proximal regions and transferred to the first frame of the dynamic images, and the end points were tracked through the isometric cycle~\cite{RNS16}.
%---------------------------------------------------------
\subsubsection{Strain rate in the fiber basis}
%---------------------------------------------------------
Strain rate in the fiber basis was computed by rotating the SR tensor in the principal axes frame to the fiber basis using the following rotational transformation:
%.........................................................
\begin{equation}\label{eq: SR2_1}
SR_{\mathrm{fb}}=\mathrm{R}\cdot SR_{\mathrm{pb}} \cdot \mathrm{R}^\intercal
\end{equation}
%.........................................................
%.........................................................
\begin{equation}\label{eq: SR2_2}
\mathrm{R} = \left[ \begin{matrix} \cos{\theta} & -\sin{\theta}\\ \sin{\theta} & \cos{\theta}\\ \end{matrix} \right]
\end{equation}
%.........................................................
where $SR_{\mathrm{fb}}$ is the strain rate tensor in the fiber frame defined by the fiber axis, $f$, and the fiber in-plane cross-sectional axis, $c$; $\mathrm{R}$ is the 2D rotation matrix defined by the SR-fiber angle $\theta$; and $SR_{\mathrm{pb}}$ is the strain rate tensor in the principal basis frame. $SR_{\mathrm{fb}}$ has the diagonal elements $SR_{ff}$ and $SR_{cc}$ (normal strain along the fiber and in the cross-section, respectively) and the off-diagonal term $SR_{fc}$ (shear strain), while $SR_{\mathrm{pb}}$ has the diagonal terms $SR_{\mathrm{fiber}}$ and $SR_{\mathrm{in-plane}}$ (negative and positive principal strain rates, respectively) (Figure~\ref{fig: SR2_1}).
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[scale=0.6]{Figures/SRYO_Schematic.pdf}
\caption[Schematic of a muscle fiber and endomysium with the principal basis and fiber basis]{Schematic of a muscle fiber and endomysium with the principal basis and fiber basis. The muscle fiber is shown contracting while the endomysium experiences a shear strain. The thick (unfilled) arrows show the lateral transmission of force pathways. Arrows indicate normal and shear strains.}
\label{fig: SR2_1}
\end{figure}
%*********************************************************
%---------------------------------------------------------
\subsubsection{Strain rate tensor in the maximum shear strain rate basis}
%--------------------------------------------------------
The maximum shear strain, $SR_{fc\_\,\mathrm{max}}$, was estimated by rotating $SR_{\mathrm{pb}}$ by $\SI{45}{\degree}$ (from tensor algebra, the maximum in the off-diagonal terms occurs $\SI{45}{\degree}$ from the principal axes~\cite{RNSS12}). $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$, $SR_{ff}$, and $SR_{cc}$ are called normal strains (defined as perpendicular to the face of an element and represented by the diagonal terms of the SR tensor) while $SR_{fc}$ and $SR_{fc\_\,\mathrm{max}}$ are shear strains (defined as parallel to the face of an element and represented by the off-diagonal terms of the SR tensor); the former is the shear strain in the muscle fiber basis and the latter is the maximum shear strain (Figure~\ref{fig: SR2_1}).
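As a worked check of the $\SI{45}{\degree}$ statement (added here for clarity; not part of the original text), carrying out the rotation of Equation~\ref{eq: SR2_1} with the diagonal tensor $SR_{\mathrm{pb}} = \mathrm{diag}(SR_{\mathrm{fiber}},\, SR_{\mathrm{in-plane}})$ gives the off-diagonal (shear) term at an arbitrary rotation angle $\theta$:
\begin{equation}
SR_{fc}(\theta) = \sin{\theta}\cos{\theta}\left(SR_{\mathrm{fiber}} - SR_{\mathrm{in-plane}}\right) = \tfrac{1}{2}\sin{2\theta}\left(SR_{\mathrm{fiber}} - SR_{\mathrm{in-plane}}\right),
\end{equation}
whose magnitude is maximal at $\theta = \SI{45}{\degree}$, yielding $SR_{fc\_\,\mathrm{max}} = \left(SR_{\mathrm{fiber}} - SR_{\mathrm{in-plane}}\right)/2$. As a numerical consistency check, the young cohort values at the same force level in Table~\ref{tab: SR2_1} give $(-391 - 280)/2 \approx -336$, close to the tabulated $SR_{fc\_\,\mathrm{max}}$ of $-335$.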
Figure~\ref{fig: SR2_S1} shows the SR analysis pipeline and the anticipated variability in the computed SR components as a function of the variability in the velocity images.
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.96\textwidth]{Figures/SRYO_error.pdf}
\caption[Pipeline of the strain rate analysis with associated uncertainties in the computed values]{Pipeline of the strain rate analysis to derive strain rate in the principal axes, muscle fiber basis, and maximum shear basis. It shows the uncertainty in the computed values as a function of the uncertainty in the (acquired) velocity data, $\delta v$.}
\label{fig: SR2_S1}
\end{figure}
%*********************************************************
%---------------------------------------------------------
\subsubsection{ROI measurements}
%---------------------------------------------------------
Regional analysis of normal and shear strains in the two bases was performed on regions of interest (ROIs) selected manually on the magnitude images at the proximal, middle, and distal regions (corresponding to distances at 75\%, 50\%, and 25\% of the total muscle length from the distal end). Since the ROIs shifted with muscle motion, pixel tracking was performed to ensure that the same anatomical region was being sampled. The SR indices were computed at two force levels: one at the peak force level for each subject and the other at the force level corresponding to the lowest MVIC exerted by any subject. Peak values of the SR components were identified at the temporal frame of the negative eigenvalue peak ($SR_{\mathrm{fiber}}$) in the compression phase. To extract values at the same force level for a subject, the temporal frame corresponding to the lowest MVIC (of all subjects) was located in the force-time curves and the SR values were extracted from the closest frame of the dynamic MRI.
%---------------------------------------------------------
\subsubsection{Statistical analyses}
%---------------------------------------------------------
The coefficient of variation (CV) was calculated as the ratio of the within-subject standard deviation, $S_w$, to the mean value, expressed as a percentage (estimated from the three repeat measures). The repeatability coefficient, RC, which represents the threshold value below which the absolute difference between two measurements on the same subject is expected to lie for $95\%$ of the measurement pairs, was calculated as $0.0277 \times \mathrm{mean} \times \mathrm{CV}$~\cite{RNSS13}. For all tests, the level of significance was set at $0.05$. Univariate and stepwise multivariable linear regression was performed to identify predictors (MG strain rate parameters estimated at the peak of the SR) of force in a cohort of young and senior subjects. The predictors tested were the strain parameters alone and did not include morphological parameters, since the latter are already established as predictors; the focus here was to test if SR components were predictors and, of these, to identify the most significant SR predictor(s). For the multivariable analysis, only independent variables were retained. The statistical analyses were carried out using SPSS (IBM Corporation, Chicago, IL).
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Results}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Muscle force was lower by 43\% in the senior cohort: young ($387 \pm \SI{43}{\newton}$) and senior ($220 \pm \SI{43}{\newton}$), $p < 0.05$.
This was accompanied by an 18\% lower volume of the triceps surae muscles of the senior cohort, implying that the entire force loss could not be accounted for by the decrease in muscle volume. Figures~\ref{fig: SR2_2}~and~\ref{fig: SR2_3} show the colormaps of the normal strain rates in the principal basis ($SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$) and the maximum shear strain rate ($SR_{fc\_\,\mathrm{max}}$) for one young and one senior subject at the same force level (Figure~\ref{fig: SR2_2}) and at the peak force (Figure~\ref{fig: SR2_3}).
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.6\textwidth]{Figures/SRYO_SRColormaps1.pdf}
\caption[Strain rate maps of the normal and shear strain rates in a young and senior subject at the same force level]{Strain rate maps of the normal and shear strain rates in a young and a senior subject at the same force level. Colormaps are overlaid on the magnitude image at the corresponding frame.}
\label{fig: SR2_2}
\end{figure}
%*********************************************************
%*********************************************************
\begin{figure}[!htb]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.6\textwidth]{Figures/SRYO_SRColormaps2.pdf}
\caption[Strain rate maps of the normal and shear strain rates in a young and senior subject at the peak force level]{Strain rate maps of the normal and shear strain rates in a young and a senior subject at the peak force level. Colormaps are overlaid on the magnitude image at the corresponding frame.}
\label{fig: SR2_3}
\end{figure}
%*********************************************************
Figure~\ref{fig: SR2_S2} shows the temporal variation of the strain rate components over the isometric contraction cycle.
%*********************************************************
\begin{figure}[htb!]
\vspace{+0.2cm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/SRYO_SRtemporal.pdf}
\caption[The temporal variation of the SR tensor indices with isometric contraction for the young and senior subjects]{The temporal variation of the SR tensor indices with isometric contraction for the young and senior subjects. The values shown in this plot are the average over all subjects (young and senior separately).}
\label{fig: SR2_S2}
\end{figure}
%*********************************************************
Table~\ref{tab: SR2_1} lists the average values of the strain rate components for the young and senior cohorts at the same force level and at the peak of the force.
%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Strain rate indices for the young and senior cohorts at the same force level and at peak force]{Strain rate indices for the young and senior cohorts averaged over three regions of interest in the principal, fiber, and maximum shear strain rate bases at the same force level and at peak force.}
\label{tab: SR2_1}
\begin{center}
\begin{threeparttable}
\begin{tabular}{@{}llrrr@{}}
\toprule[1pt]\midrule[0.3pt]
\multicolumn{2}{l}{Principal axis basis} & $SR_\mathrm{fiber}$\tnote{$\dagger$} $\; \left[ \SI{}{\per\milli\second}\right]$ & $SR_\mathrm{in-plane}$\tnote{$\dagger$} $\; \left[ \SI{}{\per\milli\second}\right]$ & $SR_{fc\_\,\mathrm{max}}$\tnote{$\dagger$} $\; \left[ \SI{}{\per\milli\second}\right]$ \\
\midrule
\multicolumn{1}{l}{\multirow{3}{*}{Senior}} & same force level & $-245 \pm 192$ & 186 $\pm$ 120 & $-224$ $\pm$ 133 \\
\multicolumn{1}{l}{} & peak & $-280 \pm 196$ & 177 $\pm$ 102 & $-235$ $\pm$ 107 \\
\multicolumn{1}{l}{} & CV, RC & 3.9, 34.8 & 41.9, 222.8 & 9.3, 78.0 \\ [6pt]
\multicolumn{1}{l}{\multirow{3}{*}{Young}} & same force level & $-391 \pm 151$ & 280 $\pm$ 119 & $-335$ $\pm$ 107 \\
\multicolumn{1}{l}{} & peak & $-424 \pm 140$ & 298 $\pm$ 125 & $-351$ $\pm$ 108 \\
\multicolumn{1}{l}{} & CV, RC & 9.6, 135.5 & 23.5, 173.9 & 6.2, 58.5 \\
\toprule[0.3pt]\midrule[0.3pt]
\multicolumn{2}{l}{Fiber basis} & $SR_{ff}$ $\; \left[ \SI{}{\per\milli\second}\right]$ & $SR_{cc}$ $\; \left[ \SI{}{\per\milli\second}\right]$ & $SR_{fc}$ $\; \left[ \SI{}{\per\milli\second}\right]$ \\
\midrule
\multicolumn{1}{l}{\multirow{2}{*}{Senior}} & same force level & $-170 \pm 118$ & 121 $\pm$ 134 & $-111$ $\pm$ 135 \\
& peak & $-181 \pm 102$\tnote{$\dagger$} & 91 $\pm$ 173 & $-125$ $\pm$ 145 \\ [6pt]
\multicolumn{1}{l}{\multirow{2}{*}{Young}} & same force level & $-259 \pm 146$ & 152 $\pm$ 146 & $-182$ $\pm$ 153 \\
& peak & $-288 \pm 143$\tnote{$\dagger$} & 146 $\pm$ 168 & $-192$ $\pm$ 156 \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\begin{tablenotes}[flushleft]\footnotesize
\item[$\dagger$] significant difference between age groups ($p<0.05$).
\end{tablenotes}
\end{threeparttable}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
The mean value over all the regions (distal, middle, and proximal) is reported, since there was no regional variation in any of the SR parameters and no age*region interactions. The CV and RC for $SR_{\mathrm{fiber}}$ and $SR_{fc\_\,\mathrm{max}}$ were $\sim 6\%$ and $\sim 17\%$, respectively; the RC value indicates that differences in these indices greater than 17\% between cohorts can be identified. However, the variability of $SR_{\mathrm{in-plane}}$ was much higher; this may potentially reflect the larger velocity uncertainties due to in-plane motion artifacts.
%-new paragraph-%

%-new paragraph-%
Statistical analysis for the SR indices obtained at the same force level and at the peak force level showed significant age-related differences in $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$, and $SR_{fc\_\,\mathrm{max}}$ and, in addition, in $SR_{ff}$ for the peak force analysis alone. Tables~\ref{tab: SR2_2} and~\ref{tab: SR2_3} summarize the regression models for the SR indices obtained at the same force level and at the peak force level, respectively. $SR_{ff}$ and $SR_{fc\_\,\mathrm{max}}$ (both negatively) were significantly correlated with force output, with $SR_{fc\_\,\mathrm{max}}$ having the strongest correlation.
Stepwise multivariable regression produced a model with two predictors, $SR_{fc\_\,\mathrm{max}}$ and $SR_{cc}$, with $R=0.681$ (a moderately good level of prediction).
%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Univariate and multivariable linear regression analysis of parameters obtained at the same force level with maximum voluntary isometric contraction]{Univariate and multivariable linear regression analysis of parameters obtained at the same force level with a significant association with Maximum Voluntary Isometric Contraction (MVIC). Model ($R=0.640$, $F=11.448$, $p<0.001$).}
\label{tab: SR2_2}
\begin{center}
\begin{tabular}{@{}llrrrrr@{}}
\toprule[1pt]\midrule[0.3pt]
&& \multicolumn{2}{c}{Univariate} & & \multicolumn{2}{c}{Multivariable} \\
\cmidrule(lr){3-4} \cmidrule(lr){6-7}
&& \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} & & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} \\
\midrule
$SR_{\mathrm{fiber}}$ & & $-0.421$ & 0.011 & & & \\ [2pt]
$SR_{\mathrm{in-plane}}$ & & 0.470 & 0.001 & & & \\ [2pt]
$SR_{ff}$ & & $-0.561$ & $<0.001$ & & & \\ [2pt]
$SR_{cc}$ & & 0.410 & 0.013 & & 0.369 & 0.012 \\ [2pt]
$SR_{fc}$ & & $-0.339$ & 0.043 & & & \\ [2pt]
$SR_{fc\_\,\mathrm{max}}$ & & $-0.528$ & 0.001 & & $-0.606$ & $<0.001$ \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
%=========================================================
\begin{table}[!htb]
\vspace{+0.2cm}
\caption[Univariate and multivariable linear regression analysis of parameters obtained at peak force with maximum voluntary isometric contraction]{Univariate and multivariable linear regression analysis of parameters obtained at the peak of the contraction phase with a significant association with Maximum Voluntary Isometric Contraction (MVIC). Model ($R=0.681$, $F=14.034$, $p<0.001$).}
\label{tab: SR2_3}
\begin{center}
\begin{tabular}{@{}llrrrrr@{}}
\toprule[1pt]\midrule[0.3pt]
&& \multicolumn{2}{c}{Univariate} & & \multicolumn{2}{c}{Multivariable} \\
\cmidrule(lr){3-4} \cmidrule(lr){6-7}
&& \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} & & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$p$} \\
\midrule
$SR_{\mathrm{fiber}}$ & & $-0.353$ & 0.035 & & & \\ [2pt]
$SR_{\mathrm{in-plane}}$ & & 0.558 & $<0.001$ & & & \\ [2pt]
$SR_{ff}$ & & $-0.554$ & $<0.001$ & & & \\ [2pt]
$SR_{cc}$ & & 0.465 & 0.004 & & 0.198 & 0.009 \\ [2pt]
$SR_{fc}$ & & $-0.262$ & 0.122 & & & \\ [2pt]
$SR_{fc\_\,\mathrm{max}}$ & & $-0.583$ & $<0.001$ & & $-0.393$ & 0.001 \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\vspace{-0.2cm}
\end{table}
%=========================================================
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Discussion}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Examining the factors contributing to the variability in the measurements, the conclusion is that the primary sources arise from inconsistency of the isometric plantarflexion contractions, the intrinsic uncertainties in the SR measurement methodology, and subject-specific differences. Karakuzu~et~al. recently noted that inter-individual differences in the muscle-tendon complex anatomy may in part be responsible for inter-subject strain variability~\cite{RNSS4}. The first contribution to variability was minimized by the visual feedback. Propagation of error analysis similar to that in~\cite{RNSS14} shows that the uncertainty in the SR is approximately eight times the uncertainty in the velocity. This high variability is reflected in the ROI measurements, even though 2D anisotropic diffusion filtering was performed to reduce the uncertainty in the velocity maps.
%-new paragraph-%

%-new paragraph-%
The results of the strain rate analysis in the principal basis and in the fiber basis show that the normal strains along the fiber ($SR_{\mathrm{fiber}}$ and $SR_{ff}$) and in the fiber cross-section ($SR_{\mathrm{in-plane}}$) are significantly lower in the aging cohort. Azizi~et~al. showed, by combining a mathematical model with experimental manipulation, that structural changes in the ECM (e.g., an increase in collagen and in stiffness) compromise the muscle's ability to expand radially, which in turn restricts muscle shortening~\cite{RNSS15}. Thus, the observed changes in both of the normal strains (along the fiber as well as in the fiber cross-section) can be attributed, at least in part, to the structural changes in the ECM. Significant differences in the maximum shear strain rate were found between the young and senior cohorts. The SR tensor, including the shear strain, is measured at the voxel level, precluding a direct assignment of the shear strain to the endomysium (which has a very short $T_2$ and widths much smaller than the MR voxel resolution). However, computational models have identified that the endomysium (or ECM) shears, and it is a reasonable extrapolation to associate the measured MR shear strain with the shear in the ECM. This latter shear has been proposed to be the mechanism by which force is transmitted laterally~\cite{RNS15, RNSS17}. While a direct non-invasive measurement of lateral transmission of force (LTF) is not possible, the current analysis of shear strain rate may potentially be a surrogate measure of LTF.
The ability to compute the shear strain rate, as reported in this paper, may provide a tool to explore, non-invasively and \textit{in-vivo}, modifications to lateral transmission pathways. It is important to point out that a simplified model of a single muscle fiber and the surrounding endomysium is considered here. In reality, when considering groups of active muscles, the situation is more complex, and the shearing of the endomysium may potentially be attributed to the presence of complex intramuscular myofascial loads. Karakuzu~et~al. argue that epimuscular myofascial loads and intramuscular ones originating from the ECM and muscle fibers impact local deformations and are the underlying source of strain variability within a muscle~\cite{RNSS4}. Their hypothesis of force transmission through the myofascial network is in agreement with the interpretation in this paper of the observed shear strain as potentially arising from the shearing of the endomysium. It should be noted that the mechanical role of the ECM is not limited to "lateral transmission of force". Some of the findings observed in the current paper, such as increased strain rates in the anterior compartment muscles of the triceps surae, may be explained by a more general force transmission mechanism: "myofascial force transmission". The latter force transmission pathway considers the skeletal muscle within a myofascial continuity, where the ECM mechanically interacts with muscle fibers along their full lengths, which are, in turn, subject to further mechanical alterations through the surrounding muscles via epimuscular pathways~\cite{RNSS18}. Wilke~et~al. have recently shown that these mechanical interactions in turn have significant effects on the mechanical properties of the connective tissue~\cite{RNSS19}. It is highly likely that with age, the mechanical properties of the connective tissue (in the endomysium, perimysium, and epimysium) are altered, resulting in differences in strain rate components.
%-new paragraph-%

%-new paragraph-%
The current study shows that the basis frame in which the strain rate component is a maximum (the principal basis for normal strains or the maximum shear basis for shear strains) is the most sensitive for detecting age related changes (e.g., $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$, and $SR_{fc\_\,\mathrm{max}}$ show significant differences). Comparison at the same force level across all subjects places the evaluation at a fairly low force level for most of the subjects, which may not be the optimum force level to detect changes. In contrast, evaluation at the peak of the force ensures the same effort level (\%MVIC) across subjects and, further, is not limited by the force exerted by the weakest subject. Though significant differences in $SR_{\mathrm{fiber}}$, $SR_{\mathrm{in-plane}}$, and the maximum shear strain rate were seen in evaluations by both methods, it may be more physiologically meaningful to make the comparisons at the peak force level. In univariate analysis, several SR parameters were significantly correlated with force, confirming that both normal and shear strain rates significantly predict muscle force output. It is also noteworthy that, in multivariable regression, the two significant predictors of force in a cohort of young and senior subjects are the strain rate indices $SR_{cc}$ and $SR_{fc\_\,\mathrm{max}}$; both are known from other studies to be related to the status of the extracellular matrix~\cite{RNSS15, RNS15, RNSS17}.
%-new paragraph-%

%-new paragraph-%
It should be emphasized that the strain rate is in reality a $3 \times 3$ tensor, whereas only the $2 \times 2$ SR tensor is computed here. The inability to compute the full $3 \times 3$ tensor arises from the fact that a single slice is acquired, which, though encoded for velocity in three orthogonal directions, does not allow the computation of the velocity gradient in the slice direction (thus precluding a $3 \times 3$ tensor analysis). In the protocol described here, multiple slices can be acquired in multiple scans, but this will extend scan times to lengths that senior subjects cannot tolerate. It is acknowledged that the identification of the fascicles as entirely in-plane (of the oblique sagittal slice in the current study) is not completely accurate, as fascicles are known to be non-planar~\cite{RNSS20}. In this context, it should be noted that the oblique sagittal slice for VEPC was identified following a specific protocol that resulted in the best depiction of the fascicles in the fast spin echo images. This protocol included ($i$) selecting the axial slice with the largest cross-section of the MG from a stack of axial slices of the calf muscle; ($ii$) positioning the oblique slice such that it bisected the distance between the femur and tibia and was perpendicular to it; and, importantly, ($iii$) aligning the oblique slice with or parallel to the most prominent dark line depicting a fascicle in this axial slice. In the stack of such sagittal oblique "scout" slices, one of the slices generally had several prominent dark fascicular lines, and this slice was subsequently used for the VEPC acquisition. Sometimes a second stack of scout slices had to be acquired to obtain the best depiction of the fascicles in the oblique sagittal planes (these "scout" scans were quite rapid). The accuracy of the orientation of the VEPC slice was ensured by checking for the greatest number and highest contrast of the dark fascicular lines in the oblique sagittal FSE images. The reproducibility of this slice orientation and position was confirmed after subject repositioning as part of the reproducibility studies. Ensuring the reproducibility of the orientation of the 2D VEPC slice is important, as the 2D tensor is sensitive to the orientation of the dynamic slice. While adherence to the above protocol ensured reproducible slice identification, the accuracy of the slice orientation was also confirmed by examining the out-plane velocity values (the sequence encodes the velocity of the 2D oblique slice in all three directions). For the studies reported here, the out-plane velocity was negligible compared to the in-plane velocities. This is consistent with results from full 3D strain tensor studies, where the strain in the out-plane direction is almost zero~\cite{RNS31}. If the orientation of the VEPC slice were not accurate, the out-plane velocity values would not be small compared to the in-plane velocity values. While in the current study the slice orientation was reproducible and minimized out-plane motion, it is acknowledged that a 3D SR tensor computed from a 3D volume acquisition combined with three-direction velocity encoding would provide a more accurate representation of muscle deformation.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Conclusion}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The study presented focuses on establishing the feasibility of shear strain mapping with application to aging muscle.
The variability of the computed indices is high; nevertheless, despite this variability, the restriction to the $2 \times 2$ SR tensor rather than the full $3 \times 3$ SR tensor, and the small number of subjects, significance was reached in detecting age-related differences. This is a first report of significant differences in shear strain between young and old cohorts, and of the role of shear strain in accounting for age-related force variability in the cohorts. In order to disambiguate potential sex-based differences in age-related muscle deformations, this preliminary study was limited to female subjects. The most important finding of this study is the association of muscle force output with shear strain rate (in addition to normal strain rate), confirming that lower values of the shear strain rate may also contribute to the age-related loss of muscle force.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\section{Acknowledgments}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Section~\ref{sec: SR_ULLS} is a reprint of material, with minor edits, as it appears in: V.~Malis, U.~Sinha, R.~Csapo, M.~Narici, and S.~Sinha, ``Relationship of changes in strain rate indices estimated from velocity-encoded MR imaging to loss of muscle force following disuse atrophy,'' \emph{Magn. Reson. Med.}, vol. 79, no. 2, pp. 912--922, Feb. 2018.
%-new paragraph-%
%-new paragraph-%
Section~\ref{sec: SR_SHEAR} is a reprint of material, with minor edits, as it appears in: U.~Sinha, V.~Malis, R.~Csapo, M.~Narici, and S.~Sinha, ``Shear strain rate from phase contrast velocity encoded MRI: Application to study effects of aging in the medial gastrocnemius muscle,'' \emph{J. Magn. Reson. Imaging}, vol. 48, no. 5, pp. 1351--1357, Nov. 2018.
%-new paragraph-%
%-new paragraph-%
The author of the dissertation was the primary author of these papers.
{ "alphanum_fraction": 0.6868637311, "avg_line_length": 108.0417940877, "ext": "tex", "hexsha": "afd40d1fb55f595bea9c47b2c2cf82766942b197", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7c6a343f902eb7a76d3f0ceca9aeb54def160c59", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vmalis/PhDissertation", "max_forks_repo_path": "chapter3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7c6a343f902eb7a76d3f0ceca9aeb54def160c59", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vmalis/PhDissertation", "max_issues_repo_path": "chapter3.tex", "max_line_length": 505, "max_stars_count": null, "max_stars_repo_head_hexsha": "7c6a343f902eb7a76d3f0ceca9aeb54def160c59", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vmalis/PhDissertation", "max_stars_repo_path": "chapter3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 26493, "size": 105989 }
% Copyright 2016 The LaTeX3 Project \documentclass{ltnews} \PassOptionsToPackage{colorlinks}{hyperref} \usepackage{csquotes} \usepackage{hologo} \usepackage{ragged2e} \usepackage{underscore} \AtBeginDocument{% \renewcommand*{\LaTeXNews}{\LaTeX3~News}% \RaggedRight \setlength\parindent{1em}% } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \publicationmonth{November} \publicationyear{2016} \publicationissue{10} % Avoid hyphenation of csnames \makeatletter \protected\edef\cs#1{% \noexpand\path{\@backslashchar#1}% } \makeatother \begin{document} \maketitle There has been something of a gap since the last \LaTeX3 News, but this does not mean that work has not been going on. The Team have been working on a number of areas, many of which reflect wider take-up of \pkg{expl3}. There have also been a number of significant new developments in the \LaTeX3 \enquote{sphere} in the last two years. \section{\pkg{l3build}: Testing \LaTeX{} packages} Testing has been an important part of the work of the team since they assumed maintenance of \LaTeX{} over twenty years ago. Various scripts have been used over that time by the team for testing, but these have until recently not been set up for wider use. With the general availability of \hologo{LuaTeX} it is now possible to be sure that every \TeX{} user has a powerful general scripting language available: Lua. The team have used this to create a new general testing system for \TeX{} code, \pkg{l3build}. This \emph{is} designed to be used beyond the team, so is now available in \TeX{} Live and \hologo{MiKTeX} and is fully documented. Testing using \pkg{l3build} makes use of a normalised version of the \texttt{.log} file, so can test any aspect of \TeX{} output (e.g., by using \cs{showbox}) or its algorithms (by displaying results in the \texttt{.log}). Part of the remit for creating \pkg{l3build} was to enable the team to work truly cross-platform and to allow testing using multiple \TeX{} engines (earlier systems were limited to a single engine, normally \eTeX{}). The new testing system means we are in a much stronger position to support a variety of engines (see below). It has also enabled us to give useful feedback on development of the \hologo{LuaTeX} engine. As well as the core capability in testing, \pkg{l3build} also provides a \enquote{one stop} script for creating release bundles. The script is sufficiently flexible that for many common \LaTeX{} package structures, setting up for creating releases will require only a few lines of configuration. In addition to the documentation distributed with \pkg{l3build}, the project website~\cite[publications in 2014]{project-publications} contains some articles, videos and conference presentations that explain how to use \pkg{l3build} to manage and test any type of (\LaTeX{}) package. \section{Automating \pkg{expl3} testing} As well as developing \pkg{l3build} for local use, the team have also set up integration testing for \pkg{expl3} using the Travis-CI system. This means that \emph{every} commit to the \LaTeX3 code base now results in a full set of tests being run. This has allowed us to significantly reduce the number of occasions where \pkg{expl3} needs attention before being released to CTAN. Automated testing has also enabled us to check that \pkg{expl3} updates do not break a number of key third-party packages which use the programming environment. \section{Refining \pkg{expl3}} Work continues to improve \pkg{expl3} both in scope and robustness. 
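As a reminder of what the programming environment looks like, the following minimal sketch is purely illustrative (the function \cs{my_double:n} is invented for the example and is not part of \pkg{expl3}):
\begin{verbatim}
\ExplSyntaxOn
\cs_new:Npn \my_double:n #1 { \int_eval:n { 2 * (#1) } }
\tl_set:Nx \l_tmpa_tl { \my_double:n { 21 } } % \l_tmpa_tl now holds 42
\ExplSyntaxOff
\end{verbatim}
Here \cs{int_eval:n} is fully expandable, so the \texttt{x}-type expansion in \cs{tl_set:Nx} leaves the computed value in the scratch variable \cs{l_tmpa_tl}.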
Increased use of the programming environment means that code which has to date been under-explored is being used, and this sometimes requires changes to the code.

The team have extended formal support in \pkg{expl3} to cover the engines p\TeX{} and up\TeX{}, principally used by Japanese \TeX{} users. This has been possible in part due to the \pkg{l3build} system discussed above. Engine-dependent variations between \hologo{pdfTeX}, \hologo{XeTeX}, \hologo{LuaTeX} and (u)p\TeX{} are now well-understood and documented. As part of this process, the \enquote{low-level} part of \pkg{expl3}, which saves all primitives, now covers essentially all primitives found in all of these engines. The code in \pkg{expl3} is now entirely self-contained, loading no other third-party packages, and can also be loaded as a generic package with plain \TeX{}, \emph{etc.} These changes make it much easier to diagnose problems and make \pkg{expl3} more useful. In particular it can be used as a programming language for generic packages that can then run without modification under different formats!

The team have made a range of small refinements to both internals and \pkg{expl3} interfaces. Internal self-consistency has also been improved, for example removing almost all use of \texttt{nopar} functions. Performance enhancements to the \pkg{l3keys} part of \pkg{expl3} are ongoing and should result in significantly faster key setting. As keyval methods are increasingly widely used in defining behaviours, this will have an impact on compile times for end users.

\section{Replacing \cs{lowercase} and \cs{uppercase}}

As discussed in the last \LaTeX3 News, the team have for some time been keen to provide new interfaces which do not directly expose (or in some cases even use) the \TeX{} primitives \cs{lowercase} and \cs{uppercase}. We have now created a series of different interfaces that provide support for the different conceptual uses which may flow from the primitives:
\begin{itemize}
\item For case changing text, \cs{tl_upper_case:n}, \cs{tl_lower_case:n}, \cs{tl_mixed_case:n} and related language-aware functions. These are Unicode-capable and designed for working with text. They also allow for accents, expansion of stored text and leaving math mode unchanged. At present some of the interface decisions are not finalised so they are marked as experimental, but the team expect the core concept to be stable.
\item For case changing programming strings, \cs{str_upper_case:n}, \cs{str_lower_case:n} and \cs{str_fold_case:n}. Again these are Unicode-aware, but in contrast to the functions for text are not context-dependent. They are intended for caseless comparisons, constructing command names on-the-fly and so forth.
\item For creating arbitrary character tokens, \cs{char_generate:nn}. This is based on the \cs{Ucharcat} primitive introduced by \hologo{XeTeX}, but with the ideas extended to other engines. This function can be used to create almost any reasonable token.
\item For defining active characters, \cs{char_set_active_eq:NN} and related functions. The concept here is that active characters should be equivalent to some named function, so one does not directly define the active character.
\end{itemize}

\section{Extending \pkg{xparse}}

After discussions at TUG 2015 and some experimentation, the team have added a new argument type, \texttt{e} (\enquote{embellishment}), to \pkg{xparse}. This allows arguments similar to \TeX{} primitive sub- and superscripts to be accepted.
Thus
\begin{verbatim}
\DeclareDocumentCommand\foo{e{^_}}
  {\showtokens{"#1"}}
\foo^{Hello} world
\end{verbatim}
will show
\begin{verbatim}
"{Hello}{-NoValue-}".
\end{verbatim}
At present, this argument type is experimental: there are a number of models which may make sense for this interface.

\section{A new \cs{parshape} model}

As part of development of \pkg{l3galley}, Joseph Wright has proposed a new model for splitting up the functions of the \cs{parshape} primitive into three logical elements:
\begin{itemize}
\item Margins between the edges of the galley and the paragraph (for example an indented block);
\item Cut-out sections running over a fixed number of lines, to support \enquote{in place} figures and so forth;
\item Running or single-paragraph shape.
\end{itemize}
There are additional elements to consider here, for example whether lines are the best way to model the length of shaping, how to handle headings, cut-outs at page breaks, \emph{etc.}

\section{Globally optimized pagination of documents}

Throughout 2016 Frank Mittelbach has worked on methods and algorithms for globally optimizing the pagination of documents including those that contain floats. Early research results have been presented at Bacho\TeX{} 2016, TUG 2016 in Toronto and later in the year at \mbox{DocEng'16}, the ACM Symposium on Document Engineering in Vienna. A link to the ACM paper (that allows a download free of charge) can be found on the project website~\cite{project-publications}. The site also holds the speaker notes from Toronto and will host a link to a video of the presentation once it becomes available.

The framework developed by Frank is based on the extended functionality provided by \hologo{LuaTeX}, in particular its callback functions that allow interacting with the typesetting process at various points. The algorithm that determines the optimal pagination of a given document is implemented in {Lua} and its results are then used to direct the formatting done by the \TeX{} engine.

At the current point in time this is a working prototype but not yet anywhere near a production-ready system. However, the work so far shows great potential and Frank is fairly confident that it will eventually become a generally usable solution.

\section{Looking forward}

The \hologo{LuaTeX} engine has recently reached version~1.0. This may presage a stable \hologo{LuaTeX} and is likely to result in wider use of this engine in production documents. If that happens we expect to implement some of the more complex functionality (such as complex pagination requirements and models) only for \hologo{LuaTeX}.

\begin{thebibliography}{10}
\raggedright
\bibitem{project-publications}
Links to various publications by members of the \LaTeX{} Project Team.
\newblock \url{https://www.latex-project.org/publications}.
\end{thebibliography}

\end{document}
{ "alphanum_fraction": 0.7781781782, "avg_line_length": 45.2036199095, "ext": "tex", "hexsha": "c8ac0dffbe3547c4d942951112fc61bb2487b9b7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7dd862f7fc2a2dd72105c08eb54dacbff6a16efa", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "benzea/latex3", "max_forks_repo_path": "news/l3news10.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7dd862f7fc2a2dd72105c08eb54dacbff6a16efa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "benzea/latex3", "max_issues_repo_path": "news/l3news10.tex", "max_line_length": 79, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7dd862f7fc2a2dd72105c08eb54dacbff6a16efa", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "benzea/latex3", "max_stars_repo_path": "news/l3news10.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-09T11:10:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-09T11:10:26.000Z", "num_tokens": 2467, "size": 9990 }
% This file contains the content for a main section
\regularsectionformat % Change formatting to that of "Introduction" section
%% Modify below this line %%
\chapter{LMT use cases}

Two styles of image modification are common in post-production: interactive modification, either across the entire frame or in isolated regions of interest, and a preset systematic modification across the entire frame. The interactive image modification is termed `grading.' The ACES term for preset systematic, full-frame image modification is `look modification.' Look modification is performed using a Look Modification Transform (LMT).

\section{Emulation of photochemical processing}

Though modern grading systems are very powerful, some whole-frame color transformations are too complex for even a skilled colorist to accomplish using grading system controls. Often the complexity arises when the creative intent is to simulate, for frames captured with digital cinema cameras, the nonlinear color and exposure relationships used in film laboratory photochemical processing, especially nonstandard photochemical processing. Examples of such color transformations include:
\begin{itemize}
\item `Bleach Bypass' emulation: modification of image color values to achieve a unique desaturated appearance mimicking projection of a print that had skipped a normal laboratory bleaching step.
\item Technicolor 3-strip emulation: modification of image color values to achieve a saturated, higher-contrast appearance mimicking projection of a print from Technicolor's imbibition dye transfer process (c. 1938).
\item Kodak Vision 3 print film emulation: modification of image color values to achieve a reproduction of the relationship between scene exposure values and projected film imagery resulting from the use of Kodak's latest film stocks.
\end{itemize}

\autoref{fig:photochemical} illustrates how a colorist could prepend one or more emulation LMTs to the RRT (which itself precedes a selected ODT), so that his or her time could be spent on sequence, shot and/or region-specific color requests from the client. The grade modifies the image first, followed by the process emulation provided by the LMT.

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{photochemicalProcessing.png}
\caption{}
\label{fig:photochemical}
\end{center}
\end{figure}

\section{Systematic Color Correction (and application of ASC CDL)}

The LMT takes as input an image in the ACES color space and yields a modified image that is still in the ACES color space. As a consequence, LMTs can be `chained' together, one after another. \autoref{fig:lmtChain} shows a grading workflow where, prior to applying the `Kodak Vision 3 emulation' LMT described above, the colorist applies an `ASC CDL' LMT---very likely one whose parameter values were chosen by the cinematographer on-set to modify the default `look' of emulated Kodak Vision 3 stock.

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{chainingLMTs.png}
\caption{}
\label{fig:lmtChain}
\end{center}
\end{figure}

\note{The values of the ASC CDL in this case are only valid in the context of the selected `Kodak Vision 3 emulation' LMT. If this LMT were removed, the ASC CDL values would no longer be valid.}

Note that the ASC CDL LMT incorporates a conversion from ACES to ACEScc before the ASC CDL operations are applied, and likewise incorporates a conversion from ACEScc to ACES after the ASC CDL operations have been applied.
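For reference, the operation being wrapped is the standard per-channel ASC CDL Slope/Offset/Power transform (quoted here from the general ASC CDL definition rather than from this document), which is applied to each color channel and followed by a cross-channel saturation adjustment:
\[
\mathit{out} = \left( \mathit{in} \times \mathit{slope} + \mathit{offset} \right)^{\mathit{power}}
\]
where, in the LMT described above, \textit{in} and \textit{out} are ACEScc values by virtue of the wrapping conversions.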
This `wrapping' of ASC CDL operations is a key capability of the ACES Common LUT format.

\section{Trim Pass Grading}

Content today is delivered across a wide range of output devices, each of which has its own color space and characteristic brightness. Creative adjustments to the look of shots are often needed to enhance the content's appearance beyond the original creative intent. The client might desire to accentuate the differences between the results of the viewing pipelines appropriate for theatrical exhibition, for home video and for mobile streaming. This could be done by having three workflows that differed only in that the first had no LMT `accentuating' the image for any nonstandard viewing environment, the second had an LMT just prior to the application of the RRT and an ODT designated as appropriate for home video viewing, and the third had an LMT just prior to the application of the RRT and an ODT designated as appropriate for viewing with content streamed to mobile devices, as shown in \autoref{fig:trimPassGrading}.

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{trimPassGrading.png}
\caption{}
\label{fig:trimPassGrading}
\end{center}
\end{figure}

\section{Flexible pre-sets for creative modifications}

Separation of grading and LMT(s) allows for a production to make significant changes in creative decisions that affect the entire frame equally, without requiring the colorist to start from scratch, or ideally without even requiring a trim pass. For example, the client might start a production shooting `day for night' and use an LMT to accomplish this result (\autoref{fig:dayForNight}).

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{dayForNight.png}
\caption{}
\label{fig:dayForNight}
\end{center}
\end{figure}

A change in creative direction (say, after a test screening) might place the captured action two hours earlier, so `day for night' might become `day for dusk'. Since the LMT is separate from the grade, the change may be made without requiring lengthy and expensive colorist intervention. A new LMT is simply swapped into the old LMT's place (\autoref{fig:dayForDusk}).

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{dayForDusk.png}
\caption{}
\label{fig:dayForDusk}
\end{center}
\end{figure}

\section{Permanent Color Modification (archival)}

The workflows above all show a `transient' processing of image file to displayed output, with the display being a calibrated grading monitor or projector. It is also completely valid and correct to archive the input to the RRT as an ACES container file, `baking in' the grade and any LMT application(s), as shown in \autoref{fig:archival}.

\begin{figure}[H]
\begin{center}
\includegraphics[width=\textwidth]{archival.png}
\caption{}
\label{fig:archival}
\end{center}
\end{figure}

A person who retrieves an ACES file need not know about the grades and LMT(s) applied to produce the desired creative result; by virtue of being an ACES file, the image `speaks for itself' when the RRT and a selected ODT are applied.

It is extremely important that the LMT authors preserve as much of the dynamic range of the LMT's input ACES RGB relative exposure values as is possible. This provides the maximum amount of latitude for a colorist grading through the LMT.
It also preserves the maximum amount of grading latitude for someone who later retrieves stored ACES container files created by `baking in' the effect of the LMT to the graded ACES images, when remastering for a radically different display or viewing environment (e.g. for grading on a higher-dynamic-range display than previously available). While full preservation of dynamic range and gamut is almost never possible, when faced with an engineering decision in which all other considerations are equal, the LMT author should choose the option that retains more of the LMT input's dynamic range and color gamut.

Preserving the integrity of the ACES RGB relative exposure values representing the scene means more than just not clipping highlight luminances or very deep shadow tones. It also means avoiding severe distortions of the distributions of ACES values, such as the distortion caused by a strong `gamma' operation, e.g. by a very large or very small value for one or more CDL `power' parameters.

Because LMTs are customizable and unique, and because it is essential to maintain the portability and archivability of an ACES project, it is always necessary to preserve the LMT transforms within any project where they are used.

\note{If a production wishes to preserve maximum flexibility for later remastering, it should archive the original ACES images, any clip-level metadata-bearing container encapsulating the original image files, any IDT(s), any pre-grading adjustments (see the following `LMTs and pre-grading for Visual Effects' section), any project-wide and shot-specific grading parameters, and the Look Transform (that is, the set of all LMTs employed to achieve the creative result, in their proper sequence).}

\section{Portability}

LMTs are expressed and transported using the Common LUT format (also known as the Academy/ASC LUT format or the ASC/Academy LUT format). The building blocks of an LMT include basic arithmetical operations, simple matrix application, 1D LUTs and 3D LUTs. Straightforward color transforms can often be expressed analytically using the first three of these building blocks. More complex (and typically empirically derived) LMTs may be conveyed as 3D LUTs. The Common LUT format was chosen because it can express, in a portable encoding, all of the abovementioned operations and LUTs.

\note{Using the floating point ACES RGB relative exposure values directly as 1D LUT indices requires a more complex lookup mechanism than found in traditional 1D LUT implementations. The Common LUT Format supports this type of lookup by using the halfDomain attribute of the LUT1D process node; see the Common LUT Format documentation for more information.}

\section{LMTs and pre-grading for Visual Effects}

In some cases, color corrections may be created prior to the colorist session in a scene-balancing `pre-grade.' This allows for all shots in a sequence to share identical LMTs `downstream' in the color modification pipeline. A motivating case would be a long sequence of daylight shots with varying color temperature. An example of this workflow, with two illustrations, is shown below.

The first illustration shows what might happen at a visual effects facility that receives a number of shots that will be edited together to make up a sequence.
\begin{figure}[H] \begin{center} \includegraphics[width=\textwidth]{vfx1.png} \caption{} \label{fig:vfx1} \end{center} \end{figure} When the visual effects are complete, the frames supplied to the colorist have both the pre-grade and the visual effect(s) `baked in.' The Look Transform is not `baked in' to this imagery, since it must be applied after the grade, but is instead carried as metadata, and is referenced by the ACES clip container. \begin{figure}[H] \begin{center} \includegraphics[width=\textwidth]{vfx2.png} \caption{} \label{fig:vfx2} \end{center} \end{figure}
{ "alphanum_fraction": 0.8001848429, "avg_line_length": 87.2580645161, "ext": "tex", "hexsha": "4833539d7ad6db501a96a1d0d1499b13de6971a9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "86284e2f145a89e3612f05ec7ea5a3e9d92cc779", "max_forks_repo_licenses": [ "AMPAS" ], "max_forks_repo_name": "colour-science/aces-dev", "max_forks_repo_path": "documents/LaTeX/TB-2014-010/sec-lmtusecases.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "86284e2f145a89e3612f05ec7ea5a3e9d92cc779", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AMPAS" ], "max_issues_repo_name": "colour-science/aces-dev", "max_issues_repo_path": "documents/LaTeX/TB-2014-010/sec-lmtusecases.tex", "max_line_length": 1012, "max_stars_count": 2, "max_stars_repo_head_hexsha": "76ea982a988d278dd12b563602771f46a5da3b83", "max_stars_repo_licenses": [ "AMPAS" ], "max_stars_repo_name": "KelSolaar/aces-dev", "max_stars_repo_path": "documents/LaTeX/TB-2014-010/sec-lmtusecases.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-27T06:46:50.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-04T18:12:13.000Z", "num_tokens": 2423, "size": 10820 }
\XtoCBlock{Exp} \label{block:Exp} \begin{figure}[H]\includegraphics{Exp}\end{figure} \begin{XtoCtabular}{Inports} In & Input u\tabularnewline \hline \end{XtoCtabular} \begin{XtoCtabular}{Outports} Out & Result of exp(u)\tabularnewline \hline \end{XtoCtabular} \subsubsection*{Description:} Computation of the exponential of the input. % include optional documentation file \InputIfFileExists{\XcHomePath/Library/Math/Doc/Exp_Info.tex}{\vspace{1ex}}{} \subsubsection*{Implementations:} \begin{tabular}{l l} \textbf{FiP8} & 8 Bit Fixed Point Implementation\tabularnewline \textbf{FiP16} & 16 Bit Fixed Point Implementation\tabularnewline \textbf{FiP32} & 32 Bit Fixed Point Implementation\tabularnewline \end{tabular} \XtoCImplementation{FiP8} \index{Block ID!4848} \nopagebreak[0] % Implementation details \begin{tabular}{l l} \textbf{Name} & FiP8 \tabularnewline \textbf{ID} & 4848 \tabularnewline \textbf{Revision} & 0.1 \tabularnewline \textbf{C filename} & Exp\_FiP8.c \tabularnewline \textbf{H filename} & Exp\_FiP8.h \tabularnewline \end{tabular} \vspace{1ex} 8 Bit Fixed Point Implementation % Implementation data structure \XtoCDataStruct{Data Structure:} \begin{lstlisting} typedef struct { uint16 ID; int8 *In; int8 Out; } EXP_FIP8; \end{lstlisting} \ifdefined \AddTestReports \InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Exp_FiP8.tex}{}{} \fi \XtoCImplementation{FiP16} \index{Block ID!4849} \nopagebreak[0] % Implementation details \begin{tabular}{l l} \textbf{Name} & FiP16 \tabularnewline \textbf{ID} & 4849 \tabularnewline \textbf{Revision} & 0.1 \tabularnewline \textbf{C filename} & Exp\_FiP16.c \tabularnewline \textbf{H filename} & Exp\_FiP16.h \tabularnewline \end{tabular} \vspace{1ex} 16 Bit Fixed Point Implementation % Implementation data structure \XtoCDataStruct{Data Structure:} \begin{lstlisting} typedef struct { uint16 ID; int16 *In; int16 Out; } EXP_FIP16; \end{lstlisting} \ifdefined \AddTestReports \InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Exp_FiP16.tex}{}{} \fi \XtoCImplementation{FiP32} \index{Block ID!4850} \nopagebreak[0] % Implementation details \begin{tabular}{l l} \textbf{Name} & FiP32 \tabularnewline \textbf{ID} & 4850 \tabularnewline \textbf{Revision} & 0.1 \tabularnewline \textbf{C filename} & Exp\_FiP32.c \tabularnewline \textbf{H filename} & Exp\_FiP32.h \tabularnewline \end{tabular} \vspace{1ex} 32 Bit Fixed Point Implementation % Implementation data structure \XtoCDataStruct{Data Structure:} \begin{lstlisting} typedef struct { uint16 ID; int32 *In; int32 Out; } EXP_FIP32; \end{lstlisting} \ifdefined \AddTestReports \InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Exp_FiP32.tex}{}{} \fi
{ "alphanum_fraction": 0.7133539307, "avg_line_length": 25.7787610619, "ext": "tex", "hexsha": "b9947f517bd077ac816e83ec618e45d703b427b9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "31f39b598afe271a7fd46ef1ee9e06c410b1120c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "AlexisTM/X2C", "max_forks_repo_path": "Library/Math/Doc/Exp.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "31f39b598afe271a7fd46ef1ee9e06c410b1120c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "AlexisTM/X2C", "max_issues_repo_path": "Library/Math/Doc/Exp.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "31f39b598afe271a7fd46ef1ee9e06c410b1120c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "AlexisTM/X2C", "max_stars_repo_path": "Library/Math/Doc/Exp.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 933, "size": 2913 }
%------------------------
% Resume Template
% Author : Jai Agarwal
% License : MIT
%------------------------

\documentclass[a4paper,10pt]{article}

\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[pdftex]{hyperref}
\usepackage{fancyhdr}
\usepackage{fontawesome5}
\usepackage{xcolor}

\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}

% Adjust margins
\addtolength{\oddsidemargin}{-0.530in}
\addtolength{\evensidemargin}{-0.375in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.45in}
\addtolength{\textheight}{1in}

\urlstyle{rm}

\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}

% Sections formatting
\titleformat{\section}{
  \vspace{-10pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-6pt}]

%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
  \item\small{
    \textbf{#1}{: #2 \vspace{-2pt}}
  }
}

\newcommand{\resumeItemWithoutTitle}[1]{
  \item\small{
    #1 \vspace{-2pt}
  }
}

\newcommand{\resumeSubheading}[4]{
  \vspace{-1pt}\item
    \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textbf{#1} & #2 \\
      \textit{#3} & \textit{#4} \\
    \end{tabular*}\vspace{-5pt}
}

\newcommand{\projectSubheading}[4]{
  \vspace{-1pt}\item
    \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textbf{#1} & #2 \\
      \textit{#3} & \textit{#4} \\
    \end{tabular*}\vspace{-5pt}
}

\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-3pt}}

\renewcommand{\labelitemii}{$\circ$}

\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}

%-----------------------------
%%%%%% CV STARTS HERE %%%%%%

\begin{document}

%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
  \textbf{{\LARGE Jai Agarwal}} & Email: \href{mailto:[email protected]}{[email protected]} \\
  \faGithub \href{https://github.com/jaibhageria}{ Github~~~: \color{blue}https://github.com/jaibhageria} & Mobile:~~~+91-966-3506-821~~~~~~\\
  \faLinkedin \href{https://linkedin.com/in/jai-bhageria}{ LinkedIn~: \color{blue}https://linkedin.com/in/jai-bhageria} \\
  \faGlobe \href{https://jaib.home.blog}{ Website~~: \color{blue}https://jaib.home.blog} \\
  \faIcon{graduation-cap}\color{blue}\href{https://scholar.google.com/citations?user=ENPIQwEAAAAJ&hl=en}{ Google Scholar account} \\
\end{tabular*}

%-----------EDUCATION-----------------
\section{\faIcon{book}~~Education}
\resumeSubHeadingListStart
\resumeSubheading
{PES University}{Bangalore, India}
{Bachelor of Technology - Computer Science and Engineering}{Aug 2016 - Jul 2020}
{\textit{{\newline{}\newline{}\textbf{CGPA: 9.48/10}{; Graduated First Class with Distinction \footnotesize{\faIcon{link}} \color{blue}\small\href{https://bit.ly/3FIEsQN}{Transcript}}}}}
{\scriptsize \textit{ \footnotesize{\newline{}\textbf{Courses:} Data Structures \& Algorithms, Operating Systems, Big Data \& Cloud Computing, Cloud Computing, Computer Networks \& Security, Object Oriented Design, Web Technologies - I \& II, Data Analytics, Machine Learning, Deep Learning.}}}
\resumeSubheading
{CMR NPS (CBSE)}{Bangalore, India}
{Secondary High School, Class-XII}{Jun 2014 - May 2016}
{\textit{{\newline{}\newline{}\textbf{Score: 95.2\%}{; secured A1 grade in all
subjects}}}}
{\scriptsize \textit{ \footnotesize{\newline{}\textbf{Subjects:} English, Mathematics, Physics, Chemistry, Computer Science}}}
\resumeSubheading
{Presidency School Kasthurinagar (CBSE)}{Bangalore, India}
{Secondary School, Class-X}{Jun 2013 - May 2014}
{\textit{{\newline{}\newline{}\textbf{CGPA: 10/10}{; secured A1 grade in all subjects}}}}
{\scriptsize \textit{ \footnotesize{\newline{}\textbf{Subjects:} English, Mathematics, Social Science, Hindi, Science}}}
\resumeSubHeadingListEnd

%-----------SKILLS-----------------
\vspace{-5pt}
\section{\faIcon{server}~~Skills Summary}
\resumeSubHeadingListStart
\resumeSubItem{Strengths}{~~~~~~~~~~~ Strong programming \& debugging ability, Web Development, Application Security, Data analysis}
\resumeSubItem{Languages}{~~~~~~~~~~ C, C++, Java, Python, Golang, HTML, PHP, JavaScript, SQL, Bash, R}
\resumeSubItem{Frameworks}{~~~~~~~~ Spring Boot, NodeJS, ReactJS, Flask, LAMP, NLTK, SpaCy, TensorFlow, Keras}
\resumeSubItem{Tools}{~~~~~~~~~~~~~~~~~~ Kubernetes, Docker, Git, Wireshark, Vagrant, Jenkins, Anaconda, Jupyter notebook}
\resumeSubItem{Platforms}{~~~~~~~~~~~ Linux, Web, MacOS, AWS, GCP, Azure}
\resumeSubItem{Soft Skills}{~~~~~~~~~~~ Writing, Team work, Presentation, Time Management}
\resumeSubItem{Sports \& Hobbies}{~Badminton, Table Tennis, Cricket, Yoga, Quizzing, Solving math puzzles}
\resumeSubHeadingListEnd

%-----------EXPERIENCE-----------------
\vspace{-5pt}
\section{\faIcon{suitcase}~~Experience}
\resumeSubHeadingListStart
\resumeSubheading
{Walmart Global Technology Services India Pvt. Ltd.}{Onsite/Remote}
{Software Developer (Full-time)}{Jan 2020 - Present}
\resumeItemListStart
\item\small{Started as a 6-month intern in the Walmart InfoSec team; converted to a full-time employee in the InfoSec Development team starting Aug 2020.}
\item\small{Worked on design and development of features for an internal \textbf{vault for secrets management}.}
\item\small{Contributed to development of features for an internal \textbf{password sharing and credential management} tool.}
\item\small{Had the unique opportunity to work with proprietary software used in \textbf{Walmart payment gateways} for credit card and CVV detail encryption/decryption with format-preserving encryption.}
\item\small{Involved in design and development of a service for \textbf{certificate lifecycle management}, which facilitates manual and auto renewal of certificates on Walmart load balancers.}
\resumeItemListEnd
\vspace{-3pt}
\resumeSubheading
{Walmart Global Technology Services India Pvt. Ltd.}{Onsite}
{Summer Intern (Full-time)}{May 2019 - Jul 2019}
\resumeItemListStart
\resumeItem{\color{blue}\href{https://github.com/cloudmarker/cloudmarker}{Cloudmarker}}
{Contributed to the Cloudmarker open source project, which is a framework used for cloud monitoring and auditing. Developed the plugin for auditing of GCP cloud environments.}
\resumeItem{Risk score}
{Worked on integrating an ML model which used clustering techniques to categorize security misconfigurations and generate a risk score.}
\resumeItemListEnd
\vspace{-3pt}
\resumeSubheading{GoIbibo - A go-MMT group company}{Onsite}
{Software Development Intern (Full-time)}{Jun 2018 - Aug 2018}
\resumeItemListStart
\resumeItem{Identification of fraudulent flight bookings}
{Developed a Chrome extension using JavaScript to monitor and track down agents committing online booking fraud in call centers.}
\resumeItem{GIA chat bot}
{Built an automated chat conversation for availing a cab voucher and booking a heli taxi to the airport.
GIA is a Goibibo-built Intelligent chat system.}
\resumeItemListEnd
\resumeSubHeadingListEnd

%-----------PROJECTS-----------------
\vspace{-5pt}
\section{\faIcon{laptop}~~Projects}
\resumeSubHeadingListStart
\projectSubheading
{\color{blue}\href{https://bit.ly/2Zm3BBc}{Exploring the use of ML models for replacing Database Indexes \footnotesize{\faIcon{link}}}}{Research Project}
{Database, Hierarchical models, Neural networks, B+ Trees, Indexes}{Aug 2019 - Dec 2019}
\resumeItemListStart
\item\small{Explored using machine learning models to replace database index structures, which exist in the form of B+Trees.}
\item\small{Worked with Cassandra DB; experimented with various learned models to learn the data storage patterns and make predictions on the location of a given record.}
\item\small{The learned models reduced the additional storage required for indexes and also reduced the time required to make database queries.}
\resumeItemListEnd
\vspace{2pt}
\projectSubheading
{\color{blue}\href{https://bit.ly/3lbhVUZ}{Recommendation system for agriculture \footnotesize{\faIcon{link}}}~~~\href{https://github.com/jaibhageria/Agrolyzer}{\footnotesize{\faGithub} Code}~~~\href{https://youtu.be/-an0M1r3MrM}{\footnotesize{\faIcon{youtube} Video}}}{Personal Project}
{Precision agriculture, Neural networks, Time series analysis, Supervised learning}{Oct 2018 - Dec 2018}
\resumeItemListStart
\item\small{Developed a recommendation engine to suggest to farmers the crops they should plant, based on factors such as land area, weather patterns and crop season.}
\item\small{Used neural networks to predict crop production and time series analysis for predicting weather patterns. The recommendation to plant a crop was given based on estimated production and predicted weather.}
\resumeItemListEnd
\vspace{4pt}
\projectSubheading
{{Indian Cricket League prediction model}~~~\color{blue}\href{https://github.com/jaibhageria/IPLPredictor}{\footnotesize{\faGithub} Code}}{Personal Project}
{Tools used: Apache Spark MLlib, Scala, PySpark}{Sep 2018 - Nov 2018}
\resumeItemListStart
\item\small{Developed a score-wicket prediction model using Apache Spark MLlib to simulate the outcomes of Indian Premier League cricket matches during the 2018 season.}
\item\small{Ball-by-ball data from previous matches was used to develop two simulation models: Bayesian probability-based and decision tree-based.}
\resumeItemListEnd
\vspace{2pt}
\projectSubheading
{{Container Orchestration}~~~\color{blue}\href{https://github.com/jaibhageria/ContainerOrchestrator}{\footnotesize{\faGithub} Code}}{Personal Project}
{Tools used: Docker, Amazon Web Services - EC2 and AWS Load balancer}{Jan 2019 - Mar 2019}
\resumeItemListStart
\item\small{Developed a container orchestrator similar to Kubernetes that can perform load balancing, fault tolerance, and auto-scaling.}
\item\small{The orchestrator engine ran successfully inside the Acts EC2 instance and load balanced all incoming HTTP requests equally across every Acts container.}
\resumeItemListEnd
\resumeSubHeadingListEnd

%-----------PUBLICATIONS & CONFERENCES-----------------
\vspace{-5pt}
\section{\faIcon{file-alt}~~Publications \& Conferences}
\resumeSubHeadingListStart
\resumeSubItem{Research paper: \color{blue}\href{https://link.springer.com/chapter/10.1007/978-981-15-9774-9_68}{Graph-Assisted Attention for Path Finding in Question Answering Task}~\footnotesize{\faIcon{link}}}{}
\resumeItemListStart
\item\small{The paper was published in \textbf{Emerging Technologies in Data Mining and Information
Security} (a Springer publication) on pages 735--748 on 5th May 2021.}
\item\small{Used knowledge graphs to answer multi-hop type, path-finding based questions from the bAbI dataset.}
\item\small{Tech: NLP, Neural networks, End to end memory networks, Dynamic memory networks, Knowledge graphs.}
\resumeItemListEnd
\vspace{2pt}
\resumeSubItem{Conference: IEMIS-2020, a Springer AISC conference~~\color{blue}\href{https://bit.ly/3FJkaq8}{Certificate}~\footnotesize{\faIcon{link}}}{}
\resumeItemListStart
\item\small{The \textbf{2nd International Conference on Emerging Technologies in Data Mining and Information Security} was held from 2nd--4th July 2020 in Kolkata, India. The theme was Data mining, Machine learning, IoT and Information Security.}
\item\small{Presented this {\color{blue}\href{https://bit.ly/3xnp758}{research paper}} during the conference.}
\resumeItemListEnd
\vspace{2pt}
\resumeSubItem{Conference: IEEE CCEM 2020, Pre-Conference workshop}{}
\resumeItemListStart
\item\small{The \textbf{IEEE International Conference on Cloud Computing in Emerging Markets (CCEM) Pre-Conference Workshop} was held on 29th February 2020 in Bangalore, India.}
\item\small{Presented this {\color{blue}\href{https://bit.ly/2Zm3BBc}{research article}} during the conference and received valuable feedback from professionals and academics.}
\resumeItemListEnd
\resumeSubHeadingListEnd

%-----------Awards-----------------
\vspace{-5pt}
\section{\faIcon{trophy}~~Honors \& Awards}
\begin{description}[font=$\bullet$]
\item{~~~~Awarded the \textbf{CNR Rao Merit Scholarship for 7 semesters in PES University} for excellent performance in academics and being in the top 20\% of the Computer Science batch.}
\vspace{-5pt}
\item {~~~~Finished in \textbf{third place in the CTF competition} organised by ISFCR (Center for Information Security, Forensics and Cyber Resilience), PES University, on 27th July 2019.}
\vspace{-5pt}
\item {~~~~Completed \textbf{GitHub Hacktoberfest 2019} successfully. Contributed to 4 open source projects.}
\vspace{-5pt}
\item {~~~~Qualified for the Grand Finale of \textbf{SynerGE presents HACK'E'LTH 2019} organised by GE from 19th--20th September 2019 at Bangalore, India. Finished among the top 10 teams across India.}
\vspace{-5pt}
\item {~~~~Successfully completed the \textbf{Crio Launch 2020 Student Developer program} from 31st Jan 2020 - 10th Apr 2020. Link to my project profile is {\color{blue}\href{https://criodo.github.io/Crio-Launch-Feb-2020-jai-bhageria/}{here}} and the certificate is {\color{blue}\href{https://raw.githubusercontent.com/CrioDo/Crio-Launch-Feb-2020-jai-bhageria/gh-pages/static/media/Crio-Launch-Feb-2020-Certificate.png}{here}}.}
\vspace{-5pt}
\item {~~~~Won \textbf{first place in In-QUIZ-itives 2021}, a tech quiz organised by Walmart in June 2021.}
\vspace{-5pt}
\item {~~~~Finished in \textbf{first place in the Core NetWars CTF tournament} organised by SANS \& Walmart in October 2021.}
\end{description}

\end{document}
{ "alphanum_fraction": 0.7208167817, "avg_line_length": 57.6265060241, "ext": "tex", "hexsha": "52ea10c25a24b6e7b7c3782726669fda63145fa2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ab49d40f9b85985f21ccbfffc4e9158e59575225", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jaibhageria/MyResume", "max_forks_repo_path": "myresume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ab49d40f9b85985f21ccbfffc4e9158e59575225", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jaibhageria/MyResume", "max_issues_repo_path": "myresume.tex", "max_line_length": 432, "max_stars_count": null, "max_stars_repo_head_hexsha": "ab49d40f9b85985f21ccbfffc4e9158e59575225", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jaibhageria/MyResume", "max_stars_repo_path": "myresume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3976, "size": 14349 }
% cGENIE HOW-TO document
% Andy Ridgwell, March 2011
%
% ---------------------------------------------------------------------------------------------------------------------------------
% 11/03/24: added 'Determine the CH4 flux required to achieve a particular atmospheric pCH4 value'
% 11/04/06: added 'Prescribe a spatial map of benthic tracer release'
% 11/08/02: added data-saving info
% ---------------------------------------------------------------------------------------------------------------------------------

\documentclass[10pt,twoside]{article}
\usepackage[paper=a4paper,portrait=true,margin=2.5cm,voffset=0pt,ignorehead,footnotesep=1cm]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{paralist}
\usepackage{caption}
\usepackage{float}
\usepackage{wasysym}

\linespread{1.1}

\setlength{\pltopsep}{2.5pt}
\setlength{\plparsep}{2.5pt}
\setlength{\partopsep}{2.5pt}
\setlength{\parskip}{2.5pt}

\title{cGENIE HOW-TO}
\author{Andy Ridgwell}
\date{\today}

\begin{document}

%=================================================================================================================================
%=== BEGIN DOCUMENT ==============================================================================================================
%=================================================================================================================================

\maketitle

%=================================================================================================================================
%=== CONTENTS ====================================================================================================================
%=================================================================================================================================

\tableofcontents

%=================================================================================================================================
%=== CHAPTERS ====================================================================================================================
%=================================================================================================================================

%---------------------------------------------------------------------------------------------------------------------------------
%--- Introduction ----------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{Introduction}\label{Introduction}

This document provides HOW-TOs for \texttt{cGENIE} users; where obvious changes have been made to existing items these are noted, and new items include instructions for speeding up the model and the incorporation of (ocean) cadmium.

%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Getting started ----------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{HOW-TOs: Getting started}\label{how-to-0}

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Do something dumb}\label{Do some thing dumb}

Easy! Just close your eyes and change some parameter values at random.
Better still, start using the model without reading the manual first ...

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Install \texttt{cGENIE}}\label{Install cGENIE}

See: \texttt{cGENIE} \textit{Quick-start Guide}. (Also refer to the \texttt{READ-ME} file for, e.g., details of changes in configuring and running \texttt{cGENIE} compared to \texttt{GENIE}.)

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Find configurations for \texttt{cGENIE}}\label{Find configurations for cGENIE}

A series of (example) \texttt{cGENIE} configurations is provided, many of which are detailed in full in the \texttt{cGENIE} \textit{Tutorial} document. Example configurations comprise \textit{base-config} and \textit{user-config} files, plus any \textit{forcings} needed.
\begin{compactitem}
\item \texttt{cGENIE} \textit{base-configs} are stored in: \\
\texttt{\~{}/cgenie/genie-main/configs} \\
and all start with '\texttt{cgenie\_}', for example: \\
\texttt{cgenie\_eb\_go\_gs\_ac\_bg\_hadcm3l\_eocene\_36x36x16\_2i\_080928\_BASE.config}
\item \texttt{cGENIE} user configs are stored in: \\
\texttt{\~{}/cgenie/genie-userconfigs}
\item \texttt{cGENIE} forcings are stored in: \\
\texttt{\~{}/cgenie/genie-forcings}
\end{compactitem}

%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: General ------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{HOW-TOs: General}\label{how-to-1}

%---------------------------------------------------------------------------------------------------------------------------------

%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Model output -------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{HOW-TOs: Model output}\label{how-to-2}

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Set the frequency of time-series and time-slice output}\label{how-to-2a}

See: \textit{c}GENIE \textit{user-manual} (section 5).

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Diagnose orbital (insolation) changes}\label{how-to-2b}

Two new \texttt{misc} category time-series files have been provided:
\vspace{-10pt}\begin{verbatim}biogem_series_misc_ocn_insol.res\end{verbatim}\vspace{-10pt}
and
\vspace{-10pt}\begin{verbatim}biogem_series_misc_ocn_swflux.res\end{verbatim}\vspace{-10pt}
with the SW (shortwave) flux (\texttt{swflux}) being equivalent to the incident strength of solar radiation at the surface (\texttt{insol}) but additionally accounting for the prescribed planetary albedo. Both variables are calculated and saved as a global mean over the ocean grid (2nd data column) and have units of W m-2.
\\In addition, to help diagnose orbital variability, \texttt{biogem\_series\_misc\_ocn\_insol.res} includes two further insolation variables (3rd and 4th columns). These reflect the strength of insolation at a single point in the annual cycle and at discrete latitudes (i.e. \texttt{j} grid indices). (The insolation at two different latitudes is saved so that, e.g., both N and S hemisphere insolation signals can be simultaneously recorded.)
\\Three new namelist parameters are provided to configure this:
\begin{compactenum}
\item \texttt{bg\_par\_t\_sig\_count} -- which sets the BIOGEM 'time-step' in the annual cycle at which the insolation value will be saved. For example, with 96 time-steps in the ocean physics and a 2:1 GOLDSTEIN:BIOGEM gearing (the default for the 16-level configuration), there are 48 BIOGEM time-steps. (It is left to the user to work out which part of the annual cycle \textit{c}GENIE starts at (i.e. time-step \#1) -- I haven't a clue ...)
\item \texttt{bg\_par\_sig\_j\_N} -- sets the 'j' value for a northern hemisphere (but could be southern) snap-shot.
\item \texttt{bg\_par\_sig\_j\_S} -- sets the 'j' value for a southern hemisphere snap-shot.
\end{compactenum}

%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Climate ------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{HOW-TOs: Climate}\label{how-to-3}

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Adjust solar forcing in a time-dependent manner}\label{Adjust solar forcing in a time-dependent manner}

The value of the solar constant in cGENIE is set by the \textit{namelist parameter}: \\
\texttt{ma\_genie\_solar\_constant} and by default is set to 1368 W m-2, i.e.:
\vspace{-10pt}\begin{verbatim}ma_genie_solar_constant=1368.0\end{verbatim}\vspace{-5pt}
Specifying a different value for \texttt{ma\_genie\_solar\_constant} in the user config file allows the solar forcing of the EMBM to be altered. For example, to induce a 'snowball Earth'-like state under a solar constant applicable to the late Neoproterozoic (some 6\% less than modern) you would set:
\vspace{-10pt}\begin{verbatim}ma_genie_solar_constant=1330.56\end{verbatim}\vspace{-5pt}
Modification of \texttt{ma\_genie\_solar\_constant} can be turned into a time-dependent solar forcing, but only by frequent re-starting using a sequence of short model integrations. Alternatively, a crude (temporary) hack is provided to allow a semi-continual adjustment of solar forcing. Whether you wish to vary the solar constant in a time-dependent manner is determined by the \textit{parameter}:\\
\texttt{bg\_ctrl\_force\_solconst}. By default this is set to \texttt{.false.}. By adding to the user config file:
\vspace{-10pt}\begin{verbatim}bg_ctrl_force_solconst=.true.\end{verbatim}\vspace{-5pt}
a time-varying change in the value of the solar constant will be imposed. For this, BIOGEM will expect the presence of a file called \texttt{biogem\_force\_solconst\_sig.dat} in the forcing directory\footnote{REMEMBER: The location of which is specified by the namelist parameter bg\_par\_fordir\_name.}.
\texttt{biogem\_force\_solconst\_sig.dat} must contain two columns of information: the first is a time marker (year) and the second is a paired value for the solar constant. In the current crude incarnation of this feature, the time markers (1st column) \textbf{must} correspond exactly to the time markers in the time-series specification file\footnote{REMEMBER: The filename of which is specified by the namelist parameter bg\_par\_infile\_sig\_name.}. GENIE will exit with an appropriate error message if this is not the case.

Seasonal solar insolation is re-calculated each year with a call to:
\vspace{-10pt}\begin{verbatim}radfor(genie_solar_constant)\end{verbatim}\vspace{-5pt}
(EMBM file: \texttt{radfor.F}) at the start of the time-stepping loop in \texttt{genie.F}. At each time-marker, BIOGEM sets the value of \texttt{genie\_solar\_constant} equal to the corresponding value specified in:\\
\texttt{biogem\_force\_solconst\_sig.dat}. Thus, regardless of how closely-spaced the time-marker years are, (seasonal) solar insolation is only adjusted once a year. For a longer time-marker interval than yearly, no interpolation is performed on the series of solar constant values, and in this way time-dependent solar forcing currently differs from the calculation of other forcings.

EXAMPLE: A simple file might look something like:
\vspace{-5pt}\begin{verbatim}
-START-OF-DATA-
0.5 1367.0
1.5 1366.0
2.5 1365.0
3.5 1364.0
4.5 1363.0
5.5 1362.0
6.5 1361.0
7.5 1360.0
8.5 1359.0
9.5 1358.0
10.5 1357.0
-END-OF-DATA-
\end{verbatim}\vspace{-5pt}
which will decrease the value of the solar constant by 1 W m-2 each year. Note that because the solar forcing is only updated each year (with the call to \texttt{radfor.F}), the first year will be characterized by climate with a solar constant of 1368 W m-2, the default. Although BIOGEM sets a new value of \texttt{genie\_solar\_constant} (1367 W m-2) midway through the first year, it is only at the start of the second year that solar insolation is recalculated according to the reduction in solar constant.

%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Ocean biology and biogeochemical cycling ---------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------

\newpage
\section{HOW-TOs: Ocean biology and biogeochemical cycling}\label{how-to-4}

%---------------------------------------------------------------------------------------------------------------------------------

\subsection{Configure an abiotic ocean}\label{Have an abiotic ocean}

Biological productivity in the ocean can be completely turned off to create an abiotic ocean (why you would want to do this is another matter ... perhaps analyzing the solubility pump, or a 'deep-time' study prior to the advent of significant marine life ... (?)). The biological option is set by the \textit{parameter} \texttt{bg\_par\_bio\_prodopt}, which by default takes a value of \texttt{"1N1T\_PO4MM"}, selecting the scheme described in \textit{Ridgwell et al.} [2007a].
To have no biological production in the ocean, add the following line to the end of the \textit{user-config} file (or edit the existing line in the section '\texttt{--- BIOLOGICAL NEW PRODUCTION ---}'):
\vspace{-11pt}\begin{verbatim}
bg_par_bio_prodopt="NONE"
\end{verbatim}\vspace{-5.5pt}
With this set, you do not have to specify any biological production or remineralization namelist parameter values in the \textit{user-config} file.
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Specify the CaCO3:POC export ratio}\label{CaCO3:POCrainratio}
In the default\footnote{The default biological scheme is given by: \texttt{bg\_par\_bio\_prodopt='1N1T\_PO4MM'}.} 'biological' scheme in GENIE, the CaCO3:POC export ratio from the surface ocean in BIOGEM is parameterized as a power law function of the degree of ambient oversaturation w.r.t. calcite [\textit{Ridgwell et al.}, 2007a,b]. The calculated CaCO\begin{math}_3\end{math}:POC ratio will therefore vary both spatially, particularly w.r.t. latitude (and temperature), as well as in time, if the surface ocean saturation state changes. The latter can arise from climatic (temperature) or circulation changes, or through a change in the DIC and/or ALK inventory of the ocean (such as resulting from emissions of fossil fuel CO2) or the re-partitioning of these species vertically within the ocean (e.g., as a result of any change in the strength of the biological pump).
There may be situations in which it is advisable to hold the CaCO\begin{math}_3\end{math}:POC export ratio invariant. For instance, considering the current very considerable uncertainties in the impacts of ocean acidification on marine calcifiers [\textit{Ridgwell et al.}, 2007a], the safest assumption is arguably to exclude any acidification impact on calcification and carbonate export. Specifying a spatially uniform value of the CaCO\begin{math}_3\end{math}:POC ratio (e.g. 0.25 or 0.3) also allows comparison with the results of early carbon cycle model studies. For deeper-time geological studies where little about marine carbonate production may be known \textit{a priori}, a spatially uniform value represents the simplest possible assumption (e.g., \textit{Panchuk et al.} [2008]).
BIOGEM can be told to use a prescribed (spatially and temporally invariant) 2D field of CaCO\begin{math}_3\end{math}:POC export rain ratios (instead of calculating these internally as a function of ocean chemistry) by setting the 'Replace internal CaCO3:POC export rain ratio?' namelist flag to \texttt{.true.}:
\vspace{-5.5pt}\begin{verbatim}
bg_ctrl_force_CaCO3toPOCrainratio=.true.
\end{verbatim}\vspace{-5.5pt}
You must also then provide a 2D data field that specifies the value of the rain ratio at each and every surface ocean grid point. The filename of this field is set by default to:
\vspace{-5.5pt}\begin{verbatim}
bg_par_CaCO3toPOCrainratio_file="CaCO3toPOCrainratio.dat"
\end{verbatim}\vspace{-5.5pt}
and the file must be located in the 'BIOGEM data input directory'\footnote{\texttt{\$RUNTIME\_ROOT} being equal to \texttt{\~{}/genie}.}, which by default is: \\\texttt{bg\_par\_indir\_name="\$RUNTIME\_ROOT/genie-biogem/data/input"}
This 2-D field must be in the form of an ASCII file with space (or tab) separated values arranged in rows and columns of latitude and longitude.
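If all that is needed is a spatially uniform ratio, such a file is trivial to generate by whatever means you prefer. For example (a minimal sketch only, assuming the illustrative uniform value of 0.25 mentioned above, the default \begin{math}36\times36\end{math} grid, and the default filename):
\vspace{-5.5pt}\begin{verbatim}
# write a uniform 36x36 field (0.25 is an illustrative value only)
awk 'BEGIN {for (j=1;j<=36;j++) {for (i=1;i<=36;i++) printf "0.25 "; printf "\n"}}' > CaCO3toPOCrainratio.dat
\end{verbatim}\vspace{-5.5pt}
Any other method of writing out a \begin{math}36\times36\end{math} grid of identical values would do equally well.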
The format of the file must follow the GOLDSTEIN ocean grid with the first line being the most Northerly row, and the last line the most Southerly row of grid points. Data along a row is from West to East. The longitude of the first column of values must be consistent with the defined starting longitude of the model grid, which is specified by the namelist parameter \texttt{gm\_par\_grid\_lon\_offset}\footnote{-260E by default}. Examples are given in the code repository\footnote{e.g., \texttt{\~{}/genie/genie-biogem/data/input/CaCO3toPOCrainratio\_worbe2\_preindustrial.dat}}. If you are using a uniform value, it is an easy enough job to create a \begin{math}36\times36\end{math} array of the value you so desire (e.g., as sketched above)\footnote{It doesn't matter if you specify a value over land because only values associated with wet cells will be acted on.}.
If you want to hold a previously-calculated (spatially variable) CaCO\begin{math}_3\end{math}:POC field constant, then the easiest way to achieve this is to copy the information contained in the \textit{time-slice} results field:\\ \texttt{misc\_sur\_rCaCO3toPOC} in the results netCDF file \texttt{fields\_biogem\_2d.nc}\footnote{You must have the 'miscellaneous properties' time-slice save flag set to:\\ \texttt{bg\_ctrl\_data\_save\_slice\_misc=.true.} (the default) for this field to be saved.}. Because this is a 3D data field (\begin{math}36\times36\times8\end{math}), carefully highlight just the surface ocean (2D) distribution (e.g., from the Panoply viewer) or extract from the netCDF file by some other means, and then copy and paste into:\\ \texttt{CaCO3toPOCrainratio.dat} (or whatever you have specified the filename as). When copying Panoply data, 'NaN's should be replaced by values of zero. Take care that the final (steady-state) time-slice is being copied and not the first (un-spunup) one ...
\textbf{TIP}: In order to quantify the importance of calcification feedbacks with CO2 and climate, two model integrations are required: one with the CaCO\begin{math}_3\end{math}:POC ratio held constant and the other with it allowed to vary, thereby allowing the effect of a changing CaCO\begin{math}_3\end{math}:POC ratio on the system to be elucidated.
%---------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Implementing an alternative fixed remineralization profile for POC (e.g. Martin curve)}\label{fixedremin}
There are several options for utilizing a fixed remineralization profile for POC, which by default is a double exponential (see: \textit{Ridgwell et al.} [2007a]). The fixed remineralization profile scheme is set by the string parameter: \texttt{bg\_par\_bio\_remin\_fun}. By default, it has a value of '\texttt{efolding}'.
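As an illustration, a \textit{user-config} selecting a simple Martin curve might include (a minimal sketch only; the available options, and the requirement to assign all organic matter to a single POC phase, are detailed below):
\vspace{-5.5pt}\begin{verbatim}
bg_par_bio_remin_fun="Martin1987"
bg_par_bio_remin_POC_frac2=0.0
\end{verbatim}\vspace{-5.5pt}
The (globally-uniform) exponent of the power law can then be adjusted away from its default via \texttt{bg\_par\_bio\_remin\_martin\_b}.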
Currently available options are:
\begin{compactitem}
\item \texttt{Martin1987}, which applies a globally-uniform power, set by: \\ \texttt{bg\_par\_bio\_remin\_martin\_b} \\(which by default has a value of -0.858)
\item \texttt{Henson2012}, which calculates the value of b according to sea surface temperature (SST): \\b = (0.024 * SST) - 1.06
\end{compactitem}
To use either (on their own), all organic matter should be assigned to a single phase, with the 2nd (recalcitrant) fraction set to zero:
\vspace{-5.5pt}\begin{verbatim}
bg_par_bio_remin_POC_frac2=0.0
\end{verbatim}\vspace{-5.5pt}
Note that these parameterizations can be combined with ballasting (\ref{ballasting}) and will act on the 'free' POC phase (i.e. the one not controlled by the ballasting parameterization).
%---------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Implement particulate organic carbon 'ballasting'}\label{ballasting}
The default particulate organic carbon (POC) ocean interior remineralization scheme is based on fixed, prescribed profiles of relative POC flux to depth (e.g. see: \textit{Ridgwell} [2001]; \textit{Ridgwell et al.} [2007a]). A 'ballasting' control on POC transport to depth can instead be implemented by:
\vspace{-5.5pt}\begin{verbatim}
bg_ctrl_bio_remin_POC_ballast=.true.
bg_ctrl_bio_remin_POC_fixed=.false.
\end{verbatim}\vspace{-5.5pt}
The POC 'carrying coefficients' for CaCO3, opal, and detrital (lithogenic) material are set by the parameters:
\vspace{-5.5pt}\begin{verbatim}
bg_par_bio_remin_ballast_kc
bg_par_bio_remin_ballast_ko
bg_par_bio_remin_ballast_kl
\end{verbatim}\vspace{-5.5pt}
(for CaCO3, opal, and lithogenics, respectively). Note that the ballast coefficient units are: g POC m-2 yr-1 (g ballast m-2 yr-1)-1 (i.e. \textbf{g g-1}), which are internally converted to: mol POC m-2 yr-1 (mol ballast m-2 yr-1)-1 (i.e. \textbf{mol mol-1}).
If \verb=bg_ctrl_bio_remin_POC_fixed= is kept at \verb=.true.= (the default), then the POC flux is a function of the \textbf{surface} CaCO3, opal, and lithogenic export flux. Ballasting carrying coefficients are, however, typically based on empirical relationships between the POC flux at depth and the CaCO3/opal/lithogenic flux at \textbf{depth}. This is why \verb=bg_ctrl_bio_remin_POC_fixed= should be set to \verb=.false.= when using standard carrying coefficients. Please note that keeping that option at \verb=.true.= is equivalent to ignoring the effect of CaCO3 remineralisation on the POC flux.
A fixed (in time), but spatially heterogeneous field can also be prescribed instead of globally uniform values (akin to setting a pattern of the CaCO3:POC export rain ratio (\ref{CaCO3:POCrainratio})). The parameters setting whether to substitute a globally-uniform value with a specified pattern are:
\vspace{-5.5pt}\begin{verbatim}
bg_ctrl_force_CaCO3ballastcoeff=.true.
bg_ctrl_force_opalballastcoeff=.true.
bg_ctrl_force_detballastcoeff=.true.
\end{verbatim}\vspace{-5.5pt}
which by default are \texttt{.false.}. The patterns of carrying coefficient are determined by files read in from \texttt{cgenie\slash genie-biogem\slash data\slash input}.
The filenames are specified by:
\vspace{-5.5pt}\begin{verbatim}
bg_par_CaCO3ballastcoeff_file
bg_par_opalballastcoeff_file
bg_par_detballastcoeff_file
\end{verbatim}\vspace{-5.5pt}
(again, akin to the methodology for setting the CaCO3:POC export rain ratio (\ref{CaCO3:POCrainratio})).
Note that ballasting is combined with an e-folding (or other) fixed-profile remineralization scheme. Ballasting is calculated with respect to the 2nd (recalcitrant) fraction of POC only. The remaining POC export is degraded by an alternative algorithm, which, by default, is an e-folding decay (see \ref{fixedremin} for more details and alternatives). The fraction of initial export assigned to ballasting vs. 'free' POC is calculated according to the available exported ballast flux.
%---------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Prescribe biological export production}\label{Prescribe biological export production}
Two possibilities:
\begin{compactenum}
\item \textbf{Via a full prescription of all particulate fluxes in the surface ocean}
\\Create a full set of particulate (sediment tracer) flux forcing fields for the surface ocean, one for each biologically-related sediment tracer selected in the model, including isotopes (and trace metals). Everything except for the surface layer can be left as a zero (0.0) in the two 3D spatial fields required for each tracer.
\\You must also create a set of dissolved (ocean) tracer flux forcing fields for the surface ocean, one for each dissolved tracer associated with the particulates and selected in the model (including isotopes etc). The dissolved tracer flux fields must be created so as to exactly cancel out the particulate fields to conserve mass. For most tracers this is trivial, i.e., the fields for P in particulate organic matter (sed\_POP) need to be associated with fields for dissolved PO4 (ocn\_PO4) which will simply be equal in magnitude but opposite in sign to POP. Complications start to arise for CaCO3 (2 units of alkalinity per unit of carbon), and there is also the question of alkalinity changes associated with organic matter creation/destruction (via changes in NO3).
\item \textbf{By prescribing just the POC flux}
\\An alternative has been provided, enabling full biological productivity in the surface ocean, but controlled by prescribing just the particulate organic carbon export flux. This 'biological' scheme is selected with:
\vspace{-5.5pt}\begin{verbatim}bg_par_bio_prodopt="bio_POCflux"\end{verbatim}\vspace{-5.5pt}
What happens in practice is that the POC flux is used to calculate the equivalent PO4 change in the surface ocean, and then this is passed to the biological scheme and export production calculated 'as usual'. (The POC flux forcing is set to zero once the associated PO4 (uptake) flux has been calculated.)
\\A particulate (sediment tracer) flux forcing for POC in the surface ocean still has to be defined and selected, but no other forcings (including dissolved) are required.
An example forcing configuration is given in \texttt{EXAMPLE\_bio\_POCflux} (and which can be obtained from mygenie.seao2.org) and would naturally be selected by:
\vspace{-5pt}\begin{verbatim}bg_par_forcing_name="EXAMPLE_bio_POCflux"\end{verbatim}\vspace{-5pt}
\noindent \textbf{NOTE}: Take care with dissolved organic matter (DOM) production, as a fraction of the specified POC flux will be converted into DOC (and similarly for the other components of POM). Simplest is to set no DOM production if you are uncertain:
\vspace{-5pt}\begin{verbatim}bg_par_bio_red_DOMfrac=0.0\end{verbatim}\vspace{-5pt}
\noindent \textbf{NOTE}: Also take care with the units of the flux forcing to the surface layer in the ocean (mol yr-1). Since GENIE-1 is often run on an equal area grid, it is not difficult to convert export production densities to mol yr-1. Moreover, with an equal area grid, the units of the desired POC export field do not matter -- the global export can be set and the spatial distribution will then be appropriately scaled (as per the \texttt{EXAMPLE\_bio\_POCflux} example). Also be aware that if there is insufficient PO4 to support the required POC flux, you will not get your entire POC flux. Global total export may thus be somewhat less than that specified.
\end{compactenum}
%---------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Include an R-DOM cycle in the ocean}\label{Include a R-DOM cycle in the ocean}
\subsubsection{R-DOM degradation}\label{R-DOM degradation}
The parameter: \texttt{bg\_ctrl\_bio\_remin\_RDOM\_photolysis} determines whether RDOM degradation is restricted to the surface layer and occurs only by/associated with photolysis. It can be \texttt{.true.} or \texttt{.false.} and by default is set to:
\vspace{-10pt}\begin{verbatim}bg_ctrl_bio_remin_RDOM_photolysis=.false.\end{verbatim}\vspace{-10pt}
When set \texttt{.true.}, RDOM degradation is set to zero everywhere in the ocean except the surface layer. Here, the lifetime (parameter: \texttt{bg\_par\_bio\_remin\_RDOMlifetime}) is modified in *inverse* proportion to the solar insolation integrated over the surface layer. (There is a field in the 2D netCDF of solar insolation at the ocean surface, and the average over the surface layer is approximately 1/4 of this.) That is, in lower latitude and higher insolation regions, the lifetime is shorter than specified by \texttt{bg\_par\_bio\_remin\_RDOMlifetime} (and by approximately a factor of 1/4 of the solar insolation in W m-2).
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Include a Cd cycle in the ocean}\label{Include a Cd cycle in the ocean}
In order to run cGENIE with an ocean cadmium cycle, the following \textit{base config}: \\ \texttt{cgenie\_eb\_go\_gs\_ac\_bg\_itfclsd\_16l\_JH\_BASEFeCd} is provided (under SVN).
A typical experiment command line, using the \textit{user config} file: \texttt{EXAMPLE\_worjh2\_PO4Fe\_Cd\_SPIN} (also provided under SVN), would look like:
\vspace{-5.5pt}\begin{verbatim}
./runCCgenie.sh cgenie_eb_go_gs_ac_bg_itfclsd_16l_JH_BASEFeCd /
EXAMPLE_worjh2_PO4Fe_Cd_SPIN 11
\end{verbatim}\vspace{-5.5pt}
To submit this job to the cluster (from \$HOME):
\vspace{-5.5pt}\begin{verbatim}
qsub -q kitten.q -j y -o cgenie_log -S /bin/bash subcgenie.sh
cgenie_eb_go_gs_ac_bg_itfclsd_16l_JH_BASEFeCd /
EXAMPLE_worjh2_PO4Fe_Cd_SPIN 10001
\end{verbatim}\vspace{-5.5pt}
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Determine the CH4 flux required to achieve a particular atmospheric pCH4 value}
Unlike the concentration of CO2 in the atmosphere, which, if restored to a chosen value during a \textit{spin-up} experiment, will remain at that value in a \textit{continuation} experiment (if no other perturbation of the carbon cycle or CO2 emissions have been prescribed), CH4 in the atmosphere decays with a lifetime of ca. 8 years (with a small fraction dissolving in ocean surface waters and being oxidized in the ocean). Hence, atmospheric CH4 \textit{restored} to a particular value in a spin-up requires that restoring to be maintained in any \textit{continuation} experiment, or CH4 will quickly decay to zero. However, doing this (\textit{restoring} CH4 concentrations) prevents the effect of CH4 emissions from being assessed (as the atmospheric composition is being held constant).\\
An alternative would be to carry out the \textit{spin-up} experiment with no \textit{restoring} of atmospheric CH4 (or \textit{restoring} to zero), and then run the \textit{continuation} experiment with no CH4 \textit{restoring}. This would enable e.g. CH4 emissions experiments to be carried out and the change in atmospheric CH4 in response to be simulated. The problem here is that the lifetime of CH4 in the atmosphere scales with the CH4 concentration. So, in starting with no CH4 in the atmosphere, the CH4 lifetime is relatively short, and the response to CH4 emissions will be underestimated.\\
What is in effect 'missing' are the (natural) sources of CH4 to the atmosphere such as wetlands, which, at steady state, provide a CH4 flux that balances the oxidation rate of CH4 in the atmosphere (and ocean). cGENIE has a \textit{parameter} for this: \texttt{ac\_par\_atm\_wetlands\_FCH4} (mol yr-1) (with the isotopic composition of this source set by: \texttt{ac\_par\_atm\_wetlands\_FCH4\_d13C}). All that then remains is to determine the flux of CH4 that balances the rate of oxidative loss for the desired atmospheric CH4 concentration. To do this:
\begin{compactenum}
\item Carry out a \textit{spin-up} with atmospheric CH4 \textit{restored} to the desired concentration.\footnote{For an example: see experiment \texttt{EXAMPLE\_p0055c\_PO4\_CH4\_SPIN} described in \textit{cGENIE.Examples}.}
\item Determine the total loss rate of CH4 (including both atmospheric oxidation and invasion (and subsequent oxidation) into the ocean) -- this is recorded in the \textit{time-series} results file:\\ \texttt{biogem\_series\_focnatm\_pCH4.res}\footnote{Second column (the value in units of mol yr-1)}.
\item Set the \textit{parameter} \texttt{ac\_par\_atm\_wetlands\_FCH4} equal to this value.
\end{compactenum}
An example of a spin-up in which a prescribed ('wetland') flux of CH4 to the atmosphere is set is described in:\\ \textit{cGENIE.Examples} -- \textit{spin-up} example \texttt{EXAMPLE\_p0055c\_PO4\_CH4\_SPIN2}
%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: System forcings ----------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\newpage
\section{HOW-TOs: Forcings of the system}\label{how-to-5}
Note: All the \textit{forcings} described here assume the 'new' (simplified) methodology for prescribing forcings. This methodology is enabled by including the parameter setting:
\vspace{-10pt}\begin{verbatim}
bg_ctrl_force_oldformat=.false.
\end{verbatim}\vspace{-10pt}
Taking the example of the ocean (dissolved tracers): flux and restoring \textit{forcings} are defined in the \textit{forcings} specification file: \texttt{configure\_forcings\_ocn.dat}. As detailed in the notes to this file, there is a flag (\texttt{COLUMN \#6}) which sets the spatial attributes of the \textit{forcing} as follows:
\vspace{-10pt}\begin{verbatim}
 3 == 3D
 2 == 2D
 0 == point
-1 == SURFACE
-2 == BENTHIC
\end{verbatim}\vspace{-10pt}
the default (\texttt{3}) being that the forcing is applied uniformly to the entire (3D) ocean volume. Options \texttt{3}, \texttt{2}, and \texttt{0} -- uniform 3D\footnote{Note that here: '3D' does not mean a spatially explicit 3D pattern and hence the original ('old') way of specifying \textit{forcings}, but instead: that the forcing is applied uniformly in 3D space (i.e., is in effect a volume \textit{forcing}).} (volume), uniform 2D (surface), and point forcing, respectively -- require no additional (spatial) information. Hence, with options \texttt{3}, \texttt{2}, or \texttt{0} set, only an additional file specifying the time-dependent information for each forcing need be provided, in files of the form:
\vspace{-10pt}\begin{verbatim}
biogem_force_flux_ocn_xxx_sig.dat
\end{verbatim}\vspace{-10pt}
for flux forcings, and
\vspace{-10pt}\begin{verbatim}
biogem_force_restore_ocn_xxx_sig.dat
\end{verbatim}\vspace{-10pt}
for restoring forcings, where: \texttt{xxx} represents the mnemonic of the tracer (e.g., \texttt{DIC} is dissolved inorganic carbon, \texttt{CH4} is methane, etc.). Options \texttt{-1} (\texttt{SURFACE}) and \texttt{-2} (\texttt{BENTHIC}) require a 2D field to be provided in addition to the time-dependent information for each forcing. The grids for both are the same -- i.e., all 'wet' grid points (non dry land) in the model. The filename for these 2D files is of the form:
\vspace{-10pt}\begin{verbatim}
biogem_force_flux_ocn_xxx_SUR.dat
\end{verbatim}\vspace{-10pt}
for flux forcings, and
\vspace{-10pt}\begin{verbatim}
biogem_force_restore_ocn_xxx_SUR.dat
\end{verbatim}\vspace{-10pt}
for restoring \textit{forcings}. Examples of point and 2D (benthic) ocean tracer \textit{forcing} are given below. For details of the 'old', fully 3D spatially-explicit forcing methodology, refer to the \textit{c}GENIE \textit{user-manual}.
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Prescribe an injection of radiocarbon-dead DIC}\label{Prescribe a an injection of radiocarbon-dead DIC}
First ...
a \textit{base-config} with 14C tracers is needed, e.g.,:
\vspace{-10pt}\begin{verbatim}
cgenie_eb_go_gs_ac_bg_itfclsd_16l_JH_ANTH
\end{verbatim}\vspace{-10pt}
or:
\vspace{-10pt}\begin{verbatim}
cgenie_eb_go_gs_ac_bg_itfclsd_16l_JH_ANTHFe
\end{verbatim}\vspace{-10pt}
(the latter with Fe co-limitation of marine biological productivity). Then, in the \textit{user-config}, an appropriate \textit{forcing} needs to be specified, e.g.:
\vspace{-10pt}\begin{verbatim}
pyyyyx_FDIC_F13DIC_F14DIC
\end{verbatim}\vspace{-10pt}
which, under the heading \texttt{--- FORCINGS ---}, might look something like:
\begin{compactitem}
\item \vspace{-5pt}\begin{verbatim}
bg_ctrl_force_oldformat=.false.
bg_par_forcing_name="worjh2_FDIC_F13DIC_F14DIC"
\end{verbatim}\vspace{-5pt}
which prescribes a forcing of DIC plus its (13C and 14C) isotopes to the ocean (somewhere or everywhere).
\item \vspace{-5pt}\begin{verbatim}
bg_par_ocn_force_scale_val_03=0.0833e15
\end{verbatim}\vspace{-5pt}
sets the flux (mol yr-1), which is equivalent to 1 PgC yr-1.
\item To scale the isotopic composition:
\vspace{-5pt}\begin{verbatim}
bg_par_ocn_force_scale_val_04=-60.0
bg_par_ocn_force_scale_val_05=-999.0
\end{verbatim}\vspace{-5pt}
which, for example, gives a methane-like -60 per mil for 13C, and 14C that is pretty much isotopically dead\footnote{Note this is on the scale of d14C not D14C}.
\item By default in the \textit{forcing}, the duration of the emission is 1 year, and it can be re-scaled (e.g., to 1000 years duration) by:
\vspace{-5pt}\begin{verbatim}
bg_par_ocn_force_scale_time_03=1000.0
bg_par_ocn_force_scale_time_04=1000.0
bg_par_ocn_force_scale_time_05=1000.0
\end{verbatim}\vspace{-5pt}
\item Finally -- the emission location is specified by, e.g.:
\vspace{-5pt}\begin{verbatim}
bg_par_force_point_i=18
bg_par_force_point_j=26
bg_par_force_point_k=7
\end{verbatim}\vspace{-5pt}
Alternatively, the DIC release can be made over the entire ocean floor, or over sections (or depth intervals) of the ocean floor, instead of a point source.
\end{compactitem}
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Prescribe a spatial map of benthic tracer release}\label{Prescribe a spatial map of benthic tracer release}
Flux \textit{forcings} to the ocean with a variety of different spatial attributes can be specified in the \textit{forcings} specification file: \texttt{configure\_forcings\_ocn.dat}. As detailed in the notes to this file, there is a flag (\texttt{COLUMN \#6}) which sets the spatial attributes of the \textit{forcing}:
\vspace{-10pt}\begin{verbatim}
 3 == 3D
 2 == 2D
 0 == point
-1 == SURFACE
-2 == BENTHIC
\end{verbatim}\vspace{-10pt}
with the default (\texttt{3}) being that the forcing is applied uniformly to the entire (3D) ocean volume. Options \texttt{-1} (\texttt{SURFACE}) and \texttt{-2} (\texttt{BENTHIC}) require a 2D field to be provided. The grids for both are the same -- i.e., all 'wet' grid points (non dry land) in the model. Templates for either can be created as follows:
\begin{compactenum}
\item Open up the \textit{BIOGEM} results file: \texttt{fields\_biogem\_2d.nc} (any experiment).
\item Display the variable: \texttt{grid\_mask}.
\item Select the \texttt{Array 1} tab (to display the actual gridded values rather than the color-coded map); highlight the grid of values and then copy-and-paste to a text editor.
\item You should have a grid of values, with a '\texttt{1.0}' representing ocean, and '\texttt{NaN}' land.
The \texttt{NaN}s can then be search-and-replaced with '\texttt{0.0}' and you have a grid valid for either the entire surface ocean or the entire benthic surface.
\end{compactenum}
From here: \texttt{1}s can be replaced by \texttt{0}s to remove unwanted locations.\footnote{This can be quite time-consuming and tedious and there is no particular short-cut :(} In the forcing configuration file, if the \texttt{COLUMN \#5} flag ('\texttt{scale flux forcing of tracer?}') is set to 't', then the flux applied at each selected location is scaled such that the total applied flux is equal to that given in the \textit{forcing} time-signal file.\footnote{The values in the forcing map need not be all 1.0 of course.}
%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Sediments and weathering -------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\newpage
\section{HOW-TOs: Sediments and weathering}\label{how-to-6}
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Spin-up the full marine carbon cycle including sediments}\label{Spin-up the full marine carbon cycle including sediments}
By a 2-step process\footnote{This is a revised methodology compared to that described in the GENIE-1 HOW-TO.}:
\begin{compactenum}
\item \textbf{First-guess '\textit{closed system}' \textit{spin-up}}
\\As of \textbf{r4211} it is possible to carry out the initial spin-up \textbf{with} a solute input to the ocean via rivers, but also with the system configured 'closed', i.e.,:
\vspace{-5pt}\begin{verbatim}bg_ctrl_force_sed_closedsystem=.true.\end{verbatim}\vspace{-5pt}
The weathering flux is subtracted from ocean cells overlying the sediments to balance the global budget and ensure a closed system. This involves partitioning the total global weathering flux between the ocean floor cells, with the subtraction made in proportion to the estimated CaCO3 preservation and burial rate. To utilize this methodology now requires that the ROKGEM module is used, i.e., a \textit{base config} such as:
\vspace{-5pt}\begin{verbatim}cgenie_eb_go_gs_ac_bg_sg_rg_itfclsd_16l_JH_BASE\end{verbatim}\vspace{-5pt}
A first guess for the weathering flux must now be prescribed. This could be derived from a previous closed system model experiment with no weathering flux specified (diagnosing weathering from total global CaCO3 burial as described earlier), or from the literature, e.g., \textit{Ridgwell} [2007] cites 20 Tmol HCO3- yr-1, an equivalent CaCO3 weathering rate of 10 Tmol yr-1:
\vspace{-5pt}\begin{verbatim}rg_par_weather_CaCO3=10.00E+12\end{verbatim}\vspace{-5pt}
The following \textit{user config} file
\vspace{-5pt}\begin{verbatim}EXAMPLE_worjh2_PO4_S36x36_SPIN\end{verbatim}\vspace{-5pt}
can be used for the closed system spin-up.
\noindent To launch an experiment, type (all in one line; note the space separators between line items in this document format):
\vspace{-5pt}\begin{verbatim}
./runCCSgenie.sh cgenie_eb_go_gs_ac_bg_sg_rg_itfclsd_16l_JH_BASE /
EXAMPLE_worjh2_PO4_S36x36_SPIN 20001
\end{verbatim}\vspace{-5pt}
\noindent To submit to the cluster type:
\vspace{-5pt}\begin{verbatim}
qsub -q kitten.q -j y -o cgenie_log -S /bin/bash subcgenie.sh
cgenie_eb_go_gs_ac_bg_sg_rg_itfclsd_16l_JH_BASE /
EXAMPLE_worjh2_PO4_S36x36_SPIN 20001
\end{verbatim}\vspace{-5pt}
\noindent 20000 years (20001 if using the default \textit{time-series} saving points in order to record the last (annual averaged) year of the experiment) is probably about the minimum practical \textit{spin-up} time. Primarily -- you are looking for convergence in the mean wt\% CaCO3 value (averaged sediment composition), which is recorded in the \textit{BIOGEM} \textit{time-series} output of the experiment.
\item \textbf{Open system spin-up}
\\The last stage is an open system spin-up as described previously. The prescribed weathering flux (\texttt{rg\_par\_weather\_CaCO3}) is revised and set equal to the diagnosed global CaCO3 burial rate ('\texttt{Total CaCO3 pres (sediment grid)}') as reported in the SEDGEM module results file: \\\texttt{seddiag\_misc\_DATA\_GLOBAL.res}. In addition, an \textit{open system} must now be specified in the \textit{user config}:
\vspace{-5pt}\begin{verbatim}bg_ctrl_force_sed_closedsystem=.false.\end{verbatim}\vspace{-5pt}
\noindent 50000 years (50001 if using the default \textit{time-series} saving points in order to record the last (annual averaged) year of the experiment) is probably about the minimum practical \textit{spin-up} time. Again -- you are looking for convergence in the mean wt\% CaCO3 value.
\end{compactenum}
There is still some departure of the ocean Ca and ALK inventories during the revised multi-stage spin-up compared to observed (and the initialized values), but this is substantially reduced compared to the original 2-part spin-up methodology, as well as to a single spin-up methodology.
\noindent \textbf{TIP}: Having completed the full marine carbon cycle spin-up, it is recommended that the CaCO3:POC rain ratio is set invariant -- see the earlier HOW-TO (\ref{CaCO3:POCrainratio}). If the default CaCO3 parameterization setting is retained, the CO2-calcification feedback as described in \textit{Ridgwell et al.} [2007b] is enabled.
\noindent \textbf{NOTE}: There is no climate feedback by default. To run experiments with feedback between CO2 and climate, add:\vspace{-11pt}\begin{verbatim}ea_36=y\end{verbatim}\vspace{-11pt} at the end of the \textit{user config}.
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Run the sediments at higher resolution}\label{Run the sediments at higher resolution}
By default (as set in the \textit{base config} file in \texttt{\~{}/genie/genie-main/configs}) the SEDGEM sediment grid is configured at a resolution of 36x36 (and on an equal area grid), by:
\vspace{-5.5pt}\begin{verbatim}
SEDGEMNLONSOPTS='$(DEFINE)SEDGEMNLONS=36'
SEDGEMNLATSOPTS='$(DEFINE)SEDGEMNLATS=36'
\end{verbatim}\vspace{-5.5pt}
Several data input files are required by SEDGEM consistent with the specified grid:
\begin{compactitem}
\item A mask, which specifies the sediment grid locations (if any!)
at which 'sediment cores' (see: \textit{Ridgwell} [2007]) are to be generated, set by:
\vspace{-5.5pt}\begin{verbatim}sg_par_sedcore_save_mask_name="sedgem_save_mask.36x36"\end{verbatim}\vspace{-5.5pt}
The example provided on SVN contains some illustrative locations (marked by a '\texttt{1}') at which cores will be generated.
\item The required sediment grid topography (bathymetry):
\vspace{-5.5pt}\begin{verbatim}sg_par_sed_topo_D="sedgem_topo_D.36x36"\end{verbatim}\vspace{-5.5pt}
This particular grid is derived from observed bathymetry and excludes sediment locations shallower than the surface ocean layer (of the 8-level model) as described in Ridgwell and Hargreaves [2007].
\end{compactitem}
The directory location of the required files is set by the input directory namelist setting, which by default is: \\\texttt{sg\_par\_indir\_name="\$RUNTIME\_ROOT/genie-sedgem/data/input"}
As described in Ridgwell and Hargreaves [2007], SEDGEM can be sub-gridded to a resolution of 72x72 (equal area). The following additions to the \textit{user config} file are necessary:
\vspace{-5.5pt}\begin{verbatim}
SEDGEMNLONSOPTS='$(DEFINE)SEDGEMNLONS=72'
SEDGEMNLATSOPTS='$(DEFINE)SEDGEMNLATS=72'
sg_par_sed_topo_D="sedgem_topo_D.72x72"
sg_par_sedcore_save_mask_name="sedgem_save_mask.72x72"
\end{verbatim}\vspace{-5.5pt}
\textbf{NOTE}: Carbonate chemistry stability problems (= model crash) may occur in the 16-level configuration in conjunction with 72x72 resolution sub-gridded sediments. Who knows why?! :(
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Include shallow water depositional systems}\label{Include shallow water depositional systems}
\noindent \textbf{Relevant EXAMPLES}:
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Accelerate the weathering-sedimentation mass balance ('\texttt{GEMlite}')}\label{GEMlite}
\noindent \textbf{Relevant EXAMPLES}:
\noindent \textbf{Also see}: \texttt{\textit{c}GENIE} user-manual 'FAQ' (further comments on \texttt{GEMlite} applicability).
\noindent A (pseudo) module, '\texttt{GEMlite}', is provided as a means of much more rapidly solving the weathering-sedimentation mass balance -- i.e. the long-term (>10 kyr) carbon cycle processes and feedbacks. The motivation behind \texttt{GEMlite} is the stark disparity between the time-scales of ocean circulation and the biological pump (ca. 0.1-1000 years) and those of sedimentation and weathering (~2-20 kyr), and particularly the silicate weathering feedback (>100 kyr). This makes running \texttt{cGENIE} to an open system steady state (with or without the silicate weathering feedback) challenging. Is there any way of 'accelerating' the calculation of the 'long tail' [Archer et al., 2009] of the CO2 curve (e.g. in response to fossil fuel CO2 emissions)?
The philosophy is as follows: the long-term weathering-sedimentation processes are effectively just an imbalance between the supply of solutes via weathering, and the preservation and burial of (especially) carbonates in deep-sea (and shallow) marine sediments. For a small imbalance between weathering and sedimentation, atmospheric pCO2 and climate (and hence the solute flux when including weathering feedbacks) will only change very slightly.
For long intervals characterized by only a small imbalance in weathering-sedimentation, the key assumption is made: Ocean circulation and the biological pump, and hence the *gradients* of dissolved species in the ocean, can be considered *invariant*. Hence, for the purpose of solving weathering-sedimentation over an interval of time: The ocean can be treated as a *single box*. It further assumes that: The ocean is initially in equilibrium with the atmosphere (w.r.t. CO2). (This latter assumption does place important limitations on the circumstances under which \texttt{GEMlite} can be employed to accelerate experiments.)
This is what \texttt{GEMlite} does -- it solves for weathering-sedimentation and applies the mass difference *uniformly* throughout the ocean (as if it were a single box), hence preserving the tracer gradients in the ocean. It also (optionally) calculates a re-partitioning of carbon between ocean and atmosphere. Because ocean circulation and the biological pump etc. do not have to be re-calculated, the accelerated quasi box-model phase can be calculated very considerably faster than the 'full' model.
Obviously, if atmospheric pCO2 and hence climate are changing at an appreciable rate then the assumption of invariance in ocean tracer gradients breaks down and it is not 'safe' to apply the accelerated calculation. Similarly, appreciable changes in nutrient inventories will affect the biological pump and hence also change tracer gradients.
The key to employing \texttt{GEMlite}, in addition to knowing when it is appropriate/not appropriate to employ it, is to decide what balance of accelerated (\texttt{GEMlite}) time-stepping vs. normal (full system update of ocean circulation, biological pump, etc.) time-stepping to employ. This division is implemented by creating a sequence of accelerated vs. non-accelerated time-stepping. This can be done in one of two ways:
\begin{compactenum}
\item Fixed sequence.
\\By default, \texttt{GEMlite} will employ a fixed, pre-determined sequence of accelerated vs. non-accelerated time-stepping. The parameters to specify this sequencing are:
\\\texttt{ma\_gem\_notyr} -- which sets the number of years (the assumed time-step of \texttt{GEMlite}) for 'normal' time-stepping.
\\\texttt{ma\_gem\_yr} -- which sets the number of years for accelerated time-stepping.
\\For instance: if \texttt{ma\_gem\_notyr=50} and \texttt{ma\_gem\_yr=50}, you would have a sequence with 50 years of full updating, followed by 50 years of accelerated time-stepping.
\\For instance: if \texttt{ma\_gem\_notyr=10} and \texttt{ma\_gem\_yr=90}, you would have a sequence with 10 years of full updating, followed by 90 years of accelerated time-stepping.
\\etc.
\\Note that the GEMlite cycle phase of 'normal' time-stepping is *always* done first.
\\Also note that choosing e.g. \texttt{ma\_gem\_notyr=10} and \texttt{ma\_gem\_yr=100}, while appearing a desirably simple ratio, would result in the change-over point in cycle phase (to accelerated) occurring at the end of years 10, 120, 230, 340, etc. -- something that might affect/influence your choice of data saving pattern (i.e., the sequence of time-points for time-series and time-slice data saving).
\\By default, the parameter values are: \texttt{ma\_gem\_notyr=999999} and \texttt{ma\_gem\_yr=1}, meaning that in practice you will never get to the end of the 'normal' time-stepping phase. Note that these parameters are \textbf{integers} (setting real numbers, e.g. \texttt{1.0E6}, will not work ...).
\item Adaptive sequencing.
\\Here, \texttt{GEMlite} attempts to be clever and optimizes the ratio between the durations of the two phases of the cycle.
\\The motivation for this is that often in model experiments, environmental parameters will tend to change faster at the beginning of an experiment compared to towards the end. Fossil fuel CO2 release and its long tail of declining pCO2 is a good example of this. Obviously this complicates the choice of a (fixed) ratio of cycle phases -- 100:100 (or more likely: 1000:1000) might not lead to too much degradation of the simulation, but you would only gain a speed advantage of x2 for the experiment as a whole, which, if ~100-1000 kyr in total duration, is still going to be l o n g. On the other hand: 10:90 would give you a factor of almost x10 increase in overall speed, but would seriously degrade the simulation during the initial, rapidly changing environment following CO2 release.
\\Adaptive sequencing adjusts the time-stepping via 2 criteria:
\begin{compactitem}
\item In the normal time-stepping phase, if the rate of change of pCO2 is *more than* a specified threshold over any one year, then the total duration of this phase is extended by one year.
\item In the accelerated time-stepping phase, if the total change in pCO2 since the last normal phase is *less than* a specified threshold, then the total duration of this phase is extended by one year.
\end{compactitem}
The result is that the phase durations are always at least the values set by \texttt{ma\_gem\_notyr} and \texttt{ma\_gem\_yr}. If it is 'unsafe' to switch to accelerated mode, because pCO2 is changing rapidly, then the model stays in normal mode. If it is safe to stay in the accelerated mode, because pCO2 has not changed much in total during the phase, then the model stays in the accelerated phase.
\\The parameter names and default values for the two thresholds are:
\begin{compactitem}
\item \texttt{ma\_gem\_adapt\_dpCO2dt=0.1} (ppm yr-1)
\item \texttt{ma\_gem\_adapt\_DpCO2=1.0} (ppm)
\end{compactitem}
but these will not necessarily be ideal for any particular experiment (and some trial-and-error may be called for).
\\Adaptive time-stepping is enabled by setting:
\\\texttt{ma\_gem\_adapt\_auto=.true.}
\\(by default it is \texttt{.false.}).
\\The switching between normal (non accelerated) and accelerated phases is saved in a time-series file:
\vspace{-5pt}\begin{verbatim}biogem_series_misc_gemlite.res\end{verbatim}\vspace{-5pt}
As a further refinement, the accelerated phase can be set to be relatively short to begin with, but gradually increasing in length. The parameters controlling this are:
\\\texttt{ma\_gem\_yr} -- the initial accelerated phase duration
\\\texttt{ma\_gem\_yr\_max} -- the maximum accelerated phase duration
\\\texttt{ma\_gem\_adapt\_dgemyr} -- the (minimum) fractional increase in duration each cycle (or 1.0 yr, whichever is greater)
\\ A reasonable set of parameters:
\vspace{-5pt}\begin{verbatim}
ma_gem_notyr=10
ma_gem_yr=10
ma_gem_yr_max=990
ma_gem_adapt_dgemyr=0.05
ma_gem_adapt_dpCO2dt=0.10
ma_gem_adapt_DpCO2=0.01
ma_gem_adapt_auto=.true.
ma_gem_adapt_auto_unlimitedGEM=.false.
\end{verbatim}\vspace{-5pt}
\end{compactenum}
Finally ... you will need a \textit{base-config} that has \texttt{GEMlite} enabled. This actually requires nothing more than the addition of a couple of lines (to a \textit{base-config} file):
\vspace{-10pt}\begin{verbatim}
ma_flag_gemlite=.TRUE.
\end{verbatim}\vspace{-10pt}
which can go e.g. near the start of the file under \texttt{\# GENIE COMPONENT SELECTION}.
Plus:
\vspace{-10pt}\begin{verbatim}
ma_kgemlite=xx
\end{verbatim}\vspace{-10pt}
which can go e.g. under \texttt{\# TIME CONTROL AND TIME-STEPPING}.
\\Here, \texttt{xx} will depend on the time-step assumed in the base-config. This is likely to be either \texttt{96}: the standard for most \textit{base-configs}, or \texttt{48}: for low resolution and faster model configurations, which typically have \texttt{.t48} in their filename. By convention, I name \textit{base-configs} including \texttt{GEMlite} with \texttt{\_gl}, \\e.g. \texttt{cgenie\_eb\_go\_gs\_ac\_bg\_sg\_rg\_gl.p0000c.BASESLi.t48.config} \\but you can name it \texttt{BobTheLeglessPony} for all I care.
The \textbf{most important} thing is to ensure you are not seriously degrading model fidelity (of the carbon cycle simulation) by your adoption and configuration of \texttt{GEMlite}.
\\\textbf{Test} different assumptions of how the time-stepping phases are scheduled and compare (if possible) against a full experiment in which \texttt{GEMlite} is not used. It is important to recognize that when the model switches into the GEM phase, it assumes all ocean tracer gradients are fixed, and updates only ocean composition as a whole according to the weathering vs. sedimentation imbalance (and also tries to re-equilibrate ocean and atmosphere). As part of this, the flux to the sediments is taken from the average of the last year of the preceding normal phase, and fixed. This also means that the d13C of the CaCO3 deposited to the sediments is fixed ... even if the ocean d13C is being updated and changing ... So, basically, you lose the feedback that leads to d13C converging as sinks come to balance the (weathering and volcanic) inputs\footnote{Adjusting the fluxes themselves during the GEM intervals would break the underlying assumption inherent in the acceleration approximation.}.
\\The solution is to not run in the GEM phase for such long intervals -- instead giving the normal phase a chance to make a brief update of ocean gradients and also the d13C of the export flux. BUT, if pCO2 hardly changes, \textit{c}GENIE runs the risk of staying in the GEM phase for ever (ish)!
\\ A further option:
\vspace{-5pt}\begin{verbatim}
ma_gem_adapt_auto_unlimitedGEM
\end{verbatim}\vspace{-5pt}
sets whether GEM is allowed an unlimited phase duration or not. By default it is \texttt{.false.}. This means that the maximum GEM duration is limited to the normal \texttt{ma\_gem\_yr} parameter. Also, if excessive (pCO2) drift occurs, the model will immediately switch to the normal phase. By default then:
\begin{compactitem}
\item \texttt{ma\_gem\_notyr} specifies a MINIMUM duration for a normal phase.
\item \texttt{ma\_gem\_yr} specifies a MAXIMUM duration for a GEM phase.
\end{compactitem}
Values of \texttt{ma\_gem\_notyr} much less than 100 are not advisable, as you will not re-establish a new equilibrium gradient of tracers in the ocean in that time.
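Putting the pieces together: a minimal \texttt{GEMlite}-enabled \textit{base-config} would thus include something like the following (a sketch only, assuming the standard 96 time-steps per year; as noted above, use \texttt{48} for \texttt{.t48} configurations):
\vspace{-5pt}\begin{verbatim}
# GENIE COMPONENT SELECTION
ma_flag_gemlite=.TRUE.
# TIME CONTROL AND TIME-STEPPING
ma_kgemlite=96
\end{verbatim}\vspace{-5pt}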
%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Visualization ----------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\newpage
\section{HOW-TOs: Visualization}\label{how-to-7}
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Generate ensembles and visualize their output using Mathematica}\label{Generate ensembles and visualise their output using Mathematica}
Use the scripts in \texttt{genie-tools/runscripts}. Read the \texttt{READ\_ME} file in this directory for instructions. The Mathematica notebooks allow for easy ensemble generation across multiple variables (still using the non-XML version though!), and also for interactive plotting of time-series and netCDF data with GUIs -- including comparisons and movies etc. (note: you can get a free trial of Mathematica if you don't have access to it).
%---------------------------------------------------------------------------------------------------------------------------------
%--- HOW-TOs: Miscellaneous -------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\newpage
\section{HOW-TOs: Miscellaneous}\label{how-to-8}
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Speed up the model}\label{Speed up the model}
*sigh* You speed freak. Is this all you care about? What about the 'quality' of the simulation -- does that mean absolutely nothing to you? Oh well ...
There is a bunch of stuff that slows GENIE down that may not be absolutely essential to a particular model experiment. This includes:
\begin{compactenum}
\item The number of tracers -- if you don't need 'em, then don't select 'em! Selected tracers are automatically passed to GOLDSTEIN and advected/convected/diffused with the ocean circulation. Similarly, BIOGEM does a whole bunch of stuff with tracers, particularly those which can be biologically transformed. All this is numerically wasteful if you aren't interested in them. Equally importantly, the more tracers you have selected, the more careful you have to be in configuring the model. Superfluous tracers therefore cost more configuration time and/or increase the chance of a model crash.
\item 'Tracer auditing' -- the continuous updating and checking of global tracer inventories to ensure that there is no spurious loss or gain of any tracer (i.e., a bug) has computational overheads associated with it. Whether this checking is carried out or not is set by the value of the flag \texttt{bg\_ctrl\_audit}\footnote{It is \texttt{.false.} by default.}.
\item Time-series results saving. Model tracer (plus some 'physical') properties are being continuously averaged in constructing time-series results files. Cutting down on time-series that you don't need will help minimize model run-time. The various categories of time-series that will be saved are specified by a series of namelist parameter flags.
However, within each category (such as \texttt{ocn} tracers -- \texttt{bg\_ctrl\_data\_save\_sig\_ocn}) all properties will be saved -- you are not given the option to save a defined sub-set (for example, DIC and PO4 in the ocean but not ALK). Note that time-series saving of data that is a 2-D average, such as atmospheric composition at the ocean-atmosphere interface, sediment composition at the ocean-sediment interface, or just ocean surface conditions, is less numerically demanding than mean values that have to be derived from a 3-D data field.
\item Time-slice results saving. If you have relatively few requested time-slices over the course of the model integration then this is unlikely to significantly impact the overall run-time (even with all possible data category save namelist flags set to \texttt{.true.}). However, note that if you have accidentally triggered the default time-slice saving interval (by having no data items in the time-slice specification file (\texttt{bg\_par\_infile\_slice\_name})) you may end up with the model running about as fast as a 2-legged dog super-glued to a 10-tonne biscuit.
\item Alter the degree of asynchronicity between climate and biogeochemistry (see later HOW-TO).
\end{compactenum}
As a very rough guide, the impacts on total run-time of making various changes to the model configuration are listed as follows. Numbers are given as a percentage increase in total model run-time (using the \texttt{/usr/bin/time} Linux command). Tracers selected in the ocean are DIC, ALK, PO4, O2, DOM\_C, DOM\_P, DOM\_O2, as well as 13C isotopic components (DIC\_13C and DOM\_C\_13C) (+ T and S). The corresponding tracers are present in the atmosphere and as particulates. The model is run for 10 years as a new run (i.e., not loading in a restart file):
\begin{compactitem}
\item ADD auditing \begin{math}\Rightarrow\end{math} +15\%
\item ADD time-slice saving \begin{math}\Rightarrow\end{math} +20\%\footnote{Because only a 10 year integration has been carried out with a time-slice saved at 10 years, the computational cost of time-slice saving is disproportionately high as displayed. With a longer integration, the relative cost of saving a time-slice will fall. In contrast, the computational cost as a fraction of total run-time of time-series saving and auditing is likely to remain the same.}
\item ADD time-series saving \begin{math}\Rightarrow\end{math} +15\%
\item REMOVE \begin{math}^{13}\end{math}C isotopic species (= DIC and DOC ocean tracers) \begin{math}\Rightarrow\end{math} -10\%\footnote{The speed gained by removing two tracers is not proportional to the fractional decrease in the number of tracers (in this example, reducing the number of tracers in the ocean from 11 to 9 gives only a ca. 10\% improvement in overall speed).}
\end{compactitem}
The basic 'lego box' configuration for a faster cGENIE consists of an 18x18 model grid and an 8-level ocean. The continents are in a zonally-averaged configuration and there is no topography in the oceans. \\ The model is accelerated by:\\ (a) its low resolution\\ (b) taking 48 instead of 96 ocean time-steps per year \\ (c) updating BIOGEM only every 4 rather than every 2 ocean time-steps. \\ Note: I am still in the process of carrying out numerical stability tests; in the end it may have to slow down a bit.
Because the time-stepping is different, a new \textit{rungenie} (make executable) script has to be used:
\vspace{-5.5pt}\begin{verbatim}
runCCSgenie_t48.sh
\end{verbatim}\vspace{-16.5pt}
(an equivalent script for an ocean-only full carbon cycle, i.e. excluding sediments, has yet to be created!) \\ Using the following example \textit{user config}:
\vspace{-5.5pt}\begin{verbatim}
EXAMPLE_p0000b_SPIN_x1CO2_S18x18
\end{verbatim}\vspace{-16.5pt}
and base config:
\vspace{-5.5pt}\begin{verbatim}
cgenie_eb_go_gs_ac_bg_sg_rg_modern_18x18x8_0i_BASE_t48.config
\end{verbatim}\vspace{-16.5pt}
a typical run would look something like:
\vspace{-5.5pt}\begin{verbatim}
./runCCSgenie_t48.sh cgenie_eb_go_gs_ac_bg_sg_rg_modern_18x18x8_0i_BASE_t48 /
EXAMPLE_p0000b_SPIN_x1CO2_S18x18 101
\end{verbatim}\vspace{-16.5pt}
(Don't forget to do a \texttt{make cleanall} if you were using a different ocean configuration!) \\ In this configuration 100 years take about 40 seconds, 10 kyr would take just over an hour, and 100 kyr could be run overnight!
%---------------------------------------------------------------------------------------------------------------------------------
\subsection{Chop genie runs into manageable pieces (useful for very long runs)}\label{Chop genie runs into manageable pieces (useful for very long runs)}
Use the scripts in \texttt{genie-tools/runscripts}. Read the \texttt{READ\_ME} file in this directory for instructions.
%---------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
%--- Contact Information ---------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------
\newpage
\section{Contact Information}
\begin{compactitem}
\item Andy Ridgwell: \texttt{[email protected]}
\end{compactitem}
%=================================================================================================================================
%=== END DOCUMENT ================================================================================================================
%=================================================================================================================================
\end{document}
{ "alphanum_fraction": 0.6730855094, "avg_line_length": 82.4596199525, "ext": "tex", "hexsha": "bb808271439b6b812093edfca0e1cc5ea4150e43", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-04-12T15:25:56.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-15T19:54:57.000Z", "max_forks_repo_head_hexsha": "ca5f66a857e3ba7ed60785052d19f92abb7ffc00", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "derpycode/cgenie", "max_forks_repo_path": "doc/cGENIE.HOWTO.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "ca5f66a857e3ba7ed60785052d19f92abb7ffc00", "max_issues_repo_issues_event_max_datetime": "2016-05-10T19:33:11.000Z", "max_issues_repo_issues_event_min_datetime": "2016-05-10T17:00:54.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "derpycode/cgenie", "max_issues_repo_path": "doc/cGENIE.HOWTO.tex", "max_line_length": 901, "max_stars_count": 9, "max_stars_repo_head_hexsha": "ca5f66a857e3ba7ed60785052d19f92abb7ffc00", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "derpycode/cgenie", "max_stars_repo_path": "doc/cGENIE.HOWTO.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T10:14:32.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-10T01:51:28.000Z", "num_tokens": 16595, "size": 69431 }
\section{Compactly generated spaces} A lot of homotopy theory is about loop spaces and mapping spaces. Standard topology doesn't do very well with mapping spaces, so we will narrate the story of \emph{compactly generated spaces}. One nice consequence of working with compactly generated spaces is that the category is Cartesian-closed (a concept to be defined below). \subsection{CGHW spaces}\label{CGWHspaces} Some constructions commute for ``categorical reasons''. For instance, limits commute with limits. Here is an exercise to convince you of a special case of this. \begin{exercise}%Exercise 4 from 18.906 Let $X$ be an object of a category $\cc$. The \emph{overcategory} (or the \emph{slice category}) $\cc_{/X}$ has objects given by morphisms $p:Y\to X$ in $\cc$, and morphisms given by the obvious commutativity condition. \begin{enumerate} \item Assume that $\cc$ has finite products. What is the left adjoint to the functor $X\times -:\cc\to\cc_{/X}$ that sends $Y$ to the object $X\times Y \xar{\pr_1}X$? \item As a consequence of Theorem \ref{adjointslimits}, we find that $X\times -:\cc\to\cc_{/X}$ preserves limits. The composite $\cc\to\cc_{/X}\to\cc$, however, probably does not. \begin{itemize} \item What is the limit of a diagram in $\cc_{/X}$? \item Let $Y:\cI\to\cc$ be any diagram. Show that $${\lim_{i\in\cI}}^{\cc_{/X}}(X\times Y_i) \simeq X\times {\lim_{i\in\cI}}^\cc Y_i.$$ What happens if $\cI$ only has two objects and only identity morphisms? \end{itemize} \end{enumerate} \end{exercise} However, colimits and limits need not commute! An example comes from algebra. The coproduct in the category of commutative rings is the tensor product (exercise!). But $\left(\lim \Z/p^k\Z\right) \otimes \QQ \simeq \Z_p \otimes \QQ \simeq \QQ_p$ is clearly not $\lim\left(\Z/p^k\Z\otimes \QQ\right) \simeq \lim 0 \simeq 0$! We also need not have an isomorphism between $X\times\colim_{j\in \cJ}Y_j$ and $\colim_{j\in \cJ}(X\times Y_j)$. One example comes a quotient map $Y\to Z$: in general, the induced map $X\times Y\to X\times Z$ is not necessarily another quotient map. A theorem of Whitehead's says that this problem is rectified if we assume that $X$ is a compact Hausdorff space. Unfortunately, a lot of interesting maps are built up from more ``elementary'' maps by such a procedure, so we would like to repair this problem. %Why were we talking about colimits? Here's an observation. Suppose $X\to Y$ is a quotient map; then a map $Y\to Z$ is continuous iff the composite $X\to Y\to Z$ is continuous. A quotient map \emph{is} a coequalizer. What I'm saying is, I can find two maps to $X$ such that $Y$ is a coequalizer of $X$. What space are we mapping into $X$? Well, suppose $Z=X/\sim$. If we considered: %\begin{equation*} % \begin{tikzcd} % X\times_Z X\ar[r,shift left=.75ex,"\pi_1"]\ar[r,shift right=.75ex,swap,"\pi_2"] & X\ar[r] & Z % \end{tikzcd} %\end{equation*} %The term here is ``regular epimorphism''. We cannot simply do this by restricting ourselves to compact Hausdorff spaces: that's a pretty restrictive condition to place. Instead (motivated partially by the Yoneda lemma), we will look at topologies detected by maps from compact Hausdorff spaces. \begin{definition} Let $X$ be a space. A subspace $F\subseteq X$ is said to be \emph{compactly closed} if, for any map $k:K\to X$ from a compact Hausdorff space $K$, the preimage $k^{-1}(F)\subseteq K$ is closed. 
\end{definition}
It is clear that any closed subset is compactly closed, but there might be compactly closed sets which are not closed in the topology on $X$. This motivates the definition of a $k$-space:
\begin{definition}
A topological space $X$ is said to be a \emph{$k$-space} if every compactly closed set is closed.
\end{definition}
The $k$ comes from ``kompact'' and/or from Kelley, an early topologist who worked on such foundational topics. It's clear that $X$ is a $k$-space if and only if the following statement is true: a map $X\to Y$ is continuous if and only if, for every compact Hausdorff space $K$ and map $k:K\to X$, the composite $K\to X\to Y$ is continuous. For instance, compact Hausdorff spaces are $k$-spaces. First countable spaces (hence, in particular, metric spaces) and CW-complexes are also $k$-spaces.

In general, a topological space $X$ need not be a $k$-space. However, it can be ``$k$-ified'' to obtain another $k$-space denoted $kX$. The procedure is simple: endow the underlying set of $X$ with the topology whose closed sets are precisely the compactly closed subsets of $X$. The reader should check that this is indeed a topology; the resulting topological space is denoted $kX$. This construction immediately implies, for instance, that the identity $kX\to X$ is continuous.

Let $k\Top$ be the category of $k$-spaces. This is a subcategory of the category of topological spaces, via a functor $i:k\Top\hookrightarrow \Top$. The process of $k$-ification gives a functor $\Top\to k\Top$, which has the property that:
$$k\Top(X,kY)=\Top(iX,Y).$$
Notice that this is another example of an adjunction! We can conclude from this that $k(iX\times iY)=X\times^{k\Top}Y$, where $X$ and $Y$ are $k$-spaces. One can also check that $kiX\simeq X$. The takeaway is that $k\Top$ has good categorical properties inherited from $\Top$: it is a complete and cocomplete category. As we will now explain, this category also enjoys categorical niceness that $\Top$ lacks.

\subsection{Mapping spaces}\label{mappingspaces}
Let $X$ and $Y$ be topological spaces. The set $\Top(X,Y)$ of continuous maps from $X$ to $Y$ admits a topology, namely the compact-open topology. If $X$ and $Y$ are $k$-spaces, we can make a slight modification: define a topology on $k\Top(X,Y)$ generated by the sets
$$W(k:K\to X, \text{ open }U\subseteq Y) := \{f:X\to Y: f(k(K))\subseteq U\}.$$
We write $Y^X$ for the $k$-ification of $k\Top(X,Y)$.
\begin{prop}
\begin{enumerate}
\item The assignment $(X,Y)\mapsto Y^X$ defines a functor $(k\Top)^{op}\times k\Top\to k\Top$, contravariant in the first variable and covariant in the second.
\item $e:X\times Z^X\to Z$ given by $(x,f)\mapsto f(x)$ and $i:Y\to (X\times Y)^X$ given by $y\mapsto(x\mapsto(x,y))$ are continuous.
\end{enumerate}
\end{prop}
\begin{proof}
The first statement is left as an exercise to the reader. For the second statement, see \cite[Proposition 2.11]{StricklandCGWH}.
\end{proof}
As a consequence of this result, we can obtain a very nice adjunction. Define two maps:
\begin{itemize}
\item $k\Top(X\times Y,Z)\to k\Top(Y,Z^X)$ via
$$(f:X\times Y\to Z)\mapsto (Y\xrightarrow{i}(X\times Y)^X\to Z^X).$$
\item $k\Top(Y,Z^X) \to k\Top(X\times Y,Z)$ via
$$(f:Y\to Z^X)\mapsto(X\times Y\to X\times Z^X\xrightarrow{e} Z).$$
\end{itemize}
By \cite[Proposition 2.12]{StricklandCGWH}, these two maps are continuous inverses, so there is a natural homeomorphism
$$k\Top(X\times Y,Z)\simeq k\Top(Y,Z^X).$$
This motivates the definition of a \emph{Cartesian closed} category.
\begin{definition}\label{cartesian-closed}
A category $\cc$ with finite products is said to be \emph{Cartesian closed} if, for any object $X$ of $\cc$, the functor $X\times -:\cc\to \cc$ has a right adjoint.
\end{definition}
Our discussion above proves that $k\Top$ is Cartesian closed, while $\Top$ is not. As we will see below, this has very important ramifications for algebraic topology.
\begin{exercise}
\todo{Insert Exercise 2 from 18.906.}
\end{exercise}
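As a sanity check for Definition \ref{cartesian-closed}, here is the most basic example; it is standard and included only for orientation. The category of sets is Cartesian closed: the right adjoint to $X\times -$ sends a set $Z$ to the set $Z^X$ of all functions $X\to Z$, and the natural bijection
$$\mathrm{Set}(X\times Y,Z)\simeq \mathrm{Set}(Y,Z^X)$$
is given by currying, $f\mapsto (y\mapsto f(-,y))$, with inverse $g\mapsto ((x,y)\mapsto g(y)(x))$. The maps defined above show that the same formulas work in $k\Top$, once the mapping sets are topologized and $k$-ified as above.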
{ "alphanum_fraction": 0.7167916443, "avg_line_length": 55.7313432836, "ext": "tex", "hexsha": "5240f3c508f8c33bb56571e6e4d198ea8f00cabd", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_path": "906/lec-40-compactly-generated.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_path": "906/lec-40-compactly-generated.tex", "max_line_length": 382, "max_stars_count": 5, "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_path": "906/lec-40-compactly-generated.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "num_tokens": 2314, "size": 7468 }
\begin{appendices}
\chapter{Analyzing Synthetic Data}
Here we provide additional visualizations generated for the synthetic data analysis of \textit{growing networks} in section \ref{growing_networks}. As detailed in algorithm \ref{growing_network_model}, we grow the synthetic networks for $t(=10000)$ iterations in these experiments. We visualize how the counts of minority and majority nodes in the recommendation list of length $k(=5)$, for recommendations to minority and majority nodes, change over time. The iteration step-size considered for these plots is 1000. Also, for the \textit{reinforcement methods}, we only provide the results for the ranking factor $r=1.0$ in the results section for \textit{growing networks}; here we provide the additional plots for the ranking factor values $r=\{0.0, 0.5\}$ for the \textit{Ranked-Bandit} and \textit{Top-Rank} recommender methods.

%\section{Experimental Results : PA-Homophily}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_pa.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{PA-Homophily} recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_pa}
\end{figure}

%\section{Experimental Results : Adamic-Adar}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_aa.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Adamic-Adar} recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_aa}
\end{figure}

%\section{Experimental Results : Twitter-Rank}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_tr.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Twitter-Rank} recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_tr}
\end{figure}

%\section{Experimental Results : Ranked-Bandit (r=0.0)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_rb00.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Ranked-Bandit} ($r=0.0$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_rb00}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dd_growth_rb00.png}
\caption{Degree distribution for growing networks with \textbf{Ranked-Bandit ($r = 0.0$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree distributions for the majority and minority nodes are visualized using blue and red plot points, respectively.}
\label{dd_growth_rb00_fig}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dg_growth_rb00.png}
\caption{Degree growth for growing networks with \textbf{Ranked-Bandit ($r = 0.0$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree growth for minority and majority nodes is visualized using red and blue plot lines, respectively.}
\label{dg_growth_rb00_fig}
\end{figure}

\begin{SCfigure}[1][h!]
\centering
\includegraphics[trim=0 5 0 10, clip, width=0.75\textwidth]{images/mf_growth_rb00.png}
\caption{The fraction of total degree held by minority nodes for growing networks with \textbf{Ranked-Bandit ($r = 0.0$)}.}
\label{mf_growth_rb00_fig}
\end{SCfigure}

\begin{figure}[h!]
\centering
\includegraphics[trim=0 10 0 5, clip, width=1.0\textwidth]{images/top_growth_rb00.png}
\caption{The fraction of minority nodes found in the top D\% of nodes ranked according to degree in growing networks with \textbf{Ranked-Bandit ($r = 0.0$)}. A black dotted line in each plot shows the actual fraction of minority nodes in the network.}
\label{top_growth_rb00_fig}
\end{figure}

%\section{Experimental Results : Ranked-Bandit (r=0.5)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_rb05.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Ranked-Bandit} ($r=0.5$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_rb05}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dd_growth_rb05.png}
\caption{Degree distribution for growing networks with \textbf{Ranked-Bandit ($r = 0.5$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column.
Degree distributions for the majority and minority nodes are visualized using blue and red plot points, respectively.}
\label{dd_growth_rb05_fig}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dg_growth_rb05.png}
\caption{Degree growth for growing networks with \textbf{Ranked-Bandit ($r = 0.5$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree growth for minority and majority nodes is visualized using red and blue plot lines, respectively.}
\label{dg_growth_rb05_fig}
\end{figure}

\begin{SCfigure}[1][h!]
\centering
\includegraphics[trim=0 5 0 10, clip, width=0.75\textwidth]{images/mf_growth_rb05.png}
\caption{The fraction of total degree held by minority nodes for growing networks with \textbf{Ranked-Bandit ($r = 0.5$)}.}
\label{mf_growth_rb05_fig}
\end{SCfigure}

\begin{figure}[h!]
\centering
\includegraphics[trim=0 10 0 5, clip, width=1.0\textwidth]{images/top_growth_rb05.png}
\caption{The fraction of minority nodes found in the top D\% of nodes ranked according to degree in growing networks with \textbf{Ranked-Bandit ($r = 0.5$)}. A black dotted line in each plot shows the actual fraction of minority nodes in the network.}
\label{top_growth_rb05_fig}
\end{figure}

%\section{Experimental Results : Ranked-Bandit (r=1.0)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_rb10.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Ranked-Bandit} ($r=1.0$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_rb10}
\end{figure}

%\section{Experimental Results : Top-Rank (r=0.0)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_top00.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Top-Rank} ($r=0.0$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_top00}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dd_growth_top00.png}
\caption{Degree distribution for growing networks with \textbf{Top-Rank ($r = 0.0$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree distributions for the majority and minority nodes are visualized using blue and red plot points, respectively.}
\label{dd_growth_top00_fig}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dg_growth_top00.png}
\caption{Degree growth for growing networks with \textbf{Top-Rank ($r = 0.0$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree growth for minority and majority nodes is visualized using red and blue plot lines, respectively.}
\label{dg_growth_top00_fig}
\end{figure}

\begin{SCfigure}[1][h!]
\centering
\includegraphics[trim=0 5 0 10, clip, width=0.75\textwidth]{images/mf_growth_top00.png}
\caption{The fraction of total degree held by minority nodes for growing networks with \textbf{Top-Rank ($r = 0.0$)}.}
\label{mf_growth_top00_fig}
\end{SCfigure}

\begin{figure}[h!]
\centering
\includegraphics[trim=0 10 0 5, clip, width=1.0\textwidth]{images/top_growth_top00.png}
\caption{The fraction of minority nodes found in the top D\% of nodes ranked according to degree in growing networks with \textbf{Top-Rank ($r = 0.0$)}. A black dotted line in each plot shows the actual fraction of minority nodes in the network.}
\label{top_growth_top00_fig}
\end{figure}

%\section{Experimental Results : Top-Rank (r=0.5)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_top05.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Top-Rank} ($r=0.5$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_top05}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dd_growth_top05.png}
\caption{Degree distribution for growing networks with \textbf{Top-Rank ($r = 0.5$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree distributions for the majority and minority nodes are visualized using blue and red plot points, respectively.}
\label{dd_growth_top05_fig}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/dg_growth_top05.png}
\caption{Degree growth for growing networks with \textbf{Top-Rank ($r = 0.5$)}. The minority fractions are provided at the right-side of each row and the homophily values are specified at the top of each column. Degree growth for minority and majority nodes is visualized using red and blue plot lines, respectively.}
\label{dg_growth_top05_fig}
\end{figure}

\begin{SCfigure}[1][h!]
\centering
\includegraphics[trim=0 5 0 10, clip, width=0.75\textwidth]{images/mf_growth_top05.png}
\caption{The fraction of total degree held by minority nodes for growing networks with \textbf{Top-Rank ($r = 0.5$)}.}
\label{mf_growth_top05_fig}
\end{SCfigure}

\begin{figure}[h!]
\centering
\includegraphics[trim=0 10 0 5, clip, width=1.0\textwidth]{images/top_growth_top05.png}
\caption{The fraction of minority nodes found in the top D\% of nodes ranked according to degree in growing networks with \textbf{Top-Rank ($r = 0.5$)}.
A black dotted line in each plot shows the actual fraction of minority nodes in the network.}
\label{top_growth_top05_fig}
\end{figure}

%\section{Experimental Results : Top-Rank (r=1.0)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/count_top10.png}
\caption{Count of nodes in the recommendation list over time for minority and majority nodes in a growing network aided with the \textbf{Top-Rank} ($r=1.0$) recommender agent. Homophily values are specified above the respective columns and minority fractions are specified at the right side of the row. Light red plot line denotes count of minority nodes recommended for other minority nodes. Dark red plot line denotes count of majority nodes recommended for minority nodes. Light blue line denotes count of minority nodes recommended for majority nodes. Dark blue line denotes count of majority nodes recommended for other majority nodes.}
\label{count_top10}
\end{figure}

\end{appendices}
{ "alphanum_fraction": 0.7917722363, "avg_line_length": 73.881773399, "ext": "tex", "hexsha": "7cb7585bc6e524278e99f74a9df2f82ce046fc3c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dvaruas/minority_recommendations", "max_forks_repo_path": "src/thesis/chapters/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dvaruas/minority_recommendations", "max_issues_repo_path": "src/thesis/chapters/appendix.tex", "max_line_length": 652, "max_stars_count": null, "max_stars_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dvaruas/minority_recommendations", "max_stars_repo_path": "src/thesis/chapters/appendix.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3751, "size": 14998 }
\documentclass[8pt]{beamer}
\setbeamertemplate{navigation symbols}{}
\usepackage{graphicx}
\graphicspath{ {images/} }

\title{An Introduction to Version Control}
\author{Brandon Moore\\Open Source Club}

\begin{document}

\begin{frame}
\titlepage
\end{frame}

\section{Table of Contents}
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[]
\end{frame}

\section{What is version control?}
\begin{frame}
\frametitle{What is version control?}
\textit{Version control provides a way to keep snapshots of a project (typically source code) over time.}\\
\hfill \break
\onslide<2->{\textbf{Why?}}
\begin{itemize}
\item<3-> Find out where things broke
\item<4-> Make features without breaking production
\item<5-> Work with a group
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Distributed versus Centralized}
What's the difference?\\
\hfill \break
\textit{Distributed}\\
Each person gets a full repository that they can modify on their own machine. Users can contribute to the same branch without major concerns, as merge conflict markers will be inserted wherever changes collide.\\
\hfill \break
\textit{Centralized}\\
Each person contributes to a repository located in one place. All changes end up in this centralized location, which allows easy code review but requires the use of branching or file locking (in the repo file system) to avoid conflicts.\\
\end{frame}

\begin{frame}
\frametitle{Examples}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\textbf{Distributed}
\begin{itemize}
\item Git
\item Mercurial
\item Bazaar
\item SVK
\end{itemize}
\end{column}
\begin{column}{0.5\textwidth}
\textbf{Centralized}
\begin{itemize}
\item CVS
\item Subversion
\item Google Docs*
\item Word '07
\item Wikipedia*
\end{itemize}
\end{column}
\end{columns}
\end{frame}

\begin{frame}
\frametitle{Distributed vs Centralized}
\begin{columns}[T]
\begin{column}{0.5\textwidth}
\textbf{Distributed}
\begin{itemize}
\item Able to work offline
\item Can run the project yourself
\item History can be edited before pushing
\item Team can work on the same branch
\end{itemize}
\end{column}
\begin{column}{0.5\textwidth}
\textbf{Centralized}
\begin{itemize}
\item Must communicate with the server
\item Code run by a select few
\item All code is seen and can be reviewed
\item Branches for individuals encouraged
\end{itemize}
\end{column}
\end{columns}
\end{frame}

\begin{frame}
\frametitle{Questions?}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.7201726845, "avg_line_length": 25.48, "ext": "tex", "hexsha": "436798048ab4144e2cd392a0354a368c652c371a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fc6ebea3417f8d5af535fbb15e9f421131cbe0f4", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "moore3071/version-control-software-presentation", "max_forks_repo_path": "presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fc6ebea3417f8d5af535fbb15e9f421131cbe0f4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "moore3071/version-control-software-presentation", "max_issues_repo_path": "presentation.tex", "max_line_length": 234, "max_stars_count": null, "max_stars_repo_head_hexsha": "fc6ebea3417f8d5af535fbb15e9f421131cbe0f4", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "moore3071/version-control-software-presentation", "max_stars_repo_path": "presentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 801, "size": 2548 }
%----------------------------------------------------------------------------------------
%	PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------

\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size

\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default
\usepackage[english]{babel} % English language/hyphenation
\usepackage{amsmath,amsfonts,amsthm} % Math packages

\makeatletter
%\renewcommand\thesection{}
%\renewcommand\thesubsection{\@arabic\c@section.\@arabic\c@subsection}
\makeatother

\usepackage{fancyhdr} % Custom headers and footers
\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers
\fancyhead{} % No page header - if you want one, create it in the same way as the footers below
\fancyfoot[L]{} % Empty left footer
\fancyfoot[C]{} % Empty center footer
\fancyfoot[R]{\thepage} % Page numbering for right footer
\renewcommand{\headrulewidth}{0pt} % Remove header underlines
\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
\setlength{\headheight}{13.6pt} % Customize the height of the header

%\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
%\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
%\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)

%\setlength\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text

\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{algorithm2e}
\usepackage{booktabs}

\newcommand{\code}[1]{{\footnotesize\textsf{#1}}}
\newcommand{\q}[1]{``#1''}

\usepackage{listings}
\usepackage{geometry}
\geometry{letterpaper, margin=1.95cm}
\usepackage{url}
%\usepackage{multirow}

%----------------------------------------------------------------------------------------
%	TITLE SECTION
%----------------------------------------------------------------------------------------

\newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Create horizontal rule command with 1 argument of height

\title{
\normalfont \normalsize
\textsc{Utah State University, Computer Science Department} \\ [25pt] % Your university, school and/or department name(s)
\horrule{0.5pt} \\[0.4cm] % Thin top horizontal rule
\huge CS5890 Data Science - Project Proposal \\ Random Acts Of Pizza \\ % The assignment title
\horrule{2pt} \\[0.5cm] % Thick bottom horizontal rule
}

\author{Team \textbf{Pizza Hackers}: Tam Nguyen and Hung Pham} % Your name

\date{\normalsize\today} % Today's date or a custom date

\begin{document}

\maketitle

\section{Introduction}
In this proposal we will discuss our plan to perform data analysis on a social interaction dataset, in which requesters ask for free pizza on the Reddit community \q{Random Acts of Pizza}\footnote{\url{https://www.reddit.com/r/Random_Acts_Of_Pizza/}}. We will discuss the social interaction problem, the dataset provided by Kaggle, a preliminary outline of our analysis plan, and the data science toolkits we plan to use.

\section{The team: Pizza hackers}
We name our team \q{Pizza hackers} in direct reference to the task we want to tackle and to our computer science roots.
\q{Pizza hackers} consists of two members: Tam Nguyen and Hung Pham. Tasks such as implementing the data analysis experiments, writing the report, and producing visualizations will be divided among the team members. All members will be involved in analyzing the experimental results as well as in any decisions about the project direction.

\input{problem}
\input{data}
\input{plan}
\input{toolkit}

\begin{thebibliography}{10}

\bibitem{tim14}
Tim Althoff, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky.
\emph{How to Ask for a Favor: A Case Study on the Success of Altruistic Requests}, Proceedings of ICWSM, 2014.

\bibitem{word2vec}
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean.
\emph{Distributed representations of words and phrases and their compositionality}, Advances in neural information processing systems, 2013.

\end{thebibliography}

\end{document}
{ "alphanum_fraction": 0.6845742309, "avg_line_length": 41.9252336449, "ext": "tex", "hexsha": "e9fe2c7e3cbe86eafbe60efd35ebccca726b0d32", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tamnguyenthe/CS5890Project", "max_forks_repo_path": "project/proposal/Proposal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tamnguyenthe/CS5890Project", "max_issues_repo_path": "project/proposal/Proposal.tex", "max_line_length": 444, "max_stars_count": null, "max_stars_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tamnguyenthe/CS5890Project", "max_stars_repo_path": "project/proposal/Proposal.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1179, "size": 4486 }
\documentclass[9pt]{beamer}
\usepackage{etex} % Weird problem on dimensions
%\usetheme{spensiones}
%\mode<presentation> { \setbeamercovered{transparent} }
%\usepackage{stata}
\usepackage{tikz, tabularx, ulem}
%\usepackage{fancyvrb}
\usetikzlibrary{arrows, fit, positioning}

\title[{\tt parallel}]{Just tired of endless loops! \\ {\it \footnotesize or} {\normalsize {\tt parallel}: Stata module for parallel computing}}
\author[GGV]{George G. Vega\\ {\tt \scriptsize [email protected]}}
\institute[SPensiones]{Chilean Pension Supervisor}

\def\unix1{Intel Xeon X470 (hexadeca-core)}
\def\windows1{Intel i3 2120 (dual-core)}

\date{Stata Conference New Orleans\\July 18-19, 2013}

\begin{document}

\frame{
\maketitle
{\scriptsize Thanks to Damian C. Clarke, F\'elix Villatoro and Eduardo Fajnzylber, Tom\'as Rau, Eric Melse, Valentina Moscoso, the Research team of the Chilean Pension Supervisor and several Stata users worldwide for their valuable contributions. The usual disclaimer applies.}
}

\frame{
\frametitle{Agenda}
\tableofcontents
}

\section{Motivation}

\begin{frame} % [allowframebreaks=.8]
\frametitle{Motivation}
\begin{itemize}
\item Despite the availability of administrative data, its exploitation is still a novel issue.\pause
\item At the same time, home computers now arrive with extremely high computational capabilities.\pause
\item Given its nature, matching both (big data problems and HPA) sounds straightforward.\pause
\item But implementing parallel computing for the social scientist is not easy, \pause largely due to a lack of (user-friendly) statistical computing tools.\pause
\item {\tt parallel} aims to address these issues.
\end{itemize}
\end{frame}

\section{What is and how does it work}
\frame{\tableofcontents[currentsection]}

\begin{frame} % [allowframebreaks=.8]
\frametitle{What is and how does it work}
\framesubtitle{What is?}
\begin{itemize}
\item Inspired by the R package ``snow''\pause (several other examples exist: StataMP, Condor HTC, C's Ox library, Matlab's Parallel Toolbox, etc.)\pause
\item It is designed to be used on multicore CPUs (dual-core, quad-core, etc.).\pause
\item It implements parallel computing methods through the OS's shell scripting (using Stata in batch mode) to accelerate computations.\pause
%\item By starting a determined number of clusters (stata instances) this module was designed to repeat a task simultaneously over the clusters.\pause
\item Depending on the task, it can reach nearly (or over) linear speedups proportional to the number of physical cores of the computer.\pause
\item Thus having a quad-core computer can lead to a 400\% speedup.
\end{itemize} \end{frame} \begin{frame}[b] \frametitle{What is and how does it work} \framesubtitle{How does it work?} \begin{figure} \centering \scalebox{.65}{\input{../../man/diagram.tex}} %\scalebox{.7}{\input{diagram.tex}} \end{figure} \end{frame} \begin{frame} \frametitle{What is and how does it work} {\Large Sounds ``pretty'' but...}\pause {\Huge is this for real!?} \end{frame} \begin{frame} \frametitle{What is and how does it work} \framesubtitle{Parallel's backend} When the user enters \begin{figure}[fragile] \small \centering {\bf{\tt parallel: gen n = \_N}} \end{figure} {\tt parallel} takes the command and writes something like this\pause \bigskip \scriptsize %\scalebox{.9}{ \input{code.tex} %} \end{frame} \begin{frame} \frametitle{What is and how does it work} {\Large Ok, it works but...}\pause {\Huge it must be really hard to use!} \end{frame} \section{Benchmarks} \frame{\tableofcontents[currentsection]} \begin{frame}[b,fragile] \frametitle{Benchmarks} \framesubtitle{Simple example: Serial replace} \begin{minipage}[c]{1\textwidth} \begin{minipage}[c]{.35\textwidth} Serial fashion \begin{semiverbatim}\scriptsize do mydofile.do \end{semiverbatim} Parallel fashion \begin{semiverbatim}\scriptsize parallel do mydofile.do \end{semiverbatim} \end{minipage} %SPLIT \fbox{ \begin{minipage}[c]{.6\textwidth}\scriptsize \begin{figure} \caption{mydofile.do} \begin{semiverbatim} local size = \_N forval i=1/`size' \{ \hspace{1cm}qui replace x = /// \hspace{1.5cm}1/sqrt(2*`c(pi)')*exp(-(x\^{}2/2)) in `i' \} \end{semiverbatim} \end{figure} \end{minipage}} \end{minipage} \begin{table}[!h] \centering \caption{Serial replacing using a loop on a Linux Server (16 clusters)} \scalebox{.9}{ \begin{tabular}{l*{3}{c}}\hline & 100.000 & 1.000.000 & 10.000.000 \\ \hline CPU & 1.43 & 16.94 & 144.68 \\ Total & 0.34 & 3.20 & 12.49 \\ \hspace{2mm} Setup & 0.00 & 0.00 & 0.00 \\ \hspace{2mm} Compute & 0.32 & 3.07 & 11.54 \\ \hspace{2mm} Finish & 0.02 & 0.12 & 0.95 \\ \hline Ratio (compute) & 4.50 & 5.51 & 12.53 \\ Ratio (total) & 4.22 (26\%) & 5.30 (30\%) & 11.58 (72\%) \\ \hline \multicolumn{4}{l}{\footnotesize Tested on a \unix1 machine} \end{tabular}} \end{table} \end{frame} \begin{frame}[b,fragile] \frametitle{Benchmarks} \framesubtitle{Monte Carlo simulation (Windows Machine)} \begin{minipage}[c]{1\textwidth} \begin{minipage}[c]{.45\textwidth} \bigskip Serial fashion \begin{semiverbatim}\scriptsize do myexperiment.do \end{semiverbatim} Parallel fashion \begin{semiverbatim}\scriptsize parallel do myexperiment.do, nodata \end{semiverbatim} \end{minipage} %SPLIT \fbox{ \scalebox{.4}{ \begin{minipage}[c]{1\textwidth}\scriptsize \begin{figure} {\Huge \caption{myexperiment.do}} \begin{semiverbatim} local num\_of\_intervals = 50 if length("`pll\_id'") == 0 \{ \hspace{.5cm}local start = 1 \hspace{.5cm}local end = `num\_of\_intervals' \} else \{ \hspace{.5cm}local ntot = floor(`num\_of\_intervals'/\$PLL\_CLUSTERS) \hspace{.5cm}local start = (`pll\_instance' - 1)*`ntot' + 1 \hspace{.5cm}local end = (`pll\_instance')*`ntot' \hspace{.5cm}if `pll\_instance' == \$PLL\_CLUSTERS local end = 10 \} local reps 10000 forval i=`start'/`end' \{ \hspace{.5cm}qui use census2, clear \hspace{.5cm}gen true\_y = age \hspace{.5cm}gen z\_factor = region \hspace{.5cm}sum z\_factor, meanonly \hspace{.5cm}scalar zmu = r(mean) \hspace{.5cm}qui \{ \hspace{.5cm}\hspace{.5cm}gen y1 = . \hspace{.5cm}\hspace{.5cm}gen y2 = . 
\hspace{.5cm}\hspace{.5cm}local c = `i' \hspace{.5cm}\hspace{.5cm}set seed `c' \hspace{.5cm}\hspace{.5cm}simulate c=r(c) mu1=r(mu1) se\_mu1 = r(se\_mu1) /// \hspace{.5cm}\hspace{.5cm}\hspace{.5cm}\hspace{.5cm}mu2=r(mu2) se\_mu2 = r(se\_mu2), /// \hspace{.5cm}\hspace{.5cm}\hspace{.5cm}\hspace{.5cm}saving(cc`i', replace) nodots reps(`reps'): /// \hspace{.5cm}\hspace{.5cm}\hspace{.5cm}\hspace{.5cm}mcsimul1, c(`c') \hspace{.5cm}\} \} \end{semiverbatim} \end{figure} \end{minipage}}} \end{minipage} \begin{table}[!h] \centering \caption{Monte Carlo Experiment on a Windows Machine (4 clusters)} \scalebox{.85}{ \begin{tabular}{l*{2}{c}}\hline & 2 & 4 \\ \hline CPU & 111.49 & 114.13 \\ Total & 58.02 & 37.48 \\ \hspace{2mm} Setup & 0.00 & 0.00 \\ \hspace{2mm} Compute & 58.02 & 37.48 \\ \hspace{2mm} Finish & 0.00 & 0.00 \\ \hline Ratio (compute) & 1.92 & 3.04 \\ Ratio (total) & 1.92 (96\%)& 3.04 (76\%)\\ \hline \multicolumn{3}{l}{\footnotesize Tested on a \windows1 machine} \end{tabular}} \end{table} \end{frame} \begin{frame}[b] \frametitle{Benchmarks} \framesubtitle{Monte Carlo simulation (Unix Machine)} \bigskip Serial fashion \begin{semiverbatim} \scriptsize do myexperiment.do \end{semiverbatim} Parallel fashion \begin{semiverbatim} \scriptsize parallel do myexperiment.do, nodata \end{semiverbatim} \begin{table}[!h] \centering \caption{Monte Carlo Experiment on a Linux Server (16 clusters)} \scalebox{.85}{ \begin{tabular}{l*{4}{c}}\hline & 2 & 4 & 8 & 16 \\ \hline CPU & 164.79 & 164.04 & 162.84 & 163.89 \\ Total & 69.85 & 34.28 & 19.00 & 10.78 \\ \hspace{2mm} Setup & 0.00 & 0.00 & 0.00 & 0.00 \\ \hspace{2mm} Compute & 69.85 & 34.28 & 19.00 & 10.78 \\ \hspace{2mm} Finish & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline Ratio (compute) & 2.36 & 4.78 & 8.57 & 15.21 \\ Ratio (total) & 2.36 (118\%) & 4.78 (120\%)& 8.57 (107\%) & 15.21 (95\%) \\ \hline \multicolumn{4}{l}{\footnotesize Tested on a \unix1 machine} \end{tabular}} \end{table} \end{frame} \begin{frame}[b] \frametitle{Benchmarks} \framesubtitle{Reshaping Administrative Data} \bigskip Serial fashion \begin{semiverbatim} \scriptsize reshape wide tipsolic rutemp opta derecho ngiros, /// \hspace{1cm}i(id) j(time) \end{semiverbatim} Parallel fashion \begin{semiverbatim} \scriptsize parallel, by(id) :reshape wide tipsolic rutemp opta derecho ngiros, /// \hspace{1cm}i(id) j(time) \end{semiverbatim} \begin{table}[!h] \centering \caption{Reshaping wide a large database on a Linux Server (8 clusters)} \scalebox{.8}{ \begin{tabular}{l*{3}{c}}\hline & 100.000 & 1.000.000 & 5.000.000 \\ \hline CPU & 5.51 & 72.70 & 392.97 \\ Total & 2.33 & 17.46 & 86.44 \\ \hspace{2mm} Setup & 0.00 & 0.00 & 0.00 \\ \hspace{2mm} Compute & 1.83 & 12.42 & 57.93 \\ \hspace{2mm} Finish & 0.50 & 5.04 & 28.51 \\ \hline Ratio (compute) & 3.01 & 5.85 & 6.78 \\ Ratio (total) & 2.37 (29\%)& 4.16 (52\%)& 4.55 (57\%)\\ \hline \multicolumn{4}{l}{\footnotesize Tested on a \unix1 machine} \end{tabular}} \end{table} \end{frame} \section{Syntax and Usage} \frame{\tableofcontents[currentsection]} \begin{frame} \frametitle{Syntax and Usage} Setup \begin{semiverbatim} \footnotesize {\bf parallel setclusters} {\it \#} [, \uline{f}orce] \end{semiverbatim}\pause By syntax \begin{semiverbatim} \footnotesize {\bf parallel} [, by({\it \color{blue} varlist}) \uline{p}rograms \uline{m}ata \uline{s}eeds({\it \color{blue} string}) \uline{r}andtype({\it \color{blue} random.org$|$datetime}) \hspace{1cm} \uline{pr}ocessors({\it \color{blue} integer}) \uline{nod}ata]: {\it stata\_cmd} \end{semiverbatim}\pause Do syntax 
\begin{semiverbatim}
\footnotesize
{\bf parallel do} {\it \color{blue} filename}
\hspace{1cm} [, by({\it \color{blue} varlist}) \uline{p}rograms \uline{m}ata \uline{s}eeds({\it \color{blue} string}) \uline{r}andtype({\it \color{blue} random.org$|$datetime})
\hspace{1cm} \uline{pr}ocessors({\it \color{blue} integer}) \uline{nod}ata]
\end{semiverbatim}
\end{frame}

\begin{frame}
\frametitle{Syntax and Usage}
\framesubtitle{Recommendations on its usage}
\begin{columns}
\begin{column}{.5\textwidth}
{\color{gray} {\tt parallel} suits ... \rule{\linewidth}{4pt}}
\begin{itemize}
\item Monte Carlo simulation.\pause
\item Extensive nested control flow (loops, while, ifs, etc.).\pause
\item Bootstrapping/Jackknife.\pause
\item Simulations in general.\pause
\end{itemize}
\end{column}%
\hfill%
\begin{column}{.5\textwidth}
{\color{gray} {\tt parallel} doesn't suit ... \rule{\linewidth}{4pt}}
\begin{itemize}
\item (Already) fast commands.\pause
\item Regressions, ARIMA, etc.\pause
\item Linear algebra.\pause
\item Whatever StataMP does better.
\end{itemize}
\end{column}%
\end{columns}
\end{frame}

\section{Concluding Remarks}

\begin{frame}
\frametitle{Concluding Remarks}
\begin{itemize}
\item In the case of Stata, {\tt parallel} is, to the author's knowledge, the first public user contribution to parallel computing;\pause
\item its major strengths/advantages are in simulation models and non-vectorized operations such as control-flow statements.\pause
\item Depending on the proportion of the algorithm that can be de-serialized, it is possible to reach nearly constant-scale speedups.\pause
\item {\tt parallel} establishes a new basis for parallel computing in Stata,\pause{} thus a whole new set of algorithms can be implemented:\pause
\begin{itemize}
\item {\tt parsimulate}\pause
\item {\tt parfor}\pause
\item {\tt parbootstrap}\pause
\item {\tt parnnmatch}\pause
\item ... {\large You name it!}
\end{itemize}
\end{itemize}
\end{frame}

\title{Thank you very much!}
\frame{\maketitle}

\end{document}
{ "alphanum_fraction": 0.6859958932, "avg_line_length": 27.5452488688, "ext": "tex", "hexsha": "b48481b21406dc53fe64f925a1b5e5487ef497c4", "lang": "TeX", "max_forks_count": 26, "max_forks_repo_forks_event_max_datetime": "2021-09-30T08:12:48.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-09T00:53:54.000Z", "max_forks_repo_head_hexsha": "68f05b902742d235270169c355eb4824c64cbf5e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ArnaudKunzi/parallel", "max_forks_repo_path": "talks/20130718_stata_conference/20130718_stata_conference.tex", "max_issues_count": 86, "max_issues_repo_head_hexsha": "68f05b902742d235270169c355eb4824c64cbf5e", "max_issues_repo_issues_event_max_datetime": "2022-02-15T16:31:20.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-21T04:48:38.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ArnaudKunzi/parallel", "max_issues_repo_path": "talks/20130718_stata_conference/20130718_stata_conference.tex", "max_line_length": 178, "max_stars_count": 87, "max_stars_repo_head_hexsha": "68f05b902742d235270169c355eb4824c64cbf5e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ArnaudKunzi/parallel", "max_stars_repo_path": "talks/20130718_stata_conference/20130718_stata_conference.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-21T11:01:03.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-04T23:17:30.000Z", "num_tokens": 4260, "size": 12175 }
\documentclass[12pt,oneside,final]{amsart} % If final is removed above, useful metadata is displayed \title{Finite element methods} \author{Lauri Oksanen} \IfFileExists{tweakslo.sty}{\usepackage{tweakslo}}{ \usepackage{amssymb,thmtools,mathtools,todonotes,hyperref} \declaretheorem{theorem}\declaretheorem{definition}\declaretheorem{lemma}\declaretheorem{theorem}\declaretheorem{corollary}\declaretheorem{remark}\declaretheorem{example}} \newcommand{\HOX}[1]{\todo[noline,color=white,size=\footnotesize]{#1}} \newcommand{\TODO}[1]{\todo[inline,bordercolor=gray]{#1}} \def\p{\partial} \def\R{\mathbb R} \DeclareMathOperator{\supp}{supp} \DeclarePairedDelimiter\norm{\lVert}{\rVert} % To the students looking at this code: % If you wonder why not to put all the definitions in the style file, % the reason is that the code compiles without the style file, and % you can pass just a single file to your collaborators. % In general, I recommend using macros or snippets in a good editor, % and using TeX macros sparingly. This makes the life of your % collaborators easier. \usepackage{enumerate} \def\I{\mathcal I} \def\inter{\mathrm{int}} \DeclareMathOperator{\linspan}{span} \begin{document}\maketitle \noindent Lecture notes for \href{https://studies.helsinki.fi/courses/cu/hy-CU-141575726-2020-08-01}{Computational methods II} course at the University of Helsinki, licensed under the Creative Commons Attribution 4.0 International license. The \LaTeX\ source code is available in \href{https://github.com/uh-comp-methods2/lectures}{GitHub}. \tableofcontents \section{Introduction} The finite element method is a widely used method for numerically solving differential equations arising in engineering and mathematical modeling. There are many commercial (e.g.~\href{https://en.wikipedia.org/wiki/COMSOL_Multiphysics}{Comsol}) and open-source (e.g.~\href{https://en.wikipedia.org/wiki/FEniCS_Project}{Fenics}) software packages implementing sophisticated versions of the method. The method is very flexible, and it can be used to solve systems of equations describing many physical phenomena, see for example this \href{https://www.comsol.com/video/joule-heating-fuse-circuit-board-chapter-1}{video} on modeling of resistive heating in an aluminum fuse using Comsol. Rather than applying the method to complicated models, using complex and often opaque software, we will focus on its \begin{enumerate}[1. ] \item Mathematical foundation \item Implementation using \href{https://scikit-fem.readthedocs.io/en/latest/}{Scikit-fem}, written in pure Python \end{enumerate} The present document, covering the first topic, is complemented by Jupyter notebooks on the second topic. They are available in \href{https://github.com/uh-comp-methods2/notebooks}{GitHub}. Our presentation of basic concepts, Sobolev spaces and interpolation theory is inspired by Chapter 0 of \cite{BS}, Chapter 8 of \cite{Brezis}, and Chapter 1 of \cite{EG}, respectively. \section{Basic concepts} In these notes $I \subset \R$ is a closed, bounded, nontrivial interval, that is, $I$ is of the form $[x_L, x_R]$ where the left and right end points $x_L, x_R \in \R$ satisfy $x_L < x_R$. \subsection{Linear spaces of functions} Let us write $F(I)$ for the set of functions $u : I \to \mathbb R$. 
Then $F(I)$ is a \href{https://en.wikipedia.org/wiki/Vector_space#Notation_and_definition}{vector space} with respect to the usual pointwise addition and scalar multiplication
\begin{align*}
+ : F(I) \times F(I) \to F(I), \quad \cdot : \mathbb R \times F(I) \to F(I),
\end{align*}
defined for $u,v \in F(I)$ and $c \in \mathbb R$ by
\begin{align*}
(u + v)(x) = u(x) + v(x), \quad (cu)(x) = cu(x), \qquad x \in I.
\end{align*}
Indeed, it is easy to verify that the required 8 axioms are satisfied. For example, {\em associativity of vector addition} holds since for all $u,v,w \in F(I)$ there holds
\begin{align*}
(u + (v + w))(x) &= u(x) + (v+w)(x) = u(x) + v(x) + w(x) \\&= ((u + v) + w)(x).
\end{align*}
The other axioms follow from the properties of real numbers in a similar manner.

\begin{definition}[Space of continuous functions]
\begin{align*}
C(I) = \{ u \in F(I) \mid \text{$u$ is continuous on $I$}\}.
\end{align*}
\end{definition}

This space can be made a \href{https://en.wikipedia.org/wiki/Normed_vector_space}{normed vector space} by equipping it with the norm
\begin{align*}
\|u\|_{L^\infty(I)} = \sup_{x \in I} |u(x)|.
\end{align*}
We see that $C(I)$ is a subspace of $F(I)$. Indeed, sums and scalar multiples of continuous functions are continuous, and thus $C(I)$ is closed under addition and scalar multiplication.

\begin{definition}[Spaces of differentiable functions]
\begin{align*}
C^k(I) = \{ u \in C(I) \mid u', \dots, u^{(k)} \in C(I)\}, \quad k = 1,2,\dots.
\end{align*}
\end{definition}

It is easy to see that $C^k(I)$ is a subspace of $C(I)$. Occasionally we write $C^0(I) = C(I)$.

\begin{definition}[Spaces of integrable functions]
\begin{align*}
L^p(I) = \{u \in F(I) \mid \int_I |u(x)|^p dx < \infty \}, \quad p \ge 1.
\end{align*}
\end{definition}

The space $L^p(I)$ can be made a normed vector space by equipping it with the norm
\begin{align*}
\|u\|_{L^p(I)} = \left( \int_I |u(x)|^p dx \right)^{\frac1p}.
\end{align*}
For $p=2$, this norm coincides with that given by the inner product
\begin{align*}
(u, v) = \int_I u(x) v(x) dx, \quad (\cdot, \cdot) : L^2(I) \times L^2(I) \to \mathbb R.
\end{align*}
Thus $L^2(I)$ is an \href{https://en.wikipedia.org/wiki/Inner_product_space}{inner product space}. To get nice spaces, \href{https://en.wikipedia.org/wiki/Lebesgue_integration}{Lebesgue integration} should be used. We will get back to this later.

\begin{definition}[Spaces of polynomials]
\begin{align*}
\mathbb P_n = \{p : \R \to \R \mid \text{$p$ is a polynomial of degree $\le n$}\}, \quad n = 0, 1, \dots.
\end{align*}
\end{definition}

It can be shown that
\begin{align*}
1, x, x^2, \dots, x^n
\end{align*}
is a basis of $\mathbb P_n$. Thus $\mathbb P_n$ is a finite dimensional space of functions. We may view the polynomials in $\mathbb P_n$ as functions on $I$. Then
\begin{align*}
\mathbb P_n \subset C^k(I)
\end{align*}
for any $n,k=0,1,\dots$. Hence $C^k(I)$ is infinite dimensional, and so are $L^2(I)$ and $F(I)$ due to
\begin{align*}
C^k(I) \subset L^2(I) \subset F(I).
\end{align*}

\subsection{Inner product spaces}

Let $V$ be an inner product space and write $(\cdot, \cdot)$ for the inner product on $V$ and $\|\cdot\|$ for the norm induced by the inner product.

\begin{lemma}[\href{https://en.wikipedia.org/wiki/Cauchy\%E2\%80\%93Schwarz_inequality}{Cauchy--Schwarz inequality}]
\begin{align*}
|(u, v)| \le \|u\| \|v\|, \quad u,v \in V.
\end{align*}
\end{lemma}

\begin{lemma}[Orthogonality implies minimality]
Let $u \in V$ and let $S \subset V$ be a subspace.
Suppose that $s \in S$ satisfies
\begin{align*}
(u - s, v) = 0 \quad \text{for all $v \in S$}.
\end{align*}
Then $\| u - s \| = \min_{v \in S} \| u - v \|$.
\end{lemma}
\begin{proof}
If $u = s$ then both sides of the claimed equality are zero. Suppose now that $u \ne s$. Let $v \in S$. Then $v -s \in S$ implies
\begin{align*}
\| u - s \|^2 &= (u - s, u - s) = (u - s, u - v) + (u - s, v - s) = (u - s, u - v)
\\&\le \| u - s \| \| u - v \|.
\end{align*}
We may divide by $\| u - s \|$ as $u \ne s$. As $v \in S$ is arbitrary, the claim follows.
\end{proof}

\subsection{A boundary value problem}

To simplify the notation, we let
\begin{align*}
I = [0,1]
\end{align*}
for the rest of the section. Let $f \in C(I)$ and let $u \in C^2(I)$ solve the boundary value problem
\begin{align}\label{eq_poisson_1d}
\begin{cases}
-u'' = f & \text{on $I$},
\\
u(0) = 0 = u(1).
\end{cases}\end{align}
This is the one dimensional version of \href{https://en.wikipedia.org/wiki/Poisson's_equation}{Poisson's equation}. Define the linear space
\begin{align}\label{def_wrong_V}
\mathcal V = \{ v \in C^1(I) : v(0) = v(1) = 0 \},
\end{align}
and let $v \in \mathcal V$. Then, writing $(\cdot, \cdot)$ for the inner product on $L^2(I)$, integration by parts gives $-(u'', v) = (u', v')$. To summarize, \eqref{eq_poisson_1d} implies
\begin{align}\label{eq_weak_prelim}
(u', v') = (f, v) \quad \text{for all $v \in \mathcal V$}.
\end{align}
The converse holds as well, that is, \eqref{eq_weak_prelim} implies \eqref{eq_poisson_1d} for $u \in C^2(I)$. An elementary proof of this fact is given in \cite{BS}, see Theorem 0.1.4. We will return to this later when we have developed more sophisticated tools, see Lemma~\ref{lem_formulations} below.

We could call \eqref{eq_weak_prelim} a weak formulation of \eqref{eq_poisson_1d}, but we will reserve this term for a modification of \eqref{eq_weak_prelim} where $\mathcal V$ is replaced by a slightly different vector space.

We may view the left-hand side of \eqref{eq_weak_prelim} as a bilinear form on $\mathcal V$, that is, as a map $a: \mathcal V \times \mathcal V \to \R$, linear in both of its arguments. We write
\begin{align}\label{def_a}
a(u, v) = (u', v'), \quad a : \mathcal V \times \mathcal V \to \mathbb R,
\end{align}
to emphasize this point of view. Moreover, we may view $(f, v)$ as a linear form on $\mathcal V$, that is, as a linear map $L : \mathcal V \to \R$. We write
\begin{align}\label{def_L}
L(v) = (f,v), \quad L : \mathcal V \to \R.
\end{align}
The bilinear form $a$ gives an inner product on $\mathcal V$. Indeed, symmetry and bilinearity are clear, $a(u,u) \ge 0$ for $u \in \mathcal V$, and if $a(u,u) = 0$ then $u'(x) = 0$ for all $x \in I$. As $u(0) = 0$, it follows that $u(x) = 0$ for all $x \in I$.

\subsection{Galerkin method}

The \href{https://en.wikipedia.org/wiki/Galerkin_method}{Galerkin method} converts a continuous problem, commonly a weak formulation of a partial differential equation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.

\begin{theorem}[Galerkin solution]\label{th_gsol}
Let $S \subset V$ be two vector spaces. Suppose that $a$ is an inner product on $V$ and that $S$ is finite dimensional. Let $L : V \to \mathbb R$ be linear. Then there is a unique $u_S \in S$ such that
\begin{align*}
a(u_S,v) = L(v) \quad \text{for all $v \in S$}.
\end{align*}
\end{theorem}

We call $u_S$ the Galerkin solution.
\begin{proof}
Let $\phi_j$, $j=1,\dots,n$, be a basis of $S$, and write
\begin{align*}
u_S = \sum_{j=1}^n U_j \phi_j, \quad K_{ij} = a(\phi_i, \phi_j), \quad F_i = L(\phi_i), \qquad i,j=1,\dots,n.
\end{align*}
Moreover, write $U$ and $F$ for the vectors with elements $U_i$ and $F_i$, and $K$ for the matrix with elements $K_{ij}$. Then
\begin{align*}
a(u_S,v) = L(v) \quad \text{for all $v \in S$}
\end{align*}
is equivalent to $KU = F$. This is a square system of linear equations, and for such a system existence and uniqueness of a solution $U$ are equivalent. So it is enough to show that $KU = 0$ implies $U = 0$. But $KU = 0$ is equivalent to $a(u_S, u_S) = 0$, and this again implies that $u_S = 0$ since $a$ is an inner product. As $\phi_1,\dots,\phi_n$ are linearly independent, $u_S = 0$ gives $U = 0$.
\end{proof}

\begin{lemma}[Galerkin orthogonality]
Let $S \subset V$, $a$, and $L$ be as in the previous theorem. Suppose that $u \in V$ satisfies
\begin{align*}
a(u, v) = L(v) \quad \text{for all $v \in V$}.
\end{align*}
Then the Galerkin solution $u_S \in S$ satisfies
\begin{align*}
a(u-u_S,v) = 0 \quad \text{for all $v \in S$}.
\end{align*}
\end{lemma}
\begin{proof}
Simply compute
\begin{align*}
a(u-u_S,v) = a(u, v) - a(u_S, v) = L(v) - L(v) = 0.
\end{align*}
\end{proof}

We write $\|u\|_E = \sqrt{a(u,u)}$ for the norm induced by the inner product $a$. Galerkin orthogonality implies minimality:

\begin{corollary}[Abstract error estimate]\label{cor_abs_err}
Let $S \subset V$, $a$, and $u$ be as in the previous lemma. Then the Galerkin solution $u_S \in S$ satisfies
\begin{align*}
\|u-u_S\|_E = \min_{v \in S} \|u - v\|_E.
\end{align*}
\end{corollary}

\subsection{Linear interpolant}

Recall that linear interpolation was studied in \href{https://github.com/uh-comp-methods1/notebooks/blob/main/interpolation/lecture.ipynb}{Computational methods 1}. If you haven't taken that course, don't worry, we will revisit linear interpolation in Section \ref{sec_interp}, looking at it from a slightly different angle. The results in the present section are not used in the theory in its final form.

Let $n \ge 1$ be an integer, let
\begin{align}\label{def_mesh}
0 = x_0 < x_1 < \dots < x_n = 1,
\end{align}
and write
\begin{align}\label{def_mesh_size}
h = \max_{i=1,\dots,n} |x_i - x_{i-1}|.
\end{align}
The {\em linear interpolant} of a function $u \in C(I)$ is
\begin{align*}
\I_h u(x) = \frac{x - x_{i-1}}{x_i - x_{i-1}} u(x_i) + \frac{x_{i} - x}{x_i - x_{i-1}} u(x_{i-1}),
\qquad x \in I_i, \quad i = 1,\dots,n,
\end{align*}
where $I_i$ is the subinterval $[x_{i-1}, x_i]$. There holds
\begin{align*}
\I_h u(x_i) = u(x_i), \quad i=0,\dots,n.
\end{align*}
Moreover, $\I_h u$ is continuous, and $\I_h u|_{I_i} \in \mathbb P_1$ for each $i=1,\dots,n$. Recall the following theorem.

\begin{theorem}[\href{https://nbviewer.org/github/uh-comp-methods1/notebooks/blob/main/interpolation/lecture.ipynb\#Theorem:-linear-interpolation-error}{Linear interpolation error}]
Let $u \in C^2(I)$. Then
\begin{align*}
\|u - \I_h u\|_{L^\infty(I)} \lesssim \|(h \partial)^2 u\|_{L^\infty(I)}.
\end{align*}
\end{theorem}

Here $\lesssim$ means that there is a constant, independent of $u$ and $h$, such that the left-hand side is bounded by the constant times the right-hand side. It follows from the \href{https://nbviewer.org/github/uh-comp-methods1/notebooks/blob/main/interpolation/lecture.ipynb#Theorem:-error-in-differentiation}{error in differentiation} theorem that
\begin{align}\label{eq_err_in_diff}
\max_{i=1,\dots,n}\|(h\partial)(u - \I_h u)\|_{L^\infty(I_i)} \lesssim \|(h \partial)^2 u\|_{L^\infty(I)}.
\end{align}

\subsection{P1 finite element space}

Consider the mesh (\ref{def_mesh}) and define
\begin{align}\label{def_P1_basis}
\phi_i(x) = \begin{cases}
\frac{x - x_{i-1}}{x_i - x_{i-1}} & \text{if $x \in I_i$},
\\
\frac{x_{i+1} - x}{x_{i+1} - x_i} & \text{if $x \in I_{i+1}$},
\\
0 & \text{otherwise},
\end{cases}
\qquad i = 1,\dots,n-1.
\end{align}
Following the notation on p.~4 of \cite{EG}, we write
\begin{align}\label{def_S}
S &= \linspan \{ \phi_1,\dots,\phi_{n-1} \},
\\\notag
P^1_{h} &= \{ u \in C(I) \mid \text{$u|_{I_{i}} \in \mathbb P_1$ for $i=1,\dots,n$} \},
\\\notag
P^1_{h,0} &= \{ u \in P^1_{h} \mid u(0) = u(1) = 0 \}.
\end{align}

\begin{lemma}
$\dim S = n-1$ and $S = P^1_{h,0}$.
\end{lemma}

\begin{proof}
Note that $\phi_i$ is continuous at $x_i$ and $\phi_i(x_i) = 1$. It is also continuous at $x_{i-1}$ and at $x_{i+1}$, and $\phi_i(x_k) = 0$ for $i \ne k$. In particular, $\phi_i$ is continuous on $I$, and we see that $S \subset P_{h,0}^1$. We establish $\dim S = n - 1$ by showing that $\phi_1,\dots,\phi_{n-1}$ give a basis of $S$. It is enough to show that they are linearly independent. Suppose that
\begin{align*}
c_1 \phi_1 + \dots + c_{n-1} \phi_{n-1} = 0
\end{align*}
for some $c_i \in \R$. Evaluating this function at $x_k$, we get $c_k = 0$.

Let $u \in P^1_{h,0}$. It remains to show that there are $c_i \in \R$ such that, writing
\begin{align*}
v = c_1 \phi_1 + \dots + c_{n-1} \phi_{n-1},
\end{align*}
we have $u = v$. Take $c_i = u(x_i)$. Then the polynomials $u|_{I_i}, v|_{I_i} \in \mathbb P_1$ coincide at the points $x_{i-1}, x_i \in I_i$, and hence everywhere in $I_i$.
\end{proof}

\begin{remark}\label{rem_Ih}
The function $v$ in the above proof is $\I_h u$.
\end{remark}

\subsection{Peeking ahead\label{sec_peek}}

Observe that $\phi_i \notin C^1(I)$, and therefore $\phi_i$ is not in the space $\mathcal V$ that we used earlier, see \eqref{def_wrong_V}. We will define a weaker notion of derivative so that $\phi_i$ is differentiable in this weak sense. Letting $V$ be the space of weakly differentiable functions on $I$, with vanishing boundary conditions, we will show that $S$ is a subspace of $V$ and that
\begin{align*}
a(u, v) = (u', v'), \quad a : V \times V \to \mathbb R,
\end{align*}
is an inner product on $V$. Here the prime now stands for the weak derivative. (For differentiable functions, the weak derivative is the usual derivative, so this notation will not cause trouble.)

The abstract error estimate for the Galerkin solution (Corollary \ref{cor_abs_err}) holds for the spaces $S$ and $V$. We can turn the abstract error estimate into a concrete one as follows. Write $\|\cdot\|$ for the norm in $L^2(I)$ and suppose that $u \in V \cap C^2(I)$ satisfies
\begin{align*}
a(u, v) = L(v) \quad \text{for all $v \in V$},
\end{align*}
where $L$ is a linear form. Then, using the fact that $\I_h u \in S$,
\begin{align}\label{eq_err_peek}
\|(h\partial)(u-u_S)\| &= h\|u-u_S\|_E = h \min_{v \in S}\|u-v\|_E \le \|(h\partial)(u-\I_h u)\|
\\\notag&\le \max_{i=1,\dots,n}\|(h\partial)(u - \I_h u)\|_{L^\infty(I_i)} \lesssim\|(h \partial)^2 u\|_{L^\infty(I)}.
\end{align}
The last inequality is the bound \eqref{eq_err_in_diff} for the derivative of the linear interpolation error. It is possible to show stronger estimates, and we will return to this once we have defined and studied weak derivatives.
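In the same peeking-ahead spirit, the linear system $KU = F$ from the proof of Theorem~\ref{th_gsol} can be made fully concrete for the basis \eqref{def_P1_basis}. On a uniform mesh with spacing $h$ the derivative $\phi_i'$ equals $1/h$ on $I_i$ and $-1/h$ on $I_{i+1}$, so integrating the products of these piecewise constant derivatives gives $a(\phi_i, \phi_i) = 2/h$ and $a(\phi_i, \phi_{i \pm 1}) = -1/h$; in particular, $K$ is tridiagonal. The following Python sketch is an added illustration, not part of the course material proper: the trapezoidal approximation $F_i \approx h f(x_i)$ of the load vector and the choice $f(x) = \pi^2 \sin(\pi x)$ are simplifying assumptions.

\begin{verbatim}
import numpy as np

n = 8                             # number of subintervals
h = 1.0 / n                       # uniform mesh size
x = np.linspace(0.0, 1.0, n + 1)  # mesh points x_0, ..., x_n

# Tridiagonal stiffness matrix for the hat basis on a uniform mesh:
# K_ii = 2/h and K_{i,i+1} = K_{i+1,i} = -1/h.
K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

# Right-hand side chosen so that u(x) = sin(pi x) solves -u'' = f with
# u(0) = u(1) = 0; the load (f, phi_i) is approximated by h f(x_i).
f = lambda t: np.pi**2 * np.sin(np.pi * t)
F = h * f(x[1:-1])

U = np.linalg.solve(K, F)                # coefficients in the hat basis
u_S = np.concatenate(([0.0], U, [0.0]))  # nodal values, zero on the boundary

print(np.max(np.abs(u_S - np.sin(np.pi * x))))  # small, decreases with h
\end{verbatim}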
\section{On functional and real analysis}

\begin{definition}[Completeness]\label{def_complete}
A normed vector space $X$ is complete if all its Cauchy sequences converge, that is, the following holds for all sequences $u_j \in X$, $j=1,2,\dots$: if $\|u_j - u_k\| \to 0$ as $j,k \to \infty$ then there is $u \in X$ such that $\|u_j - u\| \to 0$ as $j \to \infty$. Here $\|\cdot\|$ is the norm on $X$.
\end{definition}

Completeness is needed in order to be able to do analysis efficiently. For example, the crucial difference between the sets of real and rational numbers is that the former is complete whereas the latter is not. The complete inner product spaces are called Hilbert spaces, and the interplay between orthogonality and completeness allows for building a very elegant and efficient theory, based on results like Theorem \ref{th_riesz} below.

The space $L^2(I)$ is a Hilbert space, whereas the spaces $C^k(I)$, $k=0,1,\dots$, are not complete with respect to the norm associated to the inner product
\begin{align*}
(u,v) = \int_I u(x)v(x)dx.
\end{align*}
The latter fact follows from Corollary \ref{cor_density_L2} below. For this reason, we are forced to consider nonsmooth functions. As it is nonetheless easier to work with smooth rather than nonsmooth functions, we will mostly follow the scheme:
\begin{enumerate}[1. ]
\item Approximate nonsmooth functions with smooth functions
\item Argue with smooth functions
\item Extend the argument to nonsmooth functions
\end{enumerate}
The third step is achieved by using results such as Lemma~\ref{lem_cont_density} below.

Let $X$ and $Y$ be normed vector spaces, and write $\|\cdot\|_X$ and $\|\cdot\|_Y$ for their norms.

\begin{definition}[Continuous linear map]\label{def_cont}
A linear map $A : X \to Y$ is continuous if there is $C > 0$ such that for all $u \in X$ there holds $\|A u\|_Y \le C \|u\|_X$.
\end{definition}

\begin{remark}
Let $A : X \to Y$ be linear and continuous. Then for all $\epsilon > 0$ there is $\delta > 0$ such that for all $u, v \in X$
\begin{align*}
\|u-v\|_X < \delta \quad \implies \quad \|Au-Av\|_Y < \epsilon.
\end{align*}
\end{remark}

\begin{proof}
Let $\epsilon > 0$ and choose $\delta = \epsilon / C$. Then
\begin{align*}
\|Au-Av\|_Y = \|A(u-v)\|_Y \le C \|u-v\|_X < C \delta = \epsilon.
\end{align*}
\end{proof}

\begin{definition}[Dense subset]
A set $D \subset X$ is dense if for all $u \in X$ there is a sequence $u_j \in D$, $j=1,2,\dots$, such that $u_j \to u$ in $X$ as $j \to \infty$.
\end{definition}

\begin{lemma}[Closure]\label{lem_cont_density}
Let $A : X \to Y$ be linear and continuous, and let $D \subset X$ be dense. If there is $C \ge 0$ such that
\begin{align}\label{eq_cont_density}
\|Au\|_Y \le C \|u\|_X
\end{align}
holds for all $u \in D$, then \eqref{eq_cont_density} holds also for all $u \in X$.
\end{lemma}

\begin{proof}
Let $u \in X$ and let a sequence $u_j \in D$, $j=1,2,\dots$, satisfy $u_j \to u$ as $j \to \infty$. We have
\begin{align}\label{eq_cont_dens_aux}
\|Au\|_Y &\le \|Au_j\|_Y + \|Au - Au_j\|_Y \le C \|u_j\|_X + \|Au - Au_j\|_Y
\\\notag&\le C \|u\|_X + C\|u_j - u\|_X + \|Au - Au_j\|_Y.
\end{align}
The claim follows from the continuity of $A$ by letting $j \to \infty$.
\end{proof}

\begin{remark}\label{rem_cont_density}
We could replace \eqref{eq_cont_density} in Lemma \ref{lem_cont_density} with
\begin{align*}
\|Au\|_Y \le p(u),
\end{align*}
where $p$ is a continuous \href{https://en.wikipedia.org/wiki/Seminorm}{seminorm} on $X$.
\end{remark}

\subsection{Square integrable functions}

We will need some results that are proven in \href{https://studies.helsinki.fi/opintotarjonta/cu/hy-CU-133769219-2020-08-01}{Introduction to real and Fourier analysis}.

\begin{theorem}[Completeness of $L^p$, Th. 1.45 of \cite{Holopainen}]
$L^p(I)$ is complete for any $p \ge 1$.
\end{theorem}

Also the case $I = \R$ is covered by the theorem, and so is the case $p = \infty$; see Section 1.28 of \cite{Holopainen} for the definition of $L^\infty(I)$, the space of essentially bounded functions. The theorem requires using the Lebesgue integral in the definition of $L^p(I)$, $1 \le p < \infty$.

We write $I^\inter$ for the interior of $I$, that is, if $I = [x_L, x_R]$ then $I^\inter = (x_L, x_R)$. Moreover, given $u \in C(I)$ we write $\supp(u)$ for the closure of
\begin{align*}
\{x \in I \mid u(x) \ne 0\}.
\end{align*}

\begin{definition}[Spaces of smooth functions]
\begin{align*}
C^\infty(I) &= \{ u \mid \text{$u \in C^k(I)$ for all $k=0,1,\dots$} \},
\\
C_0^\infty(I) &= \{ u \in C^\infty(I) \mid \text{$\supp(u) \subset I^\inter$}\}.
\end{align*}
\end{definition}

All the function spaces mentioned so far can be considered in the case $I = \R$ as well.

\begin{theorem}[Smoothing by convolution, Th. 2.26 and Rem. 2.28 of \cite{Holopainen}]\label{th_smoothing}
Let $f \in L^2(\R)$ and $g \in C_0^\infty(\R)$. Then the convolution
\begin{align*}
f * g(x) = \int_\R f(y) g(x - y) dy
\end{align*}
satisfies $f * g \in C^\infty(\R)$.
\end{theorem}

It is shown on p. 30 of \cite{Holopainen} that there is a sequence of functions $g_k \in C_0^\infty(\R)$, $k=1,2,\dots$, taking positive values and satisfying
\begin{align}\label{def_mollifier}
\supp(g_k) \subset \{x \in \R \mid |x| \le 1/k\}, \quad \int_\R g_k(x) dx = 1.
\end{align}
Such a sequence is often called a \href{https://en.wikipedia.org/wiki/Mollifier}{mollifier}.

\begin{theorem}[Mollification, Th. 2.34 of \cite{Holopainen}]\label{th_mollification}
Let $g_k \in C_0^\infty(\R)$, $k=1,2,\dots$, be a mollifier. Then for all $f \in L^2(\R)$ there holds $f * g_k \to f$ in $L^2(\R)$ as $k \to \infty$.
\end{theorem}

\begin{corollary}[Density in $L^2$]\label{cor_density_L2}
The space $C_0^\infty(I)$ is dense in $L^2(I)$.
\end{corollary}

\begin{proof}
Let $u \in L^2(I)$. Let $\delta > 0$. Writing $I = [x_L, x_R]$, we set
\begin{align*}
K = [x_L + \delta, x_R - \delta].
\end{align*}
We define the \href{https://en.wikipedia.org/wiki/Indicator_function}{indicator function}
\begin{align*}
1_K(x) = \begin{cases}
1 & x \in K,
\\
0 & \text{otherwise},
\end{cases}
\end{align*}
and write $f = u 1_K$. By Theorem \ref{th_mollification} the sequence $f_k = f * g_k$, $k=1,2,\dots$, converges to $f$ in $L^2(I)$ as $k \to \infty$, and $f_k \in C^\infty(\R)$ in view of Theorem \ref{th_smoothing}. It can be shown that $\supp(f_k) \subset I^\inter$ for large $k$. Hence $f_k \in C_0^\infty(I)$ for large $k$.

It follows from the \href{https://en.wikipedia.org/wiki/Dominated_convergence_theorem}{dominated convergence} theorem that $u1_K \to u$ in $L^2(I)$ as $\delta \to 0$. Let $j = 1,2,\dots$ and choose $\delta > 0$ such that
\begin{align*}
\|u1_K - u\| \le 1/j.
\end{align*}
By the above argument there is $u_j \in C_0^\infty(I)$ such that
\begin{align*}
\|u_j - u1_K\| \le 1/j.
\end{align*}
Now
\begin{align*}
\|u_j - u\| \le \|u_j - u1_K\| + \|u1_K - u\| \le 2/j \to 0, \quad j \to \infty.
\end{align*}
\end{proof}

\section{Sobolev spaces}

\subsection{Weak derivative}

Analogously to (\ref{def_L}), it is often convenient to view a function $f \in L^2(I)$ as the linear form $L_f : C_0^\infty(I) \to \R$ defined by
\begin{align*}
L_f(\phi) = (f, \phi),
\end{align*}
where $(\cdot,\cdot)$ is the inner product on $L^2(I)$. The following lemma shows that $f$ can be recovered from $L_f$, in other words, the map $f \mapsto L_f$ is injective.

\begin{lemma}[Functions as linear forms]
Let $u, v \in L^2(I)$ and suppose that
\begin{align*}
L_u(\phi) = L_v(\phi), \quad \text{for all $\phi \in C_0^\infty(I)$}.
\end{align*}
Then $u = v$.
\end{lemma}

\begin{proof}
The linear form
\begin{align*}
A(w) = (u - v, w), \quad A : L^2(I) \to \R,
\end{align*}
is continuous due to the Cauchy--Schwarz inequality. We have $A \phi = 0$ for all $\phi \in C_0^\infty(I)$. Taking $X=L^2(I)$, $D=C_0^\infty(I)$ and $C=0$ in Lemma \ref{lem_cont_density}, we see that $Aw = 0$ for all $w \in L^2(I)$. Here we used also Corollary~\ref{cor_density_L2}. Taking $w = u-v$ leads to $\|u-v\|^2 = 0$.
\end{proof}

\begin{definition}[Weak derivative]
Let $k=1,2,\dots$. The $k$th weak derivative of $f \in L^2(I)$ is the linear form
\begin{align*}
L(\phi) = (-1)^k (f, \phi^{(k)}), \quad \phi \in C_0^\infty(I).
\end{align*}
\end{definition}

\begin{lemma}[Classical derivative is also the weak one]\label{lem_weak_classic}
Let $k=1,2,\dots$. If $f \in C^k(I)$ then the $k$th weak derivative of $f$ is the linear form $L_{f^{(k)}}$.
\end{lemma}

\begin{proof}
Consider first the case $k=1$. Let $\phi \in C_0^\infty(I)$ and integrate by parts
\begin{align*}
-(f, \phi') = (f', \phi) = L_{f'}(\phi).
\end{align*}
In general, we integrate by parts $k$ times.
\end{proof}

\begin{example}\label{ex_relu}
Let $I = [-1,1]$ and consider the function $u \in L^2(I)$ defined by
\begin{align*}
u(x) = \begin{cases}
0 & x < 0,
\\
x & x > 0.
\end{cases}
\end{align*}
(In the context of artificial neural networks, this function is called the \href{https://en.wikipedia.org/wiki/Rectifier_(neural_networks)}{rectifier} or ReLU as in Rectified Linear Unit.) The weak derivative of $u$ is the linear form $L_w$ where $w \in L^2(I)$ is defined by
\begin{align*}
w(x) = \begin{cases}
0 & x < 0,
\\
1 & x > 0.
\end{cases}
\end{align*}
\end{example}

\begin{proof}
Let $\phi \in C_0^\infty(I)$ and integrate by parts
\begin{align*}
-(u, \phi') = -\int_0^1 x \phi'(x) dx = \int_0^1 \phi(x) dx = (w, \phi).
\end{align*}
Observe that the boundary terms coming from the integration by parts vanish since $x$ vanishes at $x = 0$ and $\phi(x)$ at $x = 1$.
\end{proof}

\begin{example}\label{ex_deriv_P1h}
Recall that the space $P^1_h$ is defined by (\ref{def_S}). The weak derivative of $u \in P^1_h$ is the linear form $L_w$ where $w \in L^2(I)$ is defined by
\begin{align*}
w(x) = \frac{u(x_i) - u(x_{i-1})}{x_i - x_{i-1}}, \quad x \in I_i,\ i=1,\dots,n.
\end{align*}
\end{example}

\subsection{Spaces of weakly differentiable functions}

If there is $w \in L^2(I)$ such that the weak derivative of $u \in L^2(I)$ coincides with $L_w$ then we write $w = u'$. Moreover, we write $u' \in L^2(I)$ for $u \in L^2(I)$ if such a $w \in L^2(I)$ exists. Observe that if $w, \tilde w \in L^2(I)$ satisfy $w = u' = \tilde w$ then $L_w = L_{\tilde w}$, and hence also $w = \tilde w$. So the notation makes sense, and we extend it to the higher weak derivatives in an analogous manner.
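The computation in Example~\ref{ex_relu} can also be verified numerically. The following Python sketch is an added illustration rather than part of the notes: it checks the identity $-(u, \phi') = (w, \phi)$ by quadrature. The test function $\phi(x) = \cos(\pi x / 2)^2$ is a simplifying assumption; it vanishes at both endpoints of $I = [-1,1]$, which is all that the integration by parts uses, and it keeps the exact values elementary (both integrals equal $1/2$).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

u = lambda x: max(x, 0.0)                    # ReLU
w = lambda x: 1.0 if x > 0 else 0.0          # its weak derivative
phi = lambda x: np.cos(np.pi * x / 2.0)**2   # test function, phi(-1) = phi(1) = 0
dphi = lambda x: -(np.pi / 2.0) * np.sin(np.pi * x)

lhs = -quad(lambda x: u(x) * dphi(x), -1.0, 1.0)[0]
rhs = quad(lambda x: w(x) * phi(x), -1.0, 1.0)[0]
print(lhs, rhs)   # both equal 1/2 up to quadrature error
\end{verbatim}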
Let us make one more consistency check. If $u' = w \in L^2(I)$ then the definition of the weak derivative can be applied to $w$. Let $\phi \in C_0^\infty(I)$. Then $\phi' \in C_0^\infty(I)$ as well and
\begin{align*}
(-1)^2 (u, \phi'') = (-1)^2 (u, (\phi')') = (-1) (w, \phi').
\end{align*}
Therefore the second weak derivative of $u$ equals the first weak derivative of $w$.

We would get a more elegant theory by viewing the linear forms
\begin{align*}
L : C_0^\infty(I) \to \R,
\end{align*}
rather than the functions in $L^2(I)$, as the objects of primary interest. However, to get a good theory, we would need to use the subspace of linear forms called \href{https://en.wikipedia.org/wiki/Distribution_(mathematics)}{distributions}. Distributions are defined by certain continuity properties that are somewhat technical to state, and we do not use them for this reason.

\begin{definition}[Sobolev spaces]
\begin{align*}
H^k(I) = \{ u \in L^2(I) \mid u', \dots, u^{(k)} \in L^2(I)\}.
\end{align*}
\end{definition}

Removing the shorthand notation, the definition of $H^1(I)$ reads
\begin{align*}
H^1(I) = \{ u \in L^2(I) \mid \ &\text{there is $w \in L^2(I)$ such that}
\\
&\text{$(w, \phi) = -(u, \phi')$ for all $\phi \in C_0^\infty(I)$} \}.
\end{align*}
We equip $H^1(I)$ with the inner product
\begin{align*}
(u,v)_{H^1(I)} = (u,v) + (u', v').
\end{align*}
Let us show that this is indeed an inner product. It is clearly symmetric. To show that it is linear in both of its arguments, it is enough to show that the map $u \mapsto u'$ is linear. Let $u, \tilde u \in H^1(I)$ and let $c \in \R$. Then
\begin{align*}
((u + c \tilde u)', \phi) &= -(u + c \tilde u, \phi') = -(u, \phi') - c (\tilde u, \phi') = (u', \phi) + c(\tilde u', \phi)
\\&= (u' + c \tilde u', \phi).
\end{align*}
This shows the required linearity
\begin{align}\label{eq_weakd_lin}
(u + c \tilde u)' = u' + c \tilde u'.
\end{align}
Finally, $(u,u)_{H^1(I)} > 0$ for $u \ne 0$ since
\begin{align*}
(u,u) + (u', u') \ge (u, u) > 0.
\end{align*}

It follows from Example \ref{ex_deriv_P1h} that:

\begin{example}\label{ex_P1_basis}
The basis functions \eqref{def_P1_basis} are in $H^1(I)$ with $I = [0,1]$.
\end{example}

See Chapter 8 of \cite{Brezis} for the proofs of the following results.

\begin{theorem}[Completeness of $H^1$, Prop. 8.1]
$H^1(I)$ is a Hilbert space.
\end{theorem}

\begin{lemma}[Vanishing weak derivative, Lem. 8.1]
If $u \in L^2(I)$ satisfies $u' = 0$, then $u$ is a constant.
\end{lemma}

\begin{theorem}[Extension, Th. 8.6]
There is a linear map $E : H^1(I) \to H^1(\R)$ satisfying $E u|_{I} = u$ and
\begin{align*}
\|E u\|_{L^2(\R)} \lesssim \|u\|_{L^2(I)}, \quad \|E u\|_{H^1(\R)} \lesssim \|u\|_{H^1(I)}.
\end{align*}
\end{theorem}

\begin{theorem}[Density in $H^1$, Th. 8.7]\label{th_density_H1}
Let $u \in H^1(I)$. Then there is a sequence $u_j \in C_0^\infty(\R)$, $j=1,2,\dots$, such that $u_j|_{I} \to u$ in $H^1(I)$.
\end{theorem}

\begin{theorem}[Sobolev embedding, Th. 8.8]\label{th_sob_embedding}
$H^1(I) \subset C(I)$ and
\begin{align*}
\|u\|_{L^\infty(I)} \lesssim \|u\|_{H^1(I)}.
\end{align*}
\end{theorem}

In view of Theorem \ref{th_sob_embedding}, writing $I=[x_L, x_R]$, we may define
\begin{align*}
H_0^1(I) = \{u \in H^1(I) \mid u(x_L) = 0 = u(x_R) \}.
\end{align*}

\begin{theorem}[Density in $H^1_0$, Th. 8.12]\label{th_density_H01}
The space $C_0^\infty(I)$ is dense in $H^1_0(I)$.
\end{theorem}

\begin{proposition}[Poincar\'e inequality, Prop. 8.13]\label{prop_poincare}
\begin{align*}
\|u\|_{H^1(I)} \lesssim \|u'\|_{L^2(I)}, \quad u \in H_0^1(I).
\end{align*}
\end{proposition}

We leave proving the following theorem as an exercise.

\begin{theorem}[Density in $H^k$]\label{th_density_H2}
The space $C^\infty(I)$ is dense in $H^k(I)$ for any $k=1,2,\dots$.
\end{theorem}

\section{P1 method in one dimension}

\subsection{Formulation of the method}

Let us revisit the sketch in Section~\ref{sec_peek}, and make it precise. We set
\begin{align*}
I = [0,1], \quad V = H_0^1(I),
\end{align*}
and define the bilinear form
\begin{align*}
a(u, v) = (u', v'), \quad a : V \times V \to \mathbb R.
\end{align*}
Then $a$ is an inner product. Indeed, if $a(u,u) = 0$ then $u=0$ by the Poincar\'e inequality. Let $S$ be the finite dimensional space in \eqref{def_S}. It follows from Example~\ref{ex_P1_basis} that $S \subset V$. Let $f \in L^2(I)$ and define the linear form
\begin{align*}
L(v) = (f, v), \quad L : V \to \R.
\end{align*}
By Theorem \ref{th_gsol} there is a unique $u_h \in P_{h,0}^1$ such that
\begin{align}\label{def_P1_fem}
% To the students looking at this code:
% In general I recommend to avoid using manual spacing as below,
% but I feel that tuning the spacing is justifiable here.
\ \, a(u_h,v) = L(v) \quad \text{for all $v \in P_{h,0}^1$}.
\end{align}
We say that \eqref{def_P1_fem} defines the P1 finite element method for the weak formulation,
\begin{align}\label{def_weak_form}
a(u, v) = L(v) \quad \text{for all $v \in V$},
\end{align}
of the boundary value problem \eqref{eq_poisson_1d}.

\begin{lemma}\label{lem_formulations}
The following are equivalent for $u \in V$:
\begin{enumerate}
\item the weak formulation \eqref{def_weak_form} holds,
\item $a(u, \phi) = L(\phi)$ for all $\phi \in C_0^\infty(I)$,
\item $-u'' = f$ in the sense of weak derivatives.
\end{enumerate}
Moreover, if $u \in C^2(I) \cap V$ then they imply $-u'' = f$ in the sense of classical derivatives.
\end{lemma}

\begin{proof}
It is clear that (1) implies (2) since $C_0^\infty(I) \subset V$. Let us show that (2) implies (1). The linear form
\begin{align*}
Av = a(u,v) - L(v), \quad A : V \to \R,
\end{align*}
is continuous due to the Cauchy--Schwarz inequality. We have $A\phi = 0$ for all $\phi \in C_0^\infty(I)$. Taking $X = V$, $D = C_0^\infty(I)$ and $C = 0$ in Lemma \ref{lem_cont_density}, we see that $Av = 0$ for all $v \in V$. Here we used also Theorem \ref{th_density_H01}.

Let us turn to the equivalence of (2) and (3). By definition, the weak derivative of $-u' \in L^2(I)$ is the linear form
\begin{align*}
\phi \mapsto (u', \phi') = a(u, \phi), \quad \phi \in C_0^\infty(I).
\end{align*}
Thus it is equal to $L_f = L$ if and only if (2) holds.

Suppose $u \in C^2(I) \cap V$. By Lemma \ref{lem_weak_classic}, the second weak derivative of $-u$ is equal to $L_{-u''}$. Thus $L_{-u''} = L_f$ by (3) and this yields $-u'' = f$.
\end{proof}

\begin{theorem}[Preliminary error estimate]
Let $u \in C^2(I)$ solve \eqref{eq_poisson_1d} and let $u_h \in P_{h,0}^1$ solve \eqref{def_P1_fem}. Then
\begin{align*}
\|(h\partial)(u-u_h)\| \lesssim \|(h \partial)^2 u\|_{L^\infty(I)}.
\end{align*}
\end{theorem}

\begin{proof}
Recall that $u$ satisfies (\ref{eq_weak_prelim}). Moreover, $u \in C^2(I) \cap V$ due to the boundary conditions in \eqref{eq_poisson_1d}. Hence (\ref{def_weak_form}) holds by the implication (2)$\implies$(1) in Lemma~\ref{lem_formulations}. Recall that $P_{h,0}^1 = S \subset V$, $a$ is an inner product on $V$, and that $S$ is finite dimensional. We see that the abstract error estimate in Corollary \ref{cor_abs_err} holds. Remark \ref{rem_Ih} implies $\I_h u \in S$.
Let us now repeat the argument \eqref{eq_err_peek}, with all the steps fully justified,
\begin{align*}
\|(h\partial)(u-u_h)\| &= h\|u-u_h\|_E = h \min_{v \in S}\|u-v\|_E \le \|(h\partial)(u-\I_h u)\|
\\&\le \max_{i=1,\dots,n}\|(h\partial)(u - \I_h u)\|_{L^\infty(I_i)} \lesssim \|(h \partial)^2 u\|_{L^\infty(I)}.
\end{align*}
Here the first equality follows from the definitions
\begin{align}\label{eq_E_recall}
\|w\|_E^2 = a(w,w) = \|w'\|^2, \quad w \in V,
\end{align}
the second equality is Corollary \ref{cor_abs_err}, the first inequality follows from $\I_h u \in S$, together with \eqref{eq_E_recall}, the second inequality simply replaces $(u - \I_h u)'|_{I_i} \in C^1(I_i)$ by its maximum on each $I_i$, and the last inequality is \eqref{eq_err_in_diff}.
\end{proof}

The assumption that $u \in C^2(I)$ solves \eqref{eq_poisson_1d} can be replaced with the assumption that $u \in H^2(I) \cap V$ satisfies the weak formulation \eqref{def_weak_form}, while still getting the same order of convergence. We will show this next.

\subsection{Interpolation estimates\label{sec_interp}}

Due to the embedding $H^1(I) \subset C(I)$ we may view $\I_h$ as a linear map
\begin{align*}
\I_h : H^1(I) \to P^1_h.
\end{align*}

\begin{theorem}[Continuity of interpolation]
\begin{align*}
\|\I_h u\|_{H^1(I)} \lesssim \|u\|_{H^1(I)}, \quad u \in H^1(I).
\end{align*}
\end{theorem}

\begin{proof}
Step 1 (smooth case). We will show that
\begin{align}\label{eq_Ih_cont_pre}
\|\I_h u\|_{H^1(I)} \lesssim \|u\|_{H^1(I)}, \quad u \in C^\infty(I).
\end{align}
In view of
\begin{align}\label{eq_Ih_L2_cont}
\|\I_h u\| \lesssim \|\I_h u\|_{L^\infty(I)} \le \|u\|_{L^\infty(I)} \lesssim \|u\|_{H^1(I)}, \quad u \in H^1(I),
\end{align}
it is enough to show that
\begin{align*}
\|\p \I_h u\| \lesssim \|u\|_{H^1(I)}, \quad u \in C^\infty(I).
\end{align*}
Let $u \in C^\infty(I)$. By Example \ref{ex_deriv_P1h}
\begin{align*}
\p \I_h u|_{I_i} = \frac{u(x_i) - u(x_{i-1})}{x_i - x_{i-1}}.
\end{align*}
Moreover, using the Cauchy--Schwarz inequality
\begin{align*}
u(x_i) - u(x_{i-1}) = \int_{I_i} u'(x) dx \le |x_i - x_{i-1}|^{\frac12} \|u'\|_{L^2(I_i)}.
\end{align*}
Thus
\begin{align*}
\|\p \I_h u\|^2 &= \sum_{i=1}^n \int_{I_i} \frac{|u(x_i) - u(x_{i-1})|^2}{|x_i - x_{i-1}|^2} dx = \sum_{i=1}^n \frac{|u(x_i) - u(x_{i-1})|^2}{|x_i - x_{i-1}|}
\\&\le \sum_{i=1}^n \|u'\|_{L^2(I_i)}^2 \le \|u\|_{H^1(I)}^2.
\end{align*}

Step 2 (continuity). Recall that the notation $\lesssim$ means that the implicit constant in the inequality is independent of both $u$ and $h$. In this step we show that $\I_h : H^1(I) \to H^1(I)$ is continuous, but the constant $C > 0$ in Definition \ref{def_cont} is allowed to depend on $h$. There holds
\begin{align*}
\I_h u = \sum_{i=0}^n u(x_i) \phi_i,
\end{align*}
where the basis functions $\phi_i$ are defined by \eqref{def_P1_basis}. (The definition extends to the cases $i=0,n$ by taking $I_0$ and $I_{n+1}$ to be, say, empty sets.) Hence
\begin{align*}
\|\I_h u\|_{H^1(I)} \le \|u\|_{L^\infty(I)} \sum_{i=0}^n \|\phi_i\|_{H^1(I)} \le C_h \|u\|_{H^1(I)},
\end{align*}
where the constant $C_h > 0$ depends on $h > 0$, but not on $u$.

Step 3 (closure). In view of \eqref{eq_Ih_cont_pre} and the density in Theorem \ref{th_density_H1}, the claim follows from Lemma \ref{lem_cont_density} with $A=\I_h$, $X = H^1(I)$ and $D = C^\infty(I)$.
\end{proof}

\begin{remark}
An alternative way to show the continuity in the above proof is to use the \href{https://en.wikipedia.org/wiki/Closed_graph_theorem}{closed graph} theorem.
\end{remark}

Indeed, by the closed graph theorem it is enough to show that the graph of $\I_h$ is closed, that is, we need to show that if a sequence of pairs $(u_j, \I_h u_j)$, $j=1,2,\dots$, with $u_j \in H^1(I)$ converges to $(u, v)$ in $H^1(I) \times H^1(I)$ then $\I_h u = v$. Now $\I_h u_j \to \I_h u$ in $L^2(I)$ by \eqref{eq_Ih_L2_cont}. Also, $\I_h u_j \to v$ in $L^2(I)$ as convergence in $H^1(I)$ implies convergence in $L^2(I)$. But the limit of a convergent sequence is unique, and therefore $\I_h u = v$.

\begin{theorem}[Interpolation inequality]\label{th_interp}
\begin{align*}
\|u - \I_h u\| + \|(h\p)(u - \I_h u)\| \lesssim \|(h\p)^2 u\|, \quad u \in H^2(I).
\end{align*}
\end{theorem}

\begin{proof}
In view of Remark \ref{rem_cont_density}, the continuity of $\I_h$ on $H^1(I)$ and the density of $C^\infty(I)$ in $H^2(I)$, it is enough to show the claim for $u \in C^\infty(I)$. As $u - \I_h u$ vanishes at the points $x_0, \dots, x_n$, the \href{https://en.wikipedia.org/wiki/Mean_value_theorem}{mean value} theorem implies that $\p (u - \I_h u)|_{I_i}$ vanishes at some point $\xi_i \in I_i$ for each $i=1,\dots,n$.

Consider a function $w \in C^\infty(I_i)$ that vanishes at a point $\xi \in I_i$. Then for all $x \in I_i$
\begin{align*}
w(x) = \int_\xi^x w'(y) dy \le |x - \xi|^{\frac12} \|w'\|_{L^2(I_i)},
\end{align*}
and
\begin{align*}
\|w\|_{L^2(I_i)}^2 \le \int_{I_i} |x - \xi| dx\, \|w'\|_{L^2(I_i)}^2 \le h^2 \|w'\|_{L^2(I_i)}^2 = \|(h\p) w\|_{L^2(I_i)}^2.
\end{align*}
Taking $w = h\p (u - \I_h u)|_{I_i}$ gives
\begin{align*}
\|(h\p)(u - \I_h u)\|_{L^2(I_i)}^2 \le \|(h\p)^2 (u - \I_h u)\|_{L^2(I_i)}^2 = \|(h\p)^2 u\|_{L^2(I_i)}^2,
\end{align*}
where we used $\I_h u|_{I_i} \in \mathbb P_1$. Taking $w = (u - \I_h u)|_{I_i}$ gives
\begin{align*}
\|(u - \I_h u)\|_{L^2(I_i)}^2 \le \|(h\p)(u - \I_h u)\|_{L^2(I_i)}^2 \le \|(h\p)^2 u\|_{L^2(I_i)}^2.
\end{align*}
The claim follows by summing over $i=1,\dots,n$.
\end{proof}

\begin{theorem}[Error estimate for the derivative]\label{th_err_deriv}
Let $u \in H^2(I) \cap V$ solve \eqref{def_weak_form} and let $u_h \in P_{h,0}^1$ solve \eqref{def_P1_fem}. Then
\begin{align*}
\|(h\partial)(u-u_h)\| \lesssim \|(h \partial)^2 u\|.
\end{align*}
\end{theorem}

\begin{proof}
We have
\begin{align*}
\|(h\partial)(u-u_h)\| &= h\|u-u_h\|_E = h \min_{v \in S}\|u-v\|_E \le \|(h\partial)(u-\I_h u)\|
\\&\lesssim \|(h \partial)^2 u\|.
\end{align*}
Here the first equality follows from the definitions \eqref{eq_E_recall}, the second equality is Corollary \ref{cor_abs_err}, the first inequality follows from $\I_h u \in S$, together with \eqref{eq_E_recall}, and the second inequality is contained in Theorem \ref{th_interp}.
\end{proof}

\subsection{Error estimate}

We will need the following result proven in \href{https://studies.helsinki.fi/courses/cu/hy-CU-117627226-2021-08-01}{Functional analysis}.

\begin{theorem}[\href{https://en.wikipedia.org/wiki/Riesz_representation_theorem}{Riesz representation}]\label{th_riesz}
Let $H$ be a Hilbert space and let
\begin{align*}
L : H \to \R
\end{align*}
be linear and continuous. Then there is a unique $u \in H$ such that
\begin{align*}
L(v) = (u,v), \quad v \in H.
\end{align*}
Here $(\cdot,\cdot)$ is the inner product on $H$.
\end{theorem}

\begin{corollary}[Weak formulation]
Let $L : V \to \R$ be a continuous linear form. Then there is a unique $u \in V$ satisfying the weak formulation (\ref{def_weak_form}).
\end{corollary}

\begin{proof}
Apply the Riesz representation theorem to $V$ equipped with the inner product $a$.
\end{proof}

\begin{remark}[Higher regularity]\label{rem_hreg}
Let $f \in L^2(I)$ and set $L(v) = (f, v)$. Let $u \in V$ be the solution of (\ref{def_weak_form}). Then $u \in H^2(I)$ and
\begin{align*}
\|u''\| = \|f\|.
\end{align*}
\end{remark}

\begin{proof}
By Lemma \ref{lem_formulations} there holds $-u'' = f \in L^2(I)$. Hence $u \in H^2(I)$ and $\|u''\| = \|f\|$.
\end{proof}

\begin{theorem}[Error estimate]\label{th_err}
Let $u \in H^2(I) \cap V$ solve \eqref{def_weak_form} and let $u_h \in P_{h,0}^1$ solve \eqref{def_P1_fem}. Then
\begin{align*}
\|u-u_h\| \lesssim \|(h \partial)^2 u\|.
\end{align*}
\end{theorem}

\begin{proof}
Let $w \in V$ be the solution of \eqref{def_weak_form} with $L(v) = (u - u_h,v)$. Then
\begin{align*}
\|u-u_h\|^2 = L(u - u_h) = a(w, u - u_h).
\end{align*}
The Galerkin orthogonality implies
\begin{align*}
a(\I_h w, u - u_h) = 0.
\end{align*}
Hence, using the Cauchy--Schwarz inequality,
\begin{align*}
\|u-u_h\|^2 = a(w - \I_h w, u - u_h) \le \|\p(w - \I_h w)\|\|\p(u-u_h)\|.
\end{align*}
By Remark \ref{rem_hreg}, $w \in H^2(I)$ and
\begin{align*}
\|w''\| = \|u - u_h\|.
\end{align*}
Theorem \ref{th_interp} then implies that
\begin{align*}
\|(h\p)(w - \I_h w)\| \lesssim \|(h\p)^2 w\|.
\end{align*}
By combining the above observations and using Theorem~\ref{th_err_deriv}, we obtain
\begin{align*}
\|u-u_h\| \lesssim \|(h\p)(u-u_h)\| \lesssim \|(h \partial)^2 u\|.
\end{align*}
\end{proof}

\section{On more general finite element methods}

\subsection{Higher order methods in one dimension}

Let $k=1,2,\dots$, and define
\begin{align*}
P^k_{h} &= \{ u \in C(I) \mid \text{$u|_{I_{i}} \in \mathbb P_k$ for $i=1,\dots,n$} \},
\\\notag
P^k_{h,0} &= \{ u \in P^k_{h} \mid u(0) = u(1) = 0 \}.
\end{align*}
By Theorem \ref{th_gsol} there is a unique $u_h \in P_{h,0}^k$ such that
\begin{align}\label{def_Pk_fem}
a(u_h,v) = L(v) \quad \text{for all $v \in P_{h,0}^k$}.
\end{align}
We say that \eqref{def_Pk_fem} defines the finite element method of order $k$ for the weak formulation \eqref{def_weak_form} of the boundary value problem \eqref{eq_poisson_1d}.

Of course, in order to solve \eqref{def_Pk_fem} in practice we need to choose a basis of $P_{h,0}^k$. This is also needed for the construction of an analogue of the interpolation operator $\I_h$. We will not enter into the bookkeeping related to indexing of basis functions of $P_{h,0}^k$, and consider only local interpolation on each subinterval $I_i$ separately. We fix $i=1,\dots,n$ and introduce the nodes
\begin{align*}
\xi_m = x_{i-1} + \frac m k (x_i - x_{i-1}) \in I_i, \quad m = 0,\dots,k.
\end{align*}
Let $\mathcal L_m$, $m=0,\dots,k$, be the basis polynomials in the Lagrange interpolation, associated to $\xi_m$, $m=0,\dots,k$, that is,
\begin{align*}
\mathcal L_m(x) = \prod_{l=0,l \ne m}^k \frac{x - \xi_l}{\xi_m - \xi_l}.
\end{align*}
The local interpolation operator $\I^k_{h,i}$ on $I_i$ is defined by
\begin{align*}
\I^k_{h,i} u(x) = \sum_{m=0}^k u(\xi_m) \mathcal L_m(x), \quad u \in C(I_i).
\end{align*}
In other words, $\I^k_{h,i} u$ is the Lagrange interpolation polynomial of $u$. There is a global interpolation operator $\tilde \I_h^k : C(I) \to P_{h}^k$ such that
\begin{align*}
\tilde \I_h^k v|_{I_i} = \I_{h,i}^k(v|_{I_i}), \quad v \in C(I),
\end{align*}
see p. 10 of \cite{EG}. By eliminating the degrees of freedom at $x=0$ and $x=1$, an interpolation operator $\I_h^k : C(I) \to P_{h,0}^k$ can also be obtained. But as said, we will not enter into global bookkeeping. See Chapter 1 of \cite{EG} for the proofs of the following two results.
\begin{proposition}[Continuity of local interpolation, Prop. 1.11]
\begin{align*}
\|\I_{h,i}^k u\|_{H^1(I_i)} \lesssim \|u\|_{H^1(I_i)}, \quad u \in H^1(I_i).
\end{align*}
\end{proposition}

\begin{proposition}[Local interpolation inequality, Prop. 1.12]
For $l=0,\dots,k$,
\begin{align}\label{eq_inter_hord}
\sum_{m=0}^{l+1} \|(h\p)^m (u - \I_{h,i}^k u)\|_{L^2(I_i)} \lesssim \|(h\p)^{l+1} u\|_{L^2(I_i)}, \quad u \in H^{l+1}(I_i).
\end{align}
\end{proposition}

Replacing the last inequality in the proof of Theorem \ref{th_err_deriv} by the global version of \eqref{eq_inter_hord} leads to
\begin{align*}
\|(h\partial)(u-u_h)\| \lesssim \|(h \partial)^{l+1} u\|,
\end{align*}
for the solutions $u \in H^{l+1}(I) \cap V$ and $u_h \in P_{h,0}^k$ of \eqref{def_weak_form} and \eqref{def_Pk_fem}. Likewise, replacing the last inequality in the proof of Theorem \ref{th_err} by the global version of \eqref{eq_inter_hord} leads to
\begin{align*}
\|u-u_h\| \lesssim \|(h \partial)^{l+1} u\|.
\end{align*}

\subsection{Poisson's equation}

All the function spaces above can be generalized to the case where $I$ is replaced by a domain $\Omega \subset \R^n$, and the finite element method can be generalized to solve the partial differential equation
\begin{align}\label{eq_poisson_nd}
- \Delta u = f.
\end{align}
In this case we choose $V$ to be a suitable subspace of $H^1(\Omega)$, depending on the boundary conditions imposed on $u$. Let us consider the homogeneous \href{https://en.wikipedia.org/wiki/Dirichlet_boundary_condition}{Dirichlet boundary condition}
\begin{align}\label{eq_dirichlet}
u|_{\p \Omega} = 0.
\end{align}
Assuming that $\p \Omega$ is regular enough so that $u|_{\p \Omega}$ is well-defined for $u \in H^1(\Omega)$, we choose
\begin{align*}
V = H_0^1(\Omega) = \{u \in H^1(\Omega) \mid u|_{\p \Omega} = 0\},
\end{align*}
define the bilinear and linear forms
\begin{align*}
a(u,v) &= \int_\Omega \nabla u \cdot \nabla v\, dx, \quad a : V \times V \to \R,
\\
L(v) &= \int_\Omega f v\, dx, \quad L : V \to \R,
\end{align*}
and consider the weak formulation
\begin{align}\label{def_weak_nd}
a(u, v) = L(v) \quad \text{for all $v \in V$}.
\end{align}
We choose a subspace $V_h \subset V$ consisting of piecewise polynomial functions on a mesh with mesh size $h$. Varying the mesh, we view $V_h$ as a family of spaces parametrized by $h$. The spaces $V_h$ are chosen so that there are linear maps
\begin{align*}
\I_h : V \to V_h,
\end{align*}
satisfying the interpolation inequality
\begin{align}\label{eq_interp_nd}
\|(h\nabla)(u - \I_h u)\|_{L^2(\Omega)} \lesssim \|(hD)^2 u\|_{L^2(\Omega)},
\end{align}
where $(hD)^2 u$ is the Hessian $D^2 u$ of $u$, rescaled by $h^2$.

Assuming that $\Omega$ is bounded, the higher dimensional version of Poincar\'e's inequality holds, and $a$ is an inner product. Then it follows from the Riesz representation theorem that \eqref{def_weak_nd} has a unique solution $u \in V$. We make the further assumption that $\p \Omega$ is regular enough so that the solution of \eqref{def_weak_nd} satisfies $u \in H^2(\Omega)$ and
\begin{align}\label{eq_hreg_nd}
\|D^2 u\|_{L^2(\Omega)} \lesssim \|f\|_{L^2(\Omega)}.
\end{align}
This holds when $\p \Omega$ is smooth; however, in this case care is needed in the construction of the spaces $V_h$, as the mesh needs to fit $\p \Omega$ well. A simpler construction is possible when $\Omega \subset \R^2$ is a convex polygon, and \eqref{eq_hreg_nd} holds in this case as well, see e.g. p. 139 of \cite{BS} for further details.
Under the above assumptions, the Poisson problem \eqref{eq_poisson_nd}--\eqref{eq_dirichlet} can be solved using the finite element method defined by
\begin{align}\label{def_fem_nd}
a(u_h,v) = L(v) \quad \text{for all $v \in V_h$},
\end{align}
in the sense that
\begin{align}\label{eq_err_nd}
\|u - u_h\|_{L^2(\Omega)} + \|(h\nabla)(u - u_h)\|_{L^2(\Omega)} \lesssim \|(hD)^2 u\|_{L^2(\Omega)}.
\end{align}
Indeed, following the proof of Theorem \ref{th_err_deriv} and writing
\begin{align*}
\|\cdot\| = \|\cdot\|_{L^2(\Omega)}, \quad \|\cdot\|_E = a(\cdot, \cdot)^{1/2},
\end{align*}
we have
\begin{align*}
\|(h\nabla)(u-u_h)\| &= h\|u-u_h\|_E = h \min_{v \in V_h}\|u-v\|_E \le \|(h\nabla)(u-\I_h u)\|
\\&\lesssim \|(hD)^2 u\|.
\end{align*}
Here the second equality is once again Corollary \ref{cor_abs_err}, the first inequality follows from $\I_h u \in V_h$, and the second inequality is \eqref{eq_interp_nd}.

To conclude, we follow the proof of Theorem \ref{th_err}, and write $(\cdot, \cdot)$ for the inner product on $L^2(\Omega)$. Let $w \in V$ be the solution of \eqref{def_weak_nd} with $L(v) = (u - u_h,v)$. Then
\begin{align*}
\|u-u_h\|^2 &= L(u - u_h) = a(w, u - u_h) = a(w - \I_h w, u - u_h)
\\&\le \|\nabla(w - \I_h w)\|\|\nabla(u-u_h)\|.
\end{align*}
Using \eqref{eq_hreg_nd} and \eqref{eq_interp_nd},
\begin{align*}
\|(h\nabla)(w - \I_h w)\| \lesssim \|(hD)^2 w\| \lesssim h^2 \|u - u_h\|,
\end{align*}
and \eqref{eq_err_nd} follows easily. Needless to say, higher order methods can also be developed in higher dimensional cases.

\subsection{More general second order elliptic equations}

The equation
\begin{align}\label{eq_laplace_q}
- \Delta u + q u &= f,
\\\notag
u|_{\p \Omega} &= 0,
\end{align}
where $q : \Omega \to [0,\infty)$ is smooth enough, requires only minor modifications to the sketch in the previous section. In this case we set $V = H^1_0(\Omega)$ and
\begin{align*}
a(u,v) &= \int_\Omega (\nabla u \cdot \nabla v + q u v)\, dx.
\end{align*}
Moreover, $\Delta$ can be replaced by the \href{https://en.wikipedia.org/wiki/Laplace%E2%80%93Beltrami_operator}{Laplace--Beltrami operator} associated to a smooth enough Riemannian metric $g : \Omega \to \R^{n \times n}$, with the only modifications that $\nabla$ and $dx$ in the definition of $a$ are then taken to be the gradient and the volume form induced by $g$, and $\cdot$ is replaced by the inner product with respect to $g$.

Care is needed when considering $q : \Omega \to \R$ taking negative values. In this case \eqref{eq_laplace_q} may not have a unique solution due to eigenvalues. For example, if $\Omega = I = [0, \pi]$ then both $u = 0$ and $u(x) = \sin(x)$ are solutions to
\begin{align*}
-u'' - u &= 0,
\\
u(0) = u(\pi) &= 0.
\end{align*}
Similarly the problem
\begin{align*}
- \Delta u + b\cdot\nabla u + q u &= f,
\\\notag
u|_{\p \Omega} &= 0,
\end{align*}
where $b : \Omega \to \R^n$, may fail to have a unique solution. Observe also that the associated bilinear form
\begin{align*}
a(u,v) &= \int_\Omega (\nabla u \cdot \nabla v + (b \cdot \nabla u) v+ q u v)\, dx,
\end{align*}
is nonsymmetric unless $b = 0$. The general theory is typically developed under the assumption that $a$ is coercive so that the \href{https://en.wikipedia.org/wiki/Weak_formulation#The_Lax%E2%80%93Milgram_theorem}{Lax--Milgram} theorem can be substituted for the Riesz representation theorem. In this case the sketch in the previous section requires only relatively minor changes.
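Before turning to the references, here is one more added illustration, again not part of the original notes: the convergence rates in Theorems~\ref{th_err_deriv} and \ref{th_err} can be observed numerically for the one dimensional problem. The Python sketch below solves $-u'' = \pi^2 \sin(\pi x)$ with the P1 method on a sequence of uniform meshes and measures the $L^2$ errors of $u - u_h$ and of its derivative; the midpoint-rule evaluation of the error norms is a simplifying assumption.

\begin{verbatim}
import numpy as np

def solve_p1(n):
    # P1 solution of -u'' = pi^2 sin(pi x), u(0) = u(1) = 0;
    # the exact solution is u(x) = sin(pi x).
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    F = h * np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)
    return x, u

def errors(n):
    x, u = solve_p1(n)
    h = 1.0 / n
    mid = 0.5 * (x[:-1] + x[1:])                       # subinterval midpoints
    e = np.sin(np.pi * mid) - 0.5 * (u[:-1] + u[1:])   # u - u_h at midpoints
    de = np.pi * np.cos(np.pi * mid) - np.diff(u) / h  # (u - u_h)' on each I_i
    return np.sqrt(h * np.sum(e**2)), np.sqrt(h * np.sum(de**2))

for n in (8, 16, 32, 64):
    print(n, *errors(n))
# Halving h divides the first error by about 4 and the second by about 2,
# matching the orders h^2 and h predicted by the error estimates.
\end{verbatim}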
% To the students looking at this code:
% I recommend using BibTeX for reference management.
% In fact this list of references was obtained by tuning slightly an output of bibtex.
% For your convenience, I added links to Helka.

\begin{thebibliography}{1}

\bibitem{BS} S.~C. {Brenner} and L.~R. {Scott}.
\newblock {\em {The mathematical theory of finite element methods}}.
\newblock New York, NY: \href{https://doi.org/10.1007/978-0-387-75934-0}{Springer}, 2008. Available in \href{https://helka.helsinki.fi/permalink/358UOH_INST/q5v72t/alma9934193675806253}{Helka}.

\bibitem{Brezis} H.~{Brezis}.
\newblock {\em {Functional analysis, Sobolev spaces and partial differential equations}}.
\newblock New York, NY: \href{https://doi.org/10.1007/978-0-387-70914-7}{Springer}, 2011. Available in \href{https://helka.helsinki.fi/permalink/358UOH_INST/1rnip4l/alma9926442113506253}{Helka}.

\bibitem{EG} A.~{Ern} and J.-L. {Guermond}.
\newblock {\em {Theory and practice of finite elements}}.
\newblock New York, NY: \href{https://doi.org/10.1007/978-1-4757-4355-5}{Springer}, 2004. Available in \href{https://helka.helsinki.fi/permalink/358UOH_INST/1rnip4l/alma9934192676606253}{Helka}.

\bibitem{Holopainen} I.~{Holopainen}.
\newblock {\em {Real Analysis I}}.
\newblock Available on the Moodle page.

\end{thebibliography}

\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Template for a LaTex article in English.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass{article}

% AMS packages:
\usepackage{amsmath, amsthm, amsfonts}

% Theorems
%-----------------------------------------------------------------
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\theoremstyle{remark}
\newtheorem{rem}[thm]{Remark}

% Shortcuts.
% One can define new commands to shorten frequently used
% constructions. As an example, this defines the R and Z used
% for the real and integer numbers.
%-----------------------------------------------------------------
\def\RR{\mathbb{R}}
\def\ZZ{\mathbb{Z}}

% Similarly, one can define commands that take arguments. In this
% example we define a command for the absolute value.
% -----------------------------------------------------------------
\newcommand{\abs}[1]{\left\vert#1\right\vert}

% Operators
% New operators must be defined as such to have them typeset
% correctly. As an example we define the Jacobian:
% -----------------------------------------------------------------
\DeclareMathOperator{\Jac}{Jac}

%-----------------------------------------------------------------
\title{Implementing Jarzynski Equality in Python}
\author{Lingbo Tang\\
\small Dept. Computing Science\\
\small University of Alberta\\
\small Canada
}

\begin{document}
\maketitle

\abstract{The Jarzynski equality is a neat identity revealing that the free energy difference of a system can be calculated from work measurements even when the system is not in an equilibrium state.}

\section{Introduction}
In general, the Jarzynski equality $\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}$ implies, by Jensen's inequality, the bound
\begin{equation}\label{eq:general}
\Delta F = F(\lambda_{t}) - F(\lambda_{0}) \leq \langle W \rangle,
\end{equation}
and the work performed along a single trajectory has the integral form
\begin{equation}\label{eq:integration}
W_{0\rightarrow t} = \int_{0}^{t} dt' \, \frac{\partial \lambda_{t'}}{\partial t'} \left[ \frac{\partial \tilde{H}(r, p; \lambda)}{\partial \lambda} \right]_{(r, p; \lambda) = (r_{t'},\, p_{t'};\, \lambda_{t'})}.
\end{equation}

One can refer to equations like this: see equation (\ref{eq:general}). One can also refer to sections in the same way: see section \ref{sec:nothing}. Or to the bibliography like this: \cite{Cd94}.

\subsection{Subsection}\label{sec:nothing}
More text.

\subsubsection{Subsubsection}\label{sec:nothing2}
More text.

% Bibliography
%-----------------------------------------------------------------
\begin{thebibliography}{99}
\bibitem{Cd94} Author, \emph{Title}, Journal/Editor, (year)
\end{thebibliography}
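\section*{Appendix: A minimal numerical check}

The body of this template does not yet contain the promised Python implementation, so the following sketch is offered as an assumed starting point rather than a definitive one. It draws work samples $W$ from a Gaussian distribution, for which the Jarzynski average is known in closed form ($\Delta F = \mu - \beta \sigma^2 / 2$ when $W \sim \mathcal{N}(\mu, \sigma^2)$), and checks that the estimate respects the bound in equation (\ref{eq:general}).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta, mu, sigma = 1.0, 2.0, 1.0
W = rng.normal(mu, sigma, size=1_000_000)   # simulated work measurements

# Jarzynski estimate: Delta F = -(1/beta) * log <exp(-beta W)>.
dF = -np.log(np.mean(np.exp(-beta * W))) / beta
print(dF)        # close to mu - beta * sigma**2 / 2 = 1.5
print(W.mean())  # close to mu = 2.0, an upper bound for Delta F
\end{verbatim}

\end{document}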
%*******************************************************
% Abstract
%*******************************************************
\pdfbookmark[1]{Abstract}{Abstract}
% \addcontentsline{toc}{chapter}{\tocEntry{Abstract}}
\begingroup
\let\clearpage\relax
\let\cleardoublepage\relax
\let\cleardoublepage\relax
\chapter*{Abstract}

Heart failure (HF) affects nearly a million people in the UK alone and increases the risk of cardiovascular diseases, stroke and death. At the whole-organ level, HF often manifests as impaired left ventricular (LV) contractile function. At the cellular level, LV contractile dysfunction is associated with altered sarcomere kinetics and disrupted calcium ($\Ca$) homeostasis. However, the link between cellular events and emerging pathological whole heart phenotypes is incompletely understood.

\vspace{0.2cm}

In this thesis, we aim to quantify the translation of cellular mechanisms to the LV contractile function and to elucidate the role of $\Ca$ and sarcomere dynamics in rat HF, with emphasis on the disease phenotype with preserved ejection fraction (HFpEF). We employed (Chapter~\ref{cha:chapter3}) a biophysically detailed 3D biventricular rat heart contraction mechanics model, which incorporated preload, afterload, fibre orientation, passive material properties, anatomy, $\Ca$ transient and sarcomere dynamics. The model's cell-level function was described using a set of parameters (key regulators of the ionic processes and sarcomere contraction). The model's organ-level behaviour was described using a set of features characterising tissue and haemodynamic properties, the LV volume and pressure transients, and the corresponding pressure-volume (PV) loop.

\vspace{0.2cm}

We first (Chapter~\ref{cha:chapter4}) fitted the model to real biventricular geometries and to volumetric and functional data from a sham-operated (SHAM) rat and an aortic-banded (AB) rat six weeks post-surgery, respectively representative of the healthy control and diseased rat cohorts from an experimental study on AB rats (a diastolic HF animal model). We then characterised the LV features' sensitivity to the model parameters. Model fitting was performed using the history matching (HM) technique, while uncertainty quantification was performed using Sobol' global sensitivity analysis (GSA). These normally require a large number of model evaluations to be performed. As the full forward model was too computationally expensive ($\sim 4-10$ hours per single forward calculation), we made HM and GSA computationally feasible by replacing the input-to-output multi-scale map with fast-evaluating ($\sim 1$ second per single forward calculation) probabilistic surrogates based on Gaussian process emulation (GPE). From now on, we will refer to the personalised (fitted) healthy SHAM rat model as ``the model''. The model constituted the starting point of the following three case studies.

\newpage

In the first case study (Chapter~\ref{cha:chapter5}), we used the model to show that it is possible to map pharmacological modulations from the sarcomere through to whole heart function and back again. As a case study, we validated the omecamtiv mecarbil (OM) mechanisms of action across scales in the healthy rat heart. Preclinical force-calcium (F-pCa) and LV haemodynamics data were used to constrain (using GPE $+$ HM) the parameter space to represent \textit{in silico} OM effects at the cellular level.
The obtained spaces were then respectively mapped to features characterising the LV contractile function and to features of the F-pCa curve, to show that the model predictions are in qualitative agreement with the experimentally observed OM effects at both the cell and whole-organ levels.

\vspace{0.2cm}

In the second case study (Chapter~\ref{cha:chapter6}), we first validated the model against experimental literature data on pharmacological channel blocking for a number of compounds, showing that the model can predict the observed effects on the LV contractile function. Next (Chapter~\ref{cha:chapter7}), we used the model to generate a pathological model, representing the $20$-week-old obese ZSF1 rat (an HFpEF animal model). This was done by perturbing specific model parameters according to experimental evidence from the available literature on ZSF1 rats. We then recovered the ZSF1 rat model back to the healthy state (using GPE $+$ GSA $+$ HM) by perturbing different sub-groups of parameters to represent different recovery strategies, in order to identify potential pharmacological, cellular targets for the treatment of HFpEF in rats.

\vspace{0.2cm}

In the third case study (Chapter~\ref{cha:chapter8}), we used the model to demonstrate that changes in the F-pCa relationship do not uniquely map to observed changes in the LV function, and vice versa. This result sheds new light on the assessment of myofilament $\Ca$ sensitivity using F-pCa shifts and on the corresponding predictions of the LV contractile function based on such shifts.

\vspace{0.2cm}

We have built a virtual platform that can be used to efficiently test different pharmacological interventions and to identify potential pharmaceutical, cellular targets for ``virtually'' treating HFpEF in rats. This was done by using computational models of cardiac mechanics and their probabilistic surrogates to quantify how normal/pathological cellular function is translated into normal/altered whole heart function. We have demonstrated the feasibility of applying Bayesian probabilistic techniques to healthy and diseased small mammalian (rat) $3$D models of cardiac mechanics. This thesis constitutes an important step towards applications to more complex systems (the $3$D contractile human heart) for personalised medicine.

\vspace{0.2cm}

The $3$D rat heart contraction mechanics model code and the implementations of the GPE, GSA and HM techniques are available open access~\cite{zenodo:2021,Historia:2021,GPErks:2021}. The scripts for generating the figures of all the main results of this thesis are available upon request.

\endgroup

\vfill
\clearpage
\subsection{Small DB 2, the linked version in Pascal} % (fold)
\label{sub:pas_small_db_2_the_linked_version}
% subsection small_db_2_the_linked_version (end)

\sref{sec:using_dynamic_memory_allocation}, \nameref{sec:using_dynamic_memory_allocation}, introduced a version of the Small DB program with a linked structure, as opposed to the array structure used to manage the rows in \cref{cha:more_data_types}. The Pascal code for the altered functions and procedures is shown in \lref{plst:linked-db}; the original version can be found in \lref{lst:pas-small-db}.

\straightcode{\pascode{plst:linked-db}{Pascal code for the linked version of Small DB, see \lref{plst:dynamic-array-db} for the array version of this program}{code/pascal/dynamic-memory/LinkedDBforChap.pas}}

\mynote{
\begin{itemize}
\item \texttt{PrintAll}, \texttt{DeleteRow}, and \texttt{AddRow} are the only procedures that have changed significantly.
\item Each of these is explained in more detail in \sref{sec:using_dynamic_memory_allocation}.
\item Each row has a pointer to the next row in the database; in the last row this pointer points to nothing.
\item The \texttt{DataStore} has a pointer to the first and last rows in the database.
\item Adding and removing rows is done by changing the links between row values on the heap.
\end{itemize}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% GLE - Graphics Layout Engine <http://www.gle-graphics.org/>
%
% Modified BSD License
%
% Copyright (C) 2009 GLE.
%
% Redistribution and use in source and binary forms, with or without
% modification, are permitted provided that the following conditions
% are met:
%
% 1. Redistributions of source code must retain the above copyright
% notice, this list of conditions and the following disclaimer.
%
% 2. Redistributions in binary form must reproduce the above
% copyright notice, this list of conditions and the following
% disclaimer in the documentation and/or other materials provided with
% the distribution.
%
% 3. The name of the author may not be used to endorse or promote
% products derived from this software without specific prior written
% permission.
%
% THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR
% IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
% WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
% ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
% DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
% DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
% GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
% INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
% IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
% OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
% IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{32bit DOS Version of GLE}
\index{DOS!32bit}

Axel Rohde compiled a 32bit DOS version of GLE in 1994 (email: [email protected]).\\
On any 386 or better machine this version of GLE should run without problems; its main features are these: 1) No 640K memory restrictions. 2) Much faster.\\
\clearpage

Properties of GLE 32:
\begin{verbatim}
 - All programs are running in the 386-protected-mode and therefore
   there is neither a limited 640kB address-range nor a 64kB
   segmentation.
 - 32-bit programs are running faster than their 16-bit-counterparts.
 - There exists a multitude of GRX-graphics-drivers, e.g. for
   TSENG ET4000(W32), S3, 8514A, Cirrus Logic GD 542x, Trident 8900,
   Diamond Viper, ATI Ultra, ATI VGA and EGA. These drivers are
   highly configurable and can use flicker-free high resolution
   modes.
\end{verbatim}

Installation Quick guide:
\begin{verbatim}
1) FTP the binary distribution
     ftp tui.marc.cri.nz
     cd pub/gle/gle32
     binary
     mget gle32bi*.zip

2) Unzip them keeping the directory structure
     cd c:\
     pkunzip gle32bi1.zip -d
     pkunzip gle32bi2.zip -d
     pkunzip gle32bi3.zip -d
     pkunzip gle32bi4.zip -d
     pkunzip gle32bi5.zip -d
     pkunzip gle32bi6.zip -d

3) Edit the batch file which tells gle where to find its fonts and
   also what sort of graphics card you have.
     edit setgle32.bat
   (change the disk and directory as appropriate)

4) Run the batch script
     setgle32

5) Try out the new version
     gle_vga

6) Note most of the programs have been renamed to avoid conflicts!!!
\end{verbatim}

To avoid name-conflicts between a 16-bit and a 32-bit version of GLE, all the programs and the environment-variables were renamed. All GLE-programs now have unix-style names like \verb#gle_ps# (='psgle' - Postscript-output), \verb#gle_vga# (='cgle' - VGA-Preview) etc.
The names of the utilities end with '32' -- \verb#manip32#, \verb#contou32# \dots

Restrictions and Bugs:
\begin{description}
\item[1.] The vector-fonts of the 32-bit version are NOT compatible with their 16-bit counterparts. They may be compatible with fonts that were created under other 32-bit operating systems.
\item[2.] The on-line help of \verb#gle_vga# is usable but may sometimes look different compared to the original.
\item[3.] Makefmt and fbuild are missing, because DJGPP's library lacks ecvt(). Both programs are used to calculate vector-fonts in the Unix version from the source-distribution. Both programs are NOT included in the 16-bit DOS version either. This package includes all (already calculated) fonts from the Unix source distribution. They were calculated under Linux, a free Unix implementation for i386 PCs and higher.
\item[4.] The DVI-drivers are not tested!
\item[5.] Surface (\verb#surf_vga#) hung under unknown circumstances while loading one data-file. \verb#surf_vga# can be stopped by pressing Control-Pause. If this happens, load the data-file into an editor, save it, and try again.
\end{description}

More detailed instructions provided by Axel Rohde can be found in \verb#gle32b.txt#. Read these carefully if you have any problems.
{ "alphanum_fraction": 0.5906040268, "avg_line_length": 53.6936936937, "ext": "tex", "hexsha": "8e95b484671740b21a20ec01b3456eac5d841f3d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "vlabella/gle-manual", "max_forks_repo_path": "obsolete/sm_gle32_tut.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "vlabella/gle-manual", "max_issues_repo_path": "obsolete/sm_gle32_tut.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "vlabella/gle-manual", "max_stars_repo_path": "obsolete/sm_gle32_tut.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1275, "size": 5960 }
\documentclass[../../../main]{subfiles}
\graphicspath{{images/related/}}

\begin{document}

\subsection{Computer vision}

The following text is an overview of the theory behind computer vision algorithms. The material covers topics ranging from simple manipulations of digital images (for instance, blurring) to the more complex theory of stereo vision and the Structure from Motion problem.

\subfile{sections/related/computer-vision/theoretical/index}
\newpage
\subfile{sections/related/computer-vision/implementational/index}
\newpage

\end{document}
{ "alphanum_fraction": 0.8057553957, "avg_line_length": 34.75, "ext": "tex", "hexsha": "27ce7f1493b7fa8692ed0f6721fd6d60365b5a49", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "30926412ef0fce764c9d737940a757ec4f55d3ac", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Lewis945/RubiksCubeSolver", "max_forks_repo_path": "docs/Master Thesis/sections/related/computer-vision/index.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "30926412ef0fce764c9d737940a757ec4f55d3ac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Lewis945/RubiksCubeSolver", "max_issues_repo_path": "docs/Master Thesis/sections/related/computer-vision/index.tex", "max_line_length": 271, "max_stars_count": null, "max_stars_repo_head_hexsha": "30926412ef0fce764c9d737940a757ec4f55d3ac", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Lewis945/RubiksCubeSolver", "max_stars_repo_path": "docs/Master Thesis/sections/related/computer-vision/index.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 121, "size": 556 }
\documentclass[12pt]{report}
\usepackage{float,moreverb,xspace,epsfig}
\include{eicpreamble}
\begin{document}
\huge
{\bf \centerline{ EiC}}
\normalsize
\tableofcontents
\vspace{0.5 cm}

EiC is designed to be a production tool; it is not to be viewed as a toy, and it is certainly one of the most complete, freely available C interpreters built to date. It is suitable as an aid in teaching C, for fast prototyping of new programs, and as a research tool --- it allows the user to quickly interface and experiment with user-supplied, standard ISO C and POSIX.1 functions via immediate statements, which are statements that are executed immediately.

EiC can be run in several different modes: (1) interactively, (2) non-interactively, (3) in scripting mode, and (4) embedded in other systems.

\section{Interactive mode}

In interactive mode, the user enters commands, or immediate commands, at the EiC prompt. Each immediate instruction -- a C statement, declaration, etc. -- produces a type, even if the type is void. All resulting type values are displayed:

\begin{production}
\begin{verbatim}
EiC 1> 3*55.5;
        166.5
EiC 2> "hello, world!";
        hello, world!
EiC 3> int i;
        (void)
EiC 4> for(i=0;i<10;i++);
        (void)
EiC 5> i;
        10
EiC 6> struct {int a; double b[3];} ab = { 5,{0,1,2}};
        (void)
EiC 7> ab;
        {5,Array}
EiC 8> ab.a = 3;
        3
EiC 9> ab.b[2];
        2
EiC 10> #include <stdio.h>
        (void)
EiC 11> printf("hello\n");
hello
        6
\end{verbatim}
\end{production}

\section{EiC is pointer safe}

EiC is also pointer safe. This means EiC catches most types of array bound violations; for example (for brevity, some output has been deleted):

\begin{production}
\begin{verbatim}
EiC 1> int a[10], *p, i;
EiC 2> a[10];
READ: attempted beyond allowed access area
EiC 3> p = &a[5];
EiC 4> p[-5];
EiC 5> p[-6];
READ: attempted before allowed access area
EiC 6> p[4];
EiC 7> p[5];
READ: attempted beyond allowed access area
EiC 8> *(p+100);
READ: attempted beyond allowed access area
EiC 9> p = malloc(5*sizeof(int));
EiC 10> *(p+100);
READ: attempted beyond allowed access area
EiC 11> for(i=0;i<100;i++) *p++ = i;
WRITE: attempted beyond allowed access area
\end{verbatim}
\end{production}

To detect array bound violations as efficiently as possible, EiC does not concern itself with the values held or produced by pointers; it only worries about address values when they are either referenced or dereferenced:

\begin{production}
\begin{verbatim}
EiC 1> int a, *p;
EiC 2> p = &a;
EiC 3> p+10;    // okay, no problems
EiC 4> *(p+10); // but just try to read or write to the address
READ: attempted beyond allowed access area
\end{verbatim}
\end{production}

\section{Running EiC non-interactively}

EiC can also be run non-interactively or in batch mode, where it is possible to run C programs in a typical interpreter style. It can also handle programs that accept command line arguments, as seen from the toy example in main2.c:

\begin{production}
\begin{verbatim}
#include <stdio.h>
int main(int argc, char **argv)
{
    while(argc--)
        printf("%s\n",*argv++);
    return 0;
}
\end{verbatim}
\end{production}

The first parameter, argc, holds the number of argument strings passed to the program and is always at least one.
The second parameter, argv, is an array of unspecified size of pointers to the input strings, the first of which will be the name of the program being executed:

\begin{production}
\begin{verbatim}
% eic main2.c 123 hello -Dworld this.and.that
main2.c
123
hello
-Dworld
this.and.that
\end{verbatim}
\end{production}

\section{EiC's scripting language}

In non-interactive mode, EiC runs generally like a typical interpreter, accepting input from a complete C program. However, EiC is also a scripting language. Below is an example of an EiC script, called \T{hello.eic}:

\begin{quote}
\begin{verbatim}
#!/usr/local/bin/eic -f
#include <stdio.h>
printf(" ******* Hello from EiC's script mode. ******\n");
\end{verbatim}
\end{quote}

The \T{-f} command-line switch informs EiC to run in script mode. In script mode, EiC will treat any line that begins with `\T{\#}' and cannot be interpreted as a preprocessor directive as a comment. To run the above script, assuming that it is executable (chmod~+x~hello.eic):

\begin{production}
\begin{verbatim}
% hello.eic
 ******* Hello from EiC's script mode. ******
%
\end{verbatim}
\end{production}

Another example of a more extensive EiC script is given in \T{script1.eic}:

\begin{quote}
\begin{verbatim}
1  #!/usr/local/bin/eic -f
2  #include <stdio.h>
3
4  // example of control of flow
5  int i;
6  int isqr(int x) { return x*x; }
7  for(i=0;i<4;i++)
8      printf("%d^2 = %d\n",i,isqr(i));
9  switch(i) {
10     case 4: printf(" good\n\n"); break;
11     default: printf(" bad\n\n");
12 }
13 // example of some file stuff;
14 // read in some tools
15 #include "tools/nxtString.c"
16 FILE *fp = fopen(_Argv[0],"r");
17 char *p;
18 while((p=nxtString(fp)))
19     printf("%s ",p);
20 fclose(fp);
21 printf("\n\n");
22 // further example of using command line args
23 if(_Argc) { // this is always true
24     int k=0;
25     printf("Processing command line arguments\n");
26     for(k=0;k<_Argc;k++) {
27         printf("%s\n",_Argv[k]);
28     }
29 } else
30     printf("OOPS, an internal error has occurred\n");
\end{verbatim}
\end{quote}

An EiC shell script is interpreted from top to bottom. First the code is compiled to bytecode, in its entirety, and then run. After this, control will be passed to the \T{main} function if it exists. However, it is not illegal to have a script that does not include the definition of a \T{main} function. If the EiC directive \T{:exit}, which is the directive that terminates an EiC interactive session, is present, it will cause the interpretation of the script to halt at the position where \T{:exit} is encountered; nothing will have happened other than the code up to the \T{:exit} operator having been compiled and parsed -- it will not have been executed. Generally, the code for a function is not executed until it is called; see line 8. Command line arguments are passed in to the global variables \T{\_Argc} and \T{\_Argv}; see lines 16 and 23 to 30. For example:

\begin{quote}
\begin{verbatim}
% script1.eic abc 123 -DHELP
\end{verbatim}
\end{quote}

This implies that:

\begin{quote}
\begin{verbatim}
_Argc = 4,
_Argv[0] = "script1.eic"
_Argv[1] = "abc"
_Argv[2] = "123"
_Argv[3] = "-DHELP"
_Argv[4] = NULL
\end{verbatim}
\end{quote}

\section{Embedding or linking to EiC}
\input{embed.tex}

\section{EiC modules}

In a nutshell, EiC modules are related groups of EiC/C functions, which are either interpreted by EiC or built into EiC. Therefore, there are basically two types of EiC modules: interpreted code modules and builtin modules (compiled code).
It is also possible for compiled code to make calls (callbacks) to interpreted code. One of the nice features of an EiC module is that once you have a module built, you can add it to another EiC distribution by simply copying it into the `EiC/module' directory; to remove a module, you simply remove it from the `EiC/module' directory -- easy as that.

\section{EiC vs C}

Because EiC can be run interactively, it differs from C in several ways. In this section I will outline what is currently missing from EiC and how EiC differs from ISO C. Although EiC can parse almost all of the C programming language, right up front it is best to mention what is currently lacking or different:

\begin{enumerate}
\item EiC is pointer safe. It detects many classes of memory read and write violations. Also, to help in interfacing compiled library code to EiC, EiC uses the optional pointer-qualifiers \T{safe} and \T{unsafe}.
\item Structure \index{structure! bit fields}\index{bit fields} bit fields are not supported.
\item While structures and unions can be returned from and passed by value to functions, it is illegal in EiC to pass a structure or a union to a variadic function (that is, a function that takes a variable number of arguments):
\begin{production}
\begin{verbatim}
EiC 1> struct stag {int x; double y[5];} ss;
EiC 2> void foo(const char *fmt, ...);
EiC 3> foo("",ss);
Error: passing a struct/union to variadic function foo
\end{verbatim}
\end{production}
\item The C concept of linkage\index{linkage!external} is not supported. This is because EiC does not export identifiers to a linker, as a true C compiler does. EiC works from the concept of a single {\it translation unit}. However, EiC does support the concept of file scope; that is, static extern variables declared in a file are not visible outside that file.
\item EiC does not parse preprocessor numbers\index{pp numbers} that aren't valid numeric constants; for example, {\tt 155.6.8}, an extended floating point constant, will cause an error.
\item EiC supports both standard C comments \verb+/* ... */+ and C++ style comments. Also, when EiC is run in script mode, it treats all lines that start with `\T{\#}' and which can't be interpreted as a preprocessor directive as a comment.
\item There are no default type specifiers for function return values. In EiC it is illegal not to explicitly state the return type of a function:
\begin{production}
\begin{verbatim}
foo() { ... }     /* error: missing return type */
int foo() { ... } /* correct, return type specified */
\end{verbatim}
\end{production}
\item In addition to function definitions and declarations with an empty parameter list, EiC only supports prototype declarations and definitions:
\begin{production}
\begin{verbatim}
int foo();            /* Empty parameter list allowed */
int f(value) int value; { ... } /* Illegal: old style C */
int f(int);           /* Allowed, prototype declaration */
int f(int value);     /* Allowed, full prototype declaration */
\end{verbatim}
\end{production}
\item EiC does not support trigraph sequences, wide characters or wide strings; nor does it support the standard header \verb+<locale.h>+.
\item EiC's preprocessor lacks the \directive{line} directive.
\item For convenience, EiC allows the \directive{include} directive to have an extra form, which permits the parsing of a {\it token-sequence} in the form \verb+#include filename+; that is, without enclosing double quotes or angled brackets (a short illustrative session follows this list).
\item Besides parsing preprocessor directives or C statements, EiC also parses its own internal housekeeping language. Housekeeping commands are communicated to EiC via lines that begin with a colon, such as the \T{:exit} directive mentioned earlier.
\end{enumerate}
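For illustration, here is a minimal, hypothetical session exercising the extra \directive{include} form and a housekeeping command. This example is ours, not from the original EiC documentation, and the echoed \T{(void)} for the unquoted include form is an assumption based on the quoted and angled forms shown earlier:

\begin{production}
\begin{verbatim}
EiC 1> #include stdio.h
        (void)
EiC 2> printf("hello\n");
hello
        6
EiC 3> :exit
\end{verbatim}
\end{production}

\end{document}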
{ "alphanum_fraction": 0.704162492, "avg_line_length": 30.6675977654, "ext": "tex", "hexsha": "8302525559e34b66822f2550e0e98d8783ad6dd1", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-02-07T12:42:58.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-28T16:55:20.000Z", "max_forks_repo_head_hexsha": "0c4150760b613f7de666c492ce14683f216cfc84", "max_forks_repo_licenses": [ "Artistic-1.0-Perl" ], "max_forks_repo_name": "kungfooman/EiC-C-Interpreter", "max_forks_repo_path": "doc/miniblurb.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0c4150760b613f7de666c492ce14683f216cfc84", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Artistic-1.0-Perl" ], "max_issues_repo_name": "kungfooman/EiC-C-Interpreter", "max_issues_repo_path": "doc/miniblurb.tex", "max_line_length": 86, "max_stars_count": 19, "max_stars_repo_head_hexsha": "0c4150760b613f7de666c492ce14683f216cfc84", "max_stars_repo_licenses": [ "Artistic-1.0-Perl" ], "max_stars_repo_name": "kungfooman/EiC-C-Interpreter", "max_stars_repo_path": "doc/miniblurb.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-22T01:59:31.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-06T07:40:06.000Z", "num_tokens": 3078, "size": 10979 }
Following the work of \cite{Brosse18tULA}, we aim to quantify the accuracy of these methods by finding the first and second moments of known distributions. Comparing the error between the generated value and the true value then gives a first indication of the accuracy of each scheme. Beginning with the original code associated with \cite{Brosse18tULA}\footnote{Available at \url{https://github.com/nbrosse/TULA}}, we first optimised the code and then aimed to reproduce the results they found.

\subsection{Testing Potentials}

For the purposes of testing, four qualitatively different potentials have been made available within our program: the Gaussian, the double well, the Rosenbrock function and the Ginzburg-Landau model. We note that these, save for the Gaussian, are non-convex.

Any of the above potentials may also be scaled by a \textit{temperature} $T$. That is, having chosen a potential function $U$ and a temperature $T$, the true distribution to be sampled from will be
\[\pi(x) \propto e^{-\frac{U(x)}{T}}.\]
This is common in the molecular dynamics literature and is also useful in MCMC to find modes of distributions more quickly, a technique known as tempering.

Figure \ref{fig:tamedStep} shows how taming prevents divergence even at higher step sizes, where the untamed algorithms would diverge.

Figure \ref{fig:doubleWell_moment} compares the first and second moments of the double well distribution in 100 dimensions after running the methods for \(10^5\) iterations with a stepsize of \(h=0.1\). Note that \texttt{ULA} and \texttt{HOLA} diverge and produce no useful samples. Theoretically, the first moment of this distribution is 0, and the second moment is around 0.104 in 100 dimensions. It can be seen that at this larger step size, the taming causes an overestimation of the second moment for \texttt{tULA}, \texttt{tLM} and \texttt{tHOLA}. Adjusting the taming to a coordinatewise function reduces the issue. This is similar to the phenomenon seen in Section \ref{sec:stiff}.

Figure \ref{fig:timedRun} takes a different approach. Rather than running each algorithm for a fixed number of iterations, we instead let each scheme run for 10 seconds. It can be seen that \texttt{tHOLA} has a much higher range of error when the stepsize is low. This is because each iterate takes much longer than in the other methods, resulting in an order of magnitude fewer samples being produced in 10 seconds; the algorithm involves the multiplication of large matrices, a costly computation. When the stepsize is increased, the Metropolised algorithms seem to perform worst. It is unclear why this should be the case. Once again \texttt{ULA} and \texttt{LM} diverge.
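For reference, we sketch the update rules underlying these schemes, following \cite{Brosse18tULA}; the notation here is ours, with $h$ denoting the step size and $\xi_{n+1}$ a standard Gaussian vector. The \texttt{ULA} iteration is
\[ X_{n+1} = X_n - h\,\nabla U(X_n) + \sqrt{2h}\,\xi_{n+1}, \]
and the tamed variants replace $\nabla U$ with a bounded surrogate, either globally (\texttt{tULA}) or coordinatewise (\texttt{tULAc}):
\[ T_h(x) = \frac{\nabla U(x)}{1 + h\,\lVert\nabla U(x)\rVert}, \qquad \left(T_h^{c}(x)\right)_i = \frac{\partial_i U(x)}{1 + h\,\lvert\partial_i U(x)\rvert}. \]
Since the tamed drift is bounded by $1/h$, a single step cannot blow up no matter how large the gradient becomes, which is consistent with the stability of the tamed schemes in Figure \ref{fig:tamedStep} at step sizes where \texttt{ULA} diverges.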
\begin{figure} \centering \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{Figures/tula_fm.png} \end{minipage} % \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{Figures/tulac_fm.png} \end{minipage} % \begin{minipage}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{Figures/tmala_fm.png} \end{minipage} \caption{Comparison of \texttt{tULA}, \texttt{tULAc} and \texttt{tMALA} for the first moment evolving as a function of step size.} \label{fig:tamedStep} \end{figure} \begin{figure} \centering \begin{minipage}[b]{0.85\textwidth} \centering \includegraphics[width=\textwidth]{Figures/doublewell_0_1_10_5samp_100dFirstMoment.png} \end{minipage}\\ % \begin{minipage}[b]{0.85\textwidth} \centering \includegraphics[width=\textwidth]{Figures/secondmoment_double_well_100d_10_5samp.png} \end{minipage} % \caption{Comparison of first (top) and second (bottom) moments of the double well distribution in 100 dimensions after running the methods for \(10^5\) iterations and with a stepsize of \(h=0.1\). Note that \texttt{ULA} and \texttt{HOLA} diverge and produce no useful samples. The true first moment is 0, and second moment $\approx 0.1$} \label{fig:doubleWell_moment} \end{figure} \begin{figure} \centering \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{Figures/10sBoxPlot1moment100dim001step.png} \end{minipage} % \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{Figures/10sBoxPlot1moment100dim01step.png} \end{minipage} % \caption{Comparison of methods when given 10 seconds to run at \(h=0.01\) (left) and \(h=0.1\) (right). Note that \texttt{ULA} and \texttt{LM} both diverge when the stepsize is too high.} \label{fig:timedRun} \end{figure}
{ "alphanum_fraction": 0.7697022767, "avg_line_length": 81.5714285714, "ext": "tex", "hexsha": "ecbf0c24e0ad2b7dcf86969cdfc4dbf300e65c56", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "swyoon/LangevinMC", "max_forks_repo_path": "WriteUp/MomentErrors.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "swyoon/LangevinMC", "max_issues_repo_path": "WriteUp/MomentErrors.tex", "max_line_length": 1503, "max_stars_count": 10, "max_stars_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Tom271/LangevinMC", "max_stars_repo_path": "WriteUp/MomentErrors.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-04T13:35:13.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-07T12:51:19.000Z", "num_tokens": 1282, "size": 4568 }
% % TAZ.tex % % History of LulzBot Printers % % Copyright (C) 2014, 2015 Aleph Objects, Inc. % % This document is licensed under the Creative Commons Attribution 4.0 % International Public License (CC BY-SA 4.0) by Aleph Objects, Inc. % \section{LulzBot TAZ 1.0} LulzBot TAZ 1.0. \begin{figure}[h!] \thisfloatpagestyle{empty} \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1-cat.jpg} \caption{LulzBot TAZ 1.0 with Cat.} \label{fig:taz-1-cat} \end{figure} \begin{figure}[h!] \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1.jpg} \caption{LulzBot TAZ 1.0.} \label{fig:taz-1} \end{figure} \begin{figure}[h!] \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1-front-left.jpg} \caption{LulzBot TAZ 1.0 Front Left.} \label{fig:taz-1-front-left} \end{figure} \begin{figure}[h!] \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1-vase.jpg} \caption{LulzBot TAZ 1.0 with Vase.} \label{fig:taz-1-vase} \end{figure} \begin{figure}[h!] \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1-octo.jpg} \caption{LulzBot TAZ 1.0 with Octopus.} \label{fig:taz-1-octo} \end{figure} \begin{figure}[h!] \includegraphics[keepaspectratio=true,height=0.40\textheight,width=1.00\textwidth,angle=0]{taz/taz-1-max.jpg} \caption{LulzBot TAZ 1.0 Max Build Volume.} \label{fig:taz-1-max} \end{figure}
{ "alphanum_fraction": 0.7451361868, "avg_line_length": 30.2352941176, "ext": "tex", "hexsha": "8ae2917646ed96fc0b3cff7eaa5e96f96589aa11", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1923ed04c79b0eb81338b8be3fe2f2d57dae6e07", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "alephobjects/history-of-lulzbot-printers", "max_forks_repo_path": "source/TAZ-1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1923ed04c79b0eb81338b8be3fe2f2d57dae6e07", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "alephobjects/history-of-lulzbot-printers", "max_issues_repo_path": "source/TAZ-1.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "1923ed04c79b0eb81338b8be3fe2f2d57dae6e07", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "alephobjects/history-of-lulzbot-printers", "max_stars_repo_path": "source/TAZ-1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 587, "size": 1542 }
\documentclass[10pt,letterpaper]{article}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\usepackage{fancyhdr}
\usepackage{lastpage}
\pagestyle{fancyplain}

% Put watermark on
% \usepackage{draftwatermark}
% \SetWatermarkText{Draft}
% \SetWatermarkScale{7}

\lhead{}
\chead{Central Massachusetts Amateur Radio Association}
\rhead{}
\lfoot{\texttt{https://github.com/mide/cmara-meeting-minutes/}}
\cfoot{}
\rfoot{Page \thepage\ of \pageref{LastPage}}

\begin{document}

\begin{center}
{\huge January 2016 Board of Directors Meeting}\\
\emph{of the}\\
{\Large Central Massachusetts Amateur Radio Association}\\
\emph{Submitted by Mark Ide \texttt{W1IDE}, Secretary}
\end{center}

\section{Meeting Called to Order}
The CMARA January 2016 board of directors meeting was called to order on January 21, 2016 at 9:17 PM by CMARA president Bob Peloquin (\texttt{KB1VUA}).

\section{Attendance}
\subsection{Officers Present}
\begin{tabular}{|l|l|l|c|}
\hline
\textbf{Position} & \textbf{Name} & \textbf{Callsign} & \textbf{Present} \\
\hline
President & Bob Peloquin & \texttt{KB1VUA} & Yes \\
Vice President & Art Kass & \texttt{WA1RCQ} & Yes \\
Secretary & Mark Ide & \texttt{W1IDE} & Yes \\
Treasurer & Jim Singer & \texttt{N1EKO} & Yes \\
Webmaster & Lynn Glagowski & \texttt{WB1CCL} & No \\
\hline
\end{tabular}

\subsection{Board of Directors Present}
\begin{tabular}{|l|l|c|}
\hline
\textbf{Name} & \textbf{Callsign} & \textbf{Present} \\
\hline
Adrian Zeffert & \texttt{AB2IX} & Yes \\
Harold Carlson & \texttt{N1ZC} & No \\
Dick Jubinville & \texttt{W1REJ} & Yes \\
Terry Glagowski & \texttt{W1TR} & Yes \\
Randy Dore & \texttt{W4FEB} & Yes \\
Johnathan Sherman & \texttt{WW2JS} & No \\
\hline
\end{tabular}

\subsection{Members Present}
\begin{itemize}
\item Greg (\texttt{WA1JXR})
\item Don (\texttt{W3DEC})
\end{itemize}

\subsection{Guests \& Visitors}
No guests were present.

\section{Old Business}
There was no old business tabled.

\section{New Business}
\begin{enumerate}
\item Art (\texttt{WA1RCQ}) brought up concerns regarding field day and securing the site.
\begin{itemize}
\item We are no longer working directly with David Prouty High School when we secure the site; we are now working with the school district, which tends to have less flexibility.
\item We need to retrieve our proof of insurance in order to secure the site.
\item We need to increase our insurance from ``\$1 million - \$2 million'' to ``\$1 million - \$3 million''.
\item We need to request a certificate of insurance listing David Prouty as the holder for three days. Bob (\texttt{KB1VUA}) mentioned this is fairly common in towns.
\item If we stay with Moose Hill, we may have to pay \$36 per hour to use the field house. If we can't or don't want to take on that cost, we'll have to form a backup plan for sanitary facilities. We need to consider whether it's cheaper to rent porta-potties or to rent the field house.
\item We need to get the proof of our non-profit status (501(c)). Bob (\texttt{KB1VUA}) said this can be found on the Secretary of State website.
\item We should start to consider backup locations. A few locations that were put forward were the field belonging to Bob (\texttt{KB1VUA})'s astronomy club and the Mercy Centre.
\end{itemize}
\item The club does not have a presentation lined up for the February meeting.
The following ideas were put forward:
\begin{itemize}
\item Mark (\texttt{W1IDE}) - Linux
\item Terry (\texttt{W1TR}) - Tower \& Generator
\item Adrian (\texttt{AB2IX}) - How to Use Test Equipment
\item Tom (\texttt{NE1R}) - Knots
\item Don (\texttt{W3DEC}) - Near Vertical Incidence Skywave
\end{itemize}
It was decided unanimously that Don (\texttt{W3DEC}) will give his presentation on Near Vertical Incidence Skywave (NVIS) in February.
\item Earlier in the evening, Don (\texttt{W3DEC}) made a motion to discuss purchasing new repeater equipment for no more than \$1,500.00.
\begin{itemize}
\item We have a fair amount of money from the generosity of a ham; we have \$1,400 in the memorial account and \$900 in the repeater fund right now.
\item How much did we and will we spend on field day? And that's for one day. We should consider servicing the unit that the club uses every day.
\item The current repeater is a 45-year-old Micor model.
\item Don (\texttt{W3DEC}) has observed an uneven and inconsistent footprint. He could get 9 S-Units one day, and 4 S-Units the next.
\begin{itemize}
\item It's possible the antenna is going bad. We were originally transmitting on a StationMaster antenna that was in the middle of the tower, but that failed. We're now transmitting on a folded dipole that's on one leg of the tower. It's possible it's on the wrong leg.
\item Two-meter propagation does change regularly.
\item If the old radio tower sways, that would change propagation patterns too.
\end{itemize}
\item Greg (\texttt{WA1JXR}) stated that ``the repeater is stable as rock'' and that they've watched it.
\item Greg (\texttt{WA1JXR}) clarified the state and configuration of our antennas. We have two antennas on the tower. The receive antenna is at the very top of the tower and is shared with other tenants. The transmitting antenna is a folded dipole that's on one leg of the tower, about halfway up; we are the only ones using the transmitting antenna.
\item Regarding a backup solution, Greg (\texttt{WA1JXR}) has spare parts for the existing Micor repeater, and he also has a different repeater that we could put into service if the need arose. It's also worth mentioning that we can order another repeater at any point and install it at short notice.
\item Should we buy another antenna to replace the existing one?
\begin{itemize}
\item Terry (\texttt{W1TR}) will ask his other club about which antennas they would suggest for our situation.
\item We may want to consider measuring the footprint: take a couple of measurements across various locations and actually base our decisions on numbers.
\item We may invite Kurt for input on the antenna selection, as he may have opinions.
\end{itemize}
\item No action has been determined nor is recommended at this time.
\end{itemize}
\end{enumerate}

\section{Next Meeting}
The next meeting will be Thursday, February 18, 2016 at the Oakdale United Methodist Church, 15 North Main Street, West Boylston, MA 01583.\\

\section{Meeting Adjourned}
The meeting was adjourned at 9:58 PM by CMARA president Bob Peloquin (\texttt{KB1VUA}).

\end{document}
{ "alphanum_fraction": 0.7250996016, "avg_line_length": 51.7328244275, "ext": "tex", "hexsha": "754663e17e26ada07c8f23ce126609b05c6608a0", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cmara/meeting-minutes", "max_forks_repo_path": "minutes/2016-01-21-board-meeting.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cmara/meeting-minutes", "max_issues_repo_path": "minutes/2016-01-21-board-meeting.tex", "max_line_length": 376, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cmara/meeting-minutes", "max_stars_repo_path": "minutes/2016-01-21-board-meeting.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-27T17:33:16.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-27T17:33:16.000Z", "num_tokens": 1839, "size": 6777 }
\section{Advantages \& Limitations}
\label{sec:conclusion:advantages_and_limitations}

Although the approach taken by this thesis has some distinct advantages over other possible methods for proving the correctness of model transformations, there are also some limitations of the work that need to be discussed. These advantages and limitations are further discussed in this section before the work is evaluated.

As explained earlier, the transformation framework presented in \cref{chapter:transformation_framework} is considered the main result of this thesis. This framework is compelling in that it allows model transformations to be composed, which makes it possible to create possibly infinitely large model transformations between Ecore and GROOVE. In order to have composable model transformations, the concept of combining models and graphs is used. Within this work, it was chosen to maintain the correctness of the transformation at each step. This correctness means that only valid or consistent models and graphs are used within each step. The use of correct models and graphs is favourable because it makes proving the correctness of the combination much more straightforward. The correctness properties of the individual models and graphs can be used to prove the correctness of the combinations.

Maintaining correctness in each step of the composition is a definite advantage for maintaining the proof of correctness. However, it also imposes limitations on the transformation steps. Because of the required correctness properties, each transformation step has to be valid itself. For some transformations, this means quite significant transformation steps. For example, when introducing a new field on the type level, it is required to introduce the value for this field for all related objects on the instance level within one step. The introduction of all values at once is the only way that correctness is maintained. These are already quite large proofs, and they become increasingly complicated when adding inheritance. If a field is added to a supertype, a value for the field must be introduced for all instances of the supertype and its subtypes. Introducing values in this way means that an even more significant transformation step is needed to achieve such a composition. In practice, it is possible to use more substantial transformations, but it means that the complexity of proving each transformation step is increased.

The consequences of this limitation are directly visible in the library of transformations, \cref{chapter:library_of_transformations}. The transformations that introduce new fields are already quite complex, especially in their proofs. Furthermore, these transformations cannot be used on extended types because of the limitation above. Separate transformations need to be proven to allow the addition of fields to an extended type.

Because of the limitations of the transformation framework and the limited amount of time available for this thesis, the library of transformations is quite small and incomplete. Only a selected set of transformations is presented here, which does not even cover all concepts of Ecore. On the side of Ecore, the following concepts still need to be covered:
\begin{itemize}
\item The introduction of fields on types that are extended by other types, as discussed above.
\item The introduction of fields typed by different container types.
Only one transformation shows the use of a $\type{setof}$-type in conjunction with a $\type{containment}$ property (\cref{subsec:library_of_transformations:type_level_transformations:contained_class_set_fields} and \cref{subsec:library_of_transformations:instance_level_transformations:contained_class_set_field_values}), but the other containers also need to be covered, as do container types containing attributes.
\item The concept of multiple inheritance, which is supported by Ecore and GROOVE but not used in any transformation. Only one transformation with `single' inheritance is shown, as part of \cref{subsec:library_of_transformations:type_level_transformations:regular_subclasses} and \cref{subsec:library_of_transformations:instance_level_transformations:objects_of_subtype}.
\item Most of the different model properties (\cref{defin:formalisations:ecore_formalisation:type_models:type_model_properties}), $\type{defaultValue}$, $\type{identity}$, $\type{keyset}$, $\type{opposite}$ and $\type{readonly}$ to be precise, are not yet covered by any of the transformations. The model properties that are covered, $\type{abstract}$ and $\type{containment}$, are not yet covered to their full potential.
\item The introduction of constants and their corresponding values is not covered yet. Since they are only used in conjunction with $\type{defaultValue}$ properties, it makes sense to cover them at the same time.
\end{itemize}

Covering all concepts of Ecore with one or more encodings in GROOVE would mean that all concepts of GROOVE are also covered. However, achieving coverage in this way means adding a lot more transformations, which could be a research project of its own. Therefore, this is considered future work, as described in \cref{sec:conclusion:future_work}.

A different limitation of this thesis in general is its focus on syntactical correctness only. No effort has been made to prove the correctness of the semantics of a model under transformation. This correctness property has been excluded on purpose, as EMF/Ecore is a quite general modelling framework in which a lot of different software models can be expressed. Therefore, it is difficult to prove something about the semantics on an abstract level. A consequence of this decision is that curious encodings are possible which are still syntactically correct. For example, one might create an encoding that multiplies all integer values within a model by a certain number $x$ when transformed into a GROOVE graph. Then, a different transformation function can be used to convert back to a model, dividing all integer values by the same number $x$. Although this is syntactically correct, it could have enormous implications for use within software verification, as the values of the model have changed. If this is not taken into account beforehand, the results of the software verification could be questionable.

Although the work presented by this thesis has some limitations, the work is still considered a useful contribution. It is believed (although not proven) that it is possible to work around the limitations of the transformation framework, possibly with more substantial transformations. Moreover, the semantics of a transformation could be addressed in future research, and their omission does not invalidate the work presented here. Therefore, the work presented in this thesis should be used as a foundation, rather than as a piece of work that is ready to use.
{ "alphanum_fraction": 0.8218902614, "avg_line_length": 290.0833333333, "ext": "tex", "hexsha": "e531dbfc3195fc8505253cc652cf6e9c1c188f2f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_forks_repo_licenses": [ "AFL-3.0" ], "max_forks_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_forks_repo_path": "thesis/tex/07_conclusion/01_limitations.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AFL-3.0" ], "max_issues_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_issues_repo_path": "thesis/tex/07_conclusion/01_limitations.tex", "max_line_length": 1149, "max_stars_count": null, "max_stars_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_stars_repo_licenses": [ "AFL-3.0" ], "max_stars_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_stars_repo_path": "thesis/tex/07_conclusion/01_limitations.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1339, "size": 6962 }
\chapter{The Bard}
\index{Bard (class)}

The poems say an adventurer's life is all open roads and the glory of coin and combat. The tales told in every farmhand-filled inn have to have some ring of truth to them, don't they? The songs to inspire peasantry and royals alike--to soothe the savage beast or drive men to a frenzy--have to come from somewhere. Enter the bard.

You, with your smooth tongue and quick wit. You teller-of-tales and singer-of-songs. It takes a mere minstrel to retell a thing but a true bard to live it. Strap on your boots, noble orator. Sharpen that hidden dagger and take up the call. Someone's got to be there, fighting shoulder-to-shoulder with the goons and the thugs and the soon-to-be-heroes. Who better than you to write the tale of your own heroism? Nobody. Get going.

\section*{Names}

\emph{Elf}: Astrafel, Daelwyn, Feliana, Damarra, Sistranalle, Pendrell, Melliandre, Dagoliir

\emph{Human}: Baldric, Leena, Dunwick, Willem, Edwyn, Florian, Seraphine, Quorra, Charlotte, Lily, Ramonde, Cassandra

\section*{Look}

Choose one for each:
\begin{itemize}
\item Knowing Eyes, Fiery Eyes, or Joyous Eyes
\item Fancy Hair, Wild Hair, or Stylish Cap
\item Finery, Traveling Clothes, or Poor Clothes
\item Fit Body, Well-fed Body, or Thin Body
\end{itemize}

\section*{Stats}

Your maximum HP is 6+Constitution.

Your base damage is d6.

\section*{Starting Moves}

{\bfseries Choose a race and gain the corresponding move:}
\index{Bard (class)!moves|(}

\subsection{Elf}
When you enter an important location (your call) you can ask the GM for one fact from the history of that location.

\subsection{Human}
When you first enter a civilized settlement someone who respects the custom of hospitality to minstrels will take you in as their guest.

\vspace{\baselineskip}

{\bfseries You start with these moves:}

\subsection{Arcane Art}
When you \textbf{weave a performance into a basic spell}, choose an ally and an effect:
\begin{itemize}
\item Heal 1d8 damage
\item +1d4 forward to damage
\item Their mind is shaken clear of one enchantment
\item The next time someone successfully assists the target with aid, they get +2 instead of +1
\end{itemize}
Then roll+Cha. *On a 10+, the ally gets the selected effect. *On a 7-9, your spell still works, but you draw unwanted attention or your magic reverberates to other targets affecting them as well, GM's choice.

\subsection{Bardic Lore}
Choose an area of expertise:
\begin{itemize}
\item Spells and Magicks
\item The Dead and Undead
\item Grand Histories of the Known World
\item A Bestiary of Creatures Unusual
\item The Planar Spheres
\item Legends of Heroes Past
\item Gods and Their Servants
\end{itemize}
When you \textbf{first encounter an important creature, location, or item (your call) covered by your bardic lore} you can ask the GM any one question about it; the GM will answer truthfully. The GM may then ask you what tale, song, or legend you heard that information in.

\subsection{Charming and Open}
When you \textbf{speak frankly with someone}, you can ask their player a question from the list below. They must answer it truthfully, then they may ask you a question from the list (which you must answer truthfully).
\begin{itemize}
\item Whom do you serve?
\item What do you wish I would do?
\item How can I get you to \_\_\_\_\_\_?
\item What are you really feeling right now?
\item What do you most desire?
\end{itemize}

\subsection{A Port in the Storm}
When you \textbf{return to a civilized settlement you've visited before}, tell the GM when you were last here.
They'll tell you how it's changed since then.
\index{Bard (class)!moves|)}

\section*{Alignment}

{\bfseries Choose an alignment:}

\subsection{Good}
Perform your art to aid someone else.

\subsection{Neutral}
Avoid a conflict or defuse a tense situation.

\subsection{Chaotic}
Spur others to significant and unplanned decisive action.

\section*{Gear}

Your load is 9+Str. You have dungeon rations (5 uses, 1 weight). Choose one instrument; all are 0 weight for you:
\begin{itemize}
\item Your father's mandolin, repaired
\item A fine lute, a gift from a noble
\item The pipes with which you courted your first love
\item A stolen horn
\item A fiddle, never before played
\item A songbook in a forgotten tongue
\end{itemize}
Choose your clothing:
\begin{itemize}
\item Leather armor (1 armor, 1 weight)
\item Ostentatious clothes (0 weight)
\end{itemize}
Choose your armament:
\begin{itemize}
\item Dueling rapier (close, precise, 2 weight)
\item Worn bow (near, 2 weight), bundle of arrows (3 ammo, 1 weight), and short sword (close, 1 weight)
\end{itemize}
Choose one:
\begin{itemize}
\item Adventuring gear (1 weight)
\item Bandages (0 weight)
\item Halfling pipeleaf (0 weight)
\item 3 coins
\end{itemize}

\section*{Bonds}

Fill in the name of one of your companions in at least one:

\noindent This is not my first adventure with \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_.

\noindent I sang stories of \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ long before I ever met them in person.

\noindent \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ is often the butt of my jokes.

\noindent I am writing a ballad about the adventures of \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_.

\noindent \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ trusted me with a secret.

\noindent \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ does not trust me, and for good reason.

\section*{Advanced Moves}
\index{Bard (class)!moves|(}

{\bfseries When you gain a level from 2-5, choose from these moves.}

\subsection{Healing Song}
When you \textbf{heal with arcane art}, you heal +1d8 damage.

\subsection{Vicious Cacophony}
When you \textbf{grant bonus damage with arcane art}, you grant an extra +1d4 damage.

\subsection{It Goes To Eleven}
When you \textbf{unleash a crazed performance} (a righteous lute solo or mighty brass blast, maybe) choose a target who can hear you and roll+Cha. *On a 10+ the target attacks their nearest ally in range. *On a 7--9 they attack their nearest ally, but you also draw their attention and ire.

\subsection{Metal Hurlant}
When you \textbf{shout with great force or play a shattering note} choose a target and roll+Con. *On a 10+ the target takes 1d10 damage and is deafened for a few minutes. *On a 7--9 you still damage your target, but it's out of control: the GM will choose an additional target nearby.

\subsection{A Little Help From My Friends}
When you \textbf{successfully aid someone} you take +1 forward as well.

\subsection{Eldritch Tones}
Your arcane art is strong, allowing you to choose two effects instead of one.

\subsection{Duelist's Parry}
When you hack and slash, you take +1 armor forward.

\subsection{Bamboozle}
When you \textbf{parley with someone}, on a 7+ you also take +1 forward with them.

\subsection{Multiclass Dabbler}
Get one move from another class. Treat your level as one lower for choosing the move.

\subsection{Multiclass Initiate}
Get one move from another class. Treat your level as one lower for choosing the move.
\vspace{\baselineskip}

{\bfseries When you gain a level from 6-10, choose from these moves or the level 2-5 moves.}

\subsection{Healing Chorus}
Replaces: Healing Song

When you \textbf{heal with arcane art}, you heal +2d8 damage.

\subsection{Vicious Blast}
Replaces: Vicious Cacophony

When you \textbf{grant bonus damage with arcane art}, you grant an extra +2d4 damage.

\subsection{Unforgettable Face}
When you \textbf{meet someone you've met before} (your call) after some time apart you take +1 forward against them.

\subsection{Reputation}
When you \textbf{first meet someone who's heard songs about you}, roll+Cha. *On a 10+, tell the GM two things they've heard about you. *On a 7-9, tell the GM one thing they've heard, and the GM tells you one thing.

\subsection{Eldritch Chord}
Replaces: Eldritch Tones

When you use arcane art, you choose two effects. You also get to choose one of those effects to double.

\subsection{An Ear For Magic}
When you \textbf{hear an enemy cast a spell} the GM will tell you the name of the spell and its effects. Take +1 forward when acting on the answers.

\subsection{Devious}
When you use charming and open you may also ask ``How are you vulnerable to me?'' Your subject may not ask this question of you.

\subsection{Duelist's Block}
Replaces: Duelist's Parry

When you hack and slash, you take +2 armor forward.

\subsection{Con}
Replaces: Bamboozle

When you \textbf{parley with someone}, on a 7+ you also take +1 forward with them and get to ask their player one question which they must answer truthfully.

\subsection{Multiclass Master}
Get one move from another class. Treat your level as one lower for choosing the move.
\index{Bard (class)!moves|)}
{ "alphanum_fraction": 0.7573876049, "avg_line_length": 35.2105263158, "ext": "tex", "hexsha": "0bf023ffd7d6977e06e292cf302ddd160b49581a", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-01-27T03:56:49.000Z", "max_forks_repo_forks_event_min_datetime": "2016-09-01T13:27:46.000Z", "max_forks_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "Hegz/DW-Latex", "max_forks_repo_path": "tex/Bard.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "Hegz/DW-Latex", "max_issues_repo_path": "tex/Bard.tex", "max_line_length": 427, "max_stars_count": 6, "max_stars_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "Hegz/DW-Latex", "max_stars_repo_path": "tex/Bard.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-15T21:25:14.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-27T22:54:43.000Z", "num_tokens": 2374, "size": 8697 }
\documentclass[a4paper, 12pt]{article}

%% Language and font encodings
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}

%% Sets page size and margins
\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}

%% Useful packages
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}

\title{708 Assignment 1}
\date{}
\author{Karan Dasgupta}

%===============================================================================
\begin{document}
\maketitle

%\begin{abstract}
%Your abstract.
%\end{abstract}

\section*{Question 3}

Showing that Hamiltonians conserve energy, that is, $\frac{dH}{dt} = 0$ along $x(t)$:\\
Using $H=H(q(t),p(t))$, without any explicit time dependence, that is, $\frac{\partial H}{\partial t} = 0$, the chain rule gives
$$ \frac{dH}{dt} = \frac{\partial H}{\partial q_i} \frac{\partial q_i}{\partial t} + \frac{\partial H}{\partial p_i} \frac{\partial p_i}{\partial t} $$
Substituting Hamilton's equations
$$ \frac{\partial q_i}{\partial t} = \frac{\partial H}{\partial p_i} \hspace{1cm} \& \hspace{1cm} \frac{\partial p_i}{\partial t} = - \frac{\partial H}{\partial q_i} $$
we obtain
$$ \frac{dH}{dt} = \frac{\partial H}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial H}{\partial p_i} \frac{\partial H}{\partial q_i} = 0 $$
as we required. As a concrete check, for a harmonic oscillator with $H = \frac{p^2}{2m} + \frac{1}{2}kq^2$, Hamilton's equations give $\dot{q} = p/m$ and $\dot{p} = -kq$, so $\frac{dH}{dt} = kq\dot{q} + \frac{p}{m}\dot{p} = kq\frac{p}{m} - \frac{p}{m}kq = 0$.\\
This is a significant result for a few reasons. Firstly, the entropy of this system is shown to be an extensive variable, which, according to the fundamental postulate of thermodynamics, suggests that one can characterise the state of a thermodynamic system by a set of extensive variables, which are all observables. This also shows that even though the accessible phase space volume can change, we expect the Hamiltonian of the system to be conserved. This may also help us to identify this accessible phase space, since if we choose a phase space that doesn't satisfy this condition, we can determine that it is NOT part of the accessible phase space of the system. In relation to ergodicity, which states that the variable, in this case energy, averaged over periods of time exhibits similar behaviour to its average over the accessible phase space volume, we have shown that if the Hamiltonian is conserved, the internal energy doesn't change as $t \rightarrow \infty$.

\newpage

\section*{Question 5}
\subsection*{Random Walks}

There are a number of different kinds of random walks. One of the simplest is called the Pearson Random Walk, after Karl Pearson, where we have steps of a fixed length, in a random direction. Similarly, we can also have a random walk on a lattice, where we move between nearest-neighbour sites on a fixed lattice. Some others to consider are:
\begin{itemize}
\item Levy flight: where the steps are sized according to a power-law distribution, and in random directions. Here, we may find that the effect of the largest step dominates the effect of the smaller steps.
\item Shrinking Steps: here, the random walk steps are decreasing in size; the random walker is getting tired. The length of step $n$ is given by $\lambda^n$, where $\lambda$ is a ``shrinking constant'' and is $<1$.
\item Growing Steps: here, the random walk steps increase in size, similar to the above case for shrinking steps. An example of this is in turbulent diffusion.
\end{itemize}
More generally, we can look at random walks in continuous space where steps are discrete in time, in discrete space where steps happen in continuous time, or, as in the case of diffusion, where both space and time are continuous.\\
Now, we can begin to examine the role that the spatial dimension plays in random walks. If we have a series of steps that trace some path of a random walker, we can define the ``exploration sphere'' as the range of points the random walker can be expected to visit in time $t$, in some arbitrary dimension $d$. We have seen that if the number of steps is proportional to the length of time that particles move for, then the root-mean-square displacement is proportional to $\sqrt{t}$. The density of visited sites, $\rho$, is given by the ratio of sites visited to the size of the exploration sphere:
$$ \rho \sim \frac{t}{\sqrt{t}^d} \sim t^{\left( 1-\frac{d}{2} \right)} $$
We arrive at this result by recognising that the numerator, $t$, is simply the number of sites visited, and $\sqrt{t}^d$ is a representation of the volume (omitting some constants). \\
Examining $\rho$ for various $d$:
$$ d<2: \rho \rightarrow \infty $$
$$ d=2: \rho \rightarrow constant $$
$$ d>2: \rho \rightarrow 0 $$
This gives us a few insights about whether we return to the starting point or not. The first is that, for $d<2$, we are guaranteed to return to the starting point, since the density of visited sites tends to infinity; such walks are termed ``recurrent''. Secondly, for $d>2$, $\rho$ tends to $0$, so we can't be certain that we will return to the starting point; such walks are termed ``transient''. Lastly, for the case $d=2$, that is, a path in the 2D plane, the walk technically falls within the recurrent regime; however, we find that the mean time to return is $\infty$. A small numerical check of this picture is sketched below.
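As an illustrative aside (our own addition, not part of the original assignment), this recurrence/transience picture can be checked numerically. The short Monte Carlo sketch below estimates, for simple walks on the $d$-dimensional integer lattice, the fraction that revisit the origin within a fixed number of steps; the function name and parameter values are our own choices, and the finite-step estimates approach the limiting behaviour only slowly, especially for $d=2$.

\begin{verbatim}
import random

def fraction_returned(d, steps=5000, walks=1000, seed=0):
    # Estimate the fraction of simple random walks on the d-dimensional
    # integer lattice that revisit the origin within `steps` steps.
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = [0] * d
        for _ in range(steps):
            axis = rng.randrange(d)           # pick one coordinate...
            pos[axis] += rng.choice((-1, 1))  # ...and step -1 or +1 along it
            if not any(pos):                  # back at the origin?
                returned += 1
                break
    return returned / walks

# Recurrent walks (d = 1, 2) give fractions creeping toward 1;
# the transient walk (d = 3) stays well below 1.
for d in (1, 2, 3):
    print(d, fraction_returned(d))
\end{verbatim}

\end{document}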
{ "alphanum_fraction": 0.728923724, "avg_line_length": 75.8115942029, "ext": "tex", "hexsha": "61d42cefa486def2c6975c63bad25588615058e7", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2018-04-15T22:54:11.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-26T21:38:05.000Z", "max_forks_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wvan478/708Notes2018", "max_forks_repo_path": "a1Contributions/708 A1-karan.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_issues_repo_issues_event_max_datetime": "2018-04-18T20:53:19.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-07T20:07:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wvan478/708Notes2018", "max_issues_repo_path": "a1Contributions/708 A1-karan.tex", "max_line_length": 592, "max_stars_count": 3, "max_stars_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wvan478/708Notes2018", "max_stars_repo_path": "a1Contributions/708 A1-karan.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-10T19:05:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-28T20:47:25.000Z", "num_tokens": 1340, "size": 5231 }
\section{Cocktails}

\input{Sections/Cocktails/sugar_syrup.tex}
\newpage
\input{Sections/Cocktails/amaretto_ginger_ale.tex}
\newpage
\input{Sections/Cocktails/strawberry_kiss.tex}
\newpage
\input{Sections/Cocktails/whiskey_sour.tex}
\newpage
\input{Sections/Cocktails/ginfizz.tex}
\newpage
\input{Sections/Cocktails/sloeginfizz.tex}
\newpage
\input{Sections/Cocktails/moscowmule.tex}
{ "alphanum_fraction": 0.8174807198, "avg_line_length": 18.5238095238, "ext": "tex", "hexsha": "d235bf90af72f4515ea59531b585932a2862393d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "huserben/cookbook", "max_forks_repo_path": "Sections/Cocktails.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "huserben/cookbook", "max_issues_repo_path": "Sections/Cocktails.tex", "max_line_length": 50, "max_stars_count": null, "max_stars_repo_head_hexsha": "d07bcea099e1028873f2fcac0f7d76f9b31ee9c7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "huserben/cookbook", "max_stars_repo_path": "Sections/Cocktails.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 130, "size": 389 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ english, man]{apa6} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Relationship between social capital and election results}, pdfauthor={Anisha Babu1, Hyeonjin Cha1, Diana DeWald1, \& Murat Kezer1}, pdflang={en-EN}, pdfkeywords={social capital, presidential elections}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering % Make \paragraph and \subparagraph free-standing \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi % Manuscript styling \usepackage{upgreek} \captionsetup{font=singlespacing,justification=justified} % Table formatting \usepackage{longtable} \usepackage{lscape} % \usepackage[counterclockwise]{rotating} % Landscape page setup for large tables \usepackage{multirow} % Table styling \usepackage{tabularx} % Control Column width \usepackage[flushleft]{threeparttable} % Allows for three part tables with a specified notes section \usepackage{threeparttablex} % Lets threeparttable work with longtable % Create new environments so endfloat can handle them % \newenvironment{ltable} % {\begin{landscape}\begin{center}\begin{threeparttable}} % {\end{threeparttable}\end{center}\end{landscape}} \newenvironment{lltable}{\begin{landscape}\begin{center}\begin{ThreePartTable}}{\end{ThreePartTable}\end{center}\end{landscape}} % Enables adjusting longtable caption width to table width % 
% Solution found at http://golatex.de/longtable-mit-caption-so-breit-wie-die-tabelle-t15767.html
\makeatletter
\newcommand\LastLTentrywidth{1em}
\newlength\longtablewidth
\setlength{\longtablewidth}{1in}
\newcommand{\getlongtablewidth}{\begingroup \ifcsname LT@\roman{LT@tables}\endcsname \global\longtablewidth=0pt \renewcommand{\LT@entry}[2]{\global\advance\longtablewidth by ##2\relax\gdef\LastLTentrywidth{##2}}\@nameuse{LT@\roman{LT@tables}} \fi \endgroup}

% \setlength{\parindent}{0.5in}
% \setlength{\parskip}{0pt plus 0pt minus 0pt}
%
\usepackage{etoolbox}
\makeatletter
\patchcmd{\HyOrg@maketitle}
  {\section{\normalfont\normalsize\abstractname}}
  {\section*{\normalfont\normalsize\abstractname}}
  {}{\typeout{Failed to patch abstract.}}
\patchcmd{\HyOrg@maketitle}
  {\section{\protect\normalfont{\@title}}}
  {\section*{\protect\normalfont{\@title}}}
  {}{\typeout{Failed to patch title.}}
\makeatother
\shorttitle{U.S. Social Capital and Elections}
\keywords{social capital, presidential elections}
\DeclareDelayedFloatFlavor{ThreePartTable}{table}
\DeclareDelayedFloatFlavor{lltable}{table}
\DeclareDelayedFloatFlavor*{longtable}{table}
\makeatletter
\renewcommand{\efloat@iwrite}[1]{\immediate\expandafter\protected@write\csname efloat@post#1\endcsname{}}
\makeatother
\usepackage{csquotes}
\ifxetex
  % Load polyglossia as late as possible: uses bidi with RTL languages (e.g. Hebrew, Arabic)
  \usepackage{polyglossia}
  \setmainlanguage[]{english}
\else
  \usepackage[shorthands=off,main=english]{babel}
\fi
\usepackage[]{biblatex}
\addbibresource{r-references.bib}

\title{Relationship between social capital and election results}
\author{Anisha Babu\textsuperscript{1}, Hyeonjin Cha\textsuperscript{1}, Diana DeWald\textsuperscript{1}, \& Murat Kezer\textsuperscript{1}}
\date{}

\affiliation{\vspace{0.5cm}\textsuperscript{1} University of Oregon}

\abstract{
Social capital has been an important predictor of U.S. presidential election results. However, previous studies conceptualize social capital as a unidimensional construct. We argue that a multidimensional conceptualization of social capital will reveal patterns that contribute to our understanding of social capital and its relationship with elections. In the present study, we first examine how social capital changes across years. Then, we investigate how different types of social capital (i.e., religious, civic, and labor) are associated with presidential election results across years. Social capital as a unidimensional construct was largely stable over the four time points used in this research (1997, 2005, 2009, and 2014). As such, it is unlikely that social capital has much impact on election results, as presidential winners typically alternate between the two major political parties. Moreover, our findings indicate that while aggregate social capital negatively predicted the democratic margin in the presidential elections, this was not the case when social capital was operationalized as a multidimensional construct. Therefore, future research should consider conceptualizing social capital as a construct composed of multiple dimensions.
}

\begin{document}
\maketitle

\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}

Social capital, a concept popularized in the late 20th century, describes a measure of connections among individuals within a society \autocite{putnam2000}.
This measure is largely concerned with the cultural norms, interpersonal trust, and social connections that allow individuals to act together in the pursuit of common goals (ibid.). Social capital is often conceptualized by recording the presence of businesses, institutions, etc.\ in counties, and the fluctuations in their presence over time. These institutions include religious organizations, civic associations, and sporting clubs, among others (see the \enquote{Data} section below for further details). The social science literature has previously examined the relationship between aggregate social capital and political involvement in European countries \autocites[e.g.,][]{morales2016}{jottier2012} as well as in the U.S. \autocite{la1998}.\\
More specifically, several studies suggest a relationship between social capital and voting behavior in the 2016 presidential race \autocite[such as][]{lee2020}. Notably, \textcite{giuliano2020} found that the aggregate density of social capital at the county and individual levels in 2016 was negatively correlated with the vote share of the republican candidate. However, relatively little is known about how specific (non-aggregate) sectors of social capital affected the presidential races of 2000, 2008, 2012, and 2016. To that end, we set out to explore the relation between social capital in U.S. counties from 1997 to 2014 and voting behavior during subsequent presidential races.

\hypertarget{methods}{%
\section{Methods}\label{methods}}

We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.

\hypertarget{data}{%
\subsection{Data}\label{data}}

The present study uses secondary datasets. First, \emph{The production of social capital in US counties} \autocite{rupasingha2006} constitutes the social capital data. Second, \emph{County Presidential Election Returns 2000-2016} \autocite{data6science} is used for the presidential election results. Both datasets provide data at the county level.\\
In the election data, we mainly used the number of votes each political party received in each county, as well as the total number of votes cast in that county in a given election.\\
In the social capital data, we selected only the variables that were available at all time points. These variables were the numbers of bowling centers, civic and social associations, public golf courses, religious organizations, fitness and recreational sports centers, political organizations, professional organizations, business associations, labor organizations, and non-profit organizations. We also used the variables indicating the population at the county level, voter turnout, census response rate, and a social capital index.

\hypertarget{data-preparation}{%
\subsection{Data Preparation}\label{data-preparation}}

To prepare the data for analysis, we started with the election data, as it is more comprehensive in terms of the number of counties. First, we selected the variables of interest. Then, we selected the election years (i.e., 2000, 2008, 2012, 2016) that match the social capital data. The year variable was renamed to indicate that it is the year of the election, so that it is not confused with the year variable in the social capital data. Next, we created new datasets for each presidential election of interest; these were later merged with the corresponding social capital data.\\
For each social capital dataset (i.e., 1997, 2005, 2009, 2014), we first added state codes for those counties that did not readily contain that information.
Then, we created two variables out of the area name, so that county names and state codes are stored in separate variables. Next, we selected the relevant variables and cleaned the variable names, keeping only the variables that were available at all of the chosen time points. We then created a year variable indicating when the data were collected.\\
Finally, we reordered the variables so that they are the same across datasets, and merged the four datasets to create one dataset that contains all of the data.

\hypertarget{data-analysis}{%
\subsection{Data analysis}\label{data-analysis}}

First, we provide descriptive statistics regarding the election results in Table 1. Next, we visualize the distribution of the votes across the U.S. Finally, we present several plots of our analyses, along with regression models in which election results are predicted by an aggregate social capital variable as well as by different types of social capital.\\
We used R \autocite[Version 4.0.2;][]{R-base} and the R-packages \emph{broom} \autocite[Version 0.7.1;][]{R-broom}, \emph{corx} \autocite[Version 1.0.6.1;][]{R-corx}, \emph{dplyr} \autocite[Version 1.0.2;][]{R-dplyr}, \emph{forcats} \autocite[Version 0.5.0;][]{R-forcats}, \emph{ggplot2} \autocite[Version 3.3.2;][]{R-ggplot2}, \emph{ggpubr} \autocite[Version 0.4.0;][]{R-ggpubr}, \emph{here} \autocite[Version 0.1;][]{R-here}, \emph{janitor} \autocite[Version 2.0.1;][]{R-janitor}, \emph{kableExtra} \autocite[Version 1.3.1;][]{R-kableExtra}, \emph{knitr} \autocite[Version 1.30;][]{R-knitr}, \emph{magrittr} \autocite[Version 1.5;][]{R-magrittr}, \emph{papaja} \autocite[Version 0.1.0.9997;][]{R-papaja}, \emph{purrr} \autocite[Version 0.3.4;][]{R-purrr}, \emph{readr} \autocite[Version 1.3.1;][]{R-readr}, \emph{rio} \autocite[Version 0.5.16;][]{R-rio}, \emph{scales} \autocite[Version 1.1.1;][]{R-scales}, \emph{sjmisc} \autocite[Version 2.8.5;][]{R-sjmisc}, \emph{stringr} \autocite[Version 1.4.0;][]{R-stringr}, \emph{tibble} \autocite[Version 3.0.3;][]{R-tibble}, \emph{tidyr} \autocite[Version 1.1.2;][]{R-tidyr}, \emph{tidyverse} \autocite[Version 1.3.0;][]{R-tidyverse}, and \emph{usmap} \autocite[Version 0.5.1;][]{R-usmap} for all our analyses.

\hypertarget{visualization}{%
\subsection{Visualization}\label{visualization}}

We sought to answer our research questions with different visualizations. First, we explored how social capital changes over the four time points in our study (1997, 2005, 2009, and 2014). Next, we explored how U.S. presidential election results over the years (2000, 2008, 2012, and 2016) change depending on social capital in the preceding years. To supplement these analyses, we also consider geographic trends in election results and social capital across the U.S. and the state of Oregon.

\hypertarget{results}{%
\section{Results}\label{results}}

We start with a description of the presidential elections analyzed in this study, and then continue with our main questions. Table 1 shows votes by candidate and year of election. Figure 1 visualizes the distributions of the votes on a U.S. map. Figure 2 shows the distribution of votes for the state of Oregon.
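To fix notation for the regression analyses reported below: the \emph{democratic margin} used as the outcome can be read, at the county level, as
\[ \mathrm{margin}_c = \frac{V_c^{\mathrm{Dem}} - V_c^{\mathrm{Rep}}}{V_c^{\mathrm{total}}}, \]
where $V_c^{\mathrm{Dem}}$, $V_c^{\mathrm{Rep}}$, and $V_c^{\mathrm{total}}$ denote the democratic, republican, and total vote counts in county $c$. This operationalization is an assumption made explicit here purely for readability; it is built from the vote-count variables described in the Data section.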
\begin{table}

\caption{\label{tab:descriptives}A summary table for votes by candidate and year of election.}
\centering
\begin{tabular}[t]{c|c|c|c|c}
\hline
Year & Party & N & Mean Candidate Votes & SD Candidate Votes\\
\hline
2000 & Dem & 3107 & 16218 & 57150\\
\hline
2000 & Green & 3107 & -- & --\\
\hline
2000 & Rep & 3107 & 16049 & 38632\\
\hline
2000 & -- & 3107 & 339 & 954\\
\hline
2008 & Dem & 3108 & 22157 & 76972\\
\hline
2008 & Rep & 3108 & 19167 & 44840\\
\hline
2008 & -- & 3108 & 577 & 1848\\
\hline
2012 & Dem & 3108 & 20974 & 73998\\
\hline
2012 & Rep & 3108 & 19409 & 44596\\
\hline
2012 & -- & 3108 & 838 & 2952\\
\hline
2016 & Dem & 3115 & 21071 & 80496\\
\hline
2016 & Rep & 3115 & 20160 & 43157\\
\hline
2016 & -- & 3115 & 2449 & 7509\\
\hline
\multicolumn{5}{l}{\rule{0pt}{1em}\textit{Note: } N = total number of counties in the US reporting data.}\\
\end{tabular}
\end{table}

\includegraphics{Script_files/figure-latex/visualization US election results-1.pdf}
\includegraphics{Script_files/figure-latex/visualization US election results-2.pdf}
\includegraphics{Script_files/figure-latex/visualization Oregon election results-1.pdf}
\includegraphics{Script_files/figure-latex/visualization Oregon election results-2.pdf}

\hypertarget{how-does-social-capital-change-over-time}{%
\subsection{How does social capital change over time?}\label{how-does-social-capital-change-over-time}}

We considered how social capital changes over time. To do so, we first created an \enquote{aggregate social capital} variable for each state. This variable represents the sum of all social capital types (bowling, civic, golf, religion, sport, political, professional, business, and labor) across all counties in a given state. To account for differences in state population, this aggregate variable was divided by the total population.

\includegraphics{Script_files/figure-latex/violin-1.pdf}

Figure 3: Violin plots of aggregate social capital for each state are shown for the years 1997, 2005, 2009, and 2014. Overlaid are boxplots representing each distribution's minimum, maximum, median, and first and third quartiles. The aggregate social capital variables (described above) were multiplied by 1000 to avoid small decimal values and improve readability.

This graph shows relatively consistent aggregate social capital over the years. It is important to note that the lower outlier in the year 1997, and the upper outliers in the years 2005, 2009, and 2014, are the values for Washington, D.C. Without this value, the distributions would be nearly identical across all time points.\\
As a secondary measure of aggregate social capital, we considered the values provided in our dataset. These provided values sum all types of social capital, divided by the population per 10,000, and divided by 10 (the first factor). These values were used to visualize geographic trends in social capital.\\
GEOGRAPHICAL TRENDS - MAY NEED MAP PLOTS HERE, OTHERWISE REMOVE ABOVE PARAGRAPH

We also considered how specific types of social capital change over time. In particular, we focused on the religious, civic, and labor types of social capital.

\includegraphics{Script_files/figure-latex/scatterplot-1.pdf}

Figure 4: These scatterplots show the changes in religious, civic, and labor social capital across years.

We then explored geographic trends for specific types of social capital. Figures 5a-7b visualize the distribution of the specific types of social capital on a U.S. map.
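For reference, the aggregate social capital measure plotted in Figure 3 can be written compactly as
\[ SC_s = \frac{1000}{P_s} \sum_{c \in s} \sum_{k \in \mathcal{K}} n_{c,k}, \]
where $\mathcal{K}$ is the set of social capital types listed above, $n_{c,k}$ is the number of organizations of type $k$ in county $c$, $P_s$ is the total population of state $s$, and the factor of 1000 is the rescaling noted in the caption of Figure 3. The symbols $SC_s$, $n_{c,k}$, and $P_s$ are introduced here only for compactness; they do not appear in the underlying datasets.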
\includegraphics{Script_files/figure-latex/visualization religious capital US map-1.pdf}
\includegraphics{Script_files/figure-latex/visualization religious capital US map-2.pdf}
\includegraphics{Script_files/figure-latex/visualization religious capital US map-3.pdf}
\includegraphics{Script_files/figure-latex/visualization religious capital US map-4.pdf}
\includegraphics{Script_files/figure-latex/visualization civic social capital maps-1.pdf}
\includegraphics{Script_files/figure-latex/visualization civic social capital maps-2.pdf}
\includegraphics{Script_files/figure-latex/visualization civic social capital maps-3.pdf}
\includegraphics{Script_files/figure-latex/visualization civic social capital maps-4.pdf}
\includegraphics{Script_files/figure-latex/visualization labor social capital maps-1.pdf}
\includegraphics{Script_files/figure-latex/visualization labor social capital maps-2.pdf}
\includegraphics{Script_files/figure-latex/visualization labor social capital maps-3.pdf}
\includegraphics{Script_files/figure-latex/visualization labor social capital maps-4.pdf}

\hypertarget{relationship-between-social-capital-and-election-results}{%
\subsection{Relationship between social capital and election results}\label{relationship-between-social-capital-and-election-results}}

Finally, we considered the relationship between social capital and election results. The figure below shows how the proportion of votes for the two major parties relates to aggregate social capital in the preceding years.\\

\includegraphics{Script_files/figure-latex/scatter and line-1.pdf}

Figure 8: This graph shows the relationship between aggregate social capital (same values as in Figure 3) and the proportion of votes for the political parties (democratic in blue, republican in red) over the years.

This set of plots reveals trends between aggregate social capital and the proportion of political party votes. Data for Washington, D.C. were not included in these plots, as it is an outlier. The figure reveals that as aggregate social capital increases, the proportion of democratic votes decreases and the proportion of republican votes increases.\\
Next, to test this relationship, we regressed democratic margin on aggregate social capital. Table 2 displays the results. Across all election years, social capital negatively predicts democratic margin, all ps \textless{} .05. In the 2012 \& 2016 elections, this relationship appears to be stronger. However, this relationship is inconsistent with the theoretical view that suggests a positive relationship between social capital and democratic margin \autocite[e.g.,][]{giuliano2020}. Therefore, we look at how different types of social capital predict democratic margin. Table 3 presents the results. Consistent with our earlier finding, religious social capital is also negatively associated with democratic margin across elections. However, we found that civic social capital is positively associated with democratic margin in all elections except for the 2000 election. In a similar vein, labor social capital has a positive and strong relationship with democratic margin across all elections.

\begin{table}

\caption{\label{tab:regression}Table 2.
Democratic Margin Predicted by Aggregate Social Capital}
\centering
\begin{tabular}[t]{r|r|r|r}
\hline
Term & B & SE & p\\
\hline
\multicolumn{4}{l}{\textbf{2000 Election}}\\
\hline
\hspace{1em}Intercept & -0.08 & 0.01 & <0.05\\
\hline
\hspace{1em}Social\_Capital\_(aggregate) & -0.07 & 0.01 & \vphantom{1} <0.05\\
\hline
\multicolumn{4}{l}{\textbf{2008 Election}}\\
\hline
\hspace{1em}Intercept & -0.07 & 0.01 & <0.05\\
\hline
\hspace{1em}Social\_Capital\_(aggregate) & -0.07 & 0.01 & <0.05\\
\hline
\multicolumn{4}{l}{\textbf{2012 Election}}\\
\hline
\hspace{1em}Intercept & -0.10 & 0.01 & <0.05\\
\hline
\hspace{1em}Social\_Capital\_(aggregate) & -0.09 & 0.01 & <0.05\\
\hline
\multicolumn{4}{l}{\textbf{2016 Election}}\\
\hline
\hspace{1em}Intercept & -0.16 & 0.01 & <0.05\\
\hline
\hspace{1em}Social\_Capital\_(aggregate) & -0.13 & 0.01 & <0.05\\
\hline
\end{tabular}
\end{table}

\begin{table}

\caption{\label{tab:corr2014}Table 3. Correlation between social capital variables (2014) and democratic margin (2016)}
\centering
\begin{tabular}[t]{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
  & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\
\hline
1. Bowling & - & & & & & & & & & & \\
\hline
2. Civic & .16* & - & & & & & & & & & \\
\hline
3. Golf & .18* & .17* & - & & & & & & & & \\
\hline
4. Religious & .18* & .25* & .35* & - & & & & & & & \\
\hline
5. Sport & -.01 & .00 & -.02 & .00 & - & & & & & & \\
\hline
6. Political & -.03 & .00 & -.03 & .00 & .01 & - & & & & & \\
\hline
7. Professional & -.01 & .08* & -.04* & -.03 & .02 & .20* & - & & & & \\
\hline
8. Business & .10* & .14* & .14* & .31* & -.02 & .09* & .16* & - & & & \\
\hline
9. Labor & .01 & .13* & -.03 & -.05* & .02 & .05* & .11* & -.02 & - & & \\
\hline
10. NonProfit & .22* & .35* & .28* & .37* & .02 & .09* & .14* & .33* & .00 & - & \\
\hline
11. Social Capital Index & .29* & .46* & .43* & .68* & .03 & .09* & .13* & .44* & .03 & .85* & -\\
\hline
12. Democratic Margin & -.09* & -.04* & -.14* & -.33* & .02 & .09* & .19* & -.09* & .13* & -.07* & -.14*\\
\hline
\multicolumn{12}{l}{\rule{0pt}{1em}\textit{Note: } * p < .05; ** p < .01; *** p < .001}\\
\end{tabular}
\end{table}

\begin{table}

\caption{\label{tab:corr2009}Table X. Correlation between social capital variables (2009) and democratic margin (2012)}
\centering
\begin{tabular}[t]{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
  & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\
\hline
1. Bowling & - & & & & & & & & & & \\
\hline
2. Civic & .21* & - & & & & & & & & & \\
\hline
3. Golf & .23* & .18* & - & & & & & & & & \\
\hline
4. Religious & .23* & .23* & .42* & - & & & & & & & \\
\hline
5. Sport & -.02 & -.01 & -.02 & .00 & - & & & & & & \\
\hline
6. Political & .00 & .05* & -.04* & -.01 & .01 & - & & & & & \\
\hline
7. Professional & .06* & .05* & -.04* & -.01 & .01 & .24* & - & & & & \\
\hline
8. Business & .12* & .21* & .17* & .26* & -.02 & .15* & .22* & - & & & \\
\hline
9. Labor & .04* & .13* & -.05* & -.03 & .01 & .06* & .11* & -.04* & - & & \\
\hline
10. NonProfit & .29* & .38* & .33* & .41* & .01 & .09* & .16* & .33* & .01 & - & \\
\hline
11. Social Capital Index & .36* & .47* & .48* & .65* & .03 & .10* & .16* & .41* & .05* & .86* & -\\
\hline
12. Democratic Margin & -.05* & .02 & -.10* & -.27* & .02 & .06* & .12* & -.12* & .19* & -.05* & -.08*\\
\hline
\multicolumn{12}{l}{\rule{0pt}{1em}\textit{Note: } * p < .05; ** p < .01; *** p < .001}\\
\end{tabular}
\end{table}

\begin{table}

\caption{\label{tab:corr2005}Table X.
Correlation between social capital variables (2005) and democratic margin (2008)}
\centering
\begin{tabular}[t]{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
  & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\
\hline
1. Bowling & - & & & & & & & & & & \\
\hline
2. Civic & .29* & - & & & & & & & & & \\
\hline
3. Golf & .23* & .18* & - & & & & & & & & \\
\hline
4. Religious & .22* & .24* & .34* & - & & & & & & & \\
\hline
5. Sport & -.04* & .04* & -.05* & -.09* & - & & & & & & \\
\hline
6. Political & -.02 & .04* & -.05* & -.01 & .05* & - & & & & & \\
\hline
7. Professional & .02 & .12* & -.03 & .03 & .12* & .21* & - & & & & \\
\hline
8. Business & .11* & .13* & .16* & .26* & -.01 & .12* & .17* & - & & & \\
\hline
9. Labor & .02 & .15* & -.04* & -.01 & .14* & .12* & .10* & -.02 & - & & \\
\hline
10. NonProfit & .30* & .37* & .29* & .40* & .02 & .06* & .18* & .30* & .02 & - & \\
\hline
11. Social Capital Index & .39* & .48* & .42* & .63* & .01 & .07* & .18* & .35* & .11* & .81* & -\\
\hline
12. Democratic Margin & -.04* & .08* & -.07* & -.23* & .14* & .06* & .12* & -.14* & .23* & -.03 & -.05*\\
\hline
\multicolumn{12}{l}{\rule{0pt}{1em}\textit{Note: } * p < .05; ** p < .01; *** p < .001}\\
\end{tabular}
\end{table}

\begin{table}

\caption{\label{tab:corr1997}Table X. Correlation between social capital variables (1997) and democratic margin (2000)}
\centering
\begin{tabular}[t]{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
  & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\
\hline
1. Bowling & - & & & & & & & & & & \\
\hline
2. Civic & .25* & - & & & & & & & & & \\
\hline
3. Golf & .22* & .18* & - & & & & & & & & \\
\hline
4. Religious & .23* & .21* & .17* & - & & & & & & & \\
\hline
5. Sport & -.01 & .04* & .01 & .01 & - & & & & & & \\
\hline
6. Political & -.02 & .05* & -.01 & -.06* & .04* & - & & & & & \\
\hline
7. Professional & .03 & .12* & -.03 & -.01 & .06* & .33* & - & & & & \\
\hline
8. Business & .10* & .14* & .05* & .09* & .01 & .17* & .22* & - & & & \\
\hline
9. Labor & .03 & .14* & -.01 & -.04* & .03 & .10* & .08* & -.02 & - & & \\
\hline
10. NonProfit & .39* & .44* & .24* & .39* & .04* & .06* & .18* & .30* & .00 & - & \\
\hline
11. Social Capital Index & .45* & .51* & .31* & .60* & .08* & .06* & .17* & .31* & .07* & .87* & -\\
\hline
12. Democratic Margin & -.15* & -.06* & -.13* & -.20* & .01 & .08* & .07* & -.08* & .25* & -.23* & -.26*\\
\hline
\multicolumn{12}{l}{\rule{0pt}{1em}\textit{Note: } * p < .05; ** p < .01; *** p < .001}\\
\end{tabular}
\end{table}

\begin{table}

\caption{\label{tab:regressiontypes}Table X.
Social Capital Variables Regressed on Democratic Margin for Each Time Point}
\centering
\begin{tabular}[t]{c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
\multicolumn{1}{c|}{\textbf{ }} & \multicolumn{3}{c|}{\textbf{2000}} & \multicolumn{3}{c|}{\textbf{2008}} & \multicolumn{3}{c|}{\textbf{2012}} & \multicolumn{3}{c}{\textbf{2016}} \\
\cline{2-4} \cline{5-7} \cline{8-10} \cline{11-13}
Term & B & SE & p & B & SE & p & B & SE & p & B & SE & p\\
\hline
Intercept & -0.12 & 0.01 & <0.05 & -0.09 & 0.01 & <0.05 & -0.11 & 0.01 & <0.05 & -0.17 & 0.01 & <0.05\\
\hline
Religious & -0.09 & 0.01 & <0.05 & -0.15 & 0.01 & <0.05 & -0.16 & 0.01 & <0.05 & -0.21 & 0.01 & <0.05\\
\hline
Civic & -0.09 & 0.03 & <0.05 & 0.20 & 0.03 & <0.05 & 0.11 & 0.03 & <0.05 & 0.08 & 0.04 & 0.05\\
\hline
Labor & 0.89 & 0.06 & <0.05 & 1.03 & 0.08 & <0.05 & 0.90 & 0.09 & <0.05 & 0.65 & 0.10 & <0.05\\
\hline
\multicolumn{13}{l}{\rule{0pt}{1em}\textit{Note: } The headers indicate election years.}\\
\end{tabular}
\end{table}

\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}

This project sought to study trends in U.S. social capital and presidential election results. It is thought that fluctuations in social capital may inform political involvement. As such, we observed changes in social capital over time and the subsequent election results for the two major political parties (democratic and republican).\\
Our initial analysis considered how aggregate social capital changes over time. Aggregate capital was largely stable over the four time points used in this project (1997, 2005, 2009, and 2014). As such, it is unlikely that aggregate capital has much impact on election results, as presidential winners typically alternate between the two major political parties.

GEOGRAPHICAL TRENDS

A secondary analysis considered the religious, civic, and labor types of social capital more specifically. Our analysis of these types of social capital revealed a somewhat linear decrease in labor and civic social capital, and larger fluctuations in religious social capital. As such, none of these types of social capital corresponded directly with election results in the following years. Moreover, while aggregate social capital always negatively predicted democratic margin, this was not the case for the individual types of social capital. That is, religious social capital was negatively associated with democratic margin, whereas civic and labor social capital were mostly positively related to the outcome. This finding underlines the importance of investigating different types of social capital with regard to election results.

GEOGRAPHICAL TRENDS

Finally, we observed the direct relationship between aggregate social capital and the proportion of votes for the major political parties. Consistent with our earlier analysis, social capital did not appear to fluctuate with the winners of any of the selected presidential elections. Rather, social capital was tied to the proportion of party votes regardless of the winner. Higher social capital was associated with a lower proportion of democratic votes and a higher proportion of republican votes.\\
In summary, our analyses reveal that social capital does indeed correspond with political party votes in presidential elections. However, social capital trends over time do not appear to affect fluctuations in presidential winners. Further geographical considerations lend a more nuanced understanding of how social capital and election results are spread across the U.S. and the state of Oregon.
\newpage

\hypertarget{references}{%
\section{References}\label{references}}

\begingroup
\setlength{\parindent}{-0.5in}
\setlength{\leftskip}{0.5in}

\hypertarget{refs}{}
\endgroup
\printbibliography[heading=none]

\end{document}
{ "alphanum_fraction": 0.7144551324, "avg_line_length": 60.8336673347, "ext": "tex", "hexsha": "e6ed363b03ce2a93b966392618cd832d7240ed8b", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2020-11-23T17:46:10.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-18T18:29:01.000Z", "max_forks_repo_head_hexsha": "885ea3bbde1bda545cc66dd44bdd2d25f637ae53", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mkezer/EDLD651-Final-Project", "max_forks_repo_path": "script/Script.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "885ea3bbde1bda545cc66dd44bdd2d25f637ae53", "max_issues_repo_issues_event_max_datetime": "2020-11-23T15:41:33.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-23T15:41:33.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mkezer/EDLD651-Final-Project", "max_issues_repo_path": "script/Script.tex", "max_line_length": 1274, "max_stars_count": null, "max_stars_repo_head_hexsha": "885ea3bbde1bda545cc66dd44bdd2d25f637ae53", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mkezer/EDLD651-Final-Project", "max_stars_repo_path": "script/Script.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9461, "size": 30356 }
% NOTES:
% - 1000 words or less for RNAAS!
% - Add an appendix or some words to get >=3 pages for arxiv posting

% STYLE:
% - New line after each sentence (makes Git diffs readable)

% TODO:
% - Run: texcount -v3 -merge -incbib -dir -sub=none -utf8 -sum paper.tex

\documentclass[RNAAS]{aastex63}

% Load common packages
\usepackage{microtype}  % ALWAYS!
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{color}
\usepackage{enumitem}
\setlist[description]{style=unboxed}

\sloppy\sloppypar\raggedbottom\frenchspacing

\input{preamble.tex}

\shorttitle{Stellar streams in the Legacy Surveys}
\shortauthors{Shipp \& Price-Whelan}

\begin{document}

\title{Something catchy...}

\author[0000-0003-2497-091X]{Nora~Shipp}
\affiliation{affiliation...}

\author[0000-0003-0872-7098]{Adrian~M.~Price-Whelan}
\affiliation{Center for Computational Astrophysics, Flatiron Institute, Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA}

\section{Introduction}

% Keep it short: this will be our abstract on arxiv
Stellar streams provide a record of ancient and ongoing accretion onto galaxies, and place unique constraints on the nature of dark matter on the scales of individual galaxies and smaller.
The first streams were discovered in the Milky Way stellar halo by filtering photometric catalogs of stars, which led to ... \citep{Grillmair}.
Now, Gaia, use proper motions...\citep{Ibata,Malhan}.
However, many streams are too distant for Gaia to observe their main-sequence stars (what typical distance) ...
For these streams, deeper photometric surveys provide better star-galaxy separation and better photometric precision that increases the signal-to-noise... \citep{Belokurov:2007, Bernard:20??, Shipp:2018}.
Here, we produce maps of the ...

\section{Data and methods}

We use data from ...
We follow a methodology similar to \cite{Shipp:2018}...
Briefly, ... explain filtering ...

\section{Results}

Show figure...maybe RGB field of streams?
Link to zenodo archive of visualizations (so it has a DOI), explain content

\section{Discussion}

Lots of tentative new structures, but most are low S/N and hard to robustly select because of survey nonuniformities and background structures.
Something about LSST
Something about how we really need spectra of every halo star to get a high-contrast view of these things!

% \begin{figure}[!t]
% \begin{center}
% \includegraphics[width=0.8\textwidth]{example-binaries-long.pdf}
% \end{center}
% \caption{%
% The same as \figurename~\ref{fig:binary-examples-short}, but for \apogee\
% sources with long visit baselines ($\tau > 1000~\dayd$)
% \label{fig:binary-examples-long}
% }
% \end{figure}

\acknowledgements

It is a pleasure to thank...

Link to Legacy Surveys acknowledgement: http://legacysurvey.org/acknowledgment/

\software{
Astropy \citep{astropy:2018},
gala \citep{gala},
IPython \citep{ipython},
numpy \citep{numpy},
schwimmbad \citep{schwimmbad:2017},
scipy \citep{scipy},
}

\appendix

TODO.

% \bibliographystyle{aasjournal}
% \bibliography{refs}

\end{document}
{ "alphanum_fraction": 0.7426679281, "avg_line_length": 26.8728813559, "ext": "tex", "hexsha": "e0a5556b84c0e48ea87d593ffacba478d302b09b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "18819b8f4c878f99f1007637a0525c56600ef32a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adrn/slegs", "max_forks_repo_path": "rnaas/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "18819b8f4c878f99f1007637a0525c56600ef32a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adrn/slegs", "max_issues_repo_path": "rnaas/paper.tex", "max_line_length": 201, "max_stars_count": null, "max_stars_repo_head_hexsha": "18819b8f4c878f99f1007637a0525c56600ef32a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adrn/slegs", "max_stars_repo_path": "rnaas/paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 885, "size": 3171 }
% This is "sig-alternate.tex" V2.1 April 2013 % This file should be compiled with V2.5 of "sig-alternate.cls" May 2012 % % This example file demonstrates the use of the 'sig-alternate.cls' % V2.5 LaTeX2e document class file. It is for those submitting % articles to ACM Conference Proceedings WHO DO NOT WISH TO % STRICTLY ADHERE TO THE SIGS (PUBS-BOARD-ENDORSED) STYLE. % The 'sig-alternate.cls' file will produce a similar-looking, % albeit, 'tighter' paper resulting in, invariably, fewer pages. % % ---------------------------------------------------------------------------------------------------------------- % This .tex file (and associated .cls V2.5) produces: % 1) The Permission Statement % 2) The Conference (location) Info information % 3) The Copyright Line with ACM data % 4) NO page numbers % % as against the acm_proc_article-sp.cls file which % DOES NOT produce 1) thru' 3) above. % % Using 'sig-alternate.cls' you have control, however, from within % the source .tex file, over both the CopyrightYear % (defaulted to 200X) and the ACM Copyright Data % (defaulted to X-XXXXX-XX-X/XX/XX). % e.g. % \CopyrightYear{2007} will cause 2007 to appear in the copyright line. % \crdata{0-12345-67-8/90/12} will cause 0-12345-67-8/90/12 to appear in the copyright line. % % --------------------------------------------------------------------------------------------------------------- % This .tex source is an example which *does* use % the .bib file (from which the .bbl file % is produced). % REMEMBER HOWEVER: After having produced the .bbl file, % and prior to final submission, you *NEED* to 'insert' % your .bbl file into your source .tex file so as to provide % ONE 'self-contained' source file. % % ================= IF YOU HAVE QUESTIONS ======================= % Questions regarding the SIGS styles, SIGS policies and % procedures, Conferences etc. should be sent to % Adrienne Griscti ([email protected]) % % Technical questions _only_ to % Gerald Murray ([email protected]) % % Technical questions related to COCO/BBOB to [email protected] % =============================================================== % % For tracking purposes - this is V2.0 - May 2012 \documentclass{sig-alternate} \usepackage{graphicx} \usepackage{rotating} \usepackage[dvipsnames]{xcolor} % color is sufficient %\usepackage[hidelinks]{hyperref} % make COCO papers clickable \pdfpagewidth=8.5in \pdfpageheight=11in \special{papersize=8.5in,11in} \renewcommand{\topfraction}{1} % max fraction of floats at top \renewcommand{\bottomfraction}{1} % max fraction of floats at bottom % Parameters for TEXT pages (not float pages): \setcounter{topnumber}{3} \setcounter{bottomnumber}{3} \setcounter{totalnumber}{3} % 2 may work better \setcounter{dbltopnumber}{4} % for 2-column pages \renewcommand{\dbltopfraction}{1} % fit big float above 2-col. text \renewcommand{\textfraction}{0.0} % allow minimal text w. figs % Parameters for FLOAT pages (not text pages): \renewcommand{\floatpagefraction}{0.80} % require fuller float pages % N.B.: floatpagefraction MUST be less than topfraction !! 
\renewcommand{\dblfloatpagefraction}{0.7} % require fuller float pages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%% TO BE EDITED %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % rungeneric.py writes data into a subfolder of ppdata \newcommand{\bbobdatapath}{ppdata/} % default output folder of rungeneric.py \input{\bbobdatapath bbob_pproc_commands.tex} % provide default of algname and algfolder % \renewcommand{\algname}{MY-ALGORITHM-NAME} % name of algorithm as it should appear in the text % \renewcommand{\algfolder}{FOLDER/} % subfolder of \bbobdatapath for processed algorithm % Find all \change commands in the text below and update the information according to your data %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \graphicspath{{\bbobdatapath\algfolder}} \newcommand{\DIM}{\ensuremath{\mathrm{DIM}}} \newcommand{\aRT}{\ensuremath{\mathrm{aRT}}} \newcommand{\FEvals}{\ensuremath{\mathrm{FEvals}}} \newcommand{\nruns}{\ensuremath{\mathrm{Nruns}}} \newcommand{\Dfb}{\ensuremath{\Delta f_{\mathrm{best}}}} \newcommand{\Df}{\ensuremath{\Delta f}} \newcommand{\nbFEs}{\ensuremath{\mathrm{\#FEs}}} \newcommand{\hvref}{\ensuremath{HV_\mathrm{ref}}} \newcommand{\fopt}{\hvref} %\newcommand{\fopt}{\ensuremath{f_\mathrm{opt}}} \newcommand{\ftarget}{\ensuremath{f_\mathrm{t}}} \newcommand{\CrE}{\ensuremath{\mathrm{CrE}}} \newcommand{\change}[1]{{\color{red} #1}} \newcommand{\TODO}[1]{{\color{orange} !!! #1 !!!}} % To suppress warnings about PDF page groups: %\pdfsuppresswarningpagegroup=1 % Dimo: gives errors on my machine %%%%%%%%%%%%%%%%%%%%%% END OF PREAMBLE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} % % --- Author Metadata here --- \conferenceinfo{GECCO'16,} {July 20-24, 2016, Denver, CO, USA.} \CopyrightYear{2016} \crdata{TBA} \clubpenalty=10000 \widowpenalty = 10000 % --- End of Author Metadata --- \title{Black-Box Optimization Benchmarking Template for the Bi-Objective BBOB Test Suite % \titlenote{If needed} } \subtitle{Draft version \titlenote{Submission deadline: April 3rd.}} % Camera-ready paper due by May 4th. % % You need the command \numberofauthors to handle the 'placement % and alignment' of the authors beneath the title. % % For aesthetic reasons, we recommend 'three authors at a time' % i.e. three 'name/affiliation blocks' be placed beneath the title. % % NOTE: You are NOT restricted in how many 'rows' of % "name/affiliations" may appear. We just ask that you restrict % the number of 'columns' to three. % % Because of the available 'opening page real-estate' % we ask you to refrain from putting more than six authors % (two rows with three columns) beneath the article title. % More than six makes the first-page appear very cluttered indeed. % % Use the \alignauthor commands to handle the names % and affiliations for an 'aesthetic maximum' of six authors. % Add names, affiliations, addresses for % the seventh etc. author(s) as the argument for the % \additionalauthors command. % These 'additional authors' will be output/set for you % without further effort on your part as the last section in % the body of your article BEFORE References or any Appendices. \numberofauthors{1} % in this sample file, there are a *total* % of EIGHT authors. SIX appear on the 'first-page' (for formatting % reasons) and the remaining two appear in the \additionalauthors section. 
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Forename Name\\ %\titlenote{Dr.~Trovato insisted his name be first.}\\
%    \affaddr{Institute for Clarity in Documentation}\\
%    \affaddr{1932 Wallamaloo Lane}\\
%    \affaddr{Wallamaloo, New Zealand}\\
%    \email{[email protected]}
%% 2nd. author
%\alignauthor
%G.K.M. Tobin\titlenote{The secretary disavows
%any knowledge of this author's actions.}\\
%    \affaddr{Institute for Clarity in Documentation}\\
%    \affaddr{P.O. Box 1212}\\
%    \affaddr{Dublin, Ohio 43017-6221}\\
%    \email{[email protected]}
%% 3rd. author
%\alignauthor Lars Th{\o}rv{\"a}ld\titlenote{This author is the
%one who did all the really hard work.}\\
%    \affaddr{The Th{\o}rv{\"a}ld Group}\\
%    \affaddr{1 Th{\o}rv{\"a}ld Circle}\\
%    \affaddr{Hekla, Iceland}\\
%    \email{[email protected]}
%\and  % use '\and' if you need 'another row' of author names
%% 4th. author
%\alignauthor Lawrence P. Leipuner\\
%    \affaddr{Brookhaven Laboratories}\\
%    \affaddr{Brookhaven National Lab}\\
%    \affaddr{P.O. Box 5000}\\
%    \email{[email protected]}
%% 5th. author
%\alignauthor Sean Fogarty\\
%    \affaddr{NASA Ames Research Center}\\
%    \affaddr{Moffett Field}\\
%    \affaddr{California 94035}\\
%    \email{[email protected]}
%% 6th. author
%\alignauthor Charles Palmer\\
%    \affaddr{Palmer Research Laboratories}\\
%    \affaddr{8600 Datapoint Drive}\\
%    \affaddr{San Antonio, Texas 78229}\\
%    \email{[email protected]}
} % author
%% There's nothing stopping you putting the seventh, eighth, etc.
%% author on the opening page (as the 'third row') but we ask,
%% for aesthetic reasons that you place these 'additional authors'
%% in the \additional authors block, viz.
%\additionalauthors{Additional authors: John Smith (The Th{\o}rv{\"a}ld Group,
%email: {\texttt{[email protected]}}) and Julius P.~Kumquat
%(The Kumquat Consortium, email: {\texttt{[email protected]}}).}
%\date{30 July 1999}
%% Just remember to make sure that the TOTAL number of authors
%% is the number that will appear on the first page PLUS the
%% number that will appear in the \additionalauthors section.

\maketitle

\begin{abstract}
to be written
\end{abstract}

% Add any ACM category that you feel is needed, not mandatory anymore
%\category{G.1.6}{Numerical Analysis}{Optimization}[global optimization,
%unconstrained optimization]
%\category{F.2.1}{Analysis of Algorithms and Problem Complexity}{Numerical Algorithms and Problems}

% Complete with anything that is needed
\terms{Algorithms}

% Complete with anything that is needed
\keywords{Benchmarking, Black-box optimization, Bi-objective optimization}

% \section{Introduction}
%
% \section{Algorithm Presentation}
%
% \section{Experimental Procedure}
%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{CPU Timing}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% note that the following text is just a proposal and can/should be changed to your needs:
In order to evaluate the CPU timing of the algorithm, we have run the \change{\algname} with restarts on the entire bbob-biobj test suite \cite{biobj2016func} for $2 D$ function evaluations. The \change{C/Java/Matlab/Octave/Python} code was run on a \change{Mac Intel(R) Core(TM) i5-2400S CPU @ 2.50GHz} with \change{1} processor and \change{4} cores. The time per function evaluation for dimensions 2, 3, 5, 10, 20\change{, 40} equals \change{$x.x$}, \change{$x.x$}, \change{$x.x$}, \change{$xx$}, \change{$xxx$}\change{, and $xxx$} seconds respectively.
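As an illustration of how such per-evaluation timings can be collected, the following Python sketch (a minimal example assuming the \texttt{cocoex} module of the COCO platform \cite{hansen2016cocoplat} is installed; it is not the official timing experiment) evaluates random points on every bbob-biobj problem and averages the CPU time per evaluation for each dimension:

\begin{verbatim}
import time
import cocoex
import numpy as np

# Iterate over the full bbob-biobj suite with default instances/options.
suite = cocoex.Suite("bbob-biobj", "", "")
timings = {}  # dimension -> list of seconds-per-evaluation, per problem
for problem in suite:
    budget = 2 * problem.dimension  # 2 D evaluations, as in the text above
    mid = (problem.lower_bounds + problem.upper_bounds) / 2
    t0 = time.process_time()
    for _ in range(budget):
        # random points near the domain center stand in for an optimizer
        problem(mid + 0.1 * np.random.randn(problem.dimension))
    timings.setdefault(problem.dimension, []).append(
        (time.process_time() - t0) / budget)

for dim in sorted(timings):
    print(dim, "-D:", np.mean(timings[dim]), "s/evaluation")
\end{verbatim}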
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Results of \algname\ from experiments according to \cite{hansen2016exp}, \cite{hansen2016perfass} and \cite{biobj2016perfass} on the benchmark functions given in \cite{biobj2016func} are presented in Figures~\ref{fig:ECDFsingleOne}, \ref{fig:ECDFsingleTwo}, \ref{fig:ECDFsingleThree}, and \ref{fig:ECDFsGroups}, and in Table~\ref{tab:aRTs}. The experiments were performed with COCO \cite{hansen2016cocoplat}, version \change{1.0.1}, the plots were produced with version \change{1.0.4}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Scaling of ECDFs with dimension
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}
\centering
\begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}}
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f001}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f002}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f003}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f004}\\[-1.8ex]
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f005}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f006}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f007}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f008}\\[-1.8ex]
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f009}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f010}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f011}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f012}\\[-1.8ex]
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f013}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f014}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f015}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f016}\\[-1.8ex]
\end{tabular}
\caption{\label{fig:ECDFsingleOne}
\bbobecdfcaptionsinglefcts{}
}
\end{figure*}

\begin{figure*}
\centering
\begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}}
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f017}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f018}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f019}&
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f020}\\[-1.8ex]
\includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f021}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f022}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f023}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f024}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f025}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f026}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f027}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f028}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f029}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f030}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f031}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f032}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f033}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f034}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f035}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f036}\\[-1.8ex] \end{tabular} \caption{\label{fig:ECDFsingleTwo} Empirical cumulative distribution of simulated (bootstrapped) runtimes, measured in number of objective function evaluations, divided by dimension (FEvals/DIM) for the targets as given in Fig.~\ref{fig:ECDFsingleOne} for functions $f_{17}$ to $f_{36}$ and all dimensions. % % Empirical cumulative distribution function (ECDF) per dimension for all % targets of each function as in Fig.~\ref{fig:ECDFsingleOne} but for $f_{17}$ till $f_{36}$. 
} \end{figure*} \begin{figure*} \centering \begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}} \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f037}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f038}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f039}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f040}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f041}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f042}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f043}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f044}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f045}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f046}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f047}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f048}\\[-1.8ex] \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f049}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f050}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f051}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f052} \end{tabular} \begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}} \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f053}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f054}& \includegraphics[width=0.25\textwidth]{pprldmany-single-functions/pprldmany_f055}\\[-1.8ex] \end{tabular} \caption{\label{fig:ECDFsingleThree} Empirical cumulative distribution of simulated (bootstrapped) runtimes, measured in number of objective function evaluations, divided by dimension (FEvals/DIM) for the targets as given in Fig.~\ref{fig:ECDFsingleOne} for functions $f_{37}$ to $f_{55}$ and all dimensions. % Empirical cumulative distribution function (ECDF) per dimension for all targets of each function as in Fig.~\ref{fig:ECDFsingleOne} but for $f_{37}$ till $f_{55}$. } \end{figure*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Empirical cumulative distribution functions (ECDFs) per function group. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\rot}[2][2.5]{ \hspace*{-3.5\baselineskip}% \begin{rotate}{90}\hspace{#1em}#2 \end{rotate}} \begin{figure*} \begin{tabular}{c@{\hspace*{-0.02\textwidth}}c@{\hspace*{-0.02\textwidth}}c@{\hspace*{-0.02\textwidth}}c} separable-separable & separable-moderate & separable-ill-cond. 
& separable-multimodal\\ \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_1-separable_1-separable} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_1-separable_2-moderate} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_1-separable_3-ill-conditioned} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_1-separable_4-multi-modal}\\ separable-weakstructure & moderate-moderate & moderate-ill-cond. & moderate-multimodal\\ \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_1-separable_5-weakly-structured} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_2-moderate_2-moderate} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_2-moderate_3-ill-conditioned} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_2-moderate_4-multi-modal}\\ moderate-weakstructure & ill-cond.-ill-cond. & ill-cond.-multimodal & ill-cond.-weakstructure\\ \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_2-moderate_5-weakly-structured} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_3-ill-conditioned_3-ill-conditioned} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_3-ill-conditioned_4-multi-modal} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_3-ill-conditioned_5-weakly-structured} \\ multimodal-multimodal & multimodal-weakstructure & weakstructure-weakstructure & all 55 functions\\ \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_4-multi-modal_4-multi-modal} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_4-multi-modal_5-weakly-structured} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany_5-weakly-structured_5-weakly-structured} & \includegraphics[width=0.268\textwidth,trim=0 0 0 13mm, clip]{pprldmany-single-functions/pprldmany} \vspace*{-0.5ex} \end{tabular} \caption{\label{fig:ECDFsGroups} \bbobecdfcaptionallgroups{} } \end{figure*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Table showing the average running time (aRT in number of function % evaluations) to reach the given targets for functions $f_1$--$f_{55}$. 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{sidewaystable*} \centering {\tiny \parbox{0.499\textwidth}{\centering {\small 5-D}\\ \input{\bbobdatapath\algfolder pptable_05D_noiselessall}}% \parbox{0.499\textwidth}{\centering {\small 20-D}\\ \input{\bbobdatapath\algfolder pptable_20D_noiselessall}}}% \caption[Table of aRTs]{\label{tab:aRTs}\bbobpptablecaption{} } \end{sidewaystable*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\section{Discussion} % and/or conclusions etc %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % REFERENCES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % The following two commands are all you need in the % initial runs of your .tex file to % produce the bibliography for the citations in your paper. \bibliographystyle{abbrv} \bibliography{bbob} % bbob.bib is the name of the Bibliography in this case % You must have a proper ".bib" file and remember to run: % latex bibtex latex latex % to resolve all references % to create the ~.bbl file. Insert that ~.bbl file into % the .tex source file and comment out % the command \texttt{{\char'134}thebibliography}. % % ACM needs 'a single self-contained file'! % \clearpage % otherwise the last figure might be missing % Please uncomment for final version to fit paper to 8 pages. %\end{document} \appendix %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Scaling of aRT with dimension %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{figure*} \begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}} \includegraphics[width=0.223\textwidth]{ppfigdim_f001}& \includegraphics[width=0.223\textwidth]{ppfigdim_f002}& \includegraphics[width=0.223\textwidth]{ppfigdim_f003}& \includegraphics[width=0.223\textwidth]{ppfigdim_f004}& \includegraphics[width=0.223\textwidth]{ppfigdim_f005}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f006}& \includegraphics[width=0.223\textwidth]{ppfigdim_f007}& \includegraphics[width=0.223\textwidth]{ppfigdim_f008}& \includegraphics[width=0.223\textwidth]{ppfigdim_f009}& \includegraphics[width=0.223\textwidth]{ppfigdim_f010}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f011}& \includegraphics[width=0.223\textwidth]{ppfigdim_f012}& \includegraphics[width=0.223\textwidth]{ppfigdim_f013}& \includegraphics[width=0.223\textwidth]{ppfigdim_f014}& \includegraphics[width=0.223\textwidth]{ppfigdim_f015}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f016}& \includegraphics[width=0.223\textwidth]{ppfigdim_f017}& \includegraphics[width=0.223\textwidth]{ppfigdim_f018}& \includegraphics[width=0.223\textwidth]{ppfigdim_f019}& \includegraphics[width=0.223\textwidth]{ppfigdim_f020}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f021}& \includegraphics[width=0.223\textwidth]{ppfigdim_f022}& \includegraphics[width=0.223\textwidth]{ppfigdim_f023}& \includegraphics[width=0.223\textwidth]{ppfigdim_f024}& \includegraphics[width=0.223\textwidth]{ppfigdim_f025}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f026}& \includegraphics[width=0.223\textwidth]{ppfigdim_f027}& 
\includegraphics[width=0.223\textwidth]{ppfigdim_f028}& \includegraphics[width=0.223\textwidth]{ppfigdim_f029}& \includegraphics[width=0.223\textwidth]{ppfigdim_f030} \end{tabular} \vspace{-3ex} \caption{\label{fig:aRTgraphs} \bbobppfigdimlegend{$f_1$ and $f_{30}$} } \end{figure*} \begin{figure*} \begin{tabular}{@{\hspace*{-0.018\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}l@{\hspace*{-0.02\textwidth}}} \includegraphics[width=0.223\textwidth]{ppfigdim_f031}& \includegraphics[width=0.223\textwidth]{ppfigdim_f032}& \includegraphics[width=0.223\textwidth]{ppfigdim_f033}& \includegraphics[width=0.223\textwidth]{ppfigdim_f034}& \includegraphics[width=0.223\textwidth]{ppfigdim_f035}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f036}& \includegraphics[width=0.223\textwidth]{ppfigdim_f037}& \includegraphics[width=0.223\textwidth]{ppfigdim_f038}& \includegraphics[width=0.223\textwidth]{ppfigdim_f039}& \includegraphics[width=0.223\textwidth]{ppfigdim_f040}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f041}& \includegraphics[width=0.223\textwidth]{ppfigdim_f042}& \includegraphics[width=0.223\textwidth]{ppfigdim_f043}& \includegraphics[width=0.223\textwidth]{ppfigdim_f044}& \includegraphics[width=0.223\textwidth]{ppfigdim_f045}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f046}& \includegraphics[width=0.223\textwidth]{ppfigdim_f047}& \includegraphics[width=0.223\textwidth]{ppfigdim_f048}& \includegraphics[width=0.223\textwidth]{ppfigdim_f049}& \includegraphics[width=0.223\textwidth]{ppfigdim_f050}\\[-1.8ex] \includegraphics[width=0.223\textwidth]{ppfigdim_f051}& \includegraphics[width=0.223\textwidth]{ppfigdim_f052}& \includegraphics[width=0.223\textwidth]{ppfigdim_f053}& \includegraphics[width=0.223\textwidth]{ppfigdim_f054}& \includegraphics[width=0.223\textwidth]{ppfigdim_f055} \end{tabular} \vspace{-3ex} \caption{\label{fig:aRTgraphsTwo} Runtime versus dimension as described in Fig.~\ref{fig:aRTgraphs}, here for functions $f_{31}$ to $f_{55}$. } \end{figure*} \end{document}
{ "alphanum_fraction": 0.7009618187, "avg_line_length": 51.8865784499, "ext": "tex", "hexsha": "bef7c6a7e3c834c21d6532e3f249f8a12fab3220", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aa3d9fb9673dcdcad29f5f87fadfd06eec627f97", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "akhoufi/Optimization-Algo-2", "max_forks_repo_path": "code-postprocessing/latex-templates/templateBIOBJarticle.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aa3d9fb9673dcdcad29f5f87fadfd06eec627f97", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "akhoufi/Optimization-Algo-2", "max_issues_repo_path": "code-postprocessing/latex-templates/templateBIOBJarticle.tex", "max_line_length": 556, "max_stars_count": null, "max_stars_repo_head_hexsha": "aa3d9fb9673dcdcad29f5f87fadfd06eec627f97", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "akhoufi/Optimization-Algo-2", "max_stars_repo_path": "code-postprocessing/latex-templates/templateBIOBJarticle.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8331, "size": 27448 }
\section{Specifications} \label{sec:specs}
\newcounter{SpecID}

\subsection{Markers}
\refstepcounter{SpecID}
\label{spec:markers}

The arena and tokens in the game are labelled with fiducial markers. Each marker number is associated with a particular feature in the arena, and also has an associated size. The marker numbers and sizes are as follows:

\begin{center}
\begin{tabular}{lcc}
\toprule
\textbf{Item} & \textbf{Marker Number} & \textbf{Marker Size (\si{mm})} \\
\midrule
Arena boundary & 0 -- 27 & 250 \\
Columns & 28 -- 43 & 250 \\
Tokens belonging to the robot in zone 0 & 44 -- 48 & 100 \\
Tokens belonging to the robot in zone 1 & 49 -- 53 & 100 \\
Tokens belonging to the robot in zone 2 & 54 -- 58 & 100 \\
Tokens belonging to the robot in zone 3 & 59 -- 63 & 100 \\
% Robot Badges & 69 -- 73 & 100 \\
\bottomrule
\end{tabular}
\end{center}

All markers are oriented vertically such that the human-readable text is under the marker.

\subsection{Arena}
\refstepcounter{SpecID}
\label{spec:arena}

\begin{enumerate}
\item The arena floor is an \SI{8}{m} $\times$ \SI{8}{m} square. The tolerance of these two dimensions is $\pm$\SI{250}{mm}.
\item The floor of the arena is carpeted.
\item The layout of the arena is given in \figref{fig:arena}.
\item The outer walls of the arena are at least \SI{600}{mm} high, and the interior surface is white plastic-coated hardboard.
\item Each wall of the arena features seven \SI{250}{mm} fiducial markers. The positions of these markers are given in \figref{fig:sidewall}. The marker numbering is given in \figref{fig:arena}.
\item The robot starting zones are squares which share corners with the arena itself. Their sides are of length \SI{1}{m}.
\item Starting zones are numbered 0, 1, 2, 3 clockwise, starting at the north-west corner.
\item In the arena there are 4 fixed square columns with a height greater than or equal to \SI{370}{mm}, and a width of \SI{370}{mm}.
\item Each column will have a different marker on each of its 4 faces, as given in \figref{fig:arena}.
\item Markers will be placed on columns such that there is a \SI{120}{mm} gap at the bottom.
\item The scoring zones are squares with sides of \SI{2815}{mm}$\pm$\SI{50}{mm}, positioned with the columns separating them.
\item The starting and scoring zones are visually delineated on the floor of the arena by coloured tape. The outer edge of the tape indicates the outer edge of the zone. This tape is for visual reference only.
\item \label{spec:tokenpos} Tokens will be placed in undisclosed layouts within an inner \SI{2}{m} square of each scoring zone. The inner square is positioned such that two of its edges are the inside edges of the scoring zone. Tokens will start at least \SI{150}{mm} from columns or other tokens, and their layouts will be rotationally symmetric to those of the other zones.
\item Tokens will be placed in the scoring zone on the opposite side to their matching coloured scoring zone.
\item \label{spec:flags} Flags will be cylinders with a diameter of \SI{15}{mm} and a length of \SI{200}{mm} or longer. There will be a decoration attached \SI{100}{mm} from the top, with a height of \SI{100}{mm} and a width of \SI{150}{mm}, as described in \figref{fig:flag}. The cloth part of the flag must be visible when attached to the mount.
\end{enumerate}

\begin{sidewaysfigure}
\includegraphics[scale=0.58]{fig-sidewall.pdf}
\caption{Layout of markers along each arena wall.}
\label{fig:sidewall}
\end{sidewaysfigure}

\begin{figure}
\includegraphics[scale=0.58]{fig-arena.pdf}
\caption{Layout of zones and tokens in the arena.
Please note that tokens will be placed randomly but rotationally symmetrically within the respective scoring zones.}
\label{fig:arena}
\end{figure}

\begin{figure}
\includegraphics[scale=0.3]{fig-flag.pdf}
\caption{Specification of a robot flag.}
\label{fig:flag}
\end{figure}

\subsection{Tokens}
\refstepcounter{SpecID}
\label{spec:tokens}

\begin{enumerate}
\item Tokens are cubic corrugated cardboard boxes, with sides of length \SI{110}{mm}$\pm$\SI{10}{mm}.
\item Tokens will be coloured to match the colour of a scoring zone.
\item Each face of each token has a fiducial marker attached.
\item The initial layout of tokens in the arena is defined in \subspecref{spec:arena}{spec:tokenpos}.
\end{enumerate}
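Since the marker numbering above fully determines which arena feature a detected marker belongs to, a vision pipeline can recover the feature and physical marker size with a simple range lookup. The sketch below illustrates this; it is provided for illustration only, is not part of the rules, and its function name and output format are our own.

\begin{verbatim}
# Illustrative sketch (not part of the rules): classify a fiducial
# marker number according to the table in the Markers specification.
def classify_marker(number):
    """Return (feature, marker_size_mm) for a given marker number."""
    if 0 <= number <= 27:
        return ("arena boundary", 250)
    if 28 <= number <= 43:
        return ("column", 250)
    if 44 <= number <= 63:
        zone = (number - 44) // 5  # five token markers per zone
        return ("token of zone %d" % zone, 100)
    raise ValueError("marker %d is not used in the game" % number)

assert classify_marker(30) == ("column", 250)
assert classify_marker(59) == ("token of zone 3", 100)
\end{verbatim}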
{ "alphanum_fraction": 0.7099691222, "avg_line_length": 41.9814814815, "ext": "tex", "hexsha": "9807c4c1ded11b509400284a23988e04b44548a0", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-07-20T10:03:06.000Z", "max_forks_repo_forks_event_min_datetime": "2018-07-20T10:03:06.000Z", "max_forks_repo_head_hexsha": "d66a7e007a7899d2fa972fc7cdcfb673ea124e8a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Adimote/sb2018-rules", "max_forks_repo_path": "specs.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "d66a7e007a7899d2fa972fc7cdcfb673ea124e8a", "max_issues_repo_issues_event_max_datetime": "2018-04-28T14:57:39.000Z", "max_issues_repo_issues_event_min_datetime": "2017-12-17T20:34:11.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Adimote/sb2018-rules", "max_issues_repo_path": "specs.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "d66a7e007a7899d2fa972fc7cdcfb673ea124e8a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Adimote/sb2018-rules", "max_stars_repo_path": "specs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1249, "size": 4534 }
%\subsection{Psalm } \begin{Parallel}[v]{\colw}{\colx} {\latin{\noindent \textit{Ant.} A porta ínferi érue, Dómine, ánimam meam.}} {\vern {\noindent \textit{Ant.} O Lord, deliver my soul from the gates of the grave.}} \end{Parallel}
{ "alphanum_fraction": 0.6995708155, "avg_line_length": 29.125, "ext": "tex", "hexsha": "82d4d09b8e337e8574c31073c58ba9fcc70e4dfb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b0edee91a9aa5e1d985c1ce5e912fc16395d0ca4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "twclark21/Good-Friday-Tenebrae", "max_forks_repo_path": "antiphons/aporta.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b0edee91a9aa5e1d985c1ce5e912fc16395d0ca4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "twclark21/Good-Friday-Tenebrae", "max_issues_repo_path": "antiphons/aporta.tex", "max_line_length": 68, "max_stars_count": 3, "max_stars_repo_head_hexsha": "b0edee91a9aa5e1d985c1ce5e912fc16395d0ca4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "twclark21/Good-Friday-Tenebrae", "max_stars_repo_path": "antiphons/aporta.tex", "max_stars_repo_stars_event_max_datetime": "2018-03-05T22:49:34.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-05T02:19:53.000Z", "num_tokens": 88, "size": 233 }
\section{~Numerical approaches} \label{chapt:num} \newcounters \input{num/basics} \input{num/depth} \input{num/space} \input{num/space_trad} \input{num/space_trad_1} \input{num/space_trad_2} \input{num/space_trad_3} \input{num/space_curv} \input{num/space_tri} \input{num/space_SMC} \input{num/GSE} \input{num/GSE_null} \pb \input{num/GSE_BH} \pb \input{num/GSE_avg} \pb \input{num/obst} \input{num/move} \input{num/rotagrid} \input{num/spec} \input{num/spec_1up} \input{num/spec_uno} \input{num/spec_uq} \input{num/source} \input{num/ice} \input{num/w_c} \input{num/tide} \input{num/space_time_ext} \input{num/part} \input{num/track} \input{num/nest} % \bpage
{ "alphanum_fraction": 0.7435897436, "avg_line_length": 17.4473684211, "ext": "tex", "hexsha": "0ec1e2ac97e6fa872e36c51b01280d4514bd1c87", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_path": "WW3/manual/num.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_path": "WW3/manual/num.tex", "max_line_length": 49, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_path": "WW3/manual/num.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 266, "size": 663 }
\begin{savequote}[8cm]
‘After all, it is a common weakness of young authors to put too much into their papers.’
\qauthor{--- Ronald Fisher, \textit{\usebibentry{fisher1950contributions}{title}} \citeyearpar{fisher1950contributions}}
\end{savequote}

\chapter{\label{app:data-and-figs}Supplementary data and figures}
\minitoc{}

\section{Supplementary data}

\subsection{PRDM9\textsuperscript{Dom2/Cst}-targeted hotspots studied}

The table below gives the list of mouse hotspots targeted by either PRDM9\textsuperscript{Dom2} or PRDM9\textsuperscript{Cst} that have been individually studied.

\begin{table}[h]
\centering
\begin{adjustbox}{width = 1\textwidth}
\begin{tabular}{rrrr}
\toprule
\textbf{Name} & \textbf{Target allele} & \textbf{Chr.} & \textbf{Reference} \\
\midrule
A3 & PRDM9\textsuperscript{Dom2} & 1 & \citet{kelmenson2005torrid, cole2010comprehensive} \\
G7c & PRDM9\textsuperscript{Dom2} & 17 & \citet{snoek1998molecular} \\
E\textsubscript{\textgreek{β}} & PRDM9\textsuperscript{Dom2} & 17 & \citet{steinmetz1982molecular} \\ % strain B6
Esrrg1 & PRDM9\textsuperscript{Cst} & 1 & \citet{billings2013dna} \\
Hlx1 & PRDM9\textsuperscript{Cst} & 1 & \citet{ng2008quantitative,billings2013dna} \\
HS9 & PRDM9\textsuperscript{Dom2} & 19 & \citet{bois2007highly,getun2010nucleosome} \\ %B6/DBA2 strain
HS22 & PRDM9\textsuperscript{Dom2} & 19 & \citet{getun2010nucleosome} \\
HS59.4 & PRDM9\textsuperscript{Dom2} & 19 & \citet{getun2010nucleosome} \\
HS61.1 & PRDM9\textsuperscript{Dom2} & 19 & \citet{wu2010anatomy,getun2010nucleosome} \\
Pbx1 & PRDM9\textsuperscript{Dom2} & 1 & \citet{billings2013dna,baker2015multimer} \\
Psmb9 & PRDM9\textsuperscript{Cst} & 17 & \citet{guillon2002initiation,baudat2007cis} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption[List of PRDM9\textsuperscript{Dom2}- and PRDM9\textsuperscript{Cst}-targeted hotspots individually studied]
{\textbf{List of PRDM9\textsuperscript{Dom2}- and PRDM9\textsuperscript{Cst}-targeted hotspots individually studied.}
}
\label{tab:hotspots-studied-sperm-typing}
\end{table}

\subsection{Disclaimer for the resources used}

This work was performed using the computing facilities of the CC LBBE/PRABI\@.

\subsection{Erroneously called W~$\rightarrow$~S and S~$\rightarrow$~W events}

Quantifying gBGC comes down to measuring the $\frac{WS}{WS+SW}$ ratio. However, since the large majority of pot-NCO-1 events corresponded to FPs, we had to distinguish the (potential) contribution of FPs to this ratio from that of genuine NCO-1 events. In particular, this ratio may depart from the expected 50\% ratio if (1) a non-negligible proportion of FPs arise from sequencing miscalls and (2) W~$\rightarrow$~S and S~$\rightarrow$~W sequencing errors appear at different frequencies.\\
%
\paragraph{Proportion of FPs due to sequencing errors\\}
First, we thus wanted to quantify the proportion of FPs due to sequencing miscalls.
To do this, we estimated the sequencing error rate directly in our sequencing data by monitoring the appearance of \textit{de novo} variants: given that the mutation rate ($\sim$10\textsuperscript{-8}/bp) is much lower than the sequencing error rate ($\sim$10\textsuperscript{-3}/bp), we assumed that, outside the polymorphic sites identified by variant-calling, any base call that differed from the nucleotide of the reference genome was a sequencing error, and we counted these errors to compute the conditional frequency matrix of sequencing errors\footnote{Matrix $M$ was computed based on the analysis of one chromosome (chromosome 10) for all of our 18 samples individually (because the sequencing errors may vary between the biological samples and sequencing runs). This matrix gives the probability of each erroneous base call, given the genuine nucleotide.} ($M$):

\begin{equation*}
M =
\begin{bmatrix}
\Pr( A\rightarrow A \mid A) & \Pr( A\rightarrow C \mid A) & \Pr( A\rightarrow G \mid A) & \Pr( A\rightarrow T \mid A) \\
\Pr( C\rightarrow A \mid C) & \Pr( C\rightarrow C \mid C) & \Pr( C\rightarrow G \mid C) & \Pr( C\rightarrow T \mid C) \\
\Pr( G\rightarrow A \mid G) & \Pr( G\rightarrow C \mid G) & \Pr( G\rightarrow G \mid G) & \Pr( G\rightarrow T \mid G) \\
\Pr( T\rightarrow A \mid T) & \Pr( T\rightarrow C \mid T) & \Pr( T\rightarrow G \mid T) & \Pr( T\rightarrow T \mid T)
\end{bmatrix}
\end{equation*}

$\forall (i,j) \in \{A, C, G, T\}^2$, the number of NCO-1 FPs expected due to sequencing errors involving a genuine base $i$ mistakenly called as a $j$ base ($e_{i\rightarrow j}$) simply equalled the product of the number of central markers (i.e.\ markers \textit{not} located at the extremity of fragments) that were genuinely $i$ in $ij$ polymorphic sites ($g_{i}^{ij}$) and the conditional probability that a genuine $i$ would mistakenly be called a $j$ ($\Pr( i\rightarrow j \mid i )$):

\begin{equation}
\label{eq:nb-errors}
e_{i\rightarrow j} = g_{i}^{ij} \times \Pr( i\rightarrow j \mid i )
\end{equation}

$g_{i}^{ij}$ was not directly accessible from the data because we could not know which base calls were correctly sequenced. However, this number was linked to the number of central markers containing an $i$ allele and involved in a polymorphic site $ij$ ($n_{i}^{ij}$) through the following equation:

\begin{equation}
\label{eq:genuine-to-called}
n_{i}^{ij} = g_{i}^{ij} \times ( 1 - \Pr( i\rightarrow j \mid i ) ) + g_{j}^{ij} \times \Pr( j\rightarrow i \mid j )
\end{equation}

When we computed the $M$ matrix, we found that the frequency of sequencing errors was very low ($\simeq 10^{-3}$).
Thus, to approximate $g_{i}^{ij}$, we used the simplifying assumption that the frequencies of wrong calls were close to zero and those of good calls close to 1:

\begin{subequations}
\begin{alignat}{5}
\forall &(i,j) &{}\in{}& \{A, C, G, T\}^2 \; \backslash \: i \neq j, &{}\Pr({}& i\rightarrow j \mid i ) \simeq 0,\label{eq:assumption-low-freqs}\\
\forall &i &{}\in{}& \{A, C, G, T\}, &{}\Pr({}& i\rightarrow i \mid i ) \simeq 1
\end{alignat}
\end{subequations}

From equation~\ref{eq:assumption-low-freqs}, equation~\ref{eq:genuine-to-called} simplified to:

\begin{equation}
\label{eq:genuine-to-called-simplified}
n_{i}^{ij} \simeq g_{i}^{ij}
\end{equation}

And, by incorporating equation~\ref{eq:genuine-to-called-simplified} into equation~\ref{eq:nb-errors}, we had:

\begin{equation*}
\label{eq:nb-errors-with-only-known-parameters}
e_{i\rightarrow j} = n_{i}^{ij} \times \Pr( i\rightarrow j \mid i )
\end{equation*}

Finally, the total number of FPs that were expected due to sequencing errors ($E$) was the total sum of each type of sequencing error:

\begin{equation*}
\label{eq:sum-all-NCOs-expected}
E = \underset{i \neq j} {\sum_{(i, j) \in \{A, C, G, T\}^2}} e_{i\rightarrow j}
\end{equation*}

This allowed us to predict that, among the total 287,577,349 fragments overlapping 3 markers or more, 231,905 were expected to be discovered as NCO-1 FPs due to sequencing errors only. This represented 66.7\% of the 347,652\footnote{The sequencing error estimate was calculated upon all sequenced fragments, i.e.\ before setting the sequencing error filter, and thus had to be compared to the total number of NCO-1 FPs obtained without the filter (Table~\ref{tab:NCO-1-FP-rate-no-filter}).} NCO-1 FPs that we found in pot-NCO-1 events (110,615 in control regions + an estimate of 237,037 in hotspots, Table~\ref{tab:NCO-1-FP-rate-no-filter}).

\begin{table}[t]
\centering
\begin{tabular}{rrrrr}
\toprule
\textbf{Target} & \textbf{Nb of} & \textbf{Nb of} & \textbf{Nb of} & \textbf{Event rate} \\
\textbf{category} & \textbf{targets} & \textbf{fragments} & \textbf{events} & \textbf{($\times$ 10\textsuperscript{-6})} \\
\midrule
Hotspots & 1,018 & 228,984,512 & 243,390 & 1062.9 \\
Controls & 500 & 106,850,906 & 110,615 & 1035.2 \\
\midrule
\multicolumn{1}{r}{\textbf{FP rate}} & \multicolumn{4}{r}{\textbf{97.4 \%}} \\
\bottomrule
\end{tabular}
\caption[Number of pot-NCO-1 events detected in hotspot and control targets without the sequencing error filter]
{\textbf{Number of pot-NCO-1 events detected in hotspot and control targets without the sequencing error filter.}
\par
Pot-NCO-1 events were detected without the sequencing error filter controlling that the allele supporting the genotype call with the mapping onto the B6 genome is identical to that based on the mapping onto the CAST genome. All fragments or events overlapping at least 1 bp with a given target are counted in this table. The event rate corresponds to the ratio of candidate recombination events over the total number of fragments. The maximum false positive (FP) rate is the ratio of the event rate in control targets over that in hotspots.
}
\label{tab:NCO-1-FP-rate-no-filter}
\end{table}

We further evaluated the imprecision on this percentage by calculating, for each sample individually\footnote{With the exception of the four samples that were sequenced at low depth.}, the ratio between the number of FPs expected in that sample due to sequencing errors and the total number of fragments in the sample.
We sequentially applied the multiplier of each sample to the total number of fragments and finally determined that the proportion of FPs due to sequencing errors ranged between 60 and 78\% of all FPs.\\
Therefore, the largest part (66.7\%, CI $= [60\%; 78\%]$) of FPs arose from sequencing errors. The next step thus consisted in estimating the $\frac{WS}{WS+SW}$ ratio expected because of these sequencing errors. To do this, we simply computed the total number of FPs containing an erroneous W~$\rightarrow$~S base call ($E_{W\rightarrow S}$) and the number containing an erroneous S~$\rightarrow$~W base call ($E_{S\rightarrow W}$) as follows:

\begin{align}
E_{W\rightarrow S}&= e_{A\rightarrow C} + e_{A\rightarrow G} + e_{T\rightarrow C} + e_{T\rightarrow G}, \\
E_{S\rightarrow W}&= e_{C\rightarrow A} + e_{C\rightarrow T} + e_{G\rightarrow A} + e_{G\rightarrow T}
\end{align}

Importantly, we found that $E_{S\rightarrow W}$ was greater than $E_{W\rightarrow S}$, i.e.\ S bases were more often mistakenly sequenced as W bases than the other way round. More precisely, we found that the $\frac{WS}{WS+SW}$ ratio expected with such FPs (i.e.\ $\frac{E_{W\rightarrow S}}{E_{W\rightarrow S} + E_{S\rightarrow W}}$) equalled 0.39. We note that this estimate was slightly higher than the $\frac{WS}{WS+SW}$ ratio observed in control regions (0.31), possibly because the non-negligible portion (33.3\%) of FPs that did not originate from these sequencing errors may somehow also bias the ratio.

% \subsection{Source code to reproduce figures}
%
% The source code to reproduce figures will be put online shortly.
%

\hypersetup{linkcolor=titlepagecolorsection}
\section{Supplementary figures for Chapters~\ref{ch:6-recombination-parameters} and~\ref{ch:7-quantification-BGC}}
\hypersetup{linkcolor=black}

\subsection{Figures of recombination events per hotspot}

The figures corresponding to the recombination events detected on all 889 recombination hotspots displaying at least one event will be accessible until the end of year 2019 at the following url: \url{https://drive.google.com/open?id=1d48R_npcqyWTCixwiMpo9DC2oyrLV4v_}. Afterwards, they might be moved to another location online (unknown at the time this manuscript was written).

\newpage
\subsection{Distribution of switch points}

\begin{figure}[h!]
\centering
\includegraphics[width = 1\textwidth]{figures/appendices/density_switch_points.eps}
\caption[Distribution of switch points along hotspots for Rec-1S and Rec-2S events]
{\textbf{Distribution of switch points along hotspots for Rec-1S and Rec-2S events.}
}
\label{fig:density-switch-points}
\end{figure}

\newpage
\subsection{Correlation between expected and observed donor}

\begin{figure}[h!]
\centering
\includegraphics[width = 1\textwidth]{figures/appendices/CorrelationDMC1_FINAL_BIS_on_lab_computer_COLOURS.eps}
\caption[Correlation between the expected and observed proportions of CAST-donor fragments across hotspots displaying at least 5 events, coloured per PRDM9 target]
{\textbf{Correlation between the expected and observed proportions of CAST-donor fragments across hotspots displaying at least 5 events, coloured per PRDM9 target.}
\par
The expected proportion of CAST-donor fragments (x-axis) was based on the probability that the DSB initiates on the B6 haplotype from DMC1 ssDNA-sequencing (SSDS) data by \citet{smagulova2016evolutionary} (see main text). Only the 582 hotspots displaying a minimum of 5 recombination events were reported in this figure.
The Pearson correlation between the two measures gave: $R^2 = 0.66$; {\textit{p}-val $< 2.2 \times 10^{-16}$}. } \label{fig:correl-donor-DMC1-with-colour} \end{figure} \hypersetup{linkcolor=titlepagecolorsection} \section{Supplementary figures for Chapter~\ref{ch:8-HFM1}} \hypersetup{linkcolor=black} \subsection{Genetic background of all chromosomes} % LEFT PAGE \begin{sidewaysfigure}[p] \centering \leftskip-3.4cm \rightskip-2.7cm \rotfloatpagestyle{empty} \includegraphics[width = 1.25\textwidth]{figures/chap8/HFM1_background_28355-DOM.eps} \captionsetup{width=1.25\textwidth, margin={-2.2cm, -3.3cm}} \caption[Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28355] {\textbf{Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28355.} \par Chromosomes are represented in grey and oriented so that the centromere is on the bottom side of the figure (mouse chromosomes are acrocentric). Each segment corresponds to the position of a target (hotspot or control region) and was coloured in red when the background inferred was BD/BD (homozygous) and in blue when the background inferred was BD/CAST (heterozygous). } \label{fig:mosaic-backgrounds-2} \end{sidewaysfigure} % RIGHT PAGE \begin{sidewaysfigure}[p] \centering \leftskip-2.4cm \rightskip-2.4cm \rotfloatpagestyle{empty} \includegraphics[width = 1.25\textwidth]{figures/chap8/HFM1_background_28367-DOM.eps} \captionsetup{width=1.25\textwidth, margin={-2.2cm, -3.3cm}} \caption[Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28367] {\textbf{Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28367.} \par Chromosomes are represented in grey and oriented so that the centromere is on the bottom side of the figure (mouse chromosomes are acrocentric). Each segment corresponds to the position of a target (hotspot or control region) and was coloured in red when the background inferred was BD/BD (homozygous) and in blue when the background inferred was BD/CAST (heterozygous). } \label{fig:mosaic-backgrounds-3} \end{sidewaysfigure} % LEFT PAGE \begin{sidewaysfigure}[p] \centering \leftskip-3.4cm \rightskip-2.7cm \rotfloatpagestyle{empty} \includegraphics[width = 1.25\textwidth]{figures/chap8/HFM1_background_28371-DOM.eps} \captionsetup{width=1.25\textwidth, margin={-2.2cm, -3.3cm}} \caption[Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28371] {\textbf{Mosaic of genetic backgrounds inferred at each target along the autosomes of mouse 28371.} \par Chromosomes are represented in grey and oriented so that the centromere is on the bottom side of the figure (mouse chromosomes are acrocentric). Each segment corresponds to the position of a target (hotspot or control region) and was coloured in red when the background inferred was BD/BD (homozygous) and in blue when the background inferred was BD/CAST (heterozygous). } \label{fig:mosaic-backgrounds-4} \end{sidewaysfigure} \newpage \subsection{Pairwise comparison of the RR in shared hotspots} \begin{figure}[h!] 
\centering
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28371 (WT) and 28353 (mutant)}
\includegraphics[width=\textwidth]{figures/chap8/28371_vs_28353.eps}
\end{subfigure}
\vspace{0.5cm}
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28371 (WT) and 28367 (mutant)}
\includegraphics[width=\textwidth]{figures/chap8/28371_vs_28367.eps}
\end{subfigure}
\caption[Correlation of the number of recombination events in shared hotspots between the 28371 WT mouse and the two mutant mice]
{\textbf{Correlation of the number of recombination events in shared hotspots between the 28371 WT mouse and the two mutant mice.}
}
\label{fig:pairwise-RR-shared-BIS-1}
\end{figure}

\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28355 (WT) and 28367 (mutant)}
\includegraphics[width=\textwidth]{figures/chap8/28355_vs_28367.eps}
\end{subfigure}
\vspace{0.5cm}
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28355 (WT) and 28353 (mutant)}
\includegraphics[width=\textwidth]{figures/chap8/28355_vs_28353.eps}
\end{subfigure}
\caption[Correlation of the number of recombination events in shared hotspots between the 28355 WT mouse and the two mutant mice]
{\textbf{Correlation of the number of recombination events in shared hotspots between the 28355 WT mouse and the two mutant mice.}
}
\label{fig:pairwise-RR-shared-BIS2}
\end{figure}

\newpage
\subsection{Pairwise comparison of the rate of Rec-1S events}

\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between the two WT mice}
\includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28371_vs_28355.eps}
\end{subfigure}
\vspace{0.5cm}
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between the two mutant mice}
\includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28367_vs_28353.eps}
\end{subfigure}
\caption[Correlation of the number of Rec-1S events in shared hotspots for the two WT and the two mutant mice]
{\textbf{Correlation of the number of Rec-1S events in shared hotspots for the two WT (a) and the two mutant (b) mice.}
}
\label{fig:pairwise-RR-shared-Rec1S-1}
\end{figure}

\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28371 (WT) and 28353 (mutant)}
\includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28371_vs_28353.eps}
\end{subfigure}
\vspace{0.5cm}
\begin{subfigure}[b]{0.75\textwidth}
\subcaption{Between 28371 (WT) and 28367 (mutant)}
\includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28371_vs_28367.eps}
\end{subfigure}
\caption[Correlation of the number of Rec-1S events in shared hotspots between the 28371 WT mouse and the two mutant mice]
{\textbf{Correlation of the number of Rec-1S events in shared hotspots between the 28371 WT mouse and the two mutant mice.}
}
\label{fig:pairwise-RR-shared-Rec1S-2}
\end{figure}

\begin{figure}[h!]
\centering \begin{subfigure}[b]{0.75\textwidth} \subcaption{Between 28355 (WT) and 28367 (mutant)} \includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28355_vs_28367.eps} \end{subfigure} \vspace{0.5cm} \begin{subfigure}[b]{0.75\textwidth} \subcaption{Between 28355 (WT) and 28353 (mutant)} \includegraphics[width=\textwidth]{figures/appendices/pairwise_Rec_1S/28355_vs_28353.eps} \end{subfigure} \caption[Correlation of the number of Rec-1S events in shared hotspots between the 28355 WT mouse and the two mutant mice] {\textbf{Correlation of the number of Rec-1S events in shared hotspots between the 28355 WT mouse and the two mutant mice.} } \label{fig:pairwise-RR-shared-Rec1S-3} \end{figure}
{ "alphanum_fraction": 0.7331146563, "avg_line_length": 48.0214797136, "ext": "tex", "hexsha": "ca128039f2ec37a631c2702b9be72c07e8a31ddc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f95237ad4f90f28a0fd7e429d3f8a1fd393e7224", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MaudGautier/PhD-thesis", "max_forks_repo_path": "text/appendix1-list-hotspots.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f95237ad4f90f28a0fd7e429d3f8a1fd393e7224", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MaudGautier/PhD-thesis", "max_issues_repo_path": "text/appendix1-list-hotspots.tex", "max_line_length": 719, "max_stars_count": null, "max_stars_repo_head_hexsha": "f95237ad4f90f28a0fd7e429d3f8a1fd393e7224", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MaudGautier/PhD-thesis", "max_stars_repo_path": "text/appendix1-list-hotspots.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6004, "size": 20121 }
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[ ]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
  \usepackage[T1]{fontenc}
  \usepackage[utf8]{inputenc}
  \usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
  \usepackage{unicode-math}
  \defaultfontfeatures{Scale=MatchLowercase}
  \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
  \setmathfont{latinmodern-math.otf}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
  \usepackage[]{microtype}
  \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
  \IfFileExists{parskip.sty}{%
    \usepackage{parskip}
  }{% else
    \setlength{\parindent}{0pt}
    \setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
  \KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
  pdftitle={Formalization of Ostrowski theorems in Lean theorem prover},
  pdfauthor={Ryan Lahfa; Julien Marquet; Hadrien Barral},
  hidelinks,
  pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage[ruled, french, frenchkw]{algorithm2e}
\usepackage{turnstile}
\usepackage{ebproof}
\usepackage{amssymb, upgreek}
\usepackage{color}
\definecolor{keywordcolor}{rgb}{0.7, 0.1, 0.1} % red
\definecolor{commentcolor}{rgb}{0.4, 0.4, 0.4} % grey
\definecolor{symbolcolor}{rgb}{0.0, 0.1, 0.6} % blue
\definecolor{sortcolor}{rgb}{0.1, 0.5, 0.1} % green
\definecolor{errorcolor}{rgb}{1, 0, 0} % bright red
\definecolor{stringcolor}{rgb}{0.5, 0.3, 0.2} % brown
\usepackage{listings}
\def\lstlanguagefiles{lstlean.tex}
\lstset{language=lean}
\usepackage{mathtools}
\usepackage{stmaryrd}
\DeclarePairedDelimiter\abs{\lvert}{\rvert}%
\DeclarePairedDelimiter\norm{\lVert}{\rVert}%
\DeclarePairedDelimiter\ceil{\lceil}{\rceil}%
\DeclarePairedDelimiter\floor{\lfloor}{\rfloor}%
\DeclareMathOperator*{\card}{card}%
\DeclareMathOperator*{\argmin}{argmin}%
\DeclareMathOperator*{\Mat}{Mat}%
\DeclareMathOperator{\Vol}{Vol}%
\DeclareMathOperator{\msucc}{succ}%
\DeclareMathOperator{\pgcd}{pgcd}%
\DeclareMathOperator{\ppcm}{ppcm}%
\DeclareMathOperator{\Ker}{Ker}%
\DeclareMathOperator*{\Vect}{Vect}%
\DeclareMathOperator{\rref}{rref}%
\DeclareMathOperator{\rg}{rg}%
\DeclareMathOperator{\lfp}{lfp}%
\DeclareMathOperator{\im}{Im}%
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}[lemma]
\newtheorem{definition}{Definition}
\newcommand{\PR}{\mathbb{P}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\K}{\mathbb{K}}
\newcommand{\M}{\mathcal{M}}
\newcommand{\F}{\mathbb{F}}
\newcommand{\class}[1]{\mathcal{C}^{#1}}
\newcommand{\ie}{\text{i.e.\ }}
\newcommand{\application}[5]{%
\begin{array}{ccccl}%
#1 & : & #2 & \to & #3 \\ %
& & #4 & \mapsto & #5 \\ %
\end{array}%
}%
\newtoks\rowvectoks
\newcommand{\rowvec}[2]{%
  \rowvectoks={#2}\count255=#1\relax
  \advance\count255 by -1
  \rowvecnexta}
\newcommand{\rowvecnexta}{%
  \ifnum\count255>0
    \expandafter\rowvecnextb
  \else
    \begin{pmatrix}\the\rowvectoks\end{pmatrix}
  \fi}
\newcommand\rowvecnextb[1]{%
  \rowvectoks=\expandafter{\the\rowvectoks&#1}%
  \advance\count255 by -1
  \rowvecnexta}
\newcount\colveccount
\newcommand*\colvec[1]{
  \global\colveccount#1
  \begin{pmatrix}
  \colvecnext
}
\def\colvecnext#1{
  #1
  \global\advance\colveccount-1
  \ifnum\colveccount>0
  \\
  \expandafter\colvecnext
  \else
  \end{pmatrix}
  \fi
}
% Swap the definition of \abs* and \norm*, so that \abs
% and \norm resizes the size of the brackets, and the
% starred version does not.
\makeatletter
\let\oldabs\abs
\def\abs{\@ifstar{\oldabs}{\oldabs*}}
%
\let\oldnorm\norm
\def\norm{\@ifstar{\oldnorm}{\oldnorm*}}
\makeatother
\usepackage[style=alphabetic,]{biblatex}
\renewcommand*{\bibfont}{\small}
\addbibresource{./Formalization.bib}
\addbibresource{./Berkovich.bib}
\title{Formalization of Ostrowski theorems\\ in Lean theorem prover}
\author{Ryan Lahfa\textsuperscript{$\dagger{}$,1,*} \and Julien Marquet\textsuperscript{$\dagger{}$,1} \and Hadrien Barral\textsuperscript{1}}
\date{}
\usepackage{onecolceurws_pandoc}

\begin{document}
\maketitle

\begin{abstract}
Ostrowski theorems provide a classification of all absolute values on certain fields and lie at the foundation of Berkovich space theory. In particular, over \(\Q\), every absolute value is either the trivial one, the usual one, or a \(p\)-adic one. This statement entirely determines the Berkovich spectrum of the integers. We formalize Ostrowski theorems in the Lean theorem prover, in two attempts: a first one aiming to understand the challenges and to determine a reachable generalization target, and a second one that reaches this target and shows everything the first attempt does in a simpler and cleaner way. Following this road, we identify low-hanging fruits missing from the Lean mathematical library and develop a self-contained, reusable general theory to formalize Ostrowski theorems in general contexts. Our proofs show the discrepancy between how easy it is to use algebra and how tedious it is to conduct analytical reasoning with inequalities and calculus, and call for a thorough examination of how to drastically simplify analysis in these contexts.
\end{abstract}

\textsuperscript{$\dagger{}$} These authors contributed equally to this work.

\textsuperscript{1} DIENS, École Normale Supérieure, CNRS, PSL University, Paris, France

\textsuperscript{*} Correspondence: \href{mailto:[email protected]}{Ryan Lahfa \textless{}[email protected]\textgreater{}}

\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}

\hypertarget{background-work}{%
\subsection{Background work}\label{background-work}}

The formalization of mathematics has seen many projects: \cite{wiedijkQEDManifestoRevisited}, \autocite{abelruffinicoq}, \autocite{feitthompsoncoq}, \autocite{buzzard2020perfectoids}, \autocite{lewis2019hensel}, \autocite{commelin2021witt}; most of them treat undergraduate mathematics and seldom research-level mathematics. In particular, the formalization projects surrounding mathlib \autocite{The_mathlib_Community_2020} are progressing at a fast pace, with Witt vectors \autocite{commelin2021witt}, schemes \autocite{buzzard2021schemes}, the Liquid Tensor Experiment \autocite{scholze2021liquid}.
Yet, formalizing research-level theories remains very difficult, especially when the theory requires non-trivial metaprogramming and tactics to simplify proof terms. In \autocite{buzzard2020perfectoids}, the definition of perfectoid spaces is formalized entirely; it required 33 files and more than 3000 lines of code which should have been in the mathematical library (the so-called \texttt{for\_mathlib} folder), and upstreaming such an amount of contributions is also a non-trivial problem \autocite{van_Doorn_2020}. Their formalization also used ad-hoc automation, notably with non-classical objects like algebraic structures ``with zero''.

In this paper, we formalize the very start of an alternative theory: Berkovich spaces. This paper follows those ideas and attempts to open up a formalization of Berkovich's young theory. To the best of our knowledge, this formalization has never made its way into any proof assistant. We also show along the way that picking up a research-level theory produces many undergraduate-level theorems that the Lean mathematical library lacks, and how doing so can provide better interfaces for further formalizations.

\hypertarget{ostrowski-theorem-and-berkovich-spaces}{%
\subsection{Ostrowski theorem and Berkovich spaces}\label{ostrowski-theorem-and-berkovich-spaces}}

This work provides an in-depth view of the process of formalizing Ostrowski's theorem and its variants. In this section, we first re-introduce the mathematical content. In section \ref{sec:first_attempt}, we detail our bruteforce attempt at formalizing the basic version of the theorem with minimal tooling. In section \ref{sec:smart_attempt}, we use the lessons learnt from the previous section to generalize our tooling so that Ostrowski's theorem and its variants can be derived while reusing the steps and arguments as much as possible. In section \ref{sec:conclusion}, we provide our feedback on the process and discuss future work to improve such formalizations and this work.

The core objects of Ostrowski's theorem are \textbf{absolute values}:

\begin{definition}[absolute value]
\label{def:absolute_value}
An absolute value on a ring $R$ is a function $\abs{\cdot}: R \to \R$ such that
\begin{enumerate}
\item{} $\forall x \in R, \abs{x} = 0 \iff x = 0$
\item{} $\forall x, y \in R, \abs{xy} = \abs{x} \abs{y}$
\item{} $\forall x, y \in R, \abs{x + y} \le \abs{x} + \abs{y}$
\end{enumerate}
\end{definition}

The usual absolute value is an absolute value in the sense of Definition \ref{def:absolute_value}. These objects allow one to build completions of \(\Q\) in algebraically interesting ways. The usual completion of \(\Q\) is \(\R\), and is obtained with the usual absolute value. Absolute values retain just the right amount of properties of the usual absolute value to be of \emph{both} analytical and algebraic interest.
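In Lean's mathematical library, Definition \ref{def:absolute_value} is captured by the \texttt{is\_absolute\_value} typeclass that our listings rely on. The following is a simplified sketch of it, specialized to \(\R\)-valued functions (the actual mathlib class is stated over an arbitrary ordered semiring and bundles nonnegativity explicitly):

\begin{lstlisting}
-- Simplified sketch of mathlib's `is_absolute_value`, specialized to ℝ.
class is_absolute_value {β} [semiring β] (f : β → ℝ) : Prop :=
(abv_nonneg : ∀ x, 0 ≤ f x)              -- nonnegativity, bundled by mathlib
(abv_eq_zero : ∀ {x}, f x = 0 ↔ x = 0)   -- axiom 1 of the definition above
(abv_add : ∀ x y, f (x + y) ≤ f x + f y) -- axiom 3, the triangle inequality
(abv_mul : ∀ x y, f (x * y) = f x * f y) -- axiom 2, multiplicativity
\end{lstlisting}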
In this paper, we focus on the following class of absolute values:

\begin{definition}[$p$-adic absolute value]
\label{def:padic_abv}
With $p \in \N$ prime, we denote $\abs{\cdot}_p$ the $p$-adic absolute value on $\Z$, where $\textrm{v}_p(k)$ is the multiplicity of $p$ in $k$:
\begin{equation*}
\abs{k}_p = p^{-\textrm{v}_p(k)}
\end{equation*}
\end{definition}

The superclass of \(p\)-adic absolute values is the class of \emph{non-Archimedean absolute values}:

\begin{definition}[non-Archimedean absolute value]
\label{def:nonArchimedean}
An absolute value $\abs{\cdot}$ is called \emph{non-Archimedean} when the following holds:
\begin{equation*}
\forall x, y \in R, \abs{x + y} \le \max\left(\abs{x}, \abs{y}\right)
\end{equation*}
\end{definition}

A natural question is to classify all absolute values over \(\Q\); they are classified \emph{up to equivalence}:

\begin{definition}[equivalence]
\label{def:abv_equiv}
Two absolute values $\abs{\cdot}_1$ and $\abs{\cdot}_2$ on a ring $R$ are said to be \emph{equivalent} when for some $\alpha > 0$ we have $\forall x \in R, {\abs{x}_1}^{\alpha} = \abs{x}_2$. When this holds, we write $\abs{\cdot}_1 \sim \abs{\cdot}_2$.
\end{definition}

It is noteworthy that equivalent absolute values are topologically equivalent: this turns Ostrowski's theorem into a bridge between algebra and analysis, completely classifying the absolute values on \(\Q\).

\begin{theorem}[Ostrowski]
\label{target:ostrowski}
Given $\lambda: \Q \to \R$ a nontrivial absolute value over $\Q$, either $\lambda \sim \abs{\cdot}$, or there is some $p \in \PR$ such that $\lambda \sim \abs{\cdot}_p$.
\end{theorem}

Such a theorem shows there is an alternative to the completion of \(\Q\): taking a prime number \(p\) and completing using the \(p\)-adic absolute value, giving rise to \(\Q_p\); and it shows that these completions are the only alternatives to the usual one. Ostrowski's theorem plays an interesting role in Berkovich space theory: it completely determines the structure of the Berkovich spectrum of the integers, \(\mathcal{M}(\Z)\), which is the set of all norms over \(\Z\) equipped with a certain topology. Note that Ostrowski's theorem has many variants that extend it to rings like \(\F[X]\) or more complex structures. We will explore in this work how a formalization of multiple variants can be obtained efficiently. For a more in-depth presentation of Berkovich space theory, refer to \autocite{ducrosBerkovichSpacesApplications2015} or \autocite{temkinIntroductionBerkovichAnalytic2015}.

\hypertarget{naive-formalization}{%
\section{\texorpdfstring{Naive formalization
\label{sec:first_attempt}}{Naive formalization
}}\label{naive-formalization}}

To understand the challenges behind formalizing Ostrowski's theorem, we attempted a bruteforce formalization over \(\Q\) based on \autocite{ruiterOstrowski}. The resulting proof is easily understandable; only basic mathematical tooling was needed, as in the original proof: Bézout's identity, simple limits and calculus. Yet this proof does not fit the standard of formalized mathematics: it is far too long and would greatly benefit from:

\begin{itemize}
\tightlist
\item
  extraction of lemmas, and generalization of most parts,
\item
  automation: most of the proof is calculus and could be automated with the right tactics and systems.
\end{itemize}

Concretely, the core lemma of this first attempt is around 200 lines long. It is built mainly with the \texttt{obtain} keyword, which is the formal equivalent of saying ``let us now show that \ldots''.
This construct allowed us to stay close to the intuition but led to longer proofs, like in the toy example that follows. For instance, one would start the proof of the bounded case with ``let us first show that there is some \(n \in \N, n > 0\) such that \(\abs{n} < 1\)'' (in this context, \(\abs{\cdot}\) is a nontrivial bounded absolute value). To quickly prove this statement on a piece of paper, we may say that:

\begin{itemize}
\tightlist
\item
  assuming \(\forall n \in \N^*, \abs{n} \ge 1\), then \(\forall n \in \N^*, \abs{n} = 1\) (\(\abs{\cdot}\) is bounded),
\item
  this is absurd because by hypothesis, \(\abs{\cdot}\) is nontrivial.
\end{itemize}

Following this exact scheme, our formalized proof starts with the following:

\begin{lstlisting}
obtain ⟨ n, zero_lt_n, abvn_lt_one ⟩: ∃ n: ℕ, 0 < n ∧ abv n < 1,
{ /- 18 lines omitted -/ }
\end{lstlisting}

Suddenly, a two-line ``human'' proof came out as an 18-line formalized version. In fact, what we really did when we proved this property in two sentences was:

\begin{itemize}
\tightlist
\item
  proceed by \emph{reductio ad absurdum},
\item
  realize that \(\abs{\cdot}\) is equal to \(1\) everywhere on \(\N\),
\item
  prove it by bounding the values of \(\abs{\cdot}\) using the suitable hypotheses,
\item
  realize that this is actually enough to prove that \(\abs{\cdot}\) is trivial,
\item
  show the contradiction by recalling our hypothesis: \(\abs{\cdot}\) is nontrivial.
\end{itemize}

Formalizing our two-liner required getting into punctilious details, and even further formal considerations when detailing the very informal ``realize that \ldots''. Our readers can easily imagine how a handful of calculations became a 200-line formal proof for the core lemma.

\hypertarget{pursuing-a-general-enough-point-of-view}{%
\section{\texorpdfstring{Pursuing a general enough point of view
\label{sec:smart_attempt}}{Pursuing a general enough point of view
}}\label{pursuing-a-general-enough-point-of-view}}

Naturally, the previous proof lacked generality and contained too much irrelevant detail, which translated into bothersome ad-hoc statements, so we adopted two objectives from this experience:

\begin{itemize}
\tightlist
\item
  as much as possible, make Ostrowski's theorem a natural consequence of the general theory and allow for interesting generalizations, e.g.~Ostrowski over \(\F[X]\),
\item
  see how to fit parts of this general theory in the Lean mathematical library, so it can benefit other users.
\end{itemize}

Our intuition is that a synthetic point of view is more suitable for formalization than an analytic approach. Therefore, we went looking for adequate algebraic theories to support our goals. We take inspiration from the presentation of Ostrowski's theorem in \autocite{artinAlgebraicNumbersAlgebraic2005} and transform the approach in a way suitable for formalization.

\hypertarget{section:core_theory}{%
\subsection{Core of the theory}\label{section:core_theory}}

For this presentation, we will use a principal ideal domain (PID) \(R\). The core idea is to keep an algebraic point of view and develop some tools to characterize the behavior of bounded absolute values on general rings (Definition \ref{def:our_boundness}).
\begin{definition}
\label{def:our_boundness}
Given $\abs{\cdot}: R \to \R$ an absolute value, $\abs{\cdot}$ is said to be bounded when:
\begin{equation*}
\forall x \in R, \abs{x} \leq 1
\end{equation*}
\end{definition}

Note that this is equivalent to the usual definition of boundedness (existence of some upper bound):

\begin{itemize}
\tightlist
\item
  if \(\abs{\cdot}\) is bounded, then \(1\) is an upper bound,
\item
  otherwise there is some \(x\) such that \(\abs{x} > 1\), then \(\abs{x^n} \xrightarrow[n \to +\infty]{} +\infty\) and \(\abs{\cdot}\) has no finite upper bound.
\end{itemize}

Furthermore, we define the \emph{trivial absolute value} as the function that maps \(0\) to \(0\) and any other element to \(1\).

We will need one extra lemma for the core theorem, stating that an absolute value is bounded over \(\N\) if and only if it is non-Archimedean:

\begin{lstlisting}[label={contrib:nonArchimedean_iff_integers_bounded}]
theorem nonArchimedean_iff_integers_bounded
{α} [comm_ring α] [nontrivial α]
(abv: α → ℝ) [is_absolute_value abv]:
(∃ C: ℝ, 0 < C ∧ ∀ n: ℕ, abv n ≤ C) ↔
(∀ a b: α, abv (a + b) ≤ max (abv a) (abv b))
\end{lstlisting}

Proving this lemma turned out to be challenging: on paper, it takes at most a dozen lines, but the formalization took around 200 lines. The reasons are the same as in section \ref{sec:first_attempt}. We have isolated a corner of the theory where calculus cannot be avoided, as if we had moved the problem that lay in section \ref{sec:first_attempt} from one place to another. As future work, these lines would greatly benefit from new calculus tactics.

The main theorem is \texttt{abv\_bounded\_padic}, which states that a non-trivial bounded absolute value on a principal ideal domain \(R\) is equivalent to a \(p\)-adic absolute value for some prime \(p\) of \(R\).

\begin{lstlisting}[label={contrib:abv_bounded_padic}]
theorem abv_bounded_padic
{α} [integral_domain α]
[is_principal_ideal_ring α] [normalization_monoid α]
(abv: α → ℝ) [is_absolute_value abv]
(bounded: ∀ a: α, abv a ≤ 1)
(nontrivial: ∃ a: α, a ≠ 0 ∧ abv a ≠ 1):
∃ (p: α) (p_prime: prime p),
abvs_equiv abv (sample_padic_abv p p_prime)
\end{lstlisting}

The typeclasses \lstinline{[integral_domain α]} and \lstinline{[is_principal_ideal_ring α]} ensure that \(\alpha\) is a principal ideal domain (PID). \lstinline{[normalization_monoid α]} means that the elements of \(\alpha\) admit a normal form (say, in \(\Z\), the positive integers, and in \(\K[X]\), the monics). This is required by some of the lemmas we use, but can be omitted for the scope of this paper. \texttt{abvs\_equiv} is the relation of equivalence between absolute values. \texttt{sample\_padic\_abv p p\_prime} is a \(p\)-adic absolute value (\texttt{p\_prime} is a proof that \(p\) is indeed prime).
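For reference, \texttt{abvs\_equiv} directly transcribes Definition \ref{def:abv_equiv}; a minimal sketch (field names and exact coercions may differ from the actual development) would be:

\begin{lstlisting}
-- Minimal sketch of the equivalence of absolute values;
-- `^` is real exponentiation. Details may differ from our code.
def abvs_equiv {α} [ring α] (abv₁ abv₂ : α → ℝ) : Prop :=
∃ c : ℝ, 0 < c ∧ ∀ x : α, (abv₁ x) ^ c = abv₂ x
\end{lstlisting}

Similarly, \texttt{sample\_padic\_abv p p\_prime} can be thought of as the map sending \(x\) to \(r^{\textrm{v}_p(x)}\) for some fixed \(0 < r < 1\); over \(\Z\), choosing \(r = 1/p\) recovers Definition \ref{def:padic_abv}.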
Keeping in mind that, according to \lstinline{nonArchimedean_iff_integers_bounded}, \(\abs{\cdot}\) is non-Archimedean, the strategy to prove the core lemma (\texttt{abv\_bounded\_padic}) is as follows:

\begin{itemize}
\tightlist
\item
  Take \(\{ x \in R \mid \abs{x} < 1 \}\): this is a prime ideal of \(R\);
\item
  As \(R\) is a PID, there is some prime \(p \in R\) that generates the previous set;
\item
  Now, it is sufficient to prove the equivalence between \(\abs{\cdot}\) and \(\abs{\cdot}_p\) to finish;
\item
  By the primes extensionality lemma (see \ref{section:a_lemma}), it suffices to prove there is some \(\alpha > 0\) such that for every prime \(q \in R\), \(\abs{q}^{\alpha} = \abs{q}_p\);
\item
  To clear this goal, a case analysis on whether \(p\) and \(q\) are associated is enough, and it helps to find the suitable \(\alpha\) in terms of logarithms of absolute values of \(p\).
\end{itemize}

The core lemma is easy to prove as it is the result of composable and reusable lemmas; this proves our point regarding the need to find general enough abstractions so that proofs tend towards an assembly game.
\nopagebreak[4]

\hypertarget{section:a_lemma}{%
\subsection{A lemma}\label{section:a_lemma}}

We also proved a very useful extensionality lemma for morphisms of monoids with zero, of which we give the Lean statement:

\begin{lstlisting}[label={contrib:ext_hom_primes}]
theorem ext_hom_primes
{α} [comm_monoid_with_zero α] [wf_dvd_monoid α]
{β} [monoid_with_zero β]
(φ₁ φ₂: monoid_with_zero_hom α β)
(h_units: ∀ u: units α, φ₁ u = φ₂ u)
(h_irreducibles: ∀ a: α, irreducible a → φ₁ a = φ₂ a):
φ₁ = φ₂
\end{lstlisting}

\lstinline{[monoid_with_zero β]} states that \(\beta\) is a monoid that contains a ``zero'', \emph{i.e.} an absorbing element. These objects may seem peculiar to a mathematician, but are useful in the context of formalized mathematics. We will not discuss the use of ``monoids with zero'', as they are outside of the scope of this article. \lstinline{[comm_monoid_with_zero α]} further states that \(\alpha\) is commutative.

\lstinline{φ : monoid_with_zero_hom α β} states that \(\varphi\) is a homomorphism of monoids with zero with source \(\alpha\) and target \(\beta\).

\lstinline{[wf_dvd_monoid α]} states that divisibility on \(\alpha\) is a well-founded order. This is key to the lemma: we only need to proceed by induction. This makes the lemma apply well to principal ideal domains, because divisibility in such rings is well-founded.

Mathematically, this lemma states that if

\begin{itemize}
\tightlist
\item
  \(R\) is a principal ideal domain
\item
  Two multiplicative functions agree on the units of \(R\) and on its primes
\end{itemize}

Then, they coincide everywhere. This nontrivial lemma may be useful to anyone working with multiplicative functions and was added to mathlib \autocite{The_mathlib_Community_2020}. We therefore fulfilled one of our two goals: formalizing mathematics which may be useful to future users.

We recast the problem into statements about multiplicative functions, but still had to express our original statement, which has a valuation flavor, in these terms. Note that valuations and multiplicative functions (actually, homomorphisms) are unfortunately very different objects in Lean: the former are just functions that are refined using a typeclass, while the latter are \emph{structures} (in a nutshell, tuples containing objects and proofs). This implies that switching from the valuation point of view to homomorphisms and back is cumbersome.
Boilerplate of this kind is what we had to write to bridge the gap. As future work, it might be possible to automate the process of switching points of view on such objects, most likely through meta-programming \autocite{commelin2021witt}.

\hypertarget{application-ostrowski-on-mathbbq}{%
\subsection{\texorpdfstring{Application: Ostrowski on \(\mathbb{Q}\)}{Application: Ostrowski on \textbackslash mathbb\{Q\}}}\label{application-ostrowski-on-mathbbq}}
Once the core lemmas are laid out, Ostrowski's theorem on \(\Z\) is almost immediate. Obtaining it over \(\Q\) requires extending absolute values to the entire field. In theory, this is also almost immediate because of the multiplicative property of absolute values. In practice, some manual work remained to lift results from \(\Z\) to \(\Q\); this is not a failure of our goal of pursuing a general enough theory but rather, we believe, a lack of automation in the proof assistant which could be alleviated by meta-programming. That being said, we did not pursue this avenue to test our hypothesis, and will discuss it later.

\hypertarget{application-ostrowski-on-mathbbfx}{%
\subsection{\texorpdfstring{Application: Ostrowski on \(\mathbb{F}[X]\)}{Application: Ostrowski on \textbackslash mathbb\{F\}{[}X{]}}}\label{application-ostrowski-on-mathbbfx}}
We proved a statement that is slightly less powerful in spirit, in that it does not cover \emph{all} possible absolute values, but only those that are trivial on \(\mathbb{F}\).
\begin{theorem}[Ostrowski variant] \label{contrib:ostrowski_variant}
Let $\abs{\cdot}$ be an absolute value on $\F[X]$ that is trivial on $\F$. Exactly one of the following is true:
\begin{itemize}
\item $\abs{\cdot}$ is bounded and, for some prime $p \in \F[X]$, $\abs{\cdot} = \abs{\cdot}_p$.
\item $\abs{\cdot}$ is equivalent to the absolute value induced by the degree.
\end{itemize}
\end{theorem}
Confirming our intuition, both cases were straightforward to handle by reusing the tools of section \ref{section:core_theory}.

\hypertarget{conclusion}{%
\section{\texorpdfstring{Conclusion \label{sec:conclusion}}{Conclusion }}\label{conclusion}}
\vspace*{-0.2em}
\hypertarget{results}{%
\subsection{Results}\label{results}}
We wanted to examine the difficulties of formalizing Ostrowski's theorem, which constitutes the first step towards a formalization of Berkovich space theory. With a brute-force method, we encountered many tedious computations of an analytical nature, which led us to hide all the complexity inside algebra, where it was easier to handle in the proof assistant. The second part presents an approach which worked effectively, gave us more theorems with less effort, and provided us with insights on how to pursue the generalization. Nevertheless, this suggests that calculus and analysis might benefit from a dedicated framework easing their manipulation: non-standard analysis seems a promising avenue, already explored in Isabelle/HOL \autocite{fleuriot2000mechanization}; the Lean theorem prover, in its version 4, might also help with its treatment of coercions and its performance improvements \autocite{Lean4_2021}.
\vspace*{-0.2em}
\hypertarget{outlook}{%
\subsection{Outlook}\label{outlook}}
We observe that formalization is not only a process that helps verify a proof: it also helps to understand and gain insights into the results surrounding a theory, and it sustains improvements to the system being used beyond classical computer science aspects like performance or user experience.
In particular, we identified pain points in using the Lean theorem prover which constitute interesting future work, namely automation to:
\begin{itemize}
\tightlist
\item combine analysis with reasoning on equalities and inequalities, e.g.~taking limits on one or both sides,
\item bridge points of view or even theories, \emph{e.g.} the earlier discussion on valuations and homomorphisms.
\end{itemize}
Despite these bothersome points, we found that adopting a synthetic approach spares us most of the hardships that we encountered with the analytic approach. Finally, now that Ostrowski's theorems are formalized, it is possible to produce the basic objects of Berkovich space theory, notably the Berkovich spectrum, and to give a non-trivial example: \(\mathcal{M}(\Z)\).
\vspace*{-0.5em}
\printbibliography[title=References]
\end{document}
{ "alphanum_fraction": 0.7518494577, "avg_line_length": 38.9454545455, "ext": "tex", "hexsha": "60fcc13353a39dbbe3eb4e5da3b6965800b9ab27", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0a49f75a599bcb20333ec86b301f84411f04f7cf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RaitoBezarius/berkovich-spaces", "max_forks_repo_path": "docs/paper/main.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0a49f75a599bcb20333ec86b301f84411f04f7cf", "max_issues_repo_issues_event_max_datetime": "2021-08-18T18:41:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-08-18T18:41:09.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RaitoBezarius/berkovich-spaces", "max_issues_repo_path": "docs/paper/main.tex", "max_line_length": 167, "max_stars_count": 4, "max_stars_repo_head_hexsha": "0a49f75a599bcb20333ec86b301f84411f04f7cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RaitoBezarius/berkovich-spaces", "max_stars_repo_path": "docs/paper/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-19T02:14:49.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-18T20:03:23.000Z", "num_tokens": 7933, "size": 27846 }
%!TEX root=report.tex
\subsection{Gaussian Mixture Model (GMM)}
The GMM is a more advanced clustering model than K-means. Its main advantage is that it allows for hyper-elliptical clusters: it uses Gaussian kernels whose shape is described by a covariance matrix. A result similar to K-means could be obtained by forcing this covariance matrix to be the identity matrix. Restrictions on the covariance matrix (e.g.\ shared covariance, diagonal covariance, spherical covariance) can easily be applied in a GMM and are quite common. In this analysis, however, no covariance restrictions will be used.

In the GMM, the assumption is that the data comes from a single density function. This density function is assumed to be a combination (mixture) of $K$ Gaussian PDFs, where $K$ is finite and denotes the number of mixture components (i.e.\ clusters). Each mixture component has a centroid (the mean), a covariance matrix and a mixing weight. The mixing weights have to sum to one across components for the GMM to constitute an actual PDF; the full density is written out at the end of this subsection. Several methods exist to estimate the model parameters. The most common is the expectation maximization (EM) algorithm, which is quite complex and thus won't be described here; we refer the curious reader to \cite[p.~214,272,463]{statistical-learning}.

In practice, if $K$ is large and the vector space $X$ is high-dimensional, estimating the model parameters takes too much computing power, and even if the parameters can be made to converge, the degrees of freedom will be low. In our case the input space would be 341-dimensional (341 observations in time per location), and thus a dimensionality reduction of some kind is needed.

\subsubsection{Dimensionality reduction}
In many cases when dealing with high-dimensional data, most of the data lies on a lower-dimensional manifold. Different methods exist to identify such manifolds; in this analysis, the previously described PCA technique will be used. This is done by selecting only the most important principal components, thus forcing the data onto a lower-dimensional manifold. The GRACE data contains quite a bit of noise, and one might hope that the noise is primarily contained in its own principal components, which would then account for only a small amount of the variance in the data. Thus, when selecting only the most significant PCs, some of the noise will be ``lost''. It should be noted that the standard PCA method was not used; instead, a more complex method called kernel PCA is used. Combining kernel PCA with the more flexible GMM will hopefully lead to better clustering than K-means alone.
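For completeness, the mixture density assumed by the GMM described above can be written out explicitly (the notation here is ours, not the report's):
\begin{equation*}
p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \sum_{k=1}^{K} \pi_k = 1, \quad \pi_k \geq 0,
\end{equation*}
where $\mu_k$, $\Sigma_k$ and $\pi_k$ are the mean, covariance matrix and mixing weight of component $k$, and $\mathcal{N}(\cdot \mid \mu, \Sigma)$ is the multivariate Gaussian PDF. Forcing all $\Sigma_k$ to be the identity matrix recovers the K-means-like behaviour mentioned above.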
{ "alphanum_fraction": 0.8033771107, "avg_line_length": 98.7037037037, "ext": "tex", "hexsha": "cc7ac0ebc07f6f632a410e3df14efb94850b5aab", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AndreasMadsen/grace", "max_forks_repo_path": "Rapport/theory-gmm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AndreasMadsen/grace", "max_issues_repo_path": "Rapport/theory-gmm.tex", "max_line_length": 270, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bf472d30a2fac76145d3f68e819c92da4a1970ba", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AndreasMadsen/grace", "max_stars_repo_path": "Rapport/theory-gmm.tex", "max_stars_repo_stars_event_max_datetime": "2016-05-17T22:52:19.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-17T22:52:19.000Z", "num_tokens": 544, "size": 2665 }