%!TEX program = xelatex
\documentclass[a4paper,12pt]{article}
\usepackage{polyglossia}
\setdefaultlanguage{english}
\usepackage[backend=biber,style=mla,noremoteinfo=false]{biblatex}
\usepackage[autostyle]{csquotes}
\usepackage{fontspec}
\usepackage{hyperref}
\usepackage{setspace}
\usepackage{titlesec}
\renewcommand{\thesection}{\Roman{section}}
\newcommand{\HRule}{\rule{\linewidth}{0.5mm}}
\newcommand{\sectionbreak}{\clearpage}
\doublespacing
\addbibresource{./Single_Party_States.bib}
\begin{document}
\input{./Single_Party_States.title.tex}
%--------------------------------------------------------------------------------------------------%
The early $20$th century was a period of upheaval in Russia: the First World War had left millions of Russians dead and the national pride scarred in defeat; the second industrial revolution had built a large industrial capacity, but the devastation of the war left Russia's manufacturing sector in a shambles \cite[39]{danielsr}; and the people of Russia, most of whom were peasants, were expressing their disdain for the Romanov dynasty, the family at the head of Russia's Tsarist government. The peasant class was angry at the government's neglect and abuse: the state practiced progressive terrorization and enserfment of the peasantry while ignoring its conditions, and shortages were rampant amid growing unrest. Vladimir Lenin, the leader of the Bolshevik Party, spoke of creating a new socialist state and removing the incompetent government: a call for revolution. The October Revolution in November of 1917---also known as the Bolshevik Revolution---was the culmination of the frustrations of the people of Russia, and thrust Stalin into the political scene as Lenin's protégé. Stalin used his influence as Lenin's direct subordinate to slowly build his network of political allies, all the while weeding out his enemies, until he was able to consolidate political power. After gaining power, Stalin only followed his declared ideology---Marxism-Leninism---to a moderate extent: while he did nationalize industry and implement some level of redistribution, he never created a socialist state, instead heading a centralized government.
%--------------------------------------------------------------------------------------------------%
\section{Origins}
The origins of Stalin's single-party state stem from the period two decades before the Bolshevik Revolution. While there is no single universally accepted cause of the Bolshevik Revolution, it is generally agreed that it was a product of previous trends rather than a spontaneous event \cite[331]{danielsr}. The rapid industrialization of Russia's economy through the late $19$th and into the early $20$th century highlighted the corresponding stagnancy of the Russian political system. Mistreatment of the growing permanent working class---composed primarily of extorted, abused serfs---incited more and more strikes and mutinies, paralyzing the government \cite[32]{pitirims}. The Russian Empire was composed of institutions that were increasingly obsolete in the new century, and was ill-suited to deal with a changing political and economic climate \cite[18]{pitirims}. Lenin led the Russian Social-Democratic Labor Party, a Marxist group that was among the many entities disillusioned with the current government.
The creation of the Bolshevik Party came around 1903, when Lenin's party fractured into the Bolsheviks---those who wanted revolution immediately---and the Mensheviks---those who wanted to wait longer before the revolution \cite[37]{pitirims}. Stalin, attracted by Lenin's call for revolution and vision of a Marxist state, joined the Bolsheviks \cite[54]{servicer}. Stalin became known for his crude, simple and yet pragmatic approach to politics, recognizing, like Lenin, the importance of propaganda and organization. Though he and Lenin did not agree on everything, Lenin promoted the still obscure Stalin in the Bolshevik Party, taking him under his patronage and advancing his career \cite[77,124]{servicer}. During World War I, the resentment harbored by the people began to boil over. Bloody Sunday, the humiliation of the Russo-Japanese War, and now the destruction of World War I all contributed to a common feeling that the Romanov government was inadequate \cite[18,26,32]{pitirims}. The government's credibility was further compromised by the Rasputin scandal, which disintegrated the government from the inside and caused the State Duma---the legislative assembly of the Russian Empire---to create a provisional government controlled by the Bolsheviks' rivals, the Mensheviks \cite[44]{pitirims}. The State Duma was already viewed as a travesty of an institution by the people; the provisional government it created was not viewed any better, and was seen as an incompetent institution (\cites[338]{danielsr}[32]{kuromiyah}). Lenin played upon the mood of the masses to further antagonize the people, not afraid of inciting violence in the revolution \cite[335]{danielsr}. The October Revolution took place in early November. By its end, the Preparliament and the Constituent Assembly were dismantled, and Lenin's Bolshevik party was in complete power \cite[46-47]{basilj}. More importantly, however, Stalin was appointed a People's Commissar, publicly creating a direct association between himself and Lenin and opening for Stalin the pathway to consolidating influence and power in the new government \cite[124]{servicer}.
%--------------------------------------------------------------------------------------------------%
\section{Establishment}
What Stalin was able to do with his newfound political influence as a People's Commissar---and, more importantly, Lenin's direct subordinate---was build up a following in the Bolshevik Party and project policies in the interests of wide circles of the party \cite[3]{rigbyt}. Stalin was skilled in political tactics, manipulation, and bargaining, which frequently allowed him to get others to do what he wanted (\cites[3]{rigbyt}[4]{carre}); Stalin's ``human touch'' in negotiations misled others into trusting him, after which he would use them for his own gain \cite[718]{kuromiyah}. As Stalin consolidated more political power, he was able to be more vocal about his ideology; he was most serious about his support of Marxism-Leninism \cite[27]{reee}. This does not mean he was incapable of change, however. Stalin constantly adjusted his ideology to his surroundings, making sure that while he maintained a Marxist ideology, he also remained flexible in changing times \cite[720]{kuromiyah}. Stalin was, for example, an orthodox Marxist in his belief that nothing good could come from a bourgeois state \cite[29]{reee}.
However, he did inject ``Russian tradition'' into his version of Marxism-Leninism, centralizing it slightly to concur with the Russian ``cult of personality'' (\cites[23]{reee}[32]{hingleyr}). Even before he was the leader of his single-party state, he was building his ``cult of personality'' so as to gain influence over the public as well as the government. This maneuvering set the stage for his eventual charge to power. When Lenin suffered a stroke in 1922, his position of leadership in the Bolshevik Party was effectively terminated. At this point, Stalin was a member of the Politburo---a group of six people that carried out many of the executive decisions in the government---and, more importantly in this case, he was the General Secretary of the party; with these credentials Stalin now had---for the most part---the ability to exert influence without the support of Lenin, which Lenin recognized as the source of Stalin's growing power \cite[160,167]{kortm}. Lenin had originally tapped Trotsky as his preferred successor, but Trotsky did not place his supporters in the party \cite[167]{kortm}. Stalin used this opportunity to fill the party with his own clients; patronage was, after all, common in the Soviet government \cite[4-5]{rigbyt}. After filling the Bolshevik Party with his supporters, all that remained for Stalin to do to consolidate power was to remove political enemies. Stalin's greatest political enemy at this time was---arguably---Leon Trotsky \cite[167]{kortm}. In 1927, Stalin used his supporters in the congress to remove Trotsky from the Bolshevik Party and exile him from the country, removing his greatest political enemy and leaving himself at the top of the Soviet Union.
%--------------------------------------------------------------------------------------------------%
\section{Rule}
Stalin, after gaining power, followed only parts of his ideology. One of the first things Stalin did on gaining power was to launch the Five Year Plan, a plan that would end private ownership of all land \cite[51]{basilj}. Having the government take control of all private land is, essentially, a Marxist economic decision. Stalin was also actively nationalizing industries, again a Marxist economic decision \cite[44]{remingtont}. Stalin's decisions while in power are, at face value, consistent with his Marxist ideology. However, Stalin's decisions were not completely ``Marxist'', nor was he setting up a pure Marxist government; on the contrary, Stalin's centralized rule was against the basic tenets of Marxism-Leninism \cite[49]{remingtont}. Events like the Great Terror highlight how Stalin ruled; in response to fear of opposition within his party, Stalin launched the Great Terror: a campaign to forcibly remove political dissidents by exiling or killing them \cite[716]{kuromiyah}. What Stalin accomplished with the Great Terror was an increase in his political power through the removal of political opponents, as well as an increase in his public influence. Stalin heavily used terror tactics to keep the public---or, more specifically, dissidents---in check; the amount of terror in society was an accurate measure of the political and social malleability of the population \cite[386]{conquestr}. The reason why Stalin used terror tactics so heavily can be traced back to the tsars: most had an explosive temper and doled out harsh punishments to deter further deviance \cite[64]{hingleyr}. Stalin admired the heavy-handed policies of the tsars, and those policies, evidently, made their way into his rule \cite[26]{remingtont}.
Stalin's relationship with the economy was more closely aligned with his Marxist ideology. His economic policies were essentially centred around the belief that state direction of the economy was a necessary social development \cite[50]{remingtont}. Stalin nationalized land, nationalized industry, and collectivized agriculture \cite[50]{remingtont}. His belief that the economy should eventually be run by the armed proletariat, though not fully realized, was a basic statement of Marxist economic policy \cite[29]{reee}. Publicly, at least, Stalin focused heavily on the ``distribution'' of the economy---as in, the distribution of control---while in power. In the background, however, he sometimes made economic decisions that pursued progress at all costs, human or otherwise. Stalin's labor conscription in the name of industrializing the state was not an acceptable Marxist policy, yet he implemented it anyway \cite[49]{remingtont}. Ultimately, Stalin's economic policy can be described as a mixture of his ideology and his desire for economic progress. With regards to the public---or, more accurately, the public's social space---Stalin's actions were not entirely consistent with his Marxist ideology. Stalin implemented heavy censorship laws as single-party leader; all newspapers were taken under the control of the Bolshevik government, even if they were not opposing the government in any way \cite[47-49]{basilj}. Later, even newspapers that were supportive of the government were taken under the control of the state, censored, or propagandized \cite[49]{basilj}. Through censorship and propaganda, Stalin was able to hone his ``cult of personality'', elevating his image in the eyes of the public \cite[718]{kuromiyah}. Stalin used his ``cult of personality'' to control the public, even though elevating the image of an individual is not permissible in Marxism-Leninism \cite[718]{kuromiyah}. Essentially, with regards to the public, Stalin's rule meant that information was changed or lost for the purpose of further centralizing Stalin's state.
%--------------------------------------------------------------------------------------------------%
\newpage
After gaining power, Stalin only partially followed the aims of his declared ideology. While he did carry out many of the nationalization and collectivization tasks that his ideology demanded, his use of terror to control the public as well as the extensive promotion of his public image go against his declared ideology. This is partially due to the fact that Stalin's ideology---and, through that, his rule---was very much influenced by the example set by the tsars and traditional Russian culture. Concepts like the ``cult of personality'' were traits seen even in the first tsars. Policies like the Great Terror were also reminiscent of the punishments given out to dissidents by the tsars. As such, the best way to describe Stalin's actual ideology would be a kind of ``Russianized'' Marxism-Leninism; it takes the economic policies of Marxism-Leninism and melds them with the political and social policies of the tsars. One thing to note, however, is that Stalin did not always follow his ideology with respect to the economy, either. Conscripted labor is not a desired economic policy in Marxism-Leninism; it is what Stalin believed he had to do to industrialize the Soviet Union.
Ultimately, Stalin was closest to his declared ideology in economic policy; all in all, however, Stalin only followed his declared political ideology---Marxism-Leninism---to a moderate extent.
%--------------------------------------------------------------------------------------------------%
\printbibliography
\end{document}
{ "alphanum_fraction": 0.7126571559, "avg_line_length": 72.4103773585, "ext": "tex", "hexsha": "189f5d042167dc4d7be3305ebc299329f4fc774d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "QuantumPhi/school", "max_forks_repo_path": "2014-2015/History/Single_Party_States.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_issues_repo_issues_event_max_datetime": "2015-04-10T07:30:10.000Z", "max_issues_repo_issues_event_min_datetime": "2015-04-10T07:28:17.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "QuantumPhi/school", "max_issues_repo_path": "2014-2015/History/Single_Party_States.tex", "max_line_length": 104, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "QuantumPhi/school", "max_stars_repo_path": "2014-2015/History/Single_Party_States.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3221, "size": 15351 }
\section{Adding Type Parameters to the Actor Library}

This section examines actor programming in the Akka library and how type parameters are added to actor-related classes to improve type safety. We show that supervision trees in TAkka are constructed in the same way as in Akka. The section concludes with a brief discussion of alternative designs used by Akka and other actor libraries.

\mycomment{Explain Actor Programming. Compare Akka and TAkka side by side.}

\subsection{The Actor Class}

An Actor has four important fields, given in Table~\ref{actor_api}: (i) a receive function that defines its reaction to incoming messages, (ii) an actor reference pointing to itself, (iii) the actor context representing the outside world of the actor, and (iv) the supervisor strategy for its children. A TAkka {\bf Actor} has equivalent fields, as shown in Table~\ref{actor_api}. Unlike other actor libraries, every TAkka actor class takes a type parameter {\bf M} which specifies the type of messages it expects to receive. The same type parameter is used as the input type of the receive function, the type parameter of the actor context, and the type parameter of the actor reference pointing to itself. We introduce a new field $mt$ to inform the compiler that the type parameter of the {\bf Actor} class should be recorded.

\begin{table}[h]
\label{actor_api}
\caption{Akka and TAkka Actor}
% \begin{adjustwidth}{-1.8cm}{}
\begin{tabular}{ l l }
\begin{lstlisting}[language=scala]
package akka.actor

trait Actor {
  def receive:Any=>Unit
  val self:ActorRef
  val context:ActorContext
  var supervisorStrategy:
    SupervisorStrategy
}
\end{lstlisting}
&
\begin{lstlisting}[language=scala]
package takka.actor

trait Actor[M] {
  implicit val mt : TypeTag[M]
  def typedReceive:M=>Unit
  val typedSelf:ActorRef[M]
  val typedContext:ActorContext[M]
  var supervisorStrategy:
    SupervisorStrategy
}
\end{lstlisting}
\end{tabular}
% \end{adjustwidth}
\end{table}

The three immutable fields of a TAkka {\bf Actor}---{\it mt}, {\it typedContext}, and {\it typedSelf}---are initialised automatically when the actor is created. Application developers may override the default supervisor strategy in the way explained in \S\ref{supervision}. The implementation of the {\it typedReceive} method, on the other hand, is left to developers.

\subsection{Actor Reference}
\label{actor_ref}

A reference pointing to an actor of type {\bf Actor[M]} has type {\bf ActorRef[M]}. It provides a {\it !} method, through which users can send a message of type {\bf M} to the referenced actor. Sending an actor reference a message whose type is not the expected type raises a compile-time error. By using type-parametrized actor references, the receiver does not need to worry about unexpected messages, while senders can be sure that messages will be understood and processed, as long as they are delivered. In a type system which supports polymorphism, {\bf ActorRef} should be contravariant. We further provide a {\it publishAs} method which type-safely casts an actor reference to a version of its supertype, i.e., an {\bf ActorRef[SubM]} for some {\bf SubM <: M}. In other words, the output actor reference accepts only part of the messages that are accepted by the input actor reference. The ability to publish partial services makes TAkka a good tool for solving security problems such as the type pollution problem described in \S\ref{type_pollution}.
\begin{table}[h]
\label{ActorRef}
\begin{tabular}{ l l }
\begin{lstlisting}[language=scala]
abstract class ActorRef {
  def !(message: Any):Unit
}
\end{lstlisting}
&
\begin{lstlisting}[language=scala]
abstract class ActorRef[-M]
    (implicit mt:TypeTag[M]) {
  def !(message: M):Unit
  def publishAs[SubM<:M]
    (implicit mt:TypeTag[SubM]):ActorRef[SubM]
}
\end{lstlisting}
\end{tabular}
\caption{Actor Reference}
\end{table}

The code below defines and uses a string-processing actor in Akka and in TAkka. The receive function of an Akka actor has type {\bf Any$\Rightarrow$Unit}, but the defined actor only intends to process strings. Both examples create an actor inside an actor system and return a reference pointing to that actor. The same string message is sent to the actor in both examples and processed in the way defined in the receive function. Sending an integer message, which neither actor expects, is nevertheless permitted in the Akka version but rejected by the TAkka version at compile time.

\begin{table}[h]
\label{StringProcessor}
\begin{adjustwidth}{-0.8cm}{}
\begin{tabular}{ l l }
\begin{lstlisting}[language=scala]
class MyActor extends Actor {
  def receive:Any => Unit = {
    case m:String =>
      println("Received message: " + m)
  }
}

val system = ActorSystem("MySystem")
val actorRef:ActorRef =
  system.actorOf(Props[MyActor])

actorRef ! "Hello World!"
actorRef ! 3
/*
Terminal output:
  Received message: Hello World!
*/
\end{lstlisting}
&
\begin{lstlisting}[language=scala]
class MyActor extends Actor[String] {
  def typedReceive:String=>Unit = {
    case m:String =>
      println("Received message: " + m)
  }
}

val system = ActorSystem("MySystem")
val actorRef:ActorRef[String] =
  system.actorOf(Props[String, MyActor])

actorRef ! "Hello World!"
// actorRef ! 3
// compile error: type mismatch;
//   found: Int(3)  required: String
/*
Terminal output:
  Received message: Hello World!
*/
\end{lstlisting}
\end{tabular}
\end{adjustwidth}
\caption{A String-Processing Actor}
\end{table}

Undefined messages are treated differently in different actor libraries. In Erlang, an actor keeps undefined messages in its mailbox and attempts to process them again when a new message handler is in use. In Akka versions prior to 2.0, an actor raises an exception when it processes an undefined message. In recent Akka versions, an undefined message is discarded by the actor and an {\bf UnhandledMessage} event is pushed to the event stream of the actor system. Other actors may subscribe to the event stream, to which all events are published. In the example code no subscriber of the event stream is defined, so the integer message is simply discarded.

\subsection{Props and Actor Context}
\label{actor_context}

\mycomment{Explain Akka version and TAkka version at the same time}

A Props is the configuration of actor creation. A Props of type {\bf Props[M]} specifies how to create an actor of type {\bf Actor[M]} and returns an actor reference of type {\bf ActorRef[M]}.
A Props should be created by one of the following APIs, where {\bf MyActor} must be a subtype of {\bf Actor[M]}:

\begin{table}[h]
\label{Props}
\begin{adjustwidth}{-1.2cm}{}
\begin{tabular}{ l l }
\begin{lstlisting}[language=scala]
val props:Props = Props[MyActor]
val props:Props = Props(new MyActor)
val props:Props = Props(myActor.getClass)
\end{lstlisting}
&
\begin{lstlisting}[language=scala]
val props:Props[M] = Props[M, MyActor]
val props:Props[M] = Props[M](new MyActor)
val props:Props[M] = Props[M](myActor.getClass)
\end{lstlisting}
\end{tabular}
\end{adjustwidth}
\caption{Props Creation API}
\end{table}

In contrast to an actor reference, an actor context describes the actor's view of the outside world. For security reasons, the actor context is only available inside the actor definition. By using the APIs in Table~\ref{ActorContext}, an actor can (i) retrieve an actor reference according to a given actor path, (ii) create a child actor with a system-generated or user-specified name, (iii) set a timeout within which a new message shall be received, and (iv) update its behaviour.

The {\it actorFor} method in the {\bf ActorContext} class returns an actor reference of the desired type if the actor located at the specified actor path has a compatible type. To implement the {\it actorFor} method, we would like a more general mechanism that returns a value of the desired type when the corresponding key is given. To this end, we designed and implemented a typed name server, which is explained in \S\ref{nameserver}. The {\it become} method enables hot code swap of the behaviour of an actor. The {\it become} method in TAkka differs from code swap in Akka in two aspects. Firstly, the supervision strategy can be updated as well. Secondly, the new receive function must be more general than the previous version; as a result, no stack of receive functions is required. Interestingly, the implementation of {\it become} involves a mix of static and dynamic type checks. Details of the implementation are discussed in \S\ref{code_evolution}.

\begin{table}[h]
\label{ActorContext}
\begin{adjustwidth}{-1.3cm}{}
\begin{tabular}{ l l }
\begin{lstlisting}[language=scala]
trait ActorContext {
  def actorFor(actorPath:String):ActorRef
  def actorOf(props: Props):ActorRef
  def actorOf(props: Props, name: String):
    ActorRef
  def setReceiveTimeout
    (timeout: Duration): Unit
  def become(
    newReceive: Any => Unit
  ):ActorRef
}
\end{lstlisting}
&
\begin{lstlisting}[language=scala]
trait ActorContext[M] {
  def actorFor[Msg](actorPath: String)
    (implicit mt: TypeTag[Msg]): ActorRef[Msg]
  def actorOf[Msg](props: Props[Msg])
    (implicit mt: TypeTag[Msg]): ActorRef[Msg]
  def actorOf[Msg](props: Props[Msg], name: String)
    (implicit mt: TypeTag[Msg]): ActorRef[Msg]
  def setReceiveTimeout(timeout: Duration): Unit
  def become[SupM >: M](
    newTypedReceive: SupM => Unit,
    newSystemMessageHandler: SystemMessage => Unit,
    newSupervisionStrategy:SupervisionStrategy
  )(implicit smt:TypeTag[SupM]):ActorRef[SupM]
}
\end{lstlisting}
\end{tabular}
\end{adjustwidth}
\caption{Actor Context}
\end{table}

\subsection{Supervision Strategies}
\label{supervision}

There are three supervision strategies defined in Erlang/OTP: one-for-one, one-for-all, and rest-for-one \cite{OTP}. If a supervisor adopts the one-for-one supervision strategy, a child will be restarted when it fails. If a supervisor adopts the one-for-all supervision strategy, all children will be restarted when any of them fails.
In Erlang/OTP, children are started in a user-specified order. If a supervisor adopts the rest-for-one supervision strategy, all children started after the failed child will be restarted. For each supervision strategy, users can further specify the maximum number of restarts of any child within a period.

The Akka library only considers the one-for-one strategy and the one-for-all strategy. The rest-for-one strategy is not considered because children are not created according to a user-specified order. The default supervision strategy is a one-for-one strategy that permits unlimited restarts. Users can define their own supervision strategy by using the APIs given in Figure~\ref{super}. {\bf OneForOne} corresponds to the one-for-one strategy in Erlang, whereas {\bf OneForAll} corresponds to the one-for-all strategy in Erlang. Both strategies are constructed by providing the required parameters. {\bf Directive} is an enumerable type whose value is {\bf Escalate}, {\bf Restart}, {\bf Resume}, or {\bf Stop}. Notice that neither supervision strategy requires any type-parametrized class. Therefore, both supervision strategies are constructed in TAkka in the same way as in Akka.

\begin{figure}[h]
\label{super}
\begin{lstlisting}
abstract class SupervisorStrategy

case class OneForOne(restart:Int, time:Duration)
    (decider: Throwable => Directive)
  extends SupervisorStrategy

case class OneForAll(restart:Int, time:Duration)
    (decider: Throwable => Directive)
  extends SupervisorStrategy
\end{lstlisting}
\caption{Supervision Strategies}
\end{figure}

\subsection{Handling System Messages}

Actors communicate with each other by sending messages. To maintain a supervision tree, for example to monitor and control the liveness of actors, a special category of messages must be handled by all actors. We define a trait {\bf SystemMessage} to be the supertype of all messages used for system maintenance purposes. Based on the design of Erlang and Akka, we consider that the following messages should be included as system messages:

\begin{itemize}
  \item {\bf ChildTerminated(child: ActorRef[M])}: a message sent from a child actor to its supervisor before it terminates.
  \item {\bf Kill}: a message sent from a supervisor to its child.
  \item {\bf Restart}: a message sent from a supervisor to its terminated child asking the child to restart.
  \item {\bf ReceiveTimeout}: a message sent from an actor to itself when it has not received any message within a timeout period.
\end{itemize}

An open question is which system messages should be allowed to be handled by users. In Erlang and early Akka versions, all system messages can be explicitly handled by users in the {\it receive} block. In recent Akka versions, the view is that some system messages should be handled by the library implementation rather than by library users. Since there are only two supervision strategies, both of which have clearly defined operational behaviours, all messages related to the liveness of actors are handled inside the TAkka library implementation. General library users may indirectly affect the system message handler by specifying the supervision strategy. In contrast, messages related to the behaviour of an actor, e.g. {\bf ReceiveTimeout}, are better handled by application developers. In TAkka, {\bf ReceiveTimeout} is the only system message that can be explicitly handled by users.
\subsection{Alternative Designs}
\label{alternative designs}

\paragraph{Akka Typed Actor} In the Akka library, there is a special class called {\bf TypedActor}, which contains an internal actor and can be supervised. Users of a typed actor invoke a service by calling a method instead of sending a message. The typed actor prevents some type errors but has two limitations. For one thing, a typed actor does not permit code evolution. For another, avoiding type pollution when using typed actors would be as awkward as using a plain object-oriented model.

\paragraph{Actors with or without Mutable States} The actor model formalised by Hewitt et al. \cite{Hewitt:1973} does not specify its implementation strategy. In Erlang, a functional programming language, an actor does not have mutable state. In Scala, an object-oriented programming language, an actor may have mutable state. The TAkka library is built on top of Akka and implemented in Scala as well. As a result, TAkka does not prevent users from defining actors with mutable state. Nevertheless, the library designers would like to encourage using actors in a functional style, because actors with mutable state are difficult to synchronize in a cluster environment. In a cluster, resources are replicated at different locations to provide efficient fault-tolerant services. As the CAP theorem states \cite{CAP}, it is impossible to achieve consistency, availability, and partition tolerance simultaneously in a distributed system. For actors without mutable state, system providers do not need to worry about consistency. For actors that contain mutable state, system providers have to either sacrifice availability or partition tolerance, or modify the consistency model. For example, Akka actors may have mutable state, and Akka cluster employs an eventual consistency model \cite{Kuhn12}.

\paragraph{Linked Actors} As an alternative to forming supervision trees, the reliability of actor-based programs can be improved by linking related actors \cite{ErlangWeb}. Linked actors are aware of the death of each other. Indeed, a supervision tree is a special structure for linking actors. For this reason, we consider actor linking a redundant design in a system where supervision is obligatory. After all, if the computation of an actor relies on the liveness of another actor, those two actors should be organised in the same logical supervision tree.
{ "alphanum_fraction": 0.7623401495, "avg_line_length": 40.1662531017, "ext": "tex", "hexsha": "87b82dd720ca758c4287c4d0a626866fd3c3d13f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_forks_repo_licenses": [ "BSD-Source-Code" ], "max_forks_repo_name": "Jiansen/TAkka", "max_forks_repo_path": "s1024484/TechReport/TAkka/actor.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-Source-Code" ], "max_issues_repo_name": "Jiansen/TAkka", "max_issues_repo_path": "s1024484/TechReport/TAkka/actor.tex", "max_line_length": 341, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_stars_repo_licenses": [ "BSD-Source-Code" ], "max_stars_repo_name": "Jiansen/TAkka", "max_stars_repo_path": "s1024484/TechReport/TAkka/actor.tex", "max_stars_repo_stars_event_max_datetime": "2019-06-27T06:36:09.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-11T14:35:53.000Z", "num_tokens": 3861, "size": 16187 }
\documentclass[../main.tex]{subfiles}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Examples
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this chapter, we mainly discuss array-based questions. We first categorize these problems into different types, and then show how each type can usually be solved and optimized with nearly the best possible efficiency.

Given an array, a subsequence is composed of elements whose subscripts are increasing in the original array. A subarray is a special case of a subsequence whose elements are contiguous. A subset contains any possible combination of the elements of the original array. For example, for the array [1, 2, 3, 4]:
\begin{lstlisting}[numbers=none]
Subsequence
[1, 3]
[1, 4]
[1, 2, 4]

Subarray
[1, 2]
[2, 3]
[2, 3, 4]

Subset includes subsets of different lengths:
length 0: []
length 1: [1], [2], [3], [4]
length 2: [1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]
\end{lstlisting}
Here, array means a one-dimensional list. Math plays an important role in array problems. The rules are as follows:
\begin{itemize}
\item Subarray: use dynamic-programming-based algorithms to reduce the brute force $O(n^3)$ to $O(n)$; two pointers for the increasing subarray; prefix sum or Kadane's algorithm, sometimes together with a hashmap, or two (or three) pointers for the maximum subarray.
\item Subsequence: use dynamic-programming-based algorithms to reduce the brute force $O(2^n)$ to $O(n^2)$, which corresponds to the sequence type of dynamic programming.
\item Duplicates: 217, 26, 27, 219, 287, 442;
\item Intersections of Two Arrays:
\end{itemize}

Before we get into solving each type of problem, we first introduce the algorithms we will need in this chapter, including two pointers (three pointers or sliding window), prefix sum, and Kadane's algorithm. Kadane's algorithm can be explained with the sequence type of dynamic programming.

% Easy problems: Duplicates: Intersection: 349. Intersection of Two Arrays; Consecutive: 485. Max Consecutive Ones
% Maximum/Minimum subarray: 718, 53. Maximum Subarray, 325. Maximum Size Subarray Sum Equals k. 209. Minimum Size Subarray Sum. Solutions: divide and conquer, special sum and hashtable, two pointers (sliding window) for minimum
% Sum of K numbers of elements: Target, return either the index or the elements (might need to avoid repetition). (2/3/4 sums)
% Partition a list into K equal parts: DP

After this chapter, we need to learn the steps to solve these problems:
\begin{enumerate}
\item Analyze the problem and categorize it. Knowing the naive solution's time complexity can help us identify the type.
\item If we cannot find what type it is, see if we can \textit{convert} it. If not, try to identify a simple version of the problem, and then upgrade the simple solution to the more complex one.
\item Solve the problem with the algorithms taught in this chapter.
\item Try to see if there are any other solutions.
% \textit{Note: If the problem is complex, try the simple version first, and then upgrade the simple solution to the complex one. e.g. (487. Max Consecutive Ones II, 485. Max Consecutive Ones)}
\item Check the special cases. (Usually very important for this type of problem.)
\end{enumerate}

% Including two pointers both from the start, or two pointers one is from the beginning and the other is from the end. Also, the sliding window, and the flexible sliding windows, also find the cycle algorithm.
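Since Kadane's algorithm and the prefix sum are referred to throughout this chapter, a minimal sketch of both is given here for reference. This is the standard textbook formulation rather than the code of any later section, and the function names are ours.
\begin{lstlisting}[language = Python]
def max_subarray_kadane(nums):
    '''Kadane's algorithm: O(n) maximum subarray sum.
    best_ending_here is the largest sum of a subarray that ends at the
    current index; a negative running sum never helps a later element,
    so the subarray is restarted whenever extending it is worse than
    starting over at the current element.'''
    best = best_ending_here = nums[0]
    for x in nums[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

def max_subarray_prefix(nums):
    '''Prefix-sum view of the same problem: the best subarray ending at
    index j has sum P[j] - min(P[i] for i < j), so we only need to track
    the minimum prefix sum seen so far.'''
    best = float('-inf')
    prefix = min_prefix = 0
    for x in nums:
        prefix += x
        best = max(best, prefix - min_prefix)
        min_prefix = min(min_prefix, prefix)
    return best

print(max_subarray_kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
print(max_subarray_prefix([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
\end{lstlisting}
Both run in $O(n)$ time and $O(1)$ extra space; the prefix-sum view is the one that generalizes to the hashmap-based solutions used later in this chapter.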
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Subarray
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Subarray}
\textit{Note: For subarrays the most important property is contiguity. Here, we definitely will not use sorting.}

Given an array of size $n$, the total number of subarrays is $\sum_{i=1}^{i=n} i = n*(n+1)/2$, which makes the time complexity of the naive solution that uses two nested for/while loops $O(n^2)$ or $O(n^3)$. There are two types of problems related to subarrays: \textbf{range query} and \textbf{optimization-based subarray}. Range query problems include querying the minimum/maximum or the sum of all elements in a given range [i, j] of an array. Range query has a more standard way to be solved, either by searching or with a segment tree:

\paragraph{Range Query}
\begin{enumerate}
\item 303. Range Sum Query - Immutable
\item 307. Range Sum Query - Mutable
\item 304. Range Sum Query 2D - Immutable
\end{enumerate}

\paragraph{Optimization-based subarray} Given a single array, we are normally asked to return either the maximum/minimum value, the maximum/minimum length, or the number of subarrays whose sum/product \textit{satisfies a certain condition}. The condition decides the difficulty of these problems. The questions can be classified into two categories:
\begin{enumerate}
\item \textit{Absolute-conditioned subarray}, where $sum/product = K$, or
\item \textit{Vague-conditioned subarray}, where the condition is an inequality (e.g. $sum \geq K$) rather than an equality.
\end{enumerate}

% \begin{enumerate}
% \item Maximum/minimum subarray with Sum or Product or a pattern; we use \textbf{math and prefix\_sum}, sometimes together with a hashmap, to tackle it. Also, a sliding window can be used.
% \item Minimum Subarray with Sum or Product or a pattern; a \textbf{sliding window} can be used to get the minimum length of subarray.
% \item Find a subarray that is increasing or decreasing; \textbf{two pointers or sliding window} can be used.
% \end{enumerate}

With the proposed algorithms, the time complexity of subarray problems can be decreased from the brute force $O(n^3)$ to $O(n)$. The brute force is universal: two nested for loops mark the start and end of the subarray to enumerate all possible subarrays, and another $O(n)$ is spent to compute the required result (the sum or product, or a check of a pattern such as increasing or decreasing).

% Using prefix sum or Kadane's algorithm or a hashmap; if we have problems solving it, a panacea is the sliding window algorithm with either two or three pointers.

As we have discussed in the algorithm section,
\begin{enumerate}
\item \textbf{stack/queue/monotone stack} can be used to solve subarray problems that relate each item to the nearest smaller/larger item on its left/right side;
\item \textbf{sliding window} can be used when the sum or product inside the window changes monotonically as the window grows or shrinks. This normally requires that the array elements are all positive or all negative, so that the sliding window covers the whole search space; otherwise we cannot use a sliding window;
\item for all problems related to subarray sum/product, both vague- and absolute-conditioned, there is a universal algorithm: save the prefix sums (sometimes together with their indices) in a sorted array, and use binary search to find all possible starting points of the window;
\item prefix sum or Kadane's algorithm can be used when we need the sum of a subarray.
\end{enumerate}

\begin{enumerate}
\item 53. Maximum Subarray (medium)
\item 325. Maximum Size Subarray Sum Equals k
\item 525. Contiguous Array
\item 560. Subarray Sum Equals K
\item 209. Minimum Size Subarray Sum (medium)
\end{enumerate}

Monotone stack and vague-conditioned subarray:
\begin{enumerate}
\item 713. Subarray Product Less Than K (all positive)
\item 862. Shortest Subarray with Sum at Least K (with negative)
\item 907. Sum of Subarray Minimums (all positive, but the minimum of every subarray, then summed)
\end{enumerate}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Maximum Subarray
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Absolute-conditioned Subarray}
For the maximum subarray, you are asked to return either:
\begin{enumerate}
\item the maximum sum or product; \textit{solved using prefix sum or Kadane's algorithm};
\item the maximum length of a subarray whose sum or product equals K; \textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its first index};
\item the total number of subarrays whose sum or product equals K; \textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its count}.
\end{enumerate}

\paragraph{Maximum/Minimum sum or product}
\begin{examples}[resume]
\item \textbf{53. Maximum Subarray (medium).} Find the contiguous subarray within an array (containing at least one number) which has the largest sum.
\begin{lstlisting}[numbers=none]
For example, given the array [-2,1,-3,4,-1,2,1,-5,4],
the contiguous subarray [4,-1,2,1] has the largest sum = 6.
\end{lstlisting}

Solution: the brute force uses two for loops, the first marking the start and the second the end of the subarray, and takes the maximum over all sums. To optimize, we can use divide and conquer, which is $O(n\log n)$ versus the brute force $O(n^3)$ (two nested for loops plus $O(n)$ for computing each sum). The divide and conquer method was shown in that chapter. A more efficient algorithm uses the prefix sum; please check Section~\ref{part4_prefix_sum} for that answer. Now, what is the sliding window solution? The key step of a sliding window is deciding when to move the first pointer of the window (shrinking the window). The window must include the current element j. For the maximum subarray, to increase the sum of the window, we abandon all previous elements whenever their sum is negative.
\begin{lstlisting}[language = Python]
from sys import maxsize

class Solution:
    def maxSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        if not nums:
            return 0
        i, j = 0, 0  # i <= j
        maxValue = -maxsize
        window_sum = 0
        while j < len(nums):
            window_sum += nums[j]
            j += 1
            maxValue = max(maxValue, window_sum)
            # a window with negative sum never helps a later element,
            # so drop it and restart the window at j
            while i < j and window_sum < 0:
                window_sum -= nums[i]
                i += 1
        return maxValue
\end{lstlisting}
\end{examples}

\paragraph{Maximum/Minimum length of subarray with sum or product S}
For this type of problem we need to track the length of the subarray.
\begin{examples}[resume]
\item \textbf{325. Maximum Size Subarray Sum Equals k.} Given an array nums and a target value k, find the maximum length of a subarray that sums to k. If there isn't one, return 0 instead. \textit{Note: The sum of the entire nums array is guaranteed to fit within the 32-bit signed integer range.}
\begin{lstlisting}[numbers=none]
Example 1:
Given nums = [1, -1, 5, -2, 3], k = 3,
return 4.
(because the subarray [1, -1, 5, -2] sums to 3 and is the longest)

Example 2:
Given nums = [-2, -1, 2, 1], k = 1,
return 2.
(because the subarray [-1, 2] sums to 1 and is the longest)

Follow Up: Can you do it in O(n) time?
\end{lstlisting}

\textbf{Solution: Prefix Sum Saved in a Hashmap.} The brute force solution of this problem is the same as for the maximum subarray. We track the prefix sum $y_j$ and look for a subarray sum $S_{(i,j)} = y_j - y_{i-1}$ that equals a specific value $k$. That is, for the current prefix sum $y_j$ we look for an earlier prefix sum $y_i = y_j - k$, the current prefix sum minus $k$. If we use a hashmap to save each prefix sum value together with the first index at which it appears, i.e. $(y_i, first\_index)$, then $max\_len = \max_j (j - dict[y_j - k])$.
% Solution: using prefix_sum and hashmap, we just need to reformulate dict[sum_i]. For this question, we need the maximum size of a subarray, so dict[i] = min(idx), the earliest index at which the value appears, which means we only set dict[i] = current_index if the key does not already exist; dict[0] = -1.
% Here we have sum_i; to check if there is a j in front of i that makes sum(j,i) = k, note sum(j,i) = sum_i - (a value in the dict) = k, so the value we need is sum_i - k, and we check whether it is in the dictionary.
\begin{lstlisting}[language = Python]
def maxSubArrayLen(self, nums, k):
    """
    :type nums: List[int]
    :type k: int
    :rtype: int
    """
    prefix_sum = 0
    dict = {0: -1}  # prefix sum 0 occurs at index -1
    max_len = 0
    for idx, n in enumerate(nums):
        prefix_sum += n
        # save each prefix sum with the first index at which it appears
        if prefix_sum not in dict:
            dict[prefix_sum] = idx
        # track the maximum length so far
        if prefix_sum - k in dict:
            max_len = max(max_len, idx - dict[prefix_sum - k])
    return max_len
\end{lstlisting}

Here is another example that asks for a pattern but can be converted to, and is equivalent to, the last problem:

\item \textbf{525. Contiguous Array.} Given a binary array, find the maximum length of a contiguous subarray with equal number of 0 and 1. \textit{Note: The length of the given binary array will not exceed 50,000.}
\begin{lstlisting}[numbers=none]
Example 1:
Input: [0,1]
Output: 2
Explanation: [0, 1] is the longest contiguous subarray with equal number of 0 and 1.

Example 2:
Input: [0,1,0]
Output: 2
Explanation: [0, 1] (or [1, 0]) is a longest contiguous subarray with equal number of 0 and 1.
\end{lstlisting}

Solution: the problem is equivalent to finding the maximum length of a subarray with sum equal to 0 after mapping each 0 to -1 and keeping each 1 as 1. Here our $k = 0$.
\begin{lstlisting}[language = Python]
def findMaxLength(self, nums):
    """
    :type nums: List[int]
    :rtype: int
    """
    nums = [nums[i] if nums[i] == 1 else -1 for i in range(len(nums))]
    max_len = 0
    cur_sum = 0
    mapp = {0: -1}
    for idx, v in enumerate(nums):
        cur_sum += v
        if cur_sum in mapp:
            max_len = max(max_len, idx - mapp[cur_sum])
        else:
            mapp[cur_sum] = idx
    return max_len
\end{lstlisting}

\item \textbf{674. Longest Continuous Increasing Subsequence.} Given an unsorted array of integers, find the length of the longest continuous increasing subsequence (subarray).
\begin{lstlisting}[numbers=none]
Example 1:
Input: [1,3,5,4,7]
Output: 3
Explanation: The longest continuous increasing subsequence is [1,3,5], its length is 3.
Even though [1,3,5,7] is also an increasing subsequence, it's not a continuous one,
because 5 and 7 are separated by 4.

Example 2:
Input: [2,2,2,2,2]
Output: 1
Explanation: The longest continuous increasing subsequence is [2], its length is 1.
\end{lstlisting}
\textit{Note: Length of the array will not exceed 10,000.}

Solution: the description of this problem should use ``subarray'' instead of ``subsequence''.
The brute force solution is, like any subarray problem, $O(n^3)$: two nested for loops to enumerate the subarrays, and another $O(n)$ to check whether each is strictly increasing. Using two pointers, we get $O(n)$ time complexity. We use two pointers $i$ and $j$ to delimit the current increasing subarray: we require the subarray from $i$ to $j$ to be strictly increasing, and whenever this is violated, we reset the starting point of the subarray to the position where the violation occurs.
\begin{lstlisting}[language = Python]
class Solution:
    def findLengthOfLCIS(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        if not nums:
            return 0
        if len(nums) == 1:
            return 1
        i, j = 0, 0
        max_length = 0
        while j < len(nums):
            j += 1  # slide the window
            max_length = max(max_length, j - i)
            # when the increasing condition is violated, reset the window
            if j < len(nums) and nums[j-1] >= nums[j]:
                i = j
        return max_length
\end{lstlisting}

\item \textbf{209. Minimum Size Subarray Sum (medium).} Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum $\geq$ s. If there isn't one, return 0 instead.
\begin{lstlisting}[numbers=none]
Example:
Input: s = 7, nums = [2,3,1,2,4,3]
Output: 2
Explanation: the subarray [4,3] has the minimal length under the problem constraint.
\end{lstlisting}

\textbf{Solution 1: Sliding Window, $O(n)$.}
\begin{lstlisting}[language=Python]
def minSubArrayLen(self, s, nums):
    ans = float('inf')
    n = len(nums)
    i = j = 0
    acc = 0
    while j < n:
        acc += nums[j]        # grow the window
        while acc >= s:       # shrink the window to get the optimal result
            ans = min(ans, j - i + 1)
            acc -= nums[i]
            i += 1
        j += 1
    return ans if ans != float('inf') else 0
\end{lstlisting}

\textbf{Solution 2: prefix sum and binary search, $O(n\log n)$.} Assume the current prefix sum is $p_j$. We need the largest earlier prefix sum $p_i \leq p_j - s$, i.e., the rightmost value in the (sorted) prefix sum array that is $\leq p_j - s$.
\begin{lstlisting}[language=Python]
from bisect import bisect_right

class Solution(object):
    def minSubArrayLen(self, s, nums):
        ans = float('inf')
        n = len(nums)
        i = j = 0
        ps = [0]
        while j < n:
            ps.append(nums[j] + ps[-1])
            # find a possible left boundary i
            if ps[-1] - s >= 0:
                index = bisect_right(ps, ps[-1] - s)
                if index > 0:
                    index -= 1
                ans = min(ans, j - index + 1)
            j += 1
        return ans if ans != float('inf') else 0
\end{lstlisting}
\end{examples}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Maximum Subarray
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{The number of subarrays with sum or product S}
\begin{examples}[resume]
\item \textbf{560. Subarray Sum Equals K.} Given an array of integers and an integer k, you need to find the total number of continuous subarrays whose sum equals k.
\begin{lstlisting}[numbers=none]
Example 1:
Input: nums = [1,1,1], k = 2
Output: 2
\end{lstlisting}

Answer: the naive solution enumerates all possible subarrays, which is $O(n^2)$, and then computes and checks each sum, which is $O(n)$, giving $O(n^3)$ total time complexity. However, we can decrease it to $O(n^2)$ if we compute the sums differently: we first compute the prefix sum up to each index and use the equation $sum(i,j) = sum(0,j) - sum(0,i)$. However, the OJ still gives us an LTE error.
\begin{lstlisting}[language = Python]
def subarraySum(self, nums, k):
    """
    :type nums: List[int]
    :type k: int
    :rtype: int
    """
    '''return the number of subarrays that sum to k'''
    count = 0
    sums = [0]*(len(nums)+1)  # prefix sum up to each index
    for idx, v in enumerate(nums):
        sums[idx+1] = sums[idx] + v
    for i in range(len(nums)):
        for j in range(i, len(nums)):
            value = sums[j+1] - sums[i]
            count = count + 1 if value == k else count
    return count
\end{lstlisting}

Solution 3: prefix sum and hashmap. We just need to reformulate what the dictionary stores. For this question, we need the total number of subarrays, so dict[prefix\_sum] = count, which means each time we simply do dict[prefix\_sum] += 1, with dict[0] = 1.
\begin{lstlisting}[language = Python]
import collections

class Solution(object):
    def subarraySum(self, nums, k):
        """
        :type nums: List[int]
        :type k: int
        :rtype: int
        """
        '''return the number of subarrays that sum to k'''
        dict = collections.defaultdict(int)  # value = number of times the prefix sum occurs
        dict[0] = 1
        prefix_sum, count = 0, 0
        for v in nums:
            prefix_sum += v
            count += dict[prefix_sum - k]  # number of earlier prefixes at distance k
            dict[prefix_sum] += 1          # update the count of this prefix sum
        return count
\end{lstlisting}

\item \textbf{974. Subarray Sums Divisible by K.} Given an array A of integers, return the number of (contiguous, non-empty) subarrays that have a sum divisible by K.
\begin{lstlisting}[numbers=none]
Example 1:
Input: A = [4,5,0,-2,-3,1], K = 5
Output: 7
Explanation: There are 7 subarrays with a sum divisible by K = 5:
[4, 5, 0, -2, -3, 1], [5], [5, 0], [5, 0, -2, -3], [0], [0, -2, -3], [-2, -3]
\end{lstlisting}

\textbf{Analysis:} for the above array, the prefix sums are [0, 4, 9, 9, 7, 4, 5]. Let P[i+1] = A[0] + A[1] + ... + A[i]. Then each subarray sum can be written as P[j] - P[i] (for j > i). For the current index j we need the number of indices i with (P[j] - P[i]) \% K == 0, i.e., with P[i] \% K == P[j] \% K. Therefore, differently from the sum == K case, we do not look up P[j] - K but P[j] \% K in the hashmap, so we save the prefix sums modulo K. For the example, we end up with the dict {0: 2, 4: 4, 2: 1}.
\begin{lstlisting}[language=Python]
from collections import defaultdict

class Solution:
    def subarraysDivByK(self, A, K):
        """
        :type A: List[int]
        :type K: int
        :rtype: int
        """
        a_sum = 0
        p_dict = defaultdict(int)
        p_dict[0] = 1  # the empty prefix has remainder 0
        ans = 0
        for i, v in enumerate(A):
            a_sum += v
            a_sum %= K
            if a_sum in p_dict:
                ans += p_dict[a_sum]
            p_dict[a_sum] += 1  # save the remainder instead of the raw sum
        return ans
\end{lstlisting}

\textbf{Solution 2: use combinations.} With P = [0, 4, 9, 9, 7, 4, 5], count the remainders of P modulo K: $C_0 = 2$, $C_2 = 1$, $C_4 = 4$. With $C_0 = 2$ (at P[0] and P[6]), we get $C_2^2 = 1$ subarray with sum divisible by K, namely A[0:6] = [4, 5, 0, -2, -3, 1]. With $C_4 = 4$ (at P[1], P[2], P[3], P[5]), we get $C_4^2 = 6$ subarrays with sum divisible by K, namely A[1:2], A[1:3], A[1:5], A[2:3], A[2:5], A[3:5].
\begin{lstlisting}[language=Python]
def subarraysDivByK(self, A, K):
    P = [0]
    for x in A:
        P.append((P[-1] + x) % K)
    count = collections.Counter(P)
    return sum(v*(v-1)//2 for v in count.values())
\end{lstlisting}
\end{examples}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Vague-conditioned subarray
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Vague-conditioned subarray}
In this section, we are asked the same type of question as in the last section.
The only difference is the condition. For example, in the following question we are asked for subarrays with $sum \geq s$. Because of the vagueness of the condition, a hashmap$+$prefix sum solution no longer gives us $O(n)$ linear time. If the array is all positive, the best prefix-sum-based approach is $O(n\log n)$, obtained by combining it with binary search; however, a carefully designed sliding window can still achieve linear time $O(n)$. For arrays with negative numbers, we can utilize the monotonic queue mentioned in Section~\ref{section_mono_stack}, which achieves $O(n)$ in both time and space complexity.

\paragraph{All Positive Array (Sliding Window)} If the array is all positive, the problem can still be easily solved with a sliding window. For example:
\begin{examples}[resume]
\item \textbf{209. Minimum Size Subarray Sum (medium).} Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum $\geq$ s. If there isn't one, return 0 instead.
\begin{lstlisting}[numbers=none]
Example:
Input: s = 7, nums = [2,3,1,2,4,3]
Output: 2
Explanation: the subarray [4,3] has the minimal length under the problem constraint.
\end{lstlisting}
Follow up: If you have figured out the O(n) solution, try coding another solution of which the time complexity is O(n log n).

\textbf{Analysis.} For this problem, we can still save the prefix sums in a hashmap. However, since the condition is $sum \geq s$, we need to search through the hashmap for every key $\leq prefix\_sum - s$. The time complexity rises to $O(n^2)$ with a linear search, and we receive an LTE error.
\begin{lstlisting}[language = Python]
def minSubArrayLen(self, s, nums):
    """
    :type s: int
    :type nums: List[int]
    :rtype: int
    """
    if not nums:
        return 0
    dict = collections.defaultdict(int)
    dict[0] = -1  # prefix sum 0 with index -1
    prefixSum = 0
    minLen = sys.maxsize
    for idx, n in enumerate(nums):
        prefixSum += n
        for key, value in dict.items():
            if key <= prefixSum - s:
                minLen = min(minLen, idx - value)
        dict[prefixSum] = idx  # save the last index
    return minLen if 1 <= minLen <= len(nums) else 0
\end{lstlisting}

\textbf{Solution 1: Prefix Sum and Binary Search.} Because the items in the array are all positive, the prefix sum array is increasing. This means that if we save the prefix sums in an array, the array is ordered, and we can use binary search to find the index of the largest value $\leq$ (prefix sum $- s$). With the bisect module, we can use the bisect\_right function, which finds the rightmost position at which the current value can be inserted while keeping the array ordered; the index we want is then rr-1.
\begin{lstlisting}[language=Python]
import bisect

def minSubArrayLen(self, s, nums):
    ps = [0]
    ans = len(nums) + 1
    for i, v in enumerate(nums):
        ps.append(ps[-1] + v)
        # find the rightmost position with prefix sum <= ps[i+1] - s
        rr = bisect.bisect_right(ps, ps[i+1] - s)
        if rr:
            ans = min(ans, i + 1 - (rr - 1))
    return ans if ans <= len(nums) else 0
\end{lstlisting}
\begin{lstlisting}[language = Python]
def minSubArrayLen(self, s, nums):
    """
    :type s: int
    :type nums: List[int]
    :rtype: int
    """
    def bSearch(nums, i, j, target):
        while i < j:
            mid = (i + j) // 2
            if nums[mid] == target:
                return mid
            elif nums[mid] < target:
                i = mid + 1
            else:
                j = mid - 1
        return i

    if not nums:
        return 0
    rec = [0] * len(nums)
    rec[0] = nums[0]
    if rec[0] >= s:
        return 1
    minlen = len(nums) + 1
    for i in range(1, len(nums)):
        rec[i] = rec[i-1] + nums[i]
        if rec[i] >= s:
            index = bSearch(rec, 0, i, rec[i] - s)
            if rec[index] > rec[i] - s:
                index -= 1
            minlen = min(minlen, i - index)
    return minlen if minlen != len(nums) + 1 else 0
\end{lstlisting}

\textbf{Solution 2: Sliding window in $O(n)$.} Using the sliding window, once the sum in the window satisfies the condition, we keep shrinking the window (moving the left pointer rightward) until the condition no longer holds. This way, we obtain $O(n)$ complexity.
\begin{lstlisting}[language = Python]
def minSubArrayLen(self, s, nums):
    i, j = 0, 0
    sum_in_window = 0
    ans = len(nums) + 1
    while j < len(nums):
        sum_in_window += nums[j]
        j += 1
        # shrink the window while the condition is satisfied
        while i < j and sum_in_window >= s:
            ans = min(ans, j - i)
            sum_in_window -= nums[i]
            i += 1
    return ans if ans <= len(nums) else 0
\end{lstlisting}

\item \textbf{713. Subarray Product Less Than K.} You are given an array of positive integers nums. Count and print the number of (contiguous) subarrays where the product of all the elements in the subarray is less than k.
\begin{lstlisting}[numbers=none]
Example 1:
Input: nums = [10, 5, 2, 6], k = 100
Output: 8
Explanation: The 8 subarrays that have product less than 100 are:
[10], [5], [2], [6], [10, 5], [5, 2], [2, 6], [5, 2, 6].
Note that [10, 5, 2] is not included as the product of 100 is not strictly less than k.

Note:
0 < nums.length <= 50000.
0 < nums[i] < 1000.
0 <= k < 10^6.
\end{lstlisting}

Answer: because we need subarrays with product strictly less than k, it is difficult to use a prefix product. If we use a sliding window, the trace on the example is:
\begin{lstlisting}
i=0, j=0, product=10   10<100,  ans += j-i+1 (1)  -> [10]
i=0, j=1, product=50   50<100,  ans += j-i+1 (3)  -> [5], [10,5]
i=0, j=2, product=100  shrink the window: i=1, product=10, ans += 2 (5)  -> [2], [5,2]
i=1, j=3, product=60   60<100,  ans += 3 (8)  -> [6], [2,6], [5,2,6]
\end{lstlisting}
The Python code:
\begin{lstlisting}[language = Python]
class Solution:
    def numSubarrayProductLessThanK(self, nums, k):
        """
        :type nums: List[int]
        :type k: int
        :rtype: int
        """
        if not nums:
            return 0
        i, j = 0, 0
        window_product = 1
        ans = 0
        while j < len(nums):
            window_product *= nums[j]
            while i < j and window_product >= k:
                window_product /= nums[i]
                i += 1
            if window_product < k:
                ans += j - i + 1
            j += 1
        return ans
\end{lstlisting}
\end{examples}

\paragraph{Array with Negative Elements (Monotonic Queue)} In this section, we work through how to handle a vague-conditioned problem on an array with negative elements. A monotonic queue or stack (Section~\ref{section_mono_stack}) fits this scenario and gives $O(n)$ time complexity and $O(n)$ space complexity.
\begin{examples}[resume]
\item \textbf{862. Shortest Subarray with Sum at Least K.} Return the length of the shortest, non-empty, contiguous subarray of A with sum at least K.
If there is no non-empty subarray with sum at least K, return -1.

\begin{lstlisting}[numbers=none]
Example 1:

Input: A = [1], K = 1
Output: 1

Example 2:

Input: A = [1,2], K = 4
Output: -1

Example 3:

Input: A = [2,-1,2], K = 3
Output: 3
\end{lstlisting}

Note: $1 <= A.length <= 50000$, $-10^5 <= A[i] <= 10^5$, $1 <= K <= 10^9$.

\textbf{Analysis:} The only difference from the last problem is that negative values are allowed. Because of the negatives, the shrinking method no longer works: when we shrink the window, the sum inside the smaller window can even grow if the value we cut out is negative. For instance, with [84,-37,32,40,95] and K=167 the right answer is [32, 40, 95]; a plain shrinking window stops after removing 84 (the sum drops below K) and reports the whole array of length 5, never discovering that also dropping -37 brings the sum back up to 167. So how do we handle the negative values?

\textbf{Solution 1: prefix sum and binary search in the prefix sum. TLE}

\begin{lstlisting}[language=Python]
def shortestSubarray(self, A, K):
    def bisect_right(lst, target):
        l, r = 0, len(lst)-1
        while l <= r:
            mid = l + (r-l)//2
            if lst[mid][0] <= target:
                l = mid + 1
            else:
                r = mid - 1
        return l

    acc = 0
    ans = float('inf')
    prefixSum = [(0, -1)]  # (value, index)
    for i, n in enumerate(A):
        acc += n
        index = bisect_right(prefixSum, acc-K)
        for j in range(index):
            ans = min(ans, i-prefixSum[j][1])
        index = bisect_right(prefixSum, acc)
        prefixSum.insert(index, (acc, i))
    return ans if ans != float('inf') else -1
\end{lstlisting}

% For an all positive array, the prefix sum is strictly increasing. If we want $S_{(i,j)} = y[j]-y[i-1] >= k$ and the smallest subarray, then we want the largest such $i$. For each $j$, once an $i$ has been matched it never needs to be considered again (just like the previous sliding window, where once the condition is satisfied we keep moving i forward until the condition is violated or i cannot move any further).

Now, let us analyze a simple example that includes both a zero and a negative number: [2, -1, 2, 0, 1] with K=3. Its prefix sum array is P = [0, 2, 1, 3, 3, 4], and the subarrays with sum at least three include [2,-1,2], [2,-1,2,0] and [2,0,1]. Imagine drawing the prefix sums on an x-y axis: when we encounter a negative number the prefix sum decreases, and when we encounter a zero it stays flat. For the zero case, $P[3] = P[4]$: as a left boundary the later index 4 is at least as good as 3 (it yields a shorter subarray), so 3 can be discarded. For the negative case, $P[1]=2 > P[2]=1$ because $A[1]<0$: index 2 is always a better left-boundary choice than index 1 (its prefix sum is smaller, so the condition is easier to satisfy, and it is closer to $j$, so the subarray is shorter). Therefore we can keep the candidate prefix sums monotonically increasing---just as in the all-positive case---by maintaining a monotonic queue.

\begin{lstlisting}[language = Python]
class Solution:
    def shortestSubarray(self, A, K):
        P = [0]*(len(A)+1)
        for idx, x in enumerate(A):
            P[idx+1] = P[idx]+x

        ans = len(A)+1  # N+1 is impossible
        monoq = collections.deque()
        for y, Py in enumerate(P):
            # a smaller or equal prefix sum makes every previous larger or equal one useless
            while monoq and Py <= P[monoq[-1]]:
                monoq.pop()
            # once an index has served as a left boundary it never needs to be considered again
            # (similar to the sliding window where we move the first index forward)
            while monoq and Py - P[monoq[0]] >= K:
                ans = min(ans, y - monoq.popleft())
            monoq.append(y)
        return ans if ans < len(A)+1 else -1
\end{lstlisting}
\end{examples}

\subsection{LeetCode Problems and Misc}
\paragraph{Absolute-conditioned Subarray}
\begin{enumerate}
\item 930.
Binary Subarrays With Sum

\begin{lstlisting}
In an array A of 0s and 1s, how many non-empty subarrays have sum S?

Example 1:

Input: A = [1,0,1,0,1], S = 2
Output: 4
Explanation: The 4 subarrays are bolded below:
[1,0,1,0,1]
[1,0,1,0,1]
[1,0,1,0,1]
[1,0,1,0,1]

Note:
A.length <= 30000
0 <= S <= A.length
A[i] is either 0 or 1.
\end{lstlisting}

Answer: this is another variant of the absolute-conditioned subarray: instead of the longest subarray with a given sum, we count the subarrays whose sum equals S. We solve it with the prefix sum and a hashmap that stores how many times each prefix-sum value has occurred.

\begin{lstlisting}[language=Python]
import collections
class Solution:
    def numSubarraysWithSum(self, A, S):
        """
        :type A: List[int]
        :type S: int
        :rtype: int
        """
        dict = collections.defaultdict(int)  # value: how many times this prefix sum has occurred
        dict[0] = 1  # the prefix sum starts from 0, with count 1
        prefix_sum, count = 0, 0
        for v in A:
            prefix_sum += v
            count += dict[prefix_sum-S]  # add the count of the complement value, default is 0
            dict[prefix_sum] += 1  # update the count of the current prefix sum
        return count
\end{lstlisting}

We can also write it as:

\begin{lstlisting}[language=Python]
def numSubarraysWithSum(self, A, S):
    """
    :type A: List[int]
    :type S: int
    :rtype: int
    """
    P = [0]
    for x in A:
        P.append(P[-1] + x)
    count = collections.Counter()

    ans = 0
    for x in P:
        ans += count[x]
        count[x + S] += 1
    return ans
\end{lstlisting}

It can also be solved with a modified sliding window algorithm. For a sliding window we have $i, j$ starting from 0, which delimit the window, and in each iteration $j$ moves one position. In a normal sliding window we shrink the window only when the sum is larger than the target. However, in this case, as in the example $1, 0, 1, 0, 1$: when $j$ reaches the last element and $i = 1$, the sum is $2$, but the algorithm would miss $i = 2$, which gives the same sum because only a zero was dropped. To solve this, we keep another index $i_{hi}$; in addition to the moving rule of $i$, it also moves forward while the sum equals S and the element it points to is $0$. This is effectively a three-pointer algorithm.

\begin{lstlisting}[language=Python]
def numSubarraysWithSum(self, A, S):
    i_lo, i_hi, j = 0, 0, 0  # i_lo <= j
    sum_lo = sum_hi = 0
    ans = 0
    while j < len(A):
        # Maintain i_lo, sum_lo:
        # while the sum is too big, i_lo += 1
        sum_lo += A[j]
        while i_lo < j and sum_lo > S:
            sum_lo -= A[i_lo]
            i_lo += 1

        # Maintain i_hi, sum_hi:
        # while the sum is too big, or equal and we can move, i_hi += 1
        sum_hi += A[j]
        while i_hi < j and (sum_hi > S or sum_hi == S and not A[i_hi]):
            sum_hi -= A[i_hi]
            i_hi += 1

        if sum_lo == S:
            ans += i_hi - i_lo + 1
        j += 1
    return ans
\end{lstlisting}

\item 523. Continuous Subarray Sum

\begin{lstlisting}
Given a list of non-negative numbers and a target integer k, write a function to check if the array has a continuous subarray of size at least 2 that sums up to a multiple of k, that is, sums up to n*k where n is also an integer.

Example 1:

Input: [23, 2, 4, 6, 7],  k=6
Output: True
Explanation: Because [2, 4] is a continuous subarray of size 2 and sums up to 6.

Example 2:

Input: [23, 2, 6, 4, 7],  k=6
Output: True
Explanation: Because [23, 2, 6, 4, 7] is a continuous subarray of size 5 and sums up to 42.

Note:
The length of the array won't exceed 10,000.
You may assume the sum of all the numbers is in the range of a signed 32-bit integer.
\end{lstlisting}

Answer: This is a variant of the subarray-with-sum-k problem. The difference is that we save each prefix sum as its remainder modulo k: since $(a-b)\%k=0$ exactly when $a\%k=b\%k$, seeing the same remainder at two indices that are at least two apart means the subarray between them sums to a multiple of k.
\begin{lstlisting}[language=Python]
class Solution:
    def checkSubarraySum(self, nums, k):
        """
        :type nums: List[int]
        :type k: int
        :rtype: bool
        """
        if not nums:
            return False
        k = abs(k)
        prefixSum = 0
        dict = collections.defaultdict(int)
        dict[0] = -1
        for i, v in enumerate(nums):
            prefixSum += v
            if k != 0:
                prefixSum %= k
            # the same remainder seen at least two indices earlier gives a valid subarray
            if prefixSum in dict and (i-dict[prefixSum]) >= 2:
                return True
            if prefixSum not in dict:
                dict[prefixSum] = i  # only keep the earliest index for each remainder
        return False
\end{lstlisting}
\end{enumerate}

For problems that ask about a bounded maximum, an average, or a minimum over a subarray, see for example:
\begin{examples}[resume]
\item 795. Number of Subarrays with Bounded Maximum (medium)
\item 907. Sum of Subarray Minimums (monotone stack)
\end{examples}

% \item 674. Longest Continuous Increasing Subsequence
% Given an unsorted array of integers, find the length of longest continuous increasing subsequence (subarray).
% Example 1:
% \begin{lstlisting}
% Input: [1,3,5,4,7]
% Output: 3
% Explanation: The longest continuous increasing subsequence is [1,3,5], its length is 3.
% Even though [1,3,5,7] is also an increasing subsequence, it's not a continuous one where 5 and 7 are separated by 4.
% \end{lstlisting}
% Example 2:
% \begin{lstlisting}
% Input: [2,2,2,2,2]
% Output: 1
% Explanation: The longest continuous increasing subsequence is [2], its length is 1.
% \end{lstlisting}
% \textit{Note: Length of the array will not exceed 10,000.}
% Solution: The brute force solution is use two for loops with $O(n^2)$. The first loop is the start number, the second loop is the $nums[j]>nums[j-1]$ or else stop. Or we can use two pointers. i,j start from 0,1 respectively.
% \begin{lstlisting}[language = Python]
% class Solution:
%     def findLengthOfLCIS(self, nums):
%         """
%         :type nums: List[int]
%         :rtype: int
%         """
%         if not nums:
%             return 0
%         if len(nums)==1:
%             return 1
%         i,j=0,1
%         max_length = 0
%         while j<len(nums):
%             if nums[j-1]<nums[j]:
%                 j+=1
%             else:
%                 if j-i>max_length:
%                     max_length = j-i
%                 i=j
%                 j+=1
%         if j-i>max_length:
%             max_length = j-i
%         return max_length
% \end{lstlisting}

% section subsequence
%\subfile{chapters/mastering/array/subsequence}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% sub sequence
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Subsequence (Medium or Hard)}
The difference between the subsequence type of questions and the subarray is that the elements no longer need to be consecutive. Because of this relaxation, the brute force solution is exponential, $O(2^n)$: for each element we have two options, chosen or not chosen. These questions are often used as follow-ups to the subarray questions precisely because the non-consecutive choice makes them harder, and they are typically solved with dynamic programming. Fig.~\ref{fig:subsequence_problems} shows a list of all related subsequence problems on LeetCode.

A subsequence of a string is a new string which is formed from the original string by deleting some (can be none) of the characters without disturbing the relative positions of the remaining characters (i.e., "ACE" is a subsequence of "ABCDE" while "AEC" is not). Among subsequence problems we commonly see the increasing subsequence and counting distinct subsequences, and they are usually solved with single-sequence dynamic programming.
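To see concretely why the brute force is exponential, here is a minimal sketch that builds every subsequence of a string by deciding, for each character, whether to keep it or drop it (the helper name \texttt{all\_subsequences} is ours and not tied to any particular problem):

\begin{lstlisting}[language=Python]
def all_subsequences(s):
    """Return every subsequence of s (2^len(s) of them, duplicates included),
    since each character is either kept or dropped."""
    res = ['']  # start with the empty subsequence
    for ch in s:
        # every existing subsequence can either skip ch (stay as is) or take it
        res += [sub + ch for sub in res]
    return res

print(all_subsequences('abc'))
# ['', 'a', 'b', 'ab', 'c', 'ac', 'bc', 'abc']  -> 2^3 = 8, including the empty one
\end{lstlisting}

For a string of length $n$ this enumerates $2^n$ candidates, which is exactly why the problems below fall back on dynamic programming instead of enumeration.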
\begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{fig/subsequence_1.png} \includegraphics[width=0.8\columnwidth]{fig/subsequence_2.png} \caption{Subsequence Problems Listed on LeetCode} \label{fig:subsequence_problems} \end{figure} 940. Distinct Subsequences II (hard) Given a string S, count the number of distinct, non-empty subsequences of S . Since the result may be large, return the answer modulo $10^9 + 7$. \begin{lstlisting} Example 1: Input: "abc" Output: 7 Explanation: The 7 distinct subsequences are "a", "b", "c", "ab", "ac", "bc", and "abc". Example 2: Input: "aba" Output: 6 Explanation: The 6 distinct subsequences are "a", "b", "ab", "ba", "aa" and "aba". Example 3: Input: "aaa" Output: 3 Explanation: The 3 distinct subsequences are "a", "aa" and "aaa". \end{lstlisting} \textbf{Sequence type dynamic programming}. The naive solution for subsequence is using DFS to generate all of the subsequence recursively and we also need to check the repetition. The possible number of subsequence is $2^n-1$. Let's try forward induction method. \begin{lstlisting} # define the result for each state: number of subsequence ends with each state state: a b c ans : 1 2 4 a: a; dp[0] = 1 b: b, ab; = dp[0]+1 if this is 'a', length 1 is the same as dp[0], only length 2 is possible c: c, ac, bc, abc; = dp[0]+dp[1]+1, if it is 'a', aa, ba, aba, = dp[1]+1 d: d, ad, bd, abd, cd, acd, bcd, abcd = dp[0]+dp[1]+dp[2]+1 \end{lstlisting} Thus the recurrence function can be Eq.~\ref{eq:distinct_subsequence}. \begin{equation} \label{eq:distinct_subsequence} dp[i] = \sum_{j<i}(dp[j]) +1, S[j] != S[i] \end{equation} Thus, we have $O(n^2)$ time complexity, and the following code: \begin{lstlisting}[language=Python] def distinctSubseqII(self, S): """ :type S: str :rtype: int """ MOD = 10**9+7 dp = [1]*len(S) #means for that length it has at least one count for i, c in enumerate(S): for j in range(i): if c == S[j]: continue else: dp[i] += dp[j] dp[i] %= MOD return sum(dp) % MOD \end{lstlisting} However, we still get LTE. How to improve it further. If we use a counter indexed by all of the 26 letters, and a prefix sum. The inner for loop can be replaced by dp[i] = 1+ (prefix sum - sum of all S[i]).Thus we can lower the complexity further to $O(n)$. \begin{lstlisting}[language=Python] def distinctSubseqII(self, S): MOD = 10**9+7 dp = [1]*len(S) #means for that length it has at least one count sum_tracker = [0]*26 total = 0 for i, c in enumerate(S): index = ord(c) - ord('a') dp[i] += total-sum_tracker[index] total += dp[i] sum_tracker[index] += dp[i] return sum(dp) % MOD \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Sum %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % \subsection{Sum} % In this section, to get sum we can choose to use hashmap to save the original list so that for the last element, we only check the hashmap, we can lower the complexity by one power of n. However, a better solution is to use two pointers or three pointers. for three pointers, the first one is to make sure the starting point. Also, we can think about divide and conquer. 
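As a quick sanity check of the $O(n)$ recurrence above, the same idea can be lifted into a standalone function (a hypothetical helper outside the original solution class) so that the three examples from the problem statement can be verified directly:

\begin{lstlisting}[language=Python]
def distinct_subseq_count(S, MOD=10**9 + 7):
    # dp[i]: number of distinct subsequences that end exactly at S[i]
    dp = [1] * len(S)
    sum_tracker = [0] * 26  # dp total already attributed to each letter
    total = 0               # sum of dp[j] over all j < i
    for i, c in enumerate(S):
        idx = ord(c) - ord('a')
        dp[i] += total - sum_tracker[idx]
        total += dp[i]
        sum_tracker[idx] += dp[i]
    return sum(dp) % MOD

assert distinct_subseq_count("abc") == 7
assert distinct_subseq_count("aba") == 6
assert distinct_subseq_count("aaa") == 3
\end{lstlisting}

Subtracting \texttt{sum\_tracker[idx]} is what removes the subsequences that would otherwise be counted twice when a letter repeats.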
% \begin{lstlisting} % [-4,-1,-1,0,1,2] % i, l-> ``````<-r % \end{lstlisting} % \begin{enumerate} % \item %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Others %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Others} For example, the following question would be used as follow up for question \textit{Longest Continuous Increasing Subsequence} 300. Longest Increasing Subsequence 673. Number of Longest Increasing Subsequence Given an unsorted array of integers, find the number of longest increasing subsequence. \begin{lstlisting} Example 1: Input: [1,3,5,4,7] Output: 2 Explanation: The two longest increasing subsequence are [1, 3, 4, 7] and [1, 3, 5, 7]. Example 2: Input: [2,2,2,2,2] Output: 5 Explanation: The length of longest continuous increasing subsequence is 1, and there are 5 subsequences' length is 1, so output 5. \textit{Note: Length of the given array will be not exceed 2000 and the answer is guaranteed to be fit in 32-bit signed int.} \end{lstlisting} Solution: Another different problem, to count the number of the max subsequence. Typical dp: state: f[i] \begin{lstlisting}[language = Python] from sys import maxsize class Solution: def findNumberOfLIS(self, nums): """ :type nums: List[int] :rtype: int """ max_count = 0 if not nums: return 0 memo =[None for _ in range(len(nums))] rlst=[] def recursive(idx,tail,res): if idx==len(nums): rlst.append(res) return 0 if memo[idx]==None: length = 0 if nums[idx]>tail: addLen = 1+recursive(idx+1, nums[idx],res+[nums[idx]]) notAddLen = recursive(idx+1, tail,res) return max(addLen,notAddLen) else: return recursive(idx+1, tail,res) ans=recursive(0,-maxsize,[]) count=0 for lst in rlst: if len(lst)==ans: count+=1 return count \end{lstlisting} Using dynamic programming, the difference is we add a count array. \begin{lstlisting}[language = Python] from sys import maxsize class Solution: def findNumberOfLIS(self, nums): N = len(nums) if N <= 1: return N lengths = [0] * N #lengths[i] = longest ending in nums[i] counts = [1] * N #count[i] = number of longest ending in nums[i] for idx, num in enumerate(nums): #i for i in range(idx): #j if nums[i] < nums[idx]: #bigger if lengths[i] >= lengths[idx]: lengths[idx] = 1 + lengths[i] #set the biggest length counts[idx] = counts[i] #change the count elif lengths[i] + 1 == lengths[idx]: #if it is a tie counts[idx] += counts[i] #increase the current count by count[i] longest = max(lengths) print(counts) print(lengths) return sum(c for i, c in enumerate(counts) if lengths[i] == longest) \end{lstlisting} 128. Longest Consecutive Sequence \begin{lstlisting} Given an unsorted array of integers, find the length of the longest consecutive elements sequence. For example, Given [100, 4, 200, 1, 3, 2], The longest consecutive elements sequence is [1, 2, 3, 4]. Return its length: 4. Your algorithm should run in O(n) complexity. \end{lstlisting} Solution: Not thinking about the O(n) complexity, we can use sorting to get [1,2,3,4,100,200], and then use two pointers to get [1,2,3,4]. How about O(n)? We can pop out a number in the list, example, 4 , then we use while first-1 to get any number that is on the left side of 4, here it is 3, 2, 1, and use another to find all the bigger one and remove these numbers from the nums array. 
\begin{lstlisting}[language =Python]
def longestConsecutive(self, nums):
    nums = set(nums)
    maxlen = 0
    while nums:
        first = last = nums.pop()
        while first - 1 in nums:  # keep finding the smaller one
            first -= 1
            nums.remove(first)
        while last + 1 in nums:  # keep finding the larger one
            last += 1
            nums.remove(last)
        maxlen = max(maxlen, last - first + 1)
    return maxlen
\end{lstlisting}

% subset
%\subfile{chapters/mastering/array/subset.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Subset
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Subset (Combination and Permutation)}
\label{part4_array_subset}
A subset B of a set A is a set whose elements are all drawn from A; in other words, B is contained inside A, $B \subseteq A$. There are two kinds of subset problems: if the order of the elements does not matter it is a combination problem, otherwise it is a permutation problem. To solve the problems in this section we refer to the backtracking in Sec~\ref{sec_combination}. When the subset has a fixed constant length, a hashmap can be used to lower the complexity by one power of n.

\textbf{Subset VS Subsequence}. In a subsequence, the elements keep the original order of the original sequence, while in a set there is no ordering, only a collection of elements. In this type of question we are asked to return subsets of a list, and backtracking (Sec~\ref{sec:backtrack}) can be applied.

\subsection{Combination}
\label{part4_array_combine}
The solutions in this section are closely related to Section~\ref{sec_combination}.

78. Subsets

\begin{lstlisting}
Given a set of distinct integers, nums, return all possible subsets (the power set).

Note: The solution set must not contain duplicate subsets.

Example:

Input: nums = [1,2,3]
Output:
[
  [3],
  [1],
  [2],
  [1,2,3],
  [1,3],
  [2,3],
  [1,2],
  []
]
\end{lstlisting}

\textbf{Backtracking}. This is a combination problem, which we have explained in the backtracking section, so we give the code directly here.

\begin{lstlisting}[language = Python]
def subsets(self, nums):
    res, n = [], len(nums)
    res = self.combine(nums, n, n)
    return res

def combine(self, nums, n, k):
    """
    :type n: int
    :type k: int
    :rtype: List[List[int]]
    """
    def C_n_k(d, k, s, curr, ans):
        # d controls the depth, k controls the return level,
        # curr saves the current result, ans collects all results
        ans.append(curr[:])  # append a copy of the current subset
        if d == k:  # the length is satisfied
            return
        for i in range(s, n):
            curr.append(nums[i])
            C_n_k(d+1, k, i+1, curr[:], ans)  # i+1 because no repeat; pass a copy curr[:]
            curr.pop()

    ans = []
    C_n_k(0, k, 0, [], ans)
    return ans
\end{lstlisting}

\textbf{Incremental}. Backtracking is not the only way to solve the above problem. It can also be done iteratively: observe the following process, where we keep appending the new element to the end of every previous result.
\begin{lstlisting} [1, 2, 3, 4] l = 0, [] l = 1, for 1, []+[1], -> [1], get powerset of [1] l = 2, for 2, []+[2], [1]+[2], -> [2], [1, 2], get powerset of [1, 2] l = 3, for 3, []+[3], [1]+[3], [2]+[3], [1, 2]+[3], -> [3], [1, 3], [2, 3], [1, 2, 3], get powerset of [1, 2, 3] l = 4, for 4, []+ [4]; [1]+[4]; [2]+[4], [1, 2] +[4]; [3]+[4], [1,3]+[4],[2,3]+[4], [1,2,3]+[4], get powerset of [1, 2, 3, 4] \end{lstlisting} \begin{lstlisting}[language=Python] def subsets(self, nums): result = [[]] #use two dimensional, which already have [] one element for num in nums: new_results = [] for r in result: new_results.append(r + [num]) result += new_results return result \end{lstlisting} 90. Subsets II \begin{lstlisting} Given a collection of integers that might contain duplicates, nums, return all possible subsets (the power set). Note: The solution set must not contain duplicate subsets. Example: Input: [1,2,2] Output: [ [2], [1], [1,2,2], [2,2], [1,2], [] ] \end{lstlisting} Analysis: Because of the duplicates, the previous superset algorithm would give repetitive subset. For the above example, we would have [1, 2] twice, and [2] twice. If we try to modify on the previous code. We first need to sort the nums, which makes the way we check repeat easiler. Then the code goes like this: \begin{lstlisting}[language = Python] def subsetsWithDup(self, nums): """ :type nums: List[int] :rtype: List[List[int]] """ nums.sort() result = [[]] #use two dimensional, which already have [] one element for num in nums: new_results = [] for r in result: print(r) new_results.append(r + [num]) for rst in new_results: if rst not in result: # check the repetitive result.append(rst) return result \end{lstlisting} However, the above code is extremely inefficient because of the checking process. A better way to do this: \begin{lstlisting} [1, 2, 2] l = 0, [] l = 1, for 1, []+[1] l = 2, for 2, []+[2], [1]+[2]; []+[2, 2], [1]+[2, 2] \end{lstlisting} So it would be more efficient if we first save all the numbers in the array in a dictionary. For the above case, the dic = {1:1, 2:2}. Each time we try to generate the result, we use 2 up to 2 times. Same way, we can use dictionary on the backtracking too. \begin{lstlisting}[language=Python] class Solution(object): def subsetsWithDup(self, nums): """ :type nums: List[int] :rtype: List[List[int]] """ if not nums: return [[]] res = [[]] dic = collections.Counter(nums) for key, val in dic.items(): tmp = [] for lst in res: for i in range(1, val+1): tmp.append(lst+[key]*i) res += tmp return res \end{lstlisting} 77. Combinations \begin{lstlisting} Given two integers n and k, return all possible combinations of k numbers out of 1 ... n. Example: Input: n = 4, k = 2 Output: [ [2,4], [3,4], [2,3], [1,2], [1,3], [1,4], ] \end{lstlisting} Analysis: In this problem, it is difficult for us to generate the results iteratively, the only way we can use the second solution is by filtering and get only the results with the length we want. However, the backtrack can solve the problem easily as we mentioned in Section~\ref{sec_combination}. \begin{lstlisting}[language=Python] def combine(self, n, k): """ :type n: int :type k: int :rtype: List[List[int]] """ ans = [] def C_n_k(d,k,s,curr): if d==k: ans.append(curr) return for i in range(s, n): #curr.append(i+1) #C_n_k(d+1, k, i+1, curr[:]) #curr.pop() C_n_k(d+1, k, i+1, curr+[i+1]) C_n_k(0,k,0,[]) return ans \end{lstlisting} %%%%%%%%%%%%%%%%%%%combination sum%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Combination Sum} 39. 
Combination Sum Given a set of candidate numbers (candidates) \textbf{(without duplicates)} and a target number (target), find all unique combinations in candidates where the candidate numbers sums to target. The same repeated number may be chosen from candidates \textbf{unlimited number} of times. \begin{lstlisting} Note: All numbers (including target) will be positive integers. The solution set must not contain duplicate combinations. Example 1: Input: candidates = [2,3,6,7], target = 7, A solution set is: [ [7], [2,2,3] ] Example 2: Input: candidates = [2,3,5], target = 8, A solution set is: [ [2,2,2,2], [2,3,3], [3,5] ] \end{lstlisting} \textbf{DFS Backtracking}. Analysis: This is still a typical combination problem, the only thing is the return level is when the sum of the path we gained is larger than the target, and we only collect the answer when it is equal. And Because a number can be used unlimited times, so that each time after we used one number, we do not increase the next start position. \begin{lstlisting}[language=Python] def combinationSum(self, candidates, target): """ :type candidates: List[int] :type target: int :rtype: List[List[int]] """ ans = [] candidates.sort() self.combine(candidates, target, 0, [], ans) return ans def combine(self, nums, target, s, curr, ans): if target < 0: return # backtracking if target == 0: ans.append(curr) return for i in range(s, len(nums)): # if nums[i] > target: # return self.combine(nums, target-nums[i], i, curr+[nums[i]], ans) # use i, instead of i+1 because we can reuse \end{lstlisting} 40. Combination Sum II Given a collection of candidate numbers \textbf{(candidates with duplicates)} and a target number (target), find all unique combinations in candidates where the candidate numbers sums to target. Each number in candidates may only \textbf{be used once} in the combination. \begin{lstlisting} Note: All numbers (including target) will be positive integers. The solution set must not contain duplicate combinations. Example 1: Input: candidates = [10,1,2,7,6,1,5], target = 8, A solution set is: [ [1, 7], [1, 2, 5], [2, 6], [1, 1, 6] ] Example 2: Input: candidates = [2,5,2,1,2], target = 5, A solution set is: [ [1,2,2], [5] ] \end{lstlisting} \textbf{Backtracking+Counter}. Because for the first example, if we reuse the code from the previous problem, we will get extra combinations: [7, 1], [2, 1, 5]. To avoid this, we need a dictionary to save all the unique candidates with its corresponding appearing times. For a certain number, it will be used at most its counter times. \begin{lstlisting}[language=Python] def combinationSum2(self, candidates, target): """ :type candidates: List[int] :type target: int :rtype: List[List[int]] """ candidates = collections.Counter(candidates) ans = [] self.combine(list(candidates.items()), target, 0, [], ans) # convert the Counter to a list of (key, item) tuple return ans def combine(self, nums, target, s, curr, ans): if target < 0: return if target == 0: ans.append(curr) return for idx in range(s, len(nums)): num, count = nums[idx] for c in range(count): self.combine(nums, target-num*(c+1), idx+1, curr+[num]*(c+1), ans ) \end{lstlisting} 377. Combination Sum IV (medium) \begin{lstlisting} Given an integer array with all positive numbers and no duplicates, find the number of possible combinations that add up to a positive integer target. 
Example: nums = [1, 2, 3] target = 4 The possible combination ways are: (1, 1, 1, 1) (1, 1, 2) (1, 2, 1) (1, 3) (2, 1, 1) (2, 2) (3, 1) Note that different sequences are counted as different combinations. Therefore the output is 7. Follow up: What if negative numbers are allowed in the given array? How does it change the problem? What limitation we need to add to the question to allow negative numbers? \end{lstlisting} \textbf{DFS + MEMO}. This problem is similar to 39. Combination Sum. For [2, 3, 5], target = 8, comparison: \begin{lstlisting} [2, 3, 5], target = 8 39. Combination Sum. # there is ordering (each time the start index is same or larger than before) [ [2,2,2,2], [2,3,3], [3,5] ] 377. Combination Sum IV, here we have no ordering( each time the start index is the same as before). Try all element. [ [2,2,2,2], [2,3,3], * [3,3,2] * [3,2,3] [3,5], * [5,3] ] \end{lstlisting} \begin{lstlisting}[language=Python] def combinationSum4(self, nums, target): """ :type nums: List[int] :type target: int :rtype: int """ nums.sort() n = len(nums) def DFS(idx, memo, t): if t < 0: return 0 if t == 0: return 1 count = 0 if t not in memo: for i in range(idx, n): count += DFS(idx, memo, t-nums[i]) memo[t] = count return memo[t] return(DFS(0, {}, target)) \end{lstlisting} Because, here we does not need to numerate all the possible solutions, we can use dynamic programming, which will be shown in Section~\ref{}. \subsection{K Sum} In this subsection, we still trying to get subset that sum up to a target. But the length here is fixed. We would have 2, 3, 4 sums normally. Because it is still a combination problem, we can use the \textbf{backtracking} to do. Second, because the fixed length, we can use \textbf{multiple pointers} to build up the potential same lengthed subset. But in some cases, because the length is fixed, we can use \textbf{hashmap} to simplify the complexity. 1. Two Sum Given an array of integers, return \textbf{indices} of the two numbers such that they add up to a specific target. You may assume that each input would have \textbf{exactly} one solution, and you may not use the same element twice. \begin{lstlisting} Example: Given nums = [2, 7, 11, 15], target = 9, Because nums[0] + nums[1] = 2 + 7 = 9, return [0, 1]. \end{lstlisting} \textbf{Hashmap}. Using backtracking or brute force will get us $O(n^2)$ time complexity. We can use hashmap to save the nums in a dictionary. Then we just check target-num in the dictionary. We would get $O(n)$ time complexity. We have two-pass hashmap and one-pass hashmap. \begin{lstlisting}[language=Python] # two-pass hashmap def twoSum(self, nums, target): """ :type nums: List[int] :type target: int :rtype: List[int] """ dict = collections.defaultdict(int) for i, t in enumerate(nums): dict[t] = i for i, t in enumerate(nums): if target - t in dict and i != dict[target-t]: return [i, dict[target-t]] # one-pass hashmap def twoSum(self, nums, target): """ :type nums: List[int] :type target: int :rtype: List[int] """ dict = collections.defaultdict(int) for i, t in enumerate(nums): if target - t in dict: return [dict[target-t], i] dict[t] = i \end{lstlisting} 15. 3Sum Given an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero. Note: The solution set must not contain duplicate triplets. 
For example, given array S = [-1, 0, 1, 2, -1, -4], \begin{lstlisting} A solution set is: [ [-1, 0, 1], [-1, -1, 2] ] \end{lstlisting} Solution: Should use three pointers, no extra space. i is the start point from [0,len-2], l,r is the other two pointers. l=i+1, r=len-1 at the beignning. The saving of time complexity is totally from the sorting algorithm. \begin{lstlisting} [-4,-1,-1,0,1,2] i, l-> ``````<-r \end{lstlisting} How to delete repeat? \begin{lstlisting}[language = Python] def threeSum(self, nums): res = [] nums.sort() for i in xrange(len(nums)-2): if i > 0 and nums[i] == nums[i-1]: #make sure pointer not repeat continue l, r = i+1, len(nums)-1 while l < r: s = nums[i] + nums[l] + nums[r] if s < 0: l +=1 elif s > 0: r -= 1 else: res.append((nums[i], nums[l], nums[r])) l+=1 r-=1 #after the first run, then check duplicate example. while l < r and nums[l] == nums[l-1]: l += 1 while l < r and nums[r] == nums[r+1]: r -= 1 return res \end{lstlisting} Use hashmap: \begin{lstlisting}[language = Python] def threeSum(self, nums): """ :type nums: List[int] :rtype: List[List[int]] """ res =[] nums=sorted(nums) if not nums: return [] if nums[-1]<0 or nums[0]>0: return [] end_position = len(nums)-2 dic_nums={} for i in xrange(1,len(nums)): dic_nums[nums[i]]=i# same result save the last index for i in xrange(end_position): target = 0-nums[i] if i>0 and nums[i] == nums[i-1]: #this is to avoid repeat continue if target<nums[i]: #if the target is smaller than this, we can not find them on the right side break for j in range(i+1,len(nums)): #this is to avoid repeat if j>i+1 and nums[j]==nums[j-1]: continue complement =target - nums[j] if complement<nums[j]: #if the left numbers are bigger than the complement, no need to keep searching break if complement in dic_nums and dic_nums[complement]>j: #need to make sure the complement is bigger than nums[j] res.append([nums[i],nums[j],complement]) return res \end{lstlisting} The following code uses more time \begin{lstlisting}[language = Python] for i in xrange(len(nums)-2): if i > 0 and nums[i] == nums[i-1]: continue l, r = i+1, len(nums)-1 while l < r: if l-1>=i+1 and nums[l] == nums[l-1]: #check the front l += 1 continue if r+1<len(nums) and nums[r] == nums[r+1]: r -= 1 continue s = nums[i] + nums[l] + nums[r] if s < 0: l +=1 elif s > 0: r -= 1 else: res.append((nums[i], nums[l], nums[r])) l += 1; r -= 1 return res \end{lstlisting} 18. 4Sum \begin{lstlisting}[language = Python] def fourSum(self, nums, target): def findNsum(nums, target, N, result, results): if len(nums) < N or N < 2 or target < nums[0]*N or target > nums[-1]*N: # early termination return if N == 2: # two pointers solve sorted 2-sum problem l,r = 0,len(nums)-1 while l < r: s = nums[l] + nums[r] if s == target: results.append(result + [nums[l], nums[r]]) l += 1 r-=1 while l < r and nums[l] == nums[l-1]: l += 1 while l < r and nums[r] == nums[r+1]: r -= 1 elif s < target: l += 1 else: r -= 1 else: # recursively reduce N for i in range(len(nums)-N+1): if i == 0 or (i > 0 and nums[i-1] != nums[i]): findNsum(nums[i+1:], target-nums[i], N-1, result+[nums[i]], results) #reduce nums size, reduce target, save result results = [] findNsum(sorted(nums), target, 4, [], results) return results \end{lstlisting} 454. 4Sum II Given four lists A, B, C, D of integer values, compute how many tuples (i, j, k, l) there are such that A[i] + B[j] + C[k] + D[l] is zero. To make problem a bit easier, all A, B, C, D have same length of N where $0 \leq N \leq 500$. 
All integers are in the range of -228 to 228–1 and the result is guaranteed to be at most 231–1. Example: \begin{lstlisting} Input: A = [ 1, 2] B = [-2,-1] C = [-1, 2] D = [ 0, 2] Output: 2 \end{lstlisting} Explanation: \begin{lstlisting} The two tuples are: 1. (0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0 2. (1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0 \end{lstlisting} Solution: if we use brute force, use 4 for loop, then it is $O(N^4)$. If we use divide and conquer, sum the first half, and save a dictionary (counter), time complexity is $O(2N^2)$. What if we have 6 sum, we can reduce it to $O(2N^3)$, what if 8 sum. \begin{lstlisting}[language = Python] def fourSumCount(self, A, B, C, D): AB = collections.Counter(a+b for a in A for b in B) return sum(AB[-c-d] for c in C for d in D) \end{lstlisting} \subsubsection{Summary} As we have seen from the shown examples in this section, to solve the combination problem, backtrack shown in Section~\ref{sec_combination} offers a universal solution. Also, there is another iterative solution which suits the power set purpose. And I would include its code here again: \begin{lstlisting}[language = Python] def subsets(self, nums): result = [[]] #use two dimensional, which already have [] one element for num in nums: new_results = [] for r in result: new_results.append(r + [num]) result += new_results return result \end{lstlisting} If we have duplicates, how to handle in the backtrack?? In the iterative solution, we can replace the array with a dictionary saves the counts. \subsection{Permutation} 46. Permutations \begin{lstlisting} Given a collection of distinct numbers, return all possible permutations. For example, [1,2,3] have the following permutations: [ [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1] ] \end{lstlisting} 47. Permutations II Given a collection of numbers that might contain duplicates, return all possible unique permutations. For example, \begin{lstlisting} [1,1,2] have the following unique permutations: [ [1,1,2], [1,2,1], [2,1,1] ] \end{lstlisting} 301. Remove Invalid Parentheses Remove the minimum number of invalid parentheses in order to make the input string valid. Return all possible results. Note: The input string may contain letters other than the parentheses ( and ). Examples: \begin{lstlisting} "()())()" -> ["()()()", "(())()"] "(a)())()" -> ["(a)()()", "(a())()"] ")(" -> [""] \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Merge List %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Merge and Partition} \subsection{Merge Lists} We can use divide and conquer (see the merge sort) and the priority queue. \subsection{Partition Lists} Partition of lists can be converted to subarray, combination, subsequence problems. For example, \begin{enumerate} \item 416. Partition Equal Subset Sum (combination) \item 698. Partition to K Equal Sum Subsets \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%% Sweep Line %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Intervals} \label{sec_sweep_line} % \documentclass[../../main.tex]{subfiles} Sweep Line is a type of algorithm that mainly used to solve problems with intervals of one-dimensional. Let us look at one example: 1. 253. Meeting Rooms II Given an array of meeting time intervals consisting of start and end times [[s1,e1],[s2,e2],...] 
(si < ei), find the minimum number of conference rooms required.

\begin{lstlisting}
Example 1:

Input: [[0, 30],[5, 10],[15, 20]]
Output: 2

Example 2:

Input: [[7,10],[2,4]]
Output: 1
\end{lstlisting}

It helps a lot to first draw one example with coordinates.

\begin{figure}[h]
    \centering
    \includegraphics[width = 0.6\columnwidth]{fig/sweep_line_253.png}
    \caption{Interval questions}
    \label{fig:interval}
\end{figure}

First, the simplest situation is that we only need one meeting room: this happens when there is no intersection between the time intervals. If we add one interval that intersects with just one of the previous intervals, we need two conference rooms. So to find the minimum number of conference rooms, we need to find the maximum number of intervals that intersect at the same time. The most naive solution is to scan all time slots in an outer loop and, in an inner loop, go through all the intervals; if the time slot lies inside an interval, we increase the meeting-room counter for that slot. This gives a time complexity of $O(n*m)$, where $n$ is the number of intervals and $m$ is the total number of time slots. The Python code is as follows; unfortunately, this solution gets a TLE error.

\begin{lstlisting}[language = Python]
# Definition for an interval.
# class Interval(object):
#     def __init__(self, s=0, e=0):
#         self.start = s
#         self.end = e
from collections import defaultdict
from heapq import heappush, heappop
from sys import maxint

class Solution(object):
    def minMeetingRooms(self, intervals):
        """
        :type intervals: List[Interval]
        :rtype: int
        """
        if not intervals:
            return 0
        # solution 1: voting, time complexity is O(e1-s1), 71/77 tests, TLE
        votes = defaultdict(int)
        num_rooms = 0
        for interval in intervals:
            s = interval.start
            e = interval.end
            for i in range(s+1, e+1):
                votes[i] += 1
                num_rooms = max(num_rooms, votes[i])
        return num_rooms
\end{lstlisting}

\subsection{Speedup with Sweep Line}
Now let us see how to speed up this process with the Sweep Line method. For the sweep line, we have three basic implementations: one-dimensional, min-heap based, or map based.

\subsubsection{One-dimensional Implementation}
To get the maximum number of intersecting intervals, it is not necessary to scan all the time slots; it is enough to scan the key slots, namely the starts and the ends. We open an array and put every start and end slot into it, marking a slot with $1$ for a start and $0$ for an end, and then sort this array. How do we get the maximum intersection from it? We go through the sorted array: if we see a start, the current number of rooms needed increases by one; if we encounter an end slot, one meeting room is freed, so we decrease the current number of ongoing meetings by one. A separate variable tracks the maximum number of rooms needed over the whole process. Now the time complexity is decided by the $2n$ slots together with the sorting algorithm, which makes the whole time complexity $O(n\log n)$ with space complexity $O(n)$. This sped-up approach is called the Sweep Line algorithm. Before we write the code, we had better check a \textit{special case}: what if one slot is marked as a start in one interval but is the end of another interval? Then we must not increase the count first but decrease it first, so the sorting should be based on the first element of the tuple and then on the second element of the tuple.
For example, the simple case $[[13,15],[1,13]]$, we only need maximum of one meeting room. Thus it can be implemented as: \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{fig/sweep_line_one_dimension.png} \caption{One-dimensional Sweep Line} \label{fig:one_dim_sl} \end{figure} \begin{lstlisting}[language=Python] def minMeetingRooms(self, intervals): if not intervals: return 0 #solution 2 slots = [] # put slots into one-dimensional axis for i in intervals: slots.append((i.start, 1)) slots.append((i.end, 0)) # sort these slots on this dimension #slots.sort(key = lambda x: (x[0], x[1])) slots.sort() # now execute the counting crt_room, max_room = 0, 0 for s in slots: if s[1]==0: # if it ends, decrease crt_room-=1 else: crt_room+=1 max_room = max(max_room, crt_room) return max_room \end{lstlisting} \subsubsection{Min-heap Implementation} \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{fig/sweep_line_min_heap.png} \caption{Min-heap for Sweep Line} \label{fig:min_heap_sl} \end{figure} Instead of opening an array to save all the time slots, we can directly sort the intervals in the order of the start time. We can see Fig.~\ref{fig:min_heap_sl}, we go through the intervals and visit their end time, the first one we encounter is $30$, we put it in a min-heap, and then we visit the next interval $[5, 10]$, $5$ is smaller than the previous end time $30$, it means this interval intersected with a previous interval, so the number of maximum rooms increase $1$, we get $2$ rooms now. We put $10$ into the min-heap. Next, we visit $[15, 20]$, $15$ is larger than the first element in the min-heap $10$, it means that these two intervals can be merged into one $[5, 20]$, so we need to update the end time $10$ to $20$. This way, the time complexity is still the same which is decided by the sorting algorithm. While the space complexity is decided by real situation, it varies from $O(1)$ (no intersection) to $O(n)$ (all the meetings are intersected at at least one time slot). \begin{lstlisting}[language=Python] def minMeetingRooms(self, intervals): if not intervals: return 0 #solution 2 intervals.sort(key=lambda x:x.start) h = [intervals[0].end] rooms = 1 for i in intervals[1:]: s,e=i.start, i.end e_before = h[0] if s<e_before: #overlap heappush(h, i.end) rooms+=1 else: #no overlap #merge heappop(h) #kick out 10 in our example heappush(h,e) # replace 10 with 20 return rooms \end{lstlisting} % 2、multiset:这次是先对每个区间的起点排序,然后依次将每个区间的终点放在一个集合中。如果下一个区间的起点大于等于之前某个区间的终点,就将其从集合中删除,每次需要统计一下当前所需的最大办公室数量。这个版本的时间复杂度还是O(nlogn),但是空间复杂度却变成output-dependent的了,最多是O(n),最少是O(1)。 \subsubsection{Map-based Implementation} % 3、map:我们用一个map来存储重合区域,即每个区间的起点代表一个区间的开始,会将重叠区域+1,每个区间的结束点代表一个区间的结束,会将重叠区域-1。因此我们可以利用这个性质,结合STL中的map来实现(实质上这个算法和“一维向量”版本非常像,只是采用的数据结构不同而已)。 % --------------------- % 作者:魔豆Magicbean % 来源:CSDN % 原文:https://blog.csdn.net/magicbean2/article/details/74199529 % 版权声明:本文为博主原创文章,转载请附上博文链接! \begin{lstlisting}[language=Python] class Solution { public: int minMeetingRooms(vector<Interval>& intervals) { map<int, int> mp; for (auto val : intervals) { ++mp[val.start]; --mp[val.end]; } int max_room = 0, crt_room = 0; for (auto val : mp) { crt_room += val.second; max_room = max(max_room, crt_room); } return max_room; } }; \end{lstlisting} \subsection{LeetCode Problems} \begin{enumerate} \item \textbf{986. Interval List Intersections} Given two lists of closed intervals, each list of intervals is pairwise disjoint and in sorted order. Return the intersection of these two interval lists. 
\begin{lstlisting}[numbers=none] Input: A = [[0,2],[5,10],[13,23],[24,25]], B = [[1,5],[8,12],[15,24],[25,26]] Output: [[1,2],[5,5],[8,10],[15,23],[24,24],[25,25]] Reminder: The inputs and the desired output are lists of Interval objects, and not arrays or lists. \end{lstlisting} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Intersection %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Intersection} For problems to get intersections of lists, we can use hashmap, which takes $O(m+n)$ time complexity. Also, we can use sorting at first and use two pointers one start from the start of each array. Examples are shown as below; \begin{enumerate} \item 349. Intersection of Two Arrays (Easy) Given two arrays, write a function to compute their intersection. Example: \begin{lstlisting} Given nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2]. \end{lstlisting} Note: \begin{itemize} \item Each element in the result must be unique. \item The result can be in any order. \end{itemize} Solution 1: Using hashmap, here we use set to convert, this takes 43ms. \begin{lstlisting}[language = Python] def intersection(self, nums1, nums2): """ :type nums1: List[int] :type nums2: List[int] :rtype: List[int] """ if not nums1 or not nums2: return [] if len(nums1) > len(nums2): nums1, nums2 = nums2, nums1 ans = set() nums1 = set(nums1) for e in nums2: if e in nums1: ans.add(e) return list(ans) \end{lstlisting} Solution2: sorting at first, and then use pointers. Take 46 ms. \begin{lstlisting}[language = Python] def intersection(self, nums1, nums2): """ :type nums1: List[int] :type nums2: List[int] :rtype: List[int] """ nums1.sort() nums2.sort() r = set() i, j = 0, 0 while i < len(nums1) and j < len(nums2): if nums1[i] < nums2[j]: i += 1 elif nums1[i] > nums2[j]: j += 1 else: r.add(nums1[i]) i += 1 j += 1 return list(r) \end{lstlisting} \item 350. Intersection of Two Arrays II(Easy) Given two arrays, write a function to compute their intersection. Example: \begin{lstlisting} Given nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2, 2]. \end{lstlisting} Note: \begin{itemize} \item Each element in the result should appear as many times as it shows in both arrays. \item The result can be in any order. \end{itemize} Follow up: \begin{enumerate} \item What if the given array is already sorted? How would you optimize your algorithm? \item What if nums1's size is small compared to nums2's size? Which algorithm is better? \item What if elements of nums2 are stored on disk, and the memory is limited such that you cannot load all elements into the memory at once? \end{enumerate} \end{enumerate} \section{Miscellanous Questions} \begin{examples}[resume] \item \textbf{283. Move Zeroes. (Easy)} Given an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements. Note: \begin{enumerate} \item You must do this in-place without making a copy of the array. \item Minimize the total number of operations. \end{enumerate} \begin{lstlisting}[language=Python] Example: Input: [0,1,0,3,12] Output: [1,3,12,0,0] \end{lstlisting} \textbf{Solution 1: Find All Zeros Subarray.} If we found the first all zeros subarray [0, ..., 0] + [x], and we can swap this subarray with the first non-zero element as swap last 0 with x, swap second last element with x, ..., and so on. Therefore, if 0 is at first index, one zero, then it takes O(n), if another 0, at index 1, it takes n-1+n-2 = 2n. 
It is bit tricky to compute the complexity analysis. The upper bound is $O(n^2)$. \end{examples} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Exercises %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Exercises} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %%%%% Subsequence % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Subsequence with (DP)} \begin{enumerate} \item 594. Longest Harmonious Subsequence We define a harmonious array is an array where the difference between its maximum value and its minimum value is exactly 1. Now, given an integer array, you need to find the length of its longest harmonious subsequence among all its possible subsequences. Example 1: \begin{lstlisting} Input: [1,3,2,2,5,2,3,7] Output: 5 Explanation: The longest harmonious subsequence is [3,2,2,2,3]. \end{lstlisting} \textit{Note: The length of the input array will not exceed 20,000.} Solution: at first, use a Counter to save the whole set. Then visit the counter dictionary, to check key+1 and key-1, only when the item is not zero, we can count it as validate, or else it is 0. \begin{lstlisting}[language = Python] from collections import Counter class Solution: def findLHS(self, nums): """ :type nums: List[int] :rtype: int """ if not nums or len(nums)<2: return 0 count=Counter(nums) #the list is sorted by the key value maxLen = 0 for key,item in count.items(): #to visit the key: item in the counter if count[key+1]: #because the list is sorted, so we only need to check key+1 maxLen = max(maxLen,item+count[key+1]) # if count[key-1]: # maxLen=max(maxLen, item+count[key-1]) return maxLen \end{lstlisting} \item 521. Longest Uncommon Subsequence I Given a group of two strings, you need to find the longest uncommon subsequence of this group of two strings. The longest uncommon subsequence is defined as the longest subsequence of one of these strings and this subsequence should not be any subsequence of the other strings. A subsequence is a sequence that can be derived from one sequence by deleting some characters without changing the order of the remaining elements. Trivially, any string is a subsequence of itself and an empty string is a subsequence of any string. The input will be two strings, and the output needs to be the length of the longest uncommon subsequence. If the longest uncommon subsequence doesn’t exist, return -1. Example 1: \begin{lstlisting} Input: "aba", "cdc" Output: 3 Explanation: The longest uncommon subsequence is "aba" (or "cdc"), because "aba" is a subsequence of "aba", but not a subsequence of any other strings in the group of two strings. \end{lstlisting} \textit{Note:} \textit{Both strings’ lengths will not exceed 100.} \textit{Only letters from a ~ z will appear in input strings.} Solution: if we get more examples, we could found the following rules, “aba”,”aba” return -1, \begin{lstlisting}[language = Python] def findLUSlength(self, a, b): """ :type a: str :type b: str :rtype: int """ if len(b)!=len(a): return max(len(a),len(b)) #length is the same return len(a) if a!=b else -1 \end{lstlisting} \item 424. Longest Repeating Character Replacement Given a string that consists of only uppercase English letters, you can replace any letter in the string with another letter at most k times. Find the length of a longest substring containing all repeating letters you can get after performing the above operations. 
\textit{Note:} \textit{Both the string’s length and k will not exceed 104.} Example 1: \begin{lstlisting} Input: s = "ABAB", k = 2 Output: 4 \end{lstlisting} Explanation: Replace the two 'A's with two 'B's or vice versa. Example 2: \begin{lstlisting} Input: s = "AABABBA", k = 1 Output: 4 \end{lstlisting} Explanation: Replace the one 'A' in the middle with 'B' and form "AABBBBA". The substring "BBBB" has the longest repeating letters, which is 4. Solution: the brute-force recursive solution for this, is try to replace any char into another when it is not equal or choose not too. LTE \begin{lstlisting}[language = Python] #brute force, use recursive function to write brute force solution def replace(news, idx, re_char, k): nonlocal maxLen if k==0 or idx==len(s): maxLen = max(maxLen, getLen(news)) return if s[idx]!=re_char: #replace news_copy=news[:idx]+re_char+news[idx+1:] replace(news_copy, idx+1, re_char, k-1) replace(news[:], idx+1, re_char,k) #what if we only have one char # for char1 in chars.keys(): # replace(s[:],0,char1, k) \end{lstlisting} To get the BCR, think about the sliding window. The longest repeating string we can by number of replacement = `length of string max(numer of occurence of letter i), i=’A’ to ‘Z’. With the constraint, which means the equation needs to be $\leq k$. So we can use sliding window to record the max occurence, and when the constraint is violated, we shrink the window. Given an example, strs= “BBCABBBAB”, k=2, when i=0, and j=7, 8–5=3>2, which is at A, we need to shrink it, the maxCharCount changed to 4, i=1, so that 8–1–4=3, i=2, 8–2–3=3, 8–3–3=2, so i=3, current length is 5. \begin{lstlisting}[language = Python] def characterReplacement(self, s, k): """ :type s: str :type k: int :rtype: int """ i,j = 0,0 #sliding window counter=[0]*26 ans = 0 maxCharCount = 0 while j<len(s): counter[ord(s[j])-ord('A')]+=1 maxCharCount = max(maxCharCount, counter[ord(s[j])-ord('A')]) while j-i+1-maxCharCount>k: #now shrink the window counter[ord(s[i])-ord('A')]-=1 i+=1 #updata max maxCharCount=max(counter) ans=max(ans, j-i+1) j+=1 return ans \end{lstlisting} \item 395. Longest Substring with At Least K Repeating Characters Find the length of the longest substring T of a given string (consists of lowercase letters only) such that every character in T appears no less than k times. Example 1: \begin{lstlisting} Input: s = "aaabb", k = 3 Output: 3 \end{lstlisting} The longest substring is "aaa", as 'a' is repeated 3 times. Example 2: \begin{lstlisting} Input: s = "ababbc", k = 2 Output: 5 \end{lstlisting} The longest substring is "ababb", as 'a' is repeated 2 times and 'b' is repeated 3 times. Solution: use dynamic programming with memo: Cons: it takes too much space, and with LTE. 
\begin{lstlisting}[language = Python] from collections import Counter, defaultdict class Solution: def longestSubstring(self, s, k): """ :type s: str :type k: int :rtype: int """ if not s: return 0 if len(s)<k: return 0 count = Counter(char for char in s) print(count) memo=[[None for col in range(len(s))] for row in range(len(s))] def cut(start,end,count): if start>end: return 0 if memo[start][end]==None: if any(0<item<k for key,item in count.items()): newCounterF=count.copy() newCounterF[s[start]]-=1 newCounterB=count.copy() newCounterB[s[end]]-=1 #print(newsF,newsB) memo[start][end]= max(cut(start+1, end, newCounterF), cut(start, end-1, newCounterB)) else: memo[start][end] = end-start+1 return memo[start][end] return cut(0,len(s)-1,count) \end{lstlisting} Now, use sliding window, we use a pointer mid, what start from 0, if the whole string satisfy the condition, return len(s). Otherwise, use two while loop to separate the string into three substrings: left, mid, right. left satisfy, mid unsatisfy, right unknown. \begin{lstlisting}[language = Python] from collections import Counter, defaultdict class Solution: def longestSubstring(self, s, k): """ :type s: str :type k: int :rtype: int """ if not s: return 0 if len(s)<k: return 0 count = Counter(char for char in s) mid=0 #on the left side, from 0-mid, satisfied elments while mid<len(s) and count[s[mid]]>=k: mid+=1 if mid==len(s): return len(s) left = self.longestSubstring(s[:mid],k) #"ababb" #from pre_mid - cur_mid, get rid of those cant satisfy the condition while mid<len(s) and count[s[mid]]<k: mid+=1 #now the right side keep doing it right = self.longestSubstring(s[mid:],k) return max(left,right) \end{lstlisting} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Subset %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Subset} \label{sec_array_subset} 216. Combination Sum III Find all possible combinations of \textbf{k numbers} that add up to a number n, given that only numbers from 1 to 9 can be used and each combination should be a unique set of numbers. \begin{lstlisting}[numbers=none] Note: All numbers will be positive integers. The solution set must not contain duplicate combinations. Example 1: Input: k = 3, n = 7 Output: [[1,2,4]] Example 2: Input: k = 3, n = 9 Output: [[1,2,6], [1,3,5], [2,3,4]] \end{lstlisting} \begin{lstlisting}[language=Python] def combinationSum3(self, k, n): """ :type k: int :type n: int :rtype: List[List[int]] """ # each only used one time def combine(s, curr, ans, t, d, k, n): if t < 0: return if d == k: if t == 0: ans.append(curr) return for i in range(s, n): num = i+1 combine(i+1, curr+[num], ans, t-num, d+1, k, n) ans = [] combine(0, [], ans, n, 0, k, 9) return ans \end{lstlisting} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% Intersection %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Intersection} 160. Intersection of Two Linked Lists (Easy) Write a program to find the node at which the intersection of two singly linked lists begins. For example, the following two linked lists: \begin{lstlisting}[numbers=none] A: a1 -> a2 \ c1 -> c2 -> c3 / B: b1 -> b2 -> b3 \end{lstlisting} begin to intersect at node c1. Notes: \begin{itemize} \item If the two linked lists have no intersection at all, return null. \item The linked lists must retain their original structure after the function returns. 
\item You may assume there are no cycles anywhere in the entire linked structure. \item Your code should preferably run in O(n) time and use only O(1) memory. \end{itemize} \end{document}
{ "alphanum_fraction": 0.6252445978, "avg_line_length": 40.8307560137, "ext": "tex", "hexsha": "81bc86aae3333c160ede1a75bc1056e13aac8cbc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_line_length": 1473, "max_stars_count": null, "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 25775, "size": 95054 }
\chapter{Nanopype Supplement} \label{cha:supplement:nanopype} \section{Listings} \label{sec:supplement:nanopype:listings} \lstinputlisting[language=bash, caption=Snakemake installation wrapper, label=lst:nanopype:build]{listings/nanopype/snakemake_build.txt} %\label{sec:supplement:nanopype:listings} %\lstinputlisting[language=yaml, caption=Nanopype environment configuration example, label=lst:nanopype:env]{listings/nanopype/env.txt} %\label{sec:supplement:nanopype:listings} %\lstinputlisting[language=yaml, caption=Nanopype workflow configuration example, label=lst:nanopype:workflow]{listings/nanopype/nanopype.txt} %\section{Report} \label{sec:supplement:nanopype:report} \includepdf[pages=-]{figures/nanopype/V65.pdf} \chapter{STRique Supplement} %\section{Tables} %\label{sec:supplement:strique:tables} \setlength\LTleft{0pt} \setlength\LTright{0pt} \begin{longtable}{lrrrrrr} \caption[Cas12 target enrichment throughput per flow cell]{Cas12 target enrichment throughput per flow cell} \\ \label{tab:strique:Cas12} \\ run & reads & valid & wild type & expanded & target & sample \\ \hline FAH91937 & 74 & 67 & 36 & 31 & C9orf72 & 24/5\#2 \\ FAH91937 & 5 & 3 & 3 & NA & FMR1 & 24/5\#2 \\ FAJ02524 & 139 & 87 & 51 & 36 & C9orf72 & 24/5\#2 \\ FAJ02524 & 2 & 0 & 0 & NA & FMR1 & 24/5\#2 \\ FAJ03272 & 14 & 1 & 0 & 1 & C9orf72 & 24/5\#2 \\ FAK02383 & 10 & 7 & 2 & 5 & C9orf72 & 24/5\#2 \\ FAK02383 & 5 & 5 & 5 & NA & FMR1 & 24/5\#2 \\ FAK02402 & 1 & 1 & 1 & 0 & C9orf72 & 24/5\#2 \\ FAK57423 & 22 & 18 & 9 & 9 & C9orf72 & 24/5\#2 \\ FAK57423 & 25 & 22 & 22 & NA & FMR1 & 24/5\#2 \\ FAK67802 & 102 & 93 & 53 & 40 & C9orf72 & 24/5\#2 \\ FAK67802 & 63 & 55 & 55 & NA & FMR1 & 24/5\#2 \\ PAD01039 & 228 & 122 & 72 & 50 & C9orf72 & 24/5\#2 \\ PAD01039 & 19 & 10 & 10 & NA & FMR1 & 24/5\#2 \\ PAD01249 & 601 & 436 & 290 & 146 & C9orf72 & 24/5\#2 \\ PAD01249 & 623 & 491 & 491 & NA & FMR1 & 24/5\#2 \\ PAD42366 & 152 & 122 & 69 & 53 & C9orf72 & 24/5\#2 \\ PAD42366 & 102 & 90 & 90 & NA & FMR1 & 24/5\#2 \\ FAH66294 & 28 & 21 & 21 & NA & C9orf72 & iPS6 \\ FAH66294 & 1 & 0 & 0 & 0 & FMR1 & iPS6 \\ FAJ02378 & 350 & 291 & 291 & NA & C9orf72 & iPS6 \\ FAJ02378 & 4 & 2 & 0 & 2 & FMR1 & iPS6 \\ PAD01034 & 151 & 96 & 65 & NA & C9orf72 & iPS6 \\ PAD01034 & 5 & 4 & 0 & 4 & FMR1 & iPS6 \\ PAD01413 & 779 & 565 & 564 & NA & C9orf72 & iPS6 \\ PAD01413 & 317 & 166 & 0 & 166 & FMR1 & iPS6 \\ FAK02017 & 2 & 0 & 0 & NA & C9orf72 & iPS7 \\ FAK02017 & 28 & 11 & 0 & 11 & FMR1 & iPS7 \\ FAK58936 & 283 & 261 & 261 & NA & C9orf72 & iPS7 \\ FAK58936 & 181 & 147 & 0 & 147 & FMR1 & iPS7 \end{longtable} \setlength\LTleft{0pt} \setlength\LTright{0pt} \begin{longtable}{lrrrrrr} \caption[Cas9 target enrichment throughput per flow cell]{Cas9 target enrichment throughput per flow cell} \\ \label{tab:strique:Cas9} \\ run & reads & valid & wild type & expanded & target & sample \\ \hline FAJ04502 & 639 & 463 & 284 & 179 & C9orf72 & 24/5\#2 \\ FAJ04502 & 316 & 233 & 233 & NA & FMR1 & 24/5\#2 \\ FAJ04588 & 94 & 67 & 38 & 29 & C9orf72 & 24/5\#2 \\ FAJ04588 & 46 & 39 & 39 & NA & FMR1 & 24/5\#2 \\ FAK01734 & 2 & 2 & 2 & 0 & C9orf72 & 24/5\#2 \\ FAK01734 & 1 & 1 & 1 & NA & FMR1 & 24/5\#2 \\ FAK01877 & 36 & 31 & 17 & 14 & C9orf72 & 24/5\#2 \\ FAK01877 & 56 & 47 & 47 & NA & FMR1 & 24/5\#2 \\ FAK57718 & 73 & 60 & 37 & 23 & C9orf72 & 24/5\#2 \\ FAK57718 & 69 & 59 & 59 & NA & FMR1 & 24/5\#2 \\ FAK58137 & 157 & 128 & 75 & 53 & C9orf72 & 24/5\#2 \\ FAK58137 & 127 & 109 & 109 & NA & FMR1 & 24/5\#2 \\ FAK62030 & 151 & 126 & 82 & 44 & C9orf72 & 24/5\#2 \\ FAK62030 & 100 & 79 & 
79 & NA & FMR1 & 24/5\#2 \\ FAK67994 & 1041 & 884 & 575 & 309 & C9orf72 & 24/5\#2 \\ FAK67994 & 367 & 319 & 319 & NA & FMR1 & 24/5\#2 \\ PAD43259 & 364 & 298 & 187 & 111 & C9orf72 & 24/5\#2 \\ PAD43259 & 197 & 154 & 154 & NA & FMR1 & 24/5\#2 \end{longtable}
{ "alphanum_fraction": 0.4777418049, "avg_line_length": 50.9484536082, "ext": "tex", "hexsha": "c097e9e68b018a33ce0a685e0f4bbf4fbe1a5696", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "27fa68f016636ca7e415cc3d814645f35fc50028", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "giesselmann/dissertation", "max_forks_repo_path": "tex/sections/supplement.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "27fa68f016636ca7e415cc3d814645f35fc50028", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "giesselmann/dissertation", "max_issues_repo_path": "tex/sections/supplement.tex", "max_line_length": 143, "max_stars_count": null, "max_stars_repo_head_hexsha": "27fa68f016636ca7e415cc3d814645f35fc50028", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "giesselmann/dissertation", "max_stars_repo_path": "tex/sections/supplement.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2048, "size": 4942 }
\subsection{Primary Documentation Sources}
\label{sec:primary-sources}

Describe here the proposed location of the primary documentation sources.
These are a subset of the identified sources listed in SITCOMTN-012.
{ "alphanum_fraction": 0.8194444444, "avg_line_length": 36, "ext": "tex", "hexsha": "e1ee4388f2abe3679213dd85aa1d051330556158", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "666f37bd6322f89b6bf394972bceb1c79dca1f17", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "lsst-sitcom/sitcomtn-014", "max_forks_repo_path": "PrimarySources.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "666f37bd6322f89b6bf394972bceb1c79dca1f17", "max_issues_repo_issues_event_max_datetime": "2021-09-14T19:22:32.000Z", "max_issues_repo_issues_event_min_datetime": "2021-09-14T19:22:32.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "lsst-sitcom/sitcomtn-014", "max_issues_repo_path": "PrimarySources.tex", "max_line_length": 142, "max_stars_count": null, "max_stars_repo_head_hexsha": "666f37bd6322f89b6bf394972bceb1c79dca1f17", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "lsst-sitcom/sitcomtn-014", "max_stars_repo_path": "PrimarySources.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 45, "size": 216 }
% thesisdefs.tex % This is mostly adapted from withesis.cls. The original copyright % notice for withesis.cls follows, preceded by two percent signs (%%): %% withesis.cls %% LaTeX Style file for the University of Wisconsin-Madison Thesis Format %% Adapted from the Purdue University Thesis Format %% Originally by Dave Kraynie %% Edits by Darrell McCauley %% Adapted to UW-Madison format by Eric Benedict (Noted with <EB>) %% Updated to LaTeX2e by Eric Benedict 24 July 00 %% %%============================================================================= %% Licensed under the Perl Artistic License. %% see: http://www.ctan.org/tex-archive/help/Catalogue/licenses.artistic.html %% for more info... %%============================================================================= % withesis.cls is available from CTAN. The modifications to this file % are also licensed under the Perl Artistic License. % --wb, 2008 \makeatletter \newcounter {tocpage} \newcounter {lofpage} \newcounter {lotpage} \newcounter {listofheading} \newcommand\@thesistitlesmallskip{0.2in} \newcommand\@thesistitlemedskip{0.4in} \newcommand\@thesistitlebigskip{0.8in} \newcommand{\degree}[1]{\gdef\@degree{#1}} \newcommand{\project}{\gdef\@doctype{A masters project report}} \newcommand{\prelim}{\gdef\@doctype{A preliminary report}} \newcommand{\thesis}{\gdef\@doctype{A thesis}} \newcommand{\dissertation}{\gdef\@doctype{A dissertation}} \newcommand{\department}[1]{\gdef\@department{(#1)}} \newenvironment{titlepage} {\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty} % \c@page\z@ -- deleted: count title page in thesis }{\if@restonecol\twocolumn \else \newpage \fi} \gdef\@degree{Doctor of Philosophy} %Default is PhD \gdef\@doctype{A dissertation } %Default is dissertation \gdef\@department{Nuclear Engineering \& Engineering Physics} \gdef\@defensedate{16 August 2021} \gdef\@committee{ \setstretch{1.0} \footnotesize Sunil S. Chirayath, Associate Professor, Nuclear Engineering, Texas A\&M University\\ Douglass L. Henderson, Spangler Professor, Nuclear Engineering \& Engineering Physics\\ Benjamin A. Lindley, Assistant Professor, Nuclear Engineering \& Engineering Physics\\ Robert D. Nowak, Nosbusch Professor, Electrical \& Computer Engineering\\ Paul P.H. Wilson, Grainger Professor, Nuclear Engineering \& Engineering Physics } \renewcommand{\maketitle}{% \begin{titlepage} %----------------------------------------------------------------------------- % -- The thesis office doesn't like thanks on title page. Put it in % -- the acknowledgments. This is here so you don't have to change % -- your titlepage when converting from report style. 
-> from Purdue, but I % left it here since it seems compatible with UW-Madison, Eric %----------------------------------------------------------------------------- \def\thanks##1{\typeout{Warning: `thanks' deleted from thesis titlepage.}} \let\footnotesize\small \let\footnoterule\relax \setcounter{page}{1} \begin{center} {\textbf{\expandafter\expandafter{\Large \@title}}} \\[\@thesistitlesmallskip] by \\[\@thesistitlesmallskip] {\Large \@author} \\[\@thesistitlemedskip] \@doctype submitted in partial fulfillment of \\ the requirements for the degree of \\[\@thesistitlemedskip] {\Large \@degree} \\[\@thesistitlesmallskip] {\large \@department} \\[\@thesistitlesmallskip] at the \\[\@thesistitlesmallskip] {\large UNIVERSITY OF WISCONSIN--MADISON} \\[\@thesistitlesmallskip] {\large \@date} \\[\@thesistitlebigskip] \end{center} \hspace*{-0.5cm}Date of final oral examination: \@defensedate \\[\@thesistitlesmallskip] \hspace*{-0.5cm}{\small The dissertation is approved by the following members of the Final Oral Committee:}\\ \@committee \end{titlepage} \setcounter{footnote}{0} \setcounter{page}{1} %title page is NOT counted \let\thanks\relax \let\maketitle\relax \let\degree\relax \let\project\relax \let\prelim\relax \let\department\relax \gdef\@thanks{}\gdef\@degree{}\gdef\@doctype{} \gdef\@department{} %\gdef\@author{}\gdef\@title{} } %============================================================================= % ABSTRACT %============================================================================= % The abstract should begin with two single-spaced lines describing % the author and title in a standard format. After these lines comes % the standard abstract. %============================================================================= \def\abstract{ \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} \relax\markboth{Abstract}{Abstract}} \def\endabstract{\par\newpage} %============================================================================= % UMI ABSTRACT %============================================================================= % The UMI abstract should begin with the author and title in a standard format. % After the author comes the advisor and university. After these lines comes % a bunch of double spaced text to make up the standard abstract. % After the abstract, the advisor's approval signature follows. % This page is not numbered and is delivered seperately to the thesis office. %============================================================================= \def\advisortitle#1{\gdef\@advisortitle{#1}} \def\advisorname#1{\gdef\@advisorname{#1}} \gdef\@advisortitle{Professor} \gdef\@advisorname{Cheer E.\ Place} \def\umiabstract{ \thispagestyle{empty} \addtocounter{page}{-1} \begin{center} {\textbf{\expandafter\uppercase\expandafter{\@title}}}\\ \vspace{12pt} \@author \\ \vspace{12pt} Under the supervision of \@advisortitle\ \@advisorname\\ At the University of Wisconsin-Madison \end{center} } \def\endumiabstract{\vfill \hfill\@advisorname\par\newpage} %============================================================================ % VERBATIMFILE %============================================================================ % \verbatimfile{<filename>} for verbatim inclusion of a file % - Note that the precise layout of line breaks in this file is important! 
% - added the \singlespace - EB %============================================================================ \def\verbatimfile#1{\begingroup \singlespace \@verbatim \frenchspacing \@vobeyspaces \input#1 \endgroup } %============================================================================= % SEPARATOR Pages % Creates a blank page with a text centered horizontally and vertically. % The page is neither counted nor numbered. % These pages are required in the thesis format before sections such % as appendices, vita, bibliography, etc. %============================================================================= \def\separatorpage#1{ \newpage \thispagestyle{empty} \addtocounter{page}{-1} \null \vfil\vfil \begin{center} {\textbf{#1}} \end{center} \vfil\vfil \newpage} %============================================================================= % COPYRIGHTPAGE %============================================================================= % The copyright must do the following: % - start a new page with no number % - place the copyright text centered at the bottom. %============================================================================= \def\copyrightpage{ \newpage \thispagestyle{empty} % No page number \addtocounter{page}{-1} \chapter*{} % Required for \vfill to work \begin{center} \vfill \copyright\ Copyright by \@author\ \@date\\ All Rights Reserved \end{center}} %============================================================================= % GLOSSARY %============================================================================= % The glossary environment must do the following: % - produce the table of contents entry for the glossary % - start a new page with GLOSSARY centered two inches from the top %============================================================================= %\def\glossary{ % \chapter*{GLOSSARY} % \addcontentsline{toc}{chapter}{Glossary}} %\def\endglossary{\par\newpage} %============================================================================= % NOMENCLATURE %============================================================================= % The nomenclature environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with NOMENCLATURE centered two inches from the top %============================================================================= \def\nomenclature{\separatorpage{DISCARD THIS PAGE} \chapter*{Nomenclature} \addcontentsline{toc}{chapter}{NOMENCLATURE}} \def\endnomenclature{\par\newpage} %============================================================================= % CONVENTIONS %============================================================================= % The conventions environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with CONVENTIONS centered two inches from the top %============================================================================= \def\conventions{\separatorpage{DISCARD THIS PAGE} \chapter*{Conventions} \addcontentsline{toc}{chapter}{CONVENTIONS}} \def\endconventions{\par\newpage} %============================================================================= % COLOPHON %============================================================================= % The colophon environment must do the following: % - produce the table of contents entry for the nomenclature section % - start a new page with COLOPHON centered two inches from the top 
%============================================================================= \def\colophon{\separatorpage{DISCARD THIS PAGE} \chapter*{Colophon} \addcontentsline{toc}{chapter}{Colophon}} \def\endcolophon{\par\newpage} %============================================================================= % LIST OF SYMBOLS %============================================================================= % The list of symbols environment must do the following: % - produce the table of contents entry for the list of symbols section % - start a new page with LIST OF SYMBOLS centered two inches from the top %============================================================================= \def\listofsymbols{\separatorpage{DISCARD THIS PAGE} \eject \chapter*{LIST OF SYMBOLS} \addcontentsline{toc}{chapter}{LIST OF SYMBOLS}} \def\endlistofsymbols{\par\newpage} %============================================================================= % VITA %============================================================================= % The vita environment must do the following: % - produce a separator page with the word vita centered % - produce the table of contents entry for the vita % - start a new page with VITA centered two inches from the top %============================================================================= \def\vita{ % \separatorpage{VITA} % UW doesn't require this EB \chapter*{VITA} \addcontentsline{toc}{chapter}{VITA}} \def\endvita{\par\newpage} %============================================================================= % ACKNOWLEDGMENTS %============================================================================= % The acknowledgments environment must do the following: % - start a new page with ACKNOWLEDGMENTS centered two inches from the top %============================================================================= \def\acks{ \chapter*{Acknowledgments} } \def\endacks{\par\newpage} %============================================================================= % DEDICATION %============================================================================= % The dedication environment must do the following: % - start a new page % - center the text vertically % - include the text in a center environment %============================================================================= %\def\dedication{ % \newpage % \null\vfil % \begin{center}} %\def\enddedication{\end{center}\par\vfil\newpage} \def\dedication{ \newpage \null\vfil } \def\enddedication{\par\vfil\newpage} %============================================================================= % DATE %============================================================================= %\def\today{\ifcase\month\or %January\or February\or March\or April\or May\or June\or %July\or August\or September\or October\or November\or December\fi %\space\number\day, \number\year} \newcount\@testday \def\today{\@testday=\day \ifnum\@testday>30 \advance\@testday by -30 \else\ifnum\@testday>20 \advance\@testday by -20 \fi\fi \number\day\ \ \ifcase\month\or January \or February \or March \or April \or May \or June \or July \or August \or September \or October \or November \or December \fi\ \number\year } % Single counter for theorems and theorem-like environments: \newtheorem{theorem}{Theorem}[chapter] \newtheorem{assertion}[theorem]{Assertion} \newtheorem{claim}[theorem]{Claim} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} 
\newtheorem{figger}[theorem]{Figure} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} %============================================================================= % TABLE OF CONTENTS; LIST OF FIGURES; LIST OF TABLES %============================================================================= % In report style, \tableofcontents, \listoffigures, etc. are always % set in single-column style. @restonecol is used to keep track of % whether we need to switch back to double column style after the toc. % % The only known problem now is that the first page with the new % layout is too long. The problem seems to be that the change to % textheight doesn't take place on the first page. Even if it's the % first line in the table of contents macro. Presumably the same % problem also occurs in the lof and lot. % % I'm taking a shot at fixing the problem by dropping in a throw-away % page between the change to the height parameters and the start of % the chapter. Isn't elegance wonderful? % %============================================================================= % \def\@tableof#1#2#3#4#5{ % { % limit scope of following declarations!! % \@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn\fi % \addtolength{\textheight}{-40pt} % -24-16 % \addtolength{\majorheadskip}{-40pt} % -24-16 % \addtolength{\headheight}{52pt} % 36+16 % \addtolength{\headsep}{-12pt} % -12 % \separatorpage{DISCARD THIS PAGE} % \chapter*{#1} % #5 % \relax\markboth{#1}{#1} % \hbox to \hsize{#2 \hfil Page} % \singlespace % \setcounter{#3}{0} % \setcounter{listofheading}{1} % change from 0 to 1 by mccauley, 14may93 % \def\@oddhead{\vbox to \headheight{\vspace{4pt} % \hbox to \hsize{\hfil\textrm{\thepage}} \vfil % \ifnum\value{#3}=1 % \ifnum\value{listofheading}=2 % \hbox to \hsize{Appendix\hfil} \vspace{4pt} \fi % \ifnum\value{listofheading}=1 % \stepcounter{listofheading} \fi % \hbox to \hsize{#2 \hfil Page} % \else % \setcounter{#3}{1} % \fi}} % \def\@evenhead{\vbox to \headheight{\vspace{4pt} % \hbox to \hsize{\textrm{\thepage}\hfil} \vfil % \ifnum\value{#3}=1 % \ifnum\value{listofheading}=2 % \hbox to \hsize{Appendix\hfil} \vspace{4pt} \fi % \ifnum\value{listofheading}=1 % \stepcounter{listofheading} \fi % \hbox to \hsize{#2 \hfil Page} % \else % \setcounter{#3}{1} % \fi}} % \@starttoc{#4} \if@restonecol\twocolumn\fi % \newpage % }} % % \def\tableofcontents{\@tableof{TABLE OF CONTENTS}{}{tocpage}{toc}{}} % % \def\listoffigures{ % \@tableof{LIST OF FIGURES}{Figure}{lofpage}{lof} % {\protect\addcontentsline{toc}{chapter}{LIST OF FIGURES}}} % % \def\listoftables{ % \@tableof{LIST OF TABLES}{Table}{lotpage}{lot} % {\protect\addcontentsline{toc}{chapter}{LIST OF TABLES}}} %============================================================================= % BIBLIOGRAPHY %============================================================================= % The thebibliography environment executes the following commands: % % o start a new 'chapter' with BIBLIOGRAPHY as the heading % o produce a separator page for the bibliography % % \def\newblock{\hskip .11em plus .33em minus -.07em} -- % Defines the `closed' format, where the blocks (major units of % information) of an entry run together. % % \sloppy -- Used because it's rather hard to do line breaks in % bibliographies, % % \sfcode`\.=1000\relax -- % Causes a `.' (period) not to produce an end-of-sentence space. 
%============================================================================= % \altbibtitle % The default title for the References chapter is ``LIST OF REFERENCES'' % Since some people prefer ``BIBLIOGRAPHY'', the command % \altbibtitle has been added to change the chapter title. % This command does nothing more than change REFERENCES to BIBLIOGRAPHY %============================================================================ \def\@bibchaptitle{Bibliography} \def\altbibtitle{\def\@bibchaptitle{Bibliography}} \def\thebibliography#1{ %\separatorpage{\@bibchaptitle} \global\@bibpresenttrue \chapter*{\@bibchaptitle\markboth{\@bibchaptitle}{\@bibchaptitle}} \addcontentsline{toc}{chapter}{\@bibchaptitle} \vspace{0.375in} % added to match 4 line requirement \interlinepenalty=10000 % added to prevent breaking of bib entries \singlespace\list {[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumi}} \def\newblock{\hskip .11em plus .33em minus -.07em} \sloppy \sfcode`\.=1000\relax} \let\endthebibliography=\endlist \makeatother
{ "alphanum_fraction": 0.5554164398, "avg_line_length": 40.8222222222, "ext": "tex", "hexsha": "561a4f4010c9c2396beb9772813de3a027a38149", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b42f24d1cbe8f8c9af00e5ba5b7585da7331f36f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "opotowsky/dissertation", "max_forks_repo_path": "document/includes/thesisdefs.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "b42f24d1cbe8f8c9af00e5ba5b7585da7331f36f", "max_issues_repo_issues_event_max_datetime": "2019-09-23T14:41:46.000Z", "max_issues_repo_issues_event_min_datetime": "2019-08-22T21:32:31.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "opotowsky/dissertation", "max_issues_repo_path": "document/includes/thesisdefs.tex", "max_line_length": 114, "max_stars_count": null, "max_stars_repo_head_hexsha": "b42f24d1cbe8f8c9af00e5ba5b7585da7331f36f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "opotowsky/dissertation", "max_stars_repo_path": "document/includes/thesisdefs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4511, "size": 18370 }
%!TEX root = ../template.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% chapter2.tex %% NOVA thesis document file %% %% Chapter with the template manual %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \typeout{NT FILE chapter2.tex} \chapter{Background}\label{cha:background} In this chapter I briefly discuss the current panorama regarding systems programming languages and their presence in the ecosystem, justify why Rust was the language of choice for the current work and provide a general presentation of session types and typestates. A detailed discussion \wrt~the latter and its relation with the present work is done in \autoref{cha:related-work}. \section{Systems Programming Languages}\label{sec:systems-programming} The definition of the term \emph{systems programming language} is not agreed upon, being somewhat flexible and ever-changing due to constant shift in requirements for applications~\autocite{Torre2014}. Before the cloud, in the age of C, a systems programming language would most likely be a language able to provide an adequate interface between the programmer and the machine. Nowadays, the definition is more vague, as machines and software grow in complexity, and the definition of system grows from a single computer to a distributed system, interfacing with the hardware in a more direct fashion is mostly not required. Systems programming languages emphasized being able to produce a standalone binary able to run on a variety of machines without requiring extra software. The subjects of this analysis have been picked due to their relevance in the area of systems programming, being present in indices such as TIOBE\citeurl{https://www.tiobe.com/tiobe-index/}{14/07/2021}, or developer surveys such as StackOverflow's\footnote{\url{https://insights.stackoverflow.com/survey/2020\#most-popular-technologies}\\(visited in 14/07/2021)} and JetBrains'\citeurl{https://www.jetbrains.com/lp/devecosystem-2020/}{14/07/2021}. \subsection{C} C is a general-purpose programming language, while it can be considered a high-level programming language when put besides assembly, it also fits the description of a low-level level programming language when besides languages like Python. It was originally designed by Dennis Ritchie for the PDP-11 and has been around since 1972 \autocite{Ritchie1993, Kernighan1978}, C is by no means modern, being older than myself and most likely to outlive me. Designed in a different time, C's mental model is also different, the language is simple and straight forward, the designers had goals to achieve and designed the language with them in mind. Such mentality is noticeable when using the language, it is simple as the hardware was and the level of control C provides is unparalleled, being both a major benefit and a hindrance. An expert programmer is able to take advantage of the language to produce highly-efficient software, but a novice programmer will often find himself battling memory and pointer management bugs. The language influence echoes in the modern languages, whether in the form of syntax (i.e. the famous C-style syntax) or in the problems it tries to solve. Languages such as Java take from C their syntax as well as one problem to solve, memory management; other languages like Julia \autocite{Bezanson2017} aim to achieve similar performance. 
While not as popular as other languages, C was able to stay relevant in the modern development landscape, some of the most used software in the world is either written with or powered by C. The \emph{Linux} kernel, which powers servers, the world's most powerful computers and serves as a base for Android and other mobile devices, \emph{git}, \emph{Redis} and \emph{nginx} are also software examples which reached the top of their respective fields. \subsection{C++} Introduced in 1985 as an extension to C; the author, Bjarne Stroustrup writes: \begin{displayquote}[{\autocite[Section 1.2]{Stroustrup1986}}] C++ is based on the idea of providing both: \begin{compactitem} \item direct mappings of built-in operations and types to hardware to provide efficient memory use and efficient low-level operations, and \item affordable and flexible abstraction mechanisms to provide user-defined types with the same notational support, range of uses, and performance as built-in types. \end{compactitem} \end{displayquote} The language has since gone on to conquer the programming world, being used in a wide variety of software and hardware. Currently, companies such as Google, Amazon and Microsoft have widespread adoption of C++ in their codebases. Industries requiring the best performance as possible of the host, such as scientific computing, financial software, AAA games and visual effects will most likely be running C++. Just like C, C++ is far from perfect. The language is enormous, with very complicated parts \eg{templates} and compilation for big projects is very slow, the author acknowledges this in \autocite{Torre2014}. Furthermore, as the language provides a high level of control over the system, it has manual memory management, suffering from the same problems as C. Even with smart pointers \eg{\texttt{unique\_ptr}} the problem is not considered to be solved, as they still introduce overhead in the most demanding applications\citeurl{https://www.youtube.com/watch?v=rHIkrotSwcc}{01/07/2021}. \subsection{Ada} Ada was developed in 1980, during a standardization effort in the USA's Department of Defense, with the goal of unifying projects spanning over 450 programming languages\citeurl{https://tinyurl.com/AdaPL2016}{08/06/2021}. Ada's main focus was the development of embedded applications, currently the Ada language is mostly used in the critical domain due to the strong emphasis on safety, some Ada success stories are the London Metro Victoria Line and the Paris Metro Line\citeurl{https://www.sigada.org/}{08/06/2021}. The language is also used in several other domains, such as aviation, space vehicles, financial systems and more\citeurl{https://www2.seas.gwu.edu/\~mfeldman/ada-project-summary.html}{08/06/2021}. In comparison with the other languages in this section, Ada is eclipsed, barely showing in the GitHub rankings\citeurl{https://tjpalmer.github.io/languish/}{25/01/2021}. However, given that Ada's compiler requires a paid license to take full advantage of and their main application market are critical applications, it makes sense that most Ada code is not open-source. Regardless, when one views the list of features Ada has, the first arising question is --- \emph{why is Ada not popular?}. An old article in AdaPower\citeurl{https://tinyurl.com/AdaPower1998}{08/06/2021} provides some possible insight over the question, referring to the compiler's price and the Hoare's harsh critics. 
From my point of view, the critics to the compiler and ecosystem pricing still make sense, as access to the full tooling is limited. The lack of programmers goes on to perpetuate the lack of adoption in the industry and this cycle ends up limiting Ada's reach in the market. \subsection{Go} The Go programming language (or \texttt{golang}) is a Google project, according to the language folklore, it was designed by the authors while they waited for their C++ code to compile. Go tried to address several of the criticisms to C, namely memory management, which it solved through the usage of a garbage collector. While it has made a name for itself in the network and distributed systems sector, being the main language behind projects like Docker and Kubernetes, Go's categorization as a systems programming language can be discussed. When put against its peers in the systems programming language ecosystem, Go's performance may be considered \emph{lacking}, not being enough for certain use cases. This was Discord's case, the popular internet voice server company, as demand increased, Go was not able to meet the expected performance requirements and the company replaced it with Rust\footnote{\url{https://blog.discord.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f}\\(visited in 08/06/2021)}. In \autocite{Torre2014}, one of Go's authors, Robert Pike, says that he regrets categorizing Go as a systems programming language, being rather a server programming language that evolved into a cloud infrastructure language. Regardless of discussion, Go has proven to be a viable alternative to existing counterparts, compromising extreme performance in name of safety and simplicity. \subsection{Summary} In this section I reviewed four system programming languages, suited for different kinds of environments, C, C++ and Ada can be considered the traditional system languages kind, with a strong emphasis on efficiency and support for embedded devices. Go on the other hand, could be considered a \emph{new generation} systems programming language, a language for cloud infrastructure. Among the four, only Ada places strong emphasis on safety, with several features allowing for more guarantees at compile-time, such as contract based programming, non-nullable types by default and even some theorem proving capabilities, being the only one which does not suffer from the \equotes{billion dollar mistake}\citeurl{https://tinyurl.com/Hoare2009}{25/01/2021}. \section{The Rust Language}\label{sec:rust-lang} Rust is a fairly recent systems programming language, it started as a side project of Graydon Hoare and its public history dates back to 2010\citeurl{https://git.io/JZ3X7}{08/06/2021}. In 2012 Mozilla picked up Rust to help develop the Servo browser engine, the successor to the previous Gecko engine; as a way to test Rust's capabilities \autocite{Klabnik2016}. \subsection{What makes Rust different?} In comparison with other languages, one of the first things someone new to Rust ought to notice is the emphasis put on safety. Being a competitor to C++ and achieving memory safety while still providing C++-level performance is quite an accomplishment. Rust, however, also aims to allow users to be productive without sacrificing neither safety nor performance. The key to all the promises Rust makes is its ownership system and borrow checker. The borrow checker is a completely new mechanism when compared with other mainstream languages. However, it is a product of years of research both in academia and the industry. 
This mechanism merits most of Rust's accomplishments and also its biggest defect, the learning curve. While Rust has become more accessible over the years, ownership and the borrow checker still require some effort on the part of the developer to learn. I provide a small overview of ownership, the borrow checker and their part in Rust's promise of “\emph{fearless concurrency}”. \subsection{Ownership}\label{sec:rust-lang:ownership} Ownership is the mechanism used by Rust to ensure no memory block stays allocated longer than it is required to. Through ownership, the compiler is able to free memory when required, inserting the respective deallocation procedures in the output program. Behind ownership, there are three rules: \begin{displayquote}[{\autocite[Section 4.1]{RustBook2021}}] \begin{compactitem} \item Each value in Rust has a variable that’s called the owner. \item There can only be one owner at a time. \item When the owner goes out of scope, the value will be dropped. \end{compactitem} \end{displayquote} To illustrate the rules, consider \autoref{lst:rust-move}, where we have two variables \texttt{x} and \texttt{y}. First, \texttt{"Hello"} is assigned to \texttt{x} (line 2), thus \texttt{x} now owns \texttt{"Hello"}. After, \texttt{x} is assigned to \texttt{y} (line 3), consider the second rule of ownership, since we can only have one owner, \texttt{x}'s value ownership is transferred to \texttt{y}. Since we transferred \texttt{x}'s value to \texttt{y}, \texttt{x} is no longer valid on line 4, consequently, when compiling the code an error will be issued due to \texttt{x} being~moved. Notice how \texttt{\textcolor{violet}{String}::from} is used instead of another type, since \keyword{String} type does not implement \keyword{Copy} it can only be moved. If the used type implemented \keyword{Copy}, the value would have been implicitly copied instead of moved. \input{Chapters/Listings/C2/rust-move.tex} So far, this example illustrates the first two rules. The last rule can be considered \quotes{invisible}, as it happens during compilation and the user will not notice it usually. What happens is that at the end of the scope, any variable whose owner is in scope, will be freed (in Rust's terms, it will be dropped). While the developer is not required to explicitly free the memory, the compiler will insert the calls for the developer. \subsection{Borrowing}\label{sec:rust-lang:borrowing} If the developer could only copy or move memory the usability of the language would be severely limited. For example, functions that read a variable and produce a new value, not requiring the variable to be consumed would be impossible. To cope with this, Rust allows values to be \emph{borrowed}, in other words, the owner of the variable allows for it to be read by others. To borrow a value, one writes \texttt{\&value}, this creates a read-only reference to \texttt{value}. There can be an unlimited number of read-only references to a value, but only a single mutable reference. This is discussed in \autoref{sec:rust-lang:concurrency}. Consider the example \autoref{lst:rust-borrow}. In the example, \texttt{x} is now possible to be printed since it was not moved into \texttt{y}. Rather, \texttt{y} borrowed \texttt{x} through a reference. Going back to the rules (\autoref{sec:rust-lang:ownership}), Rust's references obey them just like all other values. 
The variable containing them has ownership \emph{over the reference}; it still is a single owner (if \mintinline{Rust}{let z = y;} was to be added, the reference would be copied instead of moved); and finally, when the owner goes out of scope \emph{the reference is dropped}, but not the original~value.

\input{Chapters/Listings/C2/rust-borrow.tex}

\subsubsection*{Mutable Borrows}
One last thing to consider is mutable borrows.
As previously discussed, in Rust it is possible to create multiple immutable references but only one mutable reference.
Regarding mutable references there are two cases to consider:

\paragraph{$N$ mutable references,} see \autoref{lst:rust-borrow-n-mut}.
Understanding why only one mutable reference can exist at a time is trivial, as multiple mutable references to the same object would allow it to be mutated concurrently, which could lead to inconsistent values.

\paragraph{$N$ immutable references and $1$ mutable reference,} see \autoref{lst:rust-borrow-n-immut-1-mut}.
The reason behind not allowing a mutable reference to coexist with immutable ones is similar.
Consider that each reference can be used by a different thread: the first two (\texttt{r1} and \texttt{r2}, on lines 3 \& 4) are only read, while the third (\texttt{r3}, on line 5) can be read and written.
While there will be no conflicts between writers, it is possible for the readers to observe an inconsistent value, since a read can happen during the write operation.

\input{Chapters/Listings/C2/rust-borrow-n-mut.tex}
\input{Chapters/Listings/C2/rust-borrow-n-immut-1-mut.tex}

\subsection{Concurrency}\label{sec:rust-lang:concurrency}

\begin{displayquote}[{\autocite[Section 16]{RustBook2021}}]
Initially, the Rust team thought that ensuring memory safety and preventing concurrency problems were two separate challenges to be solved with different methods.
Over time, the team discovered that the ownership and type systems are a powerful set of tools to help manage memory safety and concurrency problems!
By leveraging ownership and type checking, many concurrency errors are compile-time errors in Rust rather than runtime errors.
\end{displayquote}

Rust provides several kinds of mechanisms to prevent concurrency-related problems, such as \emph{message-passing} and \emph{shared-state}, as well as traits\footnote{For more information on traits, see \url{https://doc.rust-lang.org/rust-by-example/trait.html} (visited in 20/07/2021), \url{https://doc.rust-lang.org/book/ch10-02-traits.html} (visited in 20/07/2021) and \url{https://doc.rust-lang.org/book/ch19-03-advanced-traits.html} (visited in 20/07/2021)} which enable developers to extend upon the existing abstractions.

\subsubsection*{Message-passing}
Rust's message-passing library\citeurl{https://doc.rust-lang.org/std/sync/mpsc/}{01/07/2021} is inspired by Go's approach to concurrency, prioritizing message passing over other kinds of concurrent approaches, such as locking.

\begin{displayquote}[Effective Go\citeurl{https://golang.org/doc/effective\_go.html}{08/06/2021}]
Do not communicate by sharing memory; instead, share memory by communicating.
\end{displayquote}

Rust defines channels which have two ends, the transmitter and the receiver.
The former can also be seen as the sender; both ends are declared with the type of the messages that flow through the channel.
The ownership system comes in when the transmitter sends a message: once the message is received, its ownership is taken over by the receiving end.
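To make this hand-off concrete, here is a minimal sketch using the standard \texttt{std::sync::mpsc} channel (the sketch is illustrative only and not one of the referenced thesis listings):
\begin{minted}{rust}
use std::sync::mpsc;
use std::thread;

fn main() {
    // A channel carrying `String` messages.
    let (tx, rx) = mpsc::channel::<String>();

    thread::spawn(move || {
        let message = String::from("hello");
        tx.send(message).unwrap();
        // `message` was moved into the channel; using it here,
        // e.g. `println!("{}", message)`, would not compile.
    });

    // Ownership of the message is taken over by the receiving end.
    let received = rx.recv().unwrap();
    println!("{}", received);
}
\end{minted}
Attempting to use \texttt{message} after the \texttt{send} call is rejected by the compiler, precisely because ownership has already moved into the channel.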
This hand-off enforces that a value cannot be on both sides of the communication at the same time, preventing concurrent accesses.

\subsubsection*{Shared-state}
Along with message-passing, Rust allows memory to be shared in a concurrent, safe way.
Just as before, Rust's ownership system also helps with mutexes' biggest problem, locking and unlocking.
In a language like Java, whenever a thread is able to call \texttt{lock} on a mutex, it is required to call \texttt{unlock} on it; only then can other threads use it.
The problem is that this approach is subject to human error: forgetting to call unlock, or calling it in the wrong place, is possible.
Making use of the ownership system, Rust is able to know when the lock reaches the end of its scope and should be dropped.

\subsection{Why Rust instead of Language X?}
The main obstacle between typestates and programming languages is the requirement for aliasing control.
In short, typestates are incompatible with aliasing (details are provided in \autoref{sec:typestates}).
So, a language implemented from the ground up would be required to handle aliasing; however, instead of building a new language, the goal is to extend an existing one, Rust.

As discussed in \autoref{sec:rust-lang:borrowing}, Rust's ownership system allows for aliasing control.
Using moves to enforce the consumption of values, immutable references for pure functions and mutable ones for limited mutability, it is possible to emulate typestates.
This takes care of aliasing concerns.
When designing on top of another language, two more ingredients are required: a powerful extension mechanism (i.e. Rust's macros, discussed in \autoref{sec:rust-macros}) and a strong type system, able to provide the necessary abstractions.
Rust provides both; the type system goes as far as allowing zero-sized types, letting type-level abstractions incur no runtime overhead.
This is a key ingredient in our DSL, aiming to minimize possible runtime overhead while providing an expressive language for typestate specification.

\section{Behavioral Types}\label{sec:behavioral-types}

As previously discussed, with the growth in software complexity, developers are required to develop better tools to tame such complexity.
Such tools require a theoretical body of work to support them; behavioral types are part of that body of work.
The theory behind them encompasses several domains, and they can be applied over a wide range of entities, from an object to a web service.

The work on behavioral types arose in the context of type systems able to capture properties of computations \autocite{Huttel2016}.
Session types and typestates are part of this field of study, both capturing distinct kinds of properties while aiming for a common goal: stronger type systems and better static assurances.

\begin{displayquote}[\autocite{Ancona2016}]
Roughly speaking, a behavioral type describes a software entity, such as an object, a communication channel, or a Web Service, in terms of the sequences of operations that allow for a correct interaction among the involved entities.
\end{displayquote}

Behavioral types allow developers to model a protocol, define the communication messages and possible interactions, and check that certain requirements are met when implementing it.
Consider the protocol from \autoref{fig:login-protocol}, where a user tries to authenticate.
A developer can use it as a specification (for simplicity consider the uppercase messages to be simple strings), using behavioral types the developer could be able to specify the described interactions and all boilerplate could be generated for them. While using strings as a payload is not very \quotes{interesting}, consider instead that the object in transit is an encrypted payload, the boilerplate will take care of decryption and deserialization. Furthermore, consider the constraint that \emph{all interactions end with a message from the server}. If the specification has an interaction that is not compliant with such rule, the code should not compile, raising an error. \input{Chapters/Figures/C2/login-protocol.tex} \subsection{Session Types} Session types are a subset of behavioral types, focused on communication, from entities in a distributed system to threads in a computer. Session types are based on process calculi and can be thought as \quotes{types for protocols} \autocite{Honda1993, Honda1998}. They elevate communication to the type level, allowing expressing them as types in a program, in turn this enables the compiler to reason about the protocol during compile-time \autocite{Gay2015, Vasconcelos2006}. In Rust, a channel is created with \mintinline{Rust}{let (tx, rx) = channel::<SenderT, ReceiverT>()}, where \texttt{SenderT} and \texttt{ReceiverT} are the types sent and received by the channel. Channels are well-typed, meaning that if \mintinline{Rust}{SenderT = String}, sending another type over the channel will result in a type error. \input{Chapters/Listings/C2/rust-login-chan.tex} Session types extend on this notion, not only allowing for a single type to be sent or received, but also model the protocol. Consider \autoref{lst:rust-login-chan}, the example has unnecessary complexity, as for each receive the developer is required to match all possible replies. Ideally, we declare the steps and possible outcomes beforehand. For example, in plain English: \begin{compactenum} \item Send login credentials. \item If successful, send a message to user \texttt{jmgd}. \item Otherwise, exit. \end{compactenum} And now using session types (\autoref{eq:session-types} with syntax adapted from \autocite{Vasconcelos2006}, where the first four assignments ($:=$) are simple aliases, to simplify reading): \input{Chapters/Figures/C2/session-types.tex} Consider \textcolor{violet}{$!$} to be \emph{sends}, \textcolor{violet}{$?$} to be \emph{receives}, \textcolor{violet}{$.$} the \emph{sequence} operator, \textcolor{violet}{$\&$} the \emph{choice offering} and \textcolor{violet}{$\oplus$} the \emph{choice selection}. Using session types effectively offloads complexity to the type system, resulting in more complex types, but simpler implementations, since protocol compliance can be checked during compilation and boilerplate can be added by the compiler. No message is matching required, the compiler does it for the developer. Using session types it is possible to write it in a simpler form, where a type is assigned to each endpoint. Notice how the server provides more than one operation, but the user does not call them all. \subsection{Typestates}\label{sec:typestates} \begin{displayquote}[{\autocite{Strom1983}}] ... traditional strong type checking was enhanced with \textbf{typestate checking} a new mechanism in which the compiler guarantees that for all execution paths, the sequence of operations on each variable obeys a finite state grammar associated with that variable's type. 
\end{displayquote} The first language to make use of typestates was NIL~\autocite{Strom1983}, afterwards languages like Hermes\footnote{\url{https://researcher.watson.ibm.com/researcher/files/us-bacon/Strom90HermesTutorial.pdf}\\(visited in 17/07/2021)} and Plaid~\autocite{Aldrich2009} extended the concept with new techniques. \subsubsection*{Automata} A possible question on the reader's mind is --- \emph{how are automata and typestates related?} This section tries to address that question and exemplify how automata helps prove typestate's properties. It is possible to express typestates as automata, as the reader can observe in \autoref{fig:file-automata}. Each state is a possible state the object can be in and transitions are done through methods. Methods can either mutate the object state, such as \texttt{open} and \texttt{close}, or leave it unchanged, such as \texttt{nextLine}. \input{Chapters/Figures/C2/file-automata.tex} \paragraph{Real-world scenario.} In production applications, the \gls{API} is not this simple. In fact, the \keyword{Scanner} \gls{API} is not this simple, however it was simplified for the example. Complex \gls{API}s can be designed by a team and implemented by another, specifications can be changed and during project development some details may be lost. Such details can be costly, imagine for example that a method call reaches a state, which has no outgoing edges, but it is not a final state. This is a problem addressed by existing automata algorithms. Representing typestates as automata, extracting all necessary information and applying such algorithms provides the \gls{API} with extra safety. \subsubsection*{The case for typestates} As discussed in \autoref{sec:context}, bugs in systems programming are costly, thus, bugs must be minimized. Several tools, such as static analyzers, fuzzers, testing frameworks and others, aid in this purpose, if we have all these external tools, why should we not try and leverage the programming language itself? \paragraph{Moving towards better languages.} Programming languages allow the programmer to express a set of actions to be taken by the computer, they are tools which enable us to achieve a goal. Being essential to our work, better tools enable developers to be more productive and achieve higher quality work. The remaining question is --- \emph{why do we not create better languages?} Even when considering languages to be cheap to develop, the amount of work between a \emph{working} language to be \emph{production ready} is not cheap. Furthermore, while adopting a new language for a hobby project is easy, the same does not apply for enterprise level projects, requiring several developers to know the ins and outs of the language. \paragraph{Static typed languages.} The current trend is to move away from dynamically typed languages, to statically typed ones, or at the very least, add typing support to existing dynamic languages. Typescript\citeurl{https://www.typescriptlang.org/}{08/06/2021}, Reason\citeurl{https://reasonml.github.io/}{08/06/2021} and PureScript\citeurl{https://www.purescript.org/}{08/06/2021} are all examples of languages built to bridge the gap between static type systems and JavaScript. Python and Ruby, two popular dynamic languages, have also pushed for type adoption with the addition of type hint support in recent releases\citeurl{https://docs.python.org/3/library/typing.html}{08/06/2021}$^,$\citeurl{https://github.com/ruby/rbs}{08/06/2021}. 
\paragraph{Where do typestates fit?}
Typestates are a complex subject, able to be adopted at several levels.
Just like type hints, they can be partially used in some languages: through tools such as Mungo~\autocite{Voinea2020}, by contract-style assertions as in Ada2012, Eiffel or pre-0.4 Rust, or finally by leveraging the existing type system to write typestate-enabled code, as is possible in Rust\citeurl{https://git.io/JZ3i7}{08/06/2021}.
Typestate-related concepts were also used in Singularity OS \autocite[Section 6]{Ancona2016}, a reliable operating system prototype where programs were written using Sing\# --- a C\#-derived language which supports behavioral typing, specifically, contracts in a similar capacity to typestates.

\paragraph{Why use typestates?}
By lifting state into the type system, the compiler is able to aid the programmer during development: a given set of transitions becomes impossible by default, since the types do not implement them.
By reducing the need for developers to check for a certain set of conditions through the use of typestates, it becomes possible to reduce the number of runtime assertions and completely eliminate the need for illegal state exceptions, since illegal transitions are checked at compile-time.

\subsubsection*{Typestates in action}
As a simple example, consider the Java application in \autoref{lst:java-read}, which simply reads two lines from \texttt{stdin}.
The application will throw an exception on line 6, since the programmer closed the \keyword{Scanner} in line 5.
In this example the error is simple to catch, as the program is short and the \keyword{Scanner} can either be open or closed; however, real-world applications are not that simple.

\input{Chapters/Listings/C2/java-read.tex}

In the case of \emph{typestated} programming, the type system will provide the programmer with better tools to express state; furthermore, the compiler will then catch errors regarding state, such as the previous \emph{use-after-close}.
\autoref{lst:java-read-typestate} shows the \keyword{Read} program written in a \emph{typestated} fashion; notice that the \keyword{Scanner} type is now augmented with state and the compiler is able to catch the misuse of the \texttt{\textcolor{violet}{Scanner}[Closed]} interface.

\input{Chapters/Listings/C2/java-read-typestate.tex}

\paragraph{Plaid} is a typestate-oriented programming language \autocite{Aldrich2009}; instead of \keyword{class}es, users write \keyword{typestate}s.
Each typestate represents a class and its possible states; the class methods and behavior change during runtime as the state changes, in contrast with other languages (e.g. Java) where public methods and fields are always available.
In \autoref{lst:plaid-file-use}, the \keyword{File} passes through states as it is opened, read and closed in \texttt{readClosedFile}.
This property allows the type system to enforce certain guarantees at compile-time, such as ensuring that certain methods will never be called in a given state, since doing so is not possible by design (i.e. they are not available in the interface).

\input{Chapters/Listings/C2/plaid-file-use.tex}

\paragraph{Obsidian} is a smart-contract language targeting the Hyperledger Fabric blockchain\citeurl{https://www.hyperledger.org/use/fabric}{08/06/2021}; among other features, it makes use of typestates to reduce the amount of bugs when dealing with~assets.
Coblenz et al.
\autocite{Coblenz2020} tested and substantiated Obsidian's claims through an empirical study: when compared with Solidity, the leading blockchain language, users introduced fewer bugs and were able to start developing safer code faster. Consider \autoref{lst:obsidian-state}, in which a light switch is modeled: it can be either \texttt{On} or \texttt{Off}, but not both. The \texttt{brightness} field can only be accessed if the \texttt{LightSwitch} is in the \texttt{On} state, whereas the \texttt{switchLocation} field can be accessed from both states. Furthermore, upon instantiation the \texttt{LightSwitch} is set to the \texttt{Off} state. Notice that in \autoref{lst:obsidian-transaction-ok} the user is able to call \texttt{turnOn}, as the switch is in the \texttt{Off} state, as expected. However, the user is unable to call \texttt{turnOff} in \autoref{lst:obsidian-transaction-err}, since the switch is already in the \texttt{Off} state. The Obsidian compiler notices such invalid transitions and reports an error at compile-time.

\input{Chapters/Listings/C2/obsidian-state.tex}
\input{Chapters/Listings/C2/obsidian-transaction-ok.tex}
\input{Chapters/Listings/C2/obsidian-transaction-error.tex}

\paragraph{Rust.} As discussed in \autoref{sec:rust-lang}, Rust takes its commitment to safety seriously, providing users with the necessary tools. While Rust does not support first-class typestates, it is possible to emulate them using the type system; this is discussed in later sections of this document. While the file example does not apply to Rust, since files and other objects are closed as they leave their scope, enforcing protocols is important and is an aspect not covered by the language. Consider \autoref{lst:rust-protocol}: the example is expected to call \texttt{F1} first, followed by \texttt{F2} and finally \texttt{F3}; however, this does not happen, and the error is only caught at runtime.

% \input{Chapters/Listings/C2/rust-protocol.tex}
\begin{listing}
\centering
\begin{minted}{rust}
fn main() {
    let protocol = Protocol::new();
    protocol.F1();
    protocol.F3(); // possible crash during runtime
    protocol.F2();
}
\end{minted}
\caption{
Rust example of an unchecked protocol compliance failure.
The expected operation order of the protocol is \texttt{F1}, \texttt{F2}, \texttt{F3};
however, the developer placed the operations in the wrong order.
This mistake is only caught at runtime.
}
\label{lst:rust-protocol}
\end{listing}

As the next paragraph discusses, this behavior can be prevented using the language's type system. However, doing so requires complex types, and since the technique is not \quotes{part} of the language, most users will neither use it nor be aware of it.

\paragraph{Embedded Rust.} Like any systems programming language, Rust has penetrated the embedded development space, providing features in line with the area's requirements, along with community efforts to make Rust viable in embedded systems. \emph{The Embedded Rust Book}'s\citeurl{https://rust-embedded.github.io/book/}{08/06/2021} Chapter 4 is dedicated to static guarantees, introducing programmers to the concept of typestates in its Section 4.1\footnote{\url{https://docs.rust-embedded.org/book/static-guarantees/typestate-programming.html}\\(visited in 01/07/2021)}, and to their usage in embedded systems.
As for real-world usage, typestates are not merely discussed in the book but are used abundantly in the area: under the \texttt{stm32-rs} GitHub organization\citeurl{https://github.com/stm32-rs/}{08/06/2021} one finds several repositories (suffixed with \texttt{-hal}) which implement typestates (e.g. \href{https://github.com/stm32-rs/stm32h7xx-hal/blob/master/src/gpio.rs#L51-L128}{\texttt{gpio.rs}} from \texttt{stm32h7xx-hal}).
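To make the emulation concrete, the sketch below shows one way the protocol from \autoref{lst:rust-protocol} could be enforced at compile-time using only plain Rust types; the \texttt{Stage1}/\texttt{Stage2}/\texttt{Stage3} names (and the lower-case method names, following Rust conventions) are illustrative and not taken from any of the cited crates. Each operation consumes the current state and returns the next one, so calling the third operation before the second simply does not type-check.

\begin{minted}{rust}
// Illustrative sketch: each protocol stage is its own type,
// and every operation consumes the current stage by value.
struct Stage1; // protocol not yet started
struct Stage2; // first operation done
struct Stage3; // second operation done

impl Stage1 {
    fn new() -> Stage1 { Stage1 }
    fn f1(self) -> Stage2 { Stage2 }
}

impl Stage2 {
    fn f2(self) -> Stage3 { Stage3 }
}

impl Stage3 {
    fn f3(self) {} // final operation, protocol complete
}

fn main() {
    let protocol = Stage1::new();
    let protocol = protocol.f1();
    // protocol.f3(); // rejected at compile-time: `Stage2` has no `f3`
    let protocol = protocol.f2();
    protocol.f3();
}
\end{minted}

The \texttt{-hal} crates mentioned above rely on the same idea, typically expressed with a generic type parameter and \texttt{PhantomData} markers rather than separate structs, so that a single peripheral type can carry its configuration state.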
{ "alphanum_fraction": 0.789122817, "avg_line_length": 62.7924865832, "ext": "tex", "hexsha": "86687b3e2374658d07a8df87cfe388af3d2d8116", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "db8a4eae4824f18386e1d3a449de0e8bbab4ef00", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "rustype/thesis-doc", "max_forks_repo_path": "Chapters/chapter2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "db8a4eae4824f18386e1d3a449de0e8bbab4ef00", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "rustype/thesis-doc", "max_issues_repo_path": "Chapters/chapter2.tex", "max_line_length": 238, "max_stars_count": null, "max_stars_repo_head_hexsha": "db8a4eae4824f18386e1d3a449de0e8bbab4ef00", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "rustype/thesis-doc", "max_stars_repo_path": "Chapters/chapter2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8031, "size": 35101 }
\section{\module{csv} --- CSV File Reading and Writing} \declaremodule{standard}{csv} \modulesynopsis{Write and read tabular data to and from delimited files.} \sectionauthor{Skip Montanaro}{[email protected]} \versionadded{2.3} \index{csv} \indexii{data}{tabular} The so-called CSV (Comma Separated Values) format is the most common import and export format for spreadsheets and databases. There is no ``CSV standard'', so the format is operationally defined by the many applications which read and write it. The lack of a standard means that subtle differences often exist in the data produced and consumed by different applications. These differences can make it annoying to process CSV files from multiple sources. Still, while the delimiters and quoting characters vary, the overall format is similar enough that it is possible to write a single module which can efficiently manipulate such data, hiding the details of reading and writing the data from the programmer. The \module{csv} module implements classes to read and write tabular data in CSV format. It allows programmers to say, ``write this data in the format preferred by Excel,'' or ``read data from this file which was generated by Excel,'' without knowing the precise details of the CSV format used by Excel. Programmers can also describe the CSV formats understood by other applications or define their own special-purpose CSV formats. The \module{csv} module's \class{reader} and \class{writer} objects read and write sequences. Programmers can also read and write data in dictionary form using the \class{DictReader} and \class{DictWriter} classes. \begin{notice} This version of the \module{csv} module doesn't support Unicode input. Also, there are currently some issues regarding \ASCII{} NUL characters. Accordingly, all input should generally be printable \ASCII{} to be safe. These restrictions will be removed in the future. \end{notice} \begin{seealso} % \seemodule{array}{Arrays of uniformly types numeric values.} \seepep{305}{CSV File API} {The Python Enhancement Proposal which proposed this addition to Python.} \end{seealso} \subsection{Module Contents \label{csv-contents}} The \module{csv} module defines the following functions: \begin{funcdesc}{reader}{csvfile\optional{, dialect=\code{'excel'}\optional{, fmtparam}}} Return a reader object which will iterate over lines in the given {}\var{csvfile}. \var{csvfile} can be any object which supports the iterator protocol and returns a string each time its \method{next} method is called. If \var{csvfile} is a file object, it must be opened with the 'b' flag on platforms where that makes a difference. An optional {}\var{dialect} parameter can be given which is used to define a set of parameters specific to a particular CSV dialect. It may be an instance of a subclass of the \class{Dialect} class or one of the strings returned by the \function{list_dialects} function. The other optional {}\var{fmtparam} keyword arguments can be given to override individual formatting parameters in the current dialect. For more information about the dialect and formatting parameters, see section~\ref{csv-fmt-params}, ``Dialects and Formatting Parameters'' for details of these parameters. All data read are returned as strings. No automatic data type conversion is performed. \end{funcdesc} \begin{funcdesc}{writer}{csvfile\optional{, dialect=\code{'excel'}\optional{, fmtparam}}} Return a writer object responsible for converting the user's data into delimited strings on the given file-like object. 
\var{csvfile} can be any object with a \function{write} method. If \var{csvfile} is a file object, it must be opened with the 'b' flag on platforms where that makes a difference. An optional {}\var{dialect} parameter can be given which is used to define a set of parameters specific to a particular CSV dialect. It may be an instance of a subclass of the \class{Dialect} class or one of the strings returned by the \function{list_dialects} function. The other optional {}\var{fmtparam} keyword arguments can be given to override individual formatting parameters in the current dialect. For more information about the dialect and formatting parameters, see section~\ref{csv-fmt-params}, ``Dialects and Formatting Parameters.''

To make it as easy as possible to interface with modules which implement the DB API, the value \constant{None} is written as the empty string. While this isn't a reversible transformation, it makes it easier to dump SQL NULL data values to CSV files without preprocessing the data returned from a \code{cursor.fetch*()} call. All other non-string data are stringified with \function{str()} before being written.
\end{funcdesc}

\begin{funcdesc}{register_dialect}{name, dialect}
Associate \var{dialect} with \var{name}. \var{dialect} must be a subclass of \class{csv.Dialect}. \var{name} must be a string or Unicode object.
\end{funcdesc}

\begin{funcdesc}{unregister_dialect}{name}
Delete the dialect associated with \var{name} from the dialect registry. An \exception{Error} is raised if \var{name} is not a registered dialect name.
\end{funcdesc}

\begin{funcdesc}{get_dialect}{name}
Return the dialect associated with \var{name}. An \exception{Error} is raised if \var{name} is not a registered dialect name.
\end{funcdesc}

\begin{funcdesc}{list_dialects}{}
Return the names of all registered dialects.
\end{funcdesc}

The \module{csv} module defines the following classes:

\begin{classdesc}{DictReader}{csvfile, fieldnames\optional{, restkey=\constant{None}\optional{, restval=\constant{None}\optional{, dialect=\code{'excel'}\optional{, fmtparam}}}}}
Create an object which operates like a regular reader but maps the information read into a dict whose keys are given by the \var{fieldnames} parameter. If the row read has more fields than the fieldnames sequence, the remaining data is added as a sequence keyed by the value of \var{restkey}. If the row read has fewer fields than the fieldnames sequence, the missing keys take the value of the optional \var{restval} parameter. All other parameters are interpreted as for \class{reader} objects.
\end{classdesc}

\begin{classdesc}{DictWriter}{csvfile, fieldnames\optional{, restval=""\optional{, extrasaction=\code{'raise'}\optional{, dialect=\code{'excel'}\optional{, fmtparam}}}}}
Create an object which operates like a regular writer but maps dictionaries onto output rows. The \var{fieldnames} parameter identifies the order in which values in the dictionary passed to the \method{writerow()} method are written to the \var{csvfile}. The optional \var{restval} parameter specifies the value to be written if the dictionary is missing a key in \var{fieldnames}. If the dictionary passed to the \method{writerow()} method contains a key not found in \var{fieldnames}, the optional \var{extrasaction} parameter indicates what action to take. If it is set to \code{'raise'}, a \exception{ValueError} is raised.
If it is set to \code{'ignore'}, extra values in the dictionary are ignored. All other parameters are interpreted as for \class{writer} objects. \end{classdesc} \begin{classdesc*}{Dialect}{} The \class{Dialect} class is a container class relied on primarily for its attributes, which are used to define the parameters for a specific \class{reader} or \class{writer} instance. \end{classdesc*} \begin{classdesc}{Sniffer}{} The \class{Sniffer} class is used to deduce the format of a CSV file. \end{classdesc} The \class{Sniffer} class provides a single method: \begin{methoddesc}{sniff}{sample\optional{,delimiters=None}} Analyze the given \var{sample} and return a \class{Dialect} subclass reflecting the parameters found. If the optional \var{delimiters} parameter is given, it is interpreted as a string containing possible valid delimiter characters. \end{methoddesc} \begin{methoddesc}{has_header}{sample} Analyze the sample text (presumed to be in CSV format) and return \constant{True} if the first row appears to be a series of column headers. \end{methoddesc} The \module{csv} module defines the following constants: \begin{datadesc}{QUOTE_ALL} Instructs \class{writer} objects to quote all fields. \end{datadesc} \begin{datadesc}{QUOTE_MINIMAL} Instructs \class{writer} objects to only quote those fields which contain the current \var{delimiter} or begin with the current \var{quotechar}. \end{datadesc} \begin{datadesc}{QUOTE_NONNUMERIC} Instructs \class{writer} objects to quote all non-numeric fields. \end{datadesc} \begin{datadesc}{QUOTE_NONE} Instructs \class{writer} objects to never quote fields. When the current \var{delimiter} occurs in output data it is preceded by the current \var{escapechar} character. When \constant{QUOTE_NONE} is in effect, it is an error not to have a single-character \var{escapechar} defined, even if no data to be written contains the \var{delimiter} character. \end{datadesc} The \module{csv} module defines the following exception: \begin{excdesc}{Error} Raised by any of the functions when an error is detected. \end{excdesc} \subsection{Dialects and Formatting Parameters\label{csv-fmt-params}} To make it easier to specify the format of input and output records, specific formatting parameters are grouped together into dialects. A dialect is a subclass of the \class{Dialect} class having a set of specific methods and a single \method{validate()} method. When creating \class{reader} or \class{writer} objects, the programmer can specify a string or a subclass of the \class{Dialect} class as the dialect parameter. In addition to, or instead of, the \var{dialect} parameter, the programmer can also specify individual formatting parameters, which have the same names as the attributes defined below for the \class{Dialect} class. Dialects support the following attributes: \begin{memberdesc}[Dialect]{delimiter} A one-character string used to separate fields. It defaults to \code{','}. \end{memberdesc} \begin{memberdesc}[Dialect]{doublequote} Controls how instances of \var{quotechar} appearing inside a field should be themselves be quoted. When \constant{True}, the character is doubled. When \constant{False}, the \var{escapechar} must be a one-character string which is used as a prefix to the \var{quotechar}. It defaults to \constant{True}. \end{memberdesc} \begin{memberdesc}[Dialect]{escapechar} A one-character string used to escape the \var{delimiter} if \var{quoting} is set to \constant{QUOTE_NONE}. It defaults to \constant{None}. 
\end{memberdesc} \begin{memberdesc}[Dialect]{lineterminator} The string used to terminate lines in the CSV file. It defaults to \code{'\e r\e n'}. \end{memberdesc} \begin{memberdesc}[Dialect]{quotechar} A one-character string used to quote elements containing the \var{delimiter} or which start with the \var{quotechar}. It defaults to \code{'"'}. \end{memberdesc} \begin{memberdesc}[Dialect]{quoting} Controls when quotes should be generated by the writer. It can take on any of the \constant{QUOTE_*} constants (see section~\ref{csv-contents}) and defaults to \constant{QUOTE_MINIMAL}. \end{memberdesc} \begin{memberdesc}[Dialect]{skipinitialspace} When \constant{True}, whitespace immediately following the \var{delimiter} is ignored. The default is \constant{False}. \end{memberdesc} \subsection{Reader Objects} Reader objects (\class{DictReader} instances and objects returned by the \function{reader()} function) have the following public methods: \begin{methoddesc}[csv reader]{next}{} Return the next row of the reader's iterable object as a list, parsed according to the current dialect. \end{methoddesc} \subsection{Writer Objects} \class{Writer} objects (\class{DictWriter} instances and objects returned by the \function{writer()} function) have the following public methods. A {}\var{row} must be a sequence of strings or numbers for \class{Writer} objects and a dictionary mapping fieldnames to strings or numbers (by passing them through \function{str()} first) for {}\class{DictWriter} objects. Note that complex numbers are written out surrounded by parens. This may cause some problems for other programs which read CSV files (assuming they support complex numbers at all). \begin{methoddesc}[csv writer]{writerow}{row} Write the \var{row} parameter to the writer's file object, formatted according to the current dialect. \end{methoddesc} \begin{methoddesc}[csv writer]{writerows}{rows} Write all the \var{rows} parameters (a list of \var{row} objects as described above) to the writer's file object, formatted according to the current dialect. \end{methoddesc} \subsection{Examples} The ``Hello, world'' of csv reading is \begin{verbatim} import csv reader = csv.reader(file("some.csv")) for row in reader: print row \end{verbatim} The corresponding simplest possible writing example is \begin{verbatim} import csv writer = csv.writer(file("some.csv", "w")) for row in someiterable: writer.writerow(row) \end{verbatim}
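Reading and writing by dictionary follows the same pattern. The short sketch below uses the \class{DictReader} and \class{DictWriter} classes described above; the file and field names are arbitrary, and \code{somedictlist} is a placeholder (analogous to \code{someiterable} above) for any iterable of dictionaries whose keys match the field names.

\begin{verbatim}
import csv

fields = ["name", "dept", "salary"]

# read rows as dictionaries keyed by the supplied field names
reader = csv.DictReader(file("some.csv"), fields)
for row in reader:
    print row["name"], row["salary"]

# write dictionaries out with a fixed column order
writer = csv.DictWriter(file("other.csv", "w"), fields)
for row in somedictlist:
    writer.writerow(row)
\end{verbatim}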
{ "alphanum_fraction": 0.7649693893, "avg_line_length": 42.2523659306, "ext": "tex", "hexsha": "1afe3c22a288bfb9edffb6f52aeb6dae5a5dd2fa", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "deadsnakes/python2.3", "max_forks_repo_path": "Doc/lib/libcsv.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "deadsnakes/python2.3", "max_issues_repo_path": "Doc/lib/libcsv.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "deadsnakes/python2.3", "max_stars_repo_path": "Doc/lib/libcsv.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3301, "size": 13394 }
\subsection{Homotopy}
{ "alphanum_fraction": 0.72, "avg_line_length": 5, "ext": "tex", "hexsha": "b91a30c0156d5ed6c233517aaae098e7683d8ba3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsTopological/06-02-homotopy.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/geometry/manifoldsTopological/06-02-homotopy.tex", "max_line_length": 21, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsTopological/06-02-homotopy.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8, "size": 25 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Deedy - One Page Two Column Resume % LaTeX Template % Version 1.2 (16/9/2014) % % Original author: % Debarghya Das (http://debarghyadas.com) % % Original repository: % https://github.com/deedydas/Deedy-Resume % % IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{darling-resume-openfont} \usepackage{fancyhdr} \usepackage{textcomp} \pagestyle{fancy} \fancyhf{} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % LAST UPDATED DATE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \lastupdated %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TITLE NAME % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{Andrew}{Darling}{ \urlstyle{same} \href{mailto:[email protected]}{[email protected]} | 720.505.0555 \\ \href{https://github.com/z3ht}{github.com/z3ht} | \href{https://www.linkedin.com/in/andrew~darling}{linkedin.com/in/andrew\~{}darling} \\ } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN ONE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{minipage}[t]{0.33\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EDUCATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Education} \subsection{School of Mines} \descript{BS in Computer Science} \location{Aug 2018 - Dec 2021} \location{Golden, CO} \location{ Jr/Sr GPA: 3.87 / 4.0 \\ Cum. GPA: 3.58 / 4.0} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % COURSEWORK %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Coursework} Multivariate Analysis \\ Applied Statistics \\ Mathematical Statistics \\ Machine Learning \\ Algorithms \\ Operating Systems \\ Programming Languages \\ Computer Organization \\ Ethics \\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Interests %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Interests} 1. Modernizing high impact, legacy \\ \hspace*{4mm}services \\ 2. Proving code correctness \\ 3. 
Making strong predictions with \\ \hspace*{4mm}incomplete information \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % PROJECTS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Projects} \textbf{silver-mind}: \\ \hspace*{2mm} Deep learning chess AI \\ \textbf{moving-pose}: \\ \hspace*{2mm} Pose recognition AI \\ \textbf{executioner}: \\ \hspace*{2mm} Model-based testing framework \\ \textbf{dr-nim}: \\ \hspace{2mm} Minecraft NIM game plugin \\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SKILLS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Skills} \subsection{Programming} \vspace*{0.5mm} \location{Over 6000 lines:} Python \textbullet{} Java \\ \vspace*{0.75mm} \location{Over 1000 lines:} Git \textbullet{} R \textbullet{} Bash \textbullet{} \\ C \textbullet{} C++ \textbullet{} C\#{} \\ \vspace*{0.75mm} \location{Familiar:} Dart/Flutter \textbullet{} MySQL \textbullet{} Rust \textbullet{} \\ Elixir \textbullet{} Assembly \textbullet{} GH Actions \textbullet{} \\ \LaTeX\ \textbullet{} Javascript \textbullet{} AWS \textbullet{} Docker \\ \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN TWO % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{minipage} \hfill \begin{minipage}[t]{0.66\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EXPERIENCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Experience} \runsubsection{Trimble} \descript{| Cloud Architecture Intern } \location{May 2021 - Aug 2021 | Westminster, CO} \vspace{\topsep} % Hacky fix for awkward extra vertical space \begin{tightemize} \item Exposed to the architectural side of large-scale software development efforts \item Built out test-suite for a new Trimble Cloud product \item Navigated the test-suite handoff to the product's new, dedicated team \end{tightemize} \sectionsep \runsubsection{Tyler Technologies} \descript{| Fullstack Development Intern } \location{May 2020 – Dec 2020 | Lakewood, CO} \begin{tightemize} \item Learned how to create new features within an extremely large code-base \item Improved developer efficiency by building out documentation for a frequently used, proprietary C\#{} framework \item Enabled Google Analytics for a large web application and worked within a team to develop new features required by the business \end{tightemize} \sectionsep \runsubsection{FirstBank} \descript{| Java Development Intern } \location{May 2019 – Aug 2019 | Lakewood, CO} \begin{tightemize} \item Introduced to agile software development workflows (Scrum + Kanban) \item Created Behavior Driven Development (BDD) unit testing Java framework \item Presented the BDD framework to FirstBank's unit testing committee, Java development team, and FirstBank executives \item Tested, documented, and deployed the BDD framework to FirstBank's Common repository \end{tightemize} \sectionsep \runsubsection{Coalesce Development Community} \descript{| Contractor } \location{May 2017 – Aug 2018 | Denver, CO} \begin{tightemize} \item Built Minecraft plugins for server owners using the Java Spigot API \end{tightemize} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % RESEARCH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Research} \runsubsection{Sense2Safeguard} \descript{| Undergraduate Researcher} \location{Aug 2020 – July 2021 | Golden, CO} Worked under \textbf{\href{https://chemistry.mines.edu/project/klein-seetharaman-judith/}{Dr. 
Judith Klein}}, the director of BioScience and Engineering at Mines, to create \textbf{iCovid}, a tool which performs COVID-19 disease comorbidity analysis using data from the UK Biobank \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Involvement %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Involvement} \runsubsection{CS@Mines} \descript{| Teaching Assistant (Machine Learning) } \location{Aug 2021 – Present | Golden, CO} \begin{tightemize} \item Modified activities to better present machine learning concepts \end{tightemize} \sectionsep \runsubsection{Denver School of Science and Technology} \descript{| Tutor} \location{Aug 2019 – April 2020 | Golden, CO} \begin{tightemize} \item Spent afternoons tutoring middle-schoolers at DSST in math and science \end{tightemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % AWARDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Awards} \begin{tabular}{rll} 2018-2020 & Recipient & CS@MINES PATHS Scholarship \\ 2020 & 2\textsuperscript{nd}/15 & Algorithms Course's AlgoBowl \\ 2019 & 2\textsuperscript{nd}/16 & Tyler Tech Coding Competition \\ 2019 & 22\textsuperscript{nd}/76 & ICPC Rocky Mountain Regionals \\ 2018 & Varsity Letter & Arapahoe High School Cross Country \\ \end{tabular} \sectionsep \end{minipage} \end{document} \documentclass[]{article}
{ "alphanum_fraction": 0.6302452316, "avg_line_length": 29.4779116466, "ext": "tex", "hexsha": "0c81cc8bda6def41c4c8fe4863fd10677255687f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b24a28b417937f44d376274393a1639854aaf674", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "z3ht/resume", "max_forks_repo_path": "src/darling-resume-openfont.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b24a28b417937f44d376274393a1639854aaf674", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "z3ht/resume", "max_issues_repo_path": "src/darling-resume-openfont.tex", "max_line_length": 280, "max_stars_count": null, "max_stars_repo_head_hexsha": "b24a28b417937f44d376274393a1639854aaf674", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "z3ht/resume", "max_stars_repo_path": "src/darling-resume-openfont.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1845, "size": 7340 }
\section{\textbf{LockFreeQueueRecycleTest}}

\subsection{Particular Case (problem)}
From the generic problem of improving coarse-grained lock approaches, the particular approach followed in this exercise corresponds to the extreme case: a lock-free data structure. The particular case is a queue, and it has the additional bonus of recycling memory.

\subsection{Solution}
The solution is based on a 1996 ACM article by Maged M. Michael and Michael L. Scott, who, building on previous publications, suggest a new way of implementing a lock-free queue with node recycling. They implement the queue as a singly-linked list with \C{tail} and \C{head} pointers, where \C{head} always points to a dummy or sentinel node which is the first in the list, and \C{tail} points to either the last or the second-to-last node in the list. The algorithm uses ``compare and swap'' (\C{CAS}) with modification counters to avoid the ABA problem. \\

Dequeuers are allowed to free dequeued nodes by ensuring that \C{tail} does not point to the dequeued node nor to any of its predecessors (that is, dequeued nodes may be safely reused). To obtain consistent values of the various pointers, the authors rely on sequences of reads that re-check earlier values to ensure they have not changed (these reads are claimed to be simpler than snapshots).

\subsection{Experiment Description}
The test is essentially the same as the one described for \C{LockFreeQueueTest}, with the exception that it uses 8 threads instead of two.

\newpage
\subsection{Observations and Interpretations}
The test does not exhibit any pitfall, which suggests that the theory works just fine on the tested machines (we tried both the 2-core and the 24-core machine). Sample output below:

\begin{verbatim}
.parallel enq
.parallel deq
.parallel both
.sequential push and pop
Time: 0.023
OK (4 tests)
\end{verbatim}

\hfill
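For reference, the sketch below shows the enqueue step of the Michael--Scott algorithm in Java, written around \C{AtomicReference} compare-and-set calls. It is only an illustration of the linking and tail-swinging logic described above: the class and field names are ours, and the modification counters and node-recycling bookkeeping of the original article are omitted.

\begin{verbatim}
import java.util.concurrent.atomic.AtomicReference;

class SketchQueue<T> {
  private static class Node<E> {
    final E value;
    final AtomicReference<Node<E>> next = new AtomicReference<Node<E>>(null);
    Node(E value) { this.value = value; }
  }

  // head always points to the sentinel node, as described above
  private final AtomicReference<Node<T>> head;
  private final AtomicReference<Node<T>> tail;

  public SketchQueue() {
    Node<T> sentinel = new Node<T>(null);
    head = new AtomicReference<Node<T>>(sentinel);
    tail = new AtomicReference<Node<T>>(sentinel);
  }

  public void enq(T value) {
    Node<T> node = new Node<T>(value);
    while (true) {
      Node<T> last = tail.get();
      Node<T> next = last.next.get();
      if (last == tail.get()) {              // last and next still consistent?
        if (next == null) {
          // try to link the new node after the current last node
          if (last.next.compareAndSet(null, node)) {
            tail.compareAndSet(last, node);  // swing tail to the new node
            return;
          }
        } else {
          tail.compareAndSet(last, next);    // help advance a lagging tail
        }
      }
    }
  }
}
\end{verbatim}

Dequeue works analogously on \C{head}; it is there that the recycling rule quoted above applies, i.e. a node may only be freed or reused once \C{tail} can no longer reach it.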
{ "alphanum_fraction": 0.7885652643, "avg_line_length": 38.625, "ext": "tex", "hexsha": "8555b9c8327d9bec200fdcc15b0543d616d95a53", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rzavalet/multiprocessor", "max_forks_repo_path": "dario/Multiprocessor/LockFreeQueueRecycleTest.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rzavalet/multiprocessor", "max_issues_repo_path": "dario/Multiprocessor/LockFreeQueueRecycleTest.tex", "max_line_length": 71, "max_stars_count": null, "max_stars_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rzavalet/multiprocessor", "max_stars_repo_path": "dario/Multiprocessor/LockFreeQueueRecycleTest.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 433, "size": 1854 }
% s,strum
\newcommand{\strum} {\texttt{strum}\xspace}

\subsection{The \strum class}

Most object-oriented languages allow object classes to have private methods, \ie, functions that are intended for use only with objects or data of a specific class. \matlab also supports the use of private methods. If you put a function in an m-file named \ty{myfun.m} within the directory where a class \ty{myclass} is declared, \ie, in the directory \ty{@myclass}, then invoking \ty{myfun(ob)} will call the function \ty{myfun} if \ty{ob} is of class \ty{myclass}. This mechanism is very convenient, particularly when overloading an existing \matlab operation, but it has some limitations.
\blist
\item If several object classes can all use the same method \ty{myfun}, then you must put copies of \ty{myfun} in each of the object class directories (or use suitable links), which complicates software maintenance.
\item Every single method requires such an m-file, even if the function is only a few lines long, leading to a proliferation of little m-files littering the directories.
\item There is no mechanism for changing the methods during execution.
\elist
The \strum object class is essentially a special \emph{structure} that contains private \emph{methods} that are simply function handles. If \ty{st} is a \strum object that was declared to have a method \ty{method1}, then invoking
\[ \ty{st.method1(}\textsl{args}\ty{)} \]
will cause a call to the corresponding function handle. A concrete example is given in \ty{sino_geom.m}. A call of the form
\[ \ty{sg = sino_geom('par', 'nb', 128, 'na', 100, 'dr', 3, 'orbit', 180);} \]
creates a \strum object that describes a parallel-beam sinogram. This object has a variety of data elements and associated methods. For example, invoking \ty{sg.ar} returns a list of the projection view angles in radians, and \ty{sg.ad(2)} returns the second projection view angle in degrees. These methods are very short functions defined within the \ty{sino_geom.m} m-file.
{ "alphanum_fraction": 0.7616424637, "avg_line_length": 21.9450549451, "ext": "tex", "hexsha": "bd6e3884f8e5a300ebef0baca6fd1e64605b3ddd", "lang": "TeX", "max_forks_count": 50, "max_forks_repo_forks_event_max_datetime": "2022-03-16T09:19:27.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-12T09:20:17.000Z", "max_forks_repo_head_hexsha": "d2cd8d980a4a7a8ba7523850ed1c31d016f633df", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tianrluo/mirt", "max_forks_repo_path": "doc/s,strum.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "d2cd8d980a4a7a8ba7523850ed1c31d016f633df", "max_issues_repo_issues_event_max_datetime": "2021-03-27T03:29:20.000Z", "max_issues_repo_issues_event_min_datetime": "2019-06-15T22:02:22.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tianrluo/mirt", "max_issues_repo_path": "doc/s,strum.tex", "max_line_length": 72, "max_stars_count": 72, "max_stars_repo_head_hexsha": "d2cd8d980a4a7a8ba7523850ed1c31d016f633df", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tianrluo/mirt", "max_stars_repo_path": "doc/s,strum.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-08T04:10:47.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-04T08:11:12.000Z", "num_tokens": 543, "size": 1997 }
\section{Economic properties}

\subsection{Emission Rules}

Ergo emission will last for 2080799 blocks~(8 years with the planned 2-minute block interval): for the first 525600 blocks (2 years) 75 Erg will be issued per block, and after that the block reward will be reduced by 3 Erg every 64800 blocks~(3 months). To fund development, during the first 655200 blocks~(2.5 years) the part of the block reward that exceeds 67.5 Erg will go to the foundation treasury instead of the miner (see Fig.~\ref{fig:emission}).

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{img/emission.jpg}
\caption{Ergo emission rule \label{fig:emission}}
\end{figure}

Instead of having an implicit emission rule via a special type of transaction (e.g. the coinbase transaction in Bitcoin), Ergo's emission rule is defined explicitly in the sigma-state transactional language. The total miners' reward of 93409132.5 Erg will be created in the genesis state in a box protected by a script defined at https://git.io/fhgOq. This script allows a miner to take only a part of all the coins in every block. A transaction that spends this output should have exactly 2 outputs: the first one should have the same protecting script and contain all input coins minus the current block reward, while the second one may be collected by the current block miner after a delay of at least 720 blocks.

The total foundation reward will be kept in the genesis state in a box with 4330791.5 Erg and will be protected by the script defined at https://git.io/fhoqw. The first output of the transaction that spends this box should have at least the value of the remaining treasury. In addition, the conditions from the R4 register of this box should be satisfied, protecting this output from undesirable spending. At the beginning, the R4 register will contain a 2-of-3 multisignature proposition with hardcoded public keys.

Ergo emission will start from zero with no premine. As a proof of no pre-mine, the genesis state will contain an additional box with 1 Erg inside, protected by an unspendable proposition. This box's registers R4--R6 will contain the latest news headlines from The Guardian, Vedomosti, and Xinhua, while registers R7 and R8 will contain the latest block ids from the Bitcoin and Ethereum blockchains, respectively.

\subsection{Storage rent}

We outline the following two properties required for the long-term survivability of the chain:
\begin{itemize}
\item{} coins protected by keys that have been lost should be returned into circulation. Otherwise, after the end of the emission period, the amount of coins in circulation will always decrease and eventually reach zero.
\item{} nothing should be kept in the state forever and for free. Otherwise, the size of the state always increases with time, thus reducing clients' performance.
\end{itemize}

To achieve this, we propose the following storage fee consensus rules. Register $R3$ of a box contains the tuple $(creation\_height, tx\_id || out\_num)$, where $creation\_height$, provided by a user, is used to determine the block height at the moment of transaction generation. A transaction can only be put in a block of height $h$ if for every created box its $creation\_height \le h$. Once the subsidized period for the box ends (that is, $current\_block\_height \ge box.creation\_height + SP$, see the $SP$ value below), anyone~(presumably, a miner) can create a new box $out$ with exactly the same content (including the guarding script) except the monetary value and the $R3$ contents.
The monetary value is to be reduced by at most $K \cdot B$, where $B$ is the size of the spent box~($self$) and $K$ is the storage cost for the period $SP$; the difference is to be paid to the miner. If the box value is less than the storage fee, all the box content, including tokens, may be spent by the miner.

For an efficient lookup~(to find the proper output for an expired input without iterating over the transaction outputs), we require the spending proof for an expired box to be just a context extension which contains only the index of the output which is trying to spend the box. The variable identifier for the index in the extension is $127$.

We propose the following concrete parameters:
\begin{itemize}
\item{} $SP$: the length of the period for which a box can be stored in the state untouched and for free. It is a constant, namely $SP = 1051200 \approx 4$ years.
\item{} $K$: the cost of storing 1 byte of data in the state for a period of $SP$ blocks. It should be determined by miner votes; the default is $1250000~(nanoErg/SP)$ and the maximum value is $2500000$.
\end{itemize}

\knote{Provide a script}
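For illustration, with the default parameters a box of size $B = 100$ bytes (an arbitrary round number, not a protocol constant) that stays untouched for a full period can be charged at most
\[
K \cdot B = 1250000 \cdot 100 = 1.25\cdot10^{8}~nanoErg = 0.125~Erg
\]
per period of $SP \approx 4$ years.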
{ "alphanum_fraction": 0.775802255, "avg_line_length": 56.243902439, "ext": "tex", "hexsha": "0ae4ded58ebb85b36548e43438f2b1845db1dff3", "lang": "TeX", "max_forks_count": 131, "max_forks_repo_forks_event_max_datetime": "2022-03-22T01:08:16.000Z", "max_forks_repo_forks_event_min_datetime": "2017-07-19T12:46:49.000Z", "max_forks_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "scasplte2/ergo", "max_forks_repo_path": "papers/yellow/economy.tex", "max_issues_count": 886, "max_issues_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_issues_repo_issues_event_max_datetime": "2022-03-31T10:21:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-07-20T21:59:30.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "scasplte2/ergo", "max_issues_repo_path": "papers/yellow/economy.tex", "max_line_length": 123, "max_stars_count": 424, "max_stars_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "scasplte2/ergo", "max_stars_repo_path": "papers/yellow/economy.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T13:33:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-17T12:33:06.000Z", "num_tokens": 1095, "size": 4612 }
\section{Analysis format and code} All the reconstructed particles of all the detectors are kept in a file called \textbf{AliESDs.root}. The detectors must store there the most relevant information which will be used in the analysis. Together with the AliESDs.root file, another file is created with some reference tags of the simulated events, containing for example the number of events per run. This file is named \textbf{Run0.Event0\_1.ESD.tag.root} (1 means that only 1 event was simulated). In order to do the analysis with the data contained in the ESDs, the only file needed is \textbf{AliESDs.root} in the local directories or a grid collection. No other files are needed in the working directory (such as galice.root nor EMCAL.{*}.root) unless one needs to access the primary particles generated during the simulation. In that case, the files \textbf{galice.root} and \textbf{Kinematics.root} are needed locally. Also, if one want to access to some information of the detector geometry, the \textbf{geometry.root} file is needed. There are other data analysis containers created from the ESD, the AOD (Analysis Object Data) with smaller quantity of data for most of the subsystems but for the calorimeters, where we copy all the information\footnote{until half 2012 everything but the time of the cells was stored}. \subsection{Calorimeter information in ESDs/AODs} The basic calorimeter information needed for analysis is stored in the ESDs or AODs in the form of CaloClusters and CaloCells (cell = EMCal Tower or PHOS crystal). Also there is some information stored in the AOD/ESD event classes, it will be detailed more in the lines below. Both AOD and ESD classes derive from virtual classes so that with a similar analysis code and access methods, we can read both kind of data formats. \subsubsection{AliVEvent (AliESDEvent, AliAODEvent)} Those are manager classes for the event information retrieval. Regarding the calorimeters they have the following access information (getters) methods\footnote{There are the equivalent setters just have a look to the header file of the class}: \begin{itemize} \item AliVCaloCluster *GetCaloCluster(Int\_t i) : Returns a CaloCluster listed in position "i" in the array of CaloClusters. It can be either PHOS or EMCal (PHOS list of clusters is before the EMCal list). \item TClonesArray *GetCaloClusters() : Returns the array with CaloClusters PHOS+EMCAL, Only defined for AODs \item Int\_t GetEMCALClusters(TRefArray *clusters) ; Int\_t GetPHOSClusters(TRefArray *clusters) : Returns an array with only EMCal clusters or only with PHOS clusters. \item Int\_t GetNumberOfCaloClusters(): Returns the total number of clusters PHOS+EMCAL. \item AliVCaloCells *GetEMCALCells(); AliESDCaloCells *GetPHOSCells() : Returns the pointer with the CaloCells object for EMCal or PHOS. \item AliVCaloTrigger *GetCaloTrigger(TString calo) : Access to trigger patch information, for calo="PHOS" or calo="EMCAL" \item const TGeoHMatrix* GetPHOSMatrix(Int\_t i); const TGeoHMatrix* GetEMCALMatrix(Int\_t i): Get the matrices for the transformation of global to local. The transformation matrices are not stored in the AODs. \end{itemize} \subsubsection{AliVCaloCluster (AliESDCaloCluster,AliAODCaloCluster)} They contain the information of the calorimeter clusters. Note that PHOS and EMCAL CaloClusters are kept in the same TClonesArray (see above). 
The information stored in each CaloCluster is : \begin{itemize} \item General \begin{itemize} \item Int\_t GetID(): It returns a unique identifier number for a CaloCluster. \item Char\_t GetClusterType():It returns kPHOSNeutral (kPHOSCharged exists but not used) or kEMCALClusterv1. Another way to get the origin of the cluster: \item Bool\_t IsEMCAL(); Bool\_t IsPHOS(). \item void GetPosition(Float\_t *pos) : It returns a {x,y,z} array with the global positions of the clusters in centimeters. \item Double\_t E() : It returns the energy of the cluster in GeV units. \item void GetMomentum(TLorentzVector\& p, Double\_t * vertexPosition ): It fills a TLorentzVector pointing to the measured vertex of the collision. It also modifies the cluster global positions to have a vector pointing to the vertex, this has to be corrected. Assumes that cluster is neutral. To be used only for analysis with clusters not matched with tracks. \end{itemize} \item Shower Shape \begin{itemize} \item Double\_t GetDispersion(): Dispersion of the shower. \item Double\_t Chi2(): Not filled. \item Double\_t GetM20() Double\_t GetM02() : Ellipse axis. \item UChar\_t GetNExMax() : Number or maxima in cluster. Not filled. \item Double\_t *GetPID(): PID weights array, 10 entries corresponding to the ones defined in AliPID.h \item enum EParticleType { kElectron = 0, kMuon = 1, kPion = 2, kKaon = 3, kProton = 4, kPhoton = 5, kPi0 = 6, kNeutron =7, kKaon0 = 8, kEleCon = 9,kUnknown = 10}; : PID tag numbers, corresponding to the PID array \item Double\_t GetDistanceToBadChannel() : Distance of the cluster to closest channel declared as kDead, kWarm or kHot. \item Double\_t GetTOF() : Measured Time of Flight of the cluster. \end{itemize} \item Track-Cluster matching \begin{itemize} \item TArrayI * GetTracksMatched(): List of indexes to the likely matched tracks. Tracks ordered in matching likeliness. If there is no match at all, by default it contains one entry with value -1. Only in ESDs. \item Int\_t GetTrackMatchedIndex(Int\_t i): Index of track in position "i" in the list of indices stored in GetTracksMatched(). Only in ESDs \item Int\_t GetNTracksMatched() : Total number of likely matched tracks. Size of GetTracksMatched() array. \item Double\_t GetEmcCpvDistance() : PHOS method, not used anymore. Use instead those below. \item Double\_t GetTrackDx(void), Double\_t GetTrackDz(void): Distance in x and z to closest track. \item TObject * GetTrackMatched(Int\_t i): References to the list of most likely matched tracks are stored in a TRefArray. This method retrives the one in position "i". Tracks are listed in order of likeliness. The TObject is a AliAODTrack. Only for AODs \end{itemize} \item MonteCarlo labels: \begin{itemize} \item TArrayI * GetLabels(): List of indexes to the MonteCarlo particles that contribute to the cluster. Labels ordered in energy contribution. \item Int\_t GetLabel(): Index of MonteCarlo particle that deposited more energy in the cluster. First entry of GetLabels() array. \item Int\_t GetLabelAt(UInt\_t i): Index of MonteCarlo particle in position i of the array of MonteCarlo indices. \item Int\_t GetNLabels() : Total number of MonteCarlo particles that deposited energy. Size of GetLabels() array. \end{itemize} \item Cluster cells \begin{itemize} \item Int\_t GetNCells() : It returns the number of cells that contribute to the cluster. \item UShort\_t *GetCellsAbsId(): It returns the array with absolute id number of the cells contributing to the cluster. Size of the array is given by GetNCells(). 
\item Double32\_t *GetCellsAmplitudeFraction(): For cluster unfolding, it returns an array with the fraction the energy that a cell contributes to the cluster. \item Int\_t GetCellAbsId(Int\_t i) : It returns the absolute Id number of a cell in the array between 0 and GetNCells()-1. \item Double\_t GetCellAmplitudeFraction(Int\_t i) : It returns the amplitude fraction of a cell in the array between 0 and GetNCells()-1. \end{itemize} \end{itemize} \subsubsection{AliVCaloCells (AliESDCaloCells, AliAODCaloCells)} They contain an array with the amplitude or time of all the cells that fired in the calorimeter during the event. Notice that per event there will be a CaloCell object with EMCAL cells and another one with PHOS cells. \begin{itemize} \item Short\_t GetNumberOfCells(): Returns number of cells with some energy. \item Bool\_t IsEMCAL(); Bool\_t IsPHOS(); Char\_t GetType(): Methods to check the origin of the AliESDCaloCell object, kEMCALCell or kPHOSCell. \item Short\_t GetCellNumber(Short\_t pos): Given the position in the array of cells (from 0 to GetNumberOfCells()-1), it returns the absolute cell number (from 0 to NModules*NRows*NColumns - 1). \item Double\_t GetCellAmplitude(Short\_t cellNumber): Given absolute cell number of a cell (from 0 to NModules*NRows*NColumns - 1), it returns the measured amplitude of the cell in GeV units. \item Double\_t GetCellTime(Short\_t cellNumber): Given absolute cell number of a cell (from 0 to NModules*NRows*NColumns - 1), it returns the measured time of the cell in second units. \item Double\_t GetAmplitude(Short\_t pos): Given the position in the array of cells (from 0 to GetNumberOfCells()-1), it returns the amplitude of the cell in GeV units. \item Double\_t GetTime(Short\_t pos): Given the position in the array of cells (from 0 to GetNumberOfCells()-1), it returns the time of the cell in second units. \item Double\_t GetCellMCLable(Short\_t cellNumber): Given absolute cell number of a cell (from 0 to NModules*NRows*NColumns - 1), it returns the index of the most likely MC label. \item Double\_t GetCellEFraction(Short\_t cellNumber): Given absolute cell number of a cell (from 0 to NModules*NRows*NColumns - 1), it returns the fraction of embedded energy from MC to real data (only for embedding) \item Double\_t GetMCLabel(Short\_t pos): Given the position in the array of cells (from 0 to GetNumberOfCells()-1), it returns the index of the most likely MC label. \item Double\_t GetEFraction(Short\_t pos): Given the position in the array of cells (from 0 to GetNumberOfCells()-1), it returns the fraction of embedded energy from MC to real data (only for embbedding) \item Bool\_t GetCell(Short\_t pos, Short\_t \&cellNumber, Double\_t \&amplitude, Double\_t \&time, Short\_t \&mclabel, Double\_t \&efrac); : For a given position of the list of cells, it fills the amplitude, time, mc lable and fraction of energy. \end{itemize} \subsubsection{AliVCaloTrigger (AliESDCaloTrigger, AliAODCaloTrigger) - Rachid)} \subsection{Macros} You can find example macros to run on ESDs or AODs in \begin{lstlisting} $ALICE_ROOT/EMCAL/macros/TestESD.C or TestAOD.C \end{lstlisting} All the ESDs information is filled via the AliEMCALReconstructor/AliPHOSReconstructor class, in the method FillESD(). The AODs are created via the analysis class \begin{lstlisting} $ALICE_ROOT/ANALYSIS/AliAnalysisTaskESDfilter.cxx,.h \end{lstlisting} and as already mentioned, for the calorimeters it basically just copies all the information from ESD format to AOD format. 
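As a quick orientation before the detailed walk-through of \texttt{TestESD.C} below, the following stripped-down sketch shows the basic access pattern using only the getters listed above. It assumes \texttt{esd} is a valid \texttt{AliESDEvent} pointer for the current event and it skips the file handling, vertex retrieval and MonteCarlo information that the full macro takes care of.

\begin{DDbox}{\linewidth}
\begin{lstlisting}
// Loop over the EMCal clusters of the current event
TRefArray clusters;
esd->GetEMCALClusters(&clusters);
for (Int_t i = 0; i < clusters.GetEntriesFast(); i++)
{
  AliESDCaloCluster *clus = (AliESDCaloCluster*) clusters.At(i);
  Float_t pos[3];
  clus->GetPosition(pos); // global position in centimeters
  printf("Cluster %d: E = %2.3f GeV, nCells = %d, (x,y,z) = (%3.1f, %3.1f, %3.1f) cm\n",
         clus->GetID(), clus->E(), clus->GetNCells(), pos[0], pos[1], pos[2]);
}

// Loop over all fired EMCal cells (towers) of the current event
AliESDCaloCells *cells = esd->GetEMCALCells();
for (Short_t j = 0; j < cells->GetNumberOfCells(); j++)
{
  printf("Cell absId %d: amplitude = %2.3f GeV, time = %e s\n",
         cells->GetCellNumber(j), cells->GetAmplitude(j), cells->GetTime(j));
}
\end{lstlisting}
\end{DDbox}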
Below is a description of what information is stored and how to retrieve it. The location of the corresponding classes is \begin{lstlisting} $ALICE_ROOT/STEER \end{lstlisting} \subsection{Code example} The analysis is done using the data stored in the ESD. The macro \begin{lstlisting} $ALICE_ROOT/EMCAL/macros/TestESD.C \end{lstlisting} is an example of how to read the data for the calorimeters PHOS and EMCal (just replace where it says EMCAL by PHOS in the macro to obtain PHOS data). For these detectors we have to use the ESD class AliESDCaloCluster or AliESDCaloCells to retrieve all the calorimeters information. For the tracking detectors, the class is called AliESDtrack, but the way to use it is very similar (see ``\$ALICE\_ROOT/STEER/AliESDtrack.*''\\ and ``\$ALICE\_ROOT/STEER/AliESDCaloCluster*'' for more details). In AliESDCaloCluster we keep the following cluster information: energy, position, number of Digits that belong to the cluster, list of the cluster Digits indeces, shower dispersion, shower lateral axis and a few more parameters. In AliESDCaloCells we keep the following tower information: amplitude (GeV), time (seconds), absolute cell number. The structure of the ESD testing macro (TestESD.C) is the following: \begin{itemize} \item Lines 0-29: This macro is prepared to be compiled so it has ``includes'' to all the Root and AliRoot classes used. \item Lines 30-36: This macro prints some information on screen, the kind of information is set here. We print by default clusters information and optionally, the cells information, the matches information, the cells in the clusters information or the MonteCarlo original particle kinematics. \item Lines 40-64: Here are the methods used to load AliESDs.root , geometry or kinematics files. Also loop on ESD event is here. \item Lines 65-66 Gets the measured vertex of the collision. \item Lines 69-78 Loops on all the CaloCell entries and prints the cell amplitude, absolute number and time. \item Lines 84- end: We access the EMCAL AliESDCaloCluster array and loop on it. We get the different information from the CaloCluster. \item Lines 111-130: Track Matching prints. Access to the matched track stored in AliESDtrack. \item Lines 133-159: Cells in cluster prints \item Lines 161 - end: Access the stack with the MC information and prints the parameters of the particle that generated the cluster. \end{itemize} \subsection{Advanced utilities : Reconstruction/corrrections of cells, clusters during the analysis} \subsubsection{AliEMCALRecoUtils} \subsubsection{Tender : AliEMCALTenderSupply} \subsubsection{Particle Identification with the EMCal} In the EMCal we have two different ways to obtain the PID of a given particle: \begin{enumerate} \item Shower shape of the cluster: Distinguish electrons/photons and $\pi^{0}$ from other particles. This is done without any track information in the class \texttt{AliEMCALPID}. \item Ratio between energy of the cluster and the momentum of a matched track: Distinguish electrons from other particles. This is done in the combined PID framework by the class \texttt{AliEMCalPIDResponse}. \end{enumerate} \paragraph{AliEMCALPID (AliEMCALPIDutils)} The idea for particle identification with EMCal clusters is that the shower shapes produced in the EMCal are different for different particle species. 
The long axis of the cluster ($\lambda_{0}$) is used for this purpose and parametrized for electrons/photons (should have the same electromagnetic shower), for $\pi^{0}$ (two photons merging in one cluster give a different shape than one photon), and hadrons (MIP signal). The main method in this class is \texttt{RunPID()}, which calls \texttt{AliEMCALPIDutils::ComputePID(Double\_t energy, Double\_t lambda0)} for each cluster with cluster energy \texttt{energy} and long axis of the cluster \texttt{lambda0} in the event. Inside this method first the \texttt{energy} dependent parametrizations for the three cases (photons, $\pi^{0}$, and hadrons) are retrieved. The parametrization is done here with a combination of a Gaussian and a Landau (at the moment there are two parameter sets available: low and high flux, which can be set by \texttt{AliEMCALPID::SetLowFluxParam()} and \texttt{AliEMCALPID::SetHighFluxParam()}). Then a conditional probability is assigned to the cluster for each of these three species depending on \texttt{lambda0}. Finally a probability for a cluster being of a certain particle species is calculated with the Bayesian approach that can be retrieved by \texttt{AliVCluster::GetPID()}.\\ \begin{DDbox}{\linewidth} \begin{lstlisting} // compute PID weights if( (fProbGamma + fProbPiZero + fProbHadron)>0.){ fPIDWeight[0] = fProbGamma / (fProbGamma + fProbPiZero + fProbHadron); // gamma fPIDWeight[1] = fProbPiZero / (fProbGamma+fProbPiZero+fProbHadron); // pi0 fPIDWeight[2] = fProbHadron / (fProbGamma+fProbPiZero+fProbHadron); // hadron } ... //default particles fPIDFinal[AliPID::kElectron] = fPIDWeight[0]/2; // electron fPIDFinal[AliPID::kMuon] = fPIDWeight[2]/8; fPIDFinal[AliPID::kPion] = fPIDWeight[2]/8; fPIDFinal[AliPID::kKaon] = fPIDWeight[2]/8; fPIDFinal[AliPID::kProton] = fPIDWeight[2]/8; //light nuclei fPIDFinal[AliPID::kDeuteron] = 0; fPIDFinal[AliPID::kTriton] = 0; fPIDFinal[AliPID::kHe3] = 0; fPIDFinal[AliPID::kAlpha] = 0; //neutral particles fPIDFinal[AliPID::kPhoton] = fPIDWeight[0]/2; // photon fPIDFinal[AliPID::kPi0] = fPIDWeight[1] ; // pi0 fPIDFinal[AliPID::kNeutron] = fPIDWeight[2]/8; fPIDFinal[AliPID::kKaon0] = fPIDWeight[2]/8; fPIDFinal[AliPID::kEleCon] = fPIDWeight[2]/8; // fPIDFinal[AliPID::kUnknown] = fPIDWeight[2]/8; \end{lstlisting} \end{DDbox} \paragraph{AliEMCalPIDResponse} The idea for particle identification with EMCal clusters AND the track information is that electrons are loosing their total energy in an electromagnetic shower inside the EMCal whereas all other charged particles only part of it. The main observable is $E/p$ with the energy of the EMCal cluster ($E$) and the momentum of a matched track ($p$). This ratio is $E/p\sim1$ for electrons and $E/p< 1$ for other charged particles. The decision about a particle species is done within the PID framework provided by ALICE. The main classes are: \texttt{STEER/STEERBase/AliPIDCombined} and \texttt{STEER/STEERBase/AliPIDResponse}. There are two methods of usage: \begin{enumerate} \item $n\sigma$ method: For each detector the multiples of $\sigma$ values are given for the deviation from the expected mean value at a given $p_{\mathrm{T}}$ (assuming a Gaussian distribution). 
This can be done via: \texttt{AliPIDResponse::GetNumberOfSigmas(EDetector detCode, const AliVParticle *track, AliPID::EParticleType type)} \item Bayesian approach: In \texttt{AliPIDCombined::ComputeProbabilities(const AliVTrack *track, const AliPIDResponse *response, Double\_t* bayesProbabilities)} for each detector (included in the analysis via\\ \texttt{AliPIDCombined::SetDetectorMask(AliPIDResponse::EDetector)}) the conditional probability for the respective detector observable is calculated for each particle species (selected via\\ \texttt{AliPIDCombined::SetSelectedSpecies(AliPID::EParticleType)}). Then the probability for a track being of a certain particle type is calculated with the Bayesian approach. The initial particle abundances (priors) can be activated via\\ \texttt{AliPIDCombined::SetEnablePriors(kTRUE)} and either user-defined priors can be loaded\\ (\texttt{AliPIDCombined::SetPriorDistribution(AliPID::EParticleType type,TH1F *prior)}) or default ones can be chosen (\texttt{AliPIDCombined::SetDefaultTPCPriors()}). \end{enumerate} For the case of the EMCal, the $n\sigma$ as well as the conditional probability are calculated in\\ \texttt{AliEMCALPIDResponse::GetNumberOfSigmas( Float\_t pt, Float\_t eop, AliPID::EParticleType n, Int\_t charge)} and\\ \texttt{AliEMCALPIDResponse::ComputeEMCALProbability(Int\_t nSpecies, Float\_t pt, Float\_t eop, Int\_t charge, Double\_t *pEMCAL)}, respectively. These methods are called from \texttt{AliPIDCombined} and \texttt{AliPIDResponse} internally, so usually the user does not call them directly. To calculate $n\sigma$ and the conditional probability, a parametrization of $E/p$ for the different particle species at different momenta is needed. This was retrieved from data in a clean PID sample with the help of $V0$ decays for electrons, pions and protons ($\gamma\rightarrow e^+e^-$, $K^0\rightarrow \pi^+\pi^-$, and $\Lambda\rightarrow p+\pi^-/\bar{p}+\pi^+$) for different periods. Electrons are parametrized with a Gaussian distribution (mean value and $\sigma$); all other particles are parametrized with a Gaussian for $0.5 < E/p < 1.5$ and the probability to have a value in this $E/p$ interval (this is small, since the maximum of the distribution lies around $0.1$). Here we distinguish between protons, antiprotons (higher probability for higher $E/p$ values due to annihilation) and other particles (pions are used for these). At the moment this parametrization is not available for all periods; LHC11d is taken as the default. In particular, there might be some centrality dependence of the $E/p$ parametrization (because of the different multiplicities of track--cluster matches). In addition, the purity of the electron identification can be enhanced by applying shower shape cuts. At the moment this can be done by retrieving them together with $n\sigma$:\\ \texttt{AliEMCALPIDResponse::NumberOfSigmasEMCAL(const AliVParticle *track,\\ AliPID::EParticleType type, Double\_t \&eop, Double\_t showershape[4])} In the future, a full treatment inside the PID framework is planned (by combining with \texttt{AliEMCALPID}).
Some more information can be found on the following TWiki pages: \begin{itemize} \item \href{https://twiki.cern.ch/twiki/bin/view/ALICE/AlicePIDTaskForce}{https://twiki.cern.ch/twiki/bin/view/ALICE/AlicePIDTaskForce} \item \href{https://twiki.cern.ch/twiki/bin/view/ALICE/PIDInAnalysis}{https://twiki.cern.ch/twiki/bin/view/ALICE/PIDInAnalysis} \item \href{https://twiki.cern.ch/twiki/bin/viewauth/ALICE/EMCalPIDResponse}{https://twiki.cern.ch/twiki/bin/viewauth/ALICE/EMCalPIDResponse} \end{itemize} Here follows an example of how to include the EMCal PID in an analysis task: \begin{DDbox}{\linewidth} \begin{lstlisting} // in analysis header (pointer data members, initialized to 0 in the constructor): AliPIDCombined *fPIDCombined; AliPIDResponse *fPIDResponse; // in Constructor: fPIDCombined(0), fPIDResponse(0), // in UserCreateOutputObjects // Set up combined PID AliAnalysisManager *man=AliAnalysisManager::GetAnalysisManager(); AliInputEventHandler* inputHandler = (AliInputEventHandler*) (man->GetInputEventHandler()); fPIDResponse = (AliPIDResponse*)inputHandler->GetPIDResponse(); fPIDCombined = new AliPIDCombined; fPIDCombined->SetSelectedSpecies(AliPID::kSPECIES); fPIDCombined->SetDetectorMask(AliPIDResponse::kDetEMCAL); fPIDCombined->SetEnablePriors(kFALSE); // in UserExec: Double_t pEMCAL[AliPID::kSPECIES]; Double_t pBAYES[AliPID::kSPECIES]; Double_t nSigma; Double_t eop; Double_t ss[4]; //shower shape parameters (number of cells, M02, M20, Dispersion) // NSigma value for electrons nSigma = fPIDResponse->NumberOfSigmas(kEMCAL,track,AliPID::kElectron); // or with getting also the E/p and shower shape values nSigma = fPIDResponse->NumberOfSigmasEMCAL(track,AliPID::kElectron,eop,ss); // Bayes probability fPIDCombined->ComputeProbabilities(track, fPIDResponse, pBAYES); \end{lstlisting} \end{DDbox}
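The selection itself is analysis specific and not prescribed by the framework. As a rough illustration only (the cut values and the \texttt{isElectron} flag below are hypothetical and would have to be tuned for a given data set and period), the quantities obtained above could be combined into a simple electron selection:
\begin{DDbox}{\linewidth}
\begin{lstlisting}
// in UserExec, after the calls shown above (illustrative cut values only)
Bool_t isElectron = kFALSE;

// nSigma close to the electron hypothesis, E/p compatible with unity,
// and an electron-like shower shape (ss[1] corresponds to M02)
if(TMath::Abs(nSigma) < 3. && eop > 0.8 && eop < 1.2 &&
   ss[1] > 0.02 && ss[1] < 0.3) isElectron = kTRUE;

// alternatively, cut on the Bayesian electron probability
if(pBAYES[AliPID::kElectron] > 0.8) isElectron = kTRUE;
\end{lstlisting}
\end{DDbox}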
{ "alphanum_fraction": 0.7669353761, "avg_line_length": 63.4265536723, "ext": "tex", "hexsha": "55f7947a819aef26eb56d85c76ebe647af369812", "lang": "TeX", "max_forks_count": 275, "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:06:19.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-21T20:24:05.000Z", "max_forks_repo_head_hexsha": "c53712645bf1c7d5f565b0d3228e3a6b9b09011a", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "AllaMaevskaya/AliRoot", "max_forks_repo_path": "EMCAL/doc/analysis.tex", "max_issues_count": 1388, "max_issues_repo_head_hexsha": "c53712645bf1c7d5f565b0d3228e3a6b9b09011a", "max_issues_repo_issues_event_max_datetime": "2022-03-30T15:26:09.000Z", "max_issues_repo_issues_event_min_datetime": "2016-11-01T10:27:36.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "AllaMaevskaya/AliRoot", "max_issues_repo_path": "EMCAL/doc/analysis.tex", "max_line_length": 1092, "max_stars_count": 52, "max_stars_repo_head_hexsha": "c53712645bf1c7d5f565b0d3228e3a6b9b09011a", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "AllaMaevskaya/AliRoot", "max_stars_repo_path": "EMCAL/doc/analysis.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-11T11:49:35.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-11T13:04:01.000Z", "num_tokens": 6021, "size": 22453 }
\section{Exponential and Logarithmic Functions} In the previous section we dealt with functions composed of integer powers of $x$. We will now briefly focus on functions where $x$ appears in the exponent itself, and on their inverse functions. An \emph{exponential function}, or simply an \emph{exponential}, is a real function of the type \begin{equation} f(x) = b^{x}, \label{eq:exponent} \end{equation} where $b>0$ is called the \emph{base} of the exponentiation, and $x$ the \emph{exponent}. The values of an exponential, regardless of base, are always positive. In addition, all exponentials pass through the point $(0,1)$, since $b^{0}=1$ for any positive real number, and through the point $(1,b)$, since $b^{1}=b$. When $b>1$ the function is increasing on $\mathbb{R}$, while for $b<1$ the function is decreasing on $\mathbb{R}$. \begin{example}{Exponential functions}{exponents} The following are graphs of the exponential functions \textcolor{xred}{$\bm{1.5^{x}}$}, \textcolor{xblue}{$\bm{2^{x}}$} and \textcolor{xgreen}{$\bm{3.5^{x}}$}: \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ graph2d, y axis line style={-stealth, thick}, width=8cm, height=8cm, xmin=-5, xmax=3, ymin=0, ymax=4, domain=-5:3, restrict y to domain=0:5, ] \addplot[function, xred] {(1.5)^\x}; \addplot[function, xblue] {2^\x}; \addplot[function, xgreen] {3.5^\x}; \end{axis} \end{tikzpicture} \end{figure} And the following are graphs of the exponential functions \textcolor{xpurple}{$\bm{0.7^{x}}$}, \textcolor{xorange}{$\bm{0.5^{x}}$} and $\bm{0.2^{x}}$: \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ graph2d, y axis line style={-stealth, thick}, width=8cm, height=8cm, xmin=-5, xmax=3, ymin=0, ymax=4, domain=-5:3, restrict y to domain=0:5, ] \addplot[function, xpurple] {0.7^\x}; \addplot[function, xorange] {0.5^\x}; \addplot[function, black] {0.2^\x}; \end{axis} \end{tikzpicture} \end{figure} \end{example} As a reminder, the following are two well known properties of exponents: given a base $b>0$, \begin{align} b^{-x} &= \frac{1}{b^{x}},\\ b^{x}b^{y} &= b^{x+y}. \label{eq:exponents_properties} \end{align} A special base for exponential functions is the real, non-algebraic number $\eu$. This number has many names, among them \emph{Euler's number}, but in the context of exponentials it is known as the \emph{natural base}. Its exact value is not important for the moment: it is about $2.718$, and in any case it cannot be written down exactly, as it has infinitely many digits after the decimal point. It is very common across different fields of mathematics and science to write $\exp(x)$ instead of $\eu^{x}$. The inverse functions of exponentials are the \emph{logarithmic functions} (or simply \emph{logarithms}), i.e.\ for any real $b>0,\ b\neq1$, \begin{equation} \log_{b}\left( b^{x} \right) = b^{\log_{b}(x)} = x. \label{eq:logarithms} \end{equation} In essence, the logarithm in base $b$ of a number $x$ answers the question \textit{``what is the number $a$ for which $b^{a}=x$?''}. Being the inverses of exponential functions, all logarithms go through the point $(1,0)$, and each also passes through its own point $(b,1)$.
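For instance, since $2^{3}=8$ we have
\[
 \log_{2}(8) = 3,
\]
and since $10^{-2}=0.01$ we have
\[
 \log_{10}(0.01) = -2,
\]
which also shows that, unlike exponentials, logarithms can take negative values.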
\begin{example}{Logarithmic functions}{logarithms} The following are graphs of the logarithmic functions \textcolor{xred}{${\log_{1.5}(x)}$}, \textcolor{xblue}{${\log_{2}(x)}$} and \textcolor{xgreen}{${\log_{3.5}(x)}$}: \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ graph2d, x axis line style={-stealth, thick}, width=8cm, height=8cm, xmin=0, xmax=4, ymin=-5, ymax=3, domain=0:4, restrict y to domain=-10:4, ] \addplot[function, xred] {ln(\x)/ln(1.5)}; \addplot[function, xblue] {ln(\x)/ln(2)}; \addplot[function, xgreen] {ln(\x)/ln(3.5)}; \end{axis} \end{tikzpicture} \end{figure} ...and the following are graphs of the logarithmic functions \textcolor{xpurple}{$\log_{0.75}(x)$}, \textcolor{xorange}{$\log_{0.5}(x)$} and $\log_{0.2}(x)$: \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ graph2d, x axis line style={-stealth, thick}, width=8cm, height=8cm, xmin=0, xmax=4, ymin=-3, ymax=5, domain=0:4, restrict y to domain=-4:10, ] \addplot[function, xpurple] {ln(\x)/ln(0.75)}; \addplot[function, xorange] {ln(\x)/ln(0.5)}; \addplot[function, black] {ln(\x)/ln(0.2)}; \end{axis} \end{tikzpicture} \end{figure} \end{example} A useful property of logarithms is that they can help reduce ranges spanning several orders of magnitude to numbers humans can deal with. The easiest way to see this is using $b=10$: $10^{1}=10$, and so $\log_{10}(10)=1$. $10^{2}=100$, and so $\log_{10}(100)=2$. $10^{3}=1000$, and so $\log_{10}(1000)=3$, etc. The value of the logarithm increases by $1$ for each increase in the order of magnitude of its argument. Therefore, if we have some measurement $x$ which can hold values spanning several orders of magnitude (say $x\in[3,1500000000]$), then it can sometimes be useful to use instead the logarithmic value of $x$ (which in our case would span the range $\log_{10}(x)\in[0.477,9.176]$). This is done in many fields of science, for example some definitions of entropy\footnote{$S=k_{\text{B}}\log\left( \Omega \right)$}, acid dissociation constants\footnote{$\text{p}K_{a}=-\log\left( K_{\text{diss}} \right)$}, pH\footnote{$\text{pH}=-\log\left(\ce{[H+]}\right)$} and more. \begin{example}{Logarithms as evaluating orders of magnitude}{} In the following graph of $\log_{2}(x)$, each increase by a power of two in $x$ (i.e. $x=1,2,4,8,16,\dots$) yields an increase of only $1$ in $y$ (i.e. $y=0,1,2,3,4,\dots$). This shows how logarithms shift our perspective from absolute values to orders of magnitude. \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ graph2d, x axis line style={-stealth, thick}, width=10cm, height=8cm, xmin=0, xmax=16, ymin=-4, ymax=8, domain=0:16, restrict y to domain=-7:10, grid=major, xtick={0,1,2,4,8,16}, ytick={-4,-3,...,8}, ] \addplot[function, xpurple] {ln(\x)/ln(2)}; \end{axis} \end{tikzpicture} \end{figure} \end{example} Using the definition of the logarithmic function $\log_{b}(x)$ (\autoref{eq:logarithms}) and the product rule for exponentials (\autoref{eq:exponents_properties}), a similar rule can be derived for logarithms. Let $x,y>0$ and $b>0,\ b\neq1$ all be real numbers. We define \begin{equation} \log_{b}(x)=M,\ \log_{b}(y)=N, \label{eq:log_addition_rule_step1} \end{equation} which means \begin{equation} b^{M}=x,\ b^{N}=y.
\label{eq:log_addition_rule_step2} \end{equation} From \autoref{eq:exponents_properties} we know that \begin{equation} xy = b^{M}b^{N} = b^{M+N}, \label{eq:log_addition_rule_step3} \end{equation} and by re-applying the definition of logarithmic functions we get that \begin{equation} \log_{b}(xy) = M+N = \log_{b}(x) + \log_{b}(y). \label{eq:log_addition_rule_step4} \end{equation} Similarly to \autoref{eq:log_addition_rule_step4}, division yields subtraction: \begin{equation} \log_{b}\left(\frac{x}{y}\right) = \log_{b}(x)-\log_{b}(y). \label{eq:log_subtraction_rule} \end{equation} Equations \ref{eq:log_addition_rule_step4} and \ref{eq:log_subtraction_rule} reveal another valuable property of logarithms: they reduce multiplication to addition (and subsequently division to subtraction). While today this property doesn't seem very impressive, in pre-computers days it helped carrying on complicated calculations, using tables of pre-calculated logarithms (called simply \emph{logarithm tables}) - a sight rarely seen today. Taking one step forward in regards to reduction of operations, logarithms reduce powers to multiplication: \begin{equation} \log_{b}\left( x^{k} \right) = k\log_{b}(x). \label{eq:log_product_rule} \end{equation} for any $k\in\mathbb{R}$. (TBW:\@ proving this will be in the chapter questions to the reader) Any logarithm $\log_{b}(x)$ can be expressed using another base, i.e. $\log_{a}(x)$ (where $a>0,\ a\neq1$) using the following formula: \begin{equation} \log_{a}(x) = \log_{b}(x)\cdot\log_{a}(b). \label{eq:log_base_change} \end{equation} (TBW:\@ proving this too will be a question to the reader) \begin{example}{Changing logarithm base}{} Expressing $\log_{4}(x)$ in terms of $\log_{2}(x)$: \[ \log_{4}(x) = \log_{2}(x)\cdot\underbrace{\log_{4}(2)}_{=\frac{1}{2}} = \frac{1}{2}\log_{2}(x). \] \end{example} Much like with exponentials, the number $e$ plays an important role when it comes to logarithms, for reasons that are discussed in the calculus chapter (ref). For now, we will just mention that $\log_{e}(x)$ gets a special notation: $\ln(x)$, which stands for \emph{natural logarithm}. This notation is mainly used in applied mathematics and science, while in pure mathematics the notation is simply $\log(x)$, i.e. without mentioning the base \footnote{Depending on convention and context, this notation can refer to logarithm in any other base, most commonly $\log_{10}(x)$ and $\log_{2}(x)$.}. For reason we will see in the calculus chapter, it is relatively simple to calculate both the exponential and logarithm in base $e$. Therefore, many operations in modern computations are actually done using these functions, for example calculating logarithms in other bases: \begin{equation} \log_{b}(x) = \frac{\ln(x)}{\ln(b)}. \label{eq:ln_base_change} \end{equation} Another operation commonly using both $\eu^{x}$ and $\ln(x)$ is raising a real number $a$ to a real power $b$: using the properties of both exponential and logarithmic functions, any such power can be expressed as \begin{equation} a^{b} = \eu^{b\ln(a)}. \label{eq:powers_using_e} \end{equation}
{ "alphanum_fraction": 0.6888303264, "avg_line_length": 48.8415841584, "ext": "tex", "hexsha": "d50eb3ff4ae72b32c80b381ec47fde26f045df7a", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-02-02T10:45:13.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-17T10:15:18.000Z", "max_forks_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "JASory/maths_book", "max_forks_repo_path": "chapters/intro/exponentials_logarithms.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_issues_repo_issues_event_max_datetime": "2022-01-20T06:18:24.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-17T05:01:10.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "JASory/maths_book", "max_issues_repo_path": "chapters/intro/exponentials_logarithms.tex", "max_line_length": 596, "max_stars_count": 28, "max_stars_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "JASory/maths_book", "max_stars_repo_path": "chapters/intro/exponentials_logarithms.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-08T17:57:59.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-25T20:02:57.000Z", "num_tokens": 3250, "size": 9866 }
\section{Decay tables in EvtGen} \label{sect:decaytable} \index{DECAY.DEC} The decay table included in the EvtGen package, {\tt DECAY.DEC}, provides an extensive list of decays for resonances below the $\Upsilon(4S)$. However, the table is updated with available experimental and theoretical information on a regular basis. To describe the format of the decay table consider the decay of a $D^{*+}$ meson \index{keywords} \index{Decay} \index{Enddecay} \index{branching fraction} \index{keywords!Decay} \index{keywords!Enddecay} \begin{verbatim} Decay D*+ 0.67 D0 pi+ VSS; 0.33 D+ pi0 VSS; Enddecay \end{verbatim} A decay entry starts with the keyword ``Decay'' and ends with ``Enddecay''. Throughout the decay table, capitalization is important. Decay channel entries are separated by ``;''. There are three main components needed to specify a decay channel. First the branching ratio is given, followed by the list of final state particles. Finally the model used to simulate the decay is specified. For many models, the order of final state particles is important. The correct order for each model can be found in the {\tt DECAY.DEC} provided, or in Section~\ref{sect:models} of this documentation. Details on available models can be found in Section~\ref{sect:models}. Certain models also take arguments, as in the case of the decay $B^0\rightarrow \pi^+\pi^-$ \index{arguments} \begin{verbatim} Decay B0 1.00 pi+ pi- SSS_CP dm alpha 1 1.0 0.0 1.0 0.0; Enddecay \end{verbatim} The meaning of these arguments are explained in Section~\ref{sect:models}. The above example also shows the use of constants defined in the decay table, in this case ``dm'' and ``alpha''. The syntax for defining constants is \index{Define} \index{keywords!Define} \begin{verbatim} Define alpha 0.9 \end{verbatim} Note that a ``Define'' statement can not be within a ``Decay-Enddecay'' statement and also must precede the first use of the constant that it defines. If a constant is redefined, the last value defined before the use in the decay table is used. To illustrate this consider the following example \begin{verbatim} Define alpha 0.9 Decay B0 1.00 pi+ pi- SSS_CP dm alpha 1 1.0 0.0 1.0 0.0; Enddecay Define alpha 1.1 Decay B0B 1.00 pi+ pi- SSS_CP dm alpha 1 1.0 0.0 1.0 0.0; Enddecay \end{verbatim} Here the decay of the $B^0$ will use $\alpha=0.9$ while the $\bar B^0$ decay will use $\alpha=1.1$. This means, in particular, that you can not create user decay files that change parameter values to change the default decay table settings after the default decay table has been parsed. Once the decay channels of a particle (eg, a $D^0$) have been specified, the decay channels of the charge conjugate particle ($\bar D^0$) can be specified using the syntax \index{CDecay} \index{keywords!CDecay} \begin{verbatim} CDecay anti-D0 \end{verbatim} Another feature is that particle aliases may be defined. These aliases are useful for various reasons, including the generation of Monte Carlo for special signal channels. Consider the case of Monte Carlo for the decay $B^0\rightarrow J/\Psi K^0_s$ with $J/\Psi\rightarrow \mu^+\mu^-$. Not all $J/\Psi$ mesons in the event should decay to $\mu^+\mu^-$, only the ones in the signal mode. The concept of particle aliases solves this problem. 
Consider the example \index{Alias} \index{keywords!Alias} \begin{verbatim} Alias myJ/psi J/psi Decay B0 1.00 myJ/psi K_S0 SVS_CP dm beta 1.0 1.0 0.0 1.0 0.0; Enddecay Decay myJ/psi 1.00 mu+ mu- VLL; Enddecay \end{verbatim} Here, {\tt myJ/psi} has been created to alias the {\tt J/psi}. Inside the generator, {\tt myJ/psi} is treated as a new particle, with the same properties as the {\tt J/psi}, except for its decay channels. This includes the particle name and number, such that at the end of each event, the output will contain the chain $B^0\rightarrow J/\Psi K^0_s$, $J/\Psi\rightarrow \mu^+\mu^-$. There is no need for user analysis code to know about {\tt myJ/psi}. There is one small complication related to aliased particles involving the {\tt CDecay} feature described above. Sometimes it is necessary to define the charge conjugate of an alias. This is used, for example, by the {\tt CDecay} feature. To use this feature with aliases, you must specify the charge conjugate state for each alias. If you alias a particle that is its own charge conjugate (eg, a $\pi^0$), the aliased particle will be its own charge conjugate. However, in the case of the alias {\tt mypi+}, which represents {\tt pi+}, a charge conjugate alias {\tt mypi-} must be defined and you will need to tell the generator that the charge conjugate of {\tt mypi-} is {\tt mypi+}: \index{ChargeConj} \index{keywords!ChargeConj} \begin{verbatim} ChargeConj mypi+ mypi- \end{verbatim} The charge conjugate of {\tt mypi+} can not be defined to be {\tt pi-}. \index{PHOTOS} \index{keywords!PHOTOS} Final state radiation using the PHOTOS~\cite{Was92} package may be included in each decay. This option is invoked by the keyword {\tt PHOTOS}, which is placed after the list of decay daughters, but before the model name. See Appendix~\ref{sect:finalstaterad} for more details. \index{JetSetPar} \index{lugive} \index{keywords!JetSetPar} The keyword {\tt JetSetPar} allows for manipulation of parameters in the JETSET common blocks using the {\tt lugive} subroutine. For example, to set the parameter {\tt MSTJ(26)=0} (this turns off $B^0$-$\bar B^0$ mixing in JetSet) the line \begin{verbatim} JetSetPar MSTJ(26)=0 \end{verbatim} is added to the decay table. Note that no spaces are allowed in the string that is passed to lugive. This is due to a limitation of how the decay table is parsed by EvtGen. The setting of a global parameter, illustrated by {\tt JetSetPar}, is a general feature that any decay model can implement. This is discussed further in Section~\ref{sect:modelparameters}. \subsection{Guidelines to write decay files} \index{End} \index{keywords!End} There are several things to know when writing a decay table: \begin{itemize} \item The decay table must end with {\tt End}. This is also true for user decay tables as described in Section~\ref{sec:userdecayfiles}. \item Everything in the decay table is case sensitive. \item Charge conservation will be checked on each decay in the decay table. \item Each parent and daughter particle must be contained in {\tt evt.pdl}. \item Most models included in the EvtGen package will check that the number of daughters as well as the spin of each daughter are correct. \item The number of arguments provided must be the same as is expected by the model. \item If the branching ratios of the decay channels for a particle do not sum to 1.0, they will all be rescaled so that the sum is 1.0. \item Any line beginning with ``\#'' is considered a comment.
\end{itemize} \subsection{User decay files} \label{sec:userdecayfiles} In addition to changing {\tt DECAY.DEC}, the output of EvtGen may be controlled via a user decay file. We now give several examples of how to configure a user decay table. Many details of decay tables have already been discussed above. As the first example, consider generating $\Upsilon(4S) \to B^+ B^-$, $B^{+}\rightarrow \bar D^{*0}e^{+}\nu_{e}$ with $\bar D^{*0}\rightarrow \bar D^{0}\pi^{0}$ and $\bar D^{0}\rightarrow K^{+}\pi^{-}$. The other $B$ from the $\Upsilon(4S)$ decays generically. The standard way of running EvtGen begins with parsing the full standard decay table, {\tt DECAY.DEC}. After this, a user decay table is parsed in order to redefine the particle decays of interest. This way, the user decay file will override the standard decay table. In the example above, the user decay table could be implemented as: \begin{verbatim} # Ups(4S)->B+ B- # | |-> generic # |->D*0B e+ nu_e # |->D0B pi0 # |->K+pi- # Alias my-anti-D*0 anti-D*0 Alias my-anti-D0 anti-D0 # Decay Upsilon(4S) 1.000 B+ B- VSS; Enddecay # Decay B+ 1.000 my-anti-D*0 e+ nu_e ISGW2; Enddecay # Decay my-anti-D*0 1.000 my-anti-D0 pi0 VSS; Enddecay # Decay my-anti-D0 1.000 K+ pi- PHSP; Enddecay # End \end{verbatim} The decay of the $\Upsilon(4S)$ is redefined. In the default decay table it is defined to decay to a proper mixture of charged and neutral $B$ mesons. Note that when a {\tt Decay} statement is found for a particle, it erases all previous decays that have been defined for that particle. The $\Upsilon(4S)$ above is redefined such that it decays into a $B^+$ and a $B^-$ $100\%$ of the time. The $B^-$ decay is not redefined and hence it decays generically according to the entries in {\tt DECAY.DEC}. However, the $B^+$ is redefined such that it is forced to decay semileptonically according to the model of ISGW2. (For more details about what different models do, see Appendix~\ref{sect:models}.) Another use of user decay files is to make a particle stable (if, for example, its decay is uninteresting for your purpose). To make a $K_S$ stable do \index{stable} \begin{verbatim} Decay K_S0 Enddecay \end{verbatim} {\it Anders, add b0b0bar mixing example}
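As a further illustration of the alias machinery described above (a minimal sketch, not taken from {\tt DECAY.DEC}), a user decay file can combine {\tt Alias}, {\tt ChargeConj} and {\tt CDecay} so that a charged alias and its charge conjugate share the same forced channel:
\begin{verbatim}
Alias      myD+  D+
Alias      myD-  D-
ChargeConj myD+  myD-
#
Decay myD+
1.000   K-  pi+  pi+   PHSP;
Enddecay
CDecay myD-
#
End
\end{verbatim}
Here the decay of {\tt myD-} is obtained from that of {\tt myD+} by charge conjugation, exactly as described for {\tt mypi+} and {\tt mypi-} above.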
{ "alphanum_fraction": 0.7335924007, "avg_line_length": 34.6966292135, "ext": "tex", "hexsha": "46c1089d21ca5ec71c5009c167cc44327b3de431", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "klendathu2k/StarGenerator", "max_forks_repo_path": "EvtGen1_06_00/doc/evt_decaytable.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "klendathu2k/StarGenerator", "max_issues_repo_path": "EvtGen1_06_00/doc/evt_decaytable.tex", "max_line_length": 79, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "klendathu2k/StarGenerator", "max_stars_repo_path": "EvtGen1_06_00/doc/evt_decaytable.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-28T06:57:20.000Z", "max_stars_repo_stars_event_min_datetime": "2018-12-24T19:37:00.000Z", "num_tokens": 2689, "size": 9264 }
\subsection{Tank Models} Accurate modelling of tank pressure changes is essential for maneuver modelling and reconstruction. The following sections discuss three types of tank models: pressurant tank, regulated fuel tank, and blowdown fuel tank. For each tank there are three models: isothermal, heat transfer, and adiabatic. The models used in GMAT are based on work by Estey\cite{Estey:83}, Hearn\cite{Hearn:01} and Moran\cite{Moran}. For each tank, we select a set of state variables that, when defined, allow us to determine all remaining properties of the tank. For the state variables, we provide differential equations that describe how the state variables change with respect to time. The number of state variables varies between different tanks and with the model type, such as isothermal or heat transfer. For each of the three tanks, we develop a heat transfer model, an adiabatic model, and an isothermal model. The heat transfer model is derived using the laws of conservation of energy and the conservation of mass. An adiabatic model is provided by setting the heat transfer rates to zero in the heat transfer model. The isothermal model for each tank is developed separately. Each of these models is useful for certain classes of maneuvers. Isothermal models are useful for maneuvers with low mass flow rates; adiabatic models are useful for short maneuvers with high mass flow rates. Heat transfer models are useful for long maneuvers with large mass flow rates. When developing heat transfer models, we'll assume that specific internal energy is given by % \begin{equation} u = c T \end{equation} % specific enthalpy for a liquid is given by % \begin{equation} h_\ell = c_\ell T_\ell \end{equation} % and specific enthalpy for a gas is given by % \begin{equation} h_g = T_g( c_g + R_g) \end{equation} % The notation used in tank model development is shown below. After presenting notation, we present the dynamics model for a pressurant tank. \noindent\textit{Nomenclature}\vspace{-.1 in} \begin{tabbing} ----------------------- \= Reynolds number based on length $s$ \kill $A_g,A_\ell,A_w$ \> = Heat transfer area \\ $c_v, c_g$ \> = Specific heat at constant volume \\ $D$ \> = Tank diameter \\ $d$ \> = Liquid surface diameter \\ $Gr$ \> = Grashof number \\ $h_\ell, h_v$ \> = Enthalpy \\ $m_g,m_\ell,m_w,m_v$ \> = Mass \\ $P_g,P_v,P_t$ \> = Pressure \\ $R_v,R_g$ \> = Gas constant \\ $T_g,T_\ell,T_w,T_v,T_a$ \> = Temperature \\ $u_g,u_\ell,u_w,u_v$ \> = Specific internal energy \\ $V_g,V_\ell,V_t$ \> = Volume \\ $\dot{W}$ \> = Work rate \\ $\dot{Q}_g,\dot{Q}_v,\dot{Q}_l,\dot{Q}_w$ \> = Heat transfer rate \\ $\nu_l,\nu_g,\nu_v$ \> = Specific volume \\ \end{tabbing} % \noindent\textit{Subscripts}\vspace{-.1 in} \begin{tabbing} ----------------------- \= Reynolds number based on length $s$ \kill $a$ \> = Ambient \\ $g$ \> = Pressurant gas \\ $\ell$ \> = Propellant liquid \\ $t$ \> = Total \\ $v$ \> = Propellant vapor \\ $w$ \> = Tank wall \\ $e$ \> = Exit-flow \\ $i$ \> = In-flow \end{tabbing} \subsubsection{Pressurant Tank} The pressurant tank model is the simplest of the tank models, primarily because there is only one substance, the pressurant gas, contained in the tank. In this section, we develop a state space model for pressurant tank dynamics. We choose the state variables to be the pressurant gas mass and temperature, $m_g$ and $T_g$ respectively, and the tank wall temperature $T_w$. In Fig.\ref{fig:PressurantTank} we see an illustration of a pressurant tank.
We divide the tank into two control volumes: the gas region and the tank wall region. The only mass flow in the system occurs where pressurant gas exits the tank. Heat transfer occurs between the gas and the wall, and the wall and the ambient surroundings. % \begin{figure}[ht] \centerline{ \begin{picture}(110,440) \special{psfile= ./Images/PressurantTank.eps hoffset= -120 voffset= 115 hscale=55 vscale=55} \makebox(90,525){$\dot{m}_e$,$h_g$} \makebox(-105,790){1.Gas} \makebox(-105,765){$m_g$, $P_g$, $T_g$} \makebox(-165,680){$\dot{Q}_g$} \makebox(-295,760){$\dot{Q}_w$} \makebox(-260,865){2.Tank Wall} \makebox(-270,840){$m_w$, $T_w$} \end{picture}}\vskip -3.65 in \caption{ Pressurant Tank Diagram} \label{fig:PressurantTank} \end{figure} % Knowing the volume of the tank and the state variables $m_g$, $T_g$, and $T_w$, we calculate pressure from one of the following two equations of state: % \begin{equation} P_g = \frac{m_g R_g T_g}{V_g} \end{equation} % or from the Beattie-Bridgeman Eq. % \begin{equation} P_g = \frac{R_g T_g}{V_g} + \frac{a_g}{V_g^2} + \frac{b_g}{V_g^3} \end{equation} % The state variables $m_g$, $T_g$, and $T_w$ are described by ordinary differential equations found by applying the first law of thermodynamics and the conservation of mass. The 1st Law applied to the gas control volume yields % \begin{equation} \frac{d}{dt}\left(m_g u_g \right) = \dot{Q}_g - \dot{m}_e h_g \label{Eq:PressurantGas1stLaw} \end{equation} % The 1st Law applied to the wall control volume yields % \begin{equation} \frac{d}{dt}\left( m_w u_w \right) = \dot{Q}_w - \dot{Q}_g \label{Eq:PressurantWall1stLaw} \end{equation} % and finally from conservation of mass we obtain % \begin{equation} \dot{m}_g = -\dot{m}_e \label{Eq:PressurantMassCon} \end{equation} % For these equations to be useful for numerical integration, we need to expand the derivatives, and if necessary, decouple the equations (as we'll see, for the pressurant tank, the equations are not coupled). Expanding the terms in Eq.~(\ref{Eq:PressurantGas1stLaw}) we have % \begin{equation} \dot{m}_g c_g T_g + m_g c_g \dot{T}_g = \dot{Q}_g - \dot{m}_e T_g \left( c_g + R_g\right) \end{equation} % Similarly, expanding Eq.~(\ref{Eq:PressurantWall1stLaw}) we obtain % \begin{equation} m_w c_w \dot{T}_w = \dot{Q}_w - \dot{Q}_g \end{equation} % Solving the system of equations yields the following differential equations of state for the pressurant tank heat transfer model. % \begin{eqnarray} \dot{m}_g &=& -\dot{m}_e\\ % \dot{T}_g &=& \frac{1}{m_g c_g} \left( \dot{Q}_g - T_g R_g \dot{m}_e \right)\\ % \dot{T}_w &=& \frac{1}{m_w c_w} \left( \dot{Q}_w - \dot{Q}_g \right) \end{eqnarray} % The adiabatic model is obtained by setting the terms $\dot{Q}_g$ and $\dot{Q}_w$ to zero in the above equations. (Note for the adiabatic model there are only two state variables, $m_g$ and $T_g$, as the wall temperature $T_w$ is removed from the system of equations.) Similarly, the isothermal model is obtained by setting $\dot{T}_g$ and $\dot{T}_w$ to zero. So, for the isothermal model there is only one state variable $m_g$. 
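As a quick illustration of the isothermal model, assume for the moment a constant exit flow rate $\dot{m}_e$. The single state equation then integrates in closed form to
\begin{equation}
 m_g(t) = m_g(0) - \dot{m}_e t \nonumber
\end{equation}
so the tank pressure decays linearly in time,
\begin{equation}
 P_g(t) = \frac{\left[ m_g(0) - \dot{m}_e t\right] R_g T_g}{V_g} \nonumber
\end{equation}
which provides a convenient sanity check for the numerically integrated models.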
In summary, for the pressurant tank, all models calculate the tank pressure using % \[ P_g = \frac{m_g R_g T_g}{V_g} \] % then the specific equations for the heat transfer, adiabatic, and isothermal models are as follows. \noindent\textit{Pressurant Tank: Heat Transfer} \noindent State Variables: $m_g$, $T_g$, $T_w$ % \begin{eqnarray} \dot{m}_g &=& -\dot{m}_e \nonumber\\ % \dot{T}_g &=& \frac{1}{m_g c_g} \left( \dot{Q}_g - T_g R_g \dot{m}_e \right)\nonumber\\ % \dot{T}_w &=& \frac{1}{m_w c_w} \left( \dot{Q}_w - \dot{Q}_g \right)\nonumber \end{eqnarray} % \noindent\textit{Pressurant Tank: Adiabatic} \noindent State Variables: $m_g$, $T_g$ % \begin{eqnarray} \dot{m}_g &=& -\dot{m}_e \nonumber\\ % \dot{T}_g &=& -\dot{m}_e\frac{T_g R_g }{m_g c_g} \nonumber \end{eqnarray} % \noindent\textit{Pressurant Tank: Isothermal} \noindent State Variables: $m_g$ % \begin{equation} \dot{m}_g = -\dot{m}_e \end{equation} Now let's look at a model for a fuel tank operating in blow down mode. %\subsubsection{Blow-Down Tank w/o Vapor Pressure} % % %\begin{figure}[ht] %\centerline{ % \begin{picture}(100,510) % \special{psfile= ./Images/BlowDownTank.eps hoffset= -120 voffset= 170 % hscale=55 vscale=55} % \makebox(80,525){$\dot{m}_e$,$h_l$} % \makebox(-80,970){1.Gas} % \makebox(-90,940){$m_g, P_g, T_g$} % \makebox(-190,905){$\dot{Q}_v$} % %\makebox(-140,905){$\dot{m}_v, h_v$} % \makebox(10,905){$\dot{Q}_g$} % \makebox(-135,820){2.Liquid} % \makebox(-145,790){$m_\ell, T_\ell$} % \makebox(-235,740){$\dot{Q}_\ell$} % \makebox(-335,990){3.Tank Wall} % \makebox(-335,970){$m_w, T_w$} % \makebox(-277,1020){$\dot{Q}_w$} % \end{picture}}\vskip -3.65 in \caption{ Bi-Prop Thruster Diagram} \label{fig:BlowDownTankWOVap} %\end{figure} % %\textit{Assumptions} %\begin{itemize} % \item Vapor pressure is zero. % \item Liquid density is constant. % \item Gas mass is constant. %\end{itemize} %% %Assume we are given $m_g$, the tank diameter $D$, and hence know the %total tank volume $V_t$, and we know the physical constants %associated with the liquid and gas ($R_g$,$c_g$,$\nu_g$,$c_\ell$, %$\nu_\ell$). We choose the state variables $m_\ell$, $T_\ell$, %$T_g$, and $T_w$, all other tank properties can be calculated from %these state variables using the following equations: %% %\begin{eqnarray} % V_\ell & = & \nu_\ell m_\ell\\ % % % V_g & = & V_t - V_\ell\\ % % % P_g & = & \frac{m_g R_g T_g}{V_g}\\ %\end{eqnarray} %% % %We require differential equations that describe the time rate of %change of the state variables $m_\ell$, $T_\ell$, $T_g$, and $T_w$. %The differential equations are found by applying the 1st law of %thermodynamics and conservation of mass to the three control volumes %illustrated in Fig. \ref{fig:BlowDownTankWOVap}.
The 1st Law applied %to the gas control volume yields %% %\begin{equation} % \frac{d}{dt}\left(m_g u_g \right) = \dot{Q}_v + \dot{Q}_g - P_g \dot{V}_g % \label{Eq:Blowdown1stLawWOVap} %\end{equation} %% %The 1st Law applied to the liquid control volume yields %% %\begin{equation} % \frac{d}{dt}\left( m_\ell u_\ell \right) = \dot{Q}_\ell - \dot{Q}_v + % P_g \dot{V}_g - \dot{m}_e h_\ell \label{Eq:BlowdownLiquid1stLawWOVap} %\end{equation} %% %The 1st Law applied to the wall control volume yields %% %\begin{equation} % \frac{d}{dt}\left( m_w u_w \right) = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g %\end{equation} %% %and finally from conservation of mass we obtain %% %\begin{equation} % \dot{m}_\ell = -\dot{m}_e \label{Eq:BlowdownMassConWOVap} %\end{equation} % % %The equations above give us four ordinary differential equations %that allow us to solve for the tank states as a function of time. %For numerical integration, we need to decouple these equations. % %Let's continue with Eq.~(\ref{Eq:Blowdown1stLawWOVap}). Taking the %derivative assuming $\dot{m}_g = 0$ and noting that $\dot{V}_g = - %\nu_\ell \dot{m}_\ell $ yields %% %\begin{equation} % m_g c_g \dot{T}_g = \dot{Q}_v + \dot{Q}_g + P_g\nu_\ell \dot{m}_\ell %\end{equation} %% %Gathering all state terms on the left hand side yields %% %\begin{equation} % -P_g\nu_\ell \dot{m}_\ell + m_g c_g \dot{T}_g = \dot{Q}_v + \dot{Q}_g \label{Eq:CV1} %\end{equation} % % %Continuing with Eq.~(\ref{Eq:BlowdownLiquid1stLawWOVap}), we take %the derivative and group terms to obtain %% %\begin{equation} % \left( c_\ell T_\ell + P_g \nu_\ell\right)\dot{m}_\ell + % m_\ell c_\ell \dot{T}_\ell = \dot{Q}_\ell - \dot{Q}_v - c_\ell T_\ell \dot{m}_e\label{Eq:CV2} %\end{equation} %% %Similarly for the wall region, we arrive at %% %\begin{equation} % m_w c_w \dot{T}_w = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \label{Eq:CV3} %\end{equation} % %Equations \ref{Eq:CV1} -\ref{Eq:CV3} and %Eq.\ref{Eq:BlowdownMassConWOVap} can be written in matrix form as %follows. %% %\begin{equation} % \left(\begin{array}{ccccccc} % A_{11} & 0 & A_{13} & 0 \\ % A_{21} & A_{22} & 0 & 0 \\ % 0 & 0 & 0 & A_{34} \\ % A_{41} & & 0 & 0 \\ % \end{array}\right) % % % \left(\begin{array}{c} % \dot{m}_\ell \\ % \dot{T}_\ell \\ % \dot{T}_g \\ % \dot{T}_w \\ % \end{array}\right) = % % % \left(\begin{array}{c} % b_1\\ % b_2 \\ % b_3 \\ % b_4 \\ % \end{array}\right) %\end{equation} %% %where %% %\begin{eqnarray} % A_{11}& = & - P_g \nu_\ell \\ % % % A_{13}& = & m_g c_g\\ % % % A_{21} & = & c_\ell T_\ell + P_g \nu_\ell \\ % % % A_{22} & = & m_\ell c_\ell\\ % % % A_{34} & = & m_w c_w\\ % % % A_{41} & = & 1\\ % % % b_1 & = & \dot{Q}_v + \dot{Q}_g \\ % % % b_2 & = & \dot{Q}_\ell - \dot{Q}_v - c_\ell T_\ell\dot{m}_e\\ % % % b_3 & = & \dot{Q}_w -\dot{Q}_\ell - \dot{Q}_g\\ % % % b_4 & = & -\dot{m}_e\\ %\end{eqnarray} %% % %Solving the system of equations yields %% %\begin{eqnarray} % \dot{m}_\ell &=& -\dot{m}_e\\ % % % \dot{T}_\ell &=& \frac{1}{m_\ell c_\ell}\left( \dot{Q}_\ell - \dot{Q}_v + P_g \nu_\ell \dot{m}_e\right)\\ % % % \dot{T}_g &=& \frac{1}{m_g c_g}\left( \dot{Q}_v + \dot{Q}_g - P_g \nu_\ell \dot{m}_e\right)\\ % % % \dot{T}_w &=& \frac{1}{m_w c_w}\left(\dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \right)\\ %\end{eqnarray} \subsubsection{Blowdown Tank} The blowdown tank model is significantly more complex than the pressurant tank model due to the presence of liquid fuel and fuel vapor contained in the tank ullage. In this section, we develop a state space model for a blow down tank. 
We choose the state variables to be the liquid mass and temperature, $m_\ell$ and $T_\ell$, the gas temperature $T_g$, and tank wall temperature $T_w$. In Fig.\ref{fig:BlowDownTank} we see an illustration of a blow down tank. We divide the tank into three control volumes: the gas region, the liquid region, and the tank wall region. Mass flow occurs where the pressurant gas exits the tank and at the boundary between the liquid and gas in the form of evaporation. Heat transfer occurs between all three control volumes as well as with the surroundings. In summary, the physical processes modelled for a blow down tank are % \begin{compactenum} \item Vapor pressure is a function of liquid temperature. \item Liquid density is a function of liquid temperature. \item Heat transfer between the liquid and gas. \item Heat transfer between the tank wall and gas. \item Heat transfer between the tank wall and liquid. \item Heat transfer between the surroundings and tank. wall. \end{compactenum} % The assumptions made in the tank model are % \begin{compactenum} \item Pressurant does not dissolve in liquid ($m_g = C$). \item Vapor and gas temperatures are equal. \item Vapor and gas volumes are equal. \end{compactenum} % \begin{figure}[ht] \centerline{ \begin{picture}(100,510) \special{psfile= ./Images/BlowDownTankWVap.eps hoffset= -120 voffset= 170 hscale=55 vscale=55} \makebox(80,525){$\dot{m}_e$,$h_l$} \makebox(-80,970){1.Gas/Vapor} \makebox(-90,940){$m_g, m_v, P_g, P_v, T_g$} \makebox(-190,905){$\dot{Q}_v$} \makebox(-140,905){$\dot{m}_v, h_{v}$} \makebox(10,905){$\dot{Q}_g$} \makebox(-135,820){2.Liquid} \makebox(-145,790){$m_\ell, T_\ell$} \makebox(-235,740){$\dot{Q}_\ell$} \makebox(-335,990){3.Tank Wall} \makebox(-335,970){$m_w, T_w$} \makebox(-265,1030){$\dot{Q}_w$} \end{picture}}\vskip -3.65 in \caption{ Blow Down Tank Diagram} \label{fig:BlowDownTank} \end{figure} Assume we are given $m_g$, the tank diameter $D$, and hence know the total tank volume $V_t$, and we know the physical constants associated with the liquid and gas ($R_g$, $c_g$, $\nu_g$, $c_\ell$, $\nu_\ell(T_\ell)$ and $P_v(T_\ell))$. We choose the state variables $m_\ell$, $T_\ell$, $T_g$, and $T_w$, all other tank properties can be calculated from these state variables using the following equations: % \begin{eqnarray} V_\ell & = & \nu_\ell(T_\ell)m_\ell \label{Eq:BlowDownVell}\\ % V_g & = & V_t - V_\ell\\ % P_g & = & \frac{m_g R_g T_g}{V_g}\\ % P_v & = & P_v(T_\ell)\\ % m_v & = & \frac{P_v V_g}{R_v T_g}\\ % P_t & = & P_v + P_g \label{Eq:BlowDownPt} \end{eqnarray} % To determine the state equations governing $m_\ell$, $T_\ell$, $T_g$, and $T_w$ we apply the 1st law of thermodynamics and the law of conservation of mass. 
The 1st Law applied to the gas control volume is % \begin{equation} \frac{d}{dt}\left( m_v u_v + m_g u_g \right) = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g + \dot{m}_v h_{v} \label{Eq:BlowdownGas1stLaw} \end{equation} % The 1st Law applied to the liquid control volume is % \begin{equation} \frac{d}{dt}\left( m_\ell u_\ell \right) = \dot{Q}_\ell - \dot{Q}_v + P_t \dot{V}_g - \dot{m}_v h_{lg} - \dot{m}_e h_\ell \label{Eq:BlowdownLiquid1stLaw} \end{equation} % The 1st Law applied to the wall control volume yields % \begin{equation} \frac{d}{dt}\left( m_w u_w \right) = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \label{Eq:BlowdownWall1stLaw} \end{equation} % and finally from conservation of mass: % \begin{equation} \dot{m}_\ell = -\dot{m}_e - \dot{m}_v \label{Eq:BlowdownMassCon} \end{equation} % we also know that % \begin{equation} \dot{m}_v = \frac{P_v \dot{V}_g}{R_v T_g} - \frac{P_v V_g \dot{T}_g}{R_v T_g^2} \label{Eq:BlowDownGasLaw} \end{equation} % where we assume that % \begin{equation} \dot{P}_v \approx 0 \end{equation} Equations (\ref{Eq:BlowdownGas1stLaw}) - (\ref{Eq:BlowDownGasLaw}) are five equations in five unknowns ($m_v$, $m_\ell$, $T_\ell$, $T_g$, and $T_w$). Our approach is to use Eq.~(\ref{Eq:BlowdownMassCon}) to eliminate $\dot{m}_v$ terms. The result is a system of four equations in four unknowns using Eqs.~(\ref{Eq:BlowdownGas1stLaw}), (\ref{Eq:BlowdownLiquid1stLaw}), (\ref{Eq:BlowdownWall1stLaw}), and (\ref{Eq:BlowDownGasLaw}). The result we seek is four decoupled ordinary differential equations for $m_\ell$, $T_\ell$, $T_g$, and $T_w$. Let's continue with Eq.~(\ref{Eq:BlowdownGas1stLaw}). We need to rewrite the equation in terms of $\dot{m}_\ell$ and $\dot{T}_g$ ($\dot{T_w}$ and $\dot{T}_\ell$ don't appear explicitly). Expanding the derivatives assuming $\dot{m}_g = 0$ yields % \begin{equation} \dot{m}_v c_v T_g + m_v c_v \dot{T}_g + m_g c_g \dot{T}_g = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g + \dot{m}_v h_{v} \end{equation} % Now, substituting $\dot{m}_v = -\dot{m}_\ell - \dot{m}_e$ and noting that $\dot{V}_g = -\nu_l \dot{m}_\ell$ if we assume % \[\dot{\nu}_\ell = \frac{d \nu_\ell}{dT_\ell }\dot{T}_\ell \approx 0 \] % we arrive at % \begin{equation} \begin{split} \left( T_g R_v - P_t \nu_\ell \right) \dot{m}_\ell + \left( m_v c_v + m_g c_g\right)\dot{T}_g = \\ \dot{Q}_v + \dot{Q}_g - \dot{m}_e T_g R_v \label{Eq:BlowdownWVaporEq1} \end{split} \end{equation} % Now continuing with Eq.~(\ref{Eq:BlowdownLiquid1stLaw}) expanding the derivatives and making similar substitutions as we made previously we obtain % \begin{equation} \begin{split} \dot{m}_\ell c_\ell T_\ell + m_\ell c_\ell \dot{T}_\ell = \dot{Q}_\ell - \dot{Q}_v + P_t(-\nu_\ell \dot{m}_\ell) - \\ (-\dot{m}_\ell - \dot{m}_e)h_{v} - \dot{m}_e c_\ell T_\ell \end{split} \end{equation} % Grouping terms we obtain % \begin{equation} \begin{split} ( c_\ell T_\ell + P_t v_l - h_{v})\dot{m}_\ell + ( m_\ell c_\ell )\dot{T}_\ell = \\ \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_v - c_\ell T_\ell) \label{Eq:BlowdownWVaporEq2} \end{split} \end{equation} For the wall region, described by Eq.~(\ref{Eq:BlowdownWall1stLaw}), we arrive at \begin{equation} (m_w c_w) \dot{T}_w = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \label{Eq:BlowdownWVaporEq3} \end{equation} Finally, by eliminating $\dot{m}_v$ in the Gas Law shown in Eq.~(\ref{Eq:BlowDownGasLaw}) we obtain % \begin{equation} -\dot{m}_\ell - \dot{m}_e = \frac{P_v (-\nu_\ell \dot{m}_\ell)}{R_v T_g} - \frac{P_v V_g \dot{T}_g}{R_v T_g^2} \end{equation} % Grouping terms yields the result \begin{equation} 
\left( 1 - \frac{P_v\nu_\ell }{R_v T_g} \right) \dot{m}_\ell - \frac{P_v V_g }{R_v T_g^2}\dot{T}_g = - \dot{m}_e \label{Eq:BlowdownWVaporEq4} \end{equation} Equations (\ref{Eq:BlowdownWVaporEq1}), (\ref{Eq:BlowdownWVaporEq2}), (\ref{Eq:BlowdownWVaporEq3}), and (\ref{Eq:BlowdownWVaporEq4}) are four coupled ordinary differential equations that can be decoupled by casting them in matrix form as follows: % \begin{equation} \left(\begin{array}{ccccccc} A_{11} & 0 & A_{13} & 0 \\ A_{21} & A_{22} & 0 & 0 \\ 0 & 0 & 0 & A_{34} \\ A_{41} & 0 & A_{43} & 0 \\ \end{array}\right) % \left(\begin{array}{c} \dot{m}_\ell \\ \dot{T}_\ell \\ \dot{T}_g \\ \dot{T}_w \\ \end{array}\right) = % \left(\begin{array}{c} b_1\\ b_2 \\ b_3 \\ b_4 \\ \end{array}\right) \end{equation} % where % \begin{eqnarray} A_{11}& = & T_g R_v - P_t \nu_\ell \label{Eq:BlowDownA11}\\ % A_{13}& = & m_v c_v + m_g c_g\\ % A_{21} & = & c_\ell T_\ell + P_t \nu_\ell - h_{v}\\ % A_{22} & = & m_\ell c_\ell\\ % A_{34} & = & m_w c_w\\ % A_{41} & = & 1 - \nu_\ell/\nu_v\\ % A_{43} & = & - m_v/T_g\\ % b_1 & = & \dot{Q}_v + \dot{Q}_g - \dot{m}_e T_g R_v \label{Eq:BlowDownb1}\\ % b_2 & = & \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_v - c_\ell T_\ell) \\ % b_3 & = & \dot{Q}_w -\dot{Q}_\ell - \dot{Q}_g\\ % b_4 & = & -\dot{m}_e \label{Eq:BlowDownb4} \end{eqnarray} % The solution to the equations is % \begin{eqnarray} \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}} \label{Eq:BlowDownmdotDiffEq}\\ % %\dot{T}_\ell &=& \frac{-A_{21}A_{43}b_1+ (A_{11}A_{43}-A_{41}A_{13})b_2+A_{21}A_{13}b_4}{A_{22}\left( A_{11}A_{43}-A_{41}A_{13}\right)}\\ \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 - A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\right)\\ % \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41} A_{13}}\\ % \dot{T}_w &=& \frac{b_3}{A_{34}} \label{Eq:BlowDownTwdotDiffEq} \end{eqnarray} % For the adiabatic model we set all heat transfer rates, $\dot{Q}$, to zero in Eqs.~(\ref{Eq:BlowDownb1})-(\ref{Eq:BlowDownb4}) and so there are only three state variables, as $\dot{T}_w = 0$ and so $T_w =$ constant. Now let's develop equations for an isothermal model of a blow down tank. In the isothermal model, we assume $T_\ell = T_g = T_w = T$. The only state variable that requires a differential equation is $m_\ell$. Because $T_g$, $T_\ell$, and hence, $P_v$ are constant, we know that % \begin{equation} \dot{m}_v = \frac{P_v \dot{V}_g}{R_v T_g} \end{equation} % Substituting this result into Eq.~(\ref{Eq:BlowdownMassCon}) and solving for $\dot{m}_\ell$ we obtain % \begin{equation} \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v T}\right)} \end{equation} In summary, for the heat transfer model of a blow down tank, we choose $m_\ell$, $T_\ell$, $T_g$, and $T_w$ as state variables. Eqs.~(\ref{Eq:BlowDownVell})-(\ref{Eq:BlowDownPt}) are used to calculate the remaining tank properties, and Eqs.~(\ref{Eq:BlowDownmdotDiffEq})-(\ref{Eq:BlowDownTwdotDiffEq}) are used to model the tank states as functions of time.
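As a quick consistency check of Eq.~(\ref{Eq:BlowDownmdotDiffEq}), consider the limiting case of a propellant with negligible vapor pressure. As $P_v \rightarrow 0$ we have $m_v \rightarrow 0$ and $\nu_v = R_v T_g/P_v \rightarrow \infty$, so $A_{43} \rightarrow 0$ and $A_{41} \rightarrow 1$, and the liquid mass rate reduces to
\begin{equation}
 \dot{m}_\ell = \frac{-A_{13} b_{4}}{-A_{41} A_{13}} = b_4 = -\dot{m}_e, \nonumber
\end{equation}
i.e.\ the liquid mass simply decreases at the exit flow rate, as expected.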
For all three models, heat transfer, adiabatic, and isothermal, knowing the state variables $m_\ell$, $T_\ell$, $T_g$, and $T_w$, we compute the remaining tank properties using % \begin{eqnarray} V_\ell & = & \nu_\ell(T_\ell)m_\ell \nonumber \\ % V_g & = & V_t - V_\ell\nonumber \\ % P_g & = & \frac{m_g R_g T_g}{V_g}\nonumber \\ % P_v & = & P_v(T_\ell)\nonumber \\ % m_v & = & \frac{P_v V_g}{R_v T_g}\nonumber \\ % P_t & = & P_v + P_g \nonumber \end{eqnarray} % The models differ in the number of state variables and in the state rate equations. A summary is presented below. \noindent\textit{Blow Down Tank: Heat Transfer} \noindent State Variables: $m_\ell$, $T_\ell$, $T_g$, $T_w$ % \begin{eqnarray} \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\nonumber \\ % %\dot{T}_\ell &=& \frac{-A_{21}A_{43}b_1+ (A_{11}A_{43}-A_{41}A_{13})b_2+A_{21}A_{13}b_4}{A_{22}\left( A_{11}A_{43}-A_{41}A_{13}\right)}\\ \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 - A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\right) \nonumber \\ % \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41} A_{13}} \nonumber \\ % \dot{T}_w &=& \frac{b_3}{A_{34}} \nonumber \end{eqnarray} % where $A_{ij}$ and $b_i$ are given by Eqs.~(\ref{Eq:BlowDownA11})-(\ref{Eq:BlowDownb4}). \noindent\textit{Blow Down Tank: Adiabatic} \noindent State Variables: $m_\ell$, $T_\ell$, $T_g$ % \begin{eqnarray} \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\nonumber \\ % %\dot{T}_\ell &=& \frac{-A_{21}A_{43}b_1+ (A_{11}A_{43}-A_{41}A_{13})b_2+A_{21}A_{13}b_4}{A_{22}\left( A_{11}A_{43}-A_{41}A_{13}\right)}\\ \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 - A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\right) \nonumber \\ % \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41} A_{13}} \nonumber \end{eqnarray} % where $A_{ij}$ and $b_i$ are given by Eqs.~(\ref{Eq:BlowDownA11})-(\ref{Eq:BlowDownb4}). Note that all heat flow rates, $\dot{Q}$, are set to zero. \noindent\textit{Blow Down Tank: Isothermal} \noindent State Variables: $m_\ell$ % \begin{equation} \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v T}\right)} \nonumber \end{equation} \subsubsection{Pressure Regulated Tank} The pressure regulated fuel tank model is the most complex tank model supported in GMAT. The model complexity is due to the presence of liquid fuel and fuel vapor contained in the tank ullage, and due to mass and energy transfer from the pressurant tank to the ullage of the regulated fuel tank. In this section, we develop a state space model for a pressure regulated tank. Note, to model a pressure regulated tank using a heat transfer or adiabatic model, we must simultaneously solve the equations associated with the pressurant tank. For the regulated tank model, we choose the state variables to be the liquid mass and temperature, $m_\ell$ and $T_\ell$, the gas temperature $T_g$, the tank wall temperature $T_w$, and the pressurant gas mass in the ullage, $m_g$. In Fig.\ref{fig:PressureRegulatedTank} we see an illustration of a pressure regulated tank. Like the blow down tank model, we divide the tank into three control volumes: the gas region, the liquid region, and the tank wall region. Mass flow occurs where the pressurant gas exits the tank, at the boundary between the liquid and gas in the form of evaporation, and from the pressurant tank to the ullage of the regulated tank.
Heat transfer occurs between all three control volumes as well as with the surroundings. Hence, the physical processes modelled for a blow down tank are the same as those listed for the blow down tank, with the added process of mass flow from the pressurant tank. % \begin{figure}[ht] \centerline{ \begin{picture}(100,510) \special{psfile= ./Images/PressureRegulatedTank.eps hoffset= -120 voffset= 170 hscale=55 vscale=55} \makebox(80,525){$\dot{m}_e$,$h_l$} \makebox(-80,970){1.Gas/Vapor} \makebox(-90,940){$m_g, m_v, P_g, P_v, T_g$} \makebox(-190,905){$\dot{Q}_v$} \makebox(-140,905){$\dot{m}_v, h_{lg}$} \makebox(10,905){$\dot{Q}_g$} \makebox(-135,820){2.Liquid} \makebox(-145,790){$m_\ell, T_\ell$} \makebox(-235,740){$\dot{Q}_\ell$} \makebox(-335,990){3.Tank Wall} \makebox(-335,970){$m_w, T_w$} \makebox(-265,1030){$\dot{Q}_w$} \makebox(-045,1030){$\dot{m}_i$, $h_i$} \end{picture}}\vskip -3.65 in \caption{ Pressure Regulated Tank Diagram} \label{fig:PressureRegulatedTank} \end{figure} The derivation of the state equations for a pressure regulated tank follows naturally from the derivation of the blow down tank. The only control volume that differs between the two models is the gas/vapor control volume. Applying the 1st Law of thermodynamics to the gas/vapor control volume of the pressure regulated tank gives us % \begin{equation} \frac{d}{dt}\left( m_v u_v + m_g u_g \right) = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g + \dot{m}_v h_v + \dot{m}_p h_p\label{Eq:RegulatedGas1stLaw} \end{equation} % Taking the time derivative of the gas law for the gas contained in the tank ullage yields % \begin{equation} \dot{m}_g = \frac{P_g \dot{V}_g}{R_g T} - \frac{P_g V_g \dot{T}_g}{R_g T_g^2} \label{Eq:RegulatedGasLawDeriv} \end{equation} % Equations (\ref{Eq:RegulatedGas1stLaw}) and (\ref{Eq:RegulatedGasLawDeriv}), together with equations (\ref{Eq:BlowdownWVaporEq2}), (\ref{Eq:BlowdownWVaporEq3}), and (\ref{Eq:BlowdownWVaporEq4}) are a system of 5 equations in 5 unknowns which can be decoupled using simple linear algebra. However, first we must expand Eqs.~(\ref{Eq:RegulatedGas1stLaw}) and (\ref{Eq:RegulatedGasLawDeriv}) and write them in terms of the state rate derivatives. Expanding Eq.~(\ref{Eq:RegulatedGas1stLaw}) we arrive at \begin{equation} \begin{split} \left( R_v T_g - P_t \nu_\ell \right)\dot{m}_\ell + \left( m_v c_v + m_g c_g \right) \dot{T}_g + \\ \left(c_g Tg - h_p \right)\dot{m}_g = \dot{Q}_v + \dot{Q}_g - \dot{m}_e R_v T_g \end{split} \end{equation} % Similarly, for Eq.~(\ref{Eq:RegulatedGasLawDeriv}) we obtain % \begin{equation} \frac{\nu_\ell}{\nu_g}\dot{m}_\ell + \frac{m_g}{T_g}\dot{T}_g + \dot{m}_g = 0 \end{equation} % To integrate the state equations we must decouple the equations and this is easily done by casting the equations in matrix form and solving the system of equations. 
We can write the equations in state space form as follows % \begin{equation} \left(\begin{array}{ccccccc} A_{11} & 0 & A_{13} & 0 & A_{15}\\ A_{21} & A_{22} & 0 & 0 & 0 \\ 0 & 0 & 0 & A_{34} & 0\\ A_{41} & 0 & A_{43} & 0 & 0\\ A_{51} & 0 & A_{53} & 0 & A_{55}\\ \end{array}\right) % \left(\begin{array}{c} \dot{m}_\ell \\ \dot{T}_\ell \\ \dot{T}_g \\ \dot{T}_w \\ \dot{m}_g \end{array}\right) = % \left(\begin{array}{c} b_1\\ b_2 \\ b_3 \\ b_4 \\ 0 \end{array}\right) \end{equation} % where the coefficients $A_{ij}$ and $b_i$ are given by % \begin{eqnarray} A_{11}& = & T_g R_v - P_t \nu_\ell \label{Eq:RegulatedA11}\\ % A_{13}& = & m_v c_v + m_g c_g\\ % A_{15} & = & c_g T_g - h_p\\ % A_{21} & = & c_\ell T_\ell + P_t \nu_\ell - h_{lg}\\ % A_{22} & = & m_\ell c_\ell\\ % A_{34} & = & m_w c_w\\ % A_{41} & = & 1 - \nu_\ell/\nu_v\\ % A_{43} & = & - m_v/T_g\\ % A_{51} & = & \nu_\ell/\nu_g\\ % A_{53} & = & m_g/T_g \\ % A_{55} & = & 1 \\ % b_1 & = & \dot{Q}_v + \dot{Q}_g - \dot{m}_e R_v T_g \label{Eq:Regulatedb1}\\ % b_2 & = & \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_{lg} - c_\ell T_\ell)\\ % b_3 & = & \dot{Q}_w -\dot{Q}_\ell - \dot{Q}_g\\ % b_4 & = & -\dot{m}_e \label{Eq:Regulatedb4} \end{eqnarray} Solving the system of equations yields \begin{eqnarray} \dot{m}_\ell &=& \frac{A_{55} A_{43} b_1 - b_4 A_{13} A_{55} + b_4 A_{15} A_{53}}{D}\\ \dot{T}_\ell &=& \frac{b_2 - A_{21}\dot{m}_\ell}{A_{22}}\\ % \dot{T}_g &=& \frac{-A_{41} A_{55} b_1 + b_4 A_{11} A_{55} - b_4 A_{51} A_{15}}{D}\\ % \dot{T}_w &=& \frac{b_3}{A_{34}}\\ % \dot{m}_g &=& \frac{-b_1 A_{51} A_{43} + b_1 A_{41} A_{53} - b_4 A_{11} A_{53} + b_4 A_{51} A_{13}}{D} \end{eqnarray} % where % \begin{equation} D = A_{55} A_{43} A_{11} - A_{43} A_{51} A_{15} + A_{41} A_{15} A_{53} - A_{41} A_{13} A_{55} \end{equation} For the adiabatic model we set all heat transfer rates, $\dot{Q}$, to zero in Eqs.~(\ref{Eq:Regulatedb1})-(\ref{Eq:Regulatedb4}). For the adiabatic model there are only four state variables, since $\dot{T}_w = 0$ and so $T_w =$ constant. Now let's develop equations for an isothermal model of a pressure regulated tank. In the isothermal model, we assume $T_\ell = T_g = T_w = T$. The only state variables that require differential equations are $m_\ell$ and $m_g$. Because $T_\ell$ and $T_g$ are constant, and hence $P_v$ is constant, we know that % \begin{equation} \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v T}\right)} \end{equation} % Similarly, for $m_g$ we obtain % \begin{equation} \dot{m}_g = \frac{\nu_\ell}{\nu_g}\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v T}\right)} \end{equation} \subsubsection{Heat Transfer} Heat transfer models are from Ring\cite{Ring:64}, Incropera\cite{Incropera:06}, and Pitts\cite{Pitts:98}: \begin{equation} \dot{Q} = h A \Delta T \end{equation} \begin{equation} \frac{h L}{k} = Nu = c(Gr_L Pr)^a \end{equation} % so % \begin{equation} h = \frac{k c}{L}(Gr_L Pr)^a \end{equation} \begin{table}[ht] \centering \caption{ Dimensionless Heat Transfer Terms \cite{Incropera:06}} \begin{tabular}{p{.65 in} p{.950 in} p{1.4 in}} \hline\hline
Parameter & Definition & Interpretation \\
\hline
Grashof number ($Gr_L$) & \hspace{ 1 in} $\displaystyle\frac{g \beta(T_s - T_\infty)L^3}{\nu^2}$ & Measure of the ratio of buoyancy forces to viscous forces.\\
% & \\
%
Prandtl number ($Pr$) & \hspace{ 1 in} $\displaystyle\frac{c \mu}{k} = \frac{\mu}{\alpha}$ & Ratio of the momentum and thermal diffusivities.\\
% & \\
%
Nusselt number ($Nu_L$) & \hspace{ 1 in} $\displaystyle\frac{h L}{k_f}$ & Ratio of convection to pure conduction heat transfer. \\
% & \\
%
Reynolds number ($Re_L$) & \hspace{ 1 in} $\displaystyle\frac{V L}{\nu}$ & Ratio of inertia and viscous forces. \\
\hline
\end{tabular}
\end{table}
\subsubsection{Physiochemical Properties}
Hydrazine properties are from \cite{HydraBook}:
\[ c = 3084 \mbox{ J/kg/K} \]
\[ \rho \mbox{ (kg/m}^3) = 1025.6 - 0.87415 \mbox{ }T \mbox{ }(^o\mbox{C}) - 0.45283\times10^{-3}\mbox{ }T^2 \mbox{ }(^o\mbox{C}) \]
%
\[ \rho \mbox{ (kg/m}^3) = 1230.6 - 0.62677 \mbox{ }T \mbox{ }(\mbox{K}) - 0.45283\times10^{-3}\mbox{ }T^2 \mbox{ }(\mbox{K}) \]
Additional propellant property models are from \cite{Ricciardi:87}:
\begin{eqnarray}
\rho_\ell(T_\ell) & = & K_1 + K_2 T_\ell + K_3 T_\ell^2 \mbox{ (kg/m$^3$)}\\
%
\frac{d \rho_\ell}{dT_\ell} & = & K_2 + 2K_3 T_\ell \mbox{ (kg/m$^3$/K)}
\end{eqnarray}
%
\begin{eqnarray}
P_v & = & \displaystyle C_1 e^{(C_2 + C_3 T_\ell + C_4 T_\ell^2)} \mbox{ (N/m$^2$)}\\
%
\frac{d P_v}{d T_\ell} & = & \displaystyle C_1 e^{(C_2 + C_3 T_\ell + C_4 T_\ell^2)}\left(C_3 + 2C_4 T_\ell\right)
\end{eqnarray}
%
\begin{equation}
m_d = P_g m_\ell^\alpha\left( \frac{T_\ell}{294}\right)^2
\end{equation}
%
\begin{equation}\begin{split}
\frac{d m_d}{dt} = m_\ell^\alpha\left( \frac{T_\ell}{294}\right)^2\dot{P}_g + \alpha P_g m_\ell^{(\alpha - 1)}\left( \frac{T_\ell}{294}\right)^2 \dot{m}_\ell \\ + \frac{2}{294} P_g m_\ell^\alpha\left( \frac{T_\ell}{294}\right)\dot{T}_\ell
\end{split}
\end{equation}
%
\begin{table}
\centering
\caption{Constants for Density Equations}
\begin{tabular}{lcc}
\hline
Const. & \mbox{N}$_2$\mbox{O}$_4$ & CH$_3$N$_2$H$_3$ \\
\hline \hline
$K_1$ (kg/m$^3$) & 2066 & 1105.3 \\
$K_2$ (kg/m$^3$/K) & $-1.979$ & $-0.9395$ \\
$K_3$ (kg/m$^3$/K$^2$) & $-4.826\times10^{-4}$ & 0 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Constants for Vapor Pressure Equations}
\begin{tabular}{lcc}
\hline
Const. & \mbox{N}$_2$\mbox{O}$_4$ & CH$_3$N$_2$H$_3$ \\
\hline \hline
$C_1$ (N/m$^2$) & 6895 & 6895 \\
$C_2$ & 18.6742 & 12.4293 \\
$C_3$ & $-5369.57$ & 2543.37 \\
$C_4$ & 194721 & 0 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Constants for Dissolved Pressurant Equations}
\begin{tabular}{lcc}
\hline
Const. & \mbox{N}$_2$\mbox{O}$_4$ & CH$_3$N$_2$H$_3$ \\
\hline \hline
$\alpha$ & $3.335\times10^{-11}$ & $2.059\times10^{-11}$ \\
\hline
\end{tabular}
\end{table}
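To make the decoupling step concrete, the short sketch below shows one way the state rates of the pressure regulated tank could be computed numerically: the coefficients of Eqs.~(\ref{Eq:RegulatedA11})--(\ref{Eq:Regulatedb4}) are assembled into the matrix and right-hand side, and the linear system is solved directly. This is an illustration only, not the GMAT implementation; the function and variable names are placeholders, NumPy is assumed, and the returned rates would typically be handed to a standard ODE integrator. For the adiabatic model the heat transfer rates are simply passed in as zero, which makes $\dot{T}_w$ evaluate to zero as expected.
\begin{verbatim}
# Illustrative sketch only: solve A * xdot = b for a pressure regulated tank,
# where xdot = [m_l_dot, T_l_dot, T_g_dot, T_w_dot, m_g_dot].
import numpy as np

def state_rates(m_l, m_v, m_g, m_w, T_l, T_g,
                c_l, c_v, c_g, c_w, R_v, nu_l, nu_v, nu_g,
                h_p, h_lg, P_t, m_e_dot, Q_v, Q_g, Q_l, Q_w):
    A = np.zeros((5, 5))
    b = np.zeros(5)
    # Row 1: expanded first law for the gas/vapor control volume
    A[0, 0] = R_v * T_g - P_t * nu_l                  # A11
    A[0, 2] = m_v * c_v + m_g * c_g                   # A13
    A[0, 4] = c_g * T_g - h_p                         # A15
    b[0] = Q_v + Q_g - m_e_dot * R_v * T_g            # b1
    # Row 2: liquid control volume
    A[1, 0] = c_l * T_l + P_t * nu_l - h_lg           # A21
    A[1, 1] = m_l * c_l                               # A22
    b[1] = Q_l - Q_v + m_e_dot * (h_lg - c_l * T_l)   # b2
    # Row 3: tank wall control volume
    A[2, 3] = m_w * c_w                               # A34
    b[2] = Q_w - Q_l - Q_g                            # b3
    # Row 4: liquid/vapor mass balance
    A[3, 0] = 1.0 - nu_l / nu_v                       # A41
    A[3, 2] = -m_v / T_g                              # A43
    b[3] = -m_e_dot                                   # b4
    # Row 5: time derivative of the ullage gas law
    A[4, 0] = nu_l / nu_g                             # A51
    A[4, 2] = m_g / T_g                               # A53
    A[4, 4] = 1.0                                     # A55
    # Decouple the state equations by direct linear solve
    return np.linalg.solve(A, b)
\end{verbatim}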
{ "alphanum_fraction": 0.6229880148, "avg_line_length": 33.8419150858, "ext": "tex", "hexsha": "392c86da3f297c0366d25e9a8c9586b572897a88", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": [ "NASA-1.3" ], "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_path": "doc/SystemDocs/MathematicalSpecification/TankModels.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_licenses": [ "NASA-1.3" ], "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_path": "doc/SystemDocs/MathematicalSpecification/TankModels.tex", "max_line_length": 163, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_path": "doc/SystemDocs/MathematicalSpecification/TankModels.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "num_tokens": 14205, "size": 37463 }
\section{Conclusion}
\label{sec:conclusion}

Table~\ref{tab:predictions} summarizes my predictions for the progress in speech recognition to the year 2030. The predictions show that the coming decade could be just as exciting and important to the development of speech recognition and spoken language understanding as the previous one. We still have many research problems to solve before speech recognition reaches the point where it works all the time, for everyone. However, this is a goal worth working toward, as speech recognition is a key component of more fluid, natural, and accessible interactions with technology.
{ "alphanum_fraction": 0.8200636943, "avg_line_length": 52.3333333333, "ext": "tex", "hexsha": "9baf15e10b01024efb3b6a266e7d7f645c08ebdf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5090102c64ad80a6af91e8a32de6deb0aad4ee8e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "awni/future_speech", "max_forks_repo_path": "conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5090102c64ad80a6af91e8a32de6deb0aad4ee8e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "awni/future_speech", "max_issues_repo_path": "conclusion.tex", "max_line_length": 78, "max_stars_count": 12, "max_stars_repo_head_hexsha": "5090102c64ad80a6af91e8a32de6deb0aad4ee8e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "awni/future_speech", "max_stars_repo_path": "conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-21T11:03:50.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-11T16:12:51.000Z", "num_tokens": 129, "size": 628 }
%-------------------------
% Software developer resume in Latex
% Author : Abul Hasanat Sekh
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{ \vspace{2pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{2pt}]
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{ \item\small{ \textbf{#1}{: #2 \vspace{-2pt}} } }
\newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} }
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}}c c}
\href{mailto:[email protected]}{[email protected]} & \textbf{\href{https://www.abulhasanat.com/}{\huge Abul Hasanat Sekh}} & GitHub: \href{https://github.com/abulhasanat}{github.com/abulhasanat}\\
Electronic City, Bangalore & \href{http://abulhasanat.com/}{abulhasanat.com} & Medium: \href{https://medium.com/@hasanat.abul}{medium.com/@hasanat.abul} \\
Mobile: +91-9989335052, 8918014140 & & Skype: abulhasanat
\end{tabular*}
%--------SUMMARY------------------------
\section{Summary}
{Seasoned Data Scientist driving transformation with Analytics and Machine Learning, with over 15 years of experience in building Machine Learning models, Deep Learning models, BigQuery, GCP, Azure, Data Visualization, SQL Server DBA \& Development, Performance Tuning, and Web and REST API applications. I have worked with investment banks and startups, and consulted for technology services companies.} \\{I am currently working as an Independent Consultant and have completed multiple projects on Market Research, Time Series Analysis, A/B Testing, and Image and Text Classification.
}
\section{Areas of interest}
{• Data Analytics, Machine Learning \& Deep Learning}\\{• Solution Architect}\\{• Project/Delivery Management}\\{• SQL Server DBA/Development/Performance Tuning}
\section{Top skills}
{Python, NumPy, Pandas, TensorFlow, PyTorch, PySpark, SQL Server, MongoDB, Azure, GCP \& AWS}
%-----------EXPERIENCE-----------------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading {Independent Consultant}{Bangalore, India} {Machine Learning Consultant}{Sep 2019 - present}
\resumeItemListStart
\resumeItem{Customer Engagement and ML Solutions} {Work independently with end customers to provide solutions, development, and support on Machine Learning \& Deep Learning}
\resumeItem{Web Content Filter}{A web content filter is a solution that intercepts web traffic, accurately categorizes it, and either allows or blocks it based on configuration. (CNN, NLP, Google BERT, web scraping, DBSCAN)}
\resumeItem{Structural Classification of Web Pages}{An effective way to build relevance and authority signals related to the Google search engine algorithm is to get links from other sites to the client’s sites and pages. To find sites that could potentially link to the client’s site, the user required a similar page’s backlinks and a strategy for getting the link placed on that page or a similar page. (Random Forest, web scraping)}
\resumeItem{SQL Server Support}{SQL Server performance tuning and DBA activities}
\resumeItem{Key clients} {Compu Solutions, iConnect \& Money Geek, USA}
\resumeItemListEnd
\resumeSubheading {\href{https://www.dxc.com/}{DxC.technology}}{Bangalore, India} {Service Delivery Consultant - IV}{Aug 2016 - Aug 2019}
\textbf{Deutsche Bank Account Support:} The DxC Bionics project is an initiative to reduce effort on IT infrastructure services through automation. Data Analytics with Machine Learning plays a major role in finding opportunities for automation and incident reduction.
\resumeItemListStart
\resumeItem{ML Framework} {Develop an end-to-end framework for training, testing and deployment of Machine Learning models with a team of 3 developers (Python, NumPy, scikit-learn)}
\resumeItem{Data Collection \& Preprocessing} {Extract \& preprocess data using web scraping (Python, BeautifulSoup, Pandas, NumPy, Matplotlib)}
\resumeItem{Modeling} {Develop and implement Neural Network, Time Series Analysis \& Random Forest models to predict new incidents \& CI health (Python, scikit-learn, Pandas, NumPy and Keras)}
\resumeItem{SQL Server backend support for private cloud} {Portfolio comprised various technologies for private cloud support and maintenance; also served as SQL Server Customer Support Manager.}
\resumeItemListEnd
\resumeSubheading {\href{https://www.thomsonreuters.com}{Thomson Reuters}}{Hyderabad, India} {Assistant Manager}{May 2009 - April 2016}
\resumeItemListStart
\resumeItem{Data Analytics Project} {Develop Lead Segmentation Realignment (LSR) with a team of 8 developers (Python, NumPy, Pandas, Matplotlib)}
\resumeItem{Web Application: Indian Tax \& Accounting Solutions} {End-to-end product development lifecycle for Indian tax \& accounting solutions, which involved business case preparation, project chartering, cost management, scheduling, resource planning, development \& monitoring, and deployment (MVC.Net, REST API, SQL Server)}
\resumeItem{Database Development \& Support} {Database design for Tax \& Accounting solutions, production deployment and support (SQL Server, SSIS, SSRS)}
\resumeItemListEnd
\resumeSubheading {Satyam Computers}{Hyderabad, India} {Database Analyst}{Dec 2007 - May 2009}
\resumeItemListStart
\resumeItem{Microsoft Corporation License Generation} {Analyzing day-to-day issues from the business and involving developers for any change request. Building SQL queries for data extraction and exploratory analysis. (SQL Server, SSAS, SSRS)}
\resumeItemListEnd
\resumeSubheading {Hewlett Packard}{Singapore} {Database Consultant}{May 2007 - Nov 2007}
\resumeItemListStart
\resumeItem{Database Migration} {Database migration from SQL Server 2000 to SQL Server 2005 and testing. Database growth, owner and compatibility settings, maintenance plan and backup job creation}
\resumeItemListEnd
\resumeSubheading {\href{https://www.wipro.com/}{Wipro Technologies}}{Hyderabad, India} {Project Engineer}{June 2005 - May 2007}
\resumeItemListStart
\resumeItem{Database Development} {New application design, development and implementation; T-SQL coding (stored procedures, views, functions, etc.). Implementing and fine-tuning databases and web applications}
\resumeItemListEnd
\resumeSubheading {\href{https://www.rightwave.com/}{Rightwave Infosolutions}}{NOIDA, India} {Database Engineer}{Feb 2005 - June 2005}
\resumeItemListStart
\resumeItem{Database Development} {Worked on data modeling, managing databases, database backup/restore, data import/export, performance tuning, and writing procedures, functions, triggers, and complex queries. Worked on high availability solutions.}
\resumeItemListEnd
\resumeSubheading {Electrobug Technology}{Gurgaon, India} {Database Engineer}{Sep 2004 - Feb 2005}
\resumeItemListStart
\resumeItem{SQL Server DBA} {Worked on data modeling, managing databases, database backup/restore, data import/export, performance tuning, and writing procedures, functions, triggers, and complex queries. Worked on high availability solutions.
}
\resumeItemListEnd
\resumeSubheading {Resource Center for Indian Language and Technology Solutions}{New Delhi, India} {Programmer}{Jan 2003 - Sep 2004}
\resumeSubHeadingListEnd
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading {Master of Computer Applications}{Midnapore, West Bengal, India} {Vidyasagar University}{Dec. 1999 -- Jan. 2003}
\resumeSubheading {Bachelor of Science in Computer Application}{Kharagpur, West Bengal, India} {Hijli College}{Sept. 1996 -- Sept. 1999}
\resumeSubHeadingListEnd
%-----------CERTIFICATION-----------------
\section{Certification}
{• 2020 - Google Cloud Platform Fundamentals: Core Infrastructure by Coursera}\\{• 2020 - Google Cloud Platform Big Data and Machine Learning Fundamentals by Coursera} \\{• 2019 - Machine Learning for Engineering and Science Applications by NPTEL \& IIT Madras}\\{• 2018 - PRINCE2 Agile Foundation Certificate in Agile Project Management by AXELOS UK}
%-----------INTERESTS-----------------
\section{Interests}
{Photography, Marathon Running, Personal Productivity, Entrepreneurship}
%-------------------------------------------
\end{document}
{ "alphanum_fraction": 0.6992686494, "avg_line_length": 47.6976744186, "ext": "tex", "hexsha": "b6b56db5e18ec2dfb96bf4d277bf9d4e594f9010", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b16e90b5cfcbb1888fc00e49d0578c7aa3558a37", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "abulhasanat/latexresume", "max_forks_repo_path": "Abul-Hasanat-Resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b16e90b5cfcbb1888fc00e49d0578c7aa3558a37", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "abulhasanat/latexresume", "max_issues_repo_path": "Abul-Hasanat-Resume.tex", "max_line_length": 435, "max_stars_count": null, "max_stars_repo_head_hexsha": "b16e90b5cfcbb1888fc00e49d0578c7aa3558a37", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "abulhasanat/latexresume", "max_stars_repo_path": "Abul-Hasanat-Resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2526, "size": 10255 }
\chapter{Syntactic properties of adverbial and conditional clauses}\label{cpt:Syntactic properties of adverbial and conditional clauses} \largerpage[2] This chapter analyzes the syntax of adverbial and \is{conditional clause}conditional clauses in Sanzhi and compares them to the syntactic properties of similar clauses in other East Caucasian languages. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The syntax of adverbial clauses} \label{sec:The syntax of adverbial clauses} Sanzhi has different types of \is{adverbial clause}adverbial clauses that can be distinguished by the morphological make-up of the verb forms in the subordinate clause and by their semantics. Semantically, we can distinguish between simple converbs with a fairly general meaning and specialized converbs with a rather specific temporal or non-temporal meaning. The first group consists of the imperfective (\refsec{sssec:The imperfective converb}) and \isi{perfective converb} (\refsec{sssec:The perfective converb}). The second group contains temporal, causal, and other converbs (\refsec{cpt:specializedconverbssubordinatingenclitics}). A similar distinction is found in many East Caucasian languages (e.g. in Tsezic, see \citealp{Comrie.Forker.Khalilova2012}, and in Dargwa varieties, see \citealp{Belyaev2010}). The syntactic characteristics of constructions with general converbs have repeatedly been discussed in the literature because they exhibit a mixed behavior, showing features of subordination as well as of \isi{coordination} (see, among others, \citealp{Kazenin.Testelets2004}; \citealp{Haspelmath1995}; \citealp{Belyaev2010}; \citealp{Comrie.Forker.Khalilova2012}; \citealp{Creissels2010, Creissels2012}; \citealp{Forker2013b}). Sentences in Sanzhi can be fairly complex, containing a \isi{number} of \is{adverbial clause}adverbial clauses that are combined with one main clause. Semantically, these clauses either resemble \isi{coordination}, as in \refex{ex:‎‎‎The donkey got tired, fell down, and died}, or subordination, when the meaning of the \isi{adverbial clause} is causal \refex{ex:‎‎‎First, (because) the stones of father's house had fallen down}. \begin{exe} \ex \label{ex:‎‎‎The donkey got tired, fell down, and died} \gll amχa [b-arcː-ur-re] [ka-b-ič-ib-le] b-ebč'-ib ca-b\\ donkey \tsc{n-}get.tired\tsc{.pfv-pret-cvb} \tsc{down-n-}occur\tsc{.pfv-pret-cvb} \tsc{n-}die\tsc{.pfv-pret} \tsc{cop-n}\\ \glt \sqt{‎‎‎The donkey got tired, fell down, and died.} \ex \label{ex:‎‎‎First, (because) the stones of father's house had fallen down} \gll bahsar [heχ cin-na atːa-la jurt-la qːarqːa ʡaˁbal qal-la xːari k-ag-ur-re] [qːaq-li-j či-ka-d-irxː-ul] [ha-d-iqː-ul] qːarqːa=ra gu-r-h-aqː-ib=da\\ first \tsc{dem.down} \tsc{refl.sg-gen} father\tsc{-gen} house\tsc{-gen} stone three house\tsc{-gen} down \tsc{down}-go\tsc{.pfv-pret-cvb} back\tsc{-obl-dat} \tsc{spr-down}\tsc{-npl-}put\tsc{.ipfv-icvb} \tsc{up-npl-}carry\tsc{.ipfv-icvb} stone\tsc{=add} \tsc{sub-abl-up}-carry\tsc{-pret=1}\\ \glt \sqt{‎‎‎First, (because) the stones of father's house had fallen down three floors, we put them on the back and carried them, carried the stones.} \end{exe} The \isi{perfective converb} is widely used in procedural texts, such as the description of how to prepare dishes. These texts consist of a list of actions that are expressed by verbs bearing \isi{perfective converb} suffixes with a main clause at the end. 
The actions are supposed to occur in the order in which the clauses follow each other, i.e., there is iconicity, and the order of the clauses cannot be changed without changing the meaning of the whole sentence. This is generally interpreted as a semantic feature of \isi{coordination}, as opposed to subordination, where the order of the clauses does not reflect the temporal order of the events and can therefore be changed without a concomitant change in the meaning. Linear order will be discussed in more detail in \refsec{ssec:Linear order and iconicity} below. Converbs are non-finite in the sense that they head only subordinate clauses. The two general converbs (imperfective, perfective) also occur in analytic tenses in main clauses (\refcpt{cpt:Analytic verb forms}), but only when combined with a \isi{copula} or a predicative \isi{particle} (\refsec{sec:Predicative particles}). Therefore, they are unable to express illocutionary force or absolute temporal reference but share those properties with the verb form in the main clause (see \refsec{ssec:Scope properties} below). They are also not marked for person by person suffixes or enclitics, in contrast to the verb forms in the superordinate clause. However, they express aspect, because aspect is mainly conveyed through the verbal stem and there are no restrictions concerning the use of perfective or imperfective stems in \is{adverbial clause}adverbial clauses. Moreover, they can have their own arguments that fulfill the same \is{grammatical role}grammatical roles as arguments in main clauses, i.e., case marking patterns in \is{adverbial clause}adverbial clauses and main clauses do not differ. Furthermore, \isi{gender} agreement is present in \is{adverbial clause}adverbial clauses. In contrast to main clauses, it is strictly controlled by the \isi{absolutive} argument. By contrast, in main clauses copulas can exhibit \isi{gender} agreement with \isi{ergative} or \isi{dative} arguments. However, these copulas cannot occur in subordinate clauses. The \isi{constituent order} in \is{adverbial clause}adverbial clauses shows a far greater tendency for verb-final order than is observed for main clauses \refex{ex:‎‎‎First, (because) the stones of father's house had fallen down}, but \is{adverbial clause}adverbial clauses in which the verb is followed by other constituents can be found as well \refex{ex:‎[when I was in that situation], when I also was in a place like this, I also did not feel well}, \refex{ex:Stalin died, and the cars, the trains were stopped making tooot}. \begin{exe} \ex \label{ex:‎[when I was in that situation], when I also was in a place like this, I also did not feel well} \gll [hel=ʁuna musna-w ink w-aq-ib=qːel du=ra] dam=ra ʡaˁħ-le=kːʷi\\ that\tsc{=eq} place\tsc{.loc-m} meet \tsc{m-}go.through\tsc{.pfv-pret=}when \tsc{1sg=add} \tsc{1sg.dat=add} good\tsc{-advz=neg.pst}\\ \glt \sqt{‎[When I was in that situation], when I also was in a place like this, I also did not feel well.} \end{exe} In the following discussion, I will adopt the typology of \citet{Bickel2010} for the investigation of clause-linkage patterns. Bickel's typology consists of eleven variables, which are reproduced in the first column of \reftab{tab:Syntactic variables for the analysis of adverbial clauses}. A short description is given in the second column of the same table. 
\begin{table} \caption{Syntactic variables for the analysis of adverbial clauses} \label{tab:Syntactic variables for the analysis of adverbial clauses} \small \begin{tabularx}{0.98\textwidth}[]{% >{\raggedright\arraybackslash}p{90pt} >{\raggedright\arraybackslash\hangindent=0.5em}X} \lsptoprule Variable & Description\\ \midrule Illocutionary scope & Which clauses fall within the scope of illocutionary force operators?\\ Illocutionary marking & Can the dependent clause contain illocutionary force operators?\\ Tense scope & Which clauses fall within the scope of tense operators?\\ Tense marking & Can the dependent clause contain tense markers?\\ Finiteness & Does the dependent clause express fewer (non-finite) or the same \isit{number} (finite) of categories?\\ Symmetry & Can the range of expressed categories in the dependent and in the main clause be different or not?\\ WH & Are question words and focus\is{focus} enclitics inside dependent clauses allowed or not?\\ Focus & Can focus marking appear on the dependent clause?\\ Extraction & Is extraction of elements of dependent clauses allowed?\\ Position & Can the dependent clause appear before and after the main clause? Can it be separated by other clauses?\\ Layer & Can the dependent clause be center-embedded?\\ \lspbottomrule \end{tabularx} \end{table} I will additionally use a \isi{number} of other criteria that have been proposed in order to differentiate between \isi{coordination} and subordination, namely co-reference and expression of shared arguments, morphosyntactic locus, and relativization of constituents of \is{adverbial clause}adverbial clauses. I will mainly analyze the two general converbs as well as the temporal converb \tit{=qːel(la)} \sqt{when, while, because}, which expresses temporal simultaneity and anteriority as well as causality, because these converbs show the largest semantic overlaps and are semantically close to \isi{coordination}. % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Scope properties} \label{ssec:Scope properties} Adverbial clauses do not contain markers for illocutionary force, such as the \isi{imperative}, \isi{optative} suffixes, or the interrogative \is{particle}particles (``banned''). Those markers can only occur in the main clause. Their scope can be restricted to the main clause (``local''), but, in the appropriate context, it can also extent across the \isi{adverbial clause} (``extensible''). However, the latter possibility is noticeably less common in texts. Examples \refexrange{ex:When you put the roof beam at this (at one) side}{ex:‎‎The legs are trembling, jump} illustrate local scope restricted to the main clause. 
\begin{exe} \ex \label{ex:When you put the roof beam at this (at one) side} \gll [hej šːal-li-cːe cːiχːin ka-b-alt-an=qːel] het šːal-la ʡaˁnčːi a-ka-d-ax-u=w?\\ this side\tsc{-obl-in} roof.beam \tsc{down-n-}put\tsc{.ipfv-ptcp=}when that side\tsc{-gen} earth \tsc{neg-down}\tsc{-npl}-go-\tsc{prs.3=q}\\ \glt \sqt{When you put the roof beam at this (at one) side, does the clay of that (the other side) not fall down?}\pagebreak \ex \label{ex:Do you go home because your wife told you to} \gll [heχ xːunul-la ʁaj-li-gu aq-ib-le] qili arg-ul=de=w?\\ \tsc{dem.down} woman\tsc{-gen} word\tsc{-obl-sub} go.through\tsc{.pfv-pret-cvb} home go\tsc{.ipfv-icvb=2sg=q}\\ \glt \sqt{Do you go home because your wife told you to?} \ex \label{ex:‎‎The legs are trembling, jump} \gll [t'uˁ-me rurčː-ul] taˁħ d-uq-ene!\\ leg\tsc{-pl} tremble\tsc{-icvb} jump \tsc{1/2pl-}go\tsc{.pfv-imp.pl}\\ \glt \sqt{‎‎The legs are trembling, jump!}\footnote{Within the contexts from which this example originates the subjects of the \isi{adverbial clause} and the main clause differ. The speaker who was guiding a truck full of people urged them to jump off the car because he had problems controlling it. This means that the full translation is `While/because my legs are trembling, jump!’ Out of context, however, the most natural reading is rather: `While your legs are trembling, jump!’ with a same-subject interpretation.} \end{exe} Some converbs seem to fully ban joint scope of illocutionary operators. For instance, interrogative markers \refex{ex:When you put the roof beam at this (at one) side} or \isi{imperative} markers \refex{ex:‎When you buy bread, eat it with cheese!} cannot scope over the temporal converb \tit{=qːel}, although tense suffixes can. \begin{exe} \ex \label{ex:‎When you buy bread, eat it with cheese!} \gll [t'ult' asː-ib=qːel] nisːe-cːella b-erkʷ-en!\\ bread buy\tsc{.pfv-pret=}when cheese\tsc{-comit} \tsc{n-}eat\tsc{.pfv-imp}\\ \glt \sqt{‎When you buy bread, eat it with cheese!} (NOT: \sqt{Buy bread and eat it with cheese!}) (E) \end{exe} But at least with the perfective and the imperfective converbs it is also possible that the two clauses have joint scope: \begin{exe} \ex \label{ex:Go and bring it} \gll [ag-ur-re] h-aqː-a!\\ go\tsc{.pfv-pret-cvb} \tsc{up}-carry\tsc{-imp}\\ \glt \sqt{Go and bring it!} \ex \label{ex:‎Sing a song and dig the field} \gll [dalaj ∅-ik'-ul] qu b-urqː-a!\\ song \tsc{m-}say\tsc{.ipfv-icvb} field \tsc{n-}dig\tsc{.pfv-imp}\\ \glt \sqt{‎Sing a song and dig the field!} (E) \end{exe} Similarly, \is{adverbial clause}adverbial clauses can only express aspectual distinctions because this is a property of the verbal stem. Other semantic categories of verbs such as tense and evidentiality are only available to verb forms in main clauses. The converbs have relative temporal reference. This means that they refer to situations that take place before, after or during the situation that is expressed by the matrix clause. For instance, in \refex{ex:I will go to the forest and bring nuts@37} the verb form in the main clause has future/modal meaning, which is extended to the \isi{adverbial clause} with the preterite converb. Sentence \refex{ex:‎Very slowly making small steps we went across (the river)} conveys past time reference due to the preterite in the main clause, and \refex{ex:Being alone at home, I stay not crying} conveys present time reference because of the compound \isi{present tense}. 
Both sentences contain \is{adverbial clause}adverbial clauses with the \isi{imperfective converb} that only expresses that the situation in the \isi{adverbial clause} took place during the situation described in the main clause. \begin{exe} \ex \label{ex:I will go to the forest and bring nuts@37} \gll du-l [ag-ur-re wac'a-cːe] ka-d-iqː-an=da qix-be\\ \tsc{1sg-erg} go\tsc{.pfv-pret-cvb} forest-\tsc{in} \tsc{down-npl-}carry\tsc{.ipfv-ptcp=1} nut\tsc{-pl}\\ \glt \sqt{I will go to the forest and bring nuts.} \ex \label{ex:‎Very slowly making small steps we went across (the river)} \gll [bahla-l bahla-l nik'a kːanc ka-b-ircː-ul] bahla-l či-r-ag-ur=da\\ slow\tsc{-advz} slow\tsc{-advz} small step \tsc{down-n-}stand\tsc{.ipfv-icvb} slow\tsc{-advz} \tsc{spr-abl-}go\tsc{.pfv-pret=1}\\ \glt \sqt{‎Very slowly making small steps we went across (the river).} \ex \label{ex:Being alone at home, I stay not crying} \gll [qili-r du=gina r-irχ-ul] [a-r-isː-ul] r-ug-ul=da\\ home\tsc{-f} \tsc{1sg=}only \tsc{f-}be\tsc{.ipfv-icvb} \tsc{neg-f-}cry\tsc{-icvb} \tsc{f-}stay\tsc{.ipfv-icvb=1}\\ \glt \sqt{Being alone at home, I (fem.) stay not crying.} \end{exe} Similarly, the past perfect used in the main clause of \refex{ex:From her (he) took 16 000, and he sent (that money to us)} expresses not only past time reference but also indirect evidentiality, which extends to the meaning of the full sentence including the \isi{adverbial clause} with the preterite converb. \begin{exe} \ex \label{ex:From her (he) took 16 000, and he sent (that money to us)} \gll [it-i-sa-r s-asː-ib-le wec'-nu urek-ra azir] it-i-l=ra d-ataʁ-ib-le=de\\ that\tsc{-obl-ante-abl} \tsc{hither}-take\tsc{.pfv-pret-cvb} ten-\tsc{ten} six\tsc{-num} thousand that\tsc{-obl-erg=add} \tsc{npl-}send\tsc{.pfv-pret-cvb=pst}\\ \glt \sqt{From her (he) took 16\thinspace 000, and he sent (that money to us).} \end{exe} In short, fewer categories are expressed in \is{adverbial clause}adverbial clauses than in main clauses, because \isi{person agreement}, tense, evidentiality, and illocutionary force are absent. This means that Sanzhi \is{adverbial clause}adverbial clauses are, in Bickel's terms, ``asymmetrical'' and non-finite \citep{Bickel2010}. % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Focus and question words} \label{ssec:Focus and question words} Most but not all focus-sensitive \is{focus-sensitive particle}particles can appear in \is{adverbial clause}adverbial clauses attached to the converbs. The following examples show the \isi{enclitic} \tit{=cun} \sqt{only} and the emphatic modal \isi{particle} \tit{=q'ar} in clauses together with the \isi{perfective converb} and the \tit{=qːel} converb. The modal \isi{particle} \tit{=q'al} can also be employed in certain types of \is{adverbial clause}adverbial clauses, but in general its use in subordinate clauses is subject to many restrictions \refex{But when I saw the teacher, my face changed (i.e. turned red)}, \refex{ex:‎‎‎Was he at home when I came back from Derbent@B}. The restrictions are specific to this \isi{particle} and therefore not relevant for a discussion of the morphosyntactic properties of \is{adverbial clause}adverbial clauses. 
\begin{exe} \ex \label{ex:‎‎‎I understood (everything) only wrongly)} \gll [b-alk'-un-ne=cun] irʁ-ul=de\\ \tsc{n-}bend\tsc{-pret-cvb=}only understand\tsc{.ipfv-icvb=pst}\\ \glt \sqt{‎‎‎I understood (everything) only wrongly (i.e. I had only bad thoughts.).} \ex \label{ex:Fallen down there are bottles there} \gll ka-d-ič-ib-le=q'ar χe-d heχtːu-d šuš-ne\\ \tsc{down}\tsc{-npl-}occur\tsc{.pfv-pret-cvb=mod} exist.\tsc{down-npl} there.\tsc{down-npl} bottle\tsc{-pl}\\ \glt \sqt{Fallen down there are bottles there.} \ex[]{\label{But when I saw the teacher, my face changed (i.e. turned red)} \gll [a učitil či-w-až-ib=qːel=q'al] c'il di-la daˁʡ d-ars d-iχ-ub\\ but teacher \tsc{spr-m}-see\tsc{.pfv-pret=}when\tsc{=mod} then \tsc{1sg-gen} face \tsc{npl}-change \tsc{npl}-be.\tsc{pfv-pret} \\ \glt \sqt{But when I saw the teacher, my face changed (i.e. turned red).}} \ex[*]{ \label{ex:‎‎‎Was he at home when I came back from Derbent@B} \gll du Derbent-le-r sa-jʁ-ib=qːel=q'al it qili-w=de?\\ \tsc{1sg} Derbent\tsc{-loc-abl} \tsc{hither}-come\tsc{.pfv-pret=}when\tsc{=mod} that home\tsc{-m=pst} \\ \glt (Intended meaning: \sqt{‎‎‎Was he at home when I came back from Derbent?}) (E)} \end{exe} As mentioned in \refsec{ssec:Scope properties} above, interrogative \is{particle}particles (which also belong to the focus-sensitive \is{focus-sensitive particle}particles) cannot be used in \is{adverbial clause}adverbial clauses. However, \is{adverbial clause}adverbial clauses with various converbs can contain interrogative pronouns as the following examples with the \isi{perfective converb} \refex{ex:When Hazhimurad was given what we were happy} and the converb \tit{=qːel} \refex{ex:‎When who is guiding the car the girls get afraid} show. \begin{exe} \ex \label{ex:When Hazhimurad was given what we were happy} \gll [ħaˁžimurad-li-j ce b-ičː-ib-le] ušːa razi d-iχ-ub=da=ja?\\ Hazhimurad\tsc{-obl-dat} what \tsc{n-}give\tsc{.pfv-pret-cvb} \tsc{2pl} happy \tsc{1/2pl-}be\tsc{.pfv-pret=1=q}\\ \glt \sqt{When Hazhimurad was given what were we happy?} (E) \ex \label{ex:‎When who is guiding the car the girls get afraid} \gll [hi-l mašin b-ik-an=qːel] rurs-be uruχ b-ik'-ul=e?\\ who\tsc{.obl-erg} car \tsc{n-}lead\tsc{.ipfv-ptcp=}when girl\tsc{-pl} fear \tsc{hpl-}\tsc{aux.ipfv-icvb=q} \\ \glt \sqt{‎When who is guiding the car do the girls get afraid?} (E) \end{exe} % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Co-reference and expression of shared arguments} \label{ssec:Co-reference and expression of shared arguments} Converb clauses can almost always have their own subjects that do not need to be co-referential with the subject in the main clause. Examples of \is{adverbial clause}adverbial clauses with differing subjects can be found in \refex{ex:Stalin died, and the cars, the trains were stopped making tooot} for the \isi{perfective converb}, in \refex{ex:‎Two days it was freezing and raining} for the \isi{imperfective converb}, and in \refex{ex:When you put the roof beam at this (at one) side} and \refex{But when I saw the teacher, my face changed (i.e. turned red)} for constructions with \tit{=qːel}. However, for the sentence in \refex{ex:‎Two days it was freezing and raining} there is no alternative possibility of using a same-subject construction because the two weather verbs grammatically require different subjects. 
Thus, syntactically \refex{ex:‎Two days it was freezing and raining} is a complex clause with two different subjects, but semantically there is a clear relationship between the two clauses. \begin{exe} \ex \label{ex:Stalin died, and the cars, the trains were stopped making tooot} \gll [w-ebč'-ib-le Istalin] [mašin-te pojezd-e t'aš aʁ-ib-le] tːuːˁtː-d-ik'-ul, \ldots\\ \tsc{m-}die\tsc{.pfv-pret-cvb} Stalin, car\tsc{-pl} train\tsc{-pl} stop do\tsc{-pret-cvb} toot\tsc{-npl-}say\tsc{.ipfv-icvb}\\ \glt \sqt{Stalin died, and the cars, the trains were stopped making tooot, \ldots} \ex \label{ex:‎Two days it was freezing and raining} \gll k'ʷel bar [wiz b-ik'-ul] b-us-ib\\ two day freeze \tsc{n}-\tsc{aux.ipfv-icvb} \tsc{n-}rain\tsc{-pret}\\ \glt \sqt{‎Two days it was freezing and raining.} \end{exe} If the subjects differ, it is possible that other arguments are co-referential instead. In \refex{ex:‎‎The legs are trembling, jump} the subject of the first clause with the \isi{imperfective converb} is not identical to that of the following, but can be identical to the omitted possessor (see the comment in the footnote). In \refex{But when I saw the teacher, my face changed (i.e. turned red)}, the omitted \isi{dative} subject of the \isi{adverbial clause} shares the referent with the possessive pronoun in the main clause. Similarly, in \refex{ex:When they got married, they had a good life} the omitted subject of the \isi{adverbial clause} is identical to the referent of the possessive pronoun in the main clause. It can also be the case that a string of \is{adverbial clause}adverbial clauses shares the subject with an adjunct in the main clause. \begin{exe} \ex \label{ex:When they got married, they had a good life} \gll [can ka-b-iž-ib=qːel] ču-la jašaw-li-cːe-b zamana ca-b\\ meet \tsc{down-hpl-}be\tsc{.pfv-pret=}when \tsc{refl.pl-gen} being\tsc{-obl-in-n} time \tsc{cop-n}\\ \glt \sqt{When they got married, they had a good life.} (lit. When they met it is the time of their well-being.) \end{exe} The sharing of the subject argument is clearly preferred for the \isi{perfective converb} and can be seen in most examples in this section. Even in example \refex{ex:Stalin died, and the cars, the trains were stopped making tooot} there is at least a causal relationship between the described events: because of the death of Stalin the trains tooted and honked. If no such causal relationship can be found, a complex clause with different subjects is impossible \refex{ex:When Ali came home, Indira was sewing a dress}. \begin{exe} \ex \label{ex:When Ali came home, Indira was sewing a dress} \gll {??} [ʡaˁli qili w-i-ha-w-q-un-ne] Indira-l kːurtːi b-urχ-ul=de\\ {} Ali home \tsc{m-in-up}\tsc{-m-}go\tsc{.pfv-pret-cvb} Indira\tsc{-erg} dress \tsc{n-}sew\tsc{.ipfv-icvb=pst}\\ \glt (Intended meaning: \sqt{When Ali came home, Indira was sewing a dress.}) \end{exe} The requirement for shared subjects is even stronger for the \isi{imperfective converb}, for which it is almost the only attested possibility in natural texts. 
By contrast, for \tit{=qːel} it is easy to find examples with differing subjects \refex{ex:‎When they did not calm down, (he) put them into the box, frightening them}, but still around half to two thirds of the examples share the subject \refex{ex:‎When he left, he prayed to Allah}, \refex{ex:‎When you buy bread, eat it with cheese!}. \begin{exe} \ex \label{ex:‎When he left, he prayed to Allah} \gll [tːura sa-w-q-un=qːel] heχ Allah-li-cːe ulkː-un-ne\\ outside \tsc{hither-m-}go\tsc{.pfv-pret=}when \tsc{dem.down} Allah\tsc{-obl-in} pray\tsc{-icvb-cvb}\\ \glt \sqt{‎When he left, he prayed to Allah.} \end{exe} In clauses with disjoint subjects, normally at least one of the subjects, if not both, is overt \refex{ex:Stalin died, and the cars, the trains were stopped making tooot}, \refex{ex:When they got married, they had a good life}. However, even in those cases it is possible that both subjects are absent, as in example \refex{ex:‎When they did not calm down, (he) put them into the box, frightening them}, in which it is clear from the context that the referent of the subject of the first clause is the children, and that the referent of the subject in the main clause as well as in the following \isi{adverbial clause} is the main character of the story. \begin{exe} \ex \label{ex:‎When they did not calm down, (he) put them into the box, frightening them} \gll [a-b-ug-an=qːel] b-i-ka-b-at-ur ca-b [q'ʷani-l-cːe uruχ b-arq'-ib-le]\\ \tsc{neg-hpl-}be.calm\tsc{.ipfv-ptcp=}when \tsc{hpl-in-down}\tsc{-hpl-}leave\tsc{.pfv-pret} \tsc{cop-hpl} box\tsc{-obl-in} fear \tsc{hpl-}do\tsc{.pfv-pret-cvb}\\ \glt \sqt{‎When they did not calm down, (he) put (the children) into the box, frightening them.} \end{exe} Co-referential arguments are omitted, so zeroes commonly occur in the subordinate clause. Therefore, cataphora is very frequent. In example \refex{ex:When he felt the warmth of the sun, he thanked the sun@35} the omitted argument in the first clause corresponds to the \isi{agent} in the second clause. \begin{exe} \ex \label{ex:When he felt the warmth of the sun, he thanked the sun@35} \gll [bari-la gʷana-dex-li-j šak ič-ib-le] il-i-l bari-li-j barkalla b-aχ-aq-ur\\ sun\tsc{-gen} warm\tsc{-nmlz-obl-dat} feel occur\tsc{.m.pfv-pret-cvb} that\tsc{-obl-erg} sun\tsc{-obl-dat} thanks \tsc{n-}know\tsc{.pfv-caus-pret}\\ \glt \sqt{When he felt the warmth of the sun, he thanked the sun.} \end{exe} But anaphora is also attested \refex{ex:The bird run (i.e. flies) after him and his dog, and they run and run, and shout, but they did not find the frog@36}. In this example, we find G=S=S=A, with only the first G argument being a full \isi{noun phrase} and all other occurrences of the same argument left implicit, so that no grammatical relations are involved. \begin{exe} \ex \label{ex:The bird run (i.e. flies) after him and his dog, and they run and run, and shout, but they did not find the frog@36} \gll [hitːi b-uq-un-ne č'aka χːʷe-j=ra hel-i-j=ra] [sa-r-b-uq-un-ne, sa-r-b-uq-un-ne] [waˁw b-ik'-ul] b-arčː-ib-le=kːu ʡaˁt'a\\ after \tsc{n-}go\tsc{.pfv-pret-cvb} eagle dog\tsc{-dat=add} that\tsc{-obl-dat=add} \tsc{ante-abl-hpl-}go\tsc{.pfv-pret-cvb} \tsc{ante-abl-hpl-}go\tsc{.pfv-pret-cvb } call \tsc{hpl-}say\tsc{.ipfv-icvb} \tsc{n-}find\tsc{.pfv-pret-cvb=neg} frog\\ \glt \sqt{The bird runs (i.e.
flies) after him and his dog, and they run and run, and shout, but they did not find the frog.} \end{exe} Another strategy commonly employed is to have the co-referential NP in clause-initial position, syntactically belonging to the main clause, but separated from the rest of the main clause in terms of linear order. The controlee is in the embedded clause, resulting in center embedding. In \refex{ex:I will go to the forest and bring nuts@37}, the \isi{adverbial clause} contains an intransitive predicate; therefore, the pronoun \tit{dul} \sqt{\tsc{1sg.erg}} must be part of the main clause. If both clauses have the same valency frame, it is in principle impossible to decide to which of the two clauses the overt argument belongs. In general, arguments whose referents the speaker assumes to be known to the hearer are left implicit such that often none of the clauses contains an occurrence of the shared arguments. Though shared arguments are very common, this is not a necessity. In \refex{ex:Stalin died, and the cars, the trains were stopped making tooot} the first \isi{adverbial clause} contains an overt S, \tit{Istalin}, which is not shared in the subsequent adverbial and main clause. The \isi{adverbial clause} mostly precedes the main clause, but the reverse order is also attested (\refsec{ssec:Linear order and iconicity}). Shared S and A arguments in either order are frequently found in texts \refex{ex:When he felt the warmth of the sun, he thanked the sun@35}, \refex{ex:I will go to the forest and bring nuts@37}, and are easily provided in elicitation \refex{ex:Mother came and fed Madina@39a}, \refex{ex:Murad saw Madina and went away@39b}. The situation gets more complicated if P arguments are also involved. An overt S argument in the first clause can correspond to a covert P in the second clause but not if the verb in the subordinate clause bears the converb suffix \tit{-le}. Instead, the more specific construction with \tit{=qːella} must be used such that the first clause is not only syntactically but also semantically an \isi{adverbial clause} \refex{ex:When the daughter came, the mother fed@39c}. According to my Sanzhi consultants, the more general converb \tit{-le} can only be used if the S in the converbal clause corresponds to an S or A in the main clause. \begin{exe} \ex \label{ex:Mother, Madina, feeding@39} \begin{xlist} \ex \label{ex:Mother came and fed Madina@39a} \gll [aba$_{i}$ sa-r-eʁ-ib-le] \_$_{i}$ Madina r-aχː-un\\ mother \tsc{hither-f}-come\tsc{-pret-cvb} \tsc{erg} Madina \tsc{f-}feed\tsc{-pret}\\ \glt \sqt{Mother came and fed Madina.} (S = A) (E) \ex \label{ex:Murad saw Madina and went away@39b} \gll [Murad-li-j$_{i}$ Madina či-r-až-ib-le] \_$_{i}$ ag-ur\\ Murad\tsc{-obl-dat} Madina \tsc{spr-f}-see\tsc{.pfv-pret-cvb} \tsc{abs} go\tsc{-pret}\\ \glt \sqt{Murad saw Madina and went away.} (A = S) (E) \ex \label{ex:When the daughter came, the mother fed@39c} \gll [rursːi$_{i}$ sa-r-eʁ-ib=qːella] aba-l \_$_{i}$ r-aχː-un\\ daughter \tsc{hither-f}-come\tsc{-pret}=when mother\tsc{-erg} \tsc{abs} \tsc{f-}feed\tsc{-pret}\\ \glt \sqt{When the daughter came, the mother fed (her).} (S = P) (E) \end{xlist} \end{exe} If the first clause contains two arguments A and P, then an implicit S in the second clause can, in principle, be co-referential with any of these two arguments. However, co-reference with P is less preferable, i.e. 
in example \refex{ex:Father saw Madina and (she) got happy@40}, the S argument in the second clause can be co-referential with P in the first clause, or with another argument previously established in the context. In natural texts the co-referential argument would rather be expressed as S in the main clause and left implicit in the \isi{adverbial clause}. In \refex{ex:Murad saw Madina and went away@39b}, co-reference between the A in the first clause and S in the second clause is the preferred reading, and co-reference with a third person is rather unlikely. \begin{exe} \ex \label{ex:Father saw Madina and (she) got happy@40} \gll [atːa-j Madina$_{i}$ či-r-až-ib-le] \_$_{i/j}$ razi r-iχ-ub\\ father\tsc{-dat} Madina \tsc{spr-f}-see\tsc{.pfv-pret-cvb} \tsc{abs} happy \tsc{f-}become\tsc{-pret}\\ \glt \sqt{Father saw Madina and (she) got happy.} (P = S) (E) \end{exe} If we exchange the predicate in the second clause in \refex{ex:When the daughter came, the mother fed@39c} with a transitive predicate, we again encounter the same situation. If the shared argument occurs as P in the \isi{adverbial clause}, the whole sentence becomes rather marginal because out of context the referent of the omitted A in the main clause could be either the mother or the daughter. Therefore, speakers prefer to express the shared argument as A in the main clause \refex{ex:Mother called her daughter and she swept the house@41}. % \begin{exe} \ex \label{ex:Mother called her daughter and she swept the house@41} \gll [aba-l \_$_{i}$ až-aq-ur-re] rursːi-l$_{i}$ qal qʷaˁrš b-arq'-ib\\ mother\tsc{-erg} \tsc{abs} go\tsc{.pfv-caus-pret-cvb} girl\tsc{-erg} house sweep \tsc{n-}do\tsc{.pfv-pret}\\ \glt \sqt{Mother called her daughter and she (= the daughter) swept the house.} (P = A) (E) \end{exe} Thus, there is some evidence that shared arguments are preferably expressed as S or A instead of P. However, co-reference is never a grammatical necessity. In each of the sentences an implicit argument can always be co-referential with other referents in the contexts that do not occur in the sentence to which the omitted argument belongs. \hspace*{-1.7606pt}Pro\isi{nouns} (demonstrative or reflexive) in combination with co-referential noun phrases are usually not employed to express shared arguments, because the use of pronouns often leads to disjoint reference as the only available interpretation. Adverbial clauses preceding the main clause never allow for pronominal cataphoras as we know them from European languages. This means that the demonstrative or \isi{reflexive pronoun} in \refex{ex:‎When s/he bought bread, Zainab ate (it) with cheese} cannot be co-referential with a following \isi{noun phrase}. \begin{exe} \ex \label{ex:‎When s/he bought bread, Zainab ate (it) with cheese} \gll [cin-ni / it-i-l t'ult' asː-ib=qːel] Zajnab-li nisːe-li-cːella b-erk-un\\ \tsc{refl.sg-erg} / that\tsc{-obl-erg} bread buy\tsc{.pfv-pret=}when Zainab\tsc{-erg} cheese\tsc{-obl-comit} \tsc{n-}eat\tsc{.pfv-pret}\\ \glt \sqt{‎When s/he (i.e. not Zainab) bought bread, Zainab ate (it) with cheese.} (E) \end{exe} If we reverse the order of pronoun and noun we also have disjoint reference for the \isi{demonstrative pronoun} \refex{ex:‎When Hurija came, s/he milked the cow1}. However, with the \isi{reflexive pronoun} the situation is more complicated because this pronoun can be interpreted as fulfilling a purely emphatic function, which means that the main clause actually lacks an overt subject. 
This makes it possible, in turn, to arrive at a co-referential reading \refex{ex:‎When Hurija came, s/he milked the cow2}, \refex{ex:‎While Zapir was singing a song while he dug the field}. If we exclude the emphatic interpretation of the reflexive, then in clauses with the \tit{=qːel} converb, disjoint reference is the only possible interpretation, but perfective converbs still seem to allow co-reference. \begin{exe} \ex \label{ex:‎When Hurija came, s/he milked the cow1} \gll [ħuˁrija sa-r-eʁ-ib=qːel] cin-ni q'ʷal b-ircː-ib\\ Hurija \tsc{hither-f-}go\tsc{.pfv-pret=}when \tsc{refl.sg-erg} cow \tsc{n-}milk\tsc{.pfv-pret}\\ \glt \sqt{‎When Hurija came, s/he (i.e. not Hurija) milked the cow.} (E) \ex \label{ex:‎When Hurija came, s/he milked the cow2} \gll [ħuˁrija sa-r-eʁ-ib-le] cin-ni q'ʷal b-ircː-ib\\ Hurija \tsc{hither-f-}go\tsc{.pfv-pret-cvb} \tsc{refl.sg-erg} cow \tsc{n-}milk\tsc{.pfv-pret}\\ \glt \sqt{‎When Hurija came, s/he (Hurija herself or another person) milked the cow.} (E) \ex \label{ex:‎While Zapir was singing a song while he dug the field} \gll [Zapir dalaj ∅-ik'-ul] cin-ni qu b-urqː-ib\\ Zapir song \tsc{m-}say\tsc{.ipfv-icvb} \tsc{refl.sg-erg} garden \tsc{n-}dig\tsc{.pfv-pret}\\ \glt \sqt{‎While Zapir was singing a song he (another person or Zapir himself) dug the field.} (E) \end{exe} We can also swap around the order of the clauses. In sentences in which the main clause precedes the \isi{adverbial clause}, no cataphora whatsoever is allowed \refex{ex:S/he ate the bread when Zainab bought it}, \refex{ex:While Zapir was singing a song}. This means that neither zeroes nor pronouns can express co-reference with subject arguments in the following subordinate clauses. A pronoun (or a zero anaphor) may not both precede and c-command its antecedent (\citealp[185]{Langacker1969}; \citealp[8]{Reinhart1976}). Note that if we use \is{demonstrative pronoun}demonstrative pronouns or zero, the person reference in the first clause remains unspecified. By contrast, the \isi{reflexive pronoun} would be used if we continue to talk about a person who already was the topic of the conversation.
Although the \is{adverbial clause}adverbial clauses most frequently precede the main clause, they may also follow it \xxref{ex:‎‎‎I remained there for two years, unable to eat food made of corn}{ex:‎‎‎I came to know the truth from you, I said, because he (the other doctor) did not tell me (the truth)}, \refex{ex:‎When they did not calm down, (he) put them into the box, frightening them}, and they may be separated by other subordinate clauses from the main clause, e.g. by other \is{adverbial clause}adverbial clauses. In \refex{ex:‎‎‎I remained there for two years, unable to eat food made of corn}, the \isi{imperfective converb} clause follows the main clause and shares with the main clause the subject referent and the past time reference. In \refex{ex:Then, in the maternity hospital, here you do not give money when you go to take (the child) out (of the hospital and home)} the converbal clause with \tit{=qːel} also follows the main clause and most probably shares the subject-like argument. In \refex{ex:‎‎‎I came to know the truth from you, I said, because he (the other doctor) did not tell me (the truth)} we again have a converbal clause with \tit{=qːel} that follows the main clause and has a causal interpretation. \begin{exe} \ex \label{ex:‎‎‎I remained there for two years, unable to eat food made of corn} \gll k'ʷi dus kelg-un=da [ʡaˁžlač'i-la χurejg b-erkʷ-ij a-r-irχ-ul]\\ two year remain\tsc{.pfv-pret=1} corn\tsc{-gen} food \tsc{n-}eat\tsc{.pfv-inf} \tsc{neg-f-}be.able\tsc{.ipfv-icvb}\\ \glt \sqt{‎‎‎I remained there for two years, unable to eat food made of corn.} \ex \label{ex:Then, in the maternity hospital, here you do not give money when you go to take (the child) out (of the hospital and home)} \gll c'il roddom-le heštːu-d lukː-unne=kːu=w ce=ja arc [tːura h-asː-ij r-ax-an=qːel]?\\ then maternity.hospital\tsc{-loc} here\tsc{-npl} give\tsc{.ipfv-icvb=}\tsc{cop.neg=q} what\tsc{=q} money outside \tsc{up}-take\tsc{.pfv-inf} \tsc{f-}go\tsc{-ptcp=}when\\ \glt \sqt{Then, in the maternity hospital, here you do not give money when you go to take (the child) out (of the hospital and home)?} \ex \label{ex:‎‎‎I came to know the truth from you, I said, because he (the other doctor) did not tell me (the truth)} \gll wallah, haʔ-ib=da, [a-cːe hel b-arx-dex b-aχ-ij bahanne] sa-r-ač'-ib-il=da [ik'-i-l a-b-urs-ib=qːel]\\ by.God say\tsc{.pfv-pret=1} \tsc{2sg-in} that \tsc{n-}right\tsc{-nmlz} \tsc{n-}know\tsc{.pfv-inf} because.of \tsc{hither}\tsc{-f-}come\tsc{.pfv-pret-ref=1} \tsc{dem.up}\tsc{-obl-erg} \tsc{neg-n-}tell\tsc{-pret=}when\\ \glt \sqt{‎‎‎I came to know the truth from you, I said, because he (the other doctor) did not tell me (the truth).} \end{exe} Examples \xxref{ex:‎When / Because Murad got ill he did not build the fence}{ex:‎Madina, having come home, washed the dishes} show center-embedding, i.e. \is{adverbial clause}adverbial clauses that occur within the main clause. That it is in fact center-embedding and not \is{adverbial clause}adverbial clauses preceding the main clauses is indicated by the case-marking on the shared argument. The verb in the \is{adverbial clause}adverbial clauses differs from the verb in the main clause in transitivity, and the case of the shared argument is assigned by the predicate in the main clause. Note that in all examples the only interpretation available is the shared subject interpretation. 
\begin{exe} \ex \label{ex:‎When / Because Murad got ill he did not build the fence} \gll Murad-li [ʡaˁrkːa ∅-iχ-ub-le] lac a-b-arq'-ib\\ Murad\tsc{-erg} ill \tsc{m-}be\tsc{.pfv-pret-cvb} fence \tsc{neg-n-}do\tsc{.pfv-pret}\\ \glt \sqt{‎When\slash Because Murad got ill he did not build the fence.} (E) \ex \label{ex:Musa is singing a song and building the fence} \gll Musa-l [dalaj ∅-ik'-ul] lac b-irq'-ul ca-b\\ Musa\tsc{-erg} song \tsc{m-}say\tsc{.ipfv-icvb} fence \tsc{n-}do\tsc{.ipfv-icvb} \tsc{cop-n}\\ \glt \sqt{Musa is singing a song and building the fence.} (E) \ex \label{ex:‎Madina, having come home, washed the dishes} \gll Madina-l [qili sa-r-eʁ-ib=qːel] t'alaħ-ne d-irc-ib\\ Madina\tsc{-erg} home \tsc{hither-f-}go\tsc{.pfv-pret=}when dishes\tsc{-pl} \tsc{npl-}wash\tsc{.pfv-pret}\\ \glt \sqt{‎Madina, having come home, washed the dishes.} (E) \end{exe} It has been observed for the \isi{perfective converb} in other Dargwa varieties and other East Caucasian languages that when the subjects are not identical, the order of main clause and \isi{adverbial clause} can be changed, but then only the causal interpretation is possible (\citealp{Belyaev2010}; \citealp{Kustova2015}; \citealp{Kazenin.Testelets2004}). In other words, when the \isi{adverbial clause} precedes the main clause, we can have both a same-subject and a different-subject reading \refex{ex:‎When / Because Murad got ill he (= Murad or some other person) did not build the fence}. However, the different-subject reading is rather marginal and only available in the right context (see the discussion in \refsec{ssec:Co-reference and expression of shared arguments} about example \refex{ex:When Ali came home, Indira was sewing a dress}). \begin{exe} \ex \label{ex:‎When / Because Murad got ill he (= Murad or some other person) did not build the fence} \gll [Murad ʡaˁrkːa ∅-iχ-ub-le] lac a-b-arq'-ib\\ Murad ill \tsc{m-}be\tsc{.pfv-pret-cvb} fence \tsc{neg-n-}do\tsc{.pfv-pret}\\ \glt \sqt{‎When\slash Because Murad got ill he (= Murad or some other person) did not build the fence.} (E) \end{exe} If we reverse the order, interpretations with shared subjects are more frequently rejected, e.g. \refex{ex:‎‎‎(He) is digging the field while Musa is singing} means that an unspecified person is digging the field while Musa is singing. For the \isi{perfective converb}, a reversal of the order means that a causal interpretation between the two described situations is required \refex{ex:Because Murad got ill, he (= Murad or another person) did not build the fence2}, whereas in the default order, in which the \isi{adverbial clause} precedes the main clause, a causal interpretation is possible, but not necessary. Sentences such as \refex{ex:‎When / Because Murad got ill he (= Murad or some other person) did not build the fence} can also simply express the temporal order of the events as occurring simultaneously or sequentially without implying a causal relationship.
\begin{exe} \ex \label{ex:‎‎‎(He) is digging the field while Musa is singing} \gll qu uqː-ul ca-w [Musa dalaj ∅-ik'-ul]\\ garden dig\tsc{.ipfv-icvb} \tsc{cop-m} Musa song \tsc{m-}say\tsc{.ipfv-icvb}\\ \glt \sqt{‎‎‎(He) is digging the field while Musa is singing.} (E) \ex \label{ex:Because Murad got ill, he (= Murad or another person) did not build the fence2} \gll lac a-b-arq'-ib [Murad ʡaˁrkːa ∅-iχ-ub-le]\\ fence \tsc{neg-n-}do\tsc{.pfv-pret} Murad ill \tsc{m-}be\tsc{.pfv-pret-cvb}\\ \glt \sqt{Because Murad got ill, he (= Murad or another person) did not build the fence.} (E) \end{exe} This means that the order of the clauses in constructions with perfective and imperfective converbs cannot be changed without a concomitant change in the interpretations. This property makes the respective converb constructions slightly similar to clause \isi{coordination}, which also depicts the order of the events if they do not occur simultaneously: the first clause refers to the first event, the second clause to the second event. By contrast, for other converbs such as the temporal converb \tit{=qːel}, it is possible to reverse the order of the clauses without changing the interpretation, which makes them more similar to subordination \refex{ex:Then, in the maternity hospital, here you do not give money when you go to take (the child) out (of the hospital and home)}, \refex{ex:‎‎‎I came to know the truth from you, I said, because he (the other doctor) did not tell me (the truth)}. % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Morphosyntactic locus} \label{ssec:Morphosyntactic locus} In addition to the properties discussed, I also tested for morphosyntactic locus \citep{Kazenin.Testelets2004}, i.e. the locus of marking a complement clause as dependent on the main clause. For \isi{coordination} embedded into a complement clause, the formal marking of embedding is expected to occur on each member of the \isi{coordination}. By contrast, in case of subordination we can expect the formal marking to occur only on the head or within the head constituent of the complement, but not within another \isi{adverbial clause} that is part of the complement. This is the case for Sanzhi \is{adverbial clause}adverbial clauses that can occur in complement constructions. For instance, in \refex{ex:‎‎‎I am sad because Murad got ill and did not build the fence4} and \refex{ex:‎I am happy when Murad got healthy and finished building the fence5} the \isi{masdar} suffix that marks the complement clause as dependent occurs only on one verb, whereas the other verb in the complement retains its converbal suffix. In \refex{ex:‎I am happy that Fatimat dug the field while singing a song} \isi{complementation} is achieved by means of the \isi{cross-categorical suffix} -\textit{ce} added to the preterite. 
\begin{exe} \ex \label{ex:‎‎‎I am sad because Murad got ill and did not build the fence4} \gll du pašman-ne=da [[Murad ʡaˁrkːa ∅-iχ-ub-le] lac a-b-arq'-ni]\\ \tsc{1sg} sad\tsc{-advz=1} Murad ill \tsc{m-}be\tsc{.pfv-pret-cvb} fence \tsc{neg-n-}do\tsc{.pfv-msd}\\ \glt \sqt{‎‎‎I am sad because Murad got ill and did not build the fence.} (E) \ex \label{ex:‎I am happy when Murad got healthy and finished building the fence5} \gll du razi-l=da [[Murad ʡaˁħ ∅-iχ-ub=qːel] lac taman b-arq'-ni]\\ \tsc{1sg} happy\tsc{-advz=1} Murad good \tsc{m-}be\tsc{.pfv-pret=}when fence end \tsc{n-}do\tsc{.pfv-msd}\\ \glt \sqt{‎I am happy when Murad got healthy and finished building the fence.} (E) \ex \label{ex:‎I am happy that Fatimat dug the field while singing a song} \gll du razi-l=da [[Fat'imat dalaj r-ik'-ul] qu b-urqː-ib-ce]\\ \tsc{1sg} happy\tsc{-advz=1} Fatimat song \tsc{f-}say\tsc{.ipfv-icvb} garden \tsc{n-}dig\tsc{.pfv-pret-dd.sg}\\ \glt \sqt{‎I am happy that Fatimat dug the field while singing a song.} (E) \end{exe} % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Island constraints: relativization and extraction} \label{ssec:Island constraints: relativization and extraction} The data concerning extraction out of \is{relative clause}relative clauses varies depending on the converb used and on the interpretations available. The converb \tit{=qːel} blocks extraction, as example \refex{ex:The driver who when driving the car the girls got afraid is our father} shows. By contrast, the \isi{perfective converb} allows for extraction \refex{ex:‎Give me the i-phone that when it was given to Hazhimurad we got happy.}. Although the data in \xxref{ex:When father was guiding the car, the girls became afraid}{ex:‎Give me the i-phone that when it was given to Hazhimurad we got happy.} generally fits what has been observed for other East Caucasian languages (e.g. \citealp{Kazenin.Testelets2004}; \citealp{Creissels2012}; \citealp{Bickel2010}), two divergent examples are not enough to understand whether Sanzhi \is{adverbial clause}adverbial clauses show the behavior of \isi{coordination} or of subordination and to what extent this depends on the converbs themselves or on the available interpretations. 
\begin{exe} \ex \label{ex:father was guiding the car} \begin{xlist} \ex[]{ \label{ex:When father was guiding the car, the girls became afraid} \gll [šupir-ri mašin b-ik-an=qːel] rurs-be uruχ b-iχ-ub \\ driver\tsc{-erg} car \tsc{n-}lead\tsc{.ipfv-ptcp=}when girl\tsc{-pl} fear \tsc{hpl-}be\tsc{.pfv-pret}\\ \glt \sqt{When the driver was guiding the car, the girls became afraid.} (E)} \ex[*]{ \label{ex:The driver who when driving the car the girls got afraid is our father} \gll [[\_$_{i}$ mašin b-ik-an=qːel] rurs-be uruχ b-iχ-ub] šupir$_{i}$ nušːa atːa ca-w\\ \tsc{erg} car \tsc{n-}lead\tsc{.ipfv-ptcp=}when girl\tsc{-pl} fear \tsc{hpl-}be\tsc{.pfv-pret} driver \tsc{1pl} father \tsc{cop-m}\\ \glt (Intended meaning: \sqt{The driver who when driving the car the girls got afraid is our father.}) (E)} \end{xlist} \end{exe} \begin{exe} \ex \label{ex:father an iphone was given} \begin{xlist} \ex \label{ex:When an iphone was given to Hazhimurad we got happy} \gll [ħaˁžimurad-li-j ajpun b-ičː-ib-le] nušːa razi d-iχ-ub=da\\ Hazhimurad-\tsc{obl-dat} i-phone \tsc{n-}give\tsc{.pfv-pret-cvb} \tsc{1pl} happy \tsc{1/2pl-}be\tsc{.pfv-pret=1}\\ \glt \sqt{When an i-phone was given to Hazhimurad we got happy.} (E) \ex \label{ex:‎Give me the i-phone that when it was given to Hazhimurad we got happy.} \gll [[ħaˁžimurad-li-j \_$_{i}$ b-ičː-ib-le] nušːa razi d-iχ-ub-il] ajpun$_{i}$ b-iqː-a!\\ Hazhimurad\tsc{-obl-dat} \tsc{abs} \tsc{n-}give\tsc{.pfv-pret-cvb} \tsc{1pl} happy \tsc{1/2pl-}be\tsc{.pfv-pret-ref} i-phone \tsc{n-}take.out\tsc{.ipfv-imp}\\ \glt \sqt{‎Give me the i-phone that when it was given to Hazhimurad we got happy.} (E) \end{xlist} \end{exe} % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Summary} \label{ssec:adverbialclausessummary} \reftab{tab:Morphosyntactic properties of adverbial clauses} summarizes some of the morphosyntactic properties of perfective and \isi{imperfective converb} clauses as well as \is{adverbial clause}adverbial clauses with \tit{=qːel} that have been discussed in the previous sections. The table shows that the three converbs by and large share most of their properties. If we compare the behavior of Sanzhi \isi{adverbial clause} constructions with \is{adverbial clause}adverbial clauses in other East Caucasian languages, we also find that Sanzhi converb constructions strongly resemble their counterparts in other languages of the family (e.g. \citealp{Forker2013b} on Tsezic; \citealp{Creissels2010, Creissels2012} on Akhvakh; \citealp{Bickel2010} on Chechen). 
\begin{table} \caption{Morphosyntactic properties of adverbial clauses} \label{tab:Morphosyntactic properties of adverbial clauses} \small \begin{tabularx}{0.88\textwidth}[]{% >{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}p{90pt} >{\raggedright\arraybackslash}p{80pt}} \lsptoprule Variable & \tsc{ipfv}\slash\tsc{pfv} converb & \tit{=qːel}\\ \midrule Illocutionary scope & local\slash extensible & local\\ Illocutionary marking & \multicolumn{2}{c}{banned} \\ Tense scope & \multicolumn{2}{c}{conjunct} \\ Tense marking & \multicolumn{2}{c}{banned} \\ Finiteness & \multicolumn{2}{c}{non-finite} \\ Symmetry & \multicolumn{2}{c}{asymmetrical}\\ WH-words & \multicolumn{2}{c}{allowed} \\ Focus-sensitive particles\is{particle} & \multicolumn{2}{c}{allowed} \\ Extraction & no data\slash allowed & disallowed\\ Position & \multicolumn{2}{c}{flexible-relational} \\ \lspbottomrule \end{tabularx} \end{table} % --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- % \subsection{Adverbial clauses as independent utterances?}\label{ssec:Adverbial clauses as independent utterances} \largerpage[2] When examining natural texts it is striking to notice that \is{adverbial clause}adverbial clauses headed by perfective and imperfective converbs occur sometimes without a main clause that is obviously connected to it. Example \refex{ex:‎‎Then slowly I learned the language and I did my (military) service} illustrates a \isi{perfective converb} clause, followed by an \isi{imperfective converb} clause, and then the speaker concludes his narrative about his military service with a comment that is not directly related to the two preceding \is{adverbial clause}adverbial clauses. The utterance in \refex{ex:‎‎The brother of my father (= Abdulkhalik) tore down the wall}, which consists of three \is{adverbial clause}adverbial clauses with preterite converbs, describes what the speaker's uncle Abdulkhalik did in order to build himself a house. It is followed by a comment that explicitly states the name of the uncle, but not by a main clause referring to the building of the house, which would be expected based on the general rules of use for the \isi{perfective converb}. 
\begin{exe} \ex \label{ex:‎‎Then slowly I learned the language and I did my (military) service} \gll c'il=ra hel-tːi bahla bahla-l ʁaj=ra d-aχ-ur-re bahla bahla-l islužba=ra b-iqː-ul \ldots\\ then=\tsc{add} that-\tsc{pl} slow slow-\tsc{advz} language=\tsc{add} \tsc{npl}-know.\tsc{pfv-pret-cvb} slow slow-\tsc{advz} service=\tsc{add} \tsc{n}-carry.\tsc{ipfv-icvb}\\ \glt \sqt{‎‎Then slowly I learned the language and I did my (military) service, (To be honest, I stayed for three years, and I was not one single hour at the guardhouse.)} \ex \label{ex:‎‎The brother of my father (= Abdulkhalik) tore down the wall} \gll [di-la atːa-la ucːi-l ha-b-ertː-ib-le il b-aʔ] [ʡaˁħ-te [cin-na taχna b-arq'-ij] d-erqː-ib-le už-ib-le] [wahi-te heχtːu lak' d-i-ka-d-arq'-ib-le d-už-ib-le ʡaˁbdulχaliq'-li]\ldots \\ \tsc{1sg-gen} father-\tsc{gen} brother-\tsc{erg} \tsc{up-n}-take.\tsc{pfv-pret-cvb} that \tsc{n}-edge good-\tsc{dd.pl} \tsc{refl.sg.obl-gen} room \tsc{n}-do.\tsc{pfv-inf} \tsc{npl}-carry.\tsc{pfv-pret-cvb} be-\tsc{pret-cvb} bad-\tsc{dd.pl} there.\tsc{down} throw \tsc{npl-in-down-npl}-do.\tsc{pfv-pret-cvb} \tsc{npl}-be-\tsc{pret-cvb} Abdulkhalik-\tsc{erg} \\ \glt \sqt{‎‎The brother of my father (= Abdulkhalik) tore down the wall, apparently took the good (materials) in order to build his house (=room), the bad (materials) Abdulkhalik threw away there, (My fathers brother was called Abdulkhalik.)} \end{exe} Therefore, we might wonder if we perhaps observe an ongoing change in which subordinate verb forms develop into forms that can head independent main clauses. For Mehweb Dargwa, it has been observed in elicitation that some speakers allow perfective and imperfective converbs to head main clauses \citep{KustovaForthcoming}, although the corpus does not contain any examples. In Sanzhi, the situation is reversed: in elicitation, examples such as \refex{ex:‎‎Then slowly I learned the language and I did my (military) service} and \refex{ex:‎‎The brother of my father (= Abdulkhalik) tore down the wall} are clearly judged as subordinate clauses, but in narrations we find again and again subordinate clauses with a missing main clause. The following excerpt from a discussion between two speakers illustrates the phenomenon. The conversation starts with a question by speaker A \refex{ex:‎‎‎Why did he die}, which is then answered by speaker B. About half of the utterances by speaker B are formally subordinate clauses. \begin{enumerate} \item finite clause (speaker A) % \begin{exe} \ex \label{ex:‎‎‎Why did he die} \gll c'il cellij w-ebč'-ib-le=de?\\ then why \tsc{m-}die\tsc{.pfv-pret-cvb=pst} \\ \glt \sqt{‎‎‎Why did he die?} \end{exe} \item non-finite clause as answer (speaker B) % \begin{exe} \ex \label{ex:‎in the ditches, he died because of hunger} \gll cin-na hetːu qːanaw-t-a-cːe-w kːiši-l w-ebč'-ib-le, \ldots\\ \tsc{refl.sg-gen} there ditch\tsc{-pl-obl-in-m} hunger\tsc{-adv} \tsc{m-}die\tsc{.pfv-pret-cvb}\\ \glt \sqt{‎in the ditches, he died because of hunger, \ldots} \end{exe} \item finite clause (speaker B) % \begin{exe} \ex \label{ex:‎What could it be, why should he die} \gll ∅-irχʷ-an=de cellij ubk'-an-ne c'il\\ \tsc{m-}be\tsc{.ipfv-ptcp=pst} why die\tsc{.m.ipfv-ptcp-fut.3} then\\ \glt \sqt{Something must have happened to him, why should he die (i.e. 
what other reasons were there to die at that time).} \end{exe} \item non-finite clause (speaker B) % \begin{exe} \ex \label{ex:He got ill, and they let him go} \gll zaˁʡip ∅-ič-ib-le, w-ataʁ-ib-le, \ldots\\ ill \tsc{m-}occur\tsc{.pfv-pret-cvb} \tsc{m-}let\tsc{.pfv-pret-cvb}\\ \glt \sqt{He got ill, and they let him go, \ldots} \end{exe} \item non-finite clause (speaker B) % \begin{exe} \ex \label{ex:AAAAAAAAAAAAHHH} \gll nuz-b-a-l b-ukː-unne, χalq' nuz-b-a-l t'ut'u ka-b-ik'-ul, \ldots\\ louse\tsc{-pl-obl-erg} \tsc{hpl-}eat\tsc{.ipfv-icvb} people louse\tsc{-pl-obl-erg} spread \tsc{down}\tsc{-hpl-}move\tsc{.ipfv-icvb}\\ \glt \sqt{‎‎‎The lice were eating (the people), lice were all over (the people), \ldots} \end{exe} \item finite clause (speaker B) % \begin{exe} \ex \label{ex:‎They died without stopping, in Sanzhi (people) died} \gll b-ubč'-i naχadu. Sanži-b b-ebč'-ib\\ \tsc{hpl-}die\tsc{.pfv-hab.pst.3} without.break Sanzhi\tsc{-hpl} \tsc{hpl-}die\tsc{.pfv-pret}\\ \glt \sqt{‎They died without stopping. In Sanzhi (people) died.} \end{exe} \end{enumerate} \citet{Mithun2008}, examines the development of subordinate clauses into main clauses in Navajo, Central Alaskan, Yup'ik, and a few other languages, and notes that the respective sentences contain background information, evaluations or comments that do not advance the storyline. However, this does not seem to be the case in Sanzhi. In both examples \refex{ex:‎‎Then slowly I learned the language and I did my (military) service} and \refex{ex:‎‎The brother of my father (= Abdulkhalik) tore down the wall}, it is rather the other way around. The \is{adverbial clause}adverbial clauses drive forward the narrative and the main clauses that follow them provide background information or evaluations. And when we compare the main clauses with the subordinate clauses in \refex{ex:‎‎‎Why did he die} to \refex{ex:‎They died without stopping, in Sanzhi (people) died}, there is no obvious division into story line and background information that correlates with the use of converbs and finite verb forms. Only a more detailed study of the Sanzhi corpus can help to clarify whether we really observe an ongoing change, or whether utterances such as the ones discussed in this Section can simply be explained as natural, unprepared spoken text or perhaps performance errors. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The syntax of conditional clauses} \label{sec:The syntax of conditional clauses} Conditional clauses behave syntactically like \is{adverbial clause}adverbial clauses, but also show some differences; for the morphological structure and their functions see \refsec{cpt:conditionalconcessiveclauses}. Firstly, \is{conditional clause}conditional clauses have \isi{person agreement}. Secondly, \is{conditional clause}conditional clauses express the difference between present/future time and past time reference, and they can also express irrealis mood. Thirdly, an \isi{imperative} marker in a main clause does not have scope over the \isi{conditional clause} \refex{ex:‎‎‎If you need me, burn one hair}, such that the illocutionary scope is always ``local''. Conditional clauses may share their subject or other arguments with the main clause, but this is not a requirement. They mostly precede the main clause, but can also follow it \refex{ex:‎‎‎(They) will operate your eye, she said, if you go (to the doctor)}. 
\begin{exe} \ex \label{ex:‎‎‎If you need me, burn one hair} \gll [du ħaˁžat b-ik'-ulle] b-ikːʷ-a ca ʁez!\\ \tsc{1sg} need \tsc{n-}say\tsc{.ipfv-cond.1} \tsc{n-}burn\tsc{-imp} one hair\\ \glt \sqt{‎‎‎If you need me, burn one hair!} \ex \label{ex:‎‎‎(They) will operate your eye, she said, if you go (to the doctor)} \gll ``ala ul-li-j aparacija b-irq'-u,'' r-ik'ʷ-ar, ``[r-uq'-utːe]''\\ \tsc{2sg.gen} eye\tsc{-obl-dat} operation \tsc{n-}do\tsc{.ipfv-prs} \tsc{f-}say\tsc{.ipfv-prs} \tsc{f-}go\tsc{-cond.2sg}\\ \glt \sqt{‎‎‎``(They) will operate your eye,'' she said, ``if you (fem.) go (to the doctor).''} \end{exe} The past \isi{conditional} occurs recurrently without an apodosis \refex{ex:‎if he did not drink first}. Such sentences can also express wishes \refex{ex:‎‎‎In which year it was, beloved Allah, if I would remember the years}. \begin{exe} \ex \label{ex:‎if he did not drink first} \gll bahsar a-b-učː-an-del iž ce=del, \ldots \\ first \tsc{neg-n-}drink\tsc{.ipfv-ptcp-cond.pst} this what\tsc{=indef}\\ \glt \sqt{‎if he did not drink first, \ldots} \ex \label{ex:‎‎‎In which year it was, beloved Allah, if I would remember the years} \gll čum-ib dusː-i-cːe-b=de=l w-ikː-an Allah dus-me han d-ik-ardel, \ldots\\ how.many\tsc{-ord} year\tsc{-obl-in-n=pst=indq} \tsc{m-}want\tsc{.ipfv-ptcp} Allah year\tsc{-pl} remember \tsc{npl-}occur\tsc{.pfv-cond.pst}\\ \glt \sqt{‎‎‎In which year it was, beloved Allah, if I would remember the years, \ldots} \end{exe} Interrogative pronouns \refex{ex:If Zapir buys what Zainab will marry him} and focus-sensitive \is{focus-sensitive particle}particles \refex{ex:If Zapir buys only a car, Zainab will not marry him} are allowed to occur in \is{conditional clause}conditional clauses. Extraction out of \is{conditional clause}conditional clauses is blocked \refex{ex:The car that if Zapir buys it Zainab will marry him is a foreign car}: \begin{exe} \ex \label{ex:Zapir, Zainab, and Heloise} \begin{xlist} \ex \label{ex:If Zapir buys what Zainab will marry him} \gll [Zapir-ri ce asː-ar] Zajnab xadi r-ax-an-ne?\\ Zapir\tsc{-erg} what buy\tsc{.pfv-cond.3} Zainab married \tsc{f-}go\tsc{-ptcp-fut.3}\\ \glt \sqt{If Zapir buys what Zainab will marry him?} (E) \ex \label{ex:If Zapir buys only a car, Zainab will not marry him} \gll [Zapir-ri mašin=cun asː-ar] Zajnab xadi a-r-ax-an-ne\\ Zapir\tsc{-erg} car=only buy\tsc{.pfv-cond.3} Zainab married \tsc{neg-f-}go\tsc{-ptcp-fut.3}\\ \glt \sqt{If Zapir buys only a car, Zainab will not marry him.} (E) \ex \label{ex:If Zapir buys a foreign car, Zajnab will marry him} \gll [Zapir-ri mašin asː-ar] Zajnab xadi r-ax-an-ne\\ Zapir\tsc{-erg} car buy\tsc{.pfv-cond.3} Zainab married \tsc{f-}go\tsc{-ptcp-fut.3}\\ \glt \sqt{If Zapir buys a car, Zajnab will marry him.} (E) \ex \label{ex:The car that if Zapir buys it Zainab will marry him is a foreign car} \gll {*} [[Zapir-ri \_$_{i}$ asː-ar] Zajnab xadi r-ax-an] mašin$_{i}$ inomarka ca-b\\ {} Zapir\tsc{-erg} \tsc{abs} buy\tsc{.pfv-cond.3} Zainab married \tsc{f-}go\tsc{-ptcp} car foreign.car \tsc{cop-n}\\ \glt (Intended meaning: \sqt{The car that if Zapir buys it Zainab will marry him is a foreign car.}) (E) \end{xlist} \end{exe}
{ "alphanum_fraction": 0.7176385912, "avg_line_length": 95.8185907046, "ext": "tex", "hexsha": "f33b401343bdbde1d79b3d880809f6dc8e7602d5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e9b8a5670dbc0500e17c80aae117a5975b14ecb4", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/250", "max_forks_repo_path": "chapters/syntax_adverbialclauses.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e9b8a5670dbc0500e17c80aae117a5975b14ecb4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/250", "max_issues_repo_path": "chapters/syntax_adverbialclauses.tex", "max_line_length": 1675, "max_stars_count": null, "max_stars_repo_head_hexsha": "e9b8a5670dbc0500e17c80aae117a5975b14ecb4", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/250", "max_stars_repo_path": "chapters/syntax_adverbialclauses.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 20986, "size": 63911 }
\chapter{MEG source localisation\label{Chap:data:sloc}} \section{Overview} In this section we will generate some simulated data to show how the inversion algorithms compare when the ground-truth is known. \section{Simulation} The easiest way to simulate M/EEG data is by replacing data from an existing experimental recording in which sensor locations/head position etc are already defined. You can do this using the batch editor. Start the batch editor (\texttt{Batch} button) on main panel. Then from the dropdown menu SPM: select \texttt{M/EEG}; select \texttt{Source reconstruction}; select \texttt{Simulation of sources on cortex}. You will see the following menu: \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide1} \caption{\em The simulation batch options} \label{meg_sloc:fig:1} \end{center} \end{figure} You can use any SPM file you like to provide the basic simulation set up: this file will include information on sensor locations, triggers, head model. As an example we can use the preprocessed multimodal face-evoked MEG dataset\footnote{Multimodal face-evoked dataset: \url{http://www.fil.ion.ucl.ac.uk/spm/data/mmfaces/}}. So for M/EEG dataset select \begin{verbatim} cdbespm12_SPM_CTF_MEG_example_faces1_3D.mat \end{verbatim} \texttt{Inversion index} allows you to determine which forward model/ inversion is used to simulate data, leave this at the default value (1) for now. \texttt{Output file prefix} allows you to specify the prefix to be added to the new file. Under 'what conditions to include', you can either specify to simulate data in all experimental conditions 'All' or in specific conditions only. Here we want to test between conditions so we will simulate data in only one condition. Select the \texttt{Conditions} option and for \texttt{Condition labe}' type \begin{verbatim} faces \end{verbatim} The next option \texttt{Use inversion or define sources} allows you to either re-generate data based on a previous source reconstruction (and vary the SNR) or to set up a number of active sources on the cortical surface. We will use the last option, select \texttt{Set sources}. You can use the default options for now which defines two sources at different frequencies in approximately the auditory cortices. That is, the two dipoles are currently set to be on (at 10 and 20Hz) during the faces condition and off during the scrambled condition. This file has dipoles at [52, -25, 9] and [-52, -25, 9] in MNI space. The dipoles are energized at 10Hz and 20Hz from 0.1 to 0.4 seconds (Figure~\ref{meg_sloc:fig:1}). In each epoch the activation profile is identical, the channel data will be slightly different due to the white noise added. The green arrow in the top left menu bar should light up when all the essential parameters have been input and you can press it to run the simulation. \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide2} \caption{\em The simulation outputs a glass brain showing maximum and median channel time-series as well as a glass brain showing the locations at which the sources were simulated } \label{meg_sloc:fig:2} \end{center} \end{figure} You can visualise the data trial by trial if you like by using the main menu Display/MEEG button. \section{Imaging solutions for evoked or induced responses} There are two pathways you can take to analyse the data. Either with the GUI buttons or with the batch interface. Firstly lets use the GUI to check that the data is ready for source inversion. 
On the main menu click \texttt{3D Source Reconstruction}. Press \texttt{Load}. Select the new simulated data file \texttt{sim\_cdbespm12\_SPM\_CTF\_MEG\_example\_faces1\_3D.mat}. Moving left to right along the bottom panel you will notice that all of the buttons (MRI, Co-register, Forward Model) are active. This means that the preprocessing stages have already been carried out on these data (see multi-modal evoked responses chapter). The advantage of the batch interface is that you can then build up large and automated analysis pathways; it is also a little more flexible, so it has more functionality. So restart the Batch editor from the main menu \texttt{Batch}. Then from the \texttt{SPM} drop-down menu select \texttt{M/EEG} / \texttt{source reconstruction} / \texttt{Source inversion}. Select the new simulated data file \texttt{sim\_cdbespm12\_SPM\_CTF\_MEG\_example\_faces1\_3D.mat}. Now we wish to invert all conditions using the same assumptions (and then compare between conditions afterwards), so under 'what conditions to include' select 'All'. At this point we can now try out inversion algorithms with different implicit assumptions. Under \texttt{Inversion parameters} select \texttt{Custom}. We will modify \texttt{inversion type} in the subsequent sections. Select \texttt{IID} for minimum norm assumptions for now. For the \texttt{Time window of interest} select from 0 to 600ms. For the \texttt{frequency window of interest} select 0 to 80 Hz (our data were simulated at 10 and 20Hz between 100 and 400ms). All the other settings should remain at their defaults. \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide3} \caption{\em The basic batch settings for the source inversion} \label{meg_sloc:fig:3} \end{center} \end{figure} \subsection{IID (minimum norm)} We will start off with the traditional minimum norm solution: the 'IID' inversion type option. This starts by assuming that all source elements contribute something to the measured data. The constraint is that the total power (in the sources) should be minimised. Press \texttt{Invert}. Under reconstruction method press \texttt{Imaging}. For \texttt{All conditions or trials} press \texttt{Yes}. For model press \texttt{Custom}. Model inversion \texttt{IID}. Under Time-window ``0 600''. For \texttt{PST Hanning} select \texttt{Yes}. For High-pass (Hz) select \texttt{1}, for Low-pass (Hz) select \texttt{48}. For \texttt{Source priors}, select \texttt{No}. Under \texttt{Restrict solutions} select \texttt{No}. \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide4} \caption{\em IID imaging source reconstruction \label{meg_sloc:fig:4}} \end{center} \end{figure} We see the anticipated minimum norm result. The current density estimate is diffuse and relatively superficial due to the minimum energy constraint. Note the log-evidence 1987917 (this value depends on the data, so the value of the log evidence you see may be different, but it is this value relative to those following which is important). The top panel shows two time-series extracted from the mesh vertex (location given at the top of the panel) with highest posterior probability. The red line corresponds to the first condition (faces). Note the sinusoidal time-series, which should correspond in frequency to the source simulated on that side of the brain. The grey line corresponds to the other condition (scrambled) in which no data were simulated.
\subsection{Smooth priors (COH)} The \texttt{COH} option, under \texttt{Inversion type}, allows the mixture of two possible source covariance matrices: the minimum norm prior above and a much smoother source covariance matrix in which adjacent sources are correlated (over the scale of a few mm). Select \texttt{COH} as the custom source reconstruction and run the batch again. You will see a plot similar to Figure~\ref{meg_sloc:fig:5} appear. The lower panel shows the glass brain in which bilateral sources are apparent. The upper panel shows the time-series of the source with the largest amplitude. In this case the peak activation is identified at location 59, -15, 15mm. The 20Hz time-course (associated with this source) is also clearly visible in the top panel. The log evidence is 2000393 (again, this number may be different in your SPM version). Note both that the source reconstruction is more compact and that the log evidence has increased over the IID solution. \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide5} \caption{\em COH imaging source reconstruction.\label{meg_sloc:fig:5}} \end{center} \end{figure} \subsection{The Multiple sparse priors algorithm} \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide6} \caption{\em Multiple sparse priors imaging source reconstruction using the Greedy Search (GS) option.\label{meg_sloc:fig:6}} \end{center} \end{figure} In contrast to IID or COH, the greedy search routine used in MSP builds up successive combinations of source configurations until the model evidence can no longer be improved. Select \texttt{GS} as the inversion type and run the batch again. You will see a plot similar to Figure~\ref{meg_sloc:fig:6} appear. The lower panel shows the glass brain in which bilateral sources are apparent. The upper panel shows the time-series of the source with the largest amplitude. Again the source reconstruction is compact, with a log evidence of 2150944. Note both that the source reconstruction is more compact and that the log evidence has increased over the IID and COH solutions. There are two more options in the basic MSP implementation: ARD, based on the removal of patches that contribute little to the model evidence; and the use of both schemes, 'ARD and GS', in which both methods provide candidate source covariance estimates which are then combined. You can try out these other options for yourself and note the model evidences (which will be directly comparable as long as the data do not change). \subsection{Making summary images} Often we will be interested in some summary of conditions over a specific time-frequency window. We can add in an extra module to the batch script to produce such an output. From the \texttt{SPM} drop-down menu click \texttt{M/EEG}/ \texttt{Source reconstruction}/ \texttt{Inversion results}. Now for \texttt{M/EEG dataset}, click \texttt{Dependency} and press OK to link the output of the previous function (the inversion) to the input of this one. We can now produce power images per condition based on a 0 to 600ms time window and a 0 to 80Hz frequency window. For \texttt{Contrast type} select \texttt{Evoked} and for output space and format select \texttt{MNI} and \texttt{Mesh}. You should now be able to run the complete batch, which re-does the inversion and outputs two surface meshes (one for each condition). You can view these meshes from the main menu: \texttt{Render}/ \texttt{Display}. The output image for the face condition (and the IID algorithm) is shown below.
\begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide7} \caption{\em Summary power image from IID source reconstruction on mesh.\label{meg_sloc:fig:7}} \end{center} \end{figure} \subsection{Other MSP options} The MSP algorithm is optimized to give the simplest source distribution that explains the most data. However, the library of priors (possible patch locations) must be specified in advance. This could potentially cause a problem if the source of interest were not precisely centred on one of the patches in the default library. To this end, Jose David Lopez (Conf Proc IEEE Eng Med Biol Soc. 2012;2012:1534-7.) has produced a version of MSP which uses multiple random patch libraries to invert the same data several times. We can make a new batch file for this. So restart the Batch editor from the main menu \texttt{Batch}. Then from the \texttt{SPM} drop-down menu select \texttt{M/EEG} / \texttt{source reconstruction} / \texttt{Source inversion, iterative}. Select \texttt{Classic} as the custom source reconstruction algorithm: this is basically the original version of the MSP algorithm without any re-scaling factors to allow mixing of modalities or group imaging. It is advantageous in many cases, as the lack of these scaling factors means that it is a true generative model of the data (and it becomes possible to test between different head positions, etc.). Note, however, that these differences in pre-processing mean that at the moment the data entering the inversion (for custom and classic options) are different, and so it is not possible to compare between solutions from these two pipelines. The rest of the parameters (time and frequency windows, etc.) can remain as they were in the last section. The new options are the choice over the number of patches in the library and the number of iterations. You can play with these parameters to adjust the relative burden of computation time. For example, allowing just 2 patches and many iterations will make this something like a (cortically constrained) multiple dipole (patch) fit. Alternatively, having lots of patches initially will mean the computation time is spent on pruning this set (with ARD or GS, etc.). You also have control over the number of temporal and spatial modes which will be used here (this makes it easier to compare between models where the lead field matrix has changed). The algorithm returns the current distribution based on the patch set with maximum free energy. \begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide8} \caption{\em Source inversions based on the same data but using randomly selected sets of 512 spatial priors.\label{meg_sloc:fig:8}} \end{center} \end{figure} An alternative to many spatial priors is to have a single prior that is optimised using functional constraints. This idea was put forward by Belardinelli P et al. PLoS One. 2012;7(12). Here a single candidate source covariance is estimated using beamformer priors and then regularized (in much the same way as the IID and COH matrices are) in the Bayesian framework. You can access this by selecting \texttt{EBB} (Empirical Bayes Beamformer) as the inversion type; but you should set the number of iterations here to 1 (as there is only a single prior and it will not change over repetitions).
\begin{figure} \begin{center} \includegraphics[width=100mm]{meg_sloc/slide9} \caption{\em Source inversions based on the same data but using a single beamformer prior.\label{meg_sloc:fig:9}} \end{center} \end{figure} You can see that the beamformer image is much more focal than any other image type (and it is fast to compute). However there will be many situations in which it is sub-optimal (such as if you were to simulate two correlated sources). In Belardinelli et al. the authors found that this failure was reflected in the free energy; meaning that it is still possible to directly compare this solution with GS , IID etc. \section{Dipole fitting to the average} Up until this point the analysis we have used could have been applied to either induced or evoked changes in electrical activity. The only difference being that it would not have made much sense to look at the MSPs for specific time-instants in the induced case and we would have proceeded directly to look for changes in a time-frequency window. To examine the dipole fit routine we will however concentrate on the averaged data file which will contain only evoked changes. For this final section we will revert back to the main gui. Press \texttt{Average}. Select the simulated data file and leave all of the other options as default. Press \texttt{3D source Reconstruction}. \subsection{Load/preview the data} In the main menu click on the drop-down \texttt{Display} menu. Select \texttt{M/EEG}. For the dipole fitting we are going to use averaged MEG data, this is prefixed with an ``m'' in SPM. You can generate this file by averaging the epoched file that we have used until now. Select the file \texttt{msim\_cdbespm12\_SPM\_CTF\_MEG\_example\_faces1\_3D.mat}. The two sources we simulated were at 10Hz an 20Hz frequency so we can select times when only one or both of them were active. At 235ms there is only one dominant source and at 205ms both sources are clearly visible at the sensor level. We will now move on to explore Bayesian dipole fitting to these two time instants. \subsection{Inversion} In the main menu window, select \texttt{3D Source Reconstruction}. Click \texttt{Load} and select the averaged simulated dataset above. Proceed by pressing the \texttt{Invert} button. Select the \texttt{VB-ECD} button. \subsubsection{Fitting a single dipole with no priors} At the \texttt{time\_bin or average\_win} prompt enter ``235''. For \texttt{Trial type number} choose ``1'' (we want to model the faces data). At the \texttt{Add dipoles to model} click \texttt{Single}. For \texttt{location prior} click \texttt{Non-info}. For \texttt{Moment prior} click \texttt{Non-info}. At the \texttt{Add dipoles to 1 or stop?} prompt click \texttt{stop}. At the \texttt{Data SNR (amp)} leave as default \texttt{5}. Leave the default number of iterations at ``10''. You will see the 10 successive fits of the same data using a random starting location and moment. At each fit maps of the predicted and simulated data along with free-energy values and percent variance explained are shown. The final plot will be similar to Figure~\ref{meg_sloc:fig:10} where the model (i.e. dipole) which maximised the evidence (the best iteration is shown with a red dot) is displayed. Note down the model evidence (in this case -7.508e2, but the absolute value in your implementation may be different). The Bayesian dipole fit algorithm will be most useful when one has some prior knowledge of the sources (such as location, orientation or symmetry). 
Typical dipole fit algorithms fit 3 location parameters per dipole and then estimate the moment through a pseudo-inverse. The VB-ECD algorithm, however, fits 6 parameters per dipole, as the moments are also given prior values. That is, if you have no prior knowledge then the Bayesian method will be generally less robust than such fitting methods (as more parameters are being fit). However, it is when prior knowledge is supplied that the Bayesian methods become optimal. \begin{figure} \begin{center} \includegraphics[width=140mm]{meg_sloc/slide10} \caption{\em Results of fitting a single dipole with noninformative priors.\label{meg_sloc:fig:10}} \end{center} \end{figure} \subsubsection{Fitting a single dipole with reasonable and unreasonable priors} We will now provide some prior knowledge to the dipole fit, perhaps led by the literature or a particular hypothesis. In this case we know the answer, but let us specify a location a couple of cm from where we know the source to be and try the fit again. At the \texttt{time\_bin or average\_win} prompt enter ``235''. For \texttt{Trial type number} choose ``1'' (we want to model the faces data). At the \texttt{Add dipoles to model} click \texttt{Single}. For \texttt{location prior} click \texttt{Informative}. For the location enter ``-62 -20 10''. For prior location variance leave at ``100 100 100'' mm$^2$. This means that we are not sure about the source location to better than 10mm in each dimension. For \texttt{Moment prior} click \texttt{Non-info}. At the \texttt{Add dipoles to 1 or stop?} prompt click \texttt{stop}. Leave the default number of iterations at ``10''. Again you will get a final fit location and model evidence (-7.455e2), which should have improved (be more positive) on the evidence above (because in this case our prior was more informative). Now go through exactly the same procedure as above but for the prior location enter ``-62 +20 10'', i.e. on the wrong side of the head. You will note that the algorithm finds the correct location but the evidence for this model (with the incorrect prior) is lower (-7.476e2). \subsubsection{Fitting more dipoles} We will start by examining the time instant at which we can clearly see a two-dipolar field pattern. At the \texttt{time\_bin or average\_win} prompt enter ``205'' (note that we are now changing the data, so the subsequent evidence values will not be comparable with those at 235ms). For \texttt{Trial type number} choose ``1''. At the \texttt{Add dipoles to model} click \texttt{Single}. For \texttt{location prior} click \texttt{Informative}. For the location enter ``62 -20 10''. For prior location variance enter ``400 400 400'' mm$^2$, that is, the prior standard deviation on the dipole location is 20mm in each direction. For \texttt{Moment prior} click \texttt{Non-info}. At the \texttt{Add dipoles to 1 or stop?} prompt click \texttt{Single}. For \texttt{location prior} click \texttt{Informative}. For the location enter ``-62 -20 10''. For prior location variance enter ``400 400 400'' mm$^2$. At the \texttt{Add dipoles to 1 or stop?} prompt click \texttt{stop}. Leave the default number of iterations at ``10''. Note down the final model evidence (-2.548e2). Alternatively we can exploit the fact that we have prior knowledge that the dipoles will be approximately left-right symmetric in location and orientation (this means we have fewer free parameters or a simpler model). At the \texttt{time\_bin or average\_win} prompt enter ``205''. For \texttt{Trial type number} choose ``1''.
At the \texttt{Add dipoles to model} click \texttt{Symmetric Pair}. For \texttt{location prior} click \texttt{Informative}. For the location enter \texttt{62 -20 10}. For prior location variance enter ``400 400 400'' mm$^2$. For \texttt{Moment prior} click \texttt{Non-info}. At the \texttt{Add dipoles to 2 or stop?} prompt click \texttt{stop}. Leave the default number of iterations at ``10''. Note that the final locations are approximately correct, but, importantly, the model evidence (-5.235e2) is lower than previously. Given this information one would accept the (more complex) two distinct dipole model over the symmetric pair model.
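To make these comparisons concrete, the log evidences (free energies) quoted in this section can be differenced to give approximate log Bayes factors. The following worked arithmetic is added here purely as an illustration and uses the example values quoted above (the values on your own system will differ):
\[
\begin{array}{lll}
\mbox{informative vs.\ non-informative prior (235ms data):} & -745.5-(-750.8)=5.3, & \mathrm{e}^{5.3}\approx 200\\
\mbox{correct vs.\ wrong-side prior (235ms data):} & -745.5-(-747.6)=2.1, & \mathrm{e}^{2.1}\approx 8\\
\mbox{two free dipoles vs.\ symmetric pair (205ms data):} & -254.8-(-523.5)=268.7, & \mathrm{e}^{268.7}
\end{array}
\]
A difference in log evidence of about 3 or more is conventionally taken as strong evidence for the better model, so the first and last comparisons are decisive, whereas the correct-versus-wrong-side comparison is more modest. Only evidences computed on the same data (here, the same time bin) can be compared in this way.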
{ "alphanum_fraction": 0.7854166667, "avg_line_length": 105.3658536585, "ext": "tex", "hexsha": "23fa2200e6996647b4bc49e86cf0140e0a0453c1", "lang": "TeX", "max_forks_count": 24, "max_forks_repo_forks_event_max_datetime": "2022-02-08T06:47:37.000Z", "max_forks_repo_forks_event_min_datetime": "2015-03-26T21:30:03.000Z", "max_forks_repo_head_hexsha": "2028515a244fcec88c072d4a66b97bbc57dc15c0", "max_forks_repo_licenses": [ "RSA-MD" ], "max_forks_repo_name": "wiktorolszowy/diffusion_fMRI", "max_forks_repo_path": "software/spm12/man/meg_sloc/meg_sloc.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "2028515a244fcec88c072d4a66b97bbc57dc15c0", "max_issues_repo_issues_event_max_datetime": "2020-07-06T23:53:13.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-06T21:37:06.000Z", "max_issues_repo_licenses": [ "RSA-MD" ], "max_issues_repo_name": "wiktorolszowy/diffusion_fMRI", "max_issues_repo_path": "software/spm12/man/meg_sloc/meg_sloc.tex", "max_line_length": 1141, "max_stars_count": 25, "max_stars_repo_head_hexsha": "4a1b6510cb32db6e2e4dff57bb81e6ece993f9db", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "spunt/bspm", "max_stars_repo_path": "thirdparty/spm12/man/meg_sloc/meg_sloc.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-12T16:18:42.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-26T21:29:58.000Z", "num_tokens": 5251, "size": 21600 }
\subsection{The Chinese Room Revisited} \p{John Searle's \q{Chinese Room} argument \mdash{} about someone who behaves like he understands Chinese by matching characters to responses from a vast table \mdash{} is often understood as claiming that \q{symbol processing} by itself can never produce real understanding, which is \i{semantic} and \i{conceptual}. Modern technology makes this thought-experiment less hypothetical: automated telephone systems often use a template mechanism that is practically like Searle's Chinese Room, understanding a limited range of sentences and producing a limited range of responses. But there are two different kinds of questions we can ask in relation to Searle's argument: some more philosophical and some more practical. } \p{On the philosophical side, we should properly assess the important questions as being qualitative and not quantitative: it's not as if a synthesized phone system is just not a very \i{good} conversationalist; it's that a software machine simply isn't the \i{kind of thing} that we can say actually understands language. This is plausible if we say that emotions and empathy are intrinsic to language; that we can't properly understand language if we do not grasp the emotions residing behind expressions. Indeed, as the case of Grandma's window shows, our status as competent interlocutors depends on reading intentions behind expressions, and it seems hard to do this if we can't experientially empathize with our linguistic partners. } \p{Maybe we are now just pushing the important questions back to reappear: OK, can computers be programmed to feel emotions? Is there a meaningful distinction between meaningfully, experientially having emotions and just behaving as if you have them? Are emotions themselves somehow functionalizable apart from their chemical/hormonal substrate, so that systems with a very different physical realization than ours can be said to have emotions? I can see how this debate can go different ways. But I'd also argue that any well-organized dialog about these questions will be only tangentially about language \mdash{} in which case, neither linguistics nor philosophy of language themselves can answer questions about what kind of systems (on metaphysical criteria) actually \q{do} language. That would imply that affirming a computer's linguistic capabilities as \i{real} linguistic understanding is a disciplinary non sequitur for linguistics proper. Nothing in the linguist's arsenal either demonstrates or depends on AI agents actually \i{being} part of our linguistic community or just mimicking language-use to some (sometimes helpful) degree. } \p{The more practical questions raised by Searle's Chinese Room come into play to the degree that the philosophical trail I just sketched turns many analyses into a non-starter. Consider these two questions to a hypothetical automated telephone service: \begin{sentenceList}\sentenceItem{} What time does the office open? \sentenceItem{} \label{itm:phone} What time does train 100 depart from Newark? \end{sentenceList} While we can see a template holding canned responses for both cases, (\ref{itm:phone}) needs to do more than just fit the input to the nearest pattern; it has to pull out the dynamically variant details (train \i{100} from \i{Newark}) and use those to fill in details in the response. This is something like \i{parsing} the original question.
So we can add bits and pieces of genuine linguistic processing to a minimal response-template system \mdash{} a real version of what Searle appeared to imagine in the Room. With enough added features the primitive template-driven kernel can evolve into a complex AI-powered Natural Language Processor (a minimal sketch of such a template-plus-slots responder is given at the end of this subsection). } \p{In that case we may imagine that \q{language understanding} exists on a spectrum. The primitive telephone service and an erudite bard may lie on opposite ends of a spectrum, but they share a spectrum between them. In this case, their differences are quantitative more than qualitative. The bard just has more \i{features} we associate with total linguistic behavior. } \p{However, this quantitative view still leaves open the question of where among the \q{features} we find something that actually drives language competence. Searle's Chinese Room helps point out these questions: it's reasonable to say that the simplest template-response system does not really understand language at all, since it is a pattern-matching system that does not have any structural relation to language itself. Analogous capabilities can be developed for a system which matches any kind of input to a pattern directing an output, based on any metric of similarity. The patterning reflects an actual \i{linguistic} parse only insofar as it selects elements via syntactic criteria, like grasping the non-template variables as \i{100} and \i{Newark}. So, even if the holistic behavior of different systems lies on a linguistic-competence scale, not all \i{parts} of the system seem to bear the weight of actually \i{realizing} linguistic competence equally. } \p{One reading of the Chinese Room is that \i{no} part of a system is truly linguistic. This includes the argument that holistically the Chinese Room \i{does} speak Chinese: Searle's discussion suggests that no \i{part} understands Chinese, but if we can imagine the entire room as a single system, this \q{entity} can be treated as a fluent Chinese speaker. Even if we reject that analysis, we could agree that, even among humans, \i{parts} of our language system arguably do not understand language: not nerve cells, not neural clusters for auditory processing, or syntax, or conceptualization, etc. It is us, the whole system, that uses language. The reason why \q{holistic} claims that \q{the entire room} speaks Chinese sound dubious may not be because something is \i{structurally} lacking in that whole system, but because it's not the kind of whole system \mdash{} with one body, one consciousness, one personhood \mdash{} that we think of as a conversant. } \p{Those who find Searle's analysis compelling probably believe that there \i{is} some meaningful difference between us (or at least people fluent in Chinese) and the Chinese Room. A further alternative, however, is that \i{we} are not language-users, at least not in the way we think we are. This claim can be expounded as follows: the philosophy of language, interactively with linguistics, seems to be looking for some essential kernel of linguistic capability that distinguishes us from AI engines or template-response systems. That is, AI-skeptics want to sift through all the models of processes within language, the central domains of linguistics, and find the few genera of linguistic processing that are unique to human language \mdash{} and computationally intractable.
These would be the smoking-gun evidence that no artificial system can equate to human language-use, because there is some essential stage in the linguistic pipeline that computers computationally can't realize. } \p{However, even if we accept the premises that the Chinese Room case suggests this analysis and, moreover, that it agrees with our underlying intuitions, there remains the possibility that computers are indeed lacking some stage associated with language \mdash{} but it is not a \i{linguistic} stage. If something like an Interface Theory of Meaning is correct, all linguistic processing is intermediary to some other cognitive layer: and perhaps the human quintessence lies on the far side, so it both limits what computers can linguistically achieve and lies outside of linguistics proper. }
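\p{To make the telephone-service example from earlier in this subsection concrete, the following is a minimal sketch, in Python, of the kind of template-plus-slots responder described there. Everything in it (the two patterns, the canned replies, the invented departure time) is purely illustrative; it is not drawn from any actual system, and it is certainly not offered as a model of understanding. }
\begin{verbatim}
import re

TEMPLATES = [
    # A pure canned response: any matching input triggers the same reply.
    (re.compile(r"what time does the office open", re.I),
     lambda m: "The office opens at 9 a.m."),
    # A template with slots: the train number and the station are pulled
    # out of the question and substituted into the reply -- a small step
    # toward an actual parse.
    (re.compile(r"what time does train (\d+) depart from (\w+)", re.I),
     lambda m: "Train {} departs from {} at 8:15 a.m.".format(
         m.group(1), m.group(2))),
]

def respond(utterance):
    """Return the first template reply whose pattern matches the input."""
    for pattern, reply in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return reply(match)
    return "I'm sorry, I didn't understand that."

print(respond("What time does the office open?"))
print(respond("What time does train 100 depart from Newark?"))
\end{verbatim}
\p{The first pattern does no syntactic work at all; the second extracts the two variable details and fills them into its reply, which is the small step toward parsing described above. Whether piling up such features ever amounts to genuine understanding is exactly the question this subsection leaves open. }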
{ "alphanum_fraction": 0.7892838743, "avg_line_length": 47.3414634146, "ext": "tex", "hexsha": "22954bdd7331b6d24dbbfa3decae8402d49f0349", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "ScignScape-RZ/ntxh", "max_forks_repo_path": "itm/ngml/section2a.ngml.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "ScignScape-RZ/ntxh", "max_issues_repo_path": "itm/ngml/section2a.ngml.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "ScignScape-RZ/ntxh", "max_stars_repo_path": "itm/ngml/section2a.ngml.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1805, "size": 7764 }
\chapter{Long-range electron correlation}\label{chap:vdw-methods} {\sffamily This chapter presents a review of the state of the art of microscopic models of van der Waals interactions that have been applied beyond simple toy models and have a basis in the adiabatic-connection fluctuation--dissipation (ACFD) formula for the exchange-correlation energy. While the reviewed methods are all prior works of other authors, the unified range-separation formalism based on the nonlocal polarizability, formal classification along the different approximations to the ACFD formula, and several re-derivations of existing methods are novel. Most of the original and derived results in this chapter have been published in \citep{HermannCR17}. } \section{Range separation of electron correlation}\label{sec:range-separation} The vdW force between atomic bodies held together by covalent, ionic, or metallic binding is always caused by the long-range electron correlation, but not all effects of the long-range correlation are considered to be a vdW force. In metals, the electrons from the nonconducting bands are localized on atoms, which form nonuniform islands in the sea of approximately uniform electron density of the conducting electrons \citep{TaoPRB10}. Here, the long-range correlation between the conducting electrons contributes to the metallic binding. In nonmetals, however, all electrons are nonconducting, the electron density is nowhere uniform, and long-range correlation is mostly associated with vdW interactions. The electronic structure within a single uniform subsystem differs qualitatively in many aspects from that in a nonuniform system. In a uniform system, the exchange effects, the KS density response function, and the XC kernel decay only algebraically with distance (they are long-ranged) as a result of the conducting electrons, whereas they decay exponentially (they are short-ranged) in nonuniform systems~\cite{GePRB15}. (The true density response function decays algebraically in both cases because of electron correlation.) Correspondingly, semilocal and hybrid XC functionals capture both short-range and long-range part of the XC energy in uniform systems, but only the short-range part in the nonuniform systems. The vdW interactions can be therefore associated with all long-range electron correlation except for that between conducting electrons within a single uniform subsystem, which is fortunately covered by semilocal and hybrid density functionals. The nonuniform situations include interactions between conducting electrons in disjoint metallic bodies, interactions of conducting electrons with localized electrons, either in the same metallic body, or in other bodies, as well as all interactions between localized electrons. 
The XC energy can be formally divided into a short-range (sr) and long-range (lr) part via the ACFD formula in~\eqref{eq:acfd-xc} by separating the double spatial integral into two parts using a range-separating function, $f$, which should decay at least exponentially fast, \begin{equation} \begin{gathered} \iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2=\iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2\big(1-f(\mathbf r_1,\mathbf r_2)\!\big)+\iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2f(\mathbf r_1,\mathbf r_2)\equiv\iint_\text{sr}\mathrm d\mathbf r_1\mathrm d\mathbf r_2+\iint_\text{lr}\mathrm d\mathbf r_1\mathrm d\mathbf r_2 \\ f(\mathbf r,\mathbf r)=0\qquad f(\mathbf r,\mathbf r+\mathbf R)=1-O(\exp(-R)) \end{gathered} \end{equation} Considering the range-separating function to be a functional of the density, $f\equiv f[n]$, it can be in principle always constructed exactly for a given short-range (or long-range) part. The short-range part of the ACFD expression accounts for short-range density fluctuations interacting via the short-range part of the Coulomb operator, while the long-range part accounts for long-range fluctuations interacting via the long-range Coulomb operator. The statements above about the XC energy in uniform and nonuniform systems can then be summarized in the following way, \begin{equation} \underbracket[0pt][2.5ex]{\overbracket[0pt][2.55ex]{E_\text{xc}}^\text{uniform:}}_\text{nonuniform:}=\overbrace{\underbrace{E_\text{x,sr}+E_\text{c,sr}}_\text{semilocal/hybrid}+\underbrace{E_\text{x,lr}}_{\approx 0}+\underbrace{E_\text{c,lr}}_\text{vdW}}^\text{semilocal/hybrid} \end{equation} Of course, such a range separation is exact only in principle, because the association of a given XC functional with a particular range-separating function is unknown. With the caveat about the uniform systems, the vdW interactions can then be associated with the long-range XC energy, \begin{equation} \begin{gathered} E_\text{xc,lr}=-\frac1{2\pi}\int_0^\infty\mathrm du\iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2\int_0^1\mathrm d\lambda\,\chi(\mathbf r_1,\mathbf r_2,\mathrm iu;\lambda)v_\text{lr}(\mathbf r_1,\mathbf r_2) \\ v_\text{lr}(\mathbf r,\mathbf r')=f(\mathbf r,\mathbf r')v(|\mathbf r-\mathbf r'|) \end{gathered} \label{eq:range-separation} \end{equation} In this setup, care must be taken about the potential double counting of the long-range XC energy in uniform systems from the semilocal or hybrid functionals and from the long-range ACFD formula. This double-counting does not matter in situations when the result of a calculation is an energy difference, such as when calculating the adsorption energy of a molecule on a metal surface. But it may pose a problem in other cases, for instance when investigating a lattice expansion of a metal. 
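As a concrete illustration of such a range-separating function (this particular choice is an assumption made only for the sake of the example; it is density-independent and is not necessarily the form adopted later in this work), one can take the error-function split familiar from range-separated hybrid functionals and range-separated RPA,
\begin{equation}
f(\mathbf r,\mathbf r')=\operatorname{erf}\big(\mu|\mathbf r-\mathbf r'|\big),\qquad
v_\text{lr}(\mathbf r,\mathbf r')=\frac{\operatorname{erf}(\mu|\mathbf r-\mathbf r'|)}{|\mathbf r-\mathbf r'|},\qquad
v-v_\text{lr}=\frac{\operatorname{erfc}(\mu|\mathbf r-\mathbf r'|)}{|\mathbf r-\mathbf r'|}
\end{equation}
which satisfies $f(\mathbf r,\mathbf r)=0$, approaches 1 even faster than exponentially (as $\operatorname{erfc}(\mu R)$), and reduces the choice of the range separation to a single crossover length $1/\mu$ between the short-range functional and the long-range ACFD treatment.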
The TD-DFT Dyson-like equation for the density response function in~\eqref{eq:dyson-td-dft} can be effectively range-separated by introducing some long-range effective Coulomb operator, $v_\text{eff}$, grouping the XC kernel (short-ranged in nonuniform systems) and the corresponding short-range Coulomb operator, $v-v_\text{eff}$, and averaging the resulting effective density response function, $\chi_\text{eff}$, over $\lambda$, \begin{equation} \begin{aligned} \chi^{-1}(\mathbf r,\mathbf r',u;\lambda)&=\chi^{-1}(\mathbf r,\mathbf r',u;0)-f_\text{xc}(\mathbf r,\mathbf r',u;\lambda)-\lambda\big(v(|\mathbf r-\mathbf r'|)-v_\text{eff}(\mathbf r,\mathbf r')\!\big)-\lambda v_\text{eff}(\mathbf r,\mathbf r') \\ &\approx\chi^{-1}_\text{eff}(\mathbf r,\mathbf r',u)-\lambda v_\text{eff}(\mathbf r,\mathbf r') \end{aligned} \label{eq:dyson} \end{equation} Interpreting the response functions as tensors on the vector space of spatial functions, and using the shorthand multiplication notation for tensor composition, the full density response function can be expressed explicitly, \begin{equation} \begin{aligned} \textstyle\int_0^1\mathrm d\lambda\chi(u;\lambda)&=\textstyle\int_0^1\mathrm d\lambda\big(\chi_\text{eff}(u)+\chi_\text{eff}(u)\lambda v_\text{eff}\chi(u;\lambda)\!\big) \\ &=\textstyle\int_0^1\mathrm d\lambda\big(\chi_\text{eff}(u)+\chi_\text{eff}(u)\lambda v_\text{eff}\chi_\text{eff}(u)+\chi_\text{eff}(u)\lambda v_\text{eff}\chi_\text{eff}(u)\lambda v_\text{eff}\chi(u;\lambda)\!\big) \\ &=\sum_{n=0}^\infty\big(\textstyle\int_0^1\mathrm d\lambda\,\lambda^n\big)\chi_\text{eff}(u)\big(v_\text{eff}\chi_\text{eff}(u)\!\big)^n \\ &=\sum_{n=0}^\infty\frac{1}{n+1}\big(\chi_\text{eff}(u)v_\text{eff}\big)^{n+1}v_\text{eff}^{-1} \\ &=-\ln\big(1-\chi_\text{eff}(u)v_\text{eff}\big)v_\text{eff}^{-1} \end{aligned} \end{equation} The effective density response function, defined in this way, is guaranteed to be short-ranged in nonuniform systems. The equivalent expressions can be written for the nonlocal dipole polarizability. Plugging this expression into the long-range ACFD formula then gives the long-range XC energy in terms of the effective density response function and the two long-ranged Coulomb operators, $v_\text{eff}$ and $v_\text{lr}$, \begin{equation} \begin{aligned} E_\text{xc,lr}&=\frac1{2\pi}\int_0^\infty\mathrm du\iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2\big[\ln\big(1-\chi_\text{eff}(u)v_\text{eff}\big)v_\text{eff}^{-1}v_\text{lr}\big](\mathbf r_1,\mathbf r_2) \\ &\equiv\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{\mathbf r}\Big(\ln\big(1-\chi_\text{eff}(u)v_\text{eff}\big)v_\text{eff}^{-1}v_\text{lr}\Big) \end{aligned} \end{equation} The effective range of $v_\text{lr}$ is governed by the complementary range of the method for the short-range XC energy (typically a semilocal XC functional), whereas the range of $v_\text{eff}$ is controlled by the range of the model for the effective density response, discussed below. \subsection{Static correlation} In metals, strongly correlated materials, and spin-unpaired systems, the Coulomb interaction between electrons becomes at least as important for the basic structure of the wave function as the kinetic energy. This leads to the general failure of mean-field models such as the HF and semilocal KS-DFT approximations, except for metals treated with KS-DFT, where the LDA-based XC functionals capture the electron correlation within the uniform electron density effectively.
In spin-unpaired (open-shell) systems, this effect is often called static or left--right correlation. Spin-unpaired systems typically result from breaking chemical bonds and separating the resulting fragments. In such cases, a mean-field method can describe well the spin-paired unbroken system, but fails for the spin-unpaired separated fragments. This might suggest that static correlation should be a part of the long-range correlation energy. But consider the case of dissociating a hydrogen molecule in the (singlet) ground state into two hydrogen atoms. The system of two separated atoms has zero total spin, and if one electron is measured on the left atom with some spin, the other will certainly be on the right atom because of Coulomb repulsion, and it will have the opposite spin. The quantum states of the two hydrogen atoms are entangled, making them really a single quantum system, albeit noninteracting. In a mean-field method, the probabilities of finding opposite-spin electrons at some points are uncorrelated, and there is a 50\% chance that both electrons will be located on the same atom. This underlying issue manifests differently in the HF and KS schemes. In the HF approximation, the missing opposite-spin correlation in the wave function results in a spurious on-site repulsion. In KS-DFT, on the other hand, the lack of correlation in the KS ground state is an expected part of the theory, resulting in two hydrogen atoms with mixed spin densities. Current XC functionals then fail by giving a different XC energy for such mixed-spin hydrogen atoms than for a pure-spin hydrogen atom. This demonstrates that the problem of static correlation in DFT is in fact a local problem that can be formulated on a single isolated hydrogen atom. Alternatively, one can argue that the Coulomb operator in the ACFD formula goes to zero at large distances, whereas the static correlation persists at all distances, so it must be the short-range (on-site) structure of the (spin) density response function that determines the correct XC energy in systems with static correlation. In any case, static correlation is part of the short-range XC energy, and the long-range correlation energy is indeed responsible solely for vdW interactions (except in uniform systems). That being said, the incorrect treatment of static correlation can have a large effect on the response properties of the electronic system, and hence on the long-range correlation energy as well. For instance, minimization of the total electronic energy with semilocal and hybrid density functionals (which are incapable of treating static correlation) leads to electron densities that are far too diffuse and polarizable, which yields overestimated vdW interactions. This issue, however, is already present in the isolated systems, and is independent of the long-range interaction between them. \section{Local effective polarizability}\label{sec:local-pol} This section argues that the formulation of the ACFD formula in terms of the nonlocal polarizability is a better starting point for developing approximate models than the formulation in terms of the density response function.
Expressing $\chi$ in terms of $\boldsymbol\alpha$, using the same integration-by-parts technique as when deriving the relationship between $\chi$ and $\boldsymbol\alpha$ in~\eqref{eq:alpha-chi}, and transferring the divergence operators on the Coulomb operator, the ACFD formula can be cast in terms of the nonlocal dipole polarizability, \begin{equation} \begin{gathered} E_\text{xc,lr}[\boldsymbol\alpha]=\frac1{2\pi}\int_0^\infty\mathrm du\iint\mathrm d\mathbf r_1\mathrm d\mathbf r_2\int_0^1\mathrm d\lambda\,\boldsymbol\alpha(\mathbf r_1,\mathbf r_2,\mathrm iu;\lambda)\mathbf T_\text{lr}(\mathbf r_1,\mathbf r_2) \\ \mathbf T_\text{lr}(\mathbf r,\mathbf r')=\boldsymbol\nabla\otimes\boldsymbol\nabla' v_\text{lr}(\mathbf r,\mathbf r') \end{gathered} \end{equation} Following the same procedure as with the density response function, this can be recast in terms of the effective polarizability, \begin{equation} \begin{aligned} E_\text{xc,lr}=\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{\mathbf r}\Big(\ln\big(1+\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\big)\mathbf T_\text{eff}^{-1}\mathbf T_\text{lr}\Big) \end{aligned} \label{eq:acfd-vdw} \end{equation} \begin{figure} \includegraphics[center]{media/chi-vs-alpha} \caption{\textbf{Density response function vs.\ dipole polarizability.} Contour plots of model one-dimensional nonlocal (left) and local (right) responses encoded as the density response function (top), $\chi(x,y)$, and the dipole polarizability (bottom), $\alpha(x,y)$, defined in~\eqref{eq:model-responses}. The red and blue colors correspond to positive and negative values. The red lines denote the positions of the two responding Gaussian charge densities on the $x$-axis. }\label{fig:chi-vs-alpha} \end{figure} There are three reasons why the effective-polarizability version of the ACFD formula turns out to be a good starting point for approximate models. First, models of $\boldsymbol\alpha_\text{eff}$ can effectively capture all short-range XC effects, which modify the magnitude of the bare KS polarizability, without accounting for these effects explicitly via the Dyson equation. Second, such models do not need to represent the short-range structure of the polarizability correctly, because it interacts only via the long-range dipole operators. Third, the density response function has a complex nodal structure, as it describes depletion of the electron density at some points and its accumulation elsewhere. In contrast, the corresponding polarizability is a rotation-free smooth vector field that encodes that underlying nodal structure implicitly in terms of its local behavior via the divergence operators in~\eqref{eq:alpha-chi}. This is true even in the case of a long-ranged nonlocal density response function that is characteristic of uniform systems. Therefore, the strength of the response is translated directly into the magnitude of the polarizability, whereas it is translated only indirectly into the magnitude of the gradient of the density response function. 
For illustration, consider two one-dimensional (1D) Gaussian charge densities located at $\pm 1$ (a crude model of atoms), and two model density response functions, one local, $\chi_\text{loc}$, the other nonlocal, $\chi_\text{nlc}$, (Figure~\ref{fig:chi-vs-alpha}), \begin{equation} \begin{gathered} \chi_\text{nlc}(x,x')=-(\mathrm e^{-(x+1)^2}-\mathrm e^{-(x-1)^2})(\mathrm e^{-(x'+1)^2}-\mathrm e^{-(x'-1)^2}) \\ \chi_\text{loc}(x,x')=-(x+1)\mathrm e^{-(x+1)^2}(x'+1)\mathrm e^{-(x'+1)^2}-(x-1)\mathrm e^{-(x-1)^2}(x'-1)\mathrm e^{-(x'-1)^2} \end{gathered} \label{eq:model-responses} \end{equation} In one dimension, the dipole polarizability is a scalar, and uniquely determined by integrating over the density response function, \begin{equation} \alpha^\text{1D}(x,x')=-\int_{-\infty}^x\mathrm dy\int_{-\infty}^{x'}\mathrm dy'\chi^\text{1D}(y,y') \end{equation} Even in these trivial models, the density response function changes sign around atoms, and has a nontrivial nodal structure, whereas the polarizability is positive everywhere. Furthermore, the nonlocal density response translates into a polarizability that is still localized, but over a larger region spanning both atoms. These observations are crucial for multipole expansions of both $\chi$ and $\boldsymbol\alpha$ discussed below. The localized nature of the dipole polarizability combined with the insensitivity of the long-range ACFD formula to the short-range structure of the effective polarizability hints at the possibility of a relatively accurate local representation of $\boldsymbol\alpha_\text{eff}$, formally obtained by integrating over some neighborhood, $M(\mathbf r)$, around each point, $\mathbf r$, \begin{equation} \boldsymbol\alpha_\text{eff}(\mathbf r,\mathbf r',u)\approx\delta(\mathbf r-\mathbf r')\int_{M(\mathbf r)}\mathrm d\mathbf r''\boldsymbol\alpha_\text{eff}(\mathbf r,\mathbf r'',u)\equiv\delta(\mathbf r-\mathbf r')\boldsymbol\alpha_\text{eff}(\mathbf r,u) \end{equation} Since $\alpha_\text{eff}(\mathbf r,u)$ depends on the properties of the system only in the near neighborhood of $\mathbf r$, good semilocal approximations to it can be constructed using local quantities such as the electron density, its gradient, or the KS kinetic energy density, in a similar manner to how the hierarchy of semilocal XC functionals is built. In uniform systems, the polarizability is localized only algebraically, the effective neighborhoods would need to be larger, and correspondingly, the range separation of the Dyson-like equation would need to be shifted towards larger separations. On the other hand, in the case of a vdW interaction between a metal and a nonmetal, the long-range nonlocal electronic fluctuations in the former do not have any long-range counterpart in the latter, preventing any correlation on such length scales, and only relatively short-range fluctuations in the metal contribute to the vdW attraction. For this reason, the local effective polarizability can effectively capture the true response even in such cases. \subsection{Harmonic oscillator as a polarizability model} The frequency dependence of the imaginary part of the density response function or dipole polarizability encodes the full optical (electromagnetic) spectrum. This is equivalent to knowing the full energy spectrum of the corresponding Hamiltonian, which is a much harder problem than calculating the ground-state energy.
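The 1D toy model above can be checked directly on a grid. The sketch below builds both model response functions, obtains the polarizability by the cumulative double integral with the sign convention used above, and confirms that it is non-negative and localized; the grid parameters are arbitrary.
\begin{verbatim}
# Numerical sketch of the 1D toy responses: build chi on a grid and obtain
# alpha(x, x') = -int_{-inf}^{x} dy int_{-inf}^{x'} dy' chi(y, y') by
# cumulative sums, then check positivity and localization.
import numpy as np

x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]

g = lambda y: np.exp(-(y + 1)**2) - np.exp(-(y - 1)**2)
h = lambda y, s: (y + s) * np.exp(-(y + s)**2)

chi_nlc = -np.outer(g(x), g(x))
chi_loc = -np.outer(h(x, 1), h(x, 1)) - np.outer(h(x, -1), h(x, -1))

def alpha_1d(chi):
    return -np.cumsum(np.cumsum(chi, axis=0), axis=1) * dx * dx

for name, chi in (("nonlocal", chi_nlc), ("local", chi_loc)):
    a = alpha_1d(chi)
    print(name, "min:", a.min(), "max:", a.max())   # min is ~0, never negative
\end{verbatim}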
Fortunately, the ACFD formula contains the polarizability only under an integral over all frequencies, so it is sufficient to model the spectrum only to the extent that its overall weight is represented accurately. This is done conveniently by modeling directly the imaginary-axis dependence of the polarizability, $\boldsymbol\alpha_\text{eff}(\mathbf r,\mathrm iu)$, which is a real-valued monotonically decreasing function, and must decay quadratically in the high-frequency limit. These conditions are satisfied by a simple rational function (the simplest Padé approximant), specified by two parameters, $\boldsymbol\alpha_\text{eff}(\mathbf r,0)$ and $\omega(\mathbf r)$, \begin{equation} \boldsymbol\alpha_\text{eff}(\mathbf r,\mathrm iu)\approx\frac{\boldsymbol\alpha_\text{eff}(\mathbf r,0)}{1+\frac{u^2}{\omega(\mathbf r)^2}} \end{equation} The interpretation of this formula is provided by a charged harmonic oscillator, for which it is the exact result. Consider a particle with a charge, $q$, and mass, $m$, in a harmonic potential, $v_\text{ext}(\mathbf r)=\tfrac12m\omega^2|\mathbf r|^2$, and under a dissipative force, $-m\zeta\mathrm d\mathbf r/\mathrm dt$ (Lorentz oscillator). The total polarizability of such a system can be expressed in a closed form, \begin{equation} \alpha_\text{tot}^\text{LO}(u;\zeta)=\frac{q^2/m}{\omega^2-u^2+\mathrm i\zeta u} \label{eq:osc-pol} \end{equation} The electronic Hamiltonian without any interaction with the environment is nondissipative, and the corresponding oscillator model is recovered at the limit of $\zeta\rightarrow0$, \begin{equation} \begin{aligned} \lim_{\zeta\rightarrow 0}\alpha_\text{tot}^\text{LO}(u;\zeta)&=\frac{q^2/m}{\omega^2-u^2}-\mathrm i\frac\pi2\frac{q^2}{m\omega}\delta(u-\omega) \\ \alpha_\text{tot}^\text{LO}(\mathrm iu;0)&=\frac{q^2/m\omega^2}{1+u^2/\omega^2} \end{aligned} \end{equation} The same result is obtained using either a classical or a quantum treatment. \section{Classification of vdW methods} \begin{figure} \includegraphics[center]{media/methods-diagrams} \caption{\textbf{Coarse-graining and many-body expansion.} Different kinds of approximations to the adiabatic-connection fluctuation--dissipation formula for the long-range exchange--correlation energy are shown on the ball-and-stick model of the benzene dimer. (\textbf a) Legend for the graphical representation of polarizabilities, $\boldsymbol\alpha$, and dipole operators, $\mathbf T$. The clouds around the effective polarizability represent its effective spatial extent. (\textbf b) The random-phase approximation is an ab-initio many-body method without coarse-graining that approximates the true coupling of the bare Kohn--Sham polarizability with a bare dipole operator. The long-range part of the XC energy is formed by terms in which at least one of the dipole operators is long-ranged. The red and orange colors denote second- and third-order terms, respectively. (\textbf c) Nonlocal density functionals are second-order effective models without coarse-graining. The polarizability is approximated locally. (\textbf d) Many-body dispersion is a coarse-grained atomic model. }\label{fig:diagrams} \end{figure} Most existing models of long-range correlation can be described in terms of various approximations to the range-separated effective-polarizability version of the ACFD formula in~\eqref{eq:acfd-vdw}. One of them is the already discussed local representation of the effective polarizability.
Two other general and common approximations are spatial coarse-graining of the system and truncation of the infinite logarithm series, $-\ln(1-x)=\sum_{n=1}^\infty x^n/n$. The two types of approximations are illustrated in Figure~\ref{fig:diagrams}. \subsection{Coarse-graining of continuous quantities}\label{sec:coarse-graining} Given a set of functions, $w_p(\mathbf r)$, that partition a space into fragments, $\sum_p w_p(\mathbf r)\equiv1$, and respective centers of the fragments, $\mathbf R_p$, each spatial function or operator, such as the dipole polarizability, can be represented as a sum over the partitioned components, $\boldsymbol\alpha_{pq}$, which can be in turn expanded in the basis of solid harmonics (multipole expansion), $\alpha_{pq,ll'mm'}$, around the centers \citep{Stone13}, \begin{equation} \boldsymbol\alpha(\mathbf r,\mathbf r',u)=\sum_{pq}w_p(\mathbf r)w_q(\mathbf r')\boldsymbol\alpha(\mathbf r,\mathbf r',u)\equiv\sum_{pq}\boldsymbol\alpha_{pq}(\mathbf r,\mathbf r',u)\rightarrow\alpha_{pq,ll'mm'} \end{equation} (Here, $l$, $l'$ start from 1, because the expanded quantity is a tensor. The corresponding expansion of the scalar density response function, $\chi$, would start from $l=l'=0$.) The dipole potential is expanded correspondingly. Unlike the Fourier transformation, the multipole expansion is not invertible, but like the Fourier transformation, it introduces a correspondence between spatial integrals and infinite sums, \begin{equation} \mathbf P(\mathbf r,u)=-\int\mathrm d\mathbf r'\boldsymbol\alpha(\mathbf r,\mathbf r',u)\mathbf E(\mathbf r',u) \ \Leftrightarrow\ \mathbf P_{p,lm}(u)=-\sum_{q,l'm'}\alpha_{pq,ll'mm'}(u)E_{q,l'm'} \end{equation} The sums can be made implicit by arranging the multipole moments in vectors and matrices, \begin{equation} \begin{gathered} \boldsymbol\alpha=\begin{pmatrix} \boldsymbol\alpha_{pp} & \boldsymbol\alpha_{pQ} & \cdots \\ \boldsymbol\alpha_{Qp} & \boldsymbol\alpha_{QQ} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}\\ \boldsymbol\alpha_{pp}=\begin{pmatrix} \alpha_{11,11xx} & \alpha_{11,11xy} & \alpha_{11,11xz} & \alpha_{12,11xx} & \cdots \\ \alpha_{11,11yx} & \alpha_{11,11yy} & \alpha_{11,11yz} & \alpha_{12,11yx} & \cdots \\ \alpha_{11,11zx} & \alpha_{11,11zy} & \alpha_{11,11zz} & \alpha_{12,11zx} & \cdots \\ \alpha_{21,11xx} & \alpha_{21,11xy} & \alpha_{21,11xz} & \alpha_{22,11xx} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \end{gathered} \end{equation} Under this notation, the long-range ACFD formula in~\eqref{eq:acfd-vdw} has exactly the same form, but the operators are infinite matrices instead of nonlocal functions, and the trace is not over space, but over the fragments and multipole moments, \begin{equation} \begin{aligned} E_\text{xc,lr}=\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{p,lm}\Big(\ln\big(1+\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\big)\mathbf T_\text{eff}^{-1}\mathbf T_\text{lr}\Big) \end{aligned} \label{eq:acfd-cg} \end{equation} The motivation for this multipole reformulation is that because both $\mathbf T_\text{eff}$ and $\mathbf T_\text{lr}$ are long-ranged and their moments decay increasingly fast with higher $l$, all the matrix multiplications (infinite sums) converge quickly and can be approximated well by finite sums. The feasibility of the coarse-graining and multipole expansions is dictated by the choice of the fragments and the response properties of the system.
In a nonuniform system, the nonlocal effective polarizability is exponentially localized on atoms, and atom-centered fragments are a natural basis of a quickly converging multipole expansion. In a uniform system, the effective polarizability is long-ranged and diffuse, and there are no natural centers for the multipole expansion, leading to large higher moments and slow convergence or even divergence of the expansion. In principle, this problem is mitigated in combination with the KS-DFT, because the long-range XC energy within the uniform systems is captured by the semilocal or hybrid functional, and the multipole convergence of the correlation energy due to interaction with a separate uniform or nonuniform system is helped by larger separations between the fragments. But such an interplay is not well understood, and none of the coarse-grained models reviewed in this chapter take advantage of this cancellation. In general, the complex interaction of the delocalized response of metals and localized response of nonmetals (insulators or molecules) is one of the hardest problems for general approximate models of the long-range electron correlation. It has been treated in select systems by effective parametrization of the metallic response from experimental measurements of the dielectric function \citep{RuizPRL12}, but the lack of any general efficient model still hinders modeling of hybrid material interfaces. \subsection{Truncation of many-body expansion} The operator logarithm in~\eqref{eq:acfd-vdw} is defined as an infinite series, and writing it out explicitly in terms of individual orders leads to a many-body decomposition of the XC energy, \begin{multline} E_\text{xc,lr}= \textstyle\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}\big(\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{lr}\big) -\frac1{4\pi}\int_0^\infty\mathrm du\operatorname{Tr}\big(\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{lr}\big) \\ \textstyle+\frac1{6\pi}\int_0^\infty\mathrm du\operatorname{Tr}\big(\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{lr}\big) -\ldots \end{multline} The name ``many-body'' is best motivated in the coarse-grained models where the individual terms correspond to interactions between an increasing number of fragments (bodies). (The order does not necessarily correspond to the number of bodies. At fourth order, for instance, some terms are a back-and-forth interaction between two bodies.) When constructed from the bare KS polarizability, the first term would evaluate to the long-range part of the exact exchange (plus higher-order exchange screening), which is negligible in nonuniform systems but can be relevant in uniform systems (where it would be typically covered by a semilocal XC functional). With any local approximation for the effective polarizability, the first term evaluates exactly to zero. The second term is the leading term for vdW interactions and the basis of all nonlocal correlation functionals and coarse-grained pairwise methods reviewed below. The third term corresponds to the Axilrod--Teller--Muto (ATM) three-body potential \citep{AxilrodJCP43,MutoNS43} when coarse-grained to atoms. When the $\boldsymbol\alpha_\text{eff}\mathbf T_\text{eff}$ term in the logarithm is small enough, the series converges quickly, and the logarithm can be approximated well by a truncated expansion, as illustrated by the sketch below.
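The sketch builds the coarse-grained dipole ($l=l'=1$) blocks of $\boldsymbol\alpha$ and $\mathbf T$ for a toy triangle of three isotropic oscillators and compares the full logarithm with its second- and third-order truncations at a single frequency. All geometric and polarizability values are arbitrary, and $\mathbf T_\text{eff}$ and $\mathbf T_\text{lr}$ are taken to be the same bare dipole operator for simplicity.
\begin{verbatim}
# Coarse-grained toy model: three isotropic dipole oscillators.  Compare
# Tr ln(1 + alpha T) with its second-order (pairwise) and third-order
# (Axilrod-Teller-Muto-like) truncations at one frequency point.
import numpy as np
from scipy.linalg import logm

def dipole_tensor(Rp, Rq):
    """Bare 3x3 dipole operator T = grad (x) grad' v between two centers."""
    R = np.asarray(Rp, float) - np.asarray(Rq, float)
    d = np.linalg.norm(R)
    return (np.eye(3) - 3.0 * np.outer(R, R) / d**2) / d**3

centers = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
           np.array([2.0, 3.5, 0.0])]
alphas = [6.0, 9.0, 7.5]                      # toy dipole polarizabilities

N = len(centers)
A = np.zeros((3 * N, 3 * N)); T = np.zeros((3 * N, 3 * N))
for p in range(N):
    A[3*p:3*p+3, 3*p:3*p+3] = alphas[p] * np.eye(3)
    for q in range(N):
        if p != q:
            T[3*p:3*p+3, 3*q:3*q+3] = dipole_tensor(centers[p], centers[q])

AT = A @ T
full = np.trace(logm(np.eye(3 * N) + AT)).real   # all orders
second = -0.5 * np.trace(AT @ AT)                # pairwise term
third = np.trace(AT @ AT @ AT) / 3.0             # three-body (ATM-like) term
print(full, second, second + third)              # truncations approach the full value
\end{verbatim}
The first-order term vanishes here because the diagonal blocks of $\mathbf T$ are zero, in line with the discussion above.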
Such a truncation is usually not justified for the total long-range XC energy of a system, but $\mathbf T_\text{eff}$ is often small enough between the interacting subsystems, and if one is interested only in the interaction energy, the series can often be truncated already after the second order, because the higher-order terms cancel out. However, this is not the case in general, especially in strongly polarizable systems, or in lower-dimensional systems where $\boldsymbol\alpha_\text{eff}\mathbf T_\text{eff}$ is highly anisotropic. The degree of approximation made by truncating the infinite logarithm series is difficult to assess a priori, and Chapter~\ref{chap:pi-pi} shows an example of a system where the higher-order terms contribute substantially to interaction energies. \subsection{Kohn--Sham response and random-phase approximation}\label{sec:rpa} The approximations to the ACFD formula that are full many-body and not coarse-grained can be based on the bare KS response. Because the KS density response function can be calculated directly from the KS orbitals (eq.\ \ref{eq:adler-wiser}), these approximations are usually formulated and evaluated in the $\chi v$-representation rather than the $\boldsymbol\alpha\mathbf T$-representation. Furthermore, because the bare response has a well-defined short-range structure, this construction allows the evaluation of the total XC energy, not only its long-range part, so the use of these methods goes far beyond long-range correlation energy. Here, however, we discuss the methods mostly from the perspective of vdW interactions. The simplest of these methods is the RPA, which translates into zero XC kernel in the formalism of the ACFD formula and TD-DFT~\cite{RenJMS12}. This corresponds to setting the effective polarizability to the bare KS polarizability, and the effective dipole operator to the full dipole operator, \begin{equation} \begin{aligned} E_\text{c}^\text{RPA}=\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{\mathbf r}\Big(\ln\big(1+\boldsymbol\alpha(u;\lambda=0)\mathbf T\big)-\boldsymbol\alpha(u;\lambda=0)\mathbf T\Big) \end{aligned} \end{equation} In the $\chi v$-representation, the expression can be evaluated straightforwardly using the explicit expression for $\chi(u;\lambda=0)$~\cite{FurchePRB01}. The omitted XC kernel is short-ranged in nonuniform systems, but its omission influences both short-range and long-range correlation energy, because the short-range XC effects in the polarizability eventually couple via the long-range dipole operator in the ACFD formula. As a result, although RPA does not suffer from any systematic errors in the long-range correlation energies, the overall accuracy is often worse than that of the many effective models reviewed below \citep{OlsenPRB13a}. This is further amplified in vdW systems in equilibrium geometries, where the short-range XC energy also contributes to the total interaction energy. Attempts at improvement go both ways, replacing the short-range correlation energy with a better model than RPA, as well as improving the effective polarizability. \citet{KurthPRB99} suggested correcting the short-range correlation energy from RPA with that from a semilocal XC functional, in what they called the RPA+ method.
Rather than explicitly range-separating the ACFD expression, RPA+ removes the RPA short-range part by subtracting the correlation energy of a specially designed semilocal correlation functional, $E_\text{c}^\text{GGA@RPA}$, and reintroduces it with a standard semilocal functional, $E_\text{c}^\text{GGA}$. \begin{equation} E_\text{c}^\text{RPA+}=E_\text{c}^\text{RPA}-E_\text{c}^\text{GGA@RPA}+E_\text{c}^\text{GGA} \end{equation} $E_\text{c}^\text{GGA@RPA}$ is constructed in a similar way as standard functionals, but its uniform part is parameterized to reproduce the RPA energy of the electron gas rather than the true energy. In a later version, this was refined so that the gradient correction also satisfied the short-range behavior of RPA~\cite{YanPRB00}. RPA+ attempts to fix the short-range correlation energy of RPA, but the long-range part is unchanged, so the vdW force remains the same, and it is only the interaction due to electron-density overlap, which occurs at equilibrium, that can possibly be improved. Furthermore, the range separation in RPA+ is unsystematic in the sense that there is no guarantee that $E_\text{c}^\text{GGA@RPA}$ and $E_\text{c}^\text{GGA}$ have the same effective range. \citet{ToulousePRA04} formulated a range-separated version of the KS scheme, in which the XC functional is designed from the beginning to treat only the short-range part of the electron correlation. This leads to an alternative range separation of the ACFD formula, in which $\boldsymbol\alpha(\lambda)$ is not the polarizability of the wave function that minimizes $\langle\Psi|\hat T+\lambda\hat V|\Psi\rangle$, but rather of one that minimizes $\langle\Psi|\hat T+\lambda\hat V_\text{lr}|\Psi\rangle$ \citep{ToulousePRL09}. In this scheme, applying the RPA to the Dyson-like equation results in a model in which the effective polarizability is still equal to the bare KS polarizability, like in normal RPA, but the effective dipole operator is only the long-range part of the full operator. The underlying assumption then is that the dipole operator and the XC kernel partially cancel out at short range, giving a different estimate of the effective polarizability than normal RPA\@. This is supported by numerical evidence on select small systems. A similar scheme, proposed earlier by \citet{KohnPRL98}, also uses a range-separated version of the KS scheme, but instead of obtaining the true polarizability at the RPA level, $\chi(\lambda)$ is obtained for each $\lambda$ by explicitly perturbing the corresponding $\lambda$-scaled system with an electric field. A straightforward way to improve the RPA is to devise approximate XC kernels, which improves the short-range behavior of the polarizability, and hence both short-range and long-range correlation energies.
Extending the LDA to the time domain, the adiabatic LDA (ALDA) assumes that the XC kernel has no memory, leading to a frequency-independent local XC kernel, \begin{equation} \begin{aligned} f_\text{xc}^\text{ALDA}(\mathbf r,\mathbf r',t-t')&=\delta(t-t')\frac{\delta^2E_\text{xc}^\text{LDA}[n]}{\delta n(\mathbf r)\delta n(\mathbf r')}=\delta(t-t')\delta(\mathbf r-\mathbf r')\frac{\mathrm d^2\big(n\varepsilon_\text{xc}^\text{UEG}(n)\!\big)}{\mathrm d n^2}\bigg|_{n=n(\mathbf r')} \\ f_\text{xc}^\text{ALDA}(\mathbf r,\mathbf r',u)&=\delta(\mathbf r-\mathbf r')\frac{\mathrm d^2\big(n\varepsilon_\text{xc}^\text{UEG}(n)\!\big)}{\mathrm d n^2}\bigg|_{n=n(\mathbf r')} \end{aligned} \end{equation} Unlike LDA, which is exact for the uniform electron gas (UEG), ALDA does not give the true XC kernel of the UEG (which is nonlocal in both time and space), and violates several known properties of the true XC kernel. Despite that, it is a useful approximation in TD-DFT calculations when one is interested only in a certain range of the frequency spectrum. Still, it turns out not to be a good approximation in the ACFD formula, where it gives worse results than RPA with no XC kernel at all \citep{LeinPRB00}. \citet{OlsenPRB12} constructed a correction to ALDA by fixing its large-$\mathbf q$ (short-range) behavior in the UEG to better reproduce the known exact behavior. Taking this renormalized ALDA (rALDA) kernel, transforming back to real space, and using the mean density at the two points as the corresponding uniform density gives a universal XC kernel, \begin{equation} f_\text{xc}^\text{rALDA}(\mathbf r,\mathbf r',u)=f_\text{xc}^\text{UEG}\big(|\mathbf r-\mathbf r'|;n=\tfrac12(n(\mathbf r)+n(\mathbf r')\!)\!\big) \end{equation} This construction is computationally no more demanding than RPA, but improves upon RPA in almost every case tested \citep{OlsenPRB13,OlsenPRL14}. The rALDA XC kernel gives a more realistic short-range screening of the bare KS polarizability, resulting in more accurate long-range correlation energies and better description of vdW systems. A different path towards improving the accuracy of RPA can be taken using many-body perturbation theory (MBPT). This is possible because, as \citet{Gell-MannPR57} showed, yet another equivalent definition of RPA is via a certain subset of Feynman diagrams, the so-called ring diagrams. Summing different subsets of the diagrams similar to those corresponding to RPA then leads to different RPA-like models and to sometimes confusing terminology, in which a certain modification of the XC kernel in RPA is equivalent to adding additional terms to the RPA XC energy that do not seem to be related to RPA \citep{ScuseriaJCP08,JansenJCP10,AngyanJCTC11}. The second-order Møller--Plesset correlation energy (MP2) consists of the Coulomb direct and exchange terms, of which only the former is long-ranged. In this context, RPA can be understood as the sum of all MP2-like direct terms (ring diagrams) in the infinite MBPT expansion. Similarly, the MP2 exchange can be renormalized by replacing one of the Coulomb interactions with the RPA sequence of ring diagrams, leading to the second-order screened exchange (SOSEX). Furthermore, unlike in the Møller--Plesset perturbation theory, where they are guaranteed not to contribute, single-electron excitations contribute to the XC energy in the MBPT based on KS orbitals.
Combining RPA, SOSEX and RPA-renormalized single-excitation correction then results in the renormalized second-order perturbation theory (rPT2) \citep{RenPRL11,RenPRB13}. Although the MP2 exchange term is short-ranged, the renormalization in SOSEX is long-ranged, and the long-range correlation energy of rPT2 is different from that of RPA\@. The combined improvements of the short-range and long-range XC energy in rPT2 compared to RPA lead to improved accuracy in vdW binding energies. \subsection{Nonlocal density functionals}\label{sec:vdwdf} The models of long-range correlation energy reviewed in this section are in the class of approximations to the ACFD formula that truncate the many-body expansion at second order, but do not perform any spatial coarse-graining. This leads to XC functionals that are characterized by nonlocal dependence of the XC energy density on the electron density via some nonlocal kernel, $\Phi$, \begin{equation} E_\text{c,lr}^\text{nl-df}=\frac12\int\mathrm d\mathbf r\mathrm d\mathbf r'n(\mathbf r)n(\mathbf r')\Phi[n](\mathbf r,\mathbf r')=\int\mathrm d\mathbf r\,n(\mathbf r)\int\mathrm d\mathbf r'\tfrac12 n(\mathbf r')\Phi[n](\mathbf r,\mathbf r') \label{eq:nldf} \end{equation} The derivation of~\eqref{eq:nldf} from the ACFD formula for the XC energy starts with truncating the logarithm expansion at second order, \begin{equation} E_\text{xc,lr}\approx \frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{\mathbf r,\mathbb R^3}\big(\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{lr}-\tfrac12\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{eff}\boldsymbol\alpha_\text{eff}(u)\mathbf T_\text{lr}\big) \end{equation} Here, $\operatorname{Tr}_{\mathbb R^3}$ denotes the trace over the three Cartesian vector components. In the next step, the effective polarizability is approximated with a local isotropic polarizability, \begin{equation} \boldsymbol\alpha_\text{eff}(\mathbf r,\mathbf r',u)\approx\mathbf I\alpha_\text{eff}(\mathbf r,u)\delta(\mathbf r-\mathbf r') \end{equation} This results in the first-order term being zero, so such a functional cannot capture any exchange energy; this is intentional, since the nonlocal functionals are designed to capture only the long-range correlation energy.
The locality of the effective polarizabilities reduces two of the four integrals in the second-order term, and the isotropy allows the polarizabilities to be taken out of the trace, \begin{equation} \begin{aligned} E_\text{c,lr}&\approx -\frac1{4\pi}\int_0^\infty\mathrm du\iint\mathrm d\mathbf r\mathrm d\mathbf r'\operatorname{Tr}_{\mathbb R^3}\big(\alpha_\text{eff}(\mathbf r,u)\mathbf T_\text{eff}(\mathbf r,\mathbf r')\alpha_\text{eff}(\mathbf r',u)\mathbf T_\text{lr}(\mathbf r',\mathbf r)\!\big) \\ &=-\frac1{4\pi}\int_0^\infty\mathrm du\iint\mathrm d\mathbf r\mathrm d\mathbf r'\alpha_\text{eff}(\mathbf r,u)\alpha_\text{eff}(\mathbf r',u)\operatorname{Tr}_{\mathbb R^3}\big(\mathbf T_\text{eff}(\mathbf r,\mathbf r')\mathbf T_\text{lr}(\mathbf r,\mathbf r')\!\big) \\ \end{aligned} \label{eq:acfd-nlc} \end{equation} Both $\mathbf T_\text{eff}$ and $\mathbf T_\text{lr}$ go to the bare dipole operator for large distances, and the trace can be rewritten in terms of a range-separating function, $f$, \begin{equation} \operatorname{Tr}_{\mathbb R^3}\big(\mathbf T_\text{eff}(\mathbf r,\mathbf r')\mathbf T_\text{lr}(\mathbf r,\mathbf r')\!\big)\equiv f(\mathbf r,\mathbf r')\frac6{|\mathbf r-\mathbf r'|^6} \label{eq:r6-func} \end{equation} This is the origin of the $1/R^6$ dependence of the pairwise vdW force. A general form of the local effective polarizability used in many models is obtained from the polarizability of a harmonic oscillator by setting the ratio of the charge and mass to that of an electron, $q/m=1$, and substituting the electron density for the charge, \begin{equation} \alpha_\text{tot}^\text{HO}(\mathrm iu)=\frac{q^2/m}{\omega^2+u^2} \quad\longrightarrow\quad \alpha_\text{eff}[n](\mathbf r,\mathrm iu)=\frac{n(\mathbf r)}{\omega^2_\text{eff}[n](\mathbf r)+u^2} \label{eq:local-eff-pol} \end{equation} Besides the obvious reason of modeling electrons, the charge--mass ratio of one is motivated by the $f$-sum rule for an electronic system that dictates that $\alpha_\text{tot}(\mathrm iu)\rightarrow N/u^2$ ($N$ is the number of electrons), which the form above automatically satisfies. (Strictly speaking, this is not necessary, because the rule does not need to be satisfied in any local form, and furthermore, the local effective polarizability is not supposed to integrate to the total polarizability without any long-range coupling. However, the local form is a straightforward way to satisfy the global rule.) The local effective resonance frequency, $\omega^2_\text{eff}$, can in general be any functional of the electron density, but is often approximated locally. Combining~\eqref{eq:local-eff-pol},~\eqref{eq:r6-func}, and~\eqref{eq:acfd-nlc}, the approximated ACFD formula can be recast in the form of a nonlocal density functional, where the nonlocal kernel is a functional of the effective resonance frequency and the (unspecified) range-separating function, \begin{equation} E_\text{c,lr}\approx \frac12\iint\mathrm d\mathbf r\mathrm d\mathbf r'\,n(\mathbf r)n(\mathbf r')\bigg(-\frac3{\pi}\int_0^\infty\mathrm du\frac1{\omega^2[n](\mathbf r)+u^2}\frac1{\omega^2[n](\mathbf r')+u^2}\frac{f(\mathbf r,\mathbf r')}{|\mathbf r-\mathbf r'|^6}\bigg) \label{eq:acfd-kernel} \end{equation} The asymptotic behavior of the long-range correlation energy calculated in this way is fully specified by $\omega_\text{eff}$.
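The frequency integral in~\eqref{eq:acfd-kernel} has a simple closed form, which also underlies the Casimir--Polder combination rules used by the coarse-grained methods below. A quick numerical check with arbitrary toy frequencies:
\begin{verbatim}
# The u-integral of the nonlocal kernel has a closed form,
#   int_0^inf du / ((w1^2 + u^2)(w2^2 + u^2)) = pi / (2 w1 w2 (w1 + w2)),
# so the kernel reduces to -3/2 * f(r,r') / (w1 w2 (w1 + w2) |r - r'|^6).
# The frequencies below are arbitrary toy values.
import numpy as np
from scipy.integrate import quad

w1, w2 = 0.45, 0.90
numeric, _ = quad(lambda u: 1.0 / ((w1**2 + u**2) * (w2**2 + u**2)), 0.0, np.inf)
closed = np.pi / (2.0 * w1 * w2 * (w1 + w2))
print(numeric, closed)   # agree to quadrature accuracy
\end{verbatim}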
The first general functional of this form, referred to simply as the vdW density functional (vdW-DF), was developed by \citet{DionPRL04} as a culmination of a program set up by \citet*{AnderssonPRL96}. The program was followed along a different branch by \citet{HultPRL96,HultPRB99}, but this effort did not result in a general functional of only the electron density. Although the derivation of the vdW-DF starts from the ACFD formula, it follows quite a different direction than the framework in this chapter, and most of the approximations along the way are done in reciprocal space, until everything is transformed back to real space in the end. However, the final result can still be cast in the form of~\eqref{eq:acfd-kernel}. The effective resonance frequency in the vdW-DF is constructed from a GGA-type XC energy density, \begin{equation} \omega^2_\text{vdW-DF}[n](\mathbf r)=4\pi^2\varepsilon_\text{xc}^\text{vdW-DF}[n]^4 =4\pi^2\left( \varepsilon_\text{xc}^\text{UEG}(n) +A\left\lvert\frac{\mathbf\nabla n}{n^\frac76}\right\rvert^2 \right)^4 \end{equation} The first equality is motivated by using $\omega^2_\text{eff}$ to calculate the XC energy of a slowly varying electron gas via the ACFD formula. The particular choice of the semilocal approximation to the XC energy density is rather arbitrary and completely independent of the semilocal functional potentially used to complete the vdW-DF at short range. The value of the numerical parameter $A$ can be derived in different ways using different first-principles arguments, leading to substantially different values and results for vdW binding energies~\cite{LeePRB10}. A serious disadvantage of the vdW-DF in light of other long-range correlation models is that its range-separating function is fixed by the underlying theory. Because of the construction in the reciprocal space, the parameter $A$ appears both in the effective resonance frequency and the range-separating function. Since the asymptotic behavior of any nonlocal functional depends only on $\omega_\text{eff}$, not the range-separating function, the parameter $A$ is essentially fixed, and there is no remaining freedom in the range-separating function that could be adjusted for a particular choice of a short-range semilocal functional in a full KS-DFT calculation. The form of the range-separating function is complex due to the reciprocal-space formulation, but there are two underlying physical motivations for it. When the two oscillators given by the resonance frequencies $\omega_\text{eff}$ are close to each other such that their ground-state wave functions would overlap, the underlying model does not work anymore, the corresponding part of the XC energy must be covered by the semilocal functional, and the dipole coupling must be damped. This is effectively achieved by increasing the resonance frequency as $k^2$ in the reciprocal space. The second damping mechanism is that the nonlocal functional must evaluate to zero for the uniform electron gas, whose long-range correlation energy is already covered by a semilocal or a hybrid functional. This forces the range-separating function to negative values at short range, to counterbalance the attractive contribution from the long range. The complex form of the vdW-DF was gradually simplified by \citeauthor{VydrovJCP09}. 
In the vdW-DF-09~\cite{VydrovJCP09}, the range-separating mechanism was constructed independently of the effective resonance frequency, making the nonlocal functional adaptable to any semilocal or hybrid functional, which also resulted in improved accuracy for vdW binding energies. Furthermore, the local resonance frequency was somewhat modified, \begin{equation} \omega^2_\text{vdW-DF-09}[n](\mathbf r)=(4\pi)^2\underbrace{\frac{4^3\pi^2}{3^6}}_{\doteq 0.87}\left( \varepsilon_\text{x}^\text{UEG}(n) +B\left\lvert\frac{\mathbf\nabla n}{n^\frac76}\right\rvert^2 \right)^4 \label{eq:vdw-df-09} \end{equation} Further simplification was achieved in the VV09 functional, which used a substantially different form of $\omega_\text{eff}$, \begin{equation} \omega^2_\text{VV09}[n](\mathbf r)=\frac{4\pi}3n(\mathbf r)+C\frac{|\boldsymbol\nabla n|^4}{n^4} \label{eq:vv-pol} \end{equation} Here, $4\pi n$ is the squared resonance frequency of the macroscopic (small-$\mathbf q$ limit) plasmon fluctuations of the uniform electron gas. The factor of $1/3$ comes from the Clausius--Mossotti relation between the microscopic local polarizability and the macroscopic dielectric function. The density-gradient term is a local model of a band gap obtained from considering the behavior of the electron density in the density tail of a finite system. The range-separating mechanism of VV09 is still constructed in reciprocal space. The final attempt at a simplified formulation of the vdW-DF, named VV10, was constructed entirely in real space~\cite{VydrovJCP10a}. Both the resonance frequency and range-separating function of~\eqref{eq:acfd-kernel} have a simple form in VV10. The former is the same as in VV09, and the latter is constructed using the same mechanism of reduced polarizabilities of overlapped oscillators as in the original vdW-DF, but in real space, \begin{equation} f_\text{VV10}(\mathbf r,\mathbf r')=\frac{\alpha_\text{eff}(\mathbf r,\mathrm iu;\omega=\omega_\text{VV09}+Dn^\frac13/|\mathbf r-\mathbf r'|^2)\alpha_\text{eff}(\mathbf r',\mathrm iu;\omega=\omega_\text{VV09}+Dn^\frac13/|\mathbf r-\mathbf r'|^2)}{\alpha_\text{eff}(\mathbf r,\mathrm iu;\omega=\omega_\text{VV09})\alpha_\text{eff}(\mathbf r',\mathrm iu;\omega=\omega_\text{VV09})} \label{eq:damping-vv10} \end{equation} As $1/|\mathbf r-\mathbf r'|^2$ grows at short distances, the effective resonance frequency of the local oscillators increases, reducing their polarizability. The parameter $D$ is used to adjust the range of this mechanism. \subsection{Pairwise interatomic models}\label{sec:pairwise} The oldest approaches to fix the missing long-range electron correlation in HF or semilocal KS-DFT calculations are of the interatomic pairwise form, \begin{equation} E_\text{c,lr}\approx-\frac12\sum_{pq}C_{6,pq} \frac{f(\mathbf R_p,\mathbf R_q)}{|\mathbf R_p-\mathbf R_q|^6} \label{eq:pairwise} \end{equation} Here, $f$ is some range-separating (damping) function, $\mathbf R_q$ are the atom coordinates, and the so-called dispersion coefficients, $C_{6,pq}$, determine the asymptotic interaction between two atoms. This type of interatomic potential has its origin in empirical force fields dating back to the Lennard--Jones potential, even before it was clear that the correct leading term of the vdW force is $1/R^6$. In the context of electronic-structure methods, it was first used by~\citet{HepburnCPL75} to correct interaction curves of rare-gas dimers from HF calculations.
This approach was later extended to molecules and KS-DFT calculations, and the $C_6$ coefficients were parametrized for a wider range of systems \citep{HalgrenJACS92,MooijJPCA99,ElstnerJCP01,WuJCP02}. \citet{GrimmeJCC04} then presented a parametrization of $C_6$ and $f$, termed DFT-D (`D' for dispersion), that could in principle be applied to any molecule or solid, in combination with any XC functional. This marked the start of routine addition of the long-range correlation energy to semilocal KS-DFT calculations. The pairwise interatomic model of~\eqref{eq:pairwise} can be obtained as a coarse-grained truncated approximation to the ACFD formula. The derivation follows the same course of second-order truncation and local approximation to the effective polarizability as nonlocal vdW XC functionals, but starting from the coarse-grained multipole-expanded ACFD formula in~\eqref{eq:acfd-cg}, \begin{equation} E_\text{c,lr}\approx -\frac1{4\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{p,lm}\big(\boldsymbol\alpha_\text{eff}(\mathrm iu)\mathbf T_\text{eff}\boldsymbol\alpha_\text{eff}(\mathrm iu)\mathbf T_\text{lr}\big) \end{equation} Here, the trace is over multipole moments and fragments, which are chosen to be atoms in most cases. (In this context, the formal definition of an atom in a molecule is given by some partitioning function.) Approximating the local effective polarizability as isotropic, $\alpha_{\text{eff},pll'mm'}=\delta_{ll'}\delta_{mm'}\alpha_{\text{eff},pl}$, the formula is reduced as in the case of nonlocal vdW XC functionals, \begin{gather} \begin{aligned} E_\text{c,lr}&\approx -\frac1{4\pi}\int_0^\infty\mathrm du\sum_{pq}\sum_{ll'}\alpha_{\text{eff},pl}(\mathrm iu)\alpha_{\text{eff},ql'}(\mathrm iu)\big[\operatorname{Tr}_m(\mathbf T_\text{eff}\mathbf T_\text{lr})\big]_{pq,ll'} \\ &=-\frac12\sum_{pq}\sum_{ll'}\left(\frac{K_{ll'}}{2\pi}\int_0^\infty\mathrm du\,\alpha_{\text{eff},pl}(\mathrm iu)\alpha_{\text{eff},ql'}(\mathrm iu)\right)\frac{f_{ll'}(\mathbf R_p,\mathbf R_q)}{|\mathbf R_p-\mathbf R_q|^{2+2l+2l'}} \\ &\equiv-\frac12\sum_{pq}\sum_{ll'}C_{2+2l+2l',pq}\frac{f_{ll'}(\mathbf R_p,\mathbf R_q)}{|\mathbf R_p-\mathbf R_q|^{2+2l+2l'}} \end{aligned}\label{eq:pairwise-mult} \\ K_{ll'}=\lim_{|\mathbf R|\rightarrow\infty}|\mathbf R|^{2+2l+2l'}\sum_{mm'}T_{ll'mm'}(\mathbf R)^2 \end{gather} The standard pairwise formula of~\eqref{eq:pairwise} is recovered by keeping only the lowest dipole--dipole term ($l=l'=1$, $K_{11}=6$), where the expression for the corresponding dispersion coefficient is called the Casimir--Polder integral, \begin{equation} C_{6,pq}=\frac3\pi\int_0^\infty\mathrm du\,\alpha_{\text{eff},p1}(\mathrm iu)\alpha_{\text{eff},q1}(\mathrm iu) \end{equation} Some pairwise methods are formulated directly in terms of the dispersion coefficients, not the underlying polarizabilities, in which case approximate combination rules for calculating unknown heteronuclear coefficients from known homonuclear coefficients are useful. Such rules can be derived from the Casimir--Polder integral using some model polarizability.
An often used rule is obtained from the harmonic-oscillator model, \begin{equation} \begin{aligned} C_{6,pq}^\text{HO}&=\frac3\pi\int_0^\infty\mathrm du\frac{\alpha_p(0)\omega_p^2}{\omega_p^2+u^2}\frac{\alpha_q(0)\omega_q^2}{\omega_q^2+u^2}=\frac{3\alpha_p(0)\alpha_q(0)\omega_p\omega_q}{2(\omega_p+\omega_q)} \\ &=\frac{2C_{6,pp}C_{6,qq}}{C_{6,pp}\frac{\alpha_q(0)}{\alpha_p(0)}+C_{6,qq}\frac{\alpha_p(0)}{\alpha_q(0)}} \end{aligned} \label{eq:comb-rule} \end{equation} Using the single-pole polarizability of the harmonic oscillator in situations where the true spectrum is more complex, such as in the equation above, is called the \citet{UnsoldZP27} approximation. The models of Grimme are different from the rest reviewed in this section in that they are formulated only in terms of the geometry of a molecule, $\{\mathbf R_p\}$, not the electron density. This makes them straightforwardly useful even for empirical short-range electronic models that do not produce any electronic density, but at the same time, it makes it much harder to achieve truly general models, because the electron density encodes much useful information about the system. The first version of DFT-D used fixed homonuclear $C_6$ coefficients, the combination rule of~\eqref{eq:comb-rule} with all polarizability ratios set to 1, and a range-separating function constructed from vdW radii that did not go to 1 at infinity~\cite{GrimmeJCC04}. The second version was a numerical reparametrization of the first one with a changed combination rule, which set the polarizability ratios equal to those of the $C_6$ coefficients~\cite{GrimmeJCC06}. In the first and second version, the atomic $C_6$ coefficients do not depend on the molecular environment, which is a crude approximation. The third version was an improvement in several regards~\cite{GrimmeJCP10}. The range separation was modified to obey the correct asymptotic behavior. An elementary dependence of the $C_6$ coefficients on the environment was included via geometrical factors estimating the coordination number of an atom. The dipole--quadrupole term ($l=1$, $l'=2$) from~\eqref{eq:pairwise-mult} was included, and a three-atom correction was suggested, which is the third-order triple-dipole term in the logarithm expansion of the coarse-grained ACFD formula. The corresponding dispersion coefficients, $C_8$ and $C_9$, are obtained by combination rules similar to those for the $C_6$ coefficient. Soon after the first version of DFT-D and in stark contrast to it, \citet{BeckeJCP05b} developed a method to calculate $C_6$ coefficients from first principles, using an approximation to the polarizability based on the dipole moment of the XC hole of the HF model, the exchange-hole dipole moment (XDM) model.
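The combination rule can be checked against the direct Casimir--Polder integral for two one-pole polarizabilities; the $(\alpha(0),\omega)$ pairs below are arbitrary toy values.
\begin{verbatim}
# Verify that the harmonic-oscillator combination rule reproduces the
# direct Casimir-Polder integral for two one-pole (Pade) polarizabilities.
import numpy as np
from scipy.integrate import quad

ap, wp = 11.0, 0.50        # alpha_p(0), omega_p (toy values)
aq, wq = 5.5, 0.85         # alpha_q(0), omega_q (toy values)

pol = lambda a0, w0: (lambda u: a0 / (1.0 + (u / w0)**2))
c6_direct, _ = quad(lambda u: 3.0 / np.pi * pol(ap, wp)(u) * pol(aq, wq)(u),
                    0.0, np.inf)

c6_pp = 0.75 * ap**2 * wp  # homonuclear C6 = (3/4) alpha(0)^2 omega
c6_qq = 0.75 * aq**2 * wq
c6_rule = 2.0 * c6_pp * c6_qq / (c6_pp * aq / ap + c6_qq * ap / aq)

print(c6_direct, c6_rule)  # the two values coincide
\end{verbatim}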
The initial derivation by \citeauthor{BeckeJCP05b} was rather heuristic, with a wrong prefactor, but the final result can in fact be obtained directly from the Casimir--Polder integral using the fluctuation--dissipation theorem for the density response function of~\eqref{eq:fluctuation-dissipation} and the Unsöld approximation: \begin{equation} \begin{aligned} C_6&=\frac3\pi\int_0^\infty\mathrm du\alpha_\text{tot}(\mathrm iu)^2 \approx\frac3\pi\int_0^\infty\mathrm du\left(\frac{\alpha_\text{tot}(0)}{1+u^2/\omega^2}\right)^2 =\frac34\alpha_\text{tot}(0)^2\omega \\ &=\tfrac12\alpha_\text{tot}(0)\frac3\pi\int_0^\infty\mathrm du\frac{\alpha_\text{tot}(0)}{1+u^2/\omega^2} =\tfrac12\alpha_\text{tot}(0)\frac3\pi\int_0^\infty\mathrm du\alpha_\text{tot}(\mathrm iu) \\ &=\tfrac12\alpha_\text{tot}(0)\frac1\pi\int_0^\infty\mathrm du\operatorname{Tr}_{\mathbb R^3}\big(\boldsymbol\alpha_\text{tot}(\mathrm iu)\!\big) \\ &=\tfrac12\alpha_\text{tot}(0)\frac1\pi\int_0^\infty\mathrm du\iint\mathrm d\mathbf r\mathrm d\mathbf r'\operatorname{Tr}_{\mathbb R^3}\big(\boldsymbol\alpha(\mathbf r,\mathbf r',\mathrm iu)\!\big) \\ &=\tfrac12\alpha_\text{tot}(0)\iint\mathrm d\mathbf r\mathrm d\mathbf r'\operatorname{Tr}_{\mathbb R^3}\big(-\mathbf r\otimes\mathbf r'{\textstyle\frac1\pi\int_0^\infty}\mathrm du\,\chi(\mathbf r,\mathbf r',\mathrm iu)\!\big) \\ &=\tfrac12\alpha_\text{tot}(0)\iint\mathrm d\mathbf r\mathrm d\mathbf r'\,\mathbf r\cdot\mathbf r'n(\mathbf r)\big(n_\text{xc}(\mathbf r,\mathbf r')+\delta(\mathbf r-\mathbf r')\!\big) \\ &=\tfrac12\alpha_\text{tot}(0)\int\mathrm d\mathbf r\,\left(\mathbf rn(\mathbf r)\cdot\int\mathrm d\mathbf r'\,\mathbf r'\big(n_\text{xc}(\mathbf r,\mathbf r')+\delta(\mathbf r-\mathbf r')\!\big)\!\right) \\ &\equiv\tfrac12\alpha_\text{tot}(0)\int\mathrm d\mathbf r\,\mathbf d_n(\mathbf r)\cdot\mathbf d_\text{xc}(\mathbf r) \end{aligned} \end{equation} Here, the $C_6$ coefficient is expressed in terms of the static polarizability and the correlation between the local dipole moment of the total density and of the XC hole with its reference electron. This expression provides accurate $C_6$ coefficients when provided with accurate correlated XC holes, but falls short of good accuracy when evaluated with the approximate XC hole from the HF model~\cite{AngyanJCP07}. The XDM uses a slightly modified expression that autocorrelates the XC hole dipole moment, \begin{equation} C_6\approx\tfrac12\alpha_\text{tot}(0)\int\mathrm d\mathbf r\,\mathbf d_\text{xc}(\mathbf r)\cdot\mathbf d_\text{xc}(\mathbf r) \end{equation} This version works remarkably well with the HF XC hole, but the reasons for this unexpected accuracy are not well understood~\cite{AngyanJCP07,HesselmannJCP09,AyersJMC09}. A semilocal approximation to the XC hole by \citet{BeckePRA89} works as well as that from the HF model, with the additional benefit of reduced computational complexity~\cite{BeckeJCP05a}. To formulate a general interatomic pairwise method, the dipole moment of the XC hole is coarse-grained using the partitioning scheme devised by \citet{HirshfeldTCA77}.
In this scheme, the atomic partition functions, $w_p$, are constructed from radially averaged electron densities of isolated atoms, $n^\text{free}$, \begin{equation} w^\text{Hirsh}_p(\mathbf r)= \frac{n^\text{free}_p(|\mathbf r-\mathbf R_p|)} {\sum_q n^\text{free}_q(|\mathbf r-\mathbf R_q|)} \label{eq:hirshfeld} \end{equation} The corresponding static dipole polarizabilities of the atomic fragments are calculated from free-atom dipole polarizabilities, assuming that they scale linearly with the Hirshfeld measure of a volume (Hirshfeld volume), \begin{gather} \alpha_{p1}(0)=\alpha_{p1}^\text{free}(0)\frac{V^\text{Hirsh}_p[n]}{V^\text{Hirsh}_p[n^\text{free}]} \\ V_p^\text{Hirsh}[n]=\int\mathrm d\mathbf r\,n(\mathbf r)w^\text{Hirsh}_p(\mathbf r)|\mathbf r-\mathbf R_p|^3 \label{eq:hirshfeld-vol} \end{gather} The fragment $C_6$ coefficients are then calculated from the fragment polarizabilities and the coarse-grained XC hole dipole moment, \begin{equation} C_{6,pp}^\text{XDM}=\tfrac12\alpha_{p1}(0)\int\mathrm d\mathbf r\,w_p(\mathbf r)\mathbf d_\text{xc}(\mathbf r)\cdot\mathbf d_\text{xc}(\mathbf r) \end{equation} The harmonic-oscillator combination rule is used to get the rest of the $C_6$ coefficients. The XDM can be extended to higher-multipole dispersion coefficients by calculating higher multipole moments of the XC hole polarization around each atomic center~\cite{BeckeJCP06,JohnsonJCP06}. The XDM dispersion coefficients were paired with two distinct empirical suggestions for the range-separating function, one based on the ratio of approximate short-range and long-range correlation energies, the other on the vdW radii~\cite{BeckeJCP07}, \begin{align} f_{11}^\text{XDM1}(\mathbf R_p,\mathbf R_q)&=\left(1+A\left( \frac{C_{6,pq}/|\mathbf R_p-\mathbf R_q|^6} {E_{\text{c},p}^\text{free}+E_{\text{c},q}^\text{free}}\right)\right)^{-1} \\ f_{11}^\text{XDM2}(\mathbf R_p,\mathbf R_q)&=\left(1+A\left(\frac{R_{\text{vdW},p}+R_{\text{vdW},q}+B}{|\mathbf R_p-\mathbf R_q|}\right)^6\right)^{-1} \end{align} Here, $E_{\text{c},p}^\text{free}$ is the correlation energy of a free atom calculated with some semilocal correlation functional. A simple yet accurate interatomic pairwise method was developed by \citet{TkatchenkoPRL09} (TS), who extended the free-atom scaling approach to all the atomic parameters, including the $C_6$ coefficients and the vdW radii, thus turning the calculation of interatomic pairwise vdW interactions into a true density functional. Assuming that the excitation energies of the atoms are independent of the volume, the Unsöld approximation and the Casimir--Polder integral dictate that the $C_6$ coefficients scale with the second power of the Hirshfeld volume ratio, \begin{equation} C_{6,pp}=C_{6,pp}^\text{free}\left(\frac{V^\text{Hirsh}_p[n]}{V^\text{Hirsh}_p[n^\text{free}]}\right)^2 \label{eq:TS} \end{equation} The free-atom reference values may not be the most effective choice in metals and some solids, whose electron density is often substantially different from the superposition of free-atom densities. \citet{ZhangPRL11} and \citet{RuizPRL12} used an adapted TS method, where the reference values are obtained from the bulk macroscopic dielectric constant.
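The volume scaling can be illustrated on a one-dimensional toy density; the exponential ``free-atom'' densities, the molecular density, and the polarizability value below are all stand-ins for the real three-dimensional quantities.
\begin{verbatim}
# Toy 1D illustration of Hirshfeld partitioning and the volume scaling of a
# free-atom polarizability.  Real implementations use 3D densities and
# tabulated free-atom references; everything here is a stand-in.
import numpy as np

x = np.linspace(-12.0, 12.0, 2401)
centers = [-1.4, 1.4]
n_free = [np.exp(-np.abs(x - c)) for c in centers]   # "free-atom" densities
n_mol = 1.15 * n_free[0] + 0.85 * n_free[1]          # toy "molecular" density

w = [nf / (n_free[0] + n_free[1]) for nf in n_free]  # Hirshfeld weights, sum to 1

alpha_free = 10.0                                    # free-atom alpha(0), toy value
for p, c in enumerate(centers):
    r3 = np.abs(x - c)**3
    v_mol = np.trapz(w[p] * n_mol * r3, x)           # Hirshfeld volume in the "molecule"
    v_free = np.trapz(n_free[p] * r3, x)             # free-atom reference volume
    ratio = v_mol / v_free
    print(f"atom {p}: V ratio = {ratio:.3f}, "
          f"alpha = {alpha_free * ratio:.2f}, C6 scaling = {ratio**2:.3f}")
\end{verbatim}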
The TS method uses a logistic function as a range-separating function, with the free-atom vdW radii naturally scaled by the cubic root of the Hirshfeld-volume ratio, \begin{equation} \label{eq:damping-ts} f_{11}^\text{TS}(\mathbf R_p,\mathbf R_q)=\left(1 +\exp\left[ -A\left(B\frac{|\mathbf R_p-\mathbf R_q|}{R_{\text{vdw},p}+R_{\text{vdw},q}}-1\right) \right]\right)^{-1} \end{equation} \citet{SatoJCP09,SatoJCP10} developed an atomic pairwise method based on the local effective polarizability functional from the vdW-DF-09 nonlocal functional. Their local response dispersion (LRD) method is an explicit realization of the coarse-graining approach outlined in Section~\ref{sec:coarse-graining}. A system is described by the local effective polarizability given by the harmonic-oscillator formula with the resonance frequency from~\eqref{eq:vdw-df-09}. The atomic fragments are defined using the partitioning functions from the scheme by \citet{BeckeJCP88}, which is most often used to define atomic radial grids in KS-DFT calculations, but here it is used as an alternative to the Hirshfeld partitioning. The partitioned polarizability is used to calculate a coarse-grained representation of the system via multipole expansion and Casimir--Polder integrals up to the $C_{10}$ coefficient. The LRD method uses yet another range-separating function, parametrized by the polarizabilities in place of the vdW radii, \begin{equation} f_{11}^\text{LRD}(\mathbf R_p,\mathbf R_q) =\exp\left( -\left( \frac {|\mathbf R_p-\mathbf R_q|} {A\big(\sqrt[3]{\alpha_{\text{eff},p}}+\sqrt[3]{\alpha_{\text{eff},q}}\big)+B} \right)^6 \right) \end{equation} \citet{SilvestrelliPRL08} formulated a pairwise method in which the coarse-grained fragments are not atoms, but Wannier functions (WFs)~\cite{MarzariRMP12}. Wannier functions are any set of localized one-electron wave functions that in principle form a complete basis. In finite molecular systems, they are called Boys orbitals. The Wannier functions of conducting and nonconducting electrons are localized algebraically and exponentially, respectively. In the vdW-WF method, each WF is approximated with a single spherically symmetric exponential function that has the same width (second central moment) as the true WF\@. The polarizability of the approximate WF is calculated with the polarizability functional of \citet*{AnderssonPRL96} (ALL), \begin{equation} \alpha_{\text{eff},p}(\mathrm iu) =\int_{\mathbf r\in\Omega_A} \mathrm d\mathbf r\frac{n_p(\mathbf r)}{4\pi n_p(\mathbf r)+u^2},\quad \Omega_A=\{\mathbf r:|\mathbf\nabla n_p(\mathbf r)|<kn_p(\mathbf r)^\frac76\} \end{equation} Here, $n_p$ is the electron density of the WF and $k$ is a nonempirical constant. The $C_6$ coefficients between the WFs are calculated from the Casimir--Polder integral, and the range-separating function is the same as in the TS method, with vdW radii of the WFs defined via an electron density cutoff~\cite{SilvestrelliJCP09}. The vdW-WF scheme has two theoretical shortcomings: first, the partitioning of the total electron density is only approximate because of the use of the approximate WFs, and second, the ALL polarizability functional was designed for the total electron density, not one-electron densities. Coarse-grained methods in which the fragment polarizabilities and $C_6$ coefficients are calculated directly, rather than obtained by explicit partitioning of some continuous quantity, may be sensitive to a particular choice of the partitioning scheme. 
This motivated a series of modified Hirshfeld partitioning schemes that should better capture the redistribution of the electron density in a molecule with respect to the free atoms.
\citet{SteinmannJCTC10,SteinmannJCTC11} adapted the XDM to use the self-consistent Hirshfeld scheme, which gives a more consistent description of ionic systems~\cite{BultinckJCP07}.
\citet{BuckoJCTC13,BuckoJCP14} did the same with the TS method.
The self-consistent Hirshfeld partitioning uses the same stockholder formula in~\eqref{eq:hirshfeld} as the original scheme, but the reference densities are generalized and depend recursively on the partitioning, leading to equations that need to be solved iteratively~\cite{VerstraelenJCTC12}.
A common form of the generalized reference densities, used in the modified XDM and TS methods, is a linear combination of free-atom and free-ion densities that maintains the charge of the Hirshfeld-partitioned atomic density.
This scheme is complicated by the instability of many isolated anions, which requires the addition of auxiliary negative charges, making the partitioning somewhat arbitrary.
\subsection{Many-body dispersion framework}\label{sec:mbd}
The fourth and final class of approximations to the ACFD formula covers nontruncated coarse-grained models.
A common theme of all such models is to interpret the Unsöld approximation with its single resonance frequency literally, and model a real molecular system as a collection of coupled charged oscillators.
The corresponding Hamiltonian describes a system of distinguishable particles characterized by a charge, $q_i$, and a mass, $m_i$, each having its own harmonic potential defined by the resonance frequency, $\omega_i$, and a center, $\mathbf R_i$, interacting via the Coulomb force,
\begin{multline}
\hat H_\text{osc}=\sum_i\frac{\mathbf{\hat p}_i^2}2+\sum_i\frac12 m_i\omega_i^2|\mathbf{\hat r}_i-\mathbf R_i|^2 \\
+\sum_{i<j}q_i q_j\left( \frac1{|\mathbf{\hat r}_i-\mathbf{\hat r}_j|} -\frac1{|\mathbf{\hat r}_i-\mathbf R_j|} -\frac1{|\mathbf R_i-\mathbf{\hat r}_j|} +\frac1{|\mathbf R_i-\mathbf R_j|} \right)
\end{multline}
The centers of the harmonic potentials additionally host a compensating charge of the opposite sign.
If the centers are the same as those of the atoms, this Hamiltonian can be interpreted as a very crude approximation to the electronic Hamiltonian, in which all electrons of individual atoms are described by distinguishable pseudoelectrons that move in an effective potential which is the combined result of the nuclear potential and the mean field of the electrons.
In particular, any exchange effects and hence charge transfer and delocalization are neglected.
Expanding the Coulomb operator in a multipole series and keeping only the dipole term results in the dipole-coupled oscillator Hamiltonian,
\begin{equation}
\hat H_\text{dosc}=\sum_i\frac{\mathbf{\hat p}_i^2}2+\sum_i\frac12 m_i\omega_i^2|\mathbf{\hat r}_i-\mathbf R_i|^2+\sum_{i<j}q_i q_j(\mathbf{\hat r}_i-\mathbf R_i)\mathbf T(\mathbf R_j-\mathbf R_i)(\mathbf{\hat r}_j-\mathbf R_j)
\label{eq:mbd}
\end{equation}
A useful property of this Hamiltonian is that it can be solved exactly by a coordinate transformation.
Introducing mass-scaled coordinates, $\hat{\boldsymbol\xi}_i=\sqrt{m_i}(\mathbf{\hat r}_i-\mathbf R_i)$, $\hat{\boldsymbol\xi}=(\hat{\boldsymbol\xi}_1\hat{\boldsymbol\xi}_2\ldots)$, using the expression for the polarizability of a charged harmonic oscillator in~\eqref{eq:osc-pol}, and the fact that the kinetic-energy operator is invariant with respect to unitary transformations, the dipole-coupled Hamiltonian can be transformed into uncoupled quasi-oscillators,
\begin{equation}
\begin{aligned}
\hat H_\text{dosc}&=\sum_i\frac{\mathbf{\hat p}_i^2}2+\sum_i\frac12\omega_i^2\hat{\boldsymbol\xi}_i^2+\frac12\sum_{ij}\omega_i\omega_j\sqrt{\alpha_i(0)\alpha_j(0)}\hat{\boldsymbol\xi}_i\mathbf T_{ij}\hat{\boldsymbol\xi}_j \\
&\equiv\sum_i\frac{\mathbf{\hat p}_i^2}2+\frac12\hat{\boldsymbol\xi}\mathbf Q(\boldsymbol\alpha(0),\boldsymbol\omega,\mathbf T)\hat{\boldsymbol\xi} \\
&=\sum_i\frac{\mathbf{\hat p}_i'^2}2+\frac12\hat{\boldsymbol\xi'}\tilde{\boldsymbol\omega}^2\hat{\boldsymbol\xi'} \\
&=\sum_{n=1}^{3N}\left(\frac{\hat p_n'^2}2+\frac12\tilde\omega_n^2\hat\xi_n'^2\right)
\end{aligned}
\label{eq:dosc-hamil}
\end{equation}
Here, $\tilde\omega_n^2$ are the eigenvalues of the real symmetric matrix $\mathbf Q$, $\mathbf Q=\mathbf V\tilde{\boldsymbol\omega}^2\mathbf V^\mathrm T$, and $\boldsymbol\xi'$ are the coupled coordinates, $\boldsymbol\xi'=\mathbf V^\mathrm T\boldsymbol\xi$, which describe different collective oscillations.
The ground-state wave function of this system is then a simple product of the single-oscillator ground-state wave functions, and the ground-state energy is a sum of the single-oscillator ground-state energies, $E_0=\sum_n\tilde\omega_n/2$.
Drawing an analogy with the RPA, the individual oscillators model the particle-like quasi-electrons in some coarse-grained way, while the coupled oscillations model the wave-like electron oscillations.
This Hamiltonian has been used many times to obtain various qualitative properties of long-range electron correlation \citep{BadeJCP57,BadeJCP57a,MahanJCP65,LucasP67,RenneCPL67,DonchevJCP06}, but only recently to formulate general quantitative methods.
The relevance of the dipole-coupled oscillator model to the true electronic system can be derived directly from the coarse-grained ACFD formula in~\eqref{eq:acfd-cg}.
Assuming that the effective and long-range dipole operators are equal, $\mathbf T_\text{lr}=\mathbf T_\text{eff}$, using the Unsöld and local approximations for the effective frequency, $\boldsymbol\alpha_\text{eff}(\mathrm iu)=\boldsymbol\alpha(0)/(1+u^2/\boldsymbol\omega^2)$, and truncating the multipole expansion at $L$-th order, the integration over frequencies can be performed analytically~\cite{TkatchenkoJCP13},
\begin{multline}
E_\text{c,lr} \approx\frac1{2\pi}\int_0^\infty\mathrm du \operatorname{Tr}_{p,\mathbb R^3}\big( \ln(1+\boldsymbol\alpha_\text{eff}(\mathrm iu)\mathbf T_\text{lr}) \!\big) \\
=\frac1{2\pi}\int_0^\infty\mathrm du \operatorname{Tr}_{p,\mathbb R^3}\bigg( \ln\Big( \frac {\boldsymbol\omega^2 +\boldsymbol\alpha_\text{eff}(0)^{\frac12}\boldsymbol\omega \mathbf T_\text{lr} \boldsymbol\alpha_\text{eff}(0)^{\frac12}\boldsymbol\omega +u^2} {\boldsymbol\omega^2+u^2} \Big) \!\bigg) \\
=\frac1{2\pi}\int_0^\infty\mathrm du \operatorname{Tr}_{p,\mathbb R^3}\big( \ln\big(\mathbf Q(\boldsymbol\alpha_\text{eff}(0),\boldsymbol\omega,\mathbf T_\text{lr})+u^2\big)-\ln(\boldsymbol\omega^2+u^2)\!\big) \\
=\frac1{2\pi}\int_0^\infty\mathrm du\operatorname{Tr}_{p,\mathbb R^3}\big(\ln(\tilde{\boldsymbol\omega}^2+u^2)-\ln(\boldsymbol\omega^2+u^2)\!\big) =\sum_{n=1}^{L(L+2)N}\frac1{2\pi}\int_0^\infty\mathrm du\ln\left(\frac{\tilde\omega_n^2+u^2}{\omega_n^2+u^2}\right) \\
=\sum_{n=1}^{L(L+2)N}\frac{\tilde\omega_n}2-\sum_{l=1}^L\sum_{p=1}^N(2l+1)\frac{\omega_p}2
\label{eq:mbd-rpa}
\end{multline}
When truncated at the dipole term ($L=1$), the matrix $\mathbf Q$ is identical to that in~\eqref{eq:dosc-hamil}, and the approximate long-range correlation energy is equal to the difference in the ground-state energy between the dipole-coupled oscillators and noninteracting oscillators.
The exact equivalence between the dipole-coupled oscillators and the approximated ACFD formula breaks when going beyond the dipole approximation.
The effective Hamiltonian that corresponds to the matrix $\mathbf Q$ truncated at $L$-th multipole order has $L(L+2)N$ independent coordinates, $\boldsymbol\xi$, each with a corresponding resonance frequency and polarizability, and the analytic integration over frequency can be performed for any $L$.
In contrast, the coupled-oscillator Hamiltonian always has $3N$ coordinates, independent of the degree of the multipole expansion of the Coulomb operator, and the interaction terms above the dipole order are formed from nonlinear combinations of the coordinates, making the Hamiltonian unsolvable in closed form.
Use of the coupled-dipole approach to formulate general methods for the long-range correlation energy was initiated in the many-body dispersion (MBD) model developed by \citet{TkatchenkoPRL12}.
MBD reuses the effective dynamic polarizability as approximated in the TS pairwise method and combines it with a physically motivated effective dipole operator.
Motivated by the Gaussian shape of the harmonic-oscillator ground-state wave function, $\mathbf T_\text{eff}$ in MBD is derived from the screened Coulomb interaction between two Gaussian unit-charge densities with widths $\sigma$, $\sigma'$ \citep{MayerPRB07},
\begin{align}
v_\text{gg}(|\mathbf R|)&=\frac1{(\pi\sigma\sigma')^3}\iint\mathrm d\mathbf r\mathrm d\mathbf r'\frac{\mathrm e^{-\frac{|\mathbf r|^2}{\sigma^2}}\mathrm e^{-\frac{|\mathbf r'-\mathbf R|^2}{\sigma'^2}}}{|\mathbf r-\mathbf r'|}=\operatorname{erf}\Bigg(\frac{\lvert\mathbf R\rvert}{\sqrt{\sigma^2+\sigma'^2}}\Bigg)\frac1{\lvert\mathbf R\rvert} \\
\mathbf T_\text{gg}(\mathbf R)&=\boldsymbol\nabla\otimes\boldsymbol\nabla'v_\text{gg}(|\mathbf r-\mathbf r'|)\Big|_{\substack{\mathbf r=\mathbf R\\\mathbf r'=\mathbf 0}}
\label{eq:mayer}
\end{align}
When used as $\mathbf T_\text{eff}$ in MBD, the widths are derived from the corresponding dipole polarizabilities, making the effective dipole operator frequency-dependent,
\begin{equation}
\sigma_p(u)=\left(\tfrac13\sqrt{\tfrac\pi2}\alpha_{\text{eff},p1}(\mathrm iu)\right)^\frac13
\end{equation}
In general, $\mathbf T_\text{lr}\neq\mathbf T_\text{eff}$, and the frequency integral cannot be evaluated analytically as shown above.
To circumvent this obstacle, \citet{AmbrosettiJCP14} separated $\mathbf T_\text{gg}$ further into the long-range part and the short-range remainder,
\begin{equation}
\mathbf T_\text{gg}(u)=\big(\mathbf T_\text{gg}(u)-\mathbf T_\text{lr}\big)+\mathbf T_\text{lr}\equiv\mathbf T_\text{sr}(u)+\mathbf T_\text{lr}
\end{equation}
The long-range correlation energy is then calculated in two steps.
First, the effective polarizability is screened by the short-range dipole operator,
\begin{gather}
\begin{aligned}
\textstyle\int_0^1\mathrm d\lambda\boldsymbol\alpha(u;\lambda) &=\textstyle\int_0^1\mathrm d\lambda\big(\boldsymbol\alpha_\text{eff}^{-1}(u)+\lambda\mathbf T_\text{gg}(u)\big) \\
&=\textstyle\int_0^1\mathrm d\lambda\big(\boldsymbol\alpha_\text{eff}^{-1}(u)+\lambda\mathbf T_\text{sr}(u)+\lambda\mathbf T_\text{lr}\big) \\
&\equiv\textstyle\int_0^1\mathrm d\lambda\big(\boldsymbol\alpha_\text{sr}^{-1}(u;\lambda)+\lambda\mathbf T_\text{lr}\big) \\
&\approx\textstyle\int_0^1\mathrm d\lambda\big(\boldsymbol\alpha_\text{eff}'^{-1}(u)+\lambda\mathbf T_\text{lr}\big) \\
&=\ln\big(1+\boldsymbol\alpha_\text{eff}'^{-1}(u)\mathbf T_\text{lr}\big)
\end{aligned}\label{eq:mbd-dyson} \\
\boldsymbol\alpha'_{\text{eff},p}(u)=\sum_q\operatorname{Tr}_{\mathbb R^3}\big(\boldsymbol\alpha_{\text{sr},pq}(u;\lambda=1)\!\big)
\label{eq:contraction}
\end{gather}
Second, the dipole-coupled Hamiltonian in~\eqref{eq:mbd} is solved with $\boldsymbol\alpha(0)$ and $\boldsymbol\omega$ calculated from $\boldsymbol\alpha_\text{eff}'$, and $\mathbf T$ set to $\mathbf T_\text{lr}$, which is defined using the range-separating function of the TS method with a smoother switching profile.
\citet{SilvestrelliJCP13} developed another method inspired by MBD in which the oscillators do not model the response of the atoms, but of Wannier functions.
This Wannier-based MBD is to the pairwise vdW-WF method what the range-separated MBD is to the pairwise TS method.
Unlike in vdW-WF, here the polarizabilities of the Wannier functions are not calculated using a local polarizability functional, but directly from the Hirshfeld volumes of the Wannier functions.
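All of the coupled-oscillator methods discussed in this section ultimately reduce to diagonalizing the matrix $\mathbf Q$ of~\eqref{eq:dosc-hamil}.
The following short Python sketch is a purely illustrative reimplementation written for this text, not code taken from any of the cited works; the function and variable names are placeholders.
It assembles $\mathbf Q$ from given static polarizabilities, resonance frequencies, and a long-range dipole tensor, and returns the $L=1$ long-range correlation energy of~\eqref{eq:mbd-rpa} as the difference of coupled and uncoupled zero-point energies.
\begin{verbatim}
import numpy as np

def coupled_oscillator_energy(alpha_0, omega, T_lr):
    """L=1 coupled-oscillator correlation energy (atomic units).

    alpha_0, omega -- arrays of length N with static polarizabilities
                      and resonance frequencies of the oscillators
    T_lr           -- (3N, 3N) long-range dipole coupling tensor
    """
    N = len(alpha_0)
    Q = np.zeros((3 * N, 3 * N))
    for p in range(N):
        for q in range(N):
            if p == q:
                # uncoupled harmonic term
                Q[3*p:3*p+3, 3*q:3*q+3] = omega[p]**2 * np.eye(3)
            else:
                # dipole coupling term of the dipole-coupled Hamiltonian
                Q[3*p:3*p+3, 3*q:3*q+3] = (
                    omega[p] * omega[q]
                    * np.sqrt(alpha_0[p] * alpha_0[q])
                    * T_lr[3*p:3*p+3, 3*q:3*q+3]
                )
    omega_tilde_sq = np.linalg.eigvalsh(Q)   # squared coupled frequencies
    e_coupled = 0.5 * np.sum(np.sqrt(omega_tilde_sq))
    e_uncoupled = 0.5 * 3.0 * np.sum(omega)
    return e_coupled - e_uncoupled
\end{verbatim}
Negative eigenvalues of $\mathbf Q$, which would signal an unphysically strong coupling, are not handled in this sketch.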
\documentclass{article}
\usepackage{multirow}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage[hidelinks]{hyperref}
\newcommand{\Invariant}{\textbf{\underline{Invariant:}}}
\newcommand{\topic}[1]{\textbf{#1:}}
\newcommand{\ls}{\\*[1ex]}
\newcommand{\timecase}[2]{ \vspace{1em} \begin{center} \begin{tabular}{ |c|c|c| } \hline \textbf{Time} & \textbf{Best Case} & \textbf{Worst Case} \\ \cline{2-3} \textbf{Complexity} & $\Theta \left( #1 \right)$ & $\Theta \left( #2 \right)$ \\ \cline{2-3} \hline \end{tabular} \end{center} }
\begin{document}
\tableofcontents
\pagebreak
\section{Insertion sort algorithm and proof}
\subsection{Algorithm}
\begin{algorithm}
\caption{Insertion Sort}
\begin{algorithmic}[1]
\Require array of comparable elements
\Function{insertion\_sort}{array} \Comment{0 indexed array assumed}
\For {$j = 1$ \textbf{to} $array.length - 1$}
\State $key = array[j]$;
\State $i = j - 1$;
\While{$i \ge 0$ and $array[i] > key$}
\State $array[i + 1] = array[i]$;
\State $i = i - 1$;
\EndWhile
\State $array[i + 1] = key$;
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Proof}
\Invariant The subarray array[0..j-1] is always in sorted order. \ls
\topic{Initialization} Initially we have j = 1, so the subarray array[0..0] (a single element) is trivially sorted. \ls
\topic{Maintenance} The for loop moves forward from j = 1; in each iteration it shifts every element of array[0..j-1] that is greater than the current key one position to the right and then places the key into the resulting gap, so array[0..j] is in sorted order. \ls
\topic{Termination} When the loop terminates, j = array.length = n, so array[0..n-1] is in sorted order, which is the entire array. So we conclude that the entire array is sorted.
\timecase{n}{n^2}
\pagebreak
\section{Linear search algorithm and proof}
\subsection{Algorithm}
\begin{algorithm}
\caption{Linear Search Algorithm}
\begin{algorithmic}[1]
\Require Array to search and the value to search for
\Function {linear\_search}{$array, value$}
\For {$index = 0$ \textbf{to} $array.length - 1$}
\If {$array[index] == value$}
\State \Return $index$;
\EndIf
\EndFor
\State \Return $index$;
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Proof}
\Invariant The value is not within the range array[0..index-1]. \ls
\topic{Initialization} Initially we have index = 0, so the value isn't within the range array[0..-1], which is an empty array. \ls
\topic{Maintenance} In each iteration of the for loop, if the element at index isn't equal to the value, we increment index; otherwise we return the index and terminate the function. In either case, the value will not be contained within the range array[0..index-1]. \ls
\topic{Termination} The returned index either points to the value itself, or lies outside the valid index range of the array. In the former case, since the value was first found at that index, it cannot occur within the subarray array[0..index-1]. In the latter case, the loop has examined every element without finding the value, which implies that the value isn't contained within the array at all. So we conclude that linear search always reports the presence or absence of the value in the given array.
\timecase{1}{n}
\pagebreak
\section{Selection sort algorithm and proof}
\subsection{Algorithm}
\begin{algorithm}
\caption{Selection sort algorithm}
\begin{algorithmic}[1]
\Function{selection\_sort}{$array$}
\For {$i = 0$ \textbf{to} $array.length - 2$}
\State index = i;
\For {$j = i + 1$ \textbf{to} $array.length - 1$}
\If {$array[index] > array[j]$}
\State index = j;
\EndIf
\EndFor
\State \Call{swap}{$array[index], array[i]$};
\EndFor
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Proof}
\Invariant The subarray array[0..i-1] is always in sorted order, and no element in it is greater than any element in array[i..array.length-1]. \ls
\topic{Initialization} Initially we have i = 0, so the subarray array[0..-1] is empty and the invariant holds trivially. \ls
\topic{Maintenance} In each iteration we advance by one position. At position i we find the smallest value in array[i..array.length-1] and swap it with the value at position i. This value is at least as large as every element already placed in array[0..i-1] and no larger than any of the remaining elements, so the invariant is preserved. We only need to process the first n-1 positions, as the final element is then in its correct place implicitly. \ls
\topic{Termination} When the loop ends, the subarray array[0..array.length-2] is sorted and no element in it is greater than the last element, so the last element is already in its correct position. Hence we can conclude that the entire array is sorted.
\timecase{n^2}{n^2}
\end{document}
% Standard preamble \documentclass[12pt]{article} \usepackage{geometry, hyperref, sectsty} \usepackage[resetlabels]{multibib} \newcites{pubs,inps,subs,talks} {Peer-Reviewed Publications,In Press,Under Review,Contributed Talks} \makeatletter \def\@biblabel#1{} \makeatother %pagestyle % US STYLE \geometry{ letterpaper, total={7in,9.5in}, left=0.75in, top=0.75in} % EUROPEAN STYLE %\geometry %{ % a4paper, % total={170mm,280mm}, % left=20mm, % top=10mm, %} \parindent = 0pt \parskip = 6pt % Set font \renewcommand{\familydefault}{\sfdefault} % Set title size/spacing \sectionfont{\fontsize{14}{4}\selectfont} \sectionfont{\fontsize{14}{16}\selectfont\sectionrule{0ex}{0pt}{-1ex}{1pt}} %\titlespacing*{\section}{0pt}{14pt plus 4pt minus 4pt}{4pt plus 2pt minus 2pt} % Define bullet \def\bullitem{\par\hangindent=15pt \hangafter=1 \noindent\hbox to 20pt{\hfil$\bullet$\hfil}\ignorespaces} % Define hrulefull \def\hrulefull{\noindent\rule{\textwidth}{0.2pt}} \begin{document} % Header/Contact Info {\large \textbf{Tyler H.\ Chang}} \begin{tabular}{llll} Argonne National Laboratory & $\quad$ & Email:&\hskip -3pt \href{mailto:[email protected]}{[email protected]}\\ Mathematics and Computer Science (MCS) Division & $\quad$ & Website:&\hskip -3pt \url{http://thchang.github.io}\\ 9700 S.\ Cass Ave.\ Bldg.\ 240, Lemont, IL 60439 & $\quad$ & GitHub:&\hskip -3pt \url{http://github.com/thchang} \\ \end{tabular} % Begin document \section*{Primary Interests} Numerical optimization, machine learning, algorithms, parallel computing, and scientific software. \section*{Education} Ph.D., May 2020, Computer Science, Virginia Polytechnic Institute \& State University (Virginia Tech) \medskip B.S., May 2016, Computer Science \& Mathematics, Virginia Wesleyan University, \textit{Summa Cum Laude} \section*{Research Experience} \hangafter=1 \hangindent=0.3in (June, 2020 -- Present) \textbf{Postdoc: Argonne National Lab}, Math.\ \& Computer Science (MCS) Div. \bullitem R\&D in multiobjective simulation optimization, parallel computing, and scientific software. %\bullitem Machine Learning for Self-Driving Labs: %automating material discovery via continuous-flow chemistry. %\bullitem Member of FASTMATH: A SciDAC institute spanning %5 national labs and 5 universities. \medskip \hangafter=1 \hangindent=0.3in (Aug, 2016 -- May, 2020) \textbf{Cunningham Fellow: Virginia Tech}, Dept.\ of Computer Science. \bullitem R\&D in numerical analysis, math software, algorithms, parallel computing, and data science. \bullitem Math \& Algorithms team in VarSys: A NSF-funded study of HPC performance variability. \medskip \hangafter=1 \hangindent=0.3in (June, 2019 -- Dec, 2019) \textbf{SCGSR Awardee: Argonne National Lab}, MCS Division. \bullitem R\&D in multiobjective optimization, funded via ORAU/ORISE (see awards for details). \medskip \hangafter=1 \hangindent=0.3in (Feb -- Aug, 2016) \textbf{Research Assistant: Old Dominion University}, Dept.\ of Computer Science. \bullitem R\&D in GPU computing and parallelization of NASA's FUN3D CFD kernel on NVIDIA GPUs. \medskip \hangafter=1 \hangindent=0.3in (Summer 2014, Winter 2014, Summer 2015, \& Winter 2015) \textbf{Intern: US Army Research Labs}, Computational Science Division (CSD) and Guidance Technology Branch (GTB). \bullitem CSD --- R\&D in autonomous driving (Summer 2015) \& virtual reality (Winter 2015). \bullitem GTB --- R\&D in computer vision (Summer 2014) \& embedded systems (Winter 2014). 
\medskip \newpage \section*{Awards} \textbf{Nominee for Outstanding Dissertation Award} (2021) The Dept.\ of Computer Science's nominee for Virginia Tech's \href{https://graduateschool.vt.edu/about/awards/student/oustanding-dissertation-award.html}{Outstanding Dissertation Award} for the year 2020. \textbf{Cunningham Doctoral Fellow} (2016--2020). The \href{https://graduateschool.vt.edu/funding/types-of-funding/funding-sponsored-by-the-graduate-school/cunningham-doctoral-assistantships.html} {Cunningham doctoral fellowship} is a Virginia Tech graduate school wide award, guaranteeing 4 years of research funding. \textbf{Davenport Leadership Fellow} (2016--17 \& 2019--20). The Davenport leadership fellowship is a supplemental award given by Virginia Tech, College of Engineering on a per-year basis. \textbf{DOE SCGSR Awardee} (2019). One of 70 proposals funded by the United States Dept.\ of Energy, Office of Science Graduate Student Research program, during the \href{https://www.energy.gov/articles/doe-s-science-graduate-student-research-program-selects-70-students-pursue-research-doe}{2018, 2nd call for proposals}. \textbf{Pratt Fellow} (2017--18 \& 2018--19). The Pratt fellowship is a supplemental award given by Virginia Tech, College of Engineering on a per-year basis. \section*{Publicly Available Software} \textbf{DELAUNAYSPARSE} is a software package for computing the Delaunay interpolant in medium to high dimensions. Both serial and parallel drivers are available with interfaces in \texttt{Fortran}, \texttt{C/C++}, \texttt{Python 3.6+}, and command line. Download: \url{https://vtopt.github.io/DelaunaySparse}. \textbf{QAML} is a \texttt{Python} package for embedding polynomial sum of squares minimization problems on the D-Wave quantum annealer. Download: \url{https://github.com/tchlux/qaml}. \section*{Pending Software} \textbf{PARMOO} is a \texttt{Python} library and framework for solving large-scale multiobjective simulation optimization problems, while exploiting any available structures in the simulation. Integrates with \texttt{libEnsemble} for portable, scalable parallelism. Code not yet publicly available. \textbf{VTMOP} is a solver and framework for computationally expensive blackbox multiobjective optimization problems with continuous variables and 2+ objectives. Written in \texttt{Fortran}; a \texttt{Python} interface is also available through \href{https://github.com/Libensemble/libensemble/blob/master/libensemble/gen_funcs/vtmop.py}{libEnsemble}. \nocitepubs{*} %\nociteinps{*} \nocitesubs{*} \nocitetalks{*} \bibliographystylepubs{cv} %\bibliographystyleinps{cv} \bibliographystylesubs{cv} \bibliographystyletalks{cv} \bibliographypubs{pubs} %\bibliographyinps{inps} \bibliographysubs{subs} \bibliographytalks{talks} \newpage \section*{Professional Activities} \textbf{Referee:} For The Visual Computer Journal (2021); ACM Transactions on Mathematical Software (2021); Super Computing (2021); Quantum Information Processing (2021); Mathematical and Computer Applications (2021); Journal of Machine Learning Research (2019); IEEE SoutheastCon (2018 -- 2020). \textbf{Session Chair:} SIAM Conference on Optimization (2021); SIAM Conference on Computational Science and Engineering (2021); IEEE SoutheastCon (2018). \textbf{Membership:} ACM (2015 -- Present); SIAM (2016 -- Present); SCS (2020 -- Present). \textbf{Counselor / Founding Member:} Virginia Tech CS Graduate Counsel (Fall, 2017 -- Fall, 2019). Organizing professional and social events for current and prospective graduate students. 
\section*{Teaching Experience} \hangafter=1 \hangindent=0.3in (Spring, 2020) \textbf{Instructor of Record: Virginia Tech}, Dept.\ of Computer Science. \bullitem Instructor for {\it CS 3114: Data Structures and Algorithms}. \bullitem Responsible for moving course material and infrastructure online due to COVID-19. \medskip \hangafter=1 \hangindent=0.3in (Spring, 2013 -- Fall, 2015) \textbf{Subject Tutor: Virginia Wesleyan University}, Learning Center. \bullitem Subject tutor for undergraduate courses in calculus, computer science, and statistics. \end{document}
\chapter{Modules}
\label{chap:Modules}
The source code of RespVis is structured into modules written in the ES module format.
Currently, all these modules are combined into a single, monolithic library bundle during the build process.
In the future, each module will be released on its own to allow users to import only the ones they need.
The reason for this is that most users will likely only require a subset of all the features included in the library, and importing all of them would unnecessarily increase the size of their bundles.
A good example of this is D3, which also separates its considerable number of features into different modules that can be successively added to a project when the need arises.
At the time of writing, the RespVis library contains five different modules: the core, legend, tooltip, bar, and point modules.
Each of these modules contains submodules that have been grouped by thematic similarity.
The core module holds the core functionality of the library that all other modules depend on, which includes the layouter, axes, chart and chart window base components, and various utility functions and types.
The legend module contains the implementation of a legend component that is mostly meant to describe discrete data dimensions by rendering distinct values as labeled symbols.
The tooltip module holds functions to control the showing, placement, and content of tooltips, as well as utility functions that simplify the configuration and initialization of tooltips on series components.
% None of the modules listed so far contain components that render visual marks, which are characteristic of visualizations.
The bar module distinguishes between normal, grouped, and stacked bars and includes various low-level and high-level components to render each of those types.
Similarly, the point module contains low-level and high-level components to visualize point charts.
All of the different modules and the dependencies between them are shown in Figure~\ref{fig:Modules}.
\begin{figure}[tp] \centering \includegraphics[keepaspectratio,width=\linewidth,height=\fullh]{diagrams/respvis-modules.pdf} \caption[Modules of RespVis]{ This diagram shows the different modules of the RespVis library. It also shows the most important submodules contained in the individual modules. The directional arrows connecting modules indicate dependencies between them. \imgcredit{Image created by the author of this thesis using \href{https://www.diagrams.net/}{diagrams.net}.} } \label{fig:Modules} \end{figure}
\section{Core Module}
The core module contains the necessary core functionality of the library.
It is the base module that all other modules depend on and includes various utility functions, the layouter, axes, chart base components, and chart window base components.
RespVis heavily relies on utility functions to reuse and structure recurring operations.
The core module contains utilities to deal with arrays, elements, selections, and texts, as well as geometric utilities that simplify the handling of positions, sizes, rectangles, circles, and paths.
The layouter is a custom component that enables controlling the layout of SVG elements with CSS.
% It achieves this by replicating the DOM tree of SVG elements that should be laid out with HTML \code{<div>} elements, applying the appropriate CSS configuration on the replicated elements, and storing their calculated layout information on the original SVG elements.
% Render functions can then use the stored layout information on laid out SVG elements to render their content so that it fits within the corresponding boundaries.
Axis components have been included in the core module because they are important components that occur in nearly every visualization.
% However, since only Cartesian charts have been implemented thus far, only Cartesian axes can be found in the implementation.
Lastly, the chart and chart window components provide base functionalities that simplify the creation of more specialized charts and chart windows.
The core module implementation is located in the \code{src/lib/core/} directory of the project.
\subsection{Utilities}
The utilities provided by RespVis are split across multiple modules that are placed in the \code{utilities/} directory of the core module.
These modules include types and functions that perform array, element, selection, and text operations, as well as modules that simplify geometric operations with positions, sizes, rectangles, circles, and paths.
Utility functions are grouped into modules by the type of entity they operate on.
This grouping is also reflected in the names of functions, which all begin with the type of entity a function is associated with.
\TODO{Capitalize "core module", "array utility module", ... ?}
Array utilities can be found in the \code{utilities/array.ts} module.
The \code{Array} class in the JavaScript base implementation already offers a wide variety of convenient methods to work with arrays.
These methods form a solid foundation that allows handling of a broad range of situations.
However, not everything is covered out of the box, and some things require manual implementation, which is why the RespVis library offers additional functions that simplify commonly encountered tasks.
The \code{arrayEquals} function is used to verify equality of two arrays and also works if they contain nested arrays within them.
Type guard functions are used to determine the type of a variable at runtime.
For this purpose, two different array type guard functions are provided in the array utility module: \code{arrayIs}, which evaluates to true if the passed parameter is an array, and \code{arrayIs2D}, which evaluates to true if the passed parameter is a two-dimensional array.
The \code{arrayIs} function is merely a wrapper around the \code{Array.isArray} method.
Theoretically, the \code{Array.isArray} method could be used directly instead of the \code{arrayIs} function, but because the \code{arrayIs2D} function is required, the \code{arrayIs} function has also been added for consistency reasons.
The last function in the array utility module is the \code{arrayPartition} function.
This function receives an array and a partition size as parameters and returns a partitioned version of the input array with each chunk containing the number of items specified by the partition size parameter.
% elementRelativeBounds
% elementComputedStyleWithoutDefaults
% elementSVGPresentationAttrs
The element utilities module, located at \code{utilities/elements.ts} in the core module, contains functions and constants related to document elements.
The \code{elementRelativeBounds} function is used to calculate the bounding box of an element relative to the bounding box of its parent in viewport coordinates.
Internally, it uses the \code{getBoundingClientRect} function, which returns the actual bounding box of an element in viewport coordinates and, as opposed to other ways of accessing an element's position, also takes transformations into account.
Every element has a set of CSS styles applied to it, and the \code{Window.getComputedStyle} method can be used to access the active style of an element.
The style declaration object returned by this method contains all possible CSS properties and their values, regardless of whether or not they are set to default values.
Sometimes this behavior may be desired, but in this library, the computed style is used during the preparation of a downloadable SVG document to transform styles set in CSS to attributes on the individual elements.
If every possible style property on every element were mapped to an attribute, the resulting SVG document would be unnecessarily bloated because only those properties that are not set to their default values actually have an effect.
For this reason, the \code{elementComputedStyleWithoutDefaults} function has been implemented to calculate the computed style of an element and remove all properties that are set to default values from the returned style declaration object.
This is implemented by adding a \code{<style-dummy>} element as a sibling of the element of interest, getting the computed styles of both elements, and calculating the difference between them.
To speed up these calculations, the \code{elementComputedStyleWithoutDefaults} function accepts an array of property names as its second parameter and will only consider the properties listed in this array.
The constant \code{elementSVGPresentationAttrs} array contains all names of the presentation attributes listed in the SVG 1.1 specification \parencite{SVG11}.
Since only these SVG attributes can be styled via CSS, only these properties need to be taken into account when preparing downloadable SVG documents.
% Transition default type variables
% Selection default type variables
% SelectionOrTransition default type variables
% Selection.attr add possible null return value
% Selection.dispatch allow Partial parameters
% isSelection
% isTransition
\TODO{When to write \code{Selection} and when to write Selection? What about plural? \code{Selections} or \code{Selection}s?}
Selection utilities are implemented in the \code{utilities/selection.ts} module.
They include typing improvements for the D3 \code{Selection}, \code{Transition}, and \code{SelectionOrTransition} interfaces and type guards to distinguish between \code{Selection}s and \code{Transition}s.
The \code{Selection}, \code{Transition}, and \code{SelectionOrTransition} interfaces allow the specification of four different type variables: the type of elements contained in the \code{Selection} or \code{Transition}, the type of data that is bound to those elements, the type of the parents of those elements, and the type of data that is bound to those parents.
In most cases, the type variables related to parent elements do not influence the logic of code using these interfaces and could be omitted to keep it more concise.
For this reason, these interfaces have been reexported with default types set on all of the type variables.
This means that whenever type variables need to be manually specified, only those that need to be set to specific types need to be explicitly stated.
Further typing improvements have been made to the \code{attr} and \code{dispatch} methods of the \code{Selection} interface.
The D3 type declarations of the \code{Selection.attr} method do not include \code{null} as a possible return value.
This is wrong because this method will actually result in a \code{null} value when reading an attribute that does not exist.
To fix this inconsistency and catch potential bugs related to this during compilation, the type declaration of the \code{Selection.attr} method has been overwritten in the Selection utility module to also include \code{null} as a possible return value.
A less important but nonetheless convenient improvement has been made to the type declaration of the \code{Selection.dispatch} method.
This method allows the dispatching of custom events with certain parameters that control different aspects of how this event is dispatched and the data that may be bound to it.
In practice, not all parameters need to be specified at every invocation because the implementation of the \code{Selection.dispatch} method will provide default values for all of them.
However, this is not reflected in the type declaration of the function, which requires every parameter to be provided every time the function is called.
To fix this, the Selection utility module provides a type declaration overwrite for the \code{Selection.dispatch} function that wraps the type of the parameters parameter into the \code{Partial} type.
Apart from these typing improvements, this module also provides the \code{isSelection} and \code{isTransition} type guard functions that are used to distinguish between \code{Selection}s and \code{Transition}s.
Utilities for dealing with \code{<text>} elements can be found in the \code{utilities/text.ts} module.
It contains rather basic functionalities that simply set specific data attributes to specific values on \code{<text>} elements.
The text utility module holds functions that set data attributes controlling the horizontal and vertical alignment of \code{<text>} elements, as well as their orientation.
Horizontal and vertical alignment is configured using the \code{textAlignHorizontal} and \code{textAlignVertical} functions.
These functions respectively set the \code{data-align-h} and \code{data-align-v} attribute on a Selection or Transition to the value passed into either function as a string enum parameter of type \code{HorizontalAlignment} or \code{VerticalAlignment}.
The \code{HorizontalAlignment} enum represents the string values \code{\"left\"}, \code{\"center\"} and \code{\"right\"}, while the \code{VerticalAlignment} enum represents the values \code{\"top\"}, \code{\"center\"} and \code{\"bottom\"}.
The distinct \code{data-align-h} and \code{data-align-v} attribute values are then used in the \code{respvis.css} file to declare different \code{text-anchor} and \code{dominant-baseline} values that control the alignment of \code{<text>} elements.
Text orientation is set using the \code{textOrientation} function.
This function sets the \code{data-orientation} attribute on a Selection or Transition to the value specified via the string enum parameter of type \code{Orientation}.
The \code{Orientation} enum represents the values \code{\"horizontal\"} and \code{\"vertical\"}.
These \code{data-orientation} attribute values are then used in CSS to set the \code{text-anchor}, \code{dominant-baseline}, and \code{transform} properties of a \code{<text>} element, in order to rotate it accordingly and position it correctly inside the bounding box calculated by the layouter.
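To make the data-attribute pattern described above concrete, a minimal TypeScript sketch of such an alignment helper and the kind of CSS rule it relies on could look as follows.
This is a simplified illustration written for this text rather than the library's actual source; the import path and the exact CSS mapping are assumptions.
\begin{verbatim}
import { Selection } from './selection'; // reexported D3 types (path assumed)

export enum HorizontalAlignment {
  Left = 'left',
  Center = 'center',
  Right = 'right',
}

export function textAlignHorizontal(
  selection: Selection<SVGTextElement>,
  align: HorizontalAlignment
): void {
  // The helper only tags the element; the visual effect comes from CSS.
  selection.attr('data-align-h', align);
}

/* respvis.css (illustrative mapping):
   text[data-align-h='center'] { text-anchor: middle; } */
\end{verbatim}
Keeping such helpers thin leaves the actual styling decisions to CSS.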
% Position interface
% positionRound
% positionEquals
% positionToString
% positionFromString
% positionToAttrs (SelOrTrans)
% positionFromAttrs (SelOrTrans)
% positionToTransformAttr (SelOrTrans)
The core module also contains utilities that simplify geometric operations.
One of these utilities is the position utility module located at \code{utilities/position.ts}.
This module contains the \code{Position} interface and various functions to perform operations related to it.
The \code{Position} interface consists of the \code{x} and \code{y} number members.
Rounding these members is necessary to be able to correctly compare equality of two \code{Position}s and to not render unnecessarily long strings when transforming them into string representations.
This rounding is performed with the \code{positionRound} function, which allows the specification of the number of decimals the member variables should be rounded to.
Equality comparison between two \code{Position} variables can be performed with the \code{positionEquals} function, which evaluates to \code{true} if both \code{Position}s are equal and \code{false} if not.
To transform a \code{Position} into its string representation of the form \code{\"x, y\"}, the \code{positionToString} function can be used.
Its counterpart, the \code{positionFromString} function, can be used to transform a string in the correct format into a \code{Position}.
A large part of RespVis consists of modifying the attributes of elements.
Therefore, the \code{positionToAttrs} function can be used to set the \code{x} and \code{y} attributes of a \code{SelectionOrTransition} to the values of the \code{x} and \code{y} members of a \code{Position}.
Similarly, the \code{positionToTransformAttr} function can be used to set the \code{transform} attribute of a \code{SelectionOrTransition} to a translation representing a \code{Position}.
The position utility module also contains the \code{positionFromAttrs} function, which can be used to create a \code{Position} from the \code{x} and \code{y} attributes of a \code{SelectionOrTransition}.
% Size
% sizeRound
% sizeEquals
% sizeToString
% sizeFromString
% sizeToAttrs
% sizeFromAttrs
The size utility module, located at \code{utilities/size.ts} in the core module, is very similar to the position utility module.
It contains the \code{Size} interface, which consists of the \code{width} and \code{height} number member variables.
The \code{sizeRound} function is used to round the member variables of a \code{Size} object to a certain number of decimals.
To compare two \code{Size} objects for equality, the \code{sizeEquals} function can be used.
Similar to the equivalent functions in the position utility module, the \code{sizeToString} and \code{sizeFromString} functions can be used to convert between \code{Size} objects and their string representations.
Moreover, the \code{sizeToAttrs} function can be used to set the \code{width} and \code{height} attributes of a \code{SelectionOrTransition} to the values of a \code{Size} object, and the \code{sizeFromAttrs} function can be used to create a new \code{Size} object from the values of these attributes.
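The shape shared by these position and size helpers can be sketched as follows; this is an illustration written for this text and not the library's actual implementation, and the import path and exact signatures are assumptions.
\begin{verbatim}
import { SelectionOrTransition } from './selection'; // reexported type (path assumed)

export interface Position {
  x: number;
  y: number;
}

// "x, y" string representation, as described above.
export function positionToString(position: Position): string {
  return `${position.x}, ${position.y}`;
}

export function positionFromString(str: string): Position {
  const [x, y] = str.split(',').map((part) => parseFloat(part));
  return { x, y };
}

export function positionToAttrs(
  selection: SelectionOrTransition<Element>,
  position: Position
): void {
  selection.attr('x', position.x);
  selection.attr('y', position.y);
}
\end{verbatim}
The \code{Size} helpers follow the same pattern with \code{width} and \code{height} in place of \code{x} and \code{y}.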
% Rect
% rectRound
% rectEquals
% rectToString
% rectFromString
% rectToAttrs
% rectMinimized
% rectFitStroke
% rectPosition
% rectCenter
% rectLeft, rectRight, rectTop, rectBottom
% rectTopLeft, rectTopRight, rectBottomRight, rectBottomLeft
Utilities for dealing with rectangles can be found in the rectangle utility module, which is located under \code{utilities/rect.ts} in the core module.
This module contains the \code{Rect} interface, which is the combination of the \code{Position} and \code{Size} interfaces and therefore describes an object with the number member variables \code{x}, \code{y}, \code{width}, and \code{height}.
Similar to the position and size utility modules, this module contains the \code{rectRound} function to round \code{Rect} objects, the \code{rectEquals} function to compare two of them for equality, the \code{rectToString} and \code{rectFromString} functions to convert between \code{Rect} objects and their string representations, and the \code{rectToAttrs} and \code{rectFromAttrs} functions to convert between objects and \code{x}, \code{y}, \code{width}, and \code{height} attributes.
Since the \code{Rect} interface is a combination of the \code{Position} and \code{Size} interfaces, most of the functions in this module internally use the functions provided by the position and size utility modules.
The \code{rectMinimized} function is used in transitions that grow or shrink a \code{<rect>} element from or to its center.
It creates a minimized version of the passed \code{Rect}, which is infinitely small and positioned at the original \code{Rect}'s center.
When a stroke is declared for an SVG element, it is drawn exactly on the outline of the element's silhouette.
This means that a stroke will extend outside the original bounds of an element by half the stroke width, which can lead to unwanted artefacts like the stroke of bars in a bar chart overlapping the chart's axes.
To counteract this, the \code{rectFitStroke} function is provided to adjust the bounding box of \code{Rect} objects to account for a stroke of a certain width around them.
Lastly, the rectangle utility module provides functions to calculate specific positions inside of rectangles.
The most generic of these functions is the \code{rectPosition} function.
This function enables the calculation of a position inside of a rectangle via a two-dimensional parameter that expresses a position as the relative width and height distance from a rectangle's top-left corner.
All other position-calculating rectangle utility functions are simply shorthand functions that internally call the \code{rectPosition} function.
The \code{rectCenter} function returns a \code{Position} object representing the center position of a \code{Rect} object.
The \code{rectLeft}, \code{rectRight}, \code{rectTop}, and \code{rectBottom} functions return \code{Position} objects that represent the middle position of a \code{Rect} object's edges.
Similarly, the \code{rectTopLeft}, \code{rectTopRight}, \code{rectBottomRight}, and \code{rectBottomLeft} functions can be used to calculate the corner positions of a rectangle.
The last geometric primitive whose handling is simplified by a RespVis utility module is a circle.
The circle utility module can be found at \code{utilities/circle.ts} in the core module.
It contains the \code{Circle} interface, which describes a circle object as a \code{center} property of type \code{Position} and a \code{radius} property of type \code{number}. This module also contains equivalent functions to those found in previously mentioned utility modules: \code{circleRound}, \code{circleEquals}, \code{circleToString}, \code{circleFromString}, \code{circleToAttrs}, \code{circleFromAttrs}, \code{circleMinimized}, and \code{circleFitStroke}. Furthermore, the \code{circlePosition} function can be used to calculate positions using an angle that defines the direction and an optional parameter that defines the distance from the circle's center as a percentage of the circle's radius. The circle utility module also contains functions to create circles from rectangles. These functions are the \code{circleInsideRect} function to calculate the largest circle that can fit inside of a rectangle and the \code{circleOutsideRect} function to calculate the smallest circle that encloses a rectangle. % pathRect % pathCircle The purpose of the path utility module is to provide functions that simplify the creation of path definitions that can be set on \code{<path>} elements. It is located at \code{utilities/path.ts} in the core module and only contains a small number of functions. The \code{pathRect} function creates a path definition that represents a rectangle. A \code{<path>} element with a path definition representing a rectangle can be used instead of a \code{<rect>} element. Similarly, the \code{pathCircle} function creates a path definition that represents a circle. Such a path definition is used on a \code{<path>} element to render a circle instead of using a \code{<circle>} element. The reason for using \code{<path>} elements rather than more descriptive shape elements is that path elements can change their shape dynamically without replacing elements. Since only the \code{d} attribute of a path needs to change when the path's shape is altered, it is also possible to smoothly transition between shapes by interpolating the path definition strings. \subsection{Layouter} % layouter \subsection{Axes} % base % bottom axis % left axis \subsection{Chart} % chart % chart cartesian \subsection{Chart Window} % chart window % menu dropdown % series checkbox % tool filter nominal % tool download svg % resize event dispatcher \section{Legend Module} \section{Tooltip Module} \section{Bar Module} \subsection{Basic Bars} \subsection{Grouped Bars} \subsection{Stacked Bars} \section{Point Module}
% !TEX root = ../../../proposal.tex \vspace{-2pt} \subsection{Vulnerability measurements} We performed additional scans from the University of Michigan of the public IPv4 address space between February 16, 2016 and February 22, 2016, to measure vulnerability to special DROWN\@. %% % Direct adaptation of David's raw numbers %% \begin{table}[ht!] %% \newcommand{\sdrotate}[1]{\rotatebox[origin=l]{90}{#1}} %% \newcommand{\sdTrue}{Yes} %% \newcommand{\sdFalse}{No} %% \centering\small %% \begin{tabular}{rcccccc} %% \toprule %% \multicolumn{1}{c}{\sdrotate{IPv4 HTTPS hosts that \ldots}} & %% \sdrotate{Support SSLv2} & %% \sdrotate{Allow SSLv2 export ciphers} & %% \sdrotate{Have OpenSSL extra\_clear bug} & %% \sdrotate{Support SSLv3 or later} & %% \sdrotate{Have browser-trusted certificates} & %% \sdrotate{Use keys vulnerable to Special DROWN\ } \\ %% \midrule %% 9919486 & \sdFalse & \sdFalse & \sdFalse & \sdTrue & \sdFalse & \sdFalse \\ %% 4840401 & \sdFalse & \sdFalse & \sdFalse & \sdTrue & \sdFalse & \sdTrue \\ %% 13040553 & \sdFalse & \sdFalse & \sdFalse & \sdTrue & \sdTrue & \sdFalse \\ %% 1405843 & \sdFalse & \sdFalse & \sdFalse & \sdTrue & \sdTrue & \sdTrue \\ %% 53996 & \sdTrue & \sdFalse & \sdFalse & \sdFalse & \sdFalse & \sdFalse \\ %% 534873 & \sdTrue & \sdFalse & \sdFalse & \sdTrue & \sdFalse & \sdFalse \\ %% 5403 & \sdTrue & \sdFalse & \sdFalse & \sdTrue & \sdFalse & \sdTrue \\ %% 638334 & \sdTrue & \sdFalse & \sdFalse & \sdTrue & \sdTrue & \sdFalse \\ %% 12933 & \sdTrue & \sdFalse & \sdFalse & \sdTrue & \sdTrue & \sdTrue \\ %% 160 & \sdTrue & \sdFalse & \sdTrue & \sdFalse & \sdFalse & \sdFalse \\ %% 7 & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdFalse & \sdFalse \\ %% 4591 & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdFalse & \sdTrue \\ %% 5 & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdTrue & \sdFalse \\ %% 588 & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdTrue & \sdTrue \\ %% 21212 & \sdTrue & \sdTrue & \sdFalse & \sdFalse & \sdFalse & \sdFalse \\ %% 466319 & \sdTrue & \sdTrue & \sdFalse & \sdTrue & \sdFalse & \sdFalse \\ %% 32515 & \sdTrue & \sdTrue & \sdFalse & \sdTrue & \sdFalse & \sdTrue \\ %% 325878 & \sdTrue & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdFalse \\ %% 80985 & \sdTrue & \sdTrue & \sdFalse & \sdTrue & \sdTrue & \sdTrue \\ %% 51266 & \sdTrue & \sdTrue & \sdTrue & \sdFalse & \sdFalse & \sdFalse \\ %% 123 & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdFalse & \sdFalse \\ %% 3138719 & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdFalse & \sdTrue \\ %% 74 & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdFalse \\ %% 834207 & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdTrue & \sdTrue \\ %% \bottomrule %% \end{tabular} %% \caption{Scan results indicate widespread vulnerability to the %% Special DROWN attack.} %% \end{table} As shown in Table~\ref{table:special}, 9.1\,M HTTPS servers (26\%) are vulnerable to special DROWN, as are 2.5\,M HTTPS servers with browser-trusted certificates~(14\%). 66\% as many HTTPS hosts are vulnerable to special DROWN as to general DROWN\@ (70\% for browser-trusted servers). While there are 2,704,201 public keys that are vulnerable to general DROWN, we found 1,133,906 vulnerable to special DROWN (41\% as many). Vulnerability among Alexa 1M domains is lower, with only 9\% of Alexa domains vulnerable (7\% for browser-trusted domains). 
Since special DROWN enables active man-in-the-middle attacks, any host serving a browser-trusted certificate with at least one name that appears on any certificate with a key exposed by a special DROWN oracle is vulnerable to a man-in-the-middle attack. Extending our search to account for shared names, we find 3.8\,M~(22\%) of hosts with browser-trusted certificates are vulnerable to man-in-the-middle, as well as 19\% of browser-trusted Alexa 1M.
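As a toy illustration of this matching criterion (with hypothetical hosts and certificate names, not the scan data or measurement pipeline itself), a host is flagged once its browser-trusted certificate shares any name with a certificate whose key is exposed by a special DROWN oracle:
\begin{verbatim}
# Toy illustration of the shared-name criterion (hypothetical inputs).
exposed_certs = [
    {"names": {"example.com", "www.example.com"}},
    {"names": {"mail.example.org"}},
]
trusted_hosts = {
    "192.0.2.10": {"example.com"},        # shares a name -> MitM-vulnerable
    "192.0.2.20": {"shop.example.net"},   # no shared name
}

exposed_names = set()
for cert in exposed_certs:
    exposed_names |= cert["names"]

vulnerable = [host for host, names in trusted_hosts.items()
              if names & exposed_names]
print(vulnerable)    # ['192.0.2.10']
\end{verbatim}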
\subsection{Numeric types}
All numeric types support
\mintinline{python}{x + y},
\mintinline{python}{x - y},
\mintinline{python}{x * y},
\mintinline{python}{x / y} (true division),
\mintinline{python}{abs(x)} and
\mintinline{python}{x ** y} (power);
the floored quotient \mintinline{python}{x // y} and the remainder
\mintinline{python}{x % y} are not defined for complex numbers.

\input{sections/built-in-types-numerics-int}
\input{sections/built-in-types-numerics-float}
\input{sections/built-in-types-numerics-complex}
%
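For illustration, a brief interactive-style example of these operations
(the values CPython prints are shown as comments):
\begin{minted}{python}
x, y = 7, 3
x + y           # 10
x - y           # 4
x * y           # 21
x / y           # 2.3333333333333335  (true division)
x // y          # 2   (floored quotient)
x % y           # 1
abs(-x)         # 7
x ** y          # 343
(1 + 2j) ** 2   # (-3+4j); complex supports ** but not // or %
\end{minted}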
\chapter{Linear Multistep Methods}
\begin{intro}
  In the previous methods we obtained the value at the next time step
  always by using \emph{one} initial value at the beginning of the
  current time interval, possibly with the help of intermediate steps.
  These methods are often said to require more computational work than
  methods which use several previous points, the argument being that
  the function values at those points have already been computed. Such
  methods using values of several time steps in the past are called
  multistep methods. They are constructed such that using more steps
  yields a method of higher order.

  We will begin this chapter by introducing some of the formulas.
  Afterwards, we will study their stability and convergence properties.
\end{intro}

\begin{example}[Adams-Moulton formulas]
  \label{ex:lmm:2}
  \index{Adams-Moulton methods}
  Basically, there are two construction principles for multistep
  methods: quadrature and numerical differentiation. We postpone the
  latter to example~\ref{ex:lmm:3} and deal with the former for now.
  As a first example we choose the class of Adams-Moulton methods, for
  which the integral from the point $t_{k-1}$ to the point $t_{k}$ is
  approximated by a quadrature based on the points $t_{k-\lmms}$ to
  $t_k$, hence
  \begin{gather}
    \label{eq:lmm:16}
    y_k = y_{k-1} + \sum_{r=0}^\lmms f_{k-r} \int_{t_{k-1}}^{t_k} L_r(t) \dt,
  \end{gather}
  where $f_j$ denotes the function value $f(t_j, y_j)$ and $L_r(t)$ the
  Lagrange basis polynomial associated with the node $t_{k-r}$ with
  respect to the points $t_{k-\lmms},\dots,t_k$. This is shown in
  Figure~\ref{fig:lmm:adams-moulton}.
  \begin{figure}[tbp]
    \begin{center}
      \includegraphics[width=.9\textwidth]{fig/adams-moulton.tikz}
    \end{center}
    \caption{The quadrature of Adams-Moulton formulas: the integration
      interval is marked by the wavy line at the end. The support
      points of the quadrature are stated under the line.}
    \label{fig:lmm:adams-moulton}
  \end{figure}
  Since the integral involves the point being computed itself, these
  methods are implicit. The first of these are
  \input{definitions/adams-moulton}
\end{example}

\begin{example}[Adams-Bashforth formulas]
  \label{ex:lmm:1}
  \index{Adams-Bashforth methods}
  With the same principle we obtain explicit methods by omitting the
  point in time $t_k$ in the definition of the interpolation
  polynomial. See Figure~\ref{fig:lmm:adams-bashforth}.
  \begin{figure}[tbp]
    \begin{center}
      \includegraphics[width=.9\textwidth]{fig/adams-bashforth.tikz}
    \end{center}
    \caption{The quadrature of Adams-Bashforth formulas: the
      integration interval is marked by the wavy line at the end. The
      support points of the quadrature are stated under the line.}
    \label{fig:lmm:adams-bashforth}
  \end{figure}
  This yields quadrature formulas of the form
  \begin{gather}
    \label{eq:lmm:17}
    y_k = y_{k-1} + \sum_{r=1}^\lmms f_{k-r} \int_{t_{k-1}}^{t_k} L_r(t) \dt.
  \end{gather}
  Again, we list the first few:
  \input{definitions/adams-bashforth}
\end{example}

\begin{example}
  \label{ex:lmm:3}
  \index{BDF methods}
  Backward differentiation formulas (BDF) are likewise based on
  Lagrange interpolation at the points $t_{k-\lmms}$ to $t_k$. In
  contrast to the Adams formulas they do not use quadrature for the
  right hand side, but rather the derivative of the interpolation
  polynomial at the point $t_k$. Using Lagrange interpolation
  polynomials $L_i(t)$, we let
  \begin{gather*}
    y(t) = \sum_{r=0}^\lmms y_{n-r} L_{n-r}(t),
  \end{gather*}
  where $y_n$ is still to be determined.
  Now we assume that $y$ solves the ODE at the point $t_n$, hence
  \begin{gather*}
    y'(t_n) = f(t_n, y_n) = \sum_{r=0}^\lmms y_{n-r} L'_{n-r}(t_n).
  \end{gather*}
  This yields the following schemes:
  \input{definitions/bdf}
  For an example of how to derive these schemes, see the appendix.
\end{example}

\begin{remark}
  We know from introductory numerical analysis that numerical
  differentiation and extrapolation (the evaluation of interpolation
  polynomials outside of the interval spanned by the interpolation
  points) are not stable. Therefore, we expect stability problems for
  the Adams-Bashforth and BDF methods.

  Moreover, we remember that Lagrange interpolation with equidistant
  support points is unstable for high-degree polynomials. Therefore, we
  also expect that all methods above perform well only for moderate
  order.
\end{remark}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Definition and consistency of LMM}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\input{definitions/lmm}

\begin{remark}
  \index{step size!constant}
  The LMM was defined for constant step size $h$. In principle it is
  possible to implement the method with a variable step size, but we
  restrict ourselves to the constant case. Notes on step size control
  can be found later in this chapter.
\end{remark}

\begin{remark}
  One-step methods were always denoted by describing how to compute
  $y_1$ from $y_0$. Here, the notation becomes more complicated, but
  sometimes we consider only $y_s$ computed from $y_0,\dots,y_{s-1}$,
  implying the same rules for $y_k$ computed from
  $y_{k-\lmms},\dots,y_{k-1}$.
\end{remark}

\input{definitions/lmm-errors}

\begin{lemma}
  Consider the differential equation
  \begin{gather*}
    y' = f(t,y) \qquad y(t_0) = y_0
  \end{gather*}
  where $f$ is continuously differentiable and $y(t)$ is the exact
  solution. For the local error we obtain
  \begin{gather}
    y(t_k)-y_k =
    \left(
      \alpha_0 \identity - h\beta_0 \frac{\partial f}{\partial y}(t_k,\eta)
    \right)^{-1} (L_h y)(t_k).
  \end{gather}
  Here $\eta$ is a value between $y(t_k)$ and $y_k$ if $f$ is a scalar
  function. If $f$ is multidimensional, the matrix
  $\frac{\partial f}{\partial y}(t_k,\eta)$ is the Jacobian matrix,
  whose rows are evaluated at possibly different intermediate points
  between $y(t_k)$ and $y_k$.
\end{lemma}

\begin{proof}
  Considering the local error, we can assume exact initial values and
  can therefore transform equation~\eqref{eq:lmm:4} into
  \begin{gather*}
    \alpha_\lmms y_k + \sum\limits_{r=1}^\lmms \alpha_{\lmms-r} y(t_{k-r})
    = h \left( \beta_\lmms f_k
      + \sum\limits_{r=1}^\lmms \beta_{\lmms-r} f_{k-r} \right)
  \end{gather*}
  We transform further:
  \begin{multline*}
    \sum\limits_{r=0}^{\lmms}
    \left( \alpha_r y(t_{k-r}) - h \beta_r f(t_{k-r},y(t_{k-r})) \right) \\
    - \alpha_0 y(t_k) + h \beta_0 f(t_k, y(t_k))
    + \alpha_0 y_k - h \beta_0 f(t_k,y_k) = 0.
  \end{multline*}
  We now insert equation~\eqref{eq:lmm:9}, which results in
  \begin{gather*}
    (L_h y)(t_k)
    = \alpha_0 \left( y(t_k) - y_k \right)
    - h \beta_0 \left( f(t_k,y(t_k)) - f(t_k,y_k) \right) \\
    = \left( y(t_k) - y_k \right)
    \left( \alpha_0 \identity
      - h \beta_0 \frac{f(t_k,y(t_k)) - f(t_k,y_k)}{y(t_k) - y_k} \right).
  \end{gather*}
  By application of the mean value theorem and a subsequent
  transformation we obtain the statement of the lemma.
\end{proof}
% \cite[Lemma 2.2, p. 369]{HairerNorsettWanner93}
\input{definitions/lmm-consistency}

\input{theorems/lmm-bramble-hilbert}

\begin{proof}
  We start with the Taylor expansion around $t_k$ of a solution $u$ of
  the ODE and of the corresponding right hand side $f$, where, unlike
  usual, we insert $f=u'$:
  \begin{alignat*}{2}
    u(t) &= \sum_{i=0}^p \frac{u^{(i)}(t_k)}{i!}(t-t_k)^i
    + \frac{u^{(p+1)}(\xi)}{(p+1)!}(t-t_k)^{p+1}
    &=:& \phi(t) + r_u(t) \\
    f\bigl(t,u(t)\bigr) &= \sum_{i=1}^p \frac{u^{(i)}(t_k)}{(i-1)!}(t-t_k)^{i-1}
    + \frac{u^{(p+1)}(\xi)}{p!}(t-t_k)^{p}
    &=:& \phi'(t) + r_f(t),
  \end{alignat*}
  with the Taylor polynomial $\phi(t)$ of degree $p$ and the remainders
  $r_u(t)$ and $r_f(t)$. From this we calculate:
  \begin{align*}
    L_h u(t_k) =& \sum_{r=0}^\lmms \alpha_{\lmms-r} \phi(t_{k-r})
    - h \sum_{r=0}^\lmms \beta_{\lmms-r} \phi'(t_{k-r}) \\
    &+ \sum_{r=0}^\lmms \alpha_{\lmms-r} r_u(t_{k-r})
    - h \sum_{r=0}^\lmms \beta_{\lmms-r} r_f(t_{k-r}).
  \end{align*}
  Since $t_{k-r}-t_k = -r h$, the first row equals a polynomial
  $\psi(h)$ in $h$ of degree $p$. For the second row we insert the
  remainder estimate
  $r_u(t) = \mathcal O((t-t_k)^{p+1}) = \mathcal O(h)\, r_f(t)$ and get:
  \begin{gather}
    \label{eq:lmm:15}
    L_h u(t_k) = L_h \phi(t_k) + \mathcal O(h^{p+1})
    = \psi(h) + \mathcal O(h^{p+1}).
  \end{gather}
  According to the definition of the truncation error, this term has to
  be of order $p+1$ for the method to be of order $p$. However, $\psi$
  is a polynomial of degree $p$ in $h$. This can only hold true if
  $L_h\phi=\psi\equiv 0$. On the other hand, if $\psi\equiv 0$, then
  $\tau_h(t_k)$ automatically is of order $p$. Since $u$ is the
  solution for an arbitrary right hand side, this condition has to be
  satisfied for all Taylor polynomials $\phi$ of degree $p$.
\end{proof}

\begin{Theorem}{lmm-consistency}
  \index{step size!constant}
  A LMM with constant step size is consistent of order $p$ if and only
  if
  \begin{gather}
    \label{eq:lmm:12}
    \begin{split}
      \sum_{r=0}^\lmms \alpha_{r} &= 0, \\
      \sum_{r=0}^\lmms \bigl(\alpha_{r}r^q - q \beta_{r} r^{q-1}\bigr) &= 0,
      \qquad q = 1,\dots,p
    \end{split}
  \end{gather}
\end{Theorem}

\begin{proof}
  According to lemma~\ref{Lemma:lmm-bramble-hilbert} it is sufficient
  to show that~\eqref{eq:lmm:12} is equivalent to $L_h \phi_q=0$ for
  polynomials $\phi_q$ of degree $q\le p$. Due to the linearity of the
  method it is, however, sufficient to show this for a basis of the
  space of polynomials of degree $p$. For that we choose the monomial
  basis of the form
  \begin{gather*}
    \pi_q(t) = \left(\frac{t-t_{k-\lmms}}h\right)^q,\qquad q=0,\dots,p.
  \end{gather*}
  For these, $\pi_q(t_{k-r}) = (\lmms-r)^q$ holds. Now we see that the
  first condition is $L_h\pi_0 = 0$ (here $\pi_0'\equiv0$) and the
  second condition is $L_h\pi_q = 0$.
\end{proof}

\begin{todo}
  Example of a consistent LMM which does not converge.
\end{todo}

\begin{remark}
  As shown in a homework problem, a consistent LMM is not necessarily
  convergent. To understand this behavior and to develop criteria for
  convergence, we need to take a detour into the theory of difference
  equations.
\end{remark}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Properties of difference equations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{intro}
  The stability of LMM can be understood by employing the fairly old
  theory of difference equations.
  In order to keep the presentation simple in this section, we use a
  different notation for numbering the indices in the equations.
  Nevertheless, the coefficients of the characteristic polynomial are
  the same as for the LMM.
\end{intro}

\input{definitions/difference-equation}

\begin{Lemma}{lmm:1}
  The solutions of the equation~\eqref{eq:lmm:3} with $y_n\in \R$ or
  $y_n\in \C$ form a vector space of dimension $\lmms$.
\end{Lemma}

\begin{proof}
  Since the equation~\eqref{eq:lmm:3} is linear and homogeneous, it is
  obvious that if two solution sequences $\{y^{(1)}\}$ and
  $\{y^{(2)}\}$ satisfy the equation, sums of multiples of them satisfy
  it too. As soon as the initial values $y_0$ to $y_{\lmms-1}$ are
  chosen, all other sequence members are uniquely defined. Moreover it
  holds
  \begin{gather*}
    y_0=y_1=\dots=y_{\lmms-1}=0
    \quad\Longrightarrow\quad
    y_n = 0, \;n \ge 0.
  \end{gather*}
  Therefore it is sufficient to consider the first $\lmms$ values. If
  they are linearly independent, then so are the overall sequences, and
  vice versa. Since the initial values form an $\lmms$-dimensional
  vector space, so does the space of solutions.
\end{proof}

\input{theorems/difference-equation-solutions}

\begin{proof}
  Inserting the solution $y_n = \xi^n$ into the difference equation
  results in
  \begin{gather*}
    \sum_{r=0}^\lmms \alpha_{r} \xi^{n+r}
    = \xi^{n} \sum_{r=0}^\lmms \alpha_{r} \xi^{r}
    = \xi^{n} \chi(\xi) = 0.
  \end{gather*}
\end{proof}

\input{theorems/difference-equation-basis}

\begin{proof}
  First we observe that the sum of the multiplicities of the roots
  equals the degree of the polynomial:
  \begin{gather*}
    \lmms = \sum_{i=1}^\iota \nu_i.
  \end{gather*}
  Moreover, we know from Lemma~\ref{Lemma:lmm:1} that $\lmms$ is the
  dimension of the solution space.

  We show that the sequences $\{y^{(i,k)}_n\}$ are linearly
  independent. This is clear for sequences belonging to the same root
  $\xi_i$, since their polynomial factors are linearly independent. It
  is also clear for sequences belonging to different roots, because for
  $n\to\infty$ the exponential factors dominate the influence of the
  polynomials.

  It remains to show that the sequences $\{y^{(i,k)}_n\}$ in fact are
  solutions of the difference equation. For $k=0$ we have proven this
  already in lemma~\ref{Lemma:difference-equation-solutions}. We prove
  the fact here for a double root $\xi_i$; the principle for higher
  order roots should then be clear. Equation~\eqref{eq:lmm:3} applied
  to the sequence $\{n \xi_i^n\}$ results in
  \begin{align*}
    \sum_{r=0}^{\lmms} \alpha_r (n+r) \xi_i^{n+r}
    &= n \xi_i^n \sum_{r=0}^{\lmms} \alpha_r \xi_i^{r}
    + \xi_i^{n+1} \sum_{r=1}^{\lmms} \alpha_r r \xi_i^{r-1} \\
    &= n \xi_i^n \rho(\xi_i) + \xi_i^{n+1} \rho'(\xi_i) = 0.
  \end{align*}
  Here the term with $\alpha_0$ vanishes in the second sum, because it
  is multiplied by $r=0$. Finally, $\rho(\xi_i) = \rho'(\xi_i) = 0$
  because $\xi_i$ is a multiple root.
\end{proof}

\input{theorems/root-test}

\begin{proof}
  According to theorem~\ref{Theorem:difference-equation-basis} we can
  write all solutions as linear combinations of the sequences
  $y^{(i,k)}$ in equation~\eqref{eq:lmm:6}. Therefore,
  \begin{enumerate}
  \item all solutions belonging to $|\xi_i|<1$ converge to zero for
    $n\to\infty$,
  \item all solutions belonging to $|\xi_i|>1$ diverge to infinity for
    $n\to\infty$,
  \item all solutions belonging to $|\xi_i|=1$ stay bounded for
    $n\to\infty$ if and only if $\xi_i$ is a simple root.
  \end{enumerate}
  This proves the statement of the theorem.
\end{proof}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Stability and convergence}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{remark}
  In contrast to one-step methods, the convergence of multistep methods
  does not follow directly from the consistency of the method, even if
  the right hand side of the differential equation satisfies the
  Lipschitz condition~\eqref{eq:IVP:1}. Analogously to A-stability, we
  will discuss this by means of a simple model problem and deduce
  stability conditions.
\end{remark}

\begin{remark}
  In the following we investigate the solution at a fixed point in time
  $t$ with a shrinking step size $h$. To this end we choose $n$ steps
  of step size $h = t/n$ and let $n$ tend to infinity.
\end{remark}

\input{definitions/lmm-stability}

\input{theorems/lmm-stability}

\begin{proof}
  The application of the LMM to the equation~\eqref{eq:lmm:1} results
  in the difference equation
  \begin{gather*}
    \sum_{r=0}^{\lmms} \alpha_{\lmms-r} y_{n-r} = 0.
  \end{gather*}
  Now we have to prove that the solutions for fixed $t = h n$ stay
  bounded as $h\to 0$. But we also see that the above equation does not
  contain $h$. Therefore we have to examine whether the solutions $y_n$
  stay bounded for $n\to \infty$. By reindexing the summation we obtain
  a difference equation of the form~\eqref{eq:lmm:3}. The statement of
  the theorem then follows from corollary~\ref{Corollary:root-test}.
\end{proof}

\begin{Corollary}{adams-stability}
  Adams-Bashforth and Adams-Moulton methods are stable.
\end{Corollary}

\begin{proof}
  For all of these methods the first generating polynomial is
  $\rho(x) = x^\lmms-x^{\lmms-1}$. It has the simple root $\xi_1 = 1$
  and the $(\lmms-1)$-fold root $0$.
\end{proof}

\begin{Theorem}{BDF-stability}
  The BDF methods are stable for $\lmms \le 6$ and not stable for
  $\lmms \ge 7$.
\end{Theorem}

\input{definitions/lmm-convergence}

\input{theorems/lmm-one-step}

\begin{proof}
  From the general form of the LMM we obtain
  \begin{gather*}
    \frac1{\alpha_s} \sum_{r=0}^\lmms \alpha_{\lmms-r} y_{k-r}
    = \frac{h}{\alpha_s}
    \left( \sum_{r=1}^{\lmms} \beta_{\lmms-r} f_{k-r} + \beta_s f_k \right).
  \end{gather*}
  We rewrite this as
  \begin{gather*}
    y_k = -\sum_{r=1}^{\lmms} \alpha'_{\lmms-r} y_{k-r}
    + h\psi_h(t_{k-1}, Y_{k-1}),
  \end{gather*}
  where in the computation of $f_k$ the value of $y_k$ is given
  implicitly by this formula. It remains to realize that this is the
  first set of $d$ equations in~\eqref{eq:lmm-one-step:1}, and that the
  remaining ones are just shifting $y_i$ to $y_{i+1}$.
\end{proof}

\input{theorems/lmm-one-consistency}

\begin{proof}
  The first component of $Y_k - \widehat Y_k$ is the local error of
  step $k$, which is of order $h^{p+1}$ by the assumption. The other
  components vanish by the definition of the method.
\end{proof}

\input{theorems/lmm-one-stability}

\begin{proof}
  We notice that $\widehat\rho(x) = \sum \alpha'_{s-r} x^r$ is the
  characteristic polynomial of the matrix $A$ and thus its eigenvalues
  are the roots of $\widehat\rho(x)$, which has the same roots as the
  generating polynomial $\rho(x)$. By the root test, we know that
  simple roots, which correspond to irreducible blocks of dimension
  one, have modulus at most one. Furthermore, every Jordan block of
  dimension greater than one corresponds to a multiple root, which by
  assumption has modulus strictly less than one.
  It is easy to see that such a block admits a modified canonical form
  \begin{gather*}
    J_i =
    \begin{pmatrix}
      \lambda_i & 1- \abs{\lambda_i}\\
      & \lambda_i &\ddots\\
      &&\ddots & 1- \abs{\lambda_i}\\
      &&&\lambda_i
    \end{pmatrix}.
  \end{gather*}
  Thus, the canonical form $J = T^{-1}AT$ has norm
  $\norm{J}_{\infty} \le 1$. If we define the norm
  \begin{gather*}
    \norm{x} = \norm{(T^{-1}\otimes \identity)x}_\infty,
  \end{gather*}
  we obtain the result by
  \begin{multline*}
    \norm{(A\otimes \identity)x}
    = \norm{(T^{-1}\otimes \identity)(A\otimes \identity)x}_\infty
    = \norm{(J\otimes \identity)(T^{-1}\otimes \identity)x}_\infty \\
    \le \norm{(T^{-1}\otimes \identity)x}_\infty = \norm{x}.
  \end{multline*}
\end{proof}

\input{theorems/lmm-convergence}

\begin{proof}
  We reduce the proof to convergence of a one-step method with
  \begin{gather}
    \label{eq:lmm:18}
    Y_k = G(Y_{k-1})
    = (A\otimes \identity) Y_{k-1} + h \verfahren_h(t_{k-1}, Y_{k-1}).
  \end{gather}
  Let $Y_{k-1}$ and $Z_{k-1}$ be two initial values for the interval
  $I_k$. By the previous lemma, we have in the norm defined there, for
  sufficiently small $h$, and assuming a Lipschitz constant $L_h$ for
  $\verfahren_h$:
  \begin{gather}
    \label{eq:lmm:19}
    \norm{G(Y_{k-1})-G(Z_{k-1})} \le (1+h L_h) \norm{Y_{k-1}-Z_{k-1}}.
  \end{gather}
  Thus, the local error $\eta_k = U_k - \widehat Y_k$ at step $k$,
  which by Lemma~\ref{Lemma:lmm-one-consistency} is bounded by
  $M h^{p+1}$, accumulates until step $n$ to at most
  $M h^{p+1}(1+h L_h)^{n-k}$. We have:
  \begin{align*}
    \norm{U_1 - Y_1} &\le (1+h L_h)\norm{U_0-Y_0} + M h^{p+1} \\
    \norm{U_2 - Y_2} &\le (1+h L_h)^2\norm{U_0-Y_0}
    + M h^{p+1} \bigl(1+ (1+h L_h)\bigr)\\
    \norm{U_3 - Y_3} &\le (1+h L_h)^3\norm{U_0-Y_0}
    + M h^{p+1} \bigl(1+ (1+h L_h)+ (1+h L_h)^2\bigr)\\
    &\;\;\vdots\\
    \norm{U_n-Y_n} &\le e^{n h L_h}\norm{U_0-Y_0}
    + \frac{M h^p}{L_h}\bigl(e^{n h L_h} - 1\bigr).
  \end{align*}
\end{proof}

\subsection{Starting procedures}

\begin{intro}
  In contrast to one-step methods, where the numerical solution is
  obtained solely from the differential equation and the initial value,
  multistep methods require more than one start value. An LMM with $s$
  steps requires $s$ known start values $y_{k-s}, \dots, y_{k-1}$. In
  general, they are not provided by the IVP itself. Thus, general LMM
  decompose into two parts:
  \begin{itemize}
  \item a \emph{starting phase} where the start values are computed in
    a suitable way and
  \item a \emph{run phase} where the LMM is executed.
  \end{itemize}
  It is crucial that the method of the starting phase provides a
  suitable order corresponding to the LMM of the run phase; recall
  Definition \ref{Definition:lmm-convergence}. Moreover, it should have
  properties analogous to the LMM, such as being explicit or implicit,
  or being applicable to stiff problems.
  % We now consider different starting procedures for an implicit LMM with
  % convergence order $p$. According to Definition \ref{Definition:lmm-convergence}
  % the starting values are required to have the same convergence order.
  Possible choices for the starting phase include multistep methods
  with variable order and one-step methods.
\end{intro}

\begin{example}[Self starter]
  A 2-step BDF method requires $y_0$ and $y_1$ to be known. $y_0$ is
  given by the initial value while $y_1$ is unknown so far. To
  guarantee that the method has order 2, $y_1$ needs to be locally
  accurate of order at least 2,
  \begin{align}\label{eq:lmm_1BDFstarter}
    |u(t_1)-y_1| \leq c_0 h^2.
  \end{align}
  This is ensured, for example, by one step of the 1-step BDF method.
  However, starting an LMM with $s>2$ steps by a first-order method and
  then successively increasing the order until $s$ is reached does not
  provide the desired global order. That is due to the fact that the
  first step limits the overall convergence order to 2,
  compare~\eqref{eq:lmm_1BDFstarter}. Nevertheless, self starters are
  often used in practice.
\end{example}
% In this case local
% error estimates are used to bound the errors of the starting values and all
% approximations of the run phase by reducing the step sizes,
% %. Moreover, also in
% %the run phase the step sizes are controlled using local error estimates,
% see the
% discussion on step size control in Section \ref{section:step_size_control}.

\begin{example}[Runge-Kutta starter]
  \label{Example:RKstarter}
  One can use Runge-Kutta methods to start LMM. Since only a fixed
  number of starting steps are performed, the local order of the
  Runge-Kutta approximation is crucial. For an implicit LMM with
  convergence order $p$ and step size $h$ one could use an RK method
  with consistency order $p-1$ with the same step size $h$.

  Consider a 3-step BDF method. Thus, besides $y_0$, we need start
  values $y_1, y_2$ with errors less than $c_0 h^3$. They can be
  computed by RK methods of consistency order $2$, for example by two
  steps of the 1-stage Gau\ss\ collocation method with step size $h$,
  since it has consistency order $2s=2$; see theorem
  \ref{Theorem:gauss-consistency}.
\end{example}

\begin{example}[Continuous Runge-Kutta starter]
  Another option is to use continuous Runge-Kutta methods and to
  evaluate the continuous approximation to obtain the required starting
  values. In contrast to Example \ref{Example:RKstarter}, one could
  also use the continuous polynomial approximation of Gau\ss\
  collocation to start a 3-step BDF method. One step with step size
  $2h$ of a 2-stage Gau\ss\ collocation method would give a polynomial
  of degree $2$ which is then evaluated at $t_1=t_0+h$ and $t_2=t_1+h$
  to obtain $y_1, y_2$. According to Theorem
  \ref{Theorem:collocation-continuous} $y_1, y_2$ have the appropriate
  order.
\end{example}

\begin{remark}
  In practice it is not the order of a procedure that is crucial, but
  rather the fact that the errors of all approximations (the start
  values and all approximations of the run phase) are bounded by the
  user-given tolerance; compare Section
  \ref{section:step_size_control}. Thus, the step sizes of all steps
  are controlled using local error estimates. Hence, self starting
  procedures usually start with very small step sizes and increase them
  successively. Due to their higher orders, RK starters can usually use
  moderate step sizes from the beginning. Generally, LMM are applied
  with variable step sizes and orders in practice (see e.g. Exercise
  7.2).
\end{remark}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LMM and stiff problems}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{Definition*}{lmm-stability-region}{A-stability of LMM}
  \index{stability region!of a LMM}
  The linear model difference equation
  \begin{gather}
    \label{eq:lmm:7}
    \sum_{r=0}^{\lmms}
    \bigl(\alpha_{\lmms-r} - z \beta_{\lmms-r}\bigr) y_{n-r} = 0
  \end{gather}
  is obtained by applying an LMM to the model equation $u' = \lambda u$
  and inserting $z=h\lambda$.
  The \define{stability region} of an LMM is the set of points
  $z\in \C$ for which all solution sequences $\{y_n\}$ of the
  equation~\eqref{eq:lmm:7} stay bounded for $n\to\infty$. An LMM is
  called \define{A-stable} if the stability region contains the left
  half-plane of $\C$.
\end{Definition*}

\begin{Definition}{lmm-stability-polynomial}
  The stability polynomial of an LMM is obtained by inserting
  $y_n = x^n$ into the linear model difference equation to obtain
  \begin{gather}
    \label{eq:lmm:8}
    r_z(x) = \sum_{r=0}^{\lmms}
    \bigl(\alpha_{\lmms-r} - z \beta_{\lmms-r}\bigr) x^{\lmms-r}.
  \end{gather}
\end{Definition}

\begin{remark}
  Instead of the simple amplification function $r(z)$ of the one-step
  methods, we obtain here a function of two variables: the point $z$
  for which we want to show stability, and the artificial variable $x$
  from the analysis of the method.
\end{remark}

\begin{Lemma}{lmm-stability}
  Let $\{\xi_1(z),\dots,\xi_\lmms(z)\}$ be the set of roots of the
  stability polynomial $r_z(x)$ as functions of $z$. A point $z\in \C$
  is in the stability region of an LMM if these roots satisfy the root
  test in corollary~\ref{Corollary:root-test}.
\end{Lemma}

\begin{proof}
  The proof is analogous to that of
  theorem~\ref{Theorem:lmm-stability}.
\end{proof}

\begin{Theorem*}{dahlquist2}{2nd Dahlquist barrier}
  \defindex{Dahlquist barrier (second)}
  There is no A-stable LMM of order $p>2$. Among the A-stable LMM of
  order 2, the trapezoidal rule (Crank-Nicolson) has the smallest error
  constant.
\end{Theorem*}

\subsection{Relaxed A-stability}

\begin{intro}
  Motivated by the fact that there are no higher order A-stable LMM,
  and by highly dissipative problems, people have introduced relaxed
  concepts of A-stability.
\end{intro}

\input{definitions/aa-stability}

\begin{remark}
  The introduction of A(0)-stability is motivated by linear systems of
  the form $u'=-Au$ with a symmetric, positive definite matrix $A$. In
  fact, one only requires stability on the real axis there, because all
  eigenvalues are real. Thus, any positive angle $\alpha$ is
  sufficient. Similarly, A($\alpha$)-stable LMM are suitable for linear
  problems in which high-frequency oscillations ($\Im\lambda$ large)
  decay fast ($-\Re\lambda$ large). For the application to nonlinear
  problems, one considers in all cases the corresponding properties of
  the Jacobian matrix $\partial_u f$.
\end{remark}

\begin{example}
  The stability regions of the stable BDF methods are shown in
  Figure~\ref{fig:bdf-stability}. The corresponding values for
  A($\alpha$)-stability and stiff stability are listed in Table
  \ref{tab:bdf-stability}.
\end{example}

\begin{figure}[tp]
  \centering
  \includegraphics[width=.45\textwidth]{fig/stability-bdf.png}
  \hfill
  \includegraphics[width=.45\textwidth]{fig/stability-bdf-zoom.png}
  \caption{Boundaries of the stability regions of BDF1 to BDF6. The
    unstable region lies to the right of the origin.
    Zoomed view on the right.}
  \label{fig:bdf-stability}
\end{figure}

\begin{table}[tp]
  \centering
  \begin{tabular}{c|cccccc}
    $k$ & 1 & 2 & 3 & 4 & 5 & 6 \\\hline
    $\alpha$ & 90$^\circ$ & 90$^\circ$& 86.03$^\circ$ & 73.35$^\circ$&
    51.84$^\circ$ & 17.84$^\circ$ \\
    $D$ & 0 & 0 & 0.083& 0.667& 2.327& 6.075
  \end{tabular}
  \caption{Values for A($\alpha$)- and stiff stability for BDF methods
    of order $k$.}
  \label{tab:bdf-stability}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Predictor-corrector schemes}

\begin{Definition*}{predictor-corrector}{Predictor-corrector methods}
  Assume a pair of time stepping schemes, one explicit and one
  implicit,
  \begin{align*}
    \hat y_{k} &= \hat \verfahren_p(y_{k-1}) \\
    y_k &= \verfahren_c(y_{k-1},y_{k}).
  \end{align*}
  Then we can use $\hat y_k$ as an initial value for the Newton
  iteration for $y_k$. In an extreme case, we let
  \begin{gather*}
    y_k = \verfahren_c(y_{k-1},\hat y_{k}),
  \end{gather*}
  without any further iteration.
\end{Definition*}

\begin{remark}
  Predictor-corrector methods were developed mainly around
  Adams-Moulton and Adams-Bashforth methods, since the implicit ones
  have much smaller error constants. Given that these methods offer no
  considerable advantages compared to Runge-Kutta methods, while their
  stability properties and implementation are weak points, we omit
  their discussion.

  A simple predictor for BDF methods can be obtained, since they are
  based on an interpolating polynomial. Thus, we simply extrapolate
  this polynomial to the next point in time.
\end{remark}

\begin{example}
  While the predictor-corrector idea sounds reasonable, we have to be
  careful with stiff problems, the original reason for using implicit
  methods. Take again our favorite IVP
  \begin{gather*}
    u' = \lambda u, \qquad u(0) = 1.
  \end{gather*}
  We apply the BDF(1) scheme, namely the implicit Euler method, with
  step size 1. According to its stability
  function~\eqref{eq:impl:stabil:impleuler}, we obtain
  \begin{gather*}
    y_1 = \frac1{1-\lambda}.
  \end{gather*}
  Hence, the interpolating polynomial is
  \begin{gather*}
    y(t) = (1-t) + \frac1{1-\lambda} t = 1 + \frac{\lambda}{1-\lambda}t.
  \end{gather*}
  For the mildly stiff problem $\lambda = -3$, we obtain
  \begin{gather*}
    y_1 = 0.25, \qquad y_2 = 0.0625, \qquad \hat y_2 = y(2) = -0.5.
  \end{gather*}
  Thus, the extrapolated value is already a much worse initial value
  for a Newton iteration than the value from the previous time step.
  While this example was specifically chosen to exhibit such a failure,
  it does show that extrapolation for stiff problems has its pitfalls.
  Here, we end up with a time step restriction comparable to the
  stability condition of an explicit method.
\end{example}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "notes"
%%% End:
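\begin{remark}
  The numbers in the last example are quickly verified, for instance
  with the following short Python sketch, which reproduces
  $y_1 = 0.25$, $y_2 = 0.0625$ and $\hat y_2 = -0.5$.
\begin{verbatim}
# BDF(1) predictor-corrector example: u' = lam*u, u(0) = 1, step size 1.
lam = -3.0
h = 1.0

y0 = 1.0
y1 = y0 / (1 - h * lam)     # one implicit Euler (BDF(1)) step
y2 = y1 / (1 - h * lam)     # second implicit Euler step

# Linear interpolant through (t0, y0), (t1, y1), extrapolated to t2 = 2.
y2_hat = y1 + (y1 - y0) * (2 - 1)

print(y1, y2, y2_hat)       # 0.25 0.0625 -0.5
\end{verbatim}
\end{remark}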
\clearpage
\section{Feb $26^{th}$ discussion}
(Prepared by Jai Arora)
\vspace{0.3cm}
\begin{itemize}
    \item \textbf{Sonu Mehta} : When we eliminate a common subexpression, would we still add a new mapping? For example, if the statement $s$ in consideration is $d := y + z$, and $set_{in}(s)$ has the mapping $(x, y+z)$, then will we add a mapping for $(d,y+z)$ in $set_{out}(s)$?

    \textbf{A:} Yes, we will add a mapping for $(d,y+z)$ after the elimination, despite $y+z$ already being stored in $x$, as the mapping for $x$ may get killed off at some later point in the analysis, and we still want to keep the information that $d$ contains the value of $y+z$.

    \item \textbf{Vaibhav} : Why would starting from a more aggressive value in DFA give a more precise solution in the fixed point iterative algorithm?

    \textbf{A:} This is because the semi-lattice of values is organized in such a way that if $x \leq y$, then it is always okay to replace $y$ with $x$ ($x$ is more relaxed than $y$).\\
    For example, in liveness analysis, we have that ${\tt true} \leqslant {\tt false}$, and the ${\tt true}$ value is more relaxed, as it only says that a variable $x$ ``may'' be live (figuring out whether a variable is definitely live is an undecidable problem). Hence starting from a lower value may give a less precise solution.

    \item \textbf{Vaibhav} : In the fixed point iterative algorithm for DFA analysis, if we have more than one program point that does not satisfy the DFA rules, would picking one over the other make a difference?

    \textbf{A:} Picking one program point over the other in the algorithm would not make a difference with respect to the final solution, but it can make a difference with respect to efficiency.

    \item \textbf{Arpit Saxena} : Consider a statement of the form $x := e$, where $x$ is dead just after the statement. In our discussion, we declared this statement \underline{dead code}, and then removed it. But what if the expression $e$ has some side-effects as well, like a function call which modifies the memory?

    \textbf{A:} So far we haven't considered function calls in our discussion, but such function calls might have to be handled separately. This can be dealt with in $2$ ways:
    \begin{itemize}
        \item Do not remove statements with function calls (less precise and more conservative)
        \item The other way is to consider memory as another variable in the liveness analysis. Assume that memory is dead at the end of the ${\tt main}$ function, and then modify the transfer function accordingly (a more precise solution).
    \end{itemize}

    \item \textbf{Jai Arora} : Copy propagation is very similar to common subexpression elimination as a DFA analysis, and they both create opportunities for each other. Also, copy propagation focuses on a subset of all the expressions, so is it possible to modify the transfer function and combine both analyses?

    \textbf{A:} Yes, we can combine the two; in fact, common subexpression elimination subsumes copy propagation. Hence we can say that common subexpression elimination creates more opportunities for itself. But copy propagation is a much weaker and cheaper analysis from the compiler's point of view. Also, sometimes it may just be sufficient to use copy propagation, so it makes sense to keep both optimizations separate.

    \item \textbf{Anirudh Panigrahi} : How are function calls represented in the intermediate representation? Are they modelled as jumps to separate basic blocks?

    \textbf{A:} LLVM IR has support for function call abstractions.
    So a function in a language like C will have a corresponding function in the LLVM IR.

    \item \textbf{Jai Arora} : In Global Constant Propagation, if we keep track of more than one variable simultaneously, then we have a chance to get improvements in the time complexity of the algorithm, but we may get a precision advantage in this case. Is it possible to get a similar advantage in Liveness Analysis?

    \textbf{A:} Yes, we can have a more precise but more complex liveness analysis if we keep track of all the variables simultaneously. For example, consider a statement $s: x := y + z$, where $x$ and $y$ are dead just after the statement. Then we can do the analysis in $2$ ways:
    \begin{itemize}
        \item We simply use our previous transfer function, and say that $y$ is live just before the statement as $s$ uses $y$.
        \item We can also use the fact that $x$ and $y$ are both dead just after the statement, which would mean that $y$ is also dead before this statement, as the value computed into $x$ is never used.
    \end{itemize}
    Note that the second way makes the analysis more precise but even more complex. We do not need it, as we can run the original liveness analysis multiple times, interleaved with dead code elimination, and get the same effect.

    \item \textbf{Arpit Saxena} : CPUs are known to reorder instructions for efficiency reasons. Do compilers also do the same, or is it left to the CPU?

    \textbf{A:} Yes, compilers also reorder instructions (more examples in later modules). We reorder instructions only when the meaning of the program is preserved. CPUs use this technique to reduce the number of cycles by taking some unrelated computation and executing it in stall cycles.

    \item \textbf{Sonu Mehta} : How are pass-by-reference semantics modelled in IRs such as LLVM?

    \textbf{A:} References get converted to pointers at the LLVM IR level in this case.
\end{itemize}
\clearpage
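To make the liveness transfer function from the discussion concrete, here is a
minimal Python sketch for straight-line three-address code (an illustration
only; it ignores branches and function calls):
\begin{verbatim}
# Backward liveness for straight-line code:
#   live_in(s) = (live_out(s) - def(s)) | use(s)
def liveness(stmts, live_at_exit=()):
    live = set(live_at_exit)
    annotated = []
    for dest, args in reversed(stmts):              # ("x", ("y", "z")) is x := y + z
        annotated.append((dest, args, set(live)))   # variables live just after stmt
        live = (live - {dest}) | set(args)          # transfer function
    return list(reversed(annotated))

# The example from the discussion: x is dead right after x := y + z,
# so that assignment is dead code.
prog = [("x", ("y", "z")),    # x := y + z
        ("w", ("z", "z"))]    # w := z + z
for dest, args, live_after in liveness(prog, live_at_exit={"w"}):
    status = "dead code" if dest not in live_after else "live"
    print(dest, ":=", args, "| live after:", sorted(live_after), "|", status)
\end{verbatim}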
\documentclass[]{jocg}
\usepackage{amsmath,amsfonts}
\usepackage{amsthm}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{booktabs}
\usepackage{mathtools}
\usepackage[mathlines]{lineno}
\usepackage[]{graphicx}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}

\linenumbers
\setcounter{tocdepth}{1} % -- part,chapters,sections

\usepackage[
backend=biber,
style=nature,
sorting=ynt
]{biblatex}
\addbibresource{main.bib}

\newcommand{\RR}{\mathbb{R}}
\newcommand{\PP}{\mathcal{P}}
\newcommand{\EE}{\mathcal{E}}
\newcommand{\set}[1]{\{#1\}}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\abs}[1]{|#1|}
\newcommand{\chordarc}{{s_{\textrm{arc}}}}
\DeclareMathOperator{\diam}{\mathrm{diam}}
\DeclareMathOperator{\conv}{\mathcal{CH}}

\newtheorem{proposition}{Proposition}[section]
\theoremstyle{definition}
\newtheorem{definition}[proposition]{Definition}
\theoremstyle{remark}
\newtheorem{remark}[proposition]{Remark}

\title{%
  \MakeUppercase{Measuring polygonal niceness}%
  \thanks{%
    This paper constitutes the completion of the final project for AMS 545
    ``Computational Geometry''.
  }
}
\author{%
  Sharmila~Duppala%
  \thanks{%
    \affil{Department of Computer Science},
    \email{[email protected]}%
  }\, and David~Kraemer.%
  \thanks{%
    \affil{Department of Applied Mathematics},
    \email{[email protected]}%
  }
}

\begin{document}

\maketitle

\begin{abstract}
  The $\alpha$-fatness and chord-arc scores as measures of the
  ``niceness'' of polygons are developed and explored, with initial
  theoretical and empirical results.
\end{abstract}

\tableofcontents

\section{Introduction}

Convexity is from many perspectives the ``gold standard'' of niceness
for planar subsets, because it excludes large classes of pathological
sets altogether. Yet for establishing a quantitative measure of
niceness that matches intuitive or legal understandings, convexity is
both too restrictive and too general.

The relevant consideration at present revolves around drawing electoral
districts. The immense political consequences of favorable (or
unfavorable) maps have resulted in the development of creative classes
of polygonal regions utilized for this purpose. There exists a strong
civic prerogative for providing quantitative measures of niceness. In a
concurrent opinion in the 2004 suit \textit{Vieth v. Jubelirer} decided
by the U.S. Supreme Court, Justice Anthony Kennedy writes
\begin{quote}
  Because, in the case before us, we have no standard by which to
  measure the burden appellants claim has been imposed on their
  representational rights, appellants cannot establish that the alleged
  political classifications burden those same rights. Failing to show
  that the alleged classifications are unrelated to the aims of
  apportionment, appellants’ evidence at best demonstrates only that
  the legislature adopted political classifications. That describes no
  constitutional flaw, at least under the governing Fourteenth
  Amendment standard. \cite{2004vieth}
\end{quote}
Ought convexity to be considered such a standard? In practice,
convexity is too restrictive (e.g., for coastal or river borders) and
too general (e.g., a tiny vertical strip) for this purpose.

The aim of this paper is to propose several alternative measures to
determine the niceness of polygonal regions in a quantitatively
rigorous way. We introduce two classes of measures, the $\alpha$-score
and the chord-$f$ scores, develop some of their basic properties, and
show how they perform on simulated data.
The $\alpha$-fatness quantifies how much a given polygon resembles a ball (with respect to some norm). When $f$ measures the perimeter of a polygon, the chord-$f$ score quantifies the extent to which ``local nonconvexity'' affects the distribution of the total boundary. Both measures generally reward convex polygons, but punish certain convex ``offenders.'' Of course, there exist nonconvex polygons who score well on both measures as well. This paper proceeds as follows. Section 2 provides a theoretical overview for the $\alpha$-fatness and chord-$f$ scores and gives preliminary results on simple examples. Section 3 outlines the algorithms used to perform discretizations and compute the $\alpha$-fatness and chord-$f$ scores. Section 4 gives the results of numerical simulations for these measures on randomly generated ``typical'' polygons and provides data about the success of discretization. Section 5 includes a discussion of the main results of the paper and its implications for the electoral districting process. Source code for this project can be found at \url{https://github.com/DavidNKraemer/polygonal-niceness}. Documentation, cruft removal, and user interface improvements are presently evolving. \section{Theoretical background} The area of a subset $S \subseteq \RR^2$ of the plane is denoted by $\lambda(S)$. We will denote the boundary of $S$ by $\partial S$. For two points $x,y \in \RR^2$, the closed line segment with endpoints in $x$ and $y$ is denoted $[x,y]$. For $p \in [1, \infty]$, we define the $p$-ball as $B_{p}(x, \varepsilon) = \set{y \in \RR^2 : \norm{x-y}_p \leq \varepsilon}$ and we will only be interested in the closed case. Unless otherwise stated, we shall operate in the Euclidean world with $p = 2$. Throughout we shall assume that $P \subseteq \RR^2$ is a simple bounded closed polygon. The class of such polygons is given by $\PP$. Its perimeter is denoted by $\abs{\partial P}$. \subsection{The $\alpha$-fatness score} One approach to measuring the relative ``fatness'' of a polygon seeks to identify regions of the shape which behave highly unlike balls. \begin{definition} The $\alpha$-fatness score of a polygon $P$ is given by \begin{equation*} \alpha(P) = \inf\set{\frac{\lambda[P \cap B(x, \varepsilon)]}{\lambda[B(x,\varepsilon)]} : \varepsilon > 0, x \in P} \end{equation*} where we impose that the ball $B(x,\varepsilon)$ does not contain $P$. \label{def:alpha} \end{definition} The constraint that $P \not\subseteq B(x,\varepsilon)$ ensures definiteness: otherwise, let $\varepsilon \to \infty$ and the ratio $\frac{\lambda[P \cap B(x, \varepsilon)]}{\lambda[B(x, \varepsilon)]}$ always approaches $0$. An intuition for $\alpha(P)$ emerges from considering how it behaves on balls. \begin{proposition} For any $P$, $\alpha(P) \leq \frac{1}{4}$, and if $P = B(x_0, r)$ then we have equality. \label{prop:alpha} \end{proposition} \begin{proof} To show that $\alpha(P) \leq \frac{1}{4}$, it suffices to show only for convex $P$, because $P \subseteq \conv(P)$ implies $\lambda[P \cap B(x, \varepsilon)] \leq \lambda[\conv(P) \cap B(x, \varepsilon)]$. Let $d = \diam (P)$ and fix $x,y \in \partial P$ such that $\abs{[x,y]} = d$. Assume without loss of generality that $x$ lies above $y$, and consider the ball $B(x, d)$. Then $P$ does not meet the upper disk of $B(x,d)$, for otherwise we could choose $z \in P$ in the upper disk of $B(x,d)$ which forms a chord of $P$ longer than $d$. A similar argument shows that $P$ cannot occupy more than half of the lower disk of $B(x,d)$. 
Hence \begin{equation*} \alpha(P) \leq \frac{\lambda[P \cap B(x,d)]}{\lambda[B(x,d)]} \leq \frac{1}{4}, \end{equation*} as needed. The fact that $\alpha(B(x_0, r)) = \frac{1}{4}$ relies on a neat result \cite{10.2307/30044198} that shows that the $L_p$ ellipsoid $\EE$ with radii $a, b$ has area \begin{equation*} \lambda(\EE) = a b \cdot 4 \frac{\Gamma(1+\frac{1}{p})^2}{\Gamma(1+\frac{n}{p})}. \end{equation*} In our case, $\alpha(B(x_0, r))$ is achieved at a ball on a ``corner'' point with radius $2r$. \end{proof} Hence, the $\alpha$-score is greater for polygons which exhibit ``niceness'' characteristics. It is not affected tremendously by local nonconvexity in $P$ so long as $P$ ``snugly fits'' into a ball shape, which is a global property. \subsection{The chord-$f$ score} A different class of ``fatness'' measures arises through partitioning $P$ into two (interior disjoint) polygons $P = P' \cup P''$ via chords. A chord is a pair $x,y \in \partial P$ such that the segment $[x,y]$ is contained in the interior of $P$ except at the endpoints $x$ and $y$. Such a chord defines a partition of $P = P' \cup P''$ by orientation, so that $\partial P'$ is the arc from $x$ to $y$ together with $[x,y]$ and that $\partial P''$ is the arc from $y$ to $x$ together with $[x,y]$. \begin{definition} Let $f : \mathcal{P} \to \RR$. For a given $P \in \mathcal{P}$, its \emph{chord-$f$ score} is given by \begin{equation*} s_f (P) = \inf \set{\max(f(P'), f(P'')) : x, y \in \partial P} \end{equation*} where the chord $[x,y]$ partitions $P = P' \cup P''$. \label{def:chord-f} \end{definition} The intuition for $f$ is some global measure of ``cost'' associated with respect to $P$. For example, if $f(P) = \abs{\partial P}$, then $s_f$ identifies the partition (or limiting sequence of partitions) which minimizes the maximum subpolygon perimeter. Indeed, measures such as perimeter or area are the typical choices of $f$, but any suitable property of $P$ may be employed. The chord-$f$ score can be understood relatively simply in the context of a minimax game. Max, given a polygon with a chord partitioning it, always chooses the subpolygon associated with a greater $f$ (i.e., the worst cost of the partition). Min, who plays first, tries to find a chord with the least-bad worst cost with respect to $f$. The member of this class of scores under this present investigation is $\chordarc$, the so-called \emph{chord-arc} score. It is important to note that the length of the chord $[x,y]$ which partitions $P$ is included in the estimation of $\chordarc$. One implication of this is that the chord-arc score need not be bounded above by $\frac{\abs{\partial P}}{2}$. We have several facts about $\chordarc$ which develop further intuition for the score. \begin{proposition} \begin{enumerate} \item Let $R$ be a rectangle with height $h$ and length $\ell$. Assume that $h \leq \ell$. Then $\chordarc(R) = 2h + \ell$. \item Let $P$ be convex with perimeter $\abs{\partial P} = w$. Then $\chordarc(P) \leq \frac{w}{2} + \diam (P)$. \item Let $C$ be a circle with ``perimeter'' $2\pi r$. Then $\chordarc(C) = (\pi + 2)r$. \end{enumerate} \label{rem:chordarc} \end{proposition} \begin{proof} \begin{enumerate} \item That the optimal partition bisects $R$ is readily apparent, for this minimizes the maximum perimeter length of either sub-rectangle. 
      A case analysis confirms that the preferred strategy for Min is
      to choose a chord along the smaller dimension (i.e., of length
      $h$) that bisects $R$, and in this case the maximum arc length is
      $2h + 2\left( \frac{\ell}{2} \right) = 2h + \ell$, as needed.
    \item For $P$ convex, a perimeter-bisecting chord, whose length is
      at most $\diam(P)$, always exists.
    \item This follows from part 2 by noticing that the length of
      \emph{any} perimeter-bisecting chord is $\diam(C) = 2r$. \qedhere
  \end{enumerate}
\end{proof}

\section{Overview of empirical design}

In principle, computing the $\alpha$-fatness and chord-$f$ scores
relies on sweeping the continuum of points of $P$ and evaluating the
aforementioned measurements. To estimate this process in practice, we
employ the following discretizing scheme. Let $P$ be determined by the
vertices $[v_1, \dots, v_n]$ given in counterclockwise order. Then,
given $\delta > 0$, loop through the vertices of $P$, checking if
$\norm{v_{i+1} - v_i} > \delta$. In the case where this holds, we
insert $v_i' = \frac{v_i + v_{i+1}}{2}$ after $v_i$ into the polygon
and resume the loop. This ensures that for any two adjacent vertices
$v_{i}$ and $v_{i+1}$ in the representation of $P$ we have
$\norm{v_{i+1} - v_i} \leq \delta$. As $\delta \to 0$ this samples
$\partial P$ more and more densely. All computations are done with
respect to the $L_{\infty}$ norm.

The implementation of this procedure is given in Algorithm
\ref{alg:delta}. Depending on the coordinates of the vertex set
relative to $\delta$, the worst-case runtime of refinement varies. In
the case where the maximum distance between adjacent vertices is
$\delta K$, the refinement adds $O(K)$ vertices between the two
endpoints. The overall runtime is $O(n K)$ and the total quantity of
vertices after refinement is $O(n + nK)$.

%% Pseudocode for the delta refinement
\begin{algorithm}[h]
  \begin{algorithmic}[1]
    \Procedure{RefineBy}{$P,\delta$}
    \State{$P = [v_1, v_2, v_3, \dots, v_n]$};
    \State{$v = v_1$}
    \While {$\textit{next}(v) \ne v_1$}
    \If {$\norm{\textit{next}(v) - v}_2 > \delta$}
    \State{\textrm{Insert $\textit{midpoint}(v, \textit{next}(v))$ into $P$ after $v$}}
    \Comment{Now $\textit{next}(v)$ is the midpoint}
    \Else
    \State{$v = \textit{next}(v)$}
    \EndIf
    \EndWhile
    \EndProcedure
  \end{algorithmic}
  \caption{%
    The procedure used to refine a given polygon $P$ by a specified
    $\delta > 0$. This modifies the existing $P$ to satisfy the
    regularity condition that $\norm{v_{i+1} - v_i} \leq \delta$ for
    each adjacent vertex pair $v_{i+1}, v_i$ on the discretized
    boundary $\partial P$.
  }
  \label{alg:delta}
\end{algorithm}

Given a $\delta$-refined polygon $P$, the computation of $\alpha(P)$ is
relatively straightforward, with the simplification that the points $x$
on which the ratio is computed lie on $\partial P$. The computation
steps through $[v_1, \dots, v_n]$ and for each $v_i$ the radius
$\varepsilon$ of the minimum-enclosing $\infty$-ball centered at $v_i$
is computed. Then for $\varepsilon_k = \frac{k}{100} \varepsilon$,
$k = 1, \dots, 100$, the ratio
\begin{equation*}
  \frac{\lambda[P \cap B_{\infty}(v_i, \varepsilon_k)]}{\lambda[B_{\infty}(v_i, \varepsilon_k)]}
\end{equation*}
is computed and the smallest such ratio is kept. The minimum over all
$v_i$ is given as $\alpha(P)$. The implementation of the $\alpha$ score
is given in Algorithm \ref{alg:alpha}. Though we use a sweep from
$k = 1, \dots, 100$, any suitable range may be selected as an
alternative. Unfortunately, there is no analogous ``bisection''
technique for determining the minimizing $k$ because this ratio need
not be monotonic.
Since $P$ need not be convex, the intersection
$P \cap B_{\infty}(v, \varepsilon_k)$ requires $O(n \log n)$ steps. For
$n$ vertices, the runtime performance is $O(n^2 \log n)$.

%% Pseudocode for the alpha score
\begin{algorithm}[h]
  \begin{algorithmic}[1]
    \Function{$\alpha$Score}{$P$}
    \State{$P = [v_1, v_2, v_3, \dots, v_n]$};
    \State{$\textrm{min} = \infty$}
    \For {$v \in P$}
    \State{$\varepsilon = \textrm{radius of minimum enclosing ball of $P$ at $v$}$}
    \For {$k = 1, 2, \dots, 100$} \Comment{$100$ is arbitrary}
    \State{$\varepsilon_k = \frac{k}{100} \varepsilon$}
    \State{$\textrm{area} = \frac{\lambda[P \cap B_{\infty}(v, \varepsilon_k)]}{\lambda[B_{\infty}(v, \varepsilon_k)]}$}
    \If {$\textrm{min} > \textrm{area}$}
    \State{$\textrm{min} = \textrm{area}$}
    \EndIf
    \EndFor
    \EndFor
    \State{\Return{\textrm{min}}}
    \EndFunction
  \end{algorithmic}
  \caption{%
    The $\alpha$-fatness function.
  }
  \label{alg:alpha}
\end{algorithm}

Similarly, given a $\delta$-refined polygon $P$, the chord-$f$ score is
computed by sweeping all possible chords of $P$. For each pair of
points $v_i, v_j \in \partial P$ with $i < j$, if $[v_i, v_j]$ is
indeed a valid chord of $P$, the partition $P = P' \cup P''$ is
computed and the maximum of $f(P')$ and $f(P'')$ is kept. The minimum
such value is given as $s_f(P)$. The implementation of the chord-$f$
score is given in Algorithm \ref{alg:chordf}. For $n$ vertices,
$O(n^2)$ potential chords are considered. Determining whether a pair of
vertices forms a chord takes $O(n)$ time, which leads to an overall
runtime performance of $O(n^3)$.

%% Pseudocode for the chord f score
\begin{algorithm}[h]
  \begin{algorithmic}[1]
    \Function{ChordScore}{P,f}
    \State{$P = [v_1, v_2, v_3, \dots, v_n]$};
    \State{$\textrm{min} = \infty$}
    \For {$v \in P$}
    \For {$w \in [\textit{next}(v), \dots, v_n]$}
    \If {$[v,w]$ is a chord of $P$}
    \State{Partition $P = P' \cup P''$ by $[v,w]$}
    \State{$\textrm{submax} = \textit{max}(f(P'), f(P''))$}
    \If {$\textrm{min} > \textrm{submax}$}
    \State{$\textrm{min} = \textrm{submax}$}
    \EndIf
    \EndIf
    \EndFor
    \EndFor
    \State{\Return{\textrm{min}}}
    \EndFunction
  \end{algorithmic}
  \caption{%
    The chord-$f$ function.
  }
  \label{alg:chordf}
\end{algorithm}

All of these algorithms are implemented in CGAL \cite{cgal:eb-18a} with
extensive use of the 2D Polygon library \cite{cgal:gw-p2-18a}. For our
data generation, we make the minor modification that the measured
${\chordarc}$ is the ratio of the actual $\chordarc$ score to the
perimeter of the overall polygon. This changes the interpretation of
some of the theoretical results from the previous section but allows
for easier interpretation of the data.

\section{Empirical results}

We evaluate our measurements on randomly generated polygons with
vertices in the unit square $[0,1] \times [0,1]$. We consider
collections of such polygons of 10, 50, and 100 vertices. In
Figure \ref{fig:delta-alpha} and Figure \ref{fig:delta-arc}, we measure
the effect of sweeping $\delta$ between $0.05$ and $1$ on the resulting
measurements for polygons of 10 vertices. Figure
\ref{fig:alph-inft-10} shows a comparison of $\alpha$-fatness and
relative ${\chordarc}_{\infty}$ scores for polygons on 10 vertices.
Figure \ref{fig:alph-one-50} shows a comparison of $\alpha$-fatness and
relative ${\chordarc}_1$ scores for polygons on 50 vertices.
Figure \ref{fig:alph-inft-100} shows a comparison of $\alpha$-fatness
and relative ${\chordarc}_{\infty}$ scores for polygons on 100
vertices.
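The scores in these figures were produced with the CGAL implementation
described above; for illustration, a minimal Python sketch of the same
$\delta$-refinement and $\varepsilon$ sweep is given below. It assumes the
third-party \texttt{shapely} package and is not the implementation used for
the reported data.
\begin{verbatim}
# Minimal sketch of Algorithms 1-2 using shapely (illustrative only).
from shapely.geometry import Polygon, box

def refine(vertices, delta):
    """Insert midpoints until adjacent vertices are within delta (L-infinity)."""
    out = list(vertices)
    i = 0
    while i < len(out):
        (x1, y1), (x2, y2) = out[i], out[(i + 1) % len(out)]
        if max(abs(x1 - x2), abs(y1 - y2)) > delta:
            out.insert(i + 1, ((x1 + x2) / 2, (y1 + y2) / 2))
        else:
            i += 1
    return out

def alpha_score(vertices, delta=0.05, steps=100):
    """Discretized alpha-fatness with L-infinity balls centered on boundary samples."""
    pts = refine(vertices, delta)
    poly = Polygon(pts)
    best = 1.0
    for (x, y) in pts:
        # radius of the minimum enclosing infinity-ball of P centered at (x, y)
        eps_max = max(max(abs(x - u), abs(y - v)) for (u, v) in pts)
        for k in range(1, steps + 1):
            eps = eps_max * k / steps
            ball = box(x - eps, y - eps, x + eps, y + eps)  # L-infinity ball
            best = min(best, poly.intersection(ball).area / ball.area)
    return best

# A square scores about 0.25; a long thin rectangle scores far lower.
print(alpha_score([(0, 0), (1, 0), (1, 1), (0, 1)]))
print(alpha_score([(0, 0), (1, 0), (1, 0.05), (0, 0.05)]))
\end{verbatim}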
Our theoretical results appear to be confirmed in that ``ball''-like polygons get the highest $\alpha$-scores, and polygons with fewer ``nonconvexities'' score better on the relative ${\chordarc}_{\infty}$ measure. We observe an overall decline in the relative $\chordarc$ scores as the number of vertices of the polygons increases. This is attributed to the corresponding increase in the size of the underlying perimeters of each $P$. This observation does not carry over to the $\alpha$ score, but it can be understood as a sort of ``coastline'' phenomenon, in which polygonal perimeters increase without limit while the areas converge.

\begin{figure}[t]
\centering
\begin{tabular}{lrrrrrrrrrr}
\toprule
$\delta$ & $P_1$ & $P_2$ & $P_3$ & $P_4$ & $P_5$ & $P_6$ & $P_7$ & $P_8$ & $P_9$ & $P_{10}$ \\
\midrule
0.05 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.15 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.26 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.36 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.47 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.57 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.68 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.78 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
0.89 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
1 & 0.23 & 0.21 & 0.11 & 0.039 & 0.15 & 0.038 & 0.12 & 0.10 & 0.12 & 0.13 \\
\bottomrule
\end{tabular}
\caption{%
Effect of sweeping $\delta$ on measurements of $\alpha$-score on 10 randomly generated polygons with 10 vertices in $[0,1]^2$. Unsurprisingly, the refinement of the different polygons has no effect on the $\alpha$-fatness score, since it introduces no additional area to the polygon and since the ``corners'' are already present.
}
\label{fig:delta-alpha}
\end{figure}

\begin{figure}[t]
\centering
\begin{tabular}{lrrrrrrrrrr}
\toprule
$\delta$ & $P_1$ & $P_2$ & $P_3$ & $P_4$ & $P_5$ & $P_6$ & $P_7$ & $P_8$ & $P_9$ & $P_{10}$ \\
\midrule
0.05 & 0.83 & 0.46 & 0.83 & 0.89 & 0.54 & 0.67 & 0.62 & 1.00 & 0.86 & 0.55 \\
0.15 & 0.79 & 0.90 & 0.72 & 0.60 & 0.90 & 0.67 & 0.99 & 0.83 & 0.66 & 0.90 \\
0.26 & 1.00 & 0.75 & 0.50 & 0.60 & 0.90 & 0.49 & 0.77 & 0.52 & 0.72 & 0.69 \\
0.36 & 0.89 & 1.00 & 0.50 & 0.60 & 0.52 & 0.49 & 1.00 & 0.92 & 0.74 & 0.51 \\
0.47 & 0.61 & 1.00 & 0.50 & 0.75 & 0.52 & 0.24 & 1.00 & 0.62 & 1.00 & 0.51 \\
0.57 & 0.61 & 1.00 & 0.25 & 0.75 & 0.52 & 0.24 & 1.00 & 0.62 & 1.00 & 0.51 \\
0.68 & 0.61 & 1.00 & 0.25 & 0.75 & 0.52 & 0.24 & 1.00 & 0.62 & 1.00 & 0.51 \\
0.78 & 0.61 & 1.00 & 0.25 & 0.75 & 0.52 & 0.24 & 1.00 & 0.62 & 1.00 & 0.51 \\
0.89 & 0.61 & 1.00 & 0.25 & 0.75 & 0.52 & 0.24 & 1.00 & 0.62 & 1.00 & 0.51 \\
1 & 0.61 & 1.00 & 0.25 & 0.75 & 0.52 & 1.00 & 1.00 & 0.62 & 1.00 & 0.51 \\
\bottomrule
\end{tabular}
\caption{%
Effect of sweeping $\delta$ on measurements of ${\chordarc}_\infty$ on 10 randomly generated polygons with 10 vertices in $[0,1]^2$. The overall behavior is difficult to summarize. Though one might hope that as $\delta \to 0$ we see a stabilization in the chord-arc score, this does not appear to occur. Moreover, there does not seem to be even a monotonic evolution in the score as $\delta \to 0$. This is likely due to the fact that smaller values of $\delta$ permit stranger nonconvexities to emerge as viable chords.
}
\label{fig:delta-arc}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[height=0.8\textheight]{../plots/u_10_alpha_score_chord_arc_infinity_vertices_0-05_delta_ranking.jpg}
\caption{$\alpha$ scores and ${\chordarc}_{\infty}$ scores for randomly generated polygons on 10 vertices.}
\label{fig:alph-inft-10}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[height=0.8\textheight]{../plots/u_50_alpha_score_chord_arc_one_vertices_0-05_delta_ranking.jpg}
\caption{$\alpha$ scores and ${\chordarc}_{1}$ scores for randomly generated polygons on 50 vertices.}
\label{fig:alph-one-50}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[height=0.8\textheight]{../plots/u_100_alpha_score_chord_arc_infinity_vertices_0-05_delta_ranking.jpg}
\caption{$\alpha$ scores and ${\chordarc}_{\infty}$ scores for randomly generated polygons on 100 vertices.}
\label{fig:alph-inft-100}
\end{figure}

\section{Discussion}

We have developed two classes of measurements for determining the niceness of polygonal regions in the plane and established initial results, both theoretical and empirical, for evaluating their effectiveness. The $\alpha$-fatness measures the extent to which the underlying polygon resembles a ball. It rewards polygons which fill squarely compact regions and punishes oblong and spread-out polygons. The chord-arc score measures the extent to which the underlying polygon contains significant ``local nonconvexities'' that are not well distributed throughout the entire polygon. Interestingly, both measures perform well (even optimally) on specific types of convex polygons but severely punish others. We generally find that these measures conform in some sense to an intuitive understanding of niceness.

Extensions of these measures can be examined. The connected $\alpha$-fatness score, for example, computes the area ratio using the connected component of $P \cap B(x,\varepsilon)$ that contains $x$. This might prove a more reliable estimate of intuitive niceness than the conventional $\alpha$-fatness score, because it excludes from consideration parts of $P$ that are close to $x$ in a coarse metric sense, but not necessarily close in a topological sense. The myriad of possible choices for $f$ leaves open a wide space for exploration. A more suitable measure than simply the perimeter may emerge as a consequence.

The next extension of this work is to apply these measures to electoral districts and evaluate their success at distinguishing between offending and ``nice'' districts. Given an increasing corpus of election districts ruled out or struck down on constitutional grounds, it may be possible to learn a classification model using these measures. Alternatively, it may be discovered that these measures are sufficient to distinguish between offending and nice districts in an unsupervised fashion.

Whether quantitative measures that conform to intuitive understandings of niceness are relevant to crafting electoral districts at all remains an open question. General compactness and contiguity are only part of the requirements which an electoral district must satisfy under the present regime, whereas others, such as those given in the 1965 Voting Rights Act, depend less on the pure shape of the district. Doubtless these quantitative measures could form part of the evaluation process for determining new districts, but the actual policy will almost surely involve ensembles of measures, both quantitative and otherwise, in idealized future redistricting.
\printbibliography[heading=bibintoc] \end{document}
%% Copyright (C) 2010-2012, Gostai S.A.S.
%%
%% This software is provided "as is" without warranty of any kind,
%% either expressed or implied, including but not limited to the
%% implied warranties of fitness for a particular purpose.
%%
%% See the LICENSE file for more information.

\section{Process}

A Process is a separate task handled by the underlying operating system.

\begin{windows}
Process is not yet supported under Windows.
\end{windows}

\subsection{Example}

The following example runs the \command{cat} program, a standard Unix
command that simply copies its (standard) input to its (standard) output.

\begin{urbiscript}
var p = Process.new("cat", []);
[00000004] Process cat
\end{urbiscript}

\noindent
Just created, this process is not running yet. Use \lstinline|run| to
launch it.

\begin{urbiscript}
p.status;
[00000005] not started
p.run();
p.status;
[00000006] running
\end{urbiscript}

\noindent
Then we feed its input, named \lstinline|stdin| in the Unix tradition, and
close its input.

\begin{urbiscript}
p.stdin << "1\n" | p.stdin << "2\n" | p.stdin << "3\n" |;
p.status;
[00000007] running
p.stdin.close();
\end{urbiscript}

\noindent
At this stage, the status of the process is unknown, as it is running
asynchronously. If it has had enough time to ``see'' that its input is
closed, then it will have finished; otherwise we might have to wait for a
while. The method \lstinline|join| means ``wait for the process to
finish''.

\begin{urbiscript}
p.join();
p.status;
[00000008] exited with status 0
\end{urbiscript}

\noindent
Finally we can check its output.

\begin{urbiscript}
p.stdout.asList();
[00000009] ["1", "2", "3"]
\end{urbiscript}

\subsection{Prototypes}
\begin{refObjects}
\item[Object]
\end{refObjects}

\subsection{Construction}

A Process needs a program name to run and a possibly-empty list of command
line arguments. Calling \lstinline|run| is required to execute the process.

\begin{urbiscript}
Process.new("cat", []);
[00000004] Process cat
Process.new("cat", ["--version"]);
[00000004] Process cat
\end{urbiscript}

\subsection{Slots}

\begin{urbiscriptapi}
\item[asProcess] Return \this.
\begin{urbiscript}
do (Process.new("cat", []))
{
  assert (asProcess() === this);
}|;
\end{urbiscript}

\item[asString] \lstinline|Process| and the name of the program.
\begin{urbiassert}
Process.new("cat", ["--version"]).asString() == "Process cat";
\end{urbiassert}

\item[done] Whether the process has completed its execution.
\begin{urbiscript}
do (Process.new("sleep", ["1"]))
{
  assert (!done);
  run();
  assert (!done);
  join();
  assert (done);
}|;
\end{urbiscript}

\item[join] Wait for the process to finish. Changes its status.
\begin{urbiscript}
do (Process.new("sleep", ["2"]))
{
  var t0 = System.time;
  assert (status.asString() == "not started");
  run();
  assert (status.asString() == "running");
  join();
  assert (t0 + 2s <= System.time);
  assert (status.asString() == "exited with status 0");
}|;
\end{urbiscript}

\item[kill] If the process is not \lstinline|done|, interrupt it (with a
\lstinline|SIGKILL| in Unix parlance). You still have to wait for its
termination with \lstinline|join|.
\begin{urbiscript}
do (Process.new("sleep", ["1"]))
{
  run();
  kill();
  join();
  assert (done);
  assert (status.asString() == "killed by signal 9");
}|;
\end{urbiscript}

\item[name] The (base) name of the program the process runs.
\begin{urbiassert}
Process.new("cat", ["--version"]).name == "cat";
\end{urbiassert}

\item[run] Launch the process. Changes its status. A process can only be
run once.
\begin{urbiscript} do (Process.new("sleep", ["1"])) { assert (status.asString() == "not started"); run(); assert (status.asString() == "running"); join(); assert (status.asString() == "exited with status 0"); run(); }|; [00021972:error] !!! run: process was already run \end{urbiscript} \item[runTo] %%% FIXME: \item[status] An object whose slots describe the status of the process. %%% FIXME: \item[stderr] An \refObject{InputStream} (the output of the Process is an input for \urbi) to the standard error stream of the process. \begin{urbiunchecked} do (Process.new("urbi-send", ["--no-such-option"])) { run(); join(); assert { stderr.asList() == ["urbi-send: invalid option: --no-such-option", "Try `urbi-send --help' for more information."]; }; }|; \end{urbiunchecked} \item[stdin] An \refObject{OutputStream} (the input of the Process is an output for \urbi) to the standard input stream of the process. \begin{urbiscript} do (Process.new(System.programName, ["--version"])) { run(); join(); assert { stdout.asList()[1] == "Copyright (C) 2004-2012 Gostai S.A.S."; }; }|; \end{urbiscript} \item[stdout] An \refObject{InputStream} (the output of the Process is an input for \urbi) to the standard output stream of the process. \begin{urbiscript} do (Process.new("cat", [])) { run(); stdin << "Hello, World!\n"; stdin.close(); join(); assert (stdout.asList() == ["Hello, World!"]); }|; \end{urbiscript} \end{urbiscriptapi} %%% Local Variables: %%% coding: utf-8 %%% mode: latex %%% TeX-master: "../urbi-sdk" %%% ispell-dictionary: "american" %%% ispell-personal-dictionary: "../urbi.dict" %%% fill-column: 76 %%% End:
\chapter*{Preface} \label{preface} A personal statement about the purpose and scope of the thesis/dissertation could be included in the preface. The tone of the preface, however, must be academic and appropriate to scholarly work. This page is optional.
\section{Calculus of Vector and Matrix Valued Functions} \subsection{Exercise 1} That $\dot{x}(t) = 0 \implies x(t) = c$ follows immediately from the mean value inequality for vector-valued functions, $\norm{x(b) - x(a)} \leq (b - a) \norm{x'(t)}$ for some $t \in (a, b)$. My guess is that Lax was hinting at applying the mean value theorem for real-valued functions to $(x(b) - x(a), y)$. \subsection{Exercise 2} We have that \begin{align*} \dv{x} A^{-1} A = 0 = \bigg(\dv{x} A^{-1}\bigg) A + A^{-1} \bigg(\dv{x} A\bigg) \implies \dv{x} A^{-1} = -A^{-1} \bigg( \dv{x} A \bigg) A^{-1} \end{align*} \subsection{Exercise 3} The matrix $A + B$ satisfies $(A + B)^2 = I$, so we have that \begin{align*} e^{A+B} &= \sum_{k = 0}^\infty \frac{(A + B)^k}{k!} \\ &= \bigg(\sum_{k = 0}^\infty \frac{1}{(2k)!}\bigg) I + \bigg(\sum_{k = 0}^\infty \frac{1}{(2k + 1)!}\bigg) (A + B) \\ &= \cosh(1) I + \sinh(1) (A + B) \end{align*} \subsection{Exercise 4} We can use uniform continuity to swap the limits to get \begin{align*} \lim_{m \to \infty} \norm{\dot{E}_m(t) - F(t)} &= \lim_{m \to \infty} \lim_{h \to 0} \norm{\frac{E_m(t + h) - E_m(t)}{h} - F(t)} \\ &= \lim_{h \to 0} \lim_{m \to \infty} \norm{\frac{E_m(t + h) - E_m(t)}{h} - F(t)} \\ &= \lim_{h \to 0} \norm{\frac{E(t + h) - E(t)}{h} - F(t)} \\ &\implies \dot{E}(t) = F(t) \end{align*} \subsection{Exercise 5} Let $M \geq \norm{A(t)}, \norm{\dot{A}(t)}$. Then we can use the expression for the derivative of $A^k(t)$ to get \begin{align*} \norm{\dv{t} A^k} &= \norm{\dot{A} A^{k - 1} + A \dot{A} A^{k - 2} + ... + A^{k - 1}\dot{A}} \\ &\leq kM^k \\ \implies \norm{\dot{E}_m(t) - \dot{E}_n(t)} &\leq \sum_{k = n + 1}^m \frac{kM^k}{k!} \end{align*} so $\dot{E}_m(t)$ converges. \subsection{Exercise 6} \begin{align*} \dv{t} \log\det e^{At} = \Tr(e^{-At} A e^{At}) = \Tr(A) \end{align*} \subsection{Exercise 7} Let $v$ be an eigenvector of $A$ with eigenvalue $a$. Then \begin{align*} e^A v &= \sum_{k = 0}^\infty \frac{A^k v}{k!} \\ &= \sum_{k = 0}^\infty \frac{a^k v}{k!} \\ &= e^a v \end{align*}
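As a concrete sanity check of Exercises 3 and 7 (added here purely for illustration, not part of Lax's exercises), take
\begin{align*}
A + B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad (A + B)^2 = I,
\end{align*}
so that Exercise 3 gives
\begin{align*}
e^{A+B} = \cosh(1) I + \sinh(1) (A + B) = \begin{pmatrix} \cosh 1 & \sinh 1 \\ \sinh 1 & \cosh 1 \end{pmatrix}.
\end{align*}
The eigenvectors $(1, \pm 1)^T$ of $A + B$ have eigenvalues $\pm 1$, and they are eigenvectors of $e^{A+B}$ with eigenvalues $\cosh(1) \pm \sinh(1) = e^{\pm 1}$, exactly as Exercise 7 predicts.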
\section{Related Work}
\subsection{Parallel Method}
\begin{frame}
\frametitle{Parallel Method}
\begin{itemize}
  \setlength\itemsep{1em}
  \item The wavefront computation is {\em not} cache-friendly.
  \begin{itemize}
    \item To address this cache issue, Maleki et al. developed a technique to exploit more parallelism in the dynamic programming.
  \end{itemize}
  \item The alternative to the wavefront method is the traditional row-by-row approach.
\end{itemize}
\end{frame}

\subsection{Variable Gapped Longest Common Subsequence}
\begin{frame}
\frametitle{Suffix Maximum Query in the VGLCS Algorithm}
\begin{itemize}
  \setlength\itemsep{1em}
  \item Peng's sequential VGLCS algorithm uses disjoint sets for suffix maximum queries.
  \item We instead use a {\em sparse table} to support incremental suffix/range maximum queries in our VGLCS algorithm (a minimal sketch of a sparse table appears in the frame at the end of this section).
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Blocked Sparse Table}
\begin{itemize}
  \setlength\itemsep{1em}
  \item Fischer proposed a blocked sparse table for better performance than the unblocked sparse table, and Fischer's algorithm builds least ancestor tables for answering range maximum queries.
  \item We instead use a {\em rightmost-pops} encoding for Cartesian trees.
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Cache Issues on Blocked Sparse Table}
\begin{itemize}
  \setlength\itemsep{1em}
  \item Demaine proposed {\em cache-aware} operations on Cartesian trees.
  \item Masud presented a new encoding method that reduces the number of instructions.
  \item Our rightmost-pops encoding for Cartesian trees also reduces cache misses, with a much simpler implementation than Demaine's encoding.
\end{itemize}
\end{frame}
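% Added illustrative frame, referenced in the VGLCS frame above.
\begin{frame}[fragile]
\frametitle{Sketch: Sparse Table for Range Maximum Queries}
A minimal Python sketch of a plain (non-incremental) sparse table: $O(n \log n)$ preprocessing and $O(1)$ range maximum queries. This is illustrative only; it is not our optimized incremental implementation.
\begin{verbatim}
def build(a):                # a: list of values
    n = len(a)
    st = [list(a)]           # st[j][i] = max of a[i .. i + 2^j - 1]
    j = 1
    while (1 << j) <= n:
        prev = st[j - 1]
        st.append([max(prev[i], prev[i + (1 << (j - 1))])
                   for i in range(n - (1 << j) + 1)])
        j += 1
    return st

def query(st, l, r):         # max of a[l .. r], inclusive, 0-indexed
    j = (r - l + 1).bit_length() - 1
    return max(st[j][l], st[j][r - (1 << j) + 1])
\end{verbatim}
\end{frame}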
\documentclass[letterpaper,11pt]{article} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage[hidelinks]{hyperref} \usepackage{fancyhdr} \usepackage[english]{babel} \usepackage{tabularx} \input{glyphtounicode} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \addtolength{\oddsidemargin}{-0.5in} \addtolength{\evensidemargin}{-0.5in} \addtolength{\textwidth}{1in} \addtolength{\topmargin}{-.5in} \addtolength{\textheight}{1.0in} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] % Ensure that generate pdf is machine readable/ATS parsable \pdfgentounicode=1 %------------------------- % Custom commands \newcommand{\resumeItem}[2]{ \item\small{ \textbf{#1}{: #2 \vspace{-2pt}} } } % Just in case someone needs a heading that does not need to be in a list \newcommand{\resumeHeading}[4]{ \begin{tabular*}{0.99\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubSubheading}[2]{ \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r} \textit{\small#1} & \textit{\small #2} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}} \renewcommand{\labelitemii}{$\circ$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} %------------------------------------------- %%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %----------HEADING----------------- \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textbf{\href{https://benjaminkrueger.xyz/}{\Large Benjamin Krueger}} & Email : \href{mailto:[email protected]}{[email protected]}\\ \href{https://benjaminkrueger.xyz/}{Personal Site: benjaminkrueger.xyz} & Mobile : +1-573-612-8937 \\ \href{https://github.com/benkrueger}{Github: https://github.com/benkrueger} & \href{https://www.linkedin.com/in/benjamin-krueger/}{Linkedin: https://www.linkedin.com/in/benjamin-krueger/} \\ \end{tabular*} %-----------EDUCATION----------------- \section{Education} \resumeSubHeadingListStart \resumeSubheading {University Of Missouri St Louis}{St Louis, Missouri} {Bachelor of Science in Computer Science; GPA: 3.25 }{Aug. 2020 -- Dec. 2021} \resumeSubheading {Missouri University Of Science and Technology}{Rolla, MO} {Bachelors of Science in Computer Science (transferred) }{Aug. 2015 -- Jul. 
2020}
  \resumeSubHeadingListEnd

%-----------EXPERIENCE-----------------
\section{Experience}
  \resumeSubHeadingListStart

    \resumeSubheading
      {United States Geological Survey}{Rolla, MO}
      {Student Software Contractor}{Jan 2020 - Jan 2021}
      \resumeItemListStart
        \resumeItem{Unit Testing}{Unit tested Python and C++ GIS data processing software.}
        \resumeItem{Refactoring}{Refactored Python and C++ GIS data processing software.}
        \resumeItem{GIS}{Automated GIS data processing with GDAL.}
        \resumeItem{Web Testing}{Wrote automated frontend tests in Python utilizing Selenium.}
        \resumeItem{Web Development}{Utilized Django, Postgres, JavaScript and HTML to implement a GIS specification web application.}
      \resumeItemListEnd

    \resumeSubheading
      {Hunter Engineering}{Bridgeton, MO}
      {Software Engineering Intern}{Jan 2019 - Aug 2019}
      \resumeItemListStart
        \resumeItem{Agile}{Collaborated with a team following Agile development methodologies.}
        \resumeItem{Testing}{Followed a test-driven development workflow for C\# microservice development.}
        \resumeItem{Cloud}{Utilized Azure DevOps workflow to automate builds, tests, and static code analysis.}
        \resumeItem{Development}{Implemented native libraries in C++ with C\# bindings.}
        \resumeItem{Microservices}{Helped the team deliver a microservice architecture application in C++ and C\#.}
      \resumeItemListEnd

    \resumeSubheading
      {Missouri University Of Science and Technology}{Rolla, MO}
      {IT Student Co-op}{Jun 2017 - Jan 2018}
      \resumeItemListStart
        \resumeItem{Management}{Led a team of part-time student employees, delegating desktop administration tasks.}
        \resumeItem{Active Directory}{Wrote Active Directory group policy for desktop administration.}
        \resumeItem{Desktop Administration}{Performed all roles related to packaging, updating and licensing desktop software.}
      \resumeItemListEnd

    \resumeSubSubheading{IT Student Employee}{Jun 2016 - Nov 2019}
      \resumeItemListStart
        \resumeItem{Software Packaging}{Developed and tested desktop software packages with Perl and System Center Configuration Manager.}
        \resumeItem{Desktop Administration}{Provisioned and deployed desktops, virtual machines, and software packaged with SCCM.}
        \resumeItem{VDI Software}{Packaged desktop software for VDI streaming.}
        \resumeItem{Troubleshooting}{Resolved various deployment and configuration issues in the desktop lab environment.}
      \resumeItemListEnd

  \resumeSubHeadingListEnd

%Undergrad Cert
\section{Undergrad Certificate}
  \resumeSubHeadingListStart
    \item{\textbf{Cybersecurity}}{: University of Missouri St Louis, 2021}
  \resumeSubHeadingListEnd

%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
  \resumeSubHeadingListStart
    \item{
      \textbf{Languages}{: Python, C++, SQL, Java, Perl, JavaScript, C\#, Bash}
      \hfill
    }
    \item{
      \textbf{Technologies}{: VMware, GDAL, Django, Postgres, Bootstrap, Linux, Docker}
      \hfill
    }
  \resumeSubHeadingListEnd

%-------------------------------------------
\end{document}
%
%
%%
%%
%%% PREAMBLE %%%
%%
%%
%
%

% Document class
\documentclass[11pt]{beamer}

\usetheme{Boadilla}
\usecolortheme{beaver}
\useinnertheme{rectangles}

\setbeamertemplate{navigation symbols}{}

% Font
\usepackage{fontspec}
\setmainfont[%
	SmallCapsFont={* Caps},% enable small capital font family
	SlantedFont={* Slanted},% enable slanted font family
]{Latin Modern Roman}

% Language
\usepackage{polyglossia}
\setdefaultlanguage{english}
\setotherlanguage{french}
\usepackage[autostyle=true]{csquotes}

% Floats
\usepackage{float}
\usepackage{booktabs}
\usepackage{multicol}

%
%
%%
%%
%%% DOCUMENT %%%
%%
%%
%
%

% Information
\title{Installing \LaTeX}
\subtitle{Distributions and editors}
\author[A. Quenon]{Alexandre Quenon}

% Text
\begin{document}

% *** Title page *** %
\begin{frame}
	\titlepage
\end{frame}

% *** Contents *** %
\begin{frame}
	\tableofcontents
\end{frame}

% *** Tutorial *** %
\section{Software}
\begin{frame}
	\frametitle{\LaTeX-related programs}

	To write a document with \LaTeX{}, the user must install specific programs, as for any other text editor. These are:
	\begin{itemize}
		\item a distribution,
		\item an editor.
	\end{itemize}

	The \LaTeX{} distribution contains all tools necessary to generate a document (usually a PDF) from the \enquote{code}. The editor is a program which simplifies the \LaTeX{} writing by providing shortcuts for compilation (cf. next tutorial) and shortcuts for text styling (e.g., \texttt{CTRL+B} creates bold text).
\end{frame}

\section{Distribution}
\begin{frame}
	\frametitle{Which distribution to choose}
	\framesubtitle{Comparison of LaTeX distributions based on OS compatibility}

	\begin{table}
		\begin{tabular}{*{4}{c}}
			\toprule
			&\multicolumn{3}{c}{Distributions} \\
			\cmidrule(l){2-4}
			OS & \TeX{} Live & MiK\TeX{} & Mac\TeX{} \\
			\midrule
			Windows & yes & yes & no \\
			Linux & yes & yes & yes \\
			MacOS & no & yes & yes \\
			\bottomrule
		\end{tabular}
	\end{table}
\end{frame}

\begin{frame}
	\frametitle{Which distribution to choose}
	\framesubtitle{Feedback from my own experience}

	As I have used both Windows and Linux (mainly CentOS) operating systems but never Mac, I cannot provide any feedback about \alert{Mac\TeX{}}. Sorry for Mac users\dots

	However, I have used \alert{MiK\TeX{}} (on Windows only) and \alert{\TeX{} Live} (both Linux and Windows) extensively:
	\begin{itemize}
		\item I faced problems using some packages or compilers with MiK\TeX{} that did not exist with \TeX{} Live, such as
		\begin{itemize}
			\item compilation issues when using LuaLaTeX (cf. tutorial B002),
			\item \texttt{biber} engine not recognised while I was using the \texttt{biblatex} package,
		\end{itemize}
		\item this is what I experienced
		\begin{itemize}
			\item I compared up-to-date versions,
			\item I did it a few years ago so it could have changed since the comparison was done,
		\end{itemize}
		\item I \textbf{recommend} the use of \textbf{\TeX{} Live}.
	\end{itemize}
\end{frame}

\section{Editor}
\begin{frame}
	\frametitle{Which editor to choose}
	\framesubtitle{Comparison of LaTeX editors based on OS compatibility}

	\begin{table}
		\begin{tabular}{*{4}{c}}
			\toprule
			&\multicolumn{3}{c}{Editors} \\
			\cmidrule(l){2-4}
			OS & \TeX{}studio & \TeX{}maker & \TeX{}nicCenter \\
			\midrule
			Windows & yes & yes & yes \\
			Linux & yes & yes & no \\
			MacOS & yes & yes & no \\
			\bottomrule
		\end{tabular}
	\end{table}
\end{frame}

\begin{frame}
	\frametitle{Which editor to choose}
	\framesubtitle{Feedback from my own experience}

	I have used both \TeX{}maker and \TeX{}studio, but never \alert{\TeX{}nicCenter}.
Sorry that I cannot provide any feedback about it\dots

	I used \alert{\TeX{}maker} (on Windows only) a lot, then I switched to \alert{\TeX{}studio} (both Linux and Windows) because, in my opinion:
	\begin{itemize}
		\item the tool is more \enquote{clever} regarding key management (references, bibliography),
		\item it proposes automatic alignment in \texttt{tabular} environments (even though this does not work when the \texttt{multicolumn} command is used).
	\end{itemize}
	However, for those who like fine control, e.g., over compilation, \TeX{}maker provides a more visible interface.

	Anyway, the editor is a matter of taste. So try a few, and do not hesitate to switch and compare them to find yours.
\end{frame}

\section{Installation examples}
\begin{frame}
	\frametitle{Installation process}
	\framesubtitle{Preliminary notes}

	Hereafter, installation examples for different distributions are shown. Among the details provided for each step, the installer download is intended for Windows users.

	For Linux users, it is strongly recommended to use the package manager of the Linux distribution, for example, \texttt{apt-get} on Ubuntu.
\end{frame}

\begin{frame}
	\frametitle{Installation process}
	\framesubtitle{Installing \TeX{} Live}

	Below, the installation of the \TeX{} Live distribution together with the \TeX{}studio editor is presented. Follow the steps:
	\begin{enumerate}
		\item install the \TeX{} Live distribution
		\begin{enumerate}
			\item download the \href{https://www.tug.org/texlive/acquire-netinstall.html}{installer} and run it,
			\item choose \enquote{Custom install} and click on \enquote{Next},
			\item select the scheme and the packages that you need
			\begin{itemize}
				\item if you have any doubt, select the \enquote{scheme-full} that installs all packages,
				\item otherwise you can change the installation collections,
			\end{itemize}
			\item click on \enquote{Install TeX Live} $\rightarrow$ this takes a moment,
		\end{enumerate}
		\item install the \TeX{}studio editor
		\begin{enumerate}
			\item download the \href{https://www.texstudio.org/}{installer} and run it,
			\item the editor should detect your \LaTeX{} distribution at first run.
		\end{enumerate}
	\end{enumerate}
\end{frame}

\begin{frame}
	\frametitle{Installation process}
	\framesubtitle{Installing MiK\TeX{}}

	Below, the installation of the MiK\TeX{} distribution together with the \TeX{}studio editor is presented. Follow the steps:
	\begin{enumerate}
		\item install the MiK\TeX{} distribution
		\begin{enumerate}
			\item go to the \href{https://miktex.org/download}{download} section of the MiK\TeX{} website,
			\item click on the \enquote{All downloads} tab,
			\item select the installer that you need
			\begin{itemize}
				\item Basic Installer for a standard \LaTeX{} system,
				\item Net Installer for a complete \LaTeX{} system,
			\end{itemize}
			\item run the installer $\rightarrow$ this takes a moment,
		\end{enumerate}
		\item install the \TeX{}studio editor
		\begin{enumerate}
			\item download the \href{https://www.texstudio.org/}{installer} and run it,
			\item the editor should detect your \LaTeX{} distribution at first run.
		\end{enumerate}
	\end{enumerate}
\end{frame}

\end{document}
\chapter{Future directions} \label{Future}

The current focus is the compressed branch trace; however, there are a number of other types of processor trace that would be useful (detailed below in no particular order). These should be considered as possible features that may be added in future, once the current scope has been completed.

\section{Data trace}

The trace encoder will output packets to communicate information about loads and stores to an off-chip decoder. To reduce the amount of bandwidth required, reporting data values will be optional, and both address and data will be able to be encoded differentially when it is beneficial to do so. This entails outputting the difference between the new value and the previous value of the same transfer size, irrespective of transfer direction. Unencoded values will be used for synchronisation and at other times.

\section{Fast profiling}

In this mode the encoder will provide a non-intrusive alternative to the traditional method of profiling, which requires the processor to be halted periodically so that the program counter can be sampled. The encoder will issue packets when an exception, call or return is detected, to report the next instruction executed (i.e. the destination instruction). Optionally, the encoder will also be able to report the current instruction (i.e. the source instruction).

\section{Inter-instruction cycle counts}

In this mode the encoder will trace where the CPU is stalling by reporting the number of cycles between successive instruction retirements.

\section{Using a jump target cache to further improve efficiency}

The encoder could include a small cache of uninferable jump targets, managed using a least-recently-used (LRU) algorithm. When an uninferable PC discontinuity occurs, if the target address is present in the cache, report the index number of the cache entry (typically just a few bits) rather than the target address itself. The decoder would need to model the cache in order to know the target address associated with each cache entry (a toy software model of such a cache is sketched at the end of this chapter).

\textbf{DISCUSSION POINT}: This mode needs more analysis before we commit. The primary concern is whether it will result in an overall gain in efficiency.

Packet formats 1 and 2 are used to report differential addresses with and without branch history information respectively. Format 0 is currently reserved. This could be redefined as being equivalent to either format 1, or format 2, but reporting a jump target cache index instead of a differential address. However, ideally we would want the ability to output the equivalent of both format 1 and format 2 with a jump target cache index. In order to do that we will need to add an extra bit somewhere. These are the options:

\begin{itemize}
\item Define format 0 as jump target cache index without branch information, and add a bit to format 1 to indicate whether it contains an address or a jump target cache index;
\item Define format 0 as jump target cache index with branch information, and add a bit to format 2 to indicate whether it contains an address or a jump target cache index;
\item Leave format 0 reserved, and add a bit to both formats 1 and 2 to indicate whether they contain an address or a jump target cache index.
\end{itemize}

For this mode to be useful, the overall efficiency gain achieved as a result of jump target cache indexes requiring fewer bits to encode than differential addresses needs to be enough to overcome the efficiency loss of adding a bit to every packet.
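To make the jump target cache idea above concrete, the following Python fragment is a minimal, illustrative model of an LRU jump target cache as both the encoder and the decoder would have to maintain it. It is not part of the specification; the cache size of 8 entries and the method name are arbitrary choices for the sketch.

\begin{verbatim}
# Toy model of an LRU jump target cache.  The encoder and the decoder each
# keep an identical copy and must update it in lockstep, so that a reported
# index refers to the same target address on both sides.
from collections import OrderedDict

class JumpTargetCache:
    def __init__(self, entries=8):           # size is arbitrary here
        self.entries = entries
        self.cache = OrderedDict()            # address -> None, in LRU order

    def lookup(self, address):
        """Return the cache index if the address is present, else None.
        Either way, the address ends up as the most recently used entry."""
        if address in self.cache:
            index = list(self.cache).index(address)
            self.cache.move_to_end(address)
            return index
        if len(self.cache) >= self.entries:
            self.cache.popitem(last=False)    # evict least recently used
        self.cache[address] = None
        return None
\end{verbatim}

On a hit, the encoder would report the (few-bit) index and the decoder, replaying the same \texttt{lookup}, recovers the full target address; on a miss, both sides insert the address and the encoder falls back to reporting a differential address.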
\section*{Patents} \vspace{-2mm} \begin{itemize} \item \href{http://google.com/patents/US8793499}{U.S.\ patent 8,793,499} for \emph{Nested Digital Signatures with Constant File Size.} \item \href{http://www.google.com/patents/US6987461}{U.S.\ patent 6,987,461} for \emph{System and Method for Addressing Optical Emanations from an Information Processing Device.} \end{itemize}
\documentclass{article} % For LaTeX2e \usepackage{style/nips12submit_e,times} \usepackage{amssymb,amsmath,amsthm} \usepackage{graphicx} \usepackage{style/preamble} \usepackage{natbib} \usepackage{hyperref} \usepackage{color} \definecolor{mydarkblue}{rgb}{0,0.08,0.45} \hypersetup{ pdftitle={}, pdfauthor={}, pdfsubject={}, pdfkeywords={}, pdfborder=0 0 0, pdfpagemode=UseNone, colorlinks=true, linkcolor=mydarkblue, citecolor=mydarkblue, filecolor=mydarkblue, urlcolor=mydarkblue, pdfview=FitH} \title{Bayesian Interpretations of RKHS Embedding Methods} %\title{Bayesian Interpretations of Maximum Mean Discrepancy-based Procedures} \author{ David Duvenaud\\ Department of Engineering\\ University of Cambridge\\ \texttt{[email protected]} \\ } % The \author macro works with any number of authors. There are two commands % used to separate the names and addresses of multiple authors: \And and \AND. % % Using \And between authors leaves it to \LaTeX{} to determine where to break % the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{} % puts 3 of 4 authors names on the first line, and the last on the second % line, try using \AND instead of \And before the third author name. \newcommand{\fix}{\marginpar{FIX}} \newcommand{\new}{\marginpar{NEW}} \nipsfinalcopy % Uncomment for camera-ready version \begin{document} \maketitle \begin{abstract} In recent years, RKHS embeddings have been used to propose new two-sample tests, independence tests, approximate integration, inference and message-passing algorithms. Starting from a correspondence between the Maximum Mean Discrepancy and the posterior variance of the integral of a Gaussian process, we derive corresponding Bayesian interpretations of these methods. Using these interpretations, we then shed light on the original frequentist procedures. \end{abstract} \section{Introduction} \section{Reproducing Kernel Hilbert Space Embeddings} Well-known reproducing property [Saitoh, 1988]: If $\mathcal{F}$ is an RKHS on $\mathbb{R}^D$, with $k(x,x')$ the associated kernel, and $\psi(x)$ be the feature map of $k$, where % \begin{align} \psi(x) = k(\cdot, x) \end{align} then \begin{align} f(x) = \left\langle f, \psi(x)\right\rangle \quad \forall f \in \mathcal{F}, \forall x \in \mathbb{R}^D \end{align} % \section{Maximum Mean Discrepancy} For selecting pseudosamples, herding relies on an objective based on the maximum mean discrepancy \citep{Sriperumbudur2010}. MMD measures the divergence between two distributions, $p$ and $q$ with respect to a class of integrand functions $\mathcal{F}$ as follows: % \begin{align} \mmd_{\mathcal{F}}\left(p,q\right) = \sup_{f\in\mathcal{F}}\left\vert\int f(x) p(x) dx - \int f(x) q(x) dx \right\vert \end{align} Intuitively, if two distributions are close in the MMD sense, then no matter which function $f$ we choose from $\mathcal{F}$, the difference in its integral over $p$ or $q$ should be small. A particularly interesting case is when the function class $\mathcal{F}$ is functions of unit norm from a reproducing kernel Hilbert space (RKHS) $\He$. 
In this case, the MMD between two distributions can be conveniently expressed using expectations of the associated kernel $k(x, x')$ only \citep{Sriperumbudur2010}:
%
\begin{align}
MMD^2_{\He}(p,q) =& \sup_{\substack{f\in\He\\\Hnorm{f}=1}}\left\vert\int f(x) p(x) dx - \int f(x) q(x) dx\right\vert^2\label{eqn:rkhs-mmd}\\
= & \Hnorm{\mu_{p} - \mu_{q}}^2\\
= & \iint k(x,y) p(x) p(y) dx dy - 2 \iint k(x,y) p(x) q(y) dx dy + \iint k(x,y) q(x) q(y) dx dy,
\end{align}
%
where in the above formula
%
\begin{align}
\mu_{p}=\int \phi(\vx)p(\vx)d\vx\in\He
\end{align}
%
denotes the \emph{mean element} associated with the distribution $p$. For characteristic kernels, such as the Gaussian kernel, the mapping between a distribution and its mean element is bijective. As a consequence, $\mmd_{\He}(p,q)=0$ if and only if $p=q$, making it a powerful measure of divergence.

\section{Integrals of functions drawn from Gaussian process}

[In this section, show that you can integrate under a GP prior in closed form]

\begin{figure}[h!]
\centering
\includegraphics[width=11cm]{figures/bq_buildup/buildup_samps3}
\end{figure}

\section{The Link between MMD and GP Integrals}

[This section will show that the posterior variance of the difference between the integral of a GP with respect to two different distributions is equal to MMD. This generalizes a result from \citep{huszar2012herdingbmcuai}]

\begin{prop}
Given that $f$ is drawn from a standard GP prior, the expected squared difference between the integral of $f(x)$ against $p(x)$ and the integral of $f(x)$ against $q(x)$ is equal to the squared Maximum Mean Discrepancy between $p$ and $q$.
\label{prop:gp_mmd}
\end{prop}
%
\begin{proof}
The proof involves invoking the representer theorem, using bilinearity of scalar products and the fact that if $f$ is a standard Gaussian process then $\forall g\in\He: \left\langle f,g\right\rangle \sim \mathcal{N}(0,\Hnorm{g}^2)$:
\begin{align}
%&\varianceargs{}{Z_{f,p}\vert f(x_1), \dots, f(x_N), x_1, \dots, x_N}=\\
%&\varianceargs{}{Z_{f,p}\vert x_1, \dots, x_N}=\\
& \varianceargs{f\sim GP}{ \int f(x) p(x) dx - \int f(x) q(x) dx }\\
&= \mathbb{E}_{f\sim GP} \left( \int f(x) p(x) dx - \int f(x) q(x) dx \right)^2\\
&= \mathbb{E}_{f\sim GP} \left( \int \left\langle f, \phi (x)\right\rangle p(x) dx - \int \left\langle f, \phi (x)\right\rangle q(x) dx \right)^2\\
&= \mathbb{E}_{f\sim GP} \left\langle f, \int\phi(x) p(x) dx - \int\phi(x) q(x) dx \right\rangle^2\\
&= \mathbb{E}_{f\sim GP} \left\langle f, \mu_p - \mu_{q}\right\rangle^2\\
&= \Hnorm{\mu_p - \mu_q}^2\\
&= \mmd^2(p,q)
\end{align}
\end{proof}

\subsection{Other links}
Energy distances and Parzen window estimators. [Smola's talk]

\section{Kernel Herding and Bayesian Quadrature}

In this section we relate kernel herding \citep{chen2010super} to Bayesian quadrature.

\begin{prop}
The expected variance in the Bayesian quadrature estimate $\epsilon^{2}_{\bq{}}$ is the squared maximum mean discrepancy between the target distribution $p(x)$ and $q_{\bq{}}(x) = \sum_{n=1}^{N}w^{(n)}_{\bq{}}\delta_{x_n}(x)$.
\end{prop}
%
\begin{proof}
Because the true integral is $\int f(x) p(x) dx$ and the BQ estimator is given by $\sum_i f(x_i) w_{BQ}^{(i)}$, Proposition \ref{prop:gp_mmd} implies that their expected squared difference is given by $\mmd^2(p,q_{BQ})$.
\end{proof}

\subsection{Empirical estimator}
[Re-derive the empirical estimator]

\section{Kernel Two-sample Tests}

The kernel two-sample test was introduced by \cite{gretton2005kernel, gretton2008kernel}.
It was also proven [cite] that if $X \sim p, Y \sim q$, then $\mmd(X,Y)$ is an unbiased estimator of $\mmd(p,q)$. Furthermore, the empirical MMD converges to the true MMD uniformly for any $p,q$. It is related to Kernel ICA \cite{bach2003kernel}, which normalizes to lessen the effect of the marginals.

\begin{figure}
%\includegraphics[width=\columnwidth]{figures/bound_curve_rkhs}
\caption{A graphical interpretation of the kernel two-sample test: The test statistic measures the expected difference in integrals between the two empirical distributions.}
\label{fig:ktst}
\end{figure}

\subsection{Bayesian Two-sample tests}

A Bayesian two-sample test was proposed by \cite{borgwardt2009bayesian}. The nonparametric version of their test places Dirichlet process mixture of Gaussians priors on $p(x)$ and $q(x)$, and computes the Bayes factor comparing the hypotheses $\mathcal{H}_0 : p(x) = q(x)$ and $\mathcal{H}_1 : p(x) \neq q(x)$. This approach is appealing because it directly compares the likelihood of the two cases that two-sample tests are designed to compare. However, the DP-MoG likelihood ratio cannot be computed in closed form. In comparison, the test statistic given by $\mmd(p,q)$ is an indirect measure by which to compare the two hypotheses of interest, but it does have a closed-form solution.

\section{Hilbert-Schmidt Independence Criterion and GP Integrals}

The HSIC is a test statistic based on the infinite-dimensional Frobenius norm of the cross-covariance matrix of features of $x$ and $y$ [Gretton et al., 2005]:
\begin{align}
\hsic(p(x,y),k_x, k_y) & = || C_{xy} ||^2_{HS} \\
& = \expectargs{x, x', y, y'}{k_x(x,x')k_y(y,y')} + \expectargs{x, x'}{k_x(x,x')}\expectargs{y, y'}{k_y(y,y')} \\
\nonumber & \qquad - 2 \expectargs{x, y}{\expectargs{x'}{k_x(x,x')}\expectargs{y'}{k_y(y,y')}}
\end{align}

\begin{prop}
The HSIC is equivalent to the variance of the difference between the integral of a function drawn from a GP prior against the joint distribution and its integral against the product of the marginal distributions.
\end{prop}
%
\begin{proof}
We use the identity that, for a zero-mean GP with covariance function $k$,
\begin{align}
\expectargs{f\sim \gp}{ f(x,y) f(x',y')} & = k(x,x',y,y')
\end{align}
\begin{align}
& \varianceargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y)p_{xy}(x,y)dxdy - \int \!\!\! \int\!\! f(x',y')p_x(x')p_y(y')dx' dy'}\\
& = \expectargs{f\sim \gp}{ \left( \int \!\!\! \int\!\! f(x,y)p_{xy}(x,y)dxdy - \int \!\!\! \int\!\! f(x',y')p_{x}(x')p_{y}(y')dx' dy' \right)^2}\\
%
& = \expectargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y)p_{xy}(x,y)dxdy \int \!\!\! \int\!\! f(x',y')p_{xy}(x',y')dx'dy'}\\
\nonumber & \qquad -2 \expectargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y)p_{xy}(x,y)dxdy \int \!\!\! \int\!\! f(x',y')p_{x}(x')p_{y}(y')dx'dy'}\\
\nonumber & \qquad + \expectargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y)p_{x}(x)p_{y}(y)dxdy \int \!\!\! \int\!\! f(x',y')p_{x}(x')p_{y}(y')dx'dy'}\\
%
& = \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y')p_{xy}(x,y)p_{xy}(x',y')dxdydx'dy' \\
\nonumber & \qquad + \int \!\!\! \int\!\!\! \int \!\!\! \int\!\! k(x,x',y,y') p_{x}(x)p_{x}(x')dxdx' p_y(y) p_y(y') dydy' \\
\nonumber & \qquad - 2 \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y') p_x(x') dx' p_y(y')dy' p_x(x) p_y(y) dx dy \\
\end{align}
%
Then, using the assumption that $k(x,x',y,y') = k_x(x,x') k_y(y,y')$, we can continue:
%
\begin{align}
& = \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k_x(x,x')k_y(y,y')p_{xy}(x,y)p_{xy}(x',y')dxdydx'dy' \\
\nonumber & \qquad + \int \!\!\! \int\!\!
k_x(x,x')p_{x}(x)p_{x}(x')dxdx'\int \!\!\! \int\!\! k_y(y,y') p_y(y) p_y(y') dydy' \\
\nonumber & \qquad - 2 \int \!\!\! \int\!\! \left[\int\!\! k_x(x,x') p_x(x') dx' \right]\left[\int\!\!k_y(y,y') p_y(y')dy'\right] p_x(x) p_y(y) dx dy \\
%
& = \expectargs{x, x', y, y'}{k_x(x,x')k_y(y,y')} + \expectargs{x, x'}{k_x(x,x')}\expectargs{y, y'}{k_y(y,y')} \\
\nonumber & \qquad - 2 \expectargs{x, y}{\expectargs{x'}{k_x(x,x')}\expectargs{y'}{k_y(y,y')}} \\
& = \hsic(p(x,y),k_x, k_y)
\end{align}
\end{proof}

\subsection{Generalization of HSIC}

If we can come up with a more sensible kernel $k(x,x',y,y')$ than the factored kernel $k(x,x',y,y') = k_x(x,x') k_y(y,y')$, we obtain a more general solution:
%
\begin{align}
GHSIC & = \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y')p_{xy}(x,y)p_{xy}(x',y')dxdydx'dy' \\
\nonumber & \qquad + \int \!\!\! \int\!\!\! \int \!\!\! \int\!\! k(x,x',y,y') p_{x}(x)p_{x}(x')dxdx' p_y(y) p_y(y') dydy' \\
\nonumber & \qquad - 2 \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y') p_x(x') dx' p_y(y')dy' p_x(x) p_y(y) dx dy \\
& = \expectargs{x, x', y, y'}{ k(x,x',y,y')p_{xy}(x,y)p_{xy}(x',y')} \\
\nonumber & \qquad + \expectargs{x, x', y, y'}{ k(x,x',y,y') p_{x}(x)p_{x}(x') p_y(y) p_y(y') } \\
\nonumber & \qquad - 2 \expectargs{x, x', y, y'}{ k(x,x',y,y') p_x(x') p_y(y') p_x(x) p_y(y)}
\end{align}

\begin{figure}
%\includegraphics[width=\columnwidth]{figures/bound_curve_rkhs}
\caption{A graphical interpretation of the HSIC: The expected difference in integrals under the joint empirical distribution, and the product of marginal empirical distributions.}
\label{fig:hsic}
\end{figure}

\subsection{Empirical estimator}

The empirical estimate of HSIC is given by:
\begin{align}
\vectornorm{\hat{\Sigma}^{(N)}_{YX}}^2_{HS} = \frac{1}{N^2}
\end{align}

\section{Mean Embeddings and Conditional Mean Embeddings}

Many results in the MMD literature discuss the \emph{mean embedding} of a distribution in an RKHS. What is the corresponding Bayesian interpretation? Based on a result due to \cite{krikproof}, we can say that if $f \sim \gp$,
%
\begin{align}
\mu_{p(x)} & = \int \! \phi(x) p(x) dx \\
& = \int \! k(x, \cdot) p(x) dx \\
& = \expectargs{f \sim \gp}{ \int \!\! f(x) f(\cdot) p(x) dx} \\
& = \expectargs{f \sim \gp}{ f(\cdot) \int \!\! f(x) p(x) dx}
% & = \int \!\!\! \int \!\! f(x) f(\cdot) p(x) dx \gp \left( f | \vzero, k(x, x') \right) df
\end{align}
%
Thus we can interpret the mean embedding of $p(x)$ as the expected covariance of each point of the function with its integral with respect to $p(x)$.
%For characteristic kernels [cite], the mean embedding fully characterizes the distribution.

\section{Kernel Conditional Independence Tests}

Kernel conditional independence tests were proposed by \cite{fukumizu2008kernel}. The cross-covariance operator is defined (cf.\ Proposition \ref{prop:gp_mmd}) as
\begin{align}
\left\langle f, \Sigma_{YX} g\right\rangle = \cov[f(X), g(Y)] = \expect{f(X)g(Y)} - \expect{f(X)} \expect{g(Y)}
\end{align}
The normalized conditional cross-covariance operator is
\begin{align}
V_{YX|Z} = V_{YZ} V_{ZX}
\end{align}
and the conditional covariance operator is
\begin{align}
\Sigma_{YX|Z} = \Sigma_{YX} - \Sigma_{YZ} \Sigma_{ZZ}\inv \Sigma_{ZX}
\end{align}
Attempted interpretation of the unnormalized version of their test:
\begin{align}
I^{CONDU} & = \varianceargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y, z, z')p_{xy}(x,y|z)dxdy - \int \!\!\! \int\!\!
f(x',y')p_x(x'|z)p_y(y'|z)dx'dy'} \\
& = \expectargs{z\sim p(z)}{\mmd\left( p_{xy}(x,y|z), \, p_x(x|z) p_y(y|z) \right)}
\end{align}
We should probably think of conditional independence given $z$ as a function of $z$, but we can summarize this function by its expectation. The naive test is identically zero under the empirical distribution:
\begin{align}
BCIT_1 & = \expectargs{z\sim p(z)}{\varianceargs{f\sim \gp}{ \int \!\!\! \int\!\! f(x,y)p_{xy}(x,y|z)dxdy - \int \!\!\! \int\!\! f(x',y')p_x(x'|z)p_y(y'|z)dx'dy'}} \\
& = \expectargs{z\sim p(z)}{\mmd\left( p_{xy}(x,y|z), \, p_x(x|z) p_y(y|z) \right)} \\
& = \frac{1}{N} \sum_i \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y',z,z)p(x,y|z)p(x',y'|z)dxdydx'dy' \\
\nonumber & \qquad + \int \!\!\! \int\!\!\! \int \!\!\! \int\!\! k(x,x',y,y',z,z) p(x|z)p(x'|z)dxdx' p(y|z) p(y'|z) dydy' \\
\nonumber & \qquad - 2 \int \!\!\! \int \!\!\!\int \!\!\! \int\!\! k(x,x',y,y',z,z) p(x'|z) dx' p(y'|z)dy' p(x,y|z) dx dy \\
& = 0
\end{align}
using the empirical distribution for $p(x,y|z)$, which will be multiplied by zero whenever $p(z) = 0$.

\subsection{Bayesian Interpretation of HS norm}

The HS-norm of an operator $A$ is given by $||A||_{HS}^2 = \sum_{i,j} \left\langle \psi_j, A \phi_i \right\rangle^2_{\mathcal{H}_2}$.

\section{Related Work}

The MMD has been related to energy distances \cite{sejdinovic2012hypothesis, sejdinovic2012equivalence}, where it is proven that energy distances [define] are equivalent to the MMD. Thus, we can claim as a corollary of Proposition \ref{prop:gp_mmd} that energy distances are also equivalent to the variance of the difference of integrals.

\section{Open Questions}

Can the Bayesian interpretation be extended to covariate shift? What is the interpretation of normalization in the kernel conditional dependence work? Instead of low-rank approximation, can we use other sparse GP methods?

\section{Applications}

\subsection{Semi-supervised classification or ICA}

We can learn the kernel such that the empirical distributions of labeled examples from the same class are forced to be close together, and those from different classes are forced to be far apart. Can we then interpret $f$, the function drawn from the GP prior? Would $f$ then be the ``class membership'' function?

\subsection{The witness function}

Does the witness function have a Bayesian interpretation? It equals $w(x') = \int (p(x)-q(x)) k(x, x') dx$, which is the difference between $p$ and $q$ convolved with the kernel function.
\begin{align}
%&\varianceargs{}{Z_{f,p}\vert f(x_1), \dots, f(x_N), x_1, \dots, x_N}=\\
%&\varianceargs{}{Z_{f,p}\vert x_1, \dots, x_N}=\\
& \mmd^2(p,q) \\
&= \varianceargs{f\sim GP}{ \int f(x) p(x) dx - \int f(x) q(x) dx }\\
&= \varianceargs{f\sim GP}{ \int f(x) \left[ p(x) -q(x) \right] dx}\\
&= \expectargs{f\sim GP}{ \left(\int f(x) \left[ p(x) -q(x) \right] dx \right)^2}\\
&= \expectargs{f\sim GP}{ \left(\int f(x) \left[ p(x) -q(x) \right] dx \right)\left(\int f(x') \left[ p(x') -q(x') \right] dx' \right)}\\
&= \expectargs{f\sim GP}{ \int \int f(x) f(x') \left[ p(x) -q(x) \right] \left[ p(x') -q(x') \right] dx dx' }\\
&= \int \int k(x,x') \left[ p(x) -q(x) \right] \left[ p(x') -q(x') \right] dx dx'\\
&= \int w(x') \left[ p(x') -q(x') \right] dx'
\end{align}
% So, integrating the witness function against $p-q$ gives $\mmd^2$.
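For concreteness, here is the empirical counterpart of this relationship (a standard plug-in sketch, not taken from the references above, assuming samples $x_1, \dots, x_m \sim p$ and $y_1, \dots, y_n \sim q$). The empirical witness function and the resulting (biased) estimate of $\mmd^2$ are
\begin{align}
\hat{w}(t) &= \frac{1}{m}\sum_{i=1}^m k(x_i, t) - \frac{1}{n}\sum_{j=1}^n k(y_j, t), \\
\widehat{\mmd}^2 &= \frac{1}{m}\sum_{i=1}^m \hat{w}(x_i) - \frac{1}{n}\sum_{j=1}^n \hat{w}(y_j) \\
\nonumber &= \frac{1}{m^2}\sum_{i,i'} k(x_i, x_{i'}) + \frac{1}{n^2}\sum_{j,j'} k(y_j, y_{j'}) - \frac{2}{mn}\sum_{i,j} k(x_i, y_j),
\end{align}
so integrating the empirical witness function against the difference of the empirical distributions recovers the usual V-statistic estimate of $\mmd^2$.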
\section{Summary}

\begin{table}[h]
\caption{Almost-equivalent Methods}
\label{table:equivalent-methods}
\begin{center}
\begin{tabular}{llll}
\multicolumn{2}{c}{ \bf Frequentist Method} &\multicolumn{2}{c}{\bf Bayesian Method} \\
\hline \\
$\mathcal{O}(N^2)$ & Kernel herding & $\mathcal{O}(N^3)$ & Bayesian Quadrature \\
$\mathcal{O}(N^2)$ & Hilbert-Schmidt Independence Criterion & $\mathcal{O}(N^3)$ & Variance of difference of integrals \\
$\mathcal{O}(N^2)$ & Kernel Two-sample test & $\mathcal{O}(N^3)$ & Variance of difference of integrals
\end{tabular}
\end{center}
\end{table}

\begin{table}[h]
\caption{Related Methods}
\label{table:related-methods}
\begin{center}
\begin{tabular}{ll}
\multicolumn{1}{c}{ \bf Frequentist Method} &\multicolumn{1}{c}{\bf Bayesian Method} \\
\hline \\
Kernel Regression (Nadaraya-Watson) & GP Regression \\
Functional ANOVA, HKL & Additive Gaussian Processes \\
Kernel Bayes' Rule & Bayesian Quadrature for Ratios \\
Conditional Mean Embeddings & Gaussian Process Dynamic Programming \\
Kernel Message Passing & ???
\end{tabular}
\end{center}
\end{table}

\subsubsection*{Acknowledgments}

We would like to thank Miguel Hernandez-Lobato, Roger Grosse, Philipp Hennig, and Michael Osborne for helpful suggestions and advice.

\bibliographystyle{style/icml2013}
\bibliography{connections}

\end{document}
\documentclass[a4paper]{article} \usepackage[a4paper]{geometry} \usepackage{miscdoc} %\usepackage[scaled=0.85]{luximono} \usepackage{underscore} \begin{document} \title{The \Package{underscore} package} \author{Donald Arseneau\thanks{Documentation file assembled by Robin Fairbairns}} \date{13 September 2006, version 1.0} \maketitle \section*{Features} The package alters the command \cs{_} (which normally prints an underscore character or facsimile) so that the hyphenation of constituent words is not affected, and hyphenation is permitted after the underscore. For example, \begin{quote} \texttt{compound\cs{_}fracture} \end{quote} hyphenates as \begin{quote} \texttt{com- pound\cs{_}- frac- ture} \end{quote} If you prefer the underscore to break without a hyphen (but still with the same rules as for explicit hyphen-breaks) then use the \option[nohyphen] package option. A simple ``\texttt{_}'' acts just like ``\cs{_}'' in text mode, but makes a subscript in maths mode, so \begin{quote} \ttfamily activation_energy \$E_a\$ \end{quote} is printed as \begin{quote} activation_energy $E_a$ \end{quote} Both forms use an underscore character if the font encoding contains one (e.g., with \cmdinvoke{usepackage}[T1]{fontenc} or typewriter fonts in any encoding), but they use a rule if there is no proper character (just as unmodified \LaTeX{} does). \section*{Deficiencies} The skips and penalties ruin any kerning with the underscore character (when a character is used). However, there doesn't seem to be much, if any, such kerning in the EC fonts, and there is never any kerning with a rule. You must avoid ``\texttt{_}'' in file names and in cite or ref tags, or you must use the \Package{babel} package, with its active-character controls, or you must give the \option[strings] option, which attempts to redefine several commands (and may not work perfectly). Even without the \option[strings] option or \Package{babel}, you can use occasional underscores like: ``\cmdinvoke{include}{file\string\string_name}''. \subsection*{Option \protect\option[strings]} The default operation is quite simple and needs no customization; but you must avoid using ``\texttt{_}'' in any place where LaTeX uses an argument as a string of characters for some control function or as a name. These include the tags for \cs{cite} and \cs{ref}, file names for \cs{input}, \cs{include}, and \cs{includegraphics}, environment names, counter names, and placement parameters (like \option[t]). The problem with these contexts is that they are `moving arguments' but LaTeX does not `switch on' the ``\cs{protect} mechanism'' for them. If you need to use the underscore character in these places, the package option \option[strings] is provided to redefine commands that take such a string argument so that protection is applied (with \cs{protect} made to be \cs{string}). The list of commands this provision affects is given in \cs{UnderscoreCommands}, with \cs{do} before each one; plus several others covering \cs{input}, \cs{includegraphics}, \cs{cite}, \cs{ref}, and their variants. Not included are many commands regarding font names, anything with counter names, environment names, page styles, and versions of \cs{ref} and \cs{cite} defined by external packages (e.g., \cs{vref} and \cs{citeyear}). You can add to the list of supported commands by defining \cs{UnderscoreCommands} before loading this package; e.g. 
\begin{quote} \begin{verbatim} \usepackage{chicago} \newcommand{\UnderscoreCommands}{% (\cite already done) \do\citeNP \do\citeA \do\citeANP \do\citeN \do\shortcite \do\shortciteNP \do\shortciteA \do\shortciteANP \do\shortciteN \do\citeyear \do\citeyearNP } \usepackage[strings]{underscore} \end{verbatim} \end{quote} Not all commands can be supported this way! Only commands that take a string argument \emph{first} can be protected. One optional argument before the string argument is also permitted, as exemplified by \cs{cite}: both \cmdinvoke{cite}{tags} and \cmdinvoke{cite}[text]{tags} are allowed. A command like \cs{@addtoreset} which takes two counter names as arguments could not be protected by listing it in \cs{UnderscoreCommands}. \textit{When you use the \option[strings] option, you must load this package \textbf{last}} (or nearly last). There are two reasons for this requirement: \begin{enumerate} \item The redefinitions done for protection must come after other packages define their customized versions of those commands. \item The \option[strings] option requires the ``\texttt{_}'' character to be activated immediately in order for the cite and ref tags to be read properly from the \texttt{.aux} file as plain strings, and this catcode setting might disrupt other packages. \end{enumerate} The \Package{babel} package implements a protection mechanism for many commands, and will be a complete fix for most documents without the \option[strings] option. Many add-on packages are compatible with \Package{babel}, so they will get the strings protection also. However, there are several commands that are not covered by \Package{babel}, but can easily be supported by \option[strings] and \cs{UnderscoreCommands} mechanism. Beware the potential conflict using both \option[strings] and \Package{babel} (though none have been reported, yet); load babel last. \subsection*{Implementation notes} The first setting of ``\texttt{_}'' to be an active character is performed in a local group so as to not interfere with other packages. The catcode setting is repeated with ``\cs{AtBeginDocument}'' so the definition is in effect for the text. However, the catcode setting is repeated immediately when the \option[strings] option is detected. The definition of the active ``\texttt{_}'' is essentially: \begin{quote} \cs{ifmmode} \cs{sb} \cs{else} \cs{BreakableUnderscore} \cs{fi} \end{quote} where \cs{sb} retains the normal subscript meaning of \texttt{_} and where \cs{BreakableUnderscore} is essentially \cs{_}. The rest of the definition handles the \cs{protect}ion without causing \cs{relax} to be inserted before the character. \cs{BreakableUnderscore} uses \cs{nobreak}\cs{hskip}\cs{z@skip} to separate the underscore from surrounding words, thus allowing \TeX{} to hyphenate them, but preventing free breaks around the underscore. Next, it checks the current font family, and uses the underscore character from tt fonts or otherwise \cs{textunderscore} (which is a character or rule depending on the font encoding). After the underscore, it inserts a discretionary hyphenation point as \cs{usc@dischyph}, which is usually just \cs{-} except that it still works in the tabbing environment; if the \option[nohyphen] option is in effect, the empty discretionary \cmdinvoke{discretionary}{}{}{} is used instead. After that, another piece of non-breaking interword glue is inserted. 
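As a rough illustration only (a simplified sketch, not the package's actual code), the behaviour described above amounts to a definition along these lines, ignoring the tt-family test and the \option[nohyphen] option:
\begin{quote}
\begin{verbatim}
\makeatletter
% Sketch of a breakable underscore: glue on both sides keeps the
% surrounding words hyphenatable, and the discretionary hyphen
% allows a break just after the underscore itself.
\newcommand*{\SketchBreakableUnderscore}{%
  \nobreak\hskip\z@skip   % no break before, but words stay hyphenatable
  \textunderscore         % the underscore (character or rule)
  \-                      % discretionary break point (\usc@dischyph)
  \nobreak\hskip\z@skip   % no break after, but words stay hyphenatable
}
\makeatother
\end{verbatim}
\end{quote}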
Ordinarily, the comparison \cs{ifx}\cs{f@family}\cs{ttdefault} will fail because \cs{ttdefault} is `long' whereas \cs{f@family} is not\footnote{the package author says ``boooo hisss'' about this\dots}, but \cs{ttdefault} is redefined to be non-long \cs{AtBeginDocument}. The \cs{_} command is then defined to use \cs{BreakableUnderscore}. If the \option[strings] option has not been given, that is all! Under the \option[strings] option, the list of special commands is processed to: \begin{itemize} \item retain the original command as \cs{US_}\meta{command} (e.g., \cs{US_ref}) \item redefine the command as \cs{US@prot}\cs{US_command} for ordinary commands (\cs{US@prot}\cs{US_ref}) or as \cs{US@protopt}\cs{US_command} when an optional argument is possible (e.g., \cs{US@protopt}\cs{US_bibitem}). \item self-protecting commands (e.g., \cs{cite}) retain their self-protection. \end{itemize} Diagnosing the state of the pre-existing command is done by painful contortions involving \cs{meaning}. \cs{US@prot} and \cs{US@protopt} read the argument, process it with \cs{protect} enabled, then invoke the saved \cs{US_command}. \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
This chapter provides a quick hands-on introduction to using Haizea in simulation mode. Even if you intend to use Haizea in combination with another system, such as OpenNebula, you may still find this guide useful to familiarise yourself with the Haizea configuration file and command-line interface. This chapter assumes that Haizea is installed on your system. If you have arrived at this chapter directly, you may want to read Chapter~\ref{chap:whatis} (``What is Haizea?'') first, although you should still be able to follow the instructions in this chapter without reading Chapter~\ref{chap:whatis} first.

\section{The \texttt{haizea} command}

The main command in the Haizea system is, unsurprisingly, the \texttt{haizea} command. Running this command starts up the Haizea lease manager, which is then ready to receive and schedule lease requests. As described in Chapter~\ref{chap:whatis}, Haizea can run in one of three modes: unattended simulated mode, interactive simulated mode, and OpenNebula mode. In this chapter we will focus on the simulation modes, starting with the ``unattended'' variety. Both simulation modes, and the OpenNebula mode, will be described in more detail in the next chapters.

When running Haizea in unattended simulation mode, the inputs to Haizea are going to be the following:

\begin{description}
\item [The Haizea configuration file:] A text file containing all the options.
\item [A request \emph{tracefile}:] A text file containing a list of lease requests. Since we are using ``simulated time'', we won't be able to use Haizea interactively (we will be able to do this when we switch to the ``real time'' mode later in the chapter). Instead, we need to provide all the lease requests beforehand.
\end{description}

Based on the configuration file and the lease requests, the simulator produces a schedule for those leases, which you will be able to follow through logging messages printed by Haizea. At the end of the simulation, Haizea also saves a fair amount of raw data and statistics to disk which can be used to produce reports and graphs (a module to do this for you is in the works). This mode, with simulated time, is particularly useful when you want to take a list of requests (potentially spanning weeks or months) to see what happens when you tweak the scheduling options (without having to wait weeks or months for the result).

So, let's start by writing a configuration file specifying the simulation options (e.g., the characteristics of the simulated cluster) and the scheduling options.

\section{The configuration file}

A sample configuration file is provided with Haizea and is located in \texttt{/usr/share/haizea/etc/sample\_trace.conf} (or \texttt{\$HOME/share/haizea/etc/sample\_trace.conf} if you installed Haizea in your home directory). For this guide, you may want to make a copy of this file and use that instead (so you can preserve the original sample file). If you look at the contents of the file, you will see that it also includes documentation on every option (if you find the documentation gets in the way, there is also a \texttt{sample\_trace\_barebones.conf} file that has the same options, but without any documentation).

For now, take a look at the following options:

\begin{wideshellverbatim}
[simulation]
starttime: 2006-11-25 13:00:00
resources: 4 CPU:100 Memory:1024
\end{wideshellverbatim}

These options are used to describe the characteristics of our simulated cluster.
In particular, we're using a 4-node cluster, each node with 1 CPU, 1024 MB of memory. In this document, we will represent this cluster over time like this: \begin{center} \includegraphics{images/quickstart_leasegraph1.png} \end{center} For example, the following figure shows a lease scheduled on Node 1 from 13:00 to 14:00: \begin{center} \includegraphics{images/quickstart_leasegraph2.png} \end{center} The \texttt{starttime} option is used to specify the time at which the simulated clock should start. As you will see, the configuration file has an abundance of other options. We will cover some of them in this chapter, but a more complete reference can be found in Appendix~\ref{app:conffile}. \section{The tracefile} As mentioned earlier, the simulator will read trace requests from a tracefile. The location of this tracefile is specified in the configuration file, in the \texttt{[tracefile]} section: \begin{wideshellverbatim} [tracefile] tracefile: /usr/share/haizea/traces/sample.lwf \end{wideshellverbatim} The default value is a sample tracefile included with Haizea. If you copy the file to a different location, make sure to update the \texttt{tracefile} option accordingly. The format of this file is LWF (Lease Workload Format), an XML format which is particular to Haizea. For now, don't worry about parsing the trace format in detail; it is fairly human-readable and you can also find details on the LWF format in Appendix~\ref{app:lwf}. \begin{wideshellverbatim} <lease-workload name="sample"> <description> A simple trace where an AR lease preempts a best-effort lease that is already running. </description> <lease-requests> <!-- The lease requests are initially commented out --> <!-- First lease request --> <!-- ... --> <!-- Second lease request --> <!-- ... --> </lease-requests> </lease-workload> \end{wideshellverbatim} As you can see, there are two lease requests in the file, but they are initially commented out. We will take a closer look at each of these requests next. \section{Running the simulator} Now that we have a configuration file and a tracefile, we can run the simulator. You can run Haizea with the sample configuration file like this: \begin{shellverbatim} haizea -c /usr/share/haizea/etc/sample_trace.conf \end{shellverbatim} Which results in the following output: \begin{wideshellverbatim} [2006-11-25 13:00:00.00] RM Starting resource manager [2006-11-25 13:00:00.00] TFILE Loading tracefile /usr/share/haizea/traces/sample.lwf [2006-11-25 13:00:00.00] TFILE Loaded workload with 0 requests () [2006-11-25 13:00:00.00] CLOCK Starting simulated clock [2006-11-25 13:00:00.00] CLOCK Simulated clock has stopped [2006-11-25 13:00:00.00] RM Stopping resource manager gracefully... [2006-11-25 13:00:00.00] RM --- Haizea status summary --- [2006-11-25 13:00:00.00] RM Number of leases (not including completed): 0 [2006-11-25 13:00:00.00] RM Completed leases: 0 [2006-11-25 13:00:00.00] RM Completed best-effort leases: 0 [2006-11-25 13:00:00.00] RM Queue size: 0 [2006-11-25 13:00:00.00] RM Accepted AR leases: 0 [2006-11-25 13:00:00.00] RM Rejected AR leases: 0 [2006-11-25 13:00:00.00] RM Accepted IM leases: 0 [2006-11-25 13:00:00.00] RM Rejected IM leases: 0 [2006-11-25 13:00:00.00] RM ---- End summary ---- \end{wideshellverbatim} Now that you've seen the tracefile, you can see why the simulator starts up and immediately stops: all the lease requests in the tracefile are commented out, and there's nothing to schedule. 
Go ahead and uncomment the first lease request, which looks like this: \begin{wideshellverbatim} <lease-request arrival="00:00:00"> <lease preemptible="true"> <nodes> <node-set numnodes="1"> <res type="CPU" amount="100"/> <res type="Memory" amount="1024"/> </node-set> </nodes> <start></start> <duration time="01:00:00"/> <software> <disk-image id="foobar.img" size="1024"/> </software> </lease> </lease-request> \end{wideshellverbatim} This is a request for a best-effort lease (notice how the starting time is left empty, meaning it's up to Haizea to determine the start time), requested at time 00:00:00 (right at the start of the simulation), requiring 1 hour, and only one node. Now run Haizea again. You should now see the following: \begin{wideshellverbatim} [2006-11-25 13:00:00.00] RM Starting resource manager [2006-11-25 13:00:00.00] TFILE Loading tracefile /usr/share/haizea/traces/sample.lwf [2006-11-25 13:00:00.00] TFILE Loaded workload with 1 requests (1 Best-effort) [2006-11-25 13:00:00.00] CLOCK Starting simulated clock [2006-11-25 13:00:00.00] LSCHED Lease #1 has been requested. [2006-11-25 13:00:00.00] LSCHED Lease #1 has been marked as pending. [2006-11-25 13:00:00.00] LSCHED Queued best-effort lease request #1, 1 nodes for 01:00:00.00. [2006-11-25 13:00:00.00] LSCHED Next request in the queue is lease 1. Attempting to schedule... [2006-11-25 13:00:00.00] VMSCHED Lease #1 has been scheduled on nodes [1] from 2006-11-25 13:00:00.00 to 2006-11-25 14:00:00.00 [2006-11-25 13:00:00.00] VMSCHED Started VMs for lease 1 on nodes [1] [2006-11-25 14:00:00.00] VMSCHED Stopped VMs for lease 1 on nodes [1] [2006-11-25 14:00:00.00] VMSCHED Lease 1's VMs have shutdown. [2006-11-25 14:00:00.00] CLOCK Simulated clock has stopped [2006-11-25 14:00:00.00] RM Stopping resource manager gracefully... [2006-11-25 14:00:00.00] RM --- Haizea status summary --- [2006-11-25 14:00:00.00] RM Number of leases (not including completed): 0 [2006-11-25 14:00:00.00] RM Completed leases: 1 [2006-11-25 14:00:00.00] RM Completed best-effort leases: 1 [2006-11-25 14:00:00.00] RM Queue size: 0 [2006-11-25 14:00:00.00] RM Accepted AR leases: 0 [2006-11-25 14:00:00.00] RM Rejected AR leases: 0 [2006-11-25 14:00:00.00] RM Accepted IM leases: 0 [2006-11-25 14:00:00.00] RM Rejected IM leases: 0 [2006-11-25 14:00:00.00] RM ---- End summary ---- \end{wideshellverbatim} The above corresponds to the following schedule: \begin{center} \includegraphics{images/quickstart_leasegraph2.png} \end{center} A best-effort request is received at 13:00 and, since the cluster is empty, it is scheduled immediately. Notice how the VMs for the lease start at 13:00 and stop at 14:00. For now, we're assuming that the disk images are predeployed on the physical nodes (we will modify this option in the next section). Now go ahead and uncomment the second lease request, which looks like this: \begin{wideshellverbatim} <lease-request arrival="00:15:00"> <lease preemptible="false"> <nodes> <node-set numnodes="4"> <res type="CPU" amount="100"/> <res type="Memory" amount="1024"/> </node-set> </nodes> <start> <exact time="00:30:00"/> </start> <duration time="00:30:00"/> <software> <disk-image id="foobar.img" size="1024"/> </software> </lease> </lease-request> \end{wideshellverbatim} This is a request for an advance reservation lease (notice how there is an exact starting time specified), requesting all four nodes for 30 minutes. So, what would happen if we also added this AR lease? 
Since it requires all the cluster resources from 13:30 to 14:00, the best-effort lease will be unable to run in that time interval. Since the leases are implemented as VMs, Haizea will still schedule the best-effort lease to start at 13:00, but will suspend it before the AR lease starts, and will resume it once the AR lease has finished. In effect, we want the schedule to look like this: \begin{center} \includegraphics{images/quickstart_leasegraph3.png} \end{center} Uncomment the AR lease request, and run Haizea again. You should now see the following: \begin{wideshellverbatim} [2006-11-25 13:00:00.00] RM Starting resource manager [2006-11-25 13:00:00.00] TFILE Loading tracefile /usr/share/haizea/traces/sample.lwf [2006-11-25 13:00:00.00] TFILE Loaded workload with 2 requests (1 Best-effort + 1 AR) [2006-11-25 13:00:00.00] CLOCK Starting simulated clock [2006-11-25 13:00:00.00] LSCHED Lease #1 has been requested. [2006-11-25 13:00:00.00] LSCHED Lease #1 has been marked as pending. [2006-11-25 13:00:00.00] LSCHED Queued best-effort lease request #1, 1 nodes for 01:00:00.00. [2006-11-25 13:00:00.00] LSCHED Next request in the queue is lease 1. Attempting to schedule... [2006-11-25 13:00:00.00] VMSCHED Lease #1 has been scheduled on nodes [1] from 2006-11-25 13:00:00.00 to 2006-11-25 14:00:00.00 [2006-11-25 13:00:00.00] VMSCHED Started VMs for lease 1 on nodes [1] [2006-11-25 13:15:00.00] LSCHED Lease #2 has been requested. [2006-11-25 13:15:00.00] LSCHED Lease #2 has been marked as pending. [2006-11-25 13:15:00.00] LSCHED Scheduling AR lease #2, 4 nodes from 2006-11-25 13:30:00.00 to 2006-11-25 14:00:00.00. [2006-11-25 13:15:00.00] LSCHED Must preempt leases [1] to make room for lease #2 [2006-11-25 13:15:00.00] LSCHED Preempting lease #1... [2006-11-25 13:15:00.00] LSCHED ... lease #1 will be suspended at 2006-11-25 13:30:00.00. [2006-11-25 13:15:00.00] LSCHED AR lease #2 has been scheduled. [2006-11-25 13:29:28.00] VMSCHED Stopped VMs for lease 1 on nodes [1] [2006-11-25 13:29:28.00] VMSCHED Suspending lease 1... [2006-11-25 13:30:00.00] VMSCHED Lease 1 suspended. [2006-11-25 13:30:00.00] VMSCHED Started VMs for lease 2 on nodes [2, 3, 4, 1] [2006-11-25 13:30:00.00] LSCHED Next request in the queue is lease 1. Attempting to schedule... [2006-11-25 13:30:00.00] VMSCHED Lease #1 has been scheduled on nodes [1] from 2006-11-25 14:00:00.00 (resuming) to 2006-11-25 14:31:04.00 [2006-11-25 14:00:00.00] VMSCHED Stopped VMs for lease 2 on nodes [2, 3, 4, 1] [2006-11-25 14:00:00.00] VMSCHED Resuming lease 1... [2006-11-25 14:00:00.00] VMSCHED Lease 2's VMs have shutdown. [2006-11-25 14:00:32.00] VMSCHED Resumed lease 1 [2006-11-25 14:00:32.00] VMSCHED Started VMs for lease 1 on nodes [1] [2006-11-25 14:31:04.00] VMSCHED Stopped VMs for lease 1 on nodes [1] [2006-11-25 14:31:04.00] VMSCHED Lease 1's VMs have shutdown. [2006-11-25 14:31:04.00] CLOCK Simulated clock has stopped [2006-11-25 14:31:04.00] RM Stopping resource manager gracefully... \end{wideshellverbatim} Notice how the above corresponds to the previous figure. In particular, notice the following: \begin{itemize} \item When the AR lease request is received, Haizea looks at the schedule and determines that the only way to schedule the AR lease is to preempt the best-effort lease. However, instead of cancelling that lease, it will just reschedule it so it is suspended right before the AR lease start. 
Note that Haizea will always try to minimise the number of preemptions (in this case, we're forcing the situation for demonstration purposes) by assigning the AR lease to resources that are available without preempting other leases.
\item Shortly before the AR lease starts, the best-effort lease is suspended (the time required to do this is estimated by Haizea based on an option in the configuration file). When the AR lease ends at 14:00, Haizea begins resuming the suspended best-effort lease.
\end{itemize}

\section{The scheduling options}

Haizea has several scheduling options that control how Haizea selects resources and schedules leases. For instance, the above example assumed that leases can be suspended (which they generally always can be when running as virtual machines). What would happen if this were not possible? You can modify the suspension option in the \texttt{[scheduling]} section to find out:

\begin{wideshellverbatim}
[scheduling]
...
suspension: none
...
\end{wideshellverbatim}

Rerun Haizea. Now, when the AR lease arrives at 13:15, the scheduler will realise it has to preempt the best-effort lease to make room for the AR lease, but will no longer be able to suspend it. The only option is to cancel the best-effort lease and resubmit it to the queue:

\begin{wideshellverbatim}
[2006-11-25 13:15:00.00] LSCHED Preempting lease #1...
[2006-11-25 13:15:00.00] LSCHED ... lease #1 has been cancelled and requeued
\end{wideshellverbatim}

Now, the best-effort lease can only be scheduled after the AR lease, at 14:00:

\begin{wideshellverbatim}
[2006-11-25 13:15:00.00] VMSCHED Lease #1 has been scheduled on nodes [1] from 2006-11-25 14:00:00.00 to 2006-11-25 15:00:00.00
\end{wideshellverbatim}

So, the schedule would end up looking like this:

\begin{center}
\includegraphics{images/quickstart_leasegraph4.png}
\end{center}

Notice how, although suspending a lease is a disruptive activity which can delay the completion time of a best-effort request, it is still much better than completely cancelling a request and waiting for enough resources to accommodate the entire (uninterrupted) duration of the lease.

Another scheduling option you can modify is whether Haizea should transfer the VM's disk image from an image repository before the lease can start. You can do this by modifying the \texttt{lease-preparation} option:

\begin{wideshellverbatim}
[general]
...
lease-preparation: imagetransfer
...
\end{wideshellverbatim}

If you look at the bottom of the sample configuration file, you will find a section called \texttt{[deploy-imagetransfer]} with all the image transfer options. Rerun Haizea again. You should get a schedule similar to the previous one, but with some extra messages indicating that image transfers are taking place:

\begin{wideshellverbatim}
[2006-11-25 13:00:00.00] DEPLOY Starting image transfer for lease 1
[2006-11-25 13:01:22.00] DEPLOY Completed image transfer for lease 1
\end{wideshellverbatim}

As you can see, the best-effort lease can no longer start right at 13:00, since an image transfer has to take place before the starting time. The same is true of the AR lease, but notice how Haizea schedules the image transfer in such a way that the AR lease can still start at 13:30 as planned (instead of delaying the starting time until 13:31:22). There are several other options you can modify in the \texttt{[scheduling]} section, such as what backfilling algorithm to use, whether to allow lease migration or not, etc.
These options are described in the following chapters, and in Appendix~\ref{app:conffile}.

\section{Interactive simulations}

Up to this point, Haizea has been scheduling leases in ``simulated time''. This meant that we provided Haizea with a lease workload beforehand, ran it, and got the results of scheduling that workload much earlier than it would have actually taken to run the leases (e.g., if we requested a 30 minute lease, we didn't have to wait 30 minutes for the lease to complete; Haizea just skipped from the start to the end of the lease). This ``fast forward'' approach is useful if you want to experiment with different scheduling parameters and workloads. However, you can also run Haizea in simulation and in ``real time''. To do this, you need to change the \texttt{clock} option of the \texttt{[simulation]} section:

\begin{wideshellverbatim}
[simulation]
...
clock: real
...
\end{wideshellverbatim}

If you run Haizea in this mode, it will run a daemon that is ready to accept your requests interactively through a command-line interface, instead of processing a list of requests provided beforehand. You should see the following when running the \texttt{haizea} command:

\begin{wideshellverbatim}
Started Haizea daemon with pid NNNN
\end{wideshellverbatim}

You will then get control of your console back. If you're wondering where all the logging messages are being saved to, they're now being sent to a file. The default logfile is \texttt{/var/tmp/haizea.log}. You can take a peek at it like this:

\begin{shellverbatim}
tail /var/tmp/haizea.log
\end{shellverbatim}

You will notice messages like this:

\begin{wideshellverbatim}
[2008-09-24 14:14:18.58] CLOCK Going back to sleep. Waking up at 2008-09-24 14:15:19.00 to see if something interesting has happened by then.
\end{wideshellverbatim}

Since time is not simulated, Haizea doesn't know what the ``next time'' to skip to will be, so it will simply wake up periodically to see if anything interesting has happened (like a new request). This interval can be changed in the configuration file:

\begin{wideshellverbatim}
[simulation]
...
wakeup-interval: 10
...
\end{wideshellverbatim}

However, when Haizea plans an event (e.g., leases that have to start or end), it will wake up specifically to handle that event (instead of waiting for the wakeup interval to conclude). So, let's give Haizea something to do. The \texttt{haizea-request-lease} command is used to request leases. For example, the following command is used to request a 1-node AR lease two minutes in the future, for ten minutes:

\begin{wideshellverbatim}
haizea-request-lease -t +00:02:00 -d 00:10:00 -n 1 --non-preemptible \
                     -c 1 -m 512 -i foobar.img -z 600
\end{wideshellverbatim}

Additionally, you can also write a lease request using the XML format seen previously, save it to a file, and have the \texttt{haizea-request-lease} command parse it:

\begin{wideshellverbatim}
haizea-request-lease -f request.xml
\end{wideshellverbatim}

You can find more details on this command's parameters by running \texttt{haizea-request-lease -h} or taking a look at Appendix~\ref{app:cli}. Once you've submitted the lease, you should see the following:

\begin{wideshellverbatim}
Lease submitted correctly.
Lease ID: 1
\end{wideshellverbatim}

You can check the status of your submitted lease by looking at the log file or, more conveniently, using this command:

\begin{shellverbatim}
haizea-list-leases
\end{shellverbatim}

You should see the following:

\begin{wideshellverbatim}
ID   Type          State       Starting time            Duration     Nodes
1    AR            Scheduled   2009-08-04 11:25:57.00   00:10:00.00  1
\end{wideshellverbatim}

Note: You may not see your lease right away, since Haizea has to ``become aware'' of it (which won't happen until it wakes up to check if there are any new requests). Future versions of Haizea will enable it to be notified immediately of incoming requests.

Remember that the lease has been requested two minutes into the future, so it will remain in a ``Scheduled'' state for a short while. If you run \texttt{haizea-list-leases} periodically, you should see it pass through a couple of other states. If image transfers are still enabled, it will first transition to the ``Preparing'' state:

\begin{wideshellverbatim}
ID   Type          State       Starting time            Duration     Nodes
1    AR            Preparing   2009-08-04 11:25:57.00   00:10:00.00  1
\end{wideshellverbatim}

And then to the ``Active'' state:

\begin{wideshellverbatim}
ID   Type          State       Starting time            Duration     Nodes
1    AR            Active      2009-08-04 11:25:57.00   00:10:00.00  1
\end{wideshellverbatim}

Now let's request a best-effort lease:

\begin{wideshellverbatim}
haizea-request-lease -t best_effort -d 00:10:00 -n 4 --non-preemptible \
                     -c 1 -m 512 -i foobar.img -z 600
\end{wideshellverbatim}

The list of leases will now look like this:

\begin{wideshellverbatim}
ID   Type          State       Starting time            Duration     Nodes
1    AR            Active      2009-08-04 11:25:57.00   00:10:00.00  1
2    Best-effort   Scheduled   Unspecified              00:10:00.00  4
\end{wideshellverbatim}

Note how, for best-effort leases, the starting time is set to ``Unspecified'', which means this time is not specified by the user, but instead determined on a best-effort basis by the scheduler. Since the lease is in a ``Scheduled'' state, that means that it has been assigned a starting time (although that information is currently not available through the command-line interface; it can be seen in the Haizea log).

Now try to rerun the \texttt{haizea-request-lease} command a couple of times (i.e., let's submit a couple more best-effort requests). The scheduler won't be able to schedule them, since they require all the available nodes, and the AR lease is using up one of them. The previous best-effort lease was scheduled because Haizea's default behaviour is to schedule at most one best-effort lease in the future if resources cannot be found right away (this is due to Haizea's use of backfilling algorithms; for now, don't worry if you don't know what they are). Anyway, the list of leases should now look like this:

\begin{wideshellverbatim}
ID   Type          State       Starting time            Duration     Nodes
1    AR            Active      2009-08-04 11:25:57.00   00:10:00.00  1
2    Best-effort   Scheduled   Unspecified              00:10:00.00  4
3    Best-effort   Queued      Unspecified              00:10:00.00  4
4    Best-effort   Queued      Unspecified              00:10:00.00  4
5    Best-effort   Queued      Unspecified              00:10:00.00  4
6    Best-effort   Queued      Unspecified              00:10:00.00  4
\end{wideshellverbatim}

Notice how the extra best-effort requests have been queued.
If you only want to see the contents of the queue, you can use the following command: \begin{shellverbatim} haizea-show-queue \end{shellverbatim} This should show the following: \begin{wideshellverbatim} ID Type State Starting time Duration Nodes 3 Best-effort Queued Unspecified 00:10:00.00 4 4 Best-effort Queued Unspecified 00:10:00.00 4 5 Best-effort Queued Unspecified 00:10:00.00 4 6 Best-effort Queued Unspecified 00:10:00.00 4 \end{wideshellverbatim} When you're done, you can shut Haizea down cleanly by running the following: \begin{shellverbatim} haizea --stop \end{shellverbatim} \section{Other things you can do with Haizea} At this point, we have seen how to run simple simulations with Haizea. However, there is a lot more that Haizea can do: \begin{description} \item[Run on real hardware] First and foremost, almost everything you just saw above in simulation can be done on real hardware. This is accomplished by using Haizea with the OpenNebula virtual infrastructure manager. So, if you have a Xen or KVM cluster, you can just install OpenNebula and Haizea to enable your users to request VM-based leases on your cluster. This is explained in Chapter~\ref{chap:opennebula}. \item[Run complex simulations] This chapter concerned itself mostly with scheduling two leases on a 4-node cluster during a span of roughly 2 hours. \emph{Boring}. Haizea can handle more complex simulations, and also provides the necessary tools for you to easily run multiple simulations with different profiles. For example, in the Haizea paper ``Combining Batch Execution and Leasing Using Virtual Machines'' (see the Haizea publication page: \url{http://haizea.cs.uchicago.edu/pubs.html}) we simulated running 72 30-day workloads in six different configurations, or 36 years of lease scheduling. Running multiple simulations is explained in Section~\ref{sec:multiplesim} \item[Produce reports and graphs] The above examples relied on reading the Haizea log messages or peeking into Haizea's schedule using command-line tools. This is ok for a simple simulation, but no fun when you're scheduling thousands of leases. Haizea saves a fair amount of raw data to disk with scheduling metrics, utilization information, etc. which can be used to generate reports and graphs. We are in the process of producing tools that will allow you to easily analyse that data and create graphs, although some pointers on how to interpret the raw data produced by Haizea are presented in Chapter~\ref{chap:analysing}. \end{description}
\chapter{Equalizers and co-Equalizers in Certain Categories}

It is a rough draft. Errors are possible.

\fxwarning{Change notation $\prod$ $\rightarrow$ $\prod^{(L)}$.}

\section{Equalizers}

Categories $\cont (\mathcal{C})$ are defined above. I will denote by $W$ the forgetful functor from $\cont (\mathcal{C})$ to $\mathcal{C}$. In the definition of the category $\cont (\mathcal{C})$ take values of $\uparrow$ as principal morphisms. \fxwarning{Wording.}

\begin{lem}
Let $f : X \rightarrow Y$ be a morphism of the category $\cont (\mathcal{C})$ where $\mathcal{C}$ is a concrete category (so $W f = \uparrow \varphi$ for a $\mathbf{Rel}$-morphism $\varphi$ because $f$ is principal) and $\im \varphi = A \subseteq \Ob Y$. Factor it $\varphi = \mathcal{E}^{\Ob Y} \circ u$ where $u : \Ob X \rightarrow A$ using properties of $\mathbf{Set}$. Then $u$ is a morphism of $\cont (\mathcal{C})$ (that is a continuous function $X \rightarrow \iota_A Y$).
\end{lem}

\begin{proof}
$(\mathcal{E}^{\Ob Y})^{- 1} \circ \varphi = (\mathcal{E}^{\Ob Y})^{- 1} \circ \mathcal{E}^{\Ob Y} \circ u$;
$(\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ \uparrow \varphi = (\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ \mathcal{E}_{\mathcal{C}}^{\Ob Y} \circ \uparrow u$;
$(\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ \uparrow \varphi = \uparrow u$;
$X \sqsubseteq (\uparrow u)^{- 1} \circ \pi_A Y \circ \uparrow u \Leftrightarrow X \sqsubseteq (\uparrow \varphi)^{- 1} \circ \mathcal{E}_{\mathcal{C}}^{\Ob Y} \circ \pi_A Y \circ (\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ \uparrow \varphi \Leftrightarrow X \sqsubseteq (\uparrow \varphi)^{- 1} \circ \mathcal{E}_{\mathcal{C}}^{\Ob Y} \circ (\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ Y \circ \mathcal{E}_{\mathcal{C}}^{\Ob Y} \circ (\mathcal{E}_{\mathcal{C}}^{\Ob Y})^{- 1} \circ \uparrow \varphi \Leftrightarrow X \sqsubseteq (\uparrow \varphi)^{- 1} \circ Y \circ \uparrow \varphi \Leftrightarrow X \sqsubseteq (W f)^{- 1} \circ Y \circ W f$, which is true by definition of continuity.
\end{proof}

Equational definition of equalizers: \url{http://nforum.mathforge.org/comments.php?DiscussionID=5328/}

\begin{thm}
The following is an equalizer of parallel morphisms $f, g : A \rightarrow B$ of category $\cont (\mathcal{C})$:
\begin{itemize}
\item the object $X = \iota_{\setcond{ x \in \Ob A }{ f x = g x }} A$;
\item the morphism $\mathcal{E}^{\Ob X, \Ob A}$ considered as a morphism $X \rightarrow A$.
\end{itemize}
\end{thm}

\begin{proof}
Denote $e = \mathcal{E}^{\Ob X, \Ob A}$. Let $f \circ z = g \circ z$ for some morphism $z$. Let's prove $e \circ u = z$ for some $u : \Src z \rightarrow X$. Really, as a morphism of $\mathbf{Set}$ it exists and is unique. Consider $z$ as a generalized element. $f (z) = g (z)$. So $z \in X$ (that is $\Dst z \in X$). Thus $z = e \circ u$ for some $u$ (by properties of $\mathbf{Set}$). The generalized element $u$ is a $\cont (\mathcal{C})$-morphism because of the lemma above. It is unique by properties of $\mathbf{Set}$.
\end{proof}

We can (over)simplify the above theorem by the obvious below:

\begin{obvious}
$\setcond{ x \in \Ob A }{ f x = g x } = \dom (f \cap g)$.
\end{obvious}

\section{Co-equalizers}

\url{http://math.stackexchange.com/questions/539717/how-to-construct-co-equalizers-in-mathbftop}

Let $\sim$ be an equivalence relation. Let's denote by $\pi$ its canonical projection.

\begin{defn}
$f / \sim = \uparrow \pi \circ f \circ \uparrow \pi^{- 1}$ for every morphism $f$.
\end{defn}

\begin{obvious}
$\Ob (f / \sim) = (\Ob f) / \sim$.
\end{obvious}

\begin{obvious}
$f / \sim = \langle \uparrow^{\mathsf{FCD}} \pi \times^{(C)} \uparrow^{\mathsf{FCD}} \pi \rangle f$ for every morphism $f$.
\end{obvious}

To define co-equalizers of morphisms $f$ and $g$, let $\sim$ be the smallest equivalence relation such that $f x = g x$.

\begin{lem}
Let $f : X \rightarrow Y$ be a morphism of the category $\cont (\mathcal{C})$ where $\mathcal{C}$ is a concrete category (so $W f = \uparrow \varphi$ for a $\mathbf{Rel}$-morphism $\varphi$ because $f$ is principal) such that $\varphi$ respects $\sim$. Factor it $\varphi = u \circ \pi$ where $u : \Ob (X / \sim) \rightarrow \Ob Y$ using properties of $\mathbf{Set}$. Then $u$ is a morphism of $\cont (\mathcal{C})$ (that is a continuous function $X / \sim \rightarrow Y$).
\end{lem}

\begin{proof}
$f \circ X \circ f^{- 1} \sqsubseteq Y$;
$\uparrow u \circ \uparrow \pi \circ X \circ \uparrow \pi^{- 1} \circ \uparrow u^{- 1} \sqsubseteq Y$;
$\uparrow u \in \mathrm{C} (\uparrow \pi \circ X \circ \uparrow \pi^{- 1} , Y) = \mathrm{C} (X / \sim , Y)$.
\end{proof}

\begin{thm}
The following is a co-equalizer of parallel morphisms $f, g : A \rightarrow B$ of category $\cont (\mathcal{C})$:
\begin{itemize}
\item the object $Y = f / \sim$;
\item the morphism $\pi$ considered as a morphism $B \rightarrow Y$.
\end{itemize}
\end{thm}

\begin{proof}
Let $z \circ f = z \circ g$ for some morphism $z$. Let's prove $u \circ \pi = z$ for some $u : Y \rightarrow \Dst z$. Really, as a morphism of $\mathbf{Set}$ it exists and is unique. $\Src z \in Y$. Thus $z = u \circ \pi$ for some $u$ (by properties of $\mathbf{Set}$). The function $u$ is a $\cont (\mathcal{C})$-morphism because of the lemma above. It is unique by properties of $\mathbf{Set}$ ($\pi$ obviously respects equivalence classes).
\end{proof}

\section{Rest}

\begin{thm}
The categories $\cont (\mathcal{C})$ (for example in $\mathbf{Fcd}$ and $\mathbf{Rld}$) are complete. \fxwarning{Note that a small complete category is a preorder!}
\end{thm}

\begin{proof}
They have products and equalizers.
\end{proof}

\begin{thm}
The categories $\cont (\mathcal{C})$ (for example in $\mathbf{Fcd}$ and $\mathbf{Rld}$) are co-complete.
\end{thm}

\begin{proof}
They have co-products and co-equalizers.
\end{proof}

\begin{defn}
I call morphisms $f$ and $g$ of a category with embeddings \emph{equivalent} ($f \sim g$) when there exists a morphism $p$ such that $\Src p \sqsubseteq \Src f$, $\Src p \sqsubseteq \Src g$, $\Dst p \sqsubseteq \Dst f$, $\Dst p \sqsubseteq \Dst g$ and $\iota_{\Src f, \Dst f} p = f$ and $\iota_{\Src g, \Dst g} p = g$.
\end{defn}

\begin{problem}
Find under which conditions:
\begin{enumerate}
\item Equivalence of morphisms is an equivalence relation.
\item Equivalence of morphisms is a congruence for our category.
\end{enumerate}
\end{problem}
\documentclass[../../main]{subfiles} \pagestyle{fancy} \begin{document} \chapter{Part II Introduction} \thispagestyle{fancy} \section{A virtual hypothesis} \todo[inline]{Write introduction, history of the result etc} \qquad Denote by $\di$ the theory\todo{Rephrase this in terms of generic embeddings?} \eq{ \text{$\zfc + \ch\ + $ there is an $\omega_{1}$-dense ideal on $\omega_{1}$} } and by $\di^+$ the theory \eq{ \text{$\zfc + \ch\ + $ there is an $\omega_{1}$-dense ideal on $\omega_{1}$ such that the induced generic}\\ \text{embedding restricted to the ordinals is independent of the generic object.} } In this paper, we will give a full proof of the following result, \begin{theorem} The following theories are equiconsistent: \begin{enumerate} \item $\zf + \ad_{\mathbb{R}} + \Theta$ is regular, \item $\di^+$ \end{enumerate} \end{theorem} Most of our results don't require the full strength of $\di^+$ and we've stated the needed assumptions in these cases, but the reader may simply assume $\di^+$ for the remainder of this paper if they wish to. \section{Mice and games} \todo[inline]{Define mice, $M_n^\sharp(x)$, iteration game, exit extender, cutpoint, condenses well, relativises well} \section{Core model induction} Before we start with the actual induction, this section will attempt to give the reader an overview of what's going to happen, which will hopefully make it easier to understand the lemmata along the way to the finish line. \qquad A core model induction is a way of producing determinacy models from strong hypotheses. What we're trying to do is to show that many subsets of the reals are determined, so one place to start could be the projective hierarchy, as Martin has shown that $\zfc$ alone proves that all the Borel sets are determined. \qquad To show that the projective sets are determined, we use the M\" uller-Neeman-Woodin result that $\bf\Sigma^1_{n+1}$-determinacy is equivalent to the existence and iterability of $M_n^\sharp(x)$ for every real $x$. So from our given hypothesis we then, somehow, manage to show that all these mice exist, giving us projective determinacy. \qquad A next step could then be to notice that the projective sets of reals are precisely those reals belonging to $J_1(\mathbb R)$, so we would then want to show that \textit{all} the sets of reals in $L(\mathbb R)$ are determined, by an induction on the levels. Kechris-Woodin-Solovay \todo{Check authors.} shows that we only need to check that the sets of reals in $J_{\alpha+1}(\mathbb R)$ for so-called \textit{critical} ordinals $\alpha$ are determined. \qquad This is convenient, since Steel (see \cite{scalesinL(R)}) has characterised these critical ordinals and showed that they fall into a handful of cases, the notable ones being the \textit{inadmissible case} and the \textit{end of gap case}. Long story short, the so-called \textit{Witness Equivalence} shows that to prove $J_{\alpha+1}(\mathbb R)\models\ad$ for a critical ordinal $\alpha$, it suffices to show that $M_n^F(x)$ exists and is iterable for a certain operator $F$, in analogy with what happened with the projective sets. \qquad This part of the induction, showing $\ad^{L{(\mathbb R)}}$, is an instance of an \textit{internal} core model induction: we're showing that all the sets of reals in some fixed inner model are determined. Crucially, for these internal core model inductions to work, we need a \textit{scale analysis} of the model at hand. 
In this paper we will be working with the \textit{lower part model} $\lp(\mathbb R)$, which contains all the sets of reals of $L(\mathbb R)$ and more, and generalisations of such lower part models. The scale analysis of $\lp(\mathbb R)$ is shown in Steel \todo{Reference Scales in $K(\mathbb R)$ and the companion paper.}, and the scale analysis for the generalised versions is shown in Trang and Schlutzenberg \todo{Reference their scale paper.}. \qquad Our first internal step will thus show that every set of reals in $\lp(\mathbb R)$ is determined, which consistency-wise is a tad stronger than having a limit of Woodin cardinals. This first step can be seen as showing that all sets of reals in the pointclass $(\Sigma^2_1)^{\lp(\mathbb R)}$ \todo{Boldface?} are determined, since being in this pointclass precisely means that you belong to an initial segment of $\lp(\mathbb R)$. \qquad The \textit{external} core model induction takes this further. If we define $\Gamma_\infty$ to be the set of all determined sets of reals, we want to see how big this pointclass is. We organise this by looking at the so-called \textit{Solovay sequence} $\bra{\theta_\alpha\mid\alpha\leq\Omega}$ of $L(\Gamma_\infty,\mathbb R)$, whose length can be seen as a measure of ``how many determined sets of reals there are'' in a context without the axiom of choice. \qquad If $\Omega=0$ then it can be shown that $L(\Gamma_\infty,\mathbb R)$ and $\lp(\mathbb R)$ have the same sets of reals\todo{Even equal I think}, so if we want to show that $\Omega>0$ then it suffices to find some determined set of reals which is not in $\lp(\mathbb R)$. This is done by producing a so-called $(\Sigma^2_1)^{L(\Gamma_\infty,\mathbb R)}$-fullness preserving hod pair $(\P_0,\Lambda_0)$, which will have the property that $\Lambda_0\notin\lp(\mathbb R)$ and that $\Lambda_0$ is determined when viewed as a set of reals. This yields a contradiction to $\Omega=0$, so we must have that $\Omega>0$. \qquad Next, if we assume that $\Omega=1$ then we show that $L(\Gamma_\infty,\mathbb R)$ and $\lp^{\Lambda_0}(\mathbb R)$ have the same sets of reals, where $\lp^{\Lambda_0}(\mathbb R)$ is one of the generalised versions of $\lp(\mathbb R)$ we mentioned above. We do another internal induction to show that every set of reals in $\lp^{\Lambda_0}(\mathbb R)$ is determined, and then proceed to construct a $\Sigma^2_1(\Lambda_0)^{L(\Gamma_\infty,\mathbb R)}$-fullness preserving hod pair $(\P_1,\Lambda_1)$, which again has the property that $\Lambda_1\notin\lp^{\Lambda_0}(\mathbb R)$, so that we must have $\Omega>1$. \qquad Now assume that $\Omega=2$ --- this step will look like the general \textit{successor case}. This time we're working with $\lp^{\Gamma_0,\Lambda_1}(\mathbb R)$, where $\Gamma_0:=\Gamma(\P_0,\Lambda_0)$ is a pointclass generated by $(\P_0,\Lambda_0)$. We again produce a $\Sigma^2_1(\Lambda_1)^{\lp^{\Gamma_0,\Lambda_1}(\mathbb R)}$-fullness preserving hod pair $(\P_2,\Lambda_2)$ with $\Lambda_2\notin\lp^{\Gamma_0,\Lambda_1}(\mathbb R)$, showing that $\Omega>2$. \qquad As for the limit case, if we assume that $\Omega=\gamma$, we let $\Gamma_\gamma:=\bigcup_{\alpha<\gamma}\Gamma_\alpha$ and coiterate all the previous mice to get some $\P_\gamma$, which we then have to show has a $\Sigma^2_1(\Lambda_\gamma)^{\lp^{\Gamma_\gamma,\oplus_{\alpha<\gamma}\Lambda_\alpha}}$-fullness preserving iteration strategy $\Lambda_\gamma$. 
As before, this strategy will be determined as a set of reals and won't be in $L(\Gamma_\infty,\mathbb R)$, a contradiction, which shows that $\Omega>\gamma$.

\qquad In this paper, we will be able to handle the tame case, the successor case, and the limit case in which $\Omega$ is singular, so that we end up with $\Omega$ being regular.

\end{document}
{ "alphanum_fraction": 0.7407661678, "avg_line_length": 99.7671232877, "ext": "tex", "hexsha": "60fb0a1383aec4e2f3dc5190de37a043ea43420b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "saattrupdan/phd", "max_forks_repo_path": "chapters/part2/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "saattrupdan/phd", "max_issues_repo_path": "chapters/part2/introduction.tex", "max_line_length": 775, "max_stars_count": null, "max_stars_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "saattrupdan/phd", "max_stars_repo_path": "chapters/part2/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2036, "size": 7283 }
\section{DNS client} % Short description/overview of module functions \subsection{pico$\_$dns$\_$client$\_$nameserver} \subsubsection*{Description} Function to add or remove nameservers. \subsubsection*{Function prototype} \begin{verbatim} int pico_dns_client_nameserver(struct pico_ip4 *ns, uint8_t flag); \end{verbatim} \subsubsection*{Parameters} \begin{itemize}[noitemsep] \item \texttt{ns} - Pointer to the address of the name server. \item \texttt{flag} - Flag to indicate addition or removal (see further). \end{itemize} \subsubsection*{Flags} \begin{itemize}[noitemsep] \item \texttt{PICO$\_$DNS$\_$NS$\_$ADD} - to add a nameserver \item \texttt{PICO$\_$DNS$\_$NS$\_$DEL} - to remove a nameserver \end{itemize} \subsubsection*{Return value} On success, this call returns 0 if the nameserver operation has succeeded. On error, -1 is returned and \texttt{pico$\_$err} is set appropriately. \subsubsection*{Errors} \begin{itemize}[noitemsep] \item \texttt{PICO$\_$ERR$\_$EINVAL} - invalid argument \item \texttt{PICO$\_$ERR$\_$ENOMEM} - not enough space \item \texttt{PICO$\_$ERR$\_$EAGAIN} - resource temporarily unavailable \end{itemize} \subsubsection*{Example} \begin{verbatim} ret = pico_dns_client_nameserver(&addr_ns, PICO_DNS_NS_ADD); ret = pico_dns_client_nameserver(&addr_ns, PICO_DNS_NS_DEL); \end{verbatim} \subsection{pico$\_$dns$\_$client$\_$getaddr} \subsubsection*{Description} Function to translate an url text string to an internet host address IP. \subsubsection*{Function prototype} \begin{verbatim} int pico_dns_client_getaddr(const char *url, void (*callback)(char *ip, void *arg), void *arg); \end{verbatim} \subsubsection*{Parameters} \begin{itemize}[noitemsep] \item \texttt{url} - Pointer to text string containing url text string (e.g. www.google.com). \item \texttt{callback} - Callback function, returning the internet host address IP and the provided argument. The returned string has to be freed by the user. \item \texttt{arg} - Pointer to an identifier for the request. The pointer is returned in the callback. \end{itemize} \subsubsection*{Return value} On success, this call returns 0 if the request is sent. On error, -1 is returned and \texttt{pico$\_$err} is set appropriately. \subsubsection*{Errors} \begin{itemize}[noitemsep] \item \texttt{PICO$\_$ERR$\_$EINVAL} - invalid argument \item \texttt{PICO$\_$ERR$\_$ENOMEM} - not enough space \item \texttt{PICO$\_$ERR$\_$EAGAIN} - resource temporarily unavailable \end{itemize} \subsubsection*{Example} \begin{verbatim} int ret = pico_dns_client_getaddr("www.google.com", cb_getaddr, &identifier); \end{verbatim} \subsection{pico$\_$dns$\_$client$\_$getname} \subsubsection*{Description} Function to translate an internet host address IP to an url text string. \subsubsection*{Function prototype} \begin{verbatim} int pico_dns_client_getname(const char *ip, void (*callback)(char *url, void *arg), void *arg); \end{verbatim} \subsubsection*{Parameters} \begin{itemize}[noitemsep] \item \texttt{ip} - Pointer to text string containing an internet host address IP (e.g. 8.8.4.4) \item \texttt{callback} - Callback function, receiving the url text string. Note: the returned string has to be freed by the user. \item \texttt{arg} - Pointer to an identifier for the request. The pointer is returned in the callback. \end{itemize} \subsubsection*{Return value} On success, this call returns 0 if the request is sent. On error, -1 is returned and \texttt{pico$\_$err} is set appropriately. 
\subsubsection*{Errors} \begin{itemize}[noitemsep] \item \texttt{PICO$\_$ERR$\_$EINVAL} - invalid argument \item \texttt{PICO$\_$ERR$\_$ENOMEM} - not enough space \item \texttt{PICO$\_$ERR$\_$EAGAIN} - resource temporarily unavailable \end{itemize} \subsubsection*{Example} \begin{verbatim} int ret = pico_dns_client_getname("8.8.4.4", cb_getname, &identifier); \end{verbatim}
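The examples above show only the calls that start a request. The following sketch illustrates how the pieces might fit together in application code; it is not part of the DNS client API itself. The callback bodies, the identifier variables, the header names and the use of \texttt{pico$\_$string$\_$to$\_$ipv4} to fill in the nameserver address are illustrative assumptions and may need to be adapted.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include "pico_ipv4.h"        /* header names assumed */
#include "pico_dns_client.h"

/* Hypothetical callbacks. Each receives a heap-allocated string that has
   to be freed by the user (see the descriptions above); treating a NULL
   pointer as failure is an assumption made in this sketch. */
static void cb_getaddr(char *ip, void *arg)
{
    int *id = (int *)arg;          /* identifier passed with the request */
    if (ip) {
        printf("request %d resolved to %s\n", *id, ip);
        free(ip);                  /* returned string must be freed */
    }
}

static void cb_getname(char *url, void *arg)
{
    int *id = (int *)arg;
    if (url) {
        printf("request %d reverse-resolved to %s\n", *id, url);
        free(url);
    }
}

static int id_fwd = 1, id_rev = 2;

void dns_example(void)
{
    struct pico_ip4 ns;
    pico_string_to_ipv4("8.8.8.8", &ns.addr);   /* assumed helper */

    if (pico_dns_client_nameserver(&ns, PICO_DNS_NS_ADD) < 0)
        printf("adding nameserver failed\n");

    if (pico_dns_client_getaddr("www.google.com", cb_getaddr, &id_fwd) < 0)
        printf("getaddr request failed\n");

    if (pico_dns_client_getname("8.8.4.4", cb_getname, &id_rev) < 0)
        printf("getname request failed\n");
}
\end{verbatim}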
{ "alphanum_fraction": 0.7564300412, "avg_line_length": 33.5172413793, "ext": "tex", "hexsha": "8d4a51b4c5762e16636d1236f65afd45eed95ef4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-23T22:33:35.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-23T22:33:35.000Z", "max_forks_repo_head_hexsha": "aa4a14398f47dcf5189983d60223260b5134b5d5", "max_forks_repo_licenses": [ "0BSD" ], "max_forks_repo_name": "warpBytes/multizone-secure-iot-stack", "max_forks_repo_path": "ext/picotcp/docs/user_manual/chap_api_dns_c.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aa4a14398f47dcf5189983d60223260b5134b5d5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "0BSD" ], "max_issues_repo_name": "warpBytes/multizone-secure-iot-stack", "max_issues_repo_path": "ext/picotcp/docs/user_manual/chap_api_dns_c.tex", "max_line_length": 159, "max_stars_count": 3, "max_stars_repo_head_hexsha": "aa4a14398f47dcf5189983d60223260b5134b5d5", "max_stars_repo_licenses": [ "0BSD" ], "max_stars_repo_name": "warpBytes/multizone-secure-iot-stack", "max_stars_repo_path": "ext/picotcp/docs/user_manual/chap_api_dns_c.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-23T16:40:14.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-06T17:52:43.000Z", "num_tokens": 1124, "size": 3888 }
% !TeX root = ./report.tex
\maketitle
%\listoftodos
\begin{abstract}
The geometric independent set problem is a special case of the independent set problem where the graph is the intersection graph of a set of geometric objects. The challenge of the geometric independent set problem is to find the biggest set of non-intersecting objects, i.e. the maximum independent set (MIS). In this report I will present polynomial time approximation schemes (PTAS) for the geometric independent set problem for unit disk graphs (UDG) and for unit height rectangles. The geometric independent set problem is in general $\NP$-hard and therefore intractable. The algorithms presented here use different approaches to guarantee a polynomial running time for a constant approximation quality.
\end{abstract}
\tableofcontents
\section{Introduction}
The geometric independent set problem, i.e. finding a (maximum) set of non-intersecting geometric shapes, is a problem that can arise in many different fields. In this report I will introduce algorithms to give approximate solutions for the MIS of unit disk graphs as well as for the geometric independent set problem for unit height rectangles. Unit disks may correspond to the areas of influence of radio towers, where frequencies can only be assigned to towers whose areas do not intersect (i.e. an independent set) in order to prevent interference~\cite{chamaret}. The maximum independent set of unit height rectangles could correspond to a labelling of a two dimensional map with a uniform font~\cite{agarwallabel}.
\subsection{Maximum independent set problem for graphs}
An independent set of a graph is a set of vertices that are pairwise not adjacent (connected with an edge). Finding a maximum independent set of an arbitrary graph is known to be an \NP-complete problem~\cite{misnp}. The best known exact algorithms have a runtime of \bigo{1.22^n} \cite{exacta,exactb} (already quite good compared to the \bigo{n^2 2^n} naive brute force approach resulting from checking every subset of vertices). If the graph is an interval intersection graph (see \Fref{sec:ig}) the MIS can be found in polynomial time (similar to the algorithm described in~\Fref{sec:greedyalg}); for many other classes of intersection graphs the problem is still \NP-complete, but approximate solutions can be found efficiently.
\subsection{Polynomial time approximation scheme}
As finding exact solutions for \NP-hard problems is not feasible in most cases, algorithms that solve a given instance of an intractable problem in polynomial time with a bounded loss in solution quality are of interest. A so-called polynomial time approximation scheme (PTAS) is a family of algorithms that, for any fixed $\varepsilon > 0$, finds in polynomial time a solution whose quality is within a factor of $\rho = 1+\varepsilon$ of the optimum. That is, if the optimal solution has quality $S_\text{OPT}$, the algorithm finds a solution with a quality of at least $\nicefrac{S_\text{OPT}}{1+\varepsilon}$ for a maximization problem and a solution with a quality of at most $(1+\varepsilon)\cdot S_\text{OPT}$ for a minimization problem.
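As a concrete illustration of this guarantee (with made-up numbers): for $S_\text{OPT}=100$ and $\varepsilon = 0.1$, a PTAS may return any solution of quality at least $\nicefrac{100}{1.1}\approx 90.9$ for a maximization problem, and any solution of quality at most $110$ for a minimization problem.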
In both cases the algorithm has runtime polynomial in input-size though the dependency on $\varepsilon$ is usually not polynomial. The subset of problems in \NP\ (actually $\mathcal{NPO}$ -- \NP-\ Optimization) with a polynomial time approximation scheme with constant quality bound (with \bigo{1}-approximation) is also called $\mathcal{APX}$. If the problem admits $1\pm \varepsilon$- approximation schemes (for arbitrarily small $\varepsilon > 0$) it is in the subset $\mathcal{PTAS}$ of $\mathcal{APX}$. \subsection{Intersection graphs}\label{sec:ig} An intersection graph of a set of objects represents how these objects intersect each other. \Fref{fig:igex} shows an example of such an arrangement and its corresponding intersection graph. \begin{figure*}[!h] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.6] \clip (-1.2,-1.2)rectangle(7.1,4.6); \newcommand{\centers}{(0,1),(0,0),(5,3),(2,1),(0.5,0.7),(4,3),(1.6,3.5),(6,0.5)} \foreach [count=\i] \coord in \centers{\draw[thick] \coord circle(1);} \end{tikzpicture} \caption{Geometric shapes (unit disks)} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.6] \clip (-1.2,-1.2)rectangle(7.1,4.6); % link centers if circles intersect \newcommand{\centers}{(0,1),(0,0),(5,3),(2,1),(0.5,0.7),(4,3),(1.6,3.5),(6,0.5)} \foreach [count=\i] \coord in \centers{\draw[gray!30,thick] \coord circle(1);} \foreach[count=\i] \a in \centers { \foreach[count=\j] \b in \centers { \ifnum \j < \i \draw[very thick] let \p1=\a, \p2=\b, \n1={veclen(\x1-\x2,\y1-\y2)} in {\ifdim \n1 < 2 cm \a -- \b \fi}; \fi } } % draw circles \foreach \coord in \centers{\fill \coord circle(3pt);} \end{tikzpicture} \caption{Corresponding intersection graph} \end{subfigure} \caption{Arrangement of geometric shapes and corresponding intersection graph}\label{fig:igex} \end{figure*} It is also possible to reduce the edge set of the intersection graph to e.g.\ those edges where the corresponding objects intersect on an area bigger than a certain threshold (e.g. finding feasible locations for base stations where overlapping regions are inefficient). An example of this is in \Fref{fig:igex2} with a threshold of $20\%$ of the circle area. 
\begin{figure}\centering \end{figure} \begin{figure*}[!h] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.35,rotate=90] \newcommand{\centers}{(0,0),(2,1),(1.6,3.5),(0,5),(5,3),(0.6,7),(4,6)} { \foreach \coord in \centers{\draw[thick] \coord circle(2);}} \newcommand{\centerss}{(0,0),(1.6,3.5),(5,3),(0.6,7),(4,6)} \end{tikzpicture} \caption{Geometric shapes (unit disks)} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.35,rotate=90] \newcommand{\centers}{(0,0),(2,1),(1.6,3.5),(0,5),(5,3),(0.6,7),(4,6)} { \foreach \coord in \centers{\draw[gray!40, thick] \coord circle(2);}} {\draw[ultra thick] (0,0) -- (2,1) -- (1.6,3.5) -- (0,5)-- (0.6,7);} { \foreach \coord in \centers{\fill \coord circle(0.15);}} \end{tikzpicture} \caption{Corresponding intersection graph} \end{subfigure} \caption{Edges in IG correspond to at least $20\%$ intersecting area of the circles.}\label{fig:igex2} \end{figure*} \subsubsection{Construction of intersection graphs from geometric objects} Given an arrangement $\mathcal A$ of $n$ geometric objects its intersection graph $G=(V,E)$ can be obtained in the following way: \begin{itemize} \item The vertex set $V$ consists of $n$ vertices --- each of them uniquely represents one of the geometric shapes of $\mathcal A$ \item For every pair $s_i, s_j$ of vertices check whether they intersect each other and add the edge $\langle i, j\rangle$ to the edge set $E$ if this is the case. \end{itemize} Obviously the runtime of this scheme is \bigo{n^2}. The worst case performance of finding an intersection graph is always \bigo{n^2} as there can be a quadratic amount of intersections (if all objects intersect each other). If there are less intersection this can usually be improved to \bigo{n \log n + I} where $I$ is the amount of intersections (once again, with quadratic upper bound) and the other term is from sorting the objects according to some criterion, e.g. the leftmost point of the object on the x-axis. Sweepline algorithms are used for this, the best known is the Bentley--Ottman algorithm for line segments~\cite{bentleyott} with runtime \bigo{n \log n +n I}. \subsubsection{Geometric representation of intersection graph} %Every graph can be interpreted as intersection graph (Erdös et al.~\cite{erdos1966representation} gave a constructive proof with of some weirdly shaped geometric objects\todo{genauer erklären}, Finding a valid representation, i.e. a mapping $f : V \to \mathbb R^d$, from the set of vertices to a vector representation of the geometric objects defined by $x_1, \ldots x_d \in \mathbb R$ (e.g. for disks in the plane $d=3$ ($x-,y-$ coordinates and radius), for rectangles $d=4$ (two points with two coordinates each)) is in many cases a $\NP$-hard problem \cite{nphard} and it is therefore not feasible to find a geometric representation of a given graph and use the additional information to simplify the general independent set problem. \subsection{Robust algorithms} An algorithm $\mathcal A$ computes a function $f: \mathcal G \to \mathcal H$. If the algorithm is only able to compute the correct result $f(i)$ for $i \in \mathcal U \subset G$ there are elements in $\mathcal G$ that are not in $\mathcal U$ and therefore the algorithm may not compute the correct result (or may not terminate at all). 
\begin{definition}
An algorithm $\mathcal A$ computes $f$ \textit{robustly on $\mathcal U$} if:
\begin{itemize}
	\item for all instances $i\in\mathcal U$ it returns the correct result $f(i)$
	\item for all instances $i\in\mathcal G\setminus \mathcal U$ the algorithm either returns $f(i)$ or a certificate showing that $i\notin \mathcal U$.
\end{itemize}
\end{definition}
\section{M(W)IS for unit disk graphs (UDG)}
In this section I will present a robust PTAS for both the weighted and the unweighted unit disk graph independent set problem, as introduced by Nieberg et al.~\cite{nieberg}. The algorithm does not depend on the geometric representation, though the polynomial runtime is only guaranteed if such a representation exists, because the analysis uses the area $\pi$ of the unit disks to get an upper bound on the number of disks of an independent set that fit into a region of bounded area.
\subsection{Preliminaries}
A unit disk graph $G=(V,E)$ is a graph for which there exists a geometric representation $f:V \to \mathbb R^2$, i.e.\ a function that maps every vertex to a point in the real Cartesian plane (the center point of the corresponding disk in the geometric representation) such that:
\begin{align}
	(u,v) \in E \leftrightarrow ||f(u) - f(v)|| \leq 2.\label{eq:maxdist}
\end{align}
As finding this representation is \NP-hard, it is not feasible to compute a valid representation of the given graph. It is also \NP-hard to determine whether a valid geometric representation even exists~\cite{nphard}. Furthermore, let $\rho = 1+\varepsilon$ denote the desired approximation ratio, where $\varepsilon > 0$.
\subsection{MIS for unweighted UDG}\label{sec:misudg}
The algorithm is given a UDG $G = (V,E)$ and the desired result is a set $I \subseteq V$ with a cardinality of at least $\alpha(G)\rho^{-1}$, where $\alpha(G)$ is the maximum size of an independent set in $G$. The algorithm starts at an arbitrary node $v\in V$ and computes the sets
\begin{align*}
	N_r = N_r(v) := \{w\in V\mid w \text{ has distance at most $r$ from $v$}\}
\end{align*}
for $r = 0,1,\ldots$. Starting from $N_0$ the algorithm computes the maximum independent set $I_r \subset N_r$ of these $r$-neighborhoods until the condition
\begin{align}
	|I_{r+1}| > \rho|I_r| \label{eq:udgcond}
\end{align}
is no longer fulfilled; let $\bar r$ be the smallest $r$ where \fref{eq:udgcond} is violated. Such a $\bar r$ must exist and it has a constant upper bound. \Fref{fig:udgalg} shows how these neighborhoods are selected on an example graph.
\input{img/NIn.tex}
\begin{theorem}[Nieberg et al.~\cite{nieberg}]
There exists a constant $c= c(\rho)$ such that $\bar r \leq c$.
\end{theorem}
\begin{proof}
Due to \Fref{eq:maxdist} any vertex $w\in N_r$ satisfies
\begin{align*}
	||f(w) - f(v)|| \leq 2r.
\end{align*}
It is therefore possible to draw a circle with radius $R = 2r+1$ that contains all disks representing the vertices in $N_r$. This circle also contains the disk representations of the vertices in $I_r$, which have a disjoint area of $\pi$ each (as their radius equals $1$).
This leads to an upper bound on the size of each independent set:
\begin{align}
	|I_r| \leq \pi R^2 /\pi = (2r+1)^2 = O(r^2).\label{eq:udupper}
\end{align}
From \Fref{eq:udgcond} it follows that
\begin{align}
	|I_r| > \rho|I_{r-1}| > \ldots > \rho^r|I_0| = \rho^r.\label{eq:udglower}
\end{align}
Combining \eqref{eq:udupper} and \eqref{eq:udglower} yields
\begin{align}
	\rho^r < |I_r| \leq O(r^2), \label{eq:upperboundlol}
\end{align}
a pair of inequalities in which the lower bound grows asymptotically faster than the upper bound as $r$ increases -- therefore there exists a constant upper bound $c(\rho)$ for $r$.
\end{proof}
The fact that the sizes of the independent sets calculated for the various $N_r$ ($r\leq \bar r$) are bounded by a constant implies a polynomial runtime of $\bigo{n^{C^2}}$ (where $C = \bigo{\bar r} = \bigo{1/\varepsilon^2 \log (1/\varepsilon)}$).
The full algorithm proceeds as follows:
\begin{enumerate}
	\item Calculate the independent set $I_{\bar r}$ starting from an arbitrary vertex $v$.
	\item Remove the vertices in $N_{\bar r +1 }$ from the graph $G$ to get the graph $G' = (V',E') = G\setminus N_{\bar r +1}$ (including edges incident to any of the removed vertices).
	\item Repeat the previous two steps for $G'$ until there is no vertex left, i.e.\ $G' = (\varnothing,\varnothing)$.
	\item Combine all $I_{\bar r}$ to get a $\rho$-approximate maximum independent set.
\end{enumerate}
\subsubsection{Proof of correctness \& approximation guarantee}
The correctness and the approximation guarantee follow from the following two theorems by Nieberg et al.~\cite{nieberg}:
\begin{theorem}
Suppose that we can compute an independent set $I'\subset V\setminus N_{\bar r+1}$ of the graph $G'$. Then $I:=I_{\bar r} \cup I'$ is an independent set for $G$.
\end{theorem}
\begin{proof}
A vertex $v\in I'\subset V'$ has no neighbor $n\in N_{\bar r}$: its distance from the starting vertex is at least $\bar r +2$ (otherwise it would have been in $N_{\bar r+1}$ and removed from the vertex set of $G'$), so it cannot be adjacent to any vertex at distance at most $\bar r$ from the starting vertex. Therefore $I$ is an independent set.
\end{proof}
\begin{theorem}
Suppose inductively that we can compute a $\rho$-approximate independent set $I'\subset V\setminus N_{\bar r+1}$ of the graph $G'$. Then $I:=I_{\bar r} \cup I'$ is a $\rho$-approximate independent set for $G$.
\end{theorem}
\begin{proof}
As $\bar r$ is chosen in such a way that the maximum independent set of $N_{\bar r}$ is a $\rho$-approximate independent set of $N_{\bar r +1}$ (ensured by \Fref{eq:udgcond}):
\begin{align*}
	|I_{\bar r +1}| \leq \rho|I_{\bar r}|.
\end{align*}
Therefore the following holds:
\begin{align*}
	\alpha(G[N_{\bar r +1}]) \leq \rho|I_{\bar r}| = \rho\alpha(G[N_{\bar r}]).
\end{align*}
Furthermore, the size of a maximum independent set of a graph is at most as large as the sum of the sizes of maximum independent sets of the subgraphs induced by a partition of the vertex set. Applying this here, together with the induction hypothesis $\alpha(G')\leq \rho|I'|$, yields
\begin{align*}
	\alpha(G) \leq \alpha(G[N_{\bar r +1 }]) + \alpha(G[V\setminus N_{\bar r +1}]) \leq \rho|I_{\bar r}| + \rho|I'| = \rho|I|,
\end{align*}
and thus proves that the desired approximation guarantee is fulfilled.
%\TODO[maybe make this proof a little easier to understand]
\end{proof}
\subsection{MWIS for weighted UDG}\label{sec:mwisudg}
If the given UDG has a vector $\vec w$ assigning a positive weight $w_i$ to every vertex $v_i$ and the objective is to find an independent set of maximum weight, the algorithm described in \Fref{sec:misudg} can easily be adapted to yield an independent set whose weight is at least $\rho^{-1}$ times the weight of a maximum-weight independent set. The following things have to be adapted:
\paragraph{Starting node $v_0$:} Don't start at an arbitrary node $v_0$ but at the node with the highest weight.
\paragraph{Stopping criterion:} Modify the stopping criterion in \Fref{eq:udgcond} to consider the weight instead of only the size. The function $W$ maps a vertex to its weight and a set of vertices to the sum of the weights of its elements.
\begin{align*}
	W(I_{r+1}) > \rho W(I_r)
\end{align*}
Obtaining a constant upper bound (as in \Fref{eq:upperboundlol}) is rather straightforward:\\
\begin{theorem}[Nieberg et al.~\cite{nieberg}]
There exists a constant $c= c(\rho)$ such that $\bar r \leq c$.
\end{theorem}
\begin{proof}
The idea of the proof is the same as for the unweighted case. As the upper bound use:
\begin{align*}
	W(I_r) &= \sum_{i\in I_r}W(v_i)\leq \sum_{i\in I_r}W(v_0) = |I_r|w_0
	\intertext{where $w_0$ is the weight of $v_0$ -- the vertex with the highest weight. And as the lower bound use:}
	W(I_r) &> \rho W(I_{r-1}) > \ldots > \rho^r W(I_0) = \rho^r w_0
\end{align*}
As it still holds that $|I_r|$ is bounded by $\bigo{r^2}$ (using the area of the unit disks), we get the same upper and lower bounds as before, but with an additional constant factor for the weight of the initial node.
\end{proof}
\subsection{Robustness}
The algorithms outlined in \Fref{sec:misudg} and \Fref{sec:mwisudg} are PTASs with the desired solution quality as long as the graph really is a unit disk graph. To get a robust PTAS for arbitrary graphs from this, only some minor modifications are necessary. Observe that in both cases neither the approximation quality nor the overall correctness depended on any properties of a UDG; the area of the disks was only used to get the constant bound on the sizes of the independent sets $I_r$.\\
The only necessary modification is therefore to check whether an independent set of size $|I_r^*| > (2r+1)^2$ can be found and, if this is the case, to return it as a certificate that the given graph is not a UDG. Finding this set is also possible in polynomial time: it is not necessary to find a maximum independent set $I_r\subset N_r$ (whose size would be unbounded for arbitrary graphs and could therefore not be determined in polynomial time); an independent subset with a size of at least $(2r+1)^2+1$ suffices, which can still be found in polynomial time.
\section{MIS for unit height rectangles}
The maximum independent set problem for unit height rectangles is a problem that arises when labelling maps. The goal is to find a maximum non-intersecting set of possible labels in the plane to ensure legibility. In this section I will present a $2$-approximation algorithm with \bigo{n \log n} runtime for the unweighted MIS problem for unit height rectangles introduced by Agarwal et al.~\cite{agarwallabel}, which can be improved upon with a dynamic programming approach to get a $(1+\nicefrac{1}{k})$-approximation algorithm with runtime \bigo{n \log n + n^{2k-1}} for $k\geq 1$.
The idea of the improvement will be presented here; for the technical details the original paper has to be consulted. For these algorithms the geometric arrangement of the rectangles is required.
\subsection{A $2$-approximation algorithm}
The set of $n$ rectangles $R$ with unit height is given. The maximum independent set of rectangles is $I_\text{OPT}$. To get a $2$-approximate independent set, the algorithm partitions the $n$ rectangles into disjoint subsets, calculates the maximum independent set of each of those subsets and returns the union of a suitable selection of those independent sets. To divide the rectangles into sets, horizontal lines $\ell_1, \ell_2,\ldots,\ell_m$ where $m\leq n$ are drawn such that the following three conditions hold:
\begin{itemize}
	\item The distance between two adjacent lines is $>1$, i.e.\ bigger than the height of a rectangle
	\item Each line intersects at least one rectangle
	\item Each rectangle is intersected by exactly one line
\end{itemize}
To find the correct positions for the lines, first sort the rectangles according to their vertical coordinates, then draw the first line at the top of the rectangle with the smallest $y$-coordinate and each following line at the top of the rectangle with the smallest $y$-coordinate that is not yet intersected by a line. \Fref{fig:uhr1a} illustrates this. Now every rectangle has an intersecting line and the initial arrangement of rectangles is partitioned into sets $R_1,\ldots,R_m$ where for every $1\leq i \leq m$ the set $R_i$ contains those rectangles that are intersected by the line $\ell_i$. Due to the fact that the distance between two lines is bigger than the height of the rectangles, a rectangle in the set $R_i$ can only intersect rectangles in the sets $R_{i-1}$, $R_i$ and $R_{i+1}$. Therefore the maximum independent sets $I_i$ of the even-indexed sets $R_{2n}$ and of the odd-indexed sets $R_{2n+1}$ (for all valid indices $\leq m$) can be combined into two independent sets $I_{\text{even}} = \bigcup_{n\geq 1} I_{2n}$ and $I_{\text{odd}} = \bigcup_{n\geq 0} I_{2n+1}$.
\subsubsection{Greedy algorithm for MIS of intervals}\label{sec:greedyalg}
The MIS of every subset $R_i$ can be found in polynomial time with a greedy algorithm. As can be seen in \Fref{fig:uhr1c}, two rectangles that are intersected by a common horizontal line intersect each other iff the intervals of their $x$-coordinates intersect each other. Therefore the problem of finding a MIS of the rectangles in one of the sets $R_i$ can be reduced to finding a MIS of the horizontal projections of these rectangles. For this a greedy algorithm with runtime \bigo{n\log n} exists. The idea of this algorithm is the following inductive argument: For every point $p$ on the $x$-axis at most one interval in the MIS can be \textit{active}, i.e.\ have a start point $s$ and an end point $e$ s.t.\ $s\leq p\leq e$. It follows that if two (or more) intervals intersect each other at one point, at most one of these intervals can be added to an independent set. Choosing the interval with the smallest (leftmost) endpoint $e$ leaves the most room for choosing further intervals to its right (see \Fref{fig:uhr1d}).
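To make the greedy step concrete, the following is a minimal sketch in C; it is not taken from~\cite{agarwallabel}, and the interval representation and function names are chosen here purely for illustration. It assumes the horizontal projections of the rectangles of one set $R_i$ are given as an array of closed intervals.
\begin{verbatim}
#include <stdlib.h>

/* Horizontal projection [start, end] of one rectangle. */
typedef struct { double start, end; } interval;

/* qsort comparator: order intervals by their right endpoint. */
static int cmp_by_end(const void *a, const void *b)
{
    const interval *x = a, *y = b;
    return (x->end > y->end) - (x->end < y->end);
}

/* Greedy MIS of intervals: scan the intervals in order of increasing
   right endpoint and take every interval that does not overlap the
   previously taken one. chosen[i] is set to 1 for selected intervals
   (indices refer to the sorted order); the return value is the size
   of the independent set. Runtime: O(n log n) for sorting plus a
   linear scan. */
size_t greedy_interval_mis(interval *iv, size_t n, int *chosen)
{
    size_t i, count = 0;
    double last_end = 0.0;  /* right endpoint of the last chosen interval */
    int have_last = 0;

    qsort(iv, n, sizeof(interval), cmp_by_end);
    for (i = 0; i < n; i++) {
        chosen[i] = 0;
        if (!have_last || iv[i].start > last_end) {
            chosen[i] = 1;
            last_end = iv[i].end;
            have_last = 1;
            count++;
        }
    }
    return count;
}
\end{verbatim}
A full implementation would run this routine once for every set $R_i$ and then keep the larger of the two unions $I_\text{even}$ and $I_\text{odd}$ described above.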
\begin{figure*}[!h] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \definecolor{cadmiumred}{rgb}{0.89, 0.0, 0.13} \definecolor{cadmiumgreen}{rgb}{0.01, 0.75, 0.24} \begin{tikzpicture} \draw[very thick,cadmiumred] (1,0) rectangle+(1,1); \draw[very thick,cadmiumred] (6,0) rectangle+(0.5,1); \draw[very thick,cadmiumred] (3.5,0.2) rectangle+(1.5,1); \draw[very thick,cadmiumred] (5.2,0.4) rectangle+(1,1); \draw[very thick,cadmiumred] (3,0.8) rectangle+(0.9,1); \draw[very thick,MidnightBlue] (2.2,1.3) rectangle+(1.5,1); \draw[very thick,MidnightBlue] (1,1.4) rectangle+(1.9,1); \draw[very thick,MidnightBlue] (3.5,1.6) rectangle+(2,1); %\definecolor{darkpastelgreen}{rgb} \draw[very thick,cadmiumgreen] (0.2,2.5) rectangle+(1.9,1); \draw[very thick,cadmiumgreen] (6,2.5) rectangle+(1.2,1); \draw[very thick,cadmiumgreen] (2.2,2.8) rectangle+(1.3,1); \draw[very thick,cadmiumgreen] (1,2.9) rectangle+(0.7,1); \draw[very thick,cadmiumgreen] (5.3,2.9) rectangle+(0.8,1); \draw[very thick,cadmiumgreen] (3,3) rectangle+(1.5,1); \draw[very thick,cadmiumgreen] (3.6,3.1) rectangle+(1,1); \draw[very thick,cadmiumgreen] (4.7,3.3) rectangle+(0.5,1); \draw[very thick,dashed] (0,1.0)--+(7.5,0) node[right] {$\ell_1$}; \draw[very thick,dashed] (0,2.3)--+(7.5,0)node[right] {$\ell_2$}; \draw[very thick,dashed] (0,3.5)--+(7.5,0)node[right] {$\ell_3$}; \end{tikzpicture} \caption{Unit height rectangles partitioned into three sets $\color{cadmiumred}{R_1},\color{MidnightBlue}{R_2},\color{cadmiumgreen}{R_3}$ with intersecting lines $\ell_1,\ell_2,\ell_3$.}\label{fig:uhr1a} \end{subfigure}% ~ \vline ~ \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=1] \draw[very thick,gray!50] (3.5,0.2) rectangle+(1.5,1); \draw[very thick,gray!50] (6,0) rectangle+(0.5,1); \draw[very thick,gray!50] (2.2,1.3) rectangle+(1.5,1); \draw[ultra thick] (1,0) rectangle+(1,1); \draw[ultra thick] (3,0.8) rectangle+(0.9,1); \draw[ultra thick] (5.2,0.4) rectangle+(1,1); \draw[ultra thick] (1,1.4) rectangle+(1.9,1); \draw[ultra thick] (3.5,1.6) rectangle+(2,1); \draw[very thick,gray!50] (3,3) rectangle+(1.5,1); \draw[very thick,gray!50] (6,2.5) rectangle+(1.2,1); \draw[very thick,gray!50] (0.2,2.5) rectangle+(1.9,1); \draw[ultra thick] (1,2.9) rectangle+(0.7,1); \draw[ultra thick] (2.2,2.8) rectangle+(1.3,1); \draw[ultra thick] (3.6,3.1) rectangle+(1,1); \draw[ultra thick] (4.7,3.3) rectangle+(0.5,1); \draw[ultra thick] (5.3,2.9) rectangle+(0.8,1); \draw[very thick,dashed] (0,1.0)--+(7.5,0) node[right] {$\ell_1$}; \draw[very thick,dashed] (0,2.3)--+(7.5,0)node[right] {$\ell_2$}; \draw[very thick,dashed] (0,3.5)--+(7.5,0)node[right] {$\ell_3$}; \end{tikzpicture} \caption{MIS $I_1,I_2,I_3$ (black) for the sets $R_i$. 
Note that rectangles from IS of two adjacent sets ($I_i, I_{i\pm 1}$) may intersect each other.} \end{subfigure}\\ \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=1] \draw[very thick] (1,2.9) rectangle+(0.7,1); \draw[very thick] (3,3) rectangle+(1.5,1); \draw[very thick] (6,2.5) rectangle+(1.2,1); \draw[very thick] (0.2,2.5) rectangle+(1.9,1); \draw[very thick] (2.2,2.8) rectangle+(1.3,1); \draw[very thick] (3.6,3.1) rectangle+(1,1); \draw[very thick] (4.7,3.3) rectangle+(0.5,1); \draw[very thick] (5.3,2.9) rectangle+(0.8,1); \draw[very thick,dashed] (0,3.4)--+(7.5,0); \begin{scope}[yshift=2.2cm] \draw[very thick] (0.2,0) rectangle+(1.9,0); \draw[very thick] (1,0.1) rectangle+(0.7,0); \draw[very thick] (2.2,0) rectangle+(1.3,0); \draw[very thick] (3,0.1) rectangle+(1.5,0); \draw[very thick] (3.6,0.0) rectangle+(1,0); \draw[very thick] (4.7,0.0) rectangle+(0.5,0); \draw[very thick] (5.3,0.0) rectangle+(0.8,0); \draw[very thick] (6,0.1) rectangle+(1.2,0); %\draw[very thick,dashed] (0,1)--+(7.5,0); \end{scope} \end{tikzpicture} \caption{To find a MIS of $R_3$ only the $x$-coordinates have to be considered. Two rectangles intersect iff their projection to a horizontal line (below) intersects.} \label{fig:uhr1c} \end{subfigure}% ~ \vline ~ \begin{subfigure}[t]{0.5\textwidth} \centering \begin{tikzpicture}[scale=1] \draw[ultra thick] (1,0) rectangle+(0.7,0); \draw[ultra thick,gray!50] (0.2,0.1) rectangle+(1.9,0); \draw[ultra thick] (2.2,0.2) rectangle+(1.3,0); \draw[ultra thick,gray!50] (3,0.3) rectangle+(1.5,0); \draw[ultra thick] (3.6,0.4) rectangle+(1,0); \draw[ultra thick] (4.7,0.5) rectangle+(0.5,0); \draw[ultra thick] (5.3,0.6) rectangle+(0.8,0); \draw[ultra thick,gray!50] (6,0.7) rectangle+(1.2,0); %\draw[very thick,dashed] (0,1)--+(7.5,0); \end{tikzpicture} \caption{Horizontal projection of rectangles in $R_3$ ordered (in vertical direction) by their endpoint. Greedy algorithm adds interval with nearest endpoint (bottom to top in this figure) that does not intersect rectangles already in the set.} \label{fig:uhr1d} \end{subfigure} \caption{Illustration of the $2$-approximation algorithm for the unit height rectangle MIS problem.}\label{fig:uhr1} \end{figure*} \subsubsection{Correctness and proof of approximation quality} \begin{theorem} The set $I_{\text{even}} = \bigcup_{n> 0} I_{2n}$ obtained by combining the maximum independent sets $I_{n}$ of the sets $R_{n}$ for even $n$ is also a maximum independent set of $R_{\text{even}} = \bigcup_{n>0}R_{2n}$. \end{theorem} \begin{proof} The rectangles in this set are independent because a rectangle $r$ from the set $I_x\subseteq R_x$ can't intersect any other rectangle in the set $I_x$ because if this were the case $I_x$ would not be an independent set. Furthermore the rectangle $r$ can't intersect any rectangle $r'\in I_\text{even}\setminus I_x$ as $r$ and $r'$ both are intersected by a horizontal line with a vertical distance $2+\varepsilon$ with $\varepsilon>0$ and therefore the vertical distance between $r$ and $r'$ is at least $\varepsilon$. As every independent set $I_x$ is the MIS of $R_x$ the combination $I_\text{even}$ is also the MIS of $R_\text{even}$. The same arguments apply to the set $I_\text{odd}\subseteq R_\text{odd}$.\\ \end{proof} The approximation quality guarantee follows from the fact that the MIS $I_\text{OPT}$ of all the rectangles is at most $I_\text{even} + I_\text{odd}$. 
Therefore choosing the bigger of the sets $I_\text{even}$ and $I_\text{odd}$ leads to a $2$-approximation in the case that $|I_\text{even}| = |I_\text{odd}|$. If one of the sets has more elements than the other the approximation is even better. \subsubsection{Runtime of $2$-approximation algorithm} The runtime of the algorithm is dominated by the sorting of the rectangles which takes time \bigo{n \log n}. Partitioning the sorted rectangles with horizontal lines can be done in \bigo{n} by iteratively taking the first (i.e.\ having the smallest $y$-coordinate) not-yet-intersected rectangle and adding a line at it's upper $y$-coordinate. Finding the MIS of every set $R_i$ is also bounded by \bigo{n\log n} resulting from the sorting the rectangles in each of the sets of the partition. Selecting the rectangles for the independent sets $I_i$, combining the even and odd independent sets and comparing their sizes can all be done in \bigo{n}. \subsection{Improving to a $(1+\nicefrac{1}{k})$-approximation algorithm} The algorithm described before separates the rectangles into independent subproblems for which efficient algorithms exist. The idea from Agarwal et al. was to improve this method by increasing the size of these subproblems in a way that still ensures that they can be solved in polynomial time for some fixed $k$. In the $2$-approximation algorithm the MIS of $R_n$ for either even or odd $n$ were discarded completely. The idea of the improved algorithm is to partition the rectangles in the same way as before but to solve the MIS problem for sets of rectangles intersected by $k$ consecutive lines. Those sets are referred to as \textit{subgroups} and are defined in the following way: \begin{equation*} R_i^k = \bigcup_{n = i}^{i+k-1}R_n. \end{equation*} For some value $k$ there are $k+1$ \textit{groups} $G_1, \ldots, G_{k+1}$ that correspond to the sets $R_\text{odd}$ and $R_\text{even}$ \begin{equation*} G_j = R_1^{j-1} \cup \bigcup_{i\geq 0}R_{i(k+1)+j}^k = R \setminus \bigcup_{i\geq 0} R_{i(k+1)+j}. \end{equation*} The group $G_j$ is therefore the set of all rectangles $R$ except those intersected by every $(k+1)$-th line starting from the $j$-th line. Also all subgroups $R_i^k$ in $G_j$ are independent from each other, i.e.\ there are no two rectangles in two different subgroups that intersect each other. Computing the MIS $I_i \subseteq G_i$ for all $i\in \{1,\ldots,k+1\}$ and picking the biggest of those independent sets leads to a $(1+\nicefrac{1}{k})$-approximate solution, %. %The idea behind this technique is the \textit{shifting technique} introduced by Hochbaum and Maass~\cite{shifting}. %\begin{theorem} as for every $j$ the set $R\setminus G_j$ contains rectangles intersected by at most $\lceil \nicefrac{m}{k+1}\rceil$ lines %, computing the MIS for each $G_j$ and choosing the largest independent set yields a $(1+\nicefrac{1}{k})$-approximate solution as% and therefore at most $\nicefrac{|I_\text{OPT}|}{k+1}$ rectangles can be missed due to the pigeon hole principle. \\ To get the maximum independent set for the groups $G_i$ a dynamic programming scheme is used that exceeds the scope of this report. In Agarwal et al.~\cite[Section 4.2]{agarwallabel} more details can be found. %\end{theorem} %\begin{proof} %The method seperates the rectangles into disjunct $k+1$ sets and takes the MIS of $k$ sets. 
To see why the approximation guarantee holds, suppose for the sake of contradiction that every $I_j$ misses more than $\nicefrac{|I_\text{OPT}|}{k+1}$ rectangles of an optimal solution, i.e.\ that none of the computed sets adheres to the $(1+\nicefrac{1}{k})$-approximation guarantee. Since $I_j$ is a maximum independent set of $G_j$ and $I_\text{OPT}\cap G_j$ is an independent set in $G_j$, this would mean that the set $R \setminus G_j$ contains more than $\nicefrac{|I_\text{OPT}|}{k+1}$ rectangles of $I_\text{OPT}$, for every $j\in\{1,\ldots,k+1\}$. The sets $R\setminus G_j = \bigcup_{i\geq 0}R_{i(k+1)+j}$ are pairwise disjoint for different $j$, so summing over all $k+1$ groups would yield more than $|I_\text{OPT}|$ rectangles in $I_\text{OPT}$, a contradiction. Hence at least one group $G_j$ yields a $(1+\nicefrac{1}{k})$-approximate solution.
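As a numerical illustration (with an arbitrary choice of $k$): for $k=4$ the algorithm considers $k+1=5$ groups; the best of them misses at most $\nicefrac{|I_\text{OPT}|}{5}$ rectangles of an optimal solution and therefore contains at least $\nicefrac{4}{5}\cdot|I_\text{OPT}|$ rectangles, i.e.\ it is a $(1+\nicefrac{1}{4})$-approximation, at the price of the $n^{2k-1}$ term in the running time.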
{ "alphanum_fraction": 0.730180721, "avg_line_length": 79.4156479218, "ext": "tex", "hexsha": "6b88540a9859b385e97669aebfda982adfd64575", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1cf226f1b8c73d6e67e885f80e32c929b8420324", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "oerpli/Presentations", "max_forks_repo_path": "2016-04-13 - Geometric Independent Set/Report/content.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1cf226f1b8c73d6e67e885f80e32c929b8420324", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "oerpli/Presentations", "max_issues_repo_path": "2016-04-13 - Geometric Independent Set/Report/content.tex", "max_line_length": 1002, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1cf226f1b8c73d6e67e885f80e32c929b8420324", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "oerpli/Presentations", "max_stars_repo_path": "2016-04-13 - Geometric Independent Set/Report/content.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-24T10:57:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-24T10:57:24.000Z", "num_tokens": 9878, "size": 32481 }
\documentclass[letterpaper,12pt,leqno]{article} \usepackage{paper,math,notes} \begin{document} \title{Mathematical Methods for Macroeconomics: Exercises} \author{Pascal Michaillat} \date{} \begin{titlepage} \maketitle \end{titlepage} \section*{Dynamic Programming} \subsection*{Exercise 1.} Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose consumption $\bc{c_{t}} _{t =0}^{+\infty}$ to maximize utility \begin{equation*} \sum_{t=0}^{\infty}\b ^{t}\cdot \ln{c_{t}} \end{equation*} subject to the resource constraint \begin{equation*} k_{t+1}=A\cdot k_{t}^{\a}-c_{t}. \end{equation*} The parameters satisfy $0<\b<1,\;A>0,\;0<\a <1.$ \begin{enumerate} \item Derive the optimal law of motion of consumption $c_{t}$ using a Lagrangian. \item Identify the state variable and the control variable. \item Write down the Bellman equation. \item Derive the following Euler equation: \begin{equation*} c_{t+1}=\b\cdot \a\cdot A\cdot k_{t+1}^{\a -1}\cdot c_{t}. \end{equation*} \item Derive the first two value functions, $V_{1}(k)$ and $V_{2}(k)$, obtained by iteration on the Bellman equation starting with the value function $V_{0}\bp{k} \equiv 0$. \item The process of determining the value function by iterations using the Bellman equation is commonly used to solve dynamic programs numerically. The algorithm is called \textit{value function iteration}. For this optimal growth problem, one can show show using value function iteration that the value function is \[V\bp{k} =\kappa +\frac{\ln{k^{\a}}}{1-\a\cdot \b},\] where $\k$ is a constant. Using the Bellman equation, determine the policy function $k'(k)$ associated with this value function. \item In light of these results, for which reasons would you prefer to use the dynamic-programming approach instead of the Lagrangian approach to solve the optimal growth problem? And for which reasons would you prefer to use the Lagrangian approach instead of the dynamic-programming approach? \end{enumerate} \subsection*{Exercise 2.} Consider the problem of choosing consumption $\bc{c_{t}}_{t=0}^{+\infty}$ to maximize expected utility \begin{equation*} \E_{0}\sum_{t=0}^{+\infty}\b^{t}\cdot u\bp{c_{t}} \end{equation*} subject to the budget constraint \begin{equation*} c_{t}+p_{t}\cdot s_{t+1}=\bp{d_{t}+p_{t}}\cdot s_{t}. \end{equation*} $d_{t}$ is the dividend paid out for one share of the asset, $p_{t}$ is the price of one share of the asset, and $s_{t}$ is the number of shares of the asset held at the beginning of period $t$. In equilibrium, the price $p_{t}$ of one share is solely a function of dividends $d_{t}$. Dividends can only take two values $d_{l}$ and $d_{h}$, with $0<d_{l}<d_{h}$. Dividends follow a Markov process with transition probabilities \begin{equation*} \P\bp{d_{t+1}=d_{l}\mid d_{t}=d_{l}} =\P \bp{d_{t+1}=d_{h}\mid d_{t}=d_{h}} =\rho \end{equation*} with $1>\rho >0.5.$ \begin{enumerate} \item Identify state and control variables. \item Write down the Bellman equation. \item Derive the following Euler equation: \begin{equation*} p_{t}\cdot u'\bp{c_{t}} =\b\cdot \E{\bp{d_{t+1}+p_{t+1}} \cdot u'\bp{c_{t+1}} \mid d_{t}} . \end{equation*} \item Suppose that $u\bp{c} =c$. Show that the asset price is higher when the current dividend is high. 
\end{enumerate} \subsection*{Exercise 3.} Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose consumption and labor $\bc{c_{t},l_{t}}_{t=0}^{+\infty}$ to maximize utility \begin{equation*} \sum_{t=0}^{+\infty}\b^{t}\cdot u\bp{c_{t},l_{t}} \end{equation*} subject to the law of motion of capital \begin{align*} k_{t+1}&=A_{t}\cdot f\bp{k_{t},l_{t}} -c_{t}. \end{align*} In addition, we impose $0\leq l_{t}\leq 1$. The discount factor $\b \in \bp{0,1} $. The function $f$ is increasing and concave in both arguments. The function $u$ is increasing and concave in $c$, decreasing and convex in $l$. \paragraph{Deterministic case} First, suppose $A_{t}=1$ for all $t$. \begin{enumerate} \item What are the state and control variables? \item Write down the Bellman equation. \item Derive the following optimality conditions: \begin{align*} \pd{u\bp{c_{t},l_{t}}}{ c_{t}} &=\b \cdot \pd{u\bp{c_{t+1},l_{t+1}}}{c_{t+1}} \cdot \pd{f\bp{k_{t+1},l_{t+1}}}{ k_{t+1}}\\ \pd{u\bp{c_{t},l_{t}}}{ c_{t}}\cdot \pd{f\bp{k_{t},l_{t}}}{l_{t}} &=-\pd{u\bp{c_{t},l_{t}}}{l_{t}}. \end{align*} \item Suppose that the production function $f\bp{k,l} =k^{\a}\cdot l^{1-\a}$. Determine the ratios $c/k$ and $l/k$ in steady state. \end{enumerate} \paragraph{Stochastic case} Now, suppose $A_{t}$ is a stochastic process that takes values $A_{1}$ and $A_{2}$ with the following probability: \begin{equation*} \P{A_{t+1}=A_{1}\mid A_{t}=A_{1}} =\P{A_{t+1}=A_{2}\mid A_{t}=A_{2}} =\rho . \end{equation*} \begin{enumerate}\setcounter{enumi}{4} \item Write down the Bellman equation. \item Derive the optimality conditions. \end{enumerate} \section*{Optimal Control} \subsection*{Exercise 4.} Consider the following optimal growth problem: Given initial capital $k_{0}>0$, choose a consumption path $\bc{c_{t}}_{t\geq 0}$ to maximize utility \begin{align*} \int_{0}^{\infty}e^{-\rho\cdot t} \cdot \ln{c_{t}} dt \end{align*} subject to the law of motion of capital \begin{align*} \dot{k}_{t} &=f\bp{k_{t}} -c_{t}-\d \cdot k_{t}. \end{align*} The discount factor $\rho>0$, and the production function $f$ satisfies \[f\bp{k} =A\cdot k^{\a},\] where $\a \in \bp{0,1}$ and $A>0$. \begin{enumerate} \item Write down the present-value Hamiltonian. \item Show that the Euler equation is \begin{align*} \frac{\dot{c}_{t}}{c_{t}} &=\a \cdot A \cdot k_{t}^{\a -1}-\bp{\d +\rho}. \end{align*} \item Solve for the steady state of the system. \end{enumerate} \subsection*{Exercise 5.} Consider the following investment problem: Given initial capital $k_{0}$, choose the investment path $\bc{i_{t}} _{t \geq 0}$ to maximize profits \begin{align*} \int_{0}^{\infty} e^{-r\cdot t}\bs{f\bp{k_{t}} -i_{t}-\frac{\chi}{2}\cdot \bp{\frac{i_{t}^{2}}{k_{t}}}} dt \end{align*} subject to the law of motion of capital (we assume no capital depreciation) \[\dot{k}_{t} =i_{t}.\] The interest rate $r>0$, the capital adjustment cost $\chi>0$, and the production function $f$ satisfies $f'>0$ and $f''<0$. \begin{enumerate} \item Write down the current-value Hamiltonian. \item Use the optimality conditions for the current-value Hamiltonian to derive the following differential equations: \begin{align*} \dot{k}_{t} &=\bp{\frac{q_{t}-1}{\chi}}\cdot k_{t} \\ \dot{q}_{t} &=r\cdot q_{t}-f'\bp{k_{t}} -\frac{1}{2\cdot \chi}\bp{q_{t}-1}^{2} \end{align*} \item Solve for the steady state. 
\end{enumerate} \section*{Differential Equations} \subsection*{Exercise 6.} Find the solution of the initial value problem \begin{align*} \dot{a}(t) &=r\cdot a(t) +s \\ a\bp{0} &=a_{0} \end{align*} where both $r$ and $s$ are known constant. \subsection*{Exercise 7.} Find the solution of the initial value problem \begin{align*} \dot{a}(t) &=r(t)\cdot a(t) +s(t) \\ a\bp{0} &=a_{0} \end{align*} where both $r(t)$ and $s(t)$ are known functions of $t.$ \subsection*{Exercise 8.} Consider the linear system of FODEs given by \begin{equation*} \bm{\dot{x}}(t)=\bs{ \begin{array}{ll} 1 & 1 \\ 4 & 1 \end{array}} \bm{x}(t). \end{equation*} \begin{enumerate} \item Find the general solution of the system. \item What would you need to find a specific solution of the system? \item Draw the trajectories of the system. \end{enumerate} \subsection*{Exercise 9.} Consider the initial value problem \begin{align*} \dot{k}(t) &=s\cdot f\bp{k(t)} -\d\cdot k(t) \\ k\bp{0} &=k_{0} \end{align*} where the saving rate $s\in \bp{0,1} $, the capital depreciation rate $\d \in \bp{0,1}$, and the production function $f$ satisfies the \textit{Inada conditions}. That is, $f$ is continuously differentiable and \begin{align*} f(0)&=0\\ f'(x)&>0\\ f''(x)&<0\\ \lim_{x\to 0} f'(x)&=+\infty\\ \lim_{x\to +\infty} f'(x)&=0. \end{align*} \begin{enumerate} \item Give a production function $f$ that satisfies the Inada conditions. \item Find the steady state of the system. \item Draw the dynamic path of $k(t) $ and show that it converges to the steady state. \end{enumerate} \subsection*{Exercise 10.} The solution of the problem studied in Exercise 4 is characterized by a system of two nonlinear first-order differential equations: \begin{align*} \dot{k}_{t} &=f\bp{k_{t}} -c_{t}-\d \cdot k_{t}\\ \frac{\dot{c}_{t}}{c_{t}} &=\a \cdot A \cdot k_{t}^{\a -1}-\bp{\d +\rho}. \end{align*} The first FODE is the law of motion of capital. The second FODE is the Euler equation, which describes the optimal path of consumption over time. \begin{enumerate} \item Draw the phase diagram of the system. \item Linearize the system around its steady state. \item Show that the steady state is a saddle point locally. \item Suppose the economy is in steady state at time $t_{0}$ and there is an unanticipated decrease in the discount factor $\rho$. Show on your phase diagram the transition dynamics of the model. \end{enumerate} \subsection*{Exercise 11.} The solution of the investment problem studied in Exercise 5 is characterized by a system of two nonlinear first-order differential equations: \begin{align*} \dot{k}_{t} &=\bp{\frac{q_{t}-1}{\chi}}\cdot k_{t} \\ \dot{q}_{t} &=r\cdot q_{t}-f'\bp{k_{t}} -\frac{1}{2\cdot \chi}\bp{q_{t}-1}^{2}. \end{align*} The first FODE is the law of motion of capital $k_{t}$. The second FODE is the law of motion of the co-state variable $q_{t}$. \begin{enumerate} \item Draw the phase diagram. \item Show that the steady state is a saddle point locally. \end{enumerate} \subsection*{Exercise 12.} Consider a discrete time version of the typical growth model: \begin{align*} k(t+1) &=f\bp{k(t)} -c(t) +\bp{1-\d}\cdot k(t) \\ c(t+1) &=\b\cdot \bs{ 1+f'\bp{k(t)} -\d }\cdot c(t) . \end{align*} The discount factor $\b \in \bp{0,1}$, the rate of depreciation of capital $\d \in \bp{0,1}$, initial capital $k_{0}$ is given, and the production function $f$ satisfies the Inada conditions. These two equations are a system of first-order difference equations. 
Whereas a system of first-order differential equations relates $\bm{\dot{x}}(t) $ to $\bm{x}(t)$, a system of first-order difference equations relate $\bm{x}(t+1) $ to $\bm{x}(t)$. In this exercise, we will see that we can study a system of first-order difference equations with the tools that we used to study systems of first-order differential equations. In particular, we can use phase diagrams to understand the dynamics of the system. \begin{enumerate} \item Construct a phase diagram for the system. First, define \begin{align*} \D k & \equiv k(t+1) -k(t) , \\ \D c & \equiv c(t+1) -c(t) . \end{align*} Second, draw the $\D k=0$ locus and the $\D c=0$ locus on the $(k,c)$ plane. Finally, find the steady state as the intersection of the $\D k=0$ locus and the $ \D c=0$ locus. \item Show that the steady state is a saddle point in the phase diagram. \end{enumerate} \subsection*{Exercise 13.} We consider the following optimal growth problem. Given initial human capital $h_{0}$ and initial physical capital $k_{0}$, choose consumption $c(t) $ and labor $l(t) $ to maximize utility \begin{equation*} \int_{0}^{\infty}e^{-\rho\cdot t}\cdot \ln{c} dt \end{equation*} subject to \begin{align*} \dot{k}_{t} &=y_{t}-c_{t}-\d\cdot k_{t} \\ \dot{h}_{t} &=B\cdot \bp{1-l_{t}}\cdot h_{t}. \end{align*} Output $y_{t}$ is defined by \[y_{t}\equiv A\cdot k_{t}^{\a}\cdot \bp{l_{t}\cdot h_{t}} ^{\b}.\] We also impose that $0 \leq l_{t}\leq 1$. The discount factor $\rho>0$, the rate of depreciation of physical capital $\d>0$, the constants $A>0$ and $B>0$, and the production function parameters $\a\in \bp{0,1}$ and $\b \in\bp{0,1}$. \begin{enumerate} \item Give state and control variables. \item Write down the present-value Hamiltonian for this problem. \item Derive the optimality conditions. \item Show that the growth rate of consumption $c(t)$ is \begin{equation*} \frac{\dot{c}}{c}=\frac{\a\cdot y}{k}-\bp{\d +\rho} . \end{equation*} \item From now on, we assume that $B=0$. Show that $l=1$. \item Draw the phase diagram in the $(k,c)$ plane. \item Show on the diagram that the steady state of the system is a saddle point. \item Derive the Jacobian of the system. \item Show that the steady state of the system is a saddle point. \end{enumerate} \end{document}
{ "alphanum_fraction": 0.6949533912, "avg_line_length": 42.3265306122, "ext": "tex", "hexsha": "52a021dd2d76cb314de43cb88708694adfea6ed3", "lang": "TeX", "max_forks_count": 21, "max_forks_repo_forks_event_max_datetime": "2022-03-25T16:38:21.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-25T18:14:51.000Z", "max_forks_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "pascalmichaillat/math-for-macro", "max_forks_repo_path": "homework/exercises.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "pascalmichaillat/math-for-macro", "max_issues_repo_path": "homework/exercises.tex", "max_line_length": 442, "max_stars_count": 59, "max_stars_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "pascalmichaillat/math-for-macro", "max_stars_repo_path": "homework/exercises.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-25T13:17:46.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-24T10:22:34.000Z", "num_tokens": 4111, "size": 12444 }
\documentclass[12pt,english]{article} \usepackage{preamble} \newcommand{\macrotikz}{\documentclass[tikz]{standalone} \usepackage{tikz} \usepackage{pgfplots} \pgfplotsset{compat=newest} \usetikzlibrary{plotmarks} \usetikzlibrary{arrows.meta} \usepgfplotslibrary{patchplots} \usepackage{grffile} } \begin{document} \cleardoublepage \pagenumbering{gobble} \input{titlepage} \tableofcontents \newpage \cleardoublepage \pagenumbering{arabic} \section{Introduction} \input{sections/intro.tex} \section{Optimal communication chain over the ideal channel}\label{sec:step1} \input{sections/step1.tex} \section{Low-density parity check code} \input{sections/step2.tex} \section{Time and frequency synchronisation} \input{sections/step3.tex} \end{document}
{ "alphanum_fraction": 0.8050397878, "avg_line_length": 20.3783783784, "ext": "tex", "hexsha": "7f2c6327be62cc5529c855b9e7f289963ddef925", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-03-18T09:02:37.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-08T10:48:54.000Z", "max_forks_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "amatepl/DVB-S2", "max_forks_repo_path": "report/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "amatepl/DVB-S2", "max_issues_repo_path": "report/main.tex", "max_line_length": 77, "max_stars_count": 3, "max_stars_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mpetitjean/DVB-S2", "max_stars_repo_path": "report/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-14T07:46:58.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-08T10:12:11.000Z", "num_tokens": 231, "size": 754 }
\section{Clustering} \ifCPy \cvCPyFunc{KMeans2} Splits set of vectors by a given number of clusters. \cvdefC{int cvKMeans2(const CvArr* samples, int nclusters,\par CvArr* labels, CvTermCriteria termcrit,\par int attempts=1, CvRNG* rng=0, \par int flags=0, CvArr* centers=0,\par double* compactness=0);} \cvdefPy{KMeans2(samples,nclusters,labels,termcrit)-> None} \begin{description} \cvarg{samples}{Floating-point matrix of input samples, one row per sample} \cvarg{nclusters}{Number of clusters to split the set by} \cvarg{labels}{Output integer vector storing cluster indices for every sample} \cvarg{termcrit}{Specifies maximum number of iterations and/or accuracy (distance the centers can move by between subsequent iterations)} \ifC \cvarg{attempts}{How many times the algorithm is executed using different initial labelings. The algorithm returns labels that yield the best compactness (see the last function parameter)} \cvarg{rng}{Optional external random number generator; can be used to fully control the function behaviour} \cvarg{flags}{Can be 0 or \texttt{CV\_KMEANS\_USE\_INITIAL\_LABELS}. The latter value means that during the first (and possibly the only) attempt, the function uses the user-supplied labels as the initial approximation instead of generating random labels. For the second and further attempts, the function will use randomly generated labels in any case} \cvarg{centers}{The optional output array of the cluster centers} \cvarg{compactness}{The optional output parameter, which is computed as $\sum_i ||\texttt{samples}_i - \texttt{centers}_{\texttt{labels}_i}||^2$ after every attempt; the best (minimum) value is chosen and the corresponding labels are returned by the function. Basically, the user can use only the core of the function, set the number of attempts to 1, initialize labels each time using a custom algorithm (\texttt{flags=CV\_KMEAN\_USE\_INITIAL\_LABELS}) and, based on the output compactness or any other criteria, choose the best clustering.} \fi \end{description} The function \texttt{cvKMeans2} implements a k-means algorithm that finds the centers of \texttt{nclusters} clusters and groups the input samples around the clusters. On output, $\texttt{labels}_i$ contains a cluster index for samples stored in the i-th row of the \texttt{samples} matrix. \ifC % Example: Clustering random samples of multi-gaussian distribution with k-means \begin{lstlisting} #include "cxcore.h" #include "highgui.h" void main( int argc, char** argv ) { #define MAX_CLUSTERS 5 CvScalar color_tab[MAX_CLUSTERS]; IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 ); CvRNG rng = cvRNG(0xffffffff); color_tab[0] = CV_RGB(255,0,0); color_tab[1] = CV_RGB(0,255,0); color_tab[2] = CV_RGB(100,100,255); color_tab[3] = CV_RGB(255,0,255); color_tab[4] = CV_RGB(255,255,0); cvNamedWindow( "clusters", 1 ); for(;;) { int k, cluster_count = cvRandInt(&rng)%MAX_CLUSTERS + 1; int i, sample_count = cvRandInt(&rng)%1000 + 1; CvMat* points = cvCreateMat( sample_count, 1, CV_32FC2 ); CvMat* clusters = cvCreateMat( sample_count, 1, CV_32SC1 ); /* generate random sample from multigaussian distribution */ for( k = 0; k < cluster_count; k++ ) { CvPoint center; CvMat point_chunk; center.x = cvRandInt(&rng)%img->width; center.y = cvRandInt(&rng)%img->height; cvGetRows( points, &point_chunk, k*sample_count/cluster_count, (k == (cluster_count - 1)) ? 
                       sample_count : (k+1)*sample_count/cluster_count );
            cvRandArr( &rng, &point_chunk, CV_RAND_NORMAL,
                       cvScalar(center.x,center.y,0,0),
                       cvScalar(img->width/6, img->height/6,0,0) );
        }

        /* shuffle samples */
        for( i = 0; i < sample_count/2; i++ )
        {
            CvPoint2D32f* pt1 = (CvPoint2D32f*)points->data.fl + cvRandInt(&rng)%sample_count;
            CvPoint2D32f* pt2 = (CvPoint2D32f*)points->data.fl + cvRandInt(&rng)%sample_count;
            CvPoint2D32f temp;
            CV_SWAP( *pt1, *pt2, temp );
        }

        cvKMeans2( points, cluster_count, clusters,
                   cvTermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0 ));

        cvZero( img );

        for( i = 0; i < sample_count; i++ )
        {
            CvPoint2D32f pt = ((CvPoint2D32f*)points->data.fl)[i];
            int cluster_idx = clusters->data.i[i];
            cvCircle( img, cvPointFrom32f(pt), 2, color_tab[cluster_idx], CV_FILLED );
        }

        cvReleaseMat( &points );
        cvReleaseMat( &clusters );

        cvShowImage( "clusters", img );

        int key = cvWaitKey(0);
        if( key == 27 )
            break;
    }
    return 0;
}
\end{lstlisting}

\cvCPyFunc{SeqPartition}
Splits a sequence into equivalency classes.

\cvdefC{ int cvSeqPartition( \par const CvSeq* seq,\par CvMemStorage* storage,\par CvSeq** labels,\par CvCmpFunc is\_equal,\par void* userdata ); }

\begin{description}
\cvarg{seq}{The sequence to partition}
\cvarg{storage}{The storage block to store the sequence of equivalency classes. If it is NULL, the function uses \texttt{seq->storage} for output labels}
\cvarg{labels}{Output parameter. Double pointer to the sequence of 0-based labels of input sequence elements}
\cvarg{is\_equal}{The relation function that should return non-zero if the two particular sequence elements are from the same class, and zero otherwise. The partitioning algorithm uses the transitive closure of the relation function as an equivalency criterion}
\cvarg{userdata}{Pointer that is transparently passed to the \texttt{is\_equal} function}
\end{description}

\begin{lstlisting}
typedef int (CV_CDECL* CvCmpFunc)(const void* a, const void* b, void* userdata);
\end{lstlisting}

The function \texttt{cvSeqPartition} implements a quadratic algorithm for splitting a set into one or more equivalency classes. The function returns the number of equivalency classes.
% Example: Partitioning a 2d point set
\begin{lstlisting}
#include "cxcore.h"
#include "highgui.h"
#include <stdio.h>

CvSeq* point_seq = 0;
IplImage* canvas = 0;
CvScalar* colors = 0;
int pos = 10;

int is_equal( const void* _a, const void* _b, void* userdata )
{
    CvPoint a = *(const CvPoint*)_a;
    CvPoint b = *(const CvPoint*)_b;
    double threshold = *(double*)userdata;
    return (double)((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y)) <= threshold;
}

void on_track( int pos )
{
    CvSeq* labels = 0;
    double threshold = pos*pos;
    int i, class_count = cvSeqPartition( point_seq, 0,
                                         &labels, is_equal, &threshold );
    printf("%4d classes\n", class_count );
    cvZero( canvas );

    for( i = 0; i < labels->total; i++ )
    {
        CvPoint pt = *(CvPoint*)cvGetSeqElem( point_seq, i );
        CvScalar color = colors[*(int*)cvGetSeqElem( labels, i )];
        cvCircle( canvas, pt, 1, color, -1 );
    }

    cvShowImage( "points", canvas );
}

int main( int argc, char** argv )
{
    CvMemStorage* storage = cvCreateMemStorage(0);

    point_seq = cvCreateSeq( CV_32SC2, sizeof(CvSeq), sizeof(CvPoint), storage );
    CvRNG rng = cvRNG(0xffffffff);

    int width = 500, height = 500;
    int i, count = 1000;
    canvas = cvCreateImage( cvSize(width,height), 8, 3 );

    colors = (CvScalar*)cvAlloc( count*sizeof(colors[0]) );
    for( i = 0; i < count; i++ )
    {
        CvPoint pt;
        int icolor;
        pt.x = cvRandInt( &rng ) % width;
        pt.y = cvRandInt( &rng ) % height;
        cvSeqPush( point_seq, &pt );
        icolor = cvRandInt( &rng ) | 0x00404040;
        colors[i] = CV_RGB(icolor & 255, (icolor >> 8)&255, (icolor >> 16)&255);
    }

    cvNamedWindow( "points", 1 );
    cvCreateTrackbar( "threshold", "points", &pos, 50, on_track );
    on_track(pos);
    cvWaitKey(0);
    return 0;
}
\end{lstlisting}

\fi
\fi

\ifCpp

\cvCppFunc{kmeans}
Finds the centers of clusters and groups the input samples around the clusters.

\cvdefCpp{double kmeans( const Mat\& samples, int clusterCount, Mat\& labels,\par TermCriteria termcrit, int attempts,\par int flags, Mat* centers );}

\begin{description}
\cvarg{samples}{Floating-point matrix of input samples, one row per sample}
\cvarg{clusterCount}{The number of clusters to split the set by}
\cvarg{labels}{The input/output integer array that will store the cluster indices for every sample}
\cvarg{termcrit}{Specifies maximum number of iterations and/or accuracy (distance the centers can move by between subsequent iterations)}
\cvarg{attempts}{How many times the algorithm is executed using different initial labelings. The algorithm returns the labels that yield the best compactness (see the last function parameter)}
\cvarg{flags}{It can take the following values:
\begin{description}
\cvarg{KMEANS\_RANDOM\_CENTERS}{Random initial centers are selected in each attempt}
\cvarg{KMEANS\_PP\_CENTERS}{Use kmeans++ center initialization by Arthur and Vassilvitskii}
\cvarg{KMEANS\_USE\_INITIAL\_LABELS}{During the first (and possibly the only) attempt, the function uses the user-supplied labels instead of computing them from the initial centers. For the second and further attempts, the function will use the random or semi-random centers (use one of the \texttt{KMEANS\_*\_CENTERS} flags to specify the exact method)}
\end{description}}
\cvarg{centers}{The output matrix of the cluster centers, one row per cluster center}
\end{description}

The function \texttt{kmeans} implements a k-means algorithm that finds the centers of \texttt{clusterCount} clusters and groups the input samples around the clusters. On output, $\texttt{labels}_i$ contains a 0-based cluster index for the sample stored in the $i^{th}$ row of the \texttt{samples} matrix.
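For illustration, a minimal usage sketch of the C++ interface is shown below (assuming the OpenCV~2.2 header layout \texttt{opencv2/core/core.hpp}; the point count, the cluster count \texttt{K}, and the number of attempts are arbitrary choices made for the sketch only):

\begin{lstlisting}
// Sketch: basic usage of the C++ kmeans interface.
#include "opencv2/core/core.hpp"
using namespace cv;

int main()
{
    // 100 two-dimensional samples, one floating-point point per row.
    Mat points(100, 2, CV_32F);
    randn(points, Scalar(0), Scalar(100));

    const int K = 3;       // number of clusters to split the set by
    Mat labels, centers;

    // Run k-means 5 times with kmeans++ seeding; the most compact
    // labeling is kept and its compactness is returned.
    double compactness = kmeans(points, K, labels,
        TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 10, 1.0),
        5, KMEANS_PP_CENTERS, &centers);
    (void)compactness;

    // labels.at<int>(i) now holds the 0-based cluster index of row i of
    // points, and centers stores one row per cluster center.
    return 0;
}
\end{lstlisting}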
The function returns the compactness measure, which is computed as
\[
\sum_i \|\texttt{samples}_i - \texttt{centers}_{\texttt{labels}_i}\|^2
\]
after every attempt; the best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, the user can use only the core of the function, set the number of attempts to 1, initialize labels each time using some custom algorithm, pass them with the (\texttt{flags}=\texttt{KMEANS\_USE\_INITIAL\_LABELS}) flag, and then choose the best (most-compact) clustering.

\cvCppFunc{partition}
Splits an element set into equivalency classes.

\cvdefCpp{template<typename \_Tp, class \_EqPredicate> int\newline partition( const vector<\_Tp>\& vec, vector<int>\& labels,\par \_EqPredicate predicate=\_EqPredicate());}

\begin{description}
\cvarg{vec}{The set of elements stored as a vector}
\cvarg{labels}{The output vector of labels; will contain as many elements as \texttt{vec}. Each label \texttt{labels[i]} is the 0-based cluster index of \texttt{vec[i]}}
\cvarg{predicate}{The equivalence predicate (i.e.\ a pointer to a boolean function of two arguments, or an instance of a class that has the method \texttt{bool operator()(const \_Tp\& a, const \_Tp\& b)}). The predicate returns true when the elements are certainly in the same class, and false if they may or may not be in the same class}
\end{description}

The generic function \texttt{partition} implements an $O(N^2)$ algorithm for splitting a set of $N$ elements into one or more equivalency classes, as described in \url{http://en.wikipedia.org/wiki/Disjoint-set_data_structure}. The function returns the number of equivalency classes.

\fi
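\ifCpp
As a usage sketch of \texttt{partition} (the predicate \texttt{PointsNearby}, the 10-pixel radius, and the point count below are arbitrary choices made for the sketch; the header path assumes the OpenCV~2.2 layout), nearby 2D points can be grouped as follows:

\begin{lstlisting}
// Sketch: grouping nearby points with the generic partition function.
#include "opencv2/core/core.hpp"
#include <vector>
using namespace cv;
using namespace std;

// Equivalence predicate: two points belong to the same class when they lie
// within 10 pixels of each other; partition applies the transitive closure.
struct PointsNearby
{
    bool operator()(const Point& a, const Point& b) const
    {
        int dx = a.x - b.x, dy = a.y - b.y;
        return dx*dx + dy*dy <= 10*10;
    }
};

int main()
{
    RNG rng(0xffffffff);
    vector<Point> pts(1000);
    for( size_t i = 0; i < pts.size(); i++ )
        pts[i] = Point(rng.uniform(0, 500), rng.uniform(0, 500));

    vector<int> labels;
    int nclasses = partition(pts, labels, PointsNearby());

    // labels[i] is the 0-based class index of pts[i];
    // nclasses is the number of equivalency classes found.
    return nclasses > 0 ? 0 : 1;
}
\end{lstlisting}
\fi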
%!TEX root = ../notes.tex \section{March 9, 2022} ***(missed?)
\input{mmd6-beamer-leader}

\usepackage[utf8]{inputenc}
\usepackage{natbib}
\usepackage{graphicx}

\title{Example of a Beamer Class Document}
\author{John Smith}
\date{August 2019}

\input{mmd6-beamer-begin}

\section{Introduction}
There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.

\begin{figure}[h!]
\centering
\includegraphics[scale=1.7]{universe}
\caption{The Universe}
\label{fig:universe}
\end{figure}

\section{Conclusion}
``I always thought something was fundamentally wrong with the universe'' \citep{adams1995hitchhiker}

\bibliographystyle{plain}
\bibliography{references}

\input{mmd6-beamer-footer}
\end{document}
\documentclass[twoside,onecolumn,10pt]{waflarticle} % can use option mtpro \usepackage{graphicx} \usepackage{rotating} \usepackage{scalefnt} \usepackage{bm} \usepackage{fancyhdr} \usepackage{etoolbox} \usepackage{amsmath} %\AtBeginEnvironment{eqnarray}{\setlength{\arraycolsep}{2pt}} %\usepackage[hidelinks]{hyperref} \newlength\lengthfigure % declare a figure width unit \setlength\lengthfigure{0.16\textwidth} % make the figure width unit scale with the textwidth \renewcommand{\fontsizetable}{\footnotesize\scalefont{1.0}} \renewcommand{\fontsizefigure}{\footnotesize\scalefont{1.1}} \renewcommand{\vec}[1]{\bm{#1}} \let\citen\cite \setcounter{tocdepth}{3} \newcommand{\alb}{\vspace{0.1cm}\\} % array line break \newcommand{\mfd}{\displaystyle} \newcommand{\ns}{{n_{\rm s}}} \newcommand{\nd}{3} \renewcommand{\vec}[1]{\bm{ #1 }} \journalvolume{Journal of Computational Physics, Vol.\ 300, Pages 779--799, 2015.} \title{Modeling Weakly-Ionized Plasmas in Magnetic Field:\\ a New Computationally-Efficient Approach} \author{ Bernard Parent\thanks{Associate Professor, Dept.\ of Aerospace Engineering, Pusan National University, Busan 609-735, Korea, http://bernardparent.ca.}, ~~Sergey O.\ Macheret\thanks{Professor, School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN 47907-2045, USA.}, ~~and Mikhail N.\ Shneider\thanks{Senior Research Scientist, Dept.\ of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544-5263, USA.}\\ } %\setlength\nomenclaturelabelwidth{0.13\hsize} % optional, default is 0.03\hsize %\setlength\nomenclaturecolumnsep{0.09\hsize} % optional, default is 0.06\hsize \nomenclature{ \begin{nomenclaturelist}{Roman symbols} \item[$\vec{A}$] origin of the magnet dipole moment \item[$A$] cross-sectional area, $\rm m^2$ \end{nomenclaturelist} \begin{nomenclaturelist}{Greek symbols} \item[$\alpha_{ij}$] commonly used term in the diffusion matrix $K$ \end{nomenclaturelist} \begin{nomenclaturelist}{Superscripts} \item[$\star$] sum of turbulent and molecular diffusion \end{nomenclaturelist} \begin{nomenclaturelist}{Subscripts} \item[$t$] turbulent \end{nomenclaturelist} } \abstract{ Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. 
The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions. } \begin{document} \pagestyle{fancy} \fancyhead{} \fancyhead[CO,CE]{\begin{minipage}{\textwidth}\footnotesize\center \it {B.\ Parent, S.\ O.\ Macheret, M.\ N.\ Shneider, ``Modeling Weakly-Ionized Plasmas in Magnetic Field: a New Computationally-Efficient Approach'',\\ {Journal of Computational Physics}, Vol.\ 300, Pages 779--799, 2015.}~\\ \end{minipage}} \renewcommand{\headrulewidth}{0.0pt} \pagenumbering{arabic} \setcounter{page}{1} \maketitle % \tableofcontents % \makenomenclature %% \listoftables %% \listoffigures \linespread{1.05} \section{Introduction} \dropword Generally referred to as magneto-plasmadynamics or magnetohydrodynamics (MHD), the process of applying a force on a fluid in motion using a magnetic field is the main mechanism behind several new aerospace technologies such as shockwave control in supersonic flows \cite{pof:2002:poggie,aiaa:2004:shneider}, power generation during re-entry using a MHD generator \cite{jpp:2009:fujino,aiaa:2009:wan,jsr:2012:kim}, heat shield in hypersonic flows \cite{jsr:2013:bizek,jsr:2012:kawamura}, thrust generation using a Faraday accelerator \cite{jpp:2005:parent,jpp:2007:parent}, or efficiency improvement of pulse detonation engines through MHD energy bypass \cite{jpp:2012:zeineh}. In such devices, the working fluid on which the magnetic field acts is air ionized either through high electric fields, through electron or microwave beams, or through potassium or cesium seeding. Independently of the ionization process, the ionization fraction of the air remains low (typically less than 0.1\% or so) due to the energy needed to ionize the air being high relative to the flow enthalpy. For this reason, air plasmas in aerospace applications can be considered \emph{weakly-ionized}. % weakly-ionized plasmas are such that the electrons collide mostly with non-ionized atoms or molecules, and the momentum and energy of the flow is mostly due to the bulk non-ionized atomic/molecular species When assuming quasi-neutrality throughout, the numerical simulation of weakly-ionized plasmas in magnetic field can be accomplished efficiently by obtaining the potential from the generalized Ohm's law (see Refs.\ \cite{aiaa:2009:wan, jpp:2007:parent} for instance). However, because the quasi-neutral assumption limits its use to plasmas in which the positive and negative charges closely approach each other, such a strategy can not be applied in the vicinity of dielectric surfaces or within the cathode and anode sheaths where the positive charge density differs substantially from the negative charge density. Because the accurate modelling of the non-neutral regions near the surfaces is often critical due to the large voltage (and hence power) drop within cathode sheaths, the numerical simulation of many weakly-ionized plasmas can not be accomplished through the generalized Ohm's law without inducing excessive error in the solution. Rather, it is deemed necessary for many problems to obtain the potential from Gauss's law and to solve additional transport equations to account for the motion of the ions and the electrons with respect to the neutrals. 
Commonly referred to as the ``drift-diffusion model'', such a strategy was first demonstrated viable in solving weakly-ionized gases under the influence of an externally-applied magnetic field in Ref.\ \cite{jcp:2004:surzhikov}, and was used subsequently to obtain multiple solutions of gas discharges in which the electrons were magnetized (see for instance Refs.\ \cite{jpp:2008:poggie,jap:2009:shang,bookchapter:2009:shang}). Despite its success at predicting accurately both non-neutral and quasi-neutral plasmas in the presence of magnetic field, the drift-diffusion model has been observed to be an exceptionally stiff set of equations. That is, the system of equations is such that it forces a numerical method to use an integration steplength which is excessively small in relation to the smoothness of the exact solution, hence resulting in a disproportionate number of iterations to reach convergence. The stiffness is further exacerbated should the plasma contain quasi-neutral regions of substantial size, in which case the number of iterations needed to obtain a solution is in the order of millions. In Ref.\ \cite{jcp:2013:parent}, it was argued that the stiffness of the drift-diffusion model originates from the potential equation based on Gauss's law being particularly sensitive to small errors in the charged species densities when the plasma becomes quasi-neutral. It was then demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions (see Ref.\ \cite{jcp:2013:parent} and also Ref.\ \cite{jcp:2007:crispel}). The recast of the drift-diffusion set of equations first proposed in Ref.\ \cite{jcp:2013:parent} was extended to multicomponent and multidimensional plasmas in Ref.\ \cite{jcp:2014:parent}, where several test cases involving quasi-neutral plasmas between dielectrics and non-neutral discharges between electrodes showed a remarkable improvement in computational efficiency compared to the conventional approach: Not only did the recast set of equations permit the use of considerably higher integration steplengths resulting in a thirtyfold or more reduction in the number of iterations to reach convergence, but it also resulted in a higher resolution of the converged solution whenever the plasma included quasi-neutral regions of substantial size. The combined gains in resolution and convergence rates resulted in the recast system of equations being typically 100 times more computationally efficient than the conventional drift-diffusion equations while not sacrificing on the generality of the physical model. Despite being generally applicable to weakly-ionized plasmas in multiple dimensions including plasmas with various types of ions (including negative ions), the recast set of transport equations presented in Ref.\ \cite{jcp:2014:parent} is not applicable to a plasma in which either the ions or the electrons are magnetized and can hence not be used to solve problems in which the external magnetic field is significant. The goal of this paper is hence to craft a new computationally-efficient set of electron and ion transport equations that can be used not only for a multicomponent and multidimensional plasma, but also for a plasma in magnetic field. 
As will be shown subsequently, this will require the potential equation to be based on the \emph{generalized} Ohm's law rather than the standard form of Ohm's law and to require a change in the definition of the ambipolar electric field when recasting the transport equations for the negatively-charged species. As in prior work, it is ensured that the recast set of equations is obtained from the same physical model as the conventional drift-diffusion equations and, as such, yields the same exact solution either within quasi-neutral regions or within non-neutral regions including cathode, anode, and dielectric sheaths. \section{Physical Model} Let us now outline the physical model from which the recast computationally-efficient set of transport equations will be subsequently derived. Commonly referred to as the ``fluid model'' or ``drift-diffusion model'', the physical model under consideration treats the neutrals and each charged species as independent fluids with their own velocities interacting with the other fluids through collision forces. In the presence of a magnetic field, the drift-diffusion model yields the following mass conservation equation for each charged species (either electrons, positive ions, or negative ions): % \begin{equation} \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \vec{V}_i^k = W_k \label{eqn:massconservation} \end{equation} % where $k$ is an index associated with the species to be solved and where the species velocity $\vec{V}^k$ is obtained from: % \begin{equation} \vec{V}^k=\vec{V}^{\rm n} + s_k \mu_k \left(\vec{E}+\vec{V}^k \times \vec{B}\right)-\frac{\mu_k}{|C_k| N_k}\nabla P_k \label{eqn:Vvector} \end{equation} % The latter expression for the species velocity can be obtained from the momentum equation by assuming that the terms related to inertia change and to collision forces between charged species are negligible compared to the terms related to the collision forces between the charged species and the neutrals. In the mass and momentum equations above, $N_k$ is the number density of species $k$, $\vec{V}_i^k$ is the $i$th component of the $k$th species velocity including drift and diffusion, $\vec{V}^{\rm n}$ is the neutrals velocity vector including drift and diffusion, $W_k$ is the source term containing all chemical reactions, and $P_k$ is the partial pressure. As well, $s_k$ is the species sign (equal to $+1$ for the positive ions and to $-1$ for the electrons and the negative ions), $C_k$ is the species charge (equal to $-e$ for the electrons, to $+e$ for the singly-charged positive ions, to $-2e$ for the doubly-charged negative ions, etc, with $e$ the elementary charge), $\vec{E}$ is the electric field vector, $\vec{B}$ the magnetic field vector, and $\mu_k$ the species mobility. It can be convenient to rewrite the charged species velocity vector in tensor form as follows: % \begin{equation} \vec{V}^{k}_i = \vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{k}_{ij}}{|C_k| N_k} \frac{\partial P_k}{\partial x_j} \label{eqn:V} \end{equation} % with $\vec{E}^{\rm n}$ being the effective electric field in the neutrals reference frame: % \begin{equation} \vec{E}^{\rm n} \equiv \vec{E}+\vec{V}^{\rm n} \times \vec{B} \label{eqn:En} \end{equation} % and with the mobility tensor equal to: % \begin{equation} \!\!\! 
\begin{array}{l}\mfd \wtilde{\mu}^k =\frac{\mu_k}{1+\mu_k^2|\vec{B}|^2}\left[\!\!\begin{array}{ccc} 1+\mu_k^2 \vec{B}_1^2 & \mu_k^2\vec{B}_1\vec{B}_2+s_k \mu_k \vec{B}_3 & \mu_k^2\vec{B}_1\vec{B}_3-s_k \mu_k \vec{B}_2 \alb \mu_k^2\vec{B}_1\vec{B}_2-s_k\mu_k\vec{B}_3 & 1+\mu_k^2\vec{B}_2^2 & \mu_k^2\vec{B}_2\vec{B}_3+s_k\mu_k\vec{B}_1 \alb \mu_k^2\vec{B}_1\vec{B}_3 +s_k\mu_k\vec{B}_2 & \mu_k^2 \vec{B}_2\vec{B}_3-s_k\mu_k\vec{B}_1 & 1+\mu_k^2\vec{B}_3^2 \end{array} \!\!\!\!\right] \end{array} \label{eqn:mutilde} \end{equation} % In the latter, the magnetic field $\vec{B}$ corresponds to the externally applied magnetic field, as the induced magnetic field can be shown to have negligible impact on many weakly-ionized plasmas (the so-called low magnetic Reynolds number approximation). When the induced magnetic field plays a negligible role, and when the applied (external) magnetic field does not vary in time, it can be demonstrated that the Maxwell equations reduce to the solution of Gauss's law: % \begin{equation} \sum_{j=1}^3 \frac{\partial \vec{E}_j}{\partial x_j}=\frac{1}{\epsilon_0} \sum_{k=1}^\ns C_k N_k \label{eqn:gauss} \end{equation} % in which the electric field vector can be expressed in terms of a potential function as follows: % \begin{equation} \vec{E}_j=-\frac{\partial \phi}{\partial x_j} \label{eqn:potential} \end{equation} % The electric field potential $\phi$ exists as long as the curl of the electric field is zero, which is the case when the magnetic field does not vary in time. Although not required to solve the system of equations outlined above, one physical parameter that is often used when analyzing plasma flowfields is the current density $\vec{J}$, which is defined as: % \begin{equation} \vec{J}_i \equiv \sum_{k=1}^\ns C_k N_k \vec{V}^k_i \label{eqn:Jdefinition} \end{equation} % After substituting in the latter the velocity tensor from Eq.\ (\ref{eqn:V}), the following expression for the current can be obtained: % \begin{equation} \begin{array}{l} \mfd \vec{J}_i = \sum_{j=1}^3 \wtilde{\sigma}_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^3 \sum_{k=1}^\ns s_k \wtilde{\mu}^k_{ij} \frac{\partial P_k}{\partial x_j} + \rho_{\rm e} \vec{V}_i^{\rm n} \end{array} \label{eqn:J} \end{equation} % in which the tensor conductivity and the net charge density are defined as: % \begin{equation} \begin{array}{l} \mfd \wtilde{\sigma}\equiv\sum_{k=1}^\ns |C_k| N_k \wtilde{\mu}^k \end{array} \label{eqn:sigmatilde} \end{equation} % % \begin{equation} \rho_{\rm e}\equiv\sum_{k=1}^\ns C_k N_k \label{eqn:rhoe} \end{equation} % Finally, a constitutive relation that is needed to close the system of equations is the ideal gas law which yields the partial pressure given the number density and the temperature: % \begin{equation} P_k = N_k k_{\rm B} T_k \label{eqn:Pk} \end{equation} % where $T_k$ is the species temperature. Commonly used to simulate weakly-ionized plasmas in the presence of magnetic field, the physical model outlined above can predict accurately not only quasi-neutral phenomena such as ambipolar diffusion and ambipolar drift but also non-neutral phenomena within cathode and anode sheaths. It can also be used to simulate multicomponent plasmas in which there are several types of ions (either negative or positive) as well as unsteady plasmas in which the displacement current is significant. 
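As a simple illustration of how the applied magnetic field enters this model, consider the configuration in which the magnetic field is aligned with the third coordinate axis, $\vec{B}=(0,\,0,\,B)$. The mobility tensor of Eq.\ (\ref{eqn:mutilde}) then reduces to
%
\begin{equation}
\wtilde{\mu}^k =\frac{\mu_k}{1+\mu_k^2 B^2}\left[\begin{array}{ccc} 1 & s_k \mu_k B & 0 \alb -s_k \mu_k B & 1 & 0 \alb 0 & 0 & 1+\mu_k^2 B^2 \end{array}\right]
\end{equation}
%
that is, transport along the magnetic field lines remains unmagnetized ($\wtilde{\mu}^k_{33}=\mu_k$), while transport in the plane perpendicular to $\vec{B}$ is reduced by the factor $1+\mu_k^2 B^2$ (with $\mu_k |\vec{B}|$ the Hall parameter of species $k$) and acquires the usual off-diagonal Hall components.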
Nonetheless, it is pointed out that the physical model used herein is subject to several assumptions, with the most critical being the following: (i) the forces due to collisions between charged species are small compared to forces due to collisions between the charged species and the neutrals; (ii) the induced magnetic field is negligible; (iii) within the momentum equation, the terms related to the inertia change are negligible compared to the terms related to collision forces. As was demonstrated in Ref.\ \cite{jcp:2011:parent}, such assumptions are well justified as long as the plasma remains weakly-ionized (i.e. the ionization fraction should remain lower than $10^{-3}$ or so), which is the case for a wide variety of plasmas used in industrial applications. \section{Conventional Governing Equations} When using digital computers, the conventional approach to simulate the drift-diffusion physical model outlined in the previous section consists of solving a transport equation for each charged species along with the potential equation obtained from Gauss's law (see for instance Refs.\ \cite{jcp:2004:surzhikov,jpp:2008:poggie,jap:2009:shang,book:2012:surzhikov,aiaaconf:2014:surzhikov}). The charged species transport equation can be derived from the mass conservation equations as outlined in Eq.\ (\ref{eqn:massconservation}) with the species velocity from the momentum equation applicable to a weakly-ionized plasma as shown in Eq.\ (\ref{eqn:V}). This yields the following: % \begin{equation} \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k\left(\vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{k}_{ij}}{|C_k| N_k} \frac{\partial P_k}{\partial x_j}\right) = W_k \end{equation} % When the magnetic field is strong (resulting in an electron Hall parameter $\mu_{\rm e}|\vec{B}|$ approaching or exceeding 1), and when the transport equations are solved through an implicit integration strategy, it is beneficial to the stability of the method to extract from the pressure gradient terms the diffusion terms that are diagonally dominant. This can be achieved by first subtracting and adding a pressure gradient term on the LHS as follows: % \begin{equation} \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k\left(\vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{k}_{ij}}{|C_k| N_k} \frac{\partial P_k}{\partial x_j} -\frac{\mu_k}{|C_k| N_k} \frac{\partial P_k}{\partial x_i} +\frac{\mu_k}{|C_k| N_k} \frac{\partial P_k}{\partial x_i} \right) = W_k \end{equation} % Then, expand the partial pressure using the ideal gas law $P_k=N_k k_{\rm B} T_k$ and reformat: % \begin{equation} \begin{array}{l}\mfd \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k\left(\vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{k}_{ij}-\delta_{ij} \mu_k}{|C_k| N_k} \frac{\partial P_k}{\partial x_j}\right) -\sum_{i=1}^3 \frac{\partial}{\partial x_i} \left(\frac{T_k k_{\rm B} \mu_k}{|C_k| } \frac{\partial N_k}{\partial x_i} \right)\alb\mfd =W_k+ \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left(\frac{N_k k_{\rm B} \mu_k}{|C_k| } \frac{\partial T_k}{\partial x_i} \right) \end{array} \end{equation} % with $\delta_{ij}$ the Kronecker delta which is equal to 1 should $i=j$ and to 0 otherwise. 
We can write the latter in matrix form as follows: % \begin{equation} R = Z \frac{\partial U}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} A_i U - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( K \frac{\partial U}{\partial x_i}\right) -S \end{equation} % where $R$ is the residual vector and the other matrices correspond to: % \begin{eqnarray} \left[U\right]_k&=&N_k \alb \left[A_i\right]_{k,k} &=& \vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{k}_{ij}-\delta_{ij} \mu_k}{|C_k| N_k} \frac{\partial P_k}{\partial x_j} \alb \left[K\right]_{k,k} &=& \frac{T_k k_{\rm B} \mu_k}{|C_k|} \alb \left[Z\right]_{k,k} &=& 1 \alb \left[S\right]_k &=&W_k+ \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left(\frac{N_k k_{\rm B} \mu_k}{|C_k| } \frac{\partial T_k}{\partial x_i} \right) \end{eqnarray} % where the notation $[M]_{k,k}$ denotes the diagonal element on the $k$th row of the matrix $M$ while the notation $[F]_k$ refers to the element on the $k$th row of the vector $F$. The latter yields a diagonally-dominant diffusion matrix $K$ even at high electron Hall parameter, which is a necessary condition for stable integration using an implicit method. As well, extracting the non-diagonal diffusion terms from matrix $K$ and inserting them in the convection matrix $A$ permits standard central stencils to be used when discretizing the diffusion terms. However, should they include cross-derivatives, the diffusion terms would require non-standard upwinded stencils or they would lead to spurious oscillations at a high Hall parameter (see Ref.\ \cite{jcp:2011:parent} for more details on this point). The electric field is obtained from the potential as in Eq.\ (\ref{eqn:potential}) which itself is found by integrating the potential equation concurrently to the mass conservation equations. To ensure that Gauss's law is satisfied within the non-neutral regions, it is necessary to obtain the potential equation from Gauss's law by substituting Eq.\ (\ref{eqn:potential}) into Eq.\ (\ref{eqn:gauss}): % \begin{equation} \sum_{j=1}^3 \frac{\partial^2 \phi}{\partial x_j^2}=-\frac{\rho_{\rm e}}{\epsilon_0} \label{eqn:potentialgauss} \end{equation} % The latter constitute what is here denoted as the ``conventional governing equations'', which are commonly used to solve weakly-ionized plasmas (either quasi-neutral or non-neutral) in the presence of a magnetic field using discrete methods. \section{Recast of the Positively-Charged Species Transport Equations} The ``conventional governing equations'' outlined in the previous section are well known to be exceptionally stiff. Such stiffness has been observed to be independent of the type of integration strategy used, either explicit or fully-implicit. As first outlined in Ref.\ \cite{jcp:2013:parent}, the stiffness originates from the potential equation based on Gauss's law being particularly sensitive to small errors in the electron or ion densities whenever the plasma becomes quasi-neutral. One approach that has been shown successful in relieving the stiffness is by rewriting the governing equations such that the electric field is obtained from a potential based on Ohm's law rather than Gauss's law (see Refs.\ \cite{jcp:2013:parent} and \cite{jcp:2014:parent}). As well, to ensure that Gauss's law is satisfied some source terms need to be added to the positive ion transport equations. We here generalize the approach proposed in Ref.\ \cite{jcp:2014:parent} to a plasma in magnetic field. 
To do so, it is convenient to first define $\Delta \vec{V}^k$ as the difference between the velocity of species $k$ and the velocity of species $k$ should the magnetic field be zero: % \begin{equation} \Delta \vec{V}^k_i \equiv \vec{V}^k_i - \left(\vec{V}^{\rm n}_i+ s_k \mu_k \vec{E}_i - \frac{\mu_k}{|C_k| N_k} \frac{\partial P_k}{\partial x_i}\right) \label{eqn:deltaV} \end{equation} % Isolate $\vec{V}^k$ in the latter and substitute in Eq.\ (\ref{eqn:massconservation}), and simplify noting that $s_k=1$ and $C_k$ is positive for the positive ions: % \begin{equation} \frac{\partial N_{k}}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left(N_k \Delta \vec{V}_i^k +N_{k} \vec{V}^{\rm n}_i+N_{k} \mu_k \vec{E}_i - \frac{\mu_k }{|C_{k}| } \frac{\partial P_{k}}{\partial x_i}\right) = W_{k} \end{equation} % The source terms that must be added to ensure that Gauss's law is satisfied can be obtained by multiplying Gauss's law Eq.\ (\ref{eqn:gauss}) by $\mu_{k} N_{k}$ and rearranging: % \begin{equation} 0=\mu_{k} N_{k} \sum_{i=1}^3 \frac{\partial \vec{E}_i}{\partial x_i}-\mu_{k} N_{k}\frac{1}{\epsilon_0} \sum_{r=1}^\ns C_r N_r \end{equation} % Then, we add the latter to the former to obtain: % \begin{equation} \frac{\partial N_{k}}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left(N_k \Delta \vec{V}_i^k +N_{k} \vec{V}^{\rm n}_i+N_{k} \mu_k \vec{E}_i - \frac{\mu_k }{|C_{k}| } \frac{\partial P_{k}}{\partial x_i}\right) = W_{k}+\mu_{k} N_{k} \sum_{i=1}^3 \frac{\partial \vec{E}_i}{\partial x_i}-\mu_{k} N_{k}\frac{1}{\epsilon_0} \sum_{r=1}^\ns C_r N_r \end{equation} % And we note that the following statement holds: % \begin{equation} \mu_{k} N_{k} \sum_{i=1}^3 \frac{\partial \vec{E}_i}{\partial x_i} = \sum_{i=1}^3 \frac{\partial }{\partial x_i} \mu_{k} N_{k} \vec{E}_i - \sum_{i=1}^3 \vec{E}_i \frac{\partial }{\partial x_i} \mu_{k} N_{k} \end{equation} % Substitute the latter in the former, rewrite the partial pressure term using the ideal gas law outlined in Eq.\ (\ref{eqn:Pk}), expand the pressure derivatives, and rearrange: % \begin{equation} \begin{array}{l}\mfd \frac{\partial N_{k}}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left(\Delta \vec{V}_i^k + \vec{V}^{\rm n}_i\right) - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_k k_{\rm B} T_{k}}{|C_{k}| } \frac{\partial N_{k}}{\partial x_i} \right) + \sum_{i=1}^3 \vec{E}_i \frac{\partial }{\partial x_i} \mu_{k} N_{k} \alb\mfd~~~~~ = W_{k} -\mu_{k} N_{k}\frac{1}{\epsilon_0} \sum_{r=1}^\ns C_r N_r + \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_k k_{\rm B} N_{k}}{|C_{k}| } \frac{\partial T_{k}}{\partial x_i} \right) \end{array} \label{eqn:positivespecies} \end{equation} % The latter is the modified transport equation for the positively-charged species, and must be used for all positive ions. It is obtained without introducing assumptions or simplifications from the physical model outlined in Section 2. \section{Recast of the Negatively-Charged Species Transport Equations} It can be shown that a system of equations composed of the recast transport equation outlined in Eq.\ (\ref{eqn:positivespecies}) for the positive ions, combined with the standard transport equation for the negative species (Eq.\ (\ref{eqn:massconservation})), and combined with a potential equation based on the generalized Ohm's law \cite{jcp:2011:parent} has the same exact solution as the conventional governing equations outlined in Section 3 while not exhibiting high stiffness. 
However, such a set of equations would yield a rather low resolution and require significantly more nodes to reach the same accuracy within quasi-neutral regions of the plasma. As was demonstrated in Ref.\ \cite{jcp:2013:parent}, such can be overcome by rewriting the transport equation for the negative species in \emph{ambipolar form} following the approach outlined in Ref.\ \cite{jcp:2011:parent:2}. Rewriting the transport equations in ambipolar form increases the resolution because it reduces the dependence of the potential equation on the charged species transport equations in quasi-neutral regions. The ambipolar form of the negatively-charged species transport equations can be obtained by first isolating $\vec{V}_i^k$ in Eq.\ (\ref{eqn:deltaV}) and substituting in Eq.\ (\ref{eqn:massconservation}) noting that $s_k=-1$ when the species is negatively-charged: % \begin{equation} \begin{array}{l}\mfd \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left( \Delta \vec{V}_i^k +\vec{V}^{\rm n}_i - \mu_k \vec{E}_i \right) - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_k }{|C_k|} \frac{\partial P_k}{\partial x_i} \right) = W_k \end{array} \end{equation} % Then, without loss of generality, we can add and subtract the ambipolar electric field $\vec{E}^\prime$ to the electric field: % \begin{equation} \begin{array}{l}\mfd \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left( \Delta \vec{V}_i^k +\vec{V}^{\rm n}_i - \mu_k \left(\vec{E} -\vec{E}^\prime \right)_i \right) - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \mu_k N_k \vec{E}^\prime_i - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_k }{|C_k|} \frac{\partial P_k}{\partial x_i} \right) = W_k \end{array} \label{eqn:negspecies1} \end{equation} % As outlined in Ref.\ \cite{jcp:2014:parent}, significant gains in resolution can be reached when $\vec{E}^\prime$ is defined as the component of the electric field that cancels out all components of the current except due to drift. However, defining the ambipolar electric field in this manner leads to some difficulties when the plasma is in the presence of a magnetic field. Not only does this result in a particularly complicated transport equation in which several terms are expensive to compute, but this also entails convergence hangs when the magnetic field reaches high values. We here find it necessary to define the ambipolar electric field in a slightly different manner as the component of the electric field that cancels out all components of the \emph{unmagnetized} current except due to drift and due to the motion of the neutrals, with the unmagnetized current being the current that would be obtained locally should the magnetic field be zero. 
The ambipolar electric field thus takes on the form: % \begin{equation} \vec{E}_i^\prime \equiv \vec{E}_i - \frac{1}{\sigma} \left(\vec{J}_i-\sum_{r=1}^\ns C_r N_r \Delta \vec{V}_i^r -\rho_{\rm e} \vec{V}_i^{\rm n} \right) \label{eqn:Eprimedefinition} \end{equation} % where the term within the bracket on the RHS can be easily shown to be the unmagnetized current density (the current density in the absence of a magnetic field): % \begin{equation} \mfd \vec{J}_i-\sum_{r=1}^\ns C_r N_r \Delta \vec{V}_i^r - \rho_{\rm e} \vec{V}_i^{\rm n} = \sigma \vec{E}_i - \sum_{r=1}^\ns s_r \mu_r \frac{\partial P_r}{\partial x_i} \label{eqn:Jzero} \end{equation} % and where the conductivity $\sigma$ is defined as: % \begin{equation} \sigma \equiv \sum_{k=1}^\ns \mu_k |C_k| N_k \end{equation} % After substituting Eq.\ (\ref{eqn:Jzero}) in Eq.\ (\ref{eqn:Eprimedefinition}) it can be shown that: % \begin{equation} \vec{E}_i^\prime = \sum_{r=1}^\ns \frac{s_r \mu_r }{\sigma} \frac{\partial P_r}{\partial x_i} \label{eqn:Eprime} \end{equation} % Then, after substituting $\vec{E}-\vec{E}^\prime$ from Eq.\ (\ref{eqn:Eprimedefinition}) and $\vec{E}^\prime$ from Eq.\ (\ref{eqn:Eprime}) into Eq.\ (\ref{eqn:negspecies1}) and rearranging, we obtain: % \begin{equation} \begin{array}{l}\mfd \frac{\partial N_k}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left( \sum_{r=1}^\ns \frac{\delta_{rk}\sigma+ C_r N_r \mu_k}{\sigma} \Delta \vec{V}_i^r +\left(1+\frac{\mu_k \rho_{\rm e}}{\sigma}\right)\vec{V}^{\rm n}_i - \frac{\mu_k}{\sigma} \vec{J}_i \right) \alb\mfd~~~~ - \sum_{i=1}^3 \sum_{r=1}^{\ns} \frac{\partial}{\partial x_i} \left( \mu_r \left( \frac{\delta_{rk}}{|C_k|}+ \frac{ \mu_k N_k s_r }{\sigma} \right) \frac{\partial P_r}{\partial x_i}\right) = W_k \end{array} \end{equation} % where $\delta_{rk}$ is the Kronecker delta. 
After splitting the derivative involving the current $\vec{J}$ into two terms and noting that the divergence of the current can be written as: % \begin{equation} \sum_{i=1}^3 \frac{\partial \vec{J}_i}{\partial x_i}=-\sum_{r=1}^\ns C_r \frac{\partial N_r}{\partial t} \end{equation} % the following is obtained: % \begin{equation} \begin{array}{l}\mfd \sum_{r=1}^{\ns} \frac{\delta_{rk}\sigma+C_r \mu_k N_k}{\sigma} \frac{\partial N_r}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left(\sum_{r=1}^\ns \frac{\delta_{rk}\sigma+C_r \mu_k N_r}{\sigma} \Delta \vec{V}_i^r +\left(1+\frac{\mu_k\rho_{\rm e}}{\sigma}\right)\vec{V}^{\rm n}_i\right) \alb\mfd~~~~ - \sum_{i=1}^3 \vec{J}_i \frac{\partial}{\partial x_i} \left( \frac{\mu_k N_k}{\sigma} \right) - \sum_{i=1}^3 \sum_{r=1}^{\ns} \frac{\partial}{\partial x_i} \left(\mu_r \left(\frac{ \delta_{rk} }{ |C_k|}+ \frac{ \mu_k N_k s_r}{\sigma}\right) \frac{\partial P_r}{\partial x_i}\right) = W_k \end{array} \end{equation} % We then use the ideal gas relationship in Eq.\ (\ref{eqn:Pk}), expand the partial derivatives, and rewrite: % \begin{equation} \begin{array}{l}\mfd \sum_{r=1}^{\ns} \frac{\delta_{rk}\sigma+C_r \mu_k N_k}{\sigma} \frac{\partial N_r}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left(\sum_{r=1}^\ns \frac{\delta_{rk}\sigma+C_r \mu_k N_r}{\sigma} \Delta \vec{V}_i^r +\left(1+\frac{\mu_k\rho_{\rm e}}{\sigma}\right)\vec{V}^{\rm n}_i\right) - \sum_{i=1}^3 \vec{J}_i \frac{\partial}{\partial x_i} \left( \frac{\mu_k N_k}{\sigma} \right) \alb\mfd~~~~ - \sum_{i=1}^3 \sum_{r=1}^{\ns} \frac{\partial}{\partial x_i} \left(\mu_r k_{\rm B} T_r\left(\frac{ \delta_{rk} }{ |C_k|}+ \frac{ \mu_k N_k s_r}{\sigma}\right) \frac{\partial N_r}{\partial x_i}\right) = W_k + \sum_{r=1}^\ns \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \mu_r k_{\rm B} N_r \left(\frac{\delta_{rk}}{|C_k|}+\frac{s_r \mu_k N_k}{\sigma}\right) \frac{\partial T_r}{\partial x_i} \right) \end{array} \label{eqn:negativespecies} \end{equation} % The latter is the proposed ``ambipolar form'' of the transport equation for the negatively-charged species. It must be used not only for the electrons but for all negative ions. It is emphasized that the recast Eq.\ (\ref{eqn:negativespecies}) is obtained from the physical model outlined in Section 2 without making any assumption or simplification. As such, it can be used not only in quasi-neutral regions but also in non-neutral regions including cathode and anode sheaths. 
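To illustrate the behaviour of the ambipolar electric field defined in Eq.\ (\ref{eqn:Eprime}), consider the special case of a quasi-neutral two-component plasma made up of electrons and one species of singly-charged positive ions ($N_+\approx N_{\rm e}=N$) with uniform temperatures. Because $\mu_{\rm e}\gg\mu_+$, the electron terms dominate both the conductivity, $\sigma\approx e\mu_{\rm e}N$, and the pressure-gradient sum, and Eq.\ (\ref{eqn:Eprime}) becomes
%
\begin{equation}
\vec{E}_i^\prime = \frac{\mu_+}{\sigma}\frac{\partial P_+}{\partial x_i}-\frac{\mu_{\rm e}}{\sigma}\frac{\partial P_{\rm e}}{\partial x_i} \approx -\frac{k_{\rm B} T_{\rm e}}{e N}\frac{\partial N}{\partial x_i}
\end{equation}
%
which is the classical ambipolar field of a weakly-ionized quasi-neutral plasma; within such regions, the recast transport equation for the electrons hence recovers the usual ambipolar diffusion behaviour.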
\section{Proposed Governing Equations} We can combine the transport equation for the positively-charged species, Eq.\ (\ref{eqn:positivespecies}) and the transport equation for the negatively-charged species, Eq.\ (\ref{eqn:negativespecies}) into a single equation: % \begin{equation} \begin{array}{l}\mfd \sum_{r=1}^{\ns} \frac{\delta_{rk}\sigma + \beta^-_k C_r \mu_k N_k}{\sigma} \frac{\partial N_r}{\partial t} + \sum_{i=1}^3 \frac{\partial}{\partial x_i} N_k \left(\sum_{r=1}^\ns \frac{\delta_{rk}\sigma+\beta_k^- C_r \mu_k N_r}{\sigma} \Delta \vec{V}_i^r + \left(1+\beta_k^- \frac{\mu_k\rho_{\rm e}}{\sigma}\right)\vec{V}^{\rm n}_i\right) \alb\mfd~~~~ - \sum_{i=1}^3 \beta_k^- \vec{J}_i \frac{\partial}{\partial x_i} \left( \frac{\mu_k N_k}{\sigma} \right) + \sum_{i=1}^3 \beta_k^+ \vec{E}_i \frac{\partial }{\partial x_i} \mu_{k} N_{k} - \sum_{i=1}^3 \sum_{r=1}^{\ns} \frac{\partial}{\partial x_i} \left(\mu_r k_{\rm B} T_r\left(\frac{ \delta_{rk} }{ |C_k|}+ \frac{\beta_k^- s_r \mu_k N_k }{\sigma}\right) \frac{\partial N_r}{\partial x_i}\right) \alb\mfd~~~~ = W_k + \sum_{r=1}^\ns \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \mu_r k_{\rm B} N_r \left(\frac{\delta_{rk}}{|C_k|}+\frac{\beta_k^- s_r \mu_k N_k}{\sigma}\right) \frac{\partial T_r}{\partial x_i} \right) -\beta_k^+ \mu_{k} N_{k}\frac{\rho_{\rm e}}{\epsilon_0} \end{array} \end{equation} % with % \begin{equation} \beta_k^\pm = \max(0,~\pm s_k) \end{equation} % We can simplify further the latter by defining the ambipolar tensor $\alpha$ as: % \begin{equation} \alpha_{kr} \equiv \frac{\delta_{rk}\sigma+ \beta_k^- C_r \mu_k N_k}{\sigma} = |C_r| \left( \frac{\delta_{rk}}{|C_k|}+ \frac{ \beta_k^- s_r \mu_k N_k}{\sigma}\right) = \frac{N_k}{N_r}\frac{\delta_{rk}\sigma+ \beta_k^- C_r \mu_k N_r}{\sigma} \end{equation} % and by noting that the following equality holds: % \begin{equation} 1+\beta_k^- \frac{\mu_k\rho_{\rm e}}{\sigma} = \sum_{r=1}^\ns \frac{\delta_{rk}\sigma+ \beta_k^- C_r \mu_k N_r}{\sigma} \end{equation} % We thus obtain: % \begin{equation} \begin{array}{l}\mfd \sum_{r=1}^{\ns} \alpha_{kr} \frac{\partial N_r}{\partial t} + \sum_{i=1}^3 \sum_{r=1}^\ns \frac{\partial}{\partial x_i} \alpha_{kr} \left( \Delta \vec{V}_i^r + \vec{V}^{\rm n}_i \right)N_r + \sum_{i=1}^3 \left( \beta_k^+ \vec{E}_i- \beta_k^- \vec{J}_i \right) \frac{\partial}{\partial x_i} \mu_{k} N_{k} \left( \beta_k^+ + {\textstyle \frac{1}{\sigma}} \beta_k^- \right) \alb\mfd~~~~ - \sum_{i=1}^3 \sum_{r=1}^{\ns} \frac{\partial}{\partial x_i} \left(\frac{\mu_r k_{\rm B} T_r \alpha_{kr}}{|C_r|} \frac{\partial N_r}{\partial x_i}\right) = W_k -\beta_k^+ \mu_{k} N_{k}\frac{\rho_{\rm e}}{\epsilon_0} + \sum_{r=1}^\ns \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_r k_{\rm B} N_r \alpha_{kr}}{|C_r|} \frac{\partial T_r}{\partial x_i} \right) \end{array} \label{eqn:positivenegativespecies} \end{equation} % The proposed charged species transport equations can also be written in general matrix form as follows: % \begin{equation} R=Z\frac{\partial U}{\partial t} + \sum_{i=1}^3\frac{\partial}{\partial x_i} A_i U + \sum_{i=1}^3 G_i \frac{\partial}{\partial x_i} B U - \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( K \frac{\partial U}{\partial x_i} \right)-S \label{eqn:Rproposed} \end{equation} % where $R$ is the residual which is driven to zero through the iterative process, and where the other matrices can be shown to correspond to: % \begin{eqnarray} \left[U\right]_k&=&N_k \alb \left[A_i\right]_{k,k}&=&\sum_{r=1}^\ns \frac{\delta_{rk}\sigma+\beta_k^- C_r \mu_k 
N_r}{\sigma} \Delta \vec{V}_i^r + \left(1+\beta_k^- \frac{\mu_k\rho_{\rm e}}{\sigma}\right)\vec{V}^{\rm n}_i \alb % \left[A_i\right]_{k,k}&=&\sum_{r=1}^\ns \alpha_{kr}\frac{N_r}{N_k} \left(\Delta \vec{V}_i^r +\vec{V}^{\rm n}_i\right) \alb \left[B\right]_{k,k}&=&\beta_k^+ \mu_k + \frac{1}{\sigma}\beta_k^- \mu_k \alb \left[G_i\right]_{k,k}&=&\beta_k^+ \vec{E}_i - \beta_k^- \vec{J}_i \alb \left[K\right]_{k,r} &=& \frac{\mu_r k_{\rm B} T_r \alpha_{kr}}{|C_r|} \alb \left[Z\right]_{k,r} &=& \alpha_{kr} \alb \left[S\right]_k &=& W_k -\beta_k^+ \mu_{k} N_{k}\frac{\rho_{\rm e}}{\epsilon_0} + \sum_{r=1}^\ns \sum_{i=1}^3 \frac{\partial}{\partial x_i} \left( \frac{\mu_r k_{\rm B} N_r \alpha_{kr}}{|C_r|} \frac{\partial T_r}{\partial x_i} \right) \end{eqnarray} % In the latter, the velocity difference $\Delta \vec{V}$ can be obtained by substituting Eq.\ (\ref{eqn:V}) into Eq.\ (\ref{eqn:deltaV}): % \begin{equation} \Delta \vec{V}_i^k = \sum_{j=1}^\nd s_k \wtilde{\mu}^k_{ij} \vec{E}_j^{\rm n} + \sum_{j=1}^\nd \left(\frac{\delta_{ij} \mu_k-\wtilde{\mu}^{k}_{ij}}{|C_k| N_k}\right) \frac{\partial P_k}{\partial x_j} - s_k \mu_k \vec{E}_i \end{equation} % while the current density is obtained from Eq.\ (\ref{eqn:J}) and the electric field is obtained from the potential equation based on Ohm's law which can be derived from the physical model outlined in Section 2 following the approach shown in Ref.\ \cite{jcp:2011:parent}: % \begin{equation} \!\begin{array}{l} \mfd\sum_{i=1}^3 \frac{\partial}{\partial x_i}\left(\sum_{j=1}^3 \wtilde{\sigma}_{ij} \left(-\frac{\partial \phi}{\partial x_j} + \left(\vec{V}^{\rm n} \times \vec{B}\right)_j\right) - \sum_{j=1}^3 \sum_{k=1}^\ns s_k \wtilde{\mu}^{k}_{ij} \frac{\partial P_k}{\partial x_j} + \rho_{\rm e} \vec{V}_i^{\rm n} \right)=- \frac{\partial \rho_{\rm e}}{\partial t} \end{array} \end{equation} % from which the electric field can be found using Eq.\ (\ref{eqn:potential}). It is noted that the set of equations proposed herein is obtained from the same physical model as the conventional set of equations without introducing any additional assumption or simplification. As such, the exact solution obtained from the proposed set of equations is identical to the one obtained from the conventional set not only for quasi-neutral plasmas with significant ambipolar diffusion and drift phenomena but also for non-neutral sheaths, for unsteady plasmas, as well as for plasmas where the displacement current is non-negligible. Despite yielding the same exact solution as the conventional equations, the set of equations proposed herein is advantaged by being considerably less stiff and hence requiring significantly less computing effort to reach convergence. Further, as will be shown below in the Test Cases section, the proposed equations yield a considerably higher resolution within plasma regions that are quasi-neutral. \section{Boundary Conditions} When the electron Hall parameter (i.e.\ the product between the electron mobility and the magnitude of the magnetic field) becomes significant due to a strong applied magnetic field, the enforcement of boundary conditions can become problematic. The difficulties arise when imposing a zero current condition perpendicular to dielectric surfaces by setting to zero the component of $\vec{J}$ perpendicular to the surface in Eq.\ (\ref{eqn:J}). 
When the magnetic field is non-zero, such leads to the potential $\phi$ at the boundary node depending not only on the properties of the nearest inner node but also on the properties of the adjacent boundary nodes. Numerical experiments show that such direct dependence between boundary nodes entails major convergence problems for many plasma flowfields either when using the conventional or the proposed set of governing equations. The convergence difficulties become more severe when the electron Hall parameter approaches or exceeds 0.1 often leading to the solution diverging towards aphysical states or continuously oscillating without reaching a root. One way that this problem can be overcome is by setting the magnetic field to zero on all boundary nodes and on all inner nodes adjacent to boundary nodes. In doing so, the plasma is not subject to a magnetic field near the surfaces and the same boundary conditions as used for a plasma in the absence of magnetic field can be specified: % \begin{equation} \frac{\partial }{\partial \eta} N_+ \vec{V}^{+}_\eta = 0 {~~~~~\rm and~~~~~} N_{-}=0 {~~~~~\rm and~~~~~} N_{\rm e}=\frac{\gamma}{\mu_{\rm e}} \sum_{k=1}^\ns N_k \mu_k \beta_k^+ {~~~~~\rm for~dielectrics~or~for~} \vec{E}_\eta<0 \end{equation} % % \begin{equation} N_{+}=0 {~~~~~\rm and~~~~~} \frac{\partial }{\partial \eta} N_- \vec{V}^{-}_\eta = 0 {~~~~~\rm and~~~~~} \frac{\partial }{\partial \eta} N_{\rm e} \vec{V}^{\rm e}_\eta= 0 {~~~~~\rm otherwise} \end{equation} % with $\gamma$ being the secondary emission coefficient and the subscripts/superscripts ``e'', ``$-$'', and ``$+$'' denoting the electron species, the negative ion species, and the positive ion species respectively. In the latter $\eta$ refers to the coordinate perpendicular to the boundary and pointing away from the surface towards the nearest inner node, while $\vec{E}_\eta$ and $\vec{V}^k_\eta$ refer to the electric field component and $k$th species velocity component in the direction of $\eta$. As well, the potential on the dielectrics is specified such that the current perpendicular to dielectric surfaces is zero. Because the magnetic field is zero at the boundary nodes and the near-boundary nodes, this yields the following expression for the dielectrics potential: % \begin{equation} \mfd \frac{\partial \phi}{\partial \eta} = - \frac{1}{\sigma}\sum_{k=1}^\ns s_k \mu_k \frac{\partial P_k}{\partial \eta}{~~~~~\rm for~dielectrics~only} \end{equation} % It may be argued that setting the magnetic field to zero on the boundary and near-boundary nodes may entail some errors in the converged solution. However, such errors disappear as the grid is refined because the volume of the unmagnetized regions near the surfaces becomes insignificant compared to the total volume of the plasma. When applying the boundary conditions, it is found necessary to under-relax the update of the number densities and of the potential on the boundary nodes in order to prevent convergence hangs. For all cases here considered, either when using the conventional or the proposed equations, the relaxation factor is set to 0, 0.8, and 0.5 for the ion, electron, and potential equations respectively. The relaxation factor is such that, when set to 0, the boundary node property does not depend on its previous value when being updated and, when set to 1, the boundary node property remains unaltered. 
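Written explicitly (denoting by $q$ a generic boundary-node unknown and by $r$ its relaxation factor), the update performed on the boundary nodes at each iteration is thus
%
\begin{equation}
q^{\rm updated} = \left(1-r\right) q^{\rm new} + r\, q^{\rm old}
\end{equation}
%
so that $r=0$ simply overwrites the boundary value with the newly computed one, while $r=1$ keeps it frozen at its previous value.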
\section{Discretization and Integration}
To enable a fair comparison, both the conventional and the proposed sets of equations are discretized using the same stencils and solved using the same iterative procedure. The convection terms are discretized with the Steger-Warming scheme made second-order accurate through the Van Leer TVD limiter, while the diffusion terms are discretized with centered stencils, and the term $G \delta_x BU$ is discretized as specified in Ref.\ \cite{jcp:2013:parent}. When extending the Steger-Warming scheme or the centered diffusion stencils to multiple dimensions, a dimensional-splitting strategy is employed in which the derivatives are split amongst the several dimensions and then discretized using one-dimensional operators. Because the problems tackled herein are such that the molecular and ambipolar diffusion effects are sufficiently strong to prevent discontinuities in the charged species densities, it is found unnecessary to use the upwinded discretization stencils for the potential equation proposed in Ref.\ \cite{jcp:2011:parent} (which are needed to prevent even-odd node decoupling of the potential in the vicinity of discontinuities). Rather, all terms within the potential equation are discretized with centered stencils, whether the potential equation is based on Gauss's law or the generalized Ohm's law. The discretized set of transport equations is converged by minimizing the residual vector through a block-implicit ADI algorithm \cite{jcp:1980:briley}. To improve the convergence rates, it is found beneficial to linearize the Townsend ionization chemical source terms under the condition of constant current and to partially linearize the source terms added to enforce Gauss's law within the proposed equations (see Ref.\ \cite{jcp:2014:parent} for details). The potential equation, whether based on Ohm's law or on Gauss's law, is not solved in coupled form with the species transport equations. Rather, as in other plasma solvers, it is solved independently through a scalar approximate factorization algorithm, which is a more robust integration strategy (see Ref.\ \cite{jcp:2011:parent}). Thus, the convergence procedure consists of performing one iteration in pseudotime of the charged species transport equations while keeping the potential constant, followed by one or more iterations in pseudotime of the potential while keeping the charged species densities constant. Such a procedure is repeated until the residuals of the potential and of all species transport equations are sufficiently reduced. Through trial and error, it is found that specifying the pseudotime step as follows for the potential equation yields optimal convergence rates:
%
\begin{equation}
\Delta \tau_\phi = \left\{ \begin{array}{ll}
\mfd L_{\rm c}\cdot \raisebox{-1.5ex}{$\stackrel{\stackrel{\scriptstyle 3}{\textstyle \rm min}}{\scriptstyle i=1}$} \left(\frac{\Delta x_i}{\sigma_{\rm ref}+\sigma}\right) & \textrm{for the potential equation based on Ohm's law} \alb
\mfd L_{\rm c}\cdot \raisebox{-1.5ex}{$\stackrel{\stackrel{\scriptstyle 3}{\textstyle \rm min}}{\scriptstyle i=1}$} \left({\Delta x_i}\right) & \textrm{for the potential equation based on Gauss's law} \alb
\end{array} \right.
\end{equation} % As well, either for the proposed or conventional charged species transport equations, the pseudotime step is set to the same value for all nodes and equal to the highest value that satisfies the following condition on all inner nodes: % \begin{equation} \Delta \tau_N \le {\rm CFL}\cdot \raisebox{-1.5ex}{$\stackrel{\stackrel{\scriptstyle 3}{\textstyle \rm min}}{\scriptstyle i=1}$} \left(\mfd\frac{\Delta x_i}{a_{\rm ref}+\mu_{\rm e} \left|\vec{E}_i-\vec{E}^\prime_i\right|}\right)~~~~\textrm{for all inner nodes} \end{equation} % where $\Delta x_i$ is the grid spacing in the $i$th dimension, $L_{\rm c}$ is a characteristic length that varies depending on the problem solved, $a_{\rm ref}$ and $\sigma_{\rm ref}$ are some reference sound speed and conductivity typically set to 300~m/s and 0.003~S/m, and CFL is a user-defined parameter that is set to some low value (close to 1) in the initial stages of convergence and then progressively increased. How the parameters affecting the local pseudotime step are varied will be outlined in more detail for each test case. \section{Test Cases} The test cases considered herein consist of simulating electron-beamed ionized air plasmas enclosed by dielectrics and electrodes. Unless otherwise stated, the air plasma includes 6 components ($\rm N_2$, $\rm O_2$, $\rm e^-$, $\rm N_2^+$, $\rm O_2^+$, and $\rm O_2^-$) with the reaction rates as well as the mobilities for each charged species being taken from Ref.\ \cite{jcp:2014:parent}. The plasma chemical reactions include Townsend ionization, electron-beam ionization, ion-ion recombination, electron attachment as well as dissociative recombination. For simplicity and to avoid the handling of the O and N species, we rewrite the dissociative recombination reaction for oxygen $\rm e^- + O_2^+ \rightarrow O+O$ to $\rm e^- + O_2^+ \rightarrow O_2$ and similarly for nitrogen. As can be demonstrated starting from the energy transport equation for each charged species, the electric field appearing within the expressions for the mobilities and within the Townsend ionization rates must be substituted by the \emph{effective electric field} in the respective species reference frame, i.e.\ $|\vec{E}+\vec{V}^k \times \vec{B}|$. A justification for doing so as well as details on how to obtain the effective electric field through an iterative process are given in Appendix A. The characteristic length scale needed to converge the potential equation is varied cyclically for all cases as $L_{\rm c}=0.0003,~0.003,~0.03,~0.0003~{\rm m},...$, while the CFL number and the number of potential subiterations are outlined below on a case by case basis. 
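The pseudotime step selection just described can be summarized in a few lines. The following Python sketch is only illustrative; the function names, argument names, and default values are assumptions and do not correspond to the actual solver implementation:
%
\begin{verbatim}
import numpy as np

def potential_pseudotime_step(dx, L_c, sigma, sigma_ref=0.003,
                              use_ohms_law=True):
    """Local pseudotime step for the potential equation.

    dx    : grid spacings (Delta x_i) along up to three dimensions [m]
    L_c   : characteristic length [m]
    sigma : local conductivity [S/m]
    """
    dx = np.asarray(dx, dtype=float)
    if use_ohms_law:
        return L_c * np.min(dx / (sigma_ref + sigma))
    return L_c * np.min(dx)

def species_pseudotime_step(dx, mu_e, E, E_prime, CFL, a_ref=300.0):
    """Largest pseudotime step satisfying the CFL-like condition on every
    inner node; dx, E, and E_prime are arrays over the inner nodes with
    one column per dimension."""
    drift_speed = a_ref + mu_e * np.abs(np.asarray(E) - np.asarray(E_prime))
    return CFL * np.min(np.asarray(dx, dtype=float) / drift_speed)
\end{verbatim}
%
In practice, the CFL number would start near unity and be ramped up as convergence proceeds, with $L_{\rm c}$ cycled between the values quoted above.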
%
\begin{table*}
\center
\begin{threeparttable}
\tablecaption{Relative error assessment in solving the multicomponent plasma test case (with quasi-neutral region).\tnote{a,b}}
\label{tab:case6-error}
\fontsizetable
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}llllcllll}
\toprule
~ & \multicolumn{9}{c}{Average relative error }\\
\cmidrule{2-10}
~ & \multicolumn{4}{c}{$\mfd\frac{1}{S N_{\rm ref}}\int_{0}^{S}{\left|N_{\rm e}-\left(N_{\rm e}\right)_{\rm exact}\right|}~{\rm d}S$} &~& \multicolumn{4}{c}{$\mfd\frac{1}{S \phi_{\rm ref}}\int_{0}^{S}{\left|\phi-\phi_{\rm exact}\right|}~{\rm d}S$} \\
\cmidrule{2-5}\cmidrule{7-10}
Governing equations & $44^2$\,nodes & $87^2$\,nodes & $173^2$\,nodes & $345^2$\,nodes &~& $44^2$\,nodes & $87^2$\,nodes & $173^2$\,nodes & $345^2$\,nodes \\
\midrule
Proposed & 0.180 & 0.0730 & 0.0250 & 0.0071 &~& 0.205 & 0.0785 & 0.0148 & 0.0047\\
Conventional & 0.201 & 0.0811 & 0.0287 & 0.0081 &~& 0.830 & 0.326 & 0.120 & 0.041 \\
\bottomrule
\end{tabular*}
\begin{tablenotes}
\item[a] The ``exact solution'' is computed with the proposed governing equations and a mesh of $689^2$ nodes.
\item[b] The domain surface area $S$ is set to 9~mm$^2$ while the reference density $N_{\rm ref}$ and reference potential $\phi_{\rm ref}$ are given values of $10^{17}$/m$^3$ and 10~V, respectively.
\end{tablenotes}
\end{threeparttable}
\end{table*}
%
Being less stiff and converging significantly faster are not the only advantages of the proposed equations. The present approach also offers a higher resolution of the converged solution within quasi-neutral regions. This becomes apparent when evaluating the relative error on the potential on several meshes, as is done in Table \ref{tab:case6-error}: for a given mesh size, the error on the potential is typically reduced fourfold when using the proposed equations. Put differently, the present approach can achieve the same accuracy on the potential with a mesh size 4 times smaller, which would result in approximately an eightfold decrease in computational effort (the computing time is proportional to the mesh size times the number of iterations, with the number of iterations typically increasing with the square root of the mesh size in 2D). The relatively low resolution of the solution when using the conventional equations is attributed to the potential equation based on Gauss's law being a strong function of the net charge density, which is itself highly sensitive to small errors in the electron and ion densities in regions of quasi-neutrality. This leads to an amplification of the numerical error when computing the potential and a somewhat low resolution of the converged solution. Because the potential equation based on the generalized Ohm's law is not strongly dependent on the net charge density, the proposed equations are not subject to such an error amplification and lead to a solution that is more accurate for a given mesh size. When simulating a non-neutral plasma in which the electrons are magnetized and in which there is a quasi-neutral region of significant size, it is here seen that the proposed equation set is not only considerably less stiff than the conventional set but also exhibits a higher resolution of the converged solution. Taken together, these advantages lead to a considerable two-hundredfold decrease in computing effort for a desired level of accuracy.
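For reference, the error norm used in Table \ref{tab:case6-error} reduces to a single sum on a uniform mesh. The Python sketch below is a schematic discrete version of that norm; the argument names are placeholders and not part of the solver:
%
\begin{verbatim}
import numpy as np

def average_relative_error(field, field_exact, cell_area, domain_area,
                           reference):
    """Discrete analogue of (1/(S*ref)) * integral |field - exact| dS
    on a uniform mesh where every cell has the same area."""
    diff = np.abs(np.asarray(field, dtype=float)
                  - np.asarray(field_exact, dtype=float))
    return np.sum(diff) * cell_area / (domain_area * reference)

# The table notes quote S = 9 mm^2, N_ref = 1e17 m^-3, and phi_ref = 10 V.
\end{verbatim}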
\subsection{Gas Discharge}
%
\begin{figure}[!b]
\fontsizefigure
\center
\includegraphics[width=3.5\lengthfigure]{setup_discharge.pdf}
\caption{Gas discharge test case; all dimensions in millimeters.}
\label{fig:setup_discharge}
\end{figure}
%
The third test case investigated here consists of a gas discharge between two electrodes with a voltage difference of 800~V and with the boundary conditions and problem setup schematized in Fig.\ \ref{fig:setup_discharge}. The magnetic field is fixed at 0.8~T and is perpendicular to the computational plane. This results in a significant magnetic field effect on the flow properties due to the electron Hall parameter being $0.5$. Because such a test case requires a very high number of iterations to converge when using the conventional equations, and because solving a 6-component plasma chemical model requires significantly more computing effort than a 3-component model, it is here decided to reduce the complexity of the chemical model by solving a reduced set of reactions involving only 3 species. The air plasma here considered is thus composed of one type of positive air ion, one type of neutral air molecule, and electrons, with the chemical reactions and mobilities taken from Ref.\ \cite{jcp:2014:parent}. The voltage difference is sufficient for Townsend ionization to occur near the cathode, and the electron-beam power deposited is sufficiently high for a quasi-neutral region to form near the anode. Such a test case is hence well suited to test the performance of the proposed equations in simulating non-neutral cathode sheaths in the presence of a magnetic field typical of plasma magneto-aerodynamics.
\section{Conclusions}
A new set of equations is here presented to simulate weakly-ionized plasmas in a magnetic field using the drift-diffusion fluid model. The proposed set of equations consists of obtaining the potential from the generalized Ohm's law rather than from Gauss's law, as is the case in the conventional set of equations. To ensure that Gauss's law is satisfied in non-neutral regions, some source terms are added to the ion transport equations. Because the proposed equations are obtained from the same physical model as the conventional equations without introducing new simplifications, they yield the same exact solution in both non-neutral and quasi-neutral plasma regions (including sheaths near the surfaces as well as regions with significant ambipolar diffusion and drift). The present equation set is nonetheless advantaged over the conventional set by not being subject to high stiffness when the plasma includes one or more zones of quasi-neutrality. Reducing the stiffness of the system permits larger integration steplengths to be used, which leads to a significant decrease in the number of iterations needed to reach convergence. Several test cases reveal that the integration steplength can be increased by 20 times or more, typically leading to a thirtyfold decrease in the iteration count whenever a quasi-neutral region of substantial size forms within the plasma. Such gains in convergence acceleration are observed to be independent of the size of the mesh, of the current magnitude, and of the strength of the magnetic and electric fields. Another advantage of the proposed equations is that they yield a higher resolution of the converged solution within (or in the vicinity of) quasi-neutral regions when the externally-applied magnetic field is significant.
Indeed, several grid convergence studies of plasmas with large quasi-neutral regions show that the electric field potential is subject to excessive error when obtained from a potential equation based on Gauss's law. This is attributed to the latter amplifying the error associated with the charged species densities when the net charge tends towards zero. Such an error amplification is avoided when obtaining the potential from the generalized Ohm's law because the latter is not strongly dependent on the net charge density, which leads to the conventional set of equations typically requiring 5 times more nodes to yield the same accuracy as the proposed set within or near quasi-neutral regions. Not only is the present set of equations less stiff and hence faster to converge, but it also results in a more accurate solution on a given mesh. When combined, these gains in resolution and convergence acceleration result in a one-hundredfold or more increase in computational efficiency for typical steady and unsteady problems involving a quasi-neutral region of substantial size. On the other hand, should the plasma be entirely non-neutral and not include zones of quasi-neutrality, the proposed equations are observed to converge as rapidly and to exhibit as high a resolution as the conventional set. Because the proposed governing equations yield significant computational advantages with no associated drawback, they are unconditionally recommended for the numerical simulation of weakly-ionized plasmas in the presence of a magnetic field through the drift-diffusion model.
\section*{Acknowledgment}
This research was supported by a 2-year Pusan National University Research Grant.
%% The Appendices part is started with the command \appendix;
%% appendix sections are then done as normal sections
% \appendix
\appendix
\section{Effective Electric Field in Species Reference Frame}
\label{AppendixA}
A justification is here given for why the effective electric field must be determined in the electron reference frame rather than in the neutrals reference frame when computing the electron mobility and the Townsend ionization rates. Let us start from the electron energy transport equation as taken from Ref.\ \cite[page 34]{book:1991:raizer}:
%
\begin{equation}
\frac{\partial }{\partial t} \left( \frac{3}{2} N_{\rm e} k_{\rm B} T_{\rm e} \right) + \sum_{i=1}^\nd \frac{\partial }{\partial x_i} \left(\frac{5}{2} N_{\rm e} k_{\rm B} T_{\rm e} \vec{V}_i^{\rm e} \right) - \sum_{i=1}^\nd \frac{\partial }{\partial x_i} \kappa_{\rm e} \frac{\partial T_{\rm e}}{\partial x_i} = \vec{F}^{\rm e}\cdot \vec{V}^{\rm e} - \frac{3}{2} N_{\rm e} k_{\rm B} T_{\rm e} \zeta_{\rm e} \nu_{\rm m} - Q_{\rm ei}
\end{equation}
%
where $Q_{\rm ei}$ represents the amount of energy the electrons lose in creating new electrons through Townsend ionization (that is, the product between the ionization potential and the number of electrons per unit volume per unit time created by electron-impact processes), $\kappa_{\rm e}$ is the thermal diffusivity, $\nu_{\rm m}$ is the collision frequency, $\vec{F}^{\rm e}$ is the force per unit volume acting on the electrons due to electromagnetic fields in the electron reference frame, and $\zeta_{\rm e}$ is a term that is a function of the effective electric field and can be determined in a similar manner as in Ref.\ \cite{misc:1995:boeuf}. Consider the energy transport equation in the ``local approximation'' and neglect the unsteady, convective, and diffusive terms.
Then, noting that the collision frequency can be written as follows: % \begin{equation} \nu_{\rm m}=\frac{e}{m_{\rm e}\mu_{\rm e}} \end{equation} % we obtain the following expression for the electron temperature: % \begin{equation} T_{\rm e} = \frac{2 m_{\rm e} \mu_{\rm e}}{3 e N_{\rm e} k_{\rm B} \zeta_{\rm e} } \left( \vec{F}^{\rm e}\cdot \vec{V}^{\rm e} - Q_{\rm ei}\right) \label{eqn:appendix:Te} \end{equation} % where the force per unit volume acting on the electrons in the electron reference frame due to electromagnetic fields corresponds to: % \begin{equation} \vec{F}^{\rm e} = -eN_{\rm e}\left(\vec{E} + \vec{V}^{\rm e} \times \vec{B}\right) \label{eqn:appendix:Fe} \end{equation} % and where, in the ``local approximation'', the electron velocity can be taken from Eq.\ (\ref{eqn:Vvector}) neglecting the pressure gradients: % \begin{equation} \vec{V}^{\rm e}=\vec{V}^{\rm n} - \mu_{\rm e} \left(\vec{E}+\vec{V}^{\rm e} \times \vec{B}\right) \label{eqn:appendix:Ve} \end{equation} % Substitute Eq.\ (\ref{eqn:appendix:Ve}) and Eq.\ (\ref{eqn:appendix:Fe}) in Eq.\ (\ref{eqn:appendix:Te}): % \begin{equation} T_{\rm e} = \frac{2 m_{\rm e} \mu_{\rm e}}{3 e N_{\rm e} k_{\rm B} \zeta_{\rm e} } \left( -e N_{\rm e} \left(\vec{E} + \vec{V}^{\rm e} \times \vec{B}\right)\cdot \left(\vec{V}^{\rm n} - \mu_{\rm e} \left(\vec{E}+\vec{V}^{\rm e} \times \vec{B}\right)\right) - Q_{\rm ei}\right) \end{equation} % Because the magnitude of the neutrals velocity can be assumed small compared to the magnitude of the electron velocity, the electron temperature becomes: % \begin{equation} T_{\rm e} = \frac{2 m_{\rm e} \mu_{\rm e}}{3 e N_{\rm e} k_{\rm B} \zeta_{\rm e} } \left( N_{\rm e} \mu_{\rm e} e |\vec{E}+\vec{V}^{\rm e}\times \vec{B}|^2 - Q_{\rm ei}\right) \end{equation} % From the latter, it is clear that in the local approximation the electron temperature is a function of the electric field in the electron reference frame $|\vec{E}+\vec{V}^{\rm e}\times \vec{B}|$, not of the electric field in the laboratory frame $|\vec{E}|$. Because the electron mobility and the Townsend ionization rates depend on the electron temperature, and because the electron temperature is a function of the electric field in the electron reference frame, it follows that the electron mobility and Townsend ionization rates should be determined from an effective electron electric field as follows: % \begin{equation} E_{\rm eff}^{\rm e}=|\vec{E}^{\rm e}|=|\vec{E}+\vec{V}^{\rm e}\times\vec{B}| \label{eqn:Ee_refframe} \end{equation} % where the electron velocity $\vec{V}^{\rm e}$ can be obtained from Eq.\ (\ref{eqn:V}): % \begin{equation} \vec{V}^{\rm e}_i = \vec{V}^{\rm n}_i+\sum_{j=1}^\nd s_{\rm e} \wtilde{\mu}^{\rm e}_{ij} \vec{E}_j^{\rm n} - \sum_{j=1}^\nd \frac{\wtilde{\mu}^{\rm e}_{ij}}{|C_{\rm e}| N_{\rm e}} \frac{\partial P_{\rm e}}{\partial x_j} \label{eqn:Ve_refframe} \end{equation} % Thus, to find $E_{\rm eff}^{\rm e}$, proceed iteratively: (i) Find $\vec{V}^{\rm e}$ from Eq.\ (\ref{eqn:Ve_refframe}), (ii) Find $E_{\rm eff}^{\rm e}=|\vec{E}^{\rm e}|$ from Eq.\ (\ref{eqn:Ee_refframe}), and (iii) update $\wtilde{\mu}^{\rm e}$ using the latest value of $E_{\rm eff}^{\rm e}$ found in step (ii). Repeat steps (i)-(iii) until $E_{\rm eff}^{\rm e}$ is converged. It is sometimes necessary to under-relax the update of the effective electric field $E_{\rm eff}^{\rm e}$ in the above iterative process in order to prevent some convergence hang, with the relaxation factor typically given a value of 0.9. 
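The iterative procedure of steps (i)-(iii) amounts to a damped fixed-point iteration on $E_{\rm eff}^{\rm e}$. The Python/NumPy sketch below is schematic only: the mobility_model callable and the simplified velocity closure stand in for the evaluation of $\wtilde{\mu}^{\rm e}$ and Eq.\ (\ref{eqn:Ve_refframe}), and the relaxation is assumed to follow the same convention as for the boundary nodes (a factor of 1 would leave the value unaltered):
%
\begin{verbatim}
import numpy as np

def effective_electron_field(E, B, V_n, grad_P_e, N_e, C_e, mobility_model,
                             relaxation=0.9, tol=1.0e-6, max_iter=100):
    """Damped fixed-point iteration for E_eff = |E + V_e x B|.

    mobility_model(E_eff) is assumed to return the 3x3 electron mobility
    tensor used to evaluate the electron drift velocity."""
    E = np.asarray(E, dtype=float)
    E_eff = np.linalg.norm(E)            # initial guess: lab-frame field
    for _ in range(max_iter):
        mu_e = mobility_model(E_eff)     # step (iii) of the previous pass
        # step (i): schematic drift-diffusion electron velocity
        V_e = V_n - mu_e @ E - (mu_e @ grad_P_e) / (abs(C_e) * N_e)
        # step (ii): field magnitude in the electron reference frame
        E_eff_new = np.linalg.norm(E + np.cross(V_e, B))
        # under-relaxed update to avoid convergence hangs
        E_eff_next = relaxation * E_eff + (1.0 - relaxation) * E_eff_new
        if abs(E_eff_next - E_eff) < tol * max(abs(E_eff), 1.0):
            return E_eff_next
        E_eff = E_eff_next
    return E_eff
\end{verbatim}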
Similarly, it can be demonstrated that the effective electric field $E_{\rm eff}^k$ needed for the ion mobilities must also be determined in the reference frame of the charged species under consideration (i.e. $E_{\rm eff}^k=|\vec{E}^k|=|\vec{E}+\vec{V}^k\times \vec{B}|$). The electric field in the ion frame of reference can be obtained for each ion species through the use of Eq.\ (\ref{eqn:V}) by following a similar iterative process as outlined above. \bibliographystyle{waflarticle} \bibliography{all} \end{document}
\documentclass[11pt,a4paper]{report} \usepackage{amsmath,amsfonts,amssymb,amsthm,epsfig,epstopdf,titling,url,array} \usepackage{enumitem} \usepackage{changepage} \usepackage{graphicx} \usepackage{caption} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem*{cor}{Corollary} \theoremstyle{definition} \newtheorem{defn}{Definition}[section] \newtheorem{conj}{Conjecture}[section] \newtheorem{exmp}{Example}[section] \newtheorem{exercise}{Exercise}[section] \theoremstyle{remark} \newtheorem*{rem}{Remark} \newtheorem*{note}{Note} \def\changemargin#1#2{\list{}{\rightmargin#2\leftmargin#1}\item[]} \let\endchangemargin=\endlist \begin{document} \section*{Problem} Moe has built an awesome atomic clock. It has one hand that moves at a perfectly constant speed around a 24 hour dial. The problem is he has no way to calibrate it. He asks Joe for advice and Joe asks him what his objective is. He says he wants it to be exactly right as often as possible. Joe thinks for a minute and says, ``That's going to take a lot of electricity.'' Explain what he means. \section*{Solution} Let $s$ be the rotational speed that Moe chooses, expressed in revolutions per day. If Moe can set the speed to \emph{exactly} 1, the clock will be right all the time. Unfortunately, the probability of that is $0$. Now consider what happens when Moe's chosen speed is off by $\delta$ revolutions per day. If $\delta < 0$ and Moe manages to start the clock at exactly the right time, it will fall behind by $|\delta|$ of a revolution every 24 hours, and it will take $1/|\delta|$ days for it to be exactly right again (when it has fallen so far behind that it is once again showing exactly the right time). Somewhat paradoxically, the smaller $|\delta|$ is, the longer this takes, and hence the fewer times the clock is exactly right in any given time interval. The same analysis obviously applies for $\delta > 0$. Now suppose that Moe just lets the hand move as fast as possible. Then it will pass the correct time roughly once per revolution (more precisely, once every $1/(s-1)$ days when $s > 1$, which for large $s$ is only slightly longer than one revolution period). So the faster he can make it go, the more frequently the reading is exactly right. Hence Joe's comment that it is ``going to take a lot of electricity.'' \end{document}
\chapter{Exponential family distributions} \label{chap:stats-appendix} \import{./}{stats-appendix.tex} \chapter{Supplementary material for \autoref{chap:ibhc}} \label{chap:ibhc-appendix} \import{./}{ibhc-appendix.tex} \chapter{Supplementary material for \autoref{chap:nvmp}} \label{chap:nvmp-appendix} \import{./}{nvmp-appendix.tex} \chapter{Supplementary material for \autoref{chap:loracs}} \label{chap:loracs-appendix} \import{./}{loracs-appendix.tex} \chapter{Supplementary material for \autoref{chap:solar}} \label{chap:solar-appendix} \import{./}{solar-appendix.tex}
% Created 2017-02-22 Wed 11:51 % Intended LaTeX compiler: pdflatex \documentclass[presentation]{beamer} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{grffile} \usepackage{longtable} \usepackage{wrapfig} \usepackage{rotating} \usepackage[normalem]{ulem} \usepackage{amsmath} \usepackage{textcomp} \usepackage{amssymb} \usepackage{capt-of} \usepackage{hyperref} \usetheme{Madrid} \author{Clarissa Littler} \date{\today} \title{If all math was computable\ldots{}} \hypersetup{ pdfauthor={Clarissa Littler}, pdftitle={If all math was computable\ldots{}}, pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 24.5.1 (Org mode 9.0.3)}, pdflang={English}} \begin{document} \maketitle \section{Talk} \label{sec:org2e66927} \begin{frame}[label={sec:orge0c899c}]{You wake up} \end{frame} \begin{frame}[label={sec:org6560e10}]{Programming class} \begin{block}{} \end{block} \end{frame} \begin{frame}[label={sec:orgbc548ad}]{Anti-virus} \begin{block}{A virus is a self-replicating program} \end{block} \end{frame} \begin{frame}[label={sec:org66acbfc}]{The new Hello World} \begin{block}{Travelling Salesman} \begin{center} \includegraphics[width=.9\linewidth]{2000px-Hamiltonian_path.svg.png} \end{center} \end{block} \end{frame} \begin{frame}[label={sec:orge85e13e}]{Banking} \begin{block}{Dedicated lines} \begin{center} \includegraphics[width=.9\linewidth]{Telegraph_Cable_Office.jpg} \end{center} \end{block} \end{frame} \begin{frame}[label={sec:org3aa09b8}]{Who uses passwords?} \begin{block}{} \end{block} \end{frame} \begin{frame}[label={sec:org7667ee8}]{Choice} \end{frame} \begin{frame}[label={sec:org7047978}]{Reverse engineering} \end{frame} \begin{frame}[label={sec:org275239c}]{Intellectual property} \begin{center} \includegraphics[width=.9\linewidth]{Felix_3D_Printer_-_Printing_Head.JPG} \end{center} \end{frame} \begin{frame}[label={sec:org0e6d1fc}]{\ldots{}wait?} \begin{block}{} Something occurs to you \end{block} \end{frame} \begin{frame}[label={sec:orge643353}]{Finite programs} \begin{block}{} Finite keyboards + finite length = countable number of programs \end{block} \end{frame} \begin{frame}[label={sec:orgadc44d3}]{Reals and Integers} \begin{block}{All the real numbers between 0 and 1} \begin{center} \begin{tabular}{lllll} \(a_1\) & \(a_2\) & \(a_3\) & \(a_4\) & \ldots{}\\ \(b_1\) & \(b_2\) & \(b_3\) & \(b_4\) & \ldots{}\\ \(c_1\) & \(c_2\) & \(c_3\) & \(c_4\) & \ldots{}\\ \(d_1\) & \(d_2\) & \(d_3\) & \(d_4\) & \ldots{}\\ \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\\ \end{tabular} \end{center} \end{block} \end{frame} \begin{frame}[label={sec:orga059993}]{A table of programs} \end{frame} \begin{frame}[label={sec:org31e98b8}]{If we fuss with the diagonal} \end{frame} \begin{frame}[label={sec:orgd75fb14}]{A program that can't exist} \end{frame} \begin{frame}[label={sec:org2489ca5}]{You wake up} \end{frame} \begin{frame}[label={sec:org75bb20e}]{The Real World: Programming is \alert{hard}} \end{frame} \begin{frame}[label={sec:org88b31cc}]{The Real World: Programming is \alert{finite}} \begin{block}{The finite nature of} \end{block} \end{frame} \begin{frame}[label={sec:org1c34829}]{Thank you} \begin{block}{} {\Huge Thank you for coming out! } \end{block} \end{frame} \end{document}
\chapter{Diagnostics} \label{chap-4} Determining the similarity between a painted distribution in the SNS and a Danilov distribution requires measurement of the 4D transverse phase space distribution. A direct measurement using a slit-scan \cite{Cathey2018} is not possible at high energy, so the distribution must be reconstructed from lower-dimensional projections. In this chapter, we first describe the available hardware to measure such projections in the SNS. We then describe several methods to perform the reconstruction using 1D and/or 2D projections, as well as the implementation of these methods in the SNS. \section{Available hardware and constraints} The phase space measurement must be performed in the ring-target beam transfer (RTBT) section of the SNS after the beam has been accumulated in the ring. The RTBT is effectively an extension of the ring that is traversed once. It is straightforward to vary the number of accumulated turns to measure the beam at any time during injection. The RTBT optics are shown in Fig.~\ref{fig:rtbt_optics} along with the locations of four wire-scanners — WS20, WS21, WS23, and WS24 — near the target. % \begin{figure}[!p] \includegraphics[width=\textwidth]{Images/chapter4/rtbt_optics.pdf} \caption{$\beta$ functions, phase advances, and quadrupole/wire-scanner locations in the second half of the RTBT. The plot ends at the spallation target.} \label{fig:rtbt_optics} \end{figure} % Each wire-scanner consists of three thin tungsten wires mounted on a fork — one wire is vertical, another is horizontal, and another is tilted at a forty-five-degree angle. The 1D projections of the distribution onto axes perpendicular to the wires are generated by moving the fork across the beam and measuring the charge induced by secondary emission from the wires \cite{Henderson2014}. The four wire-scanners can be run in parallel and take approximately five minutes to move across the beam and return to their original positions. Their step size is 3 mm and their dynamic range is approximately 100. They are run at a beam pulse frequency of 1 Hz.\footnote{Each data point corresponds to a separate beam pulse, so the measurement relies on pulse-to-pulse stability.} The measured profiles can be used to estimate $\langle{xx}\rangle$, $\langle{yy}\rangle$, and $\langle{uu}\rangle$, where the $u$ axis is tilted at angle $\phi = \pi/4$ above the $x$ axis, as well as $\langle{xy}\rangle$ from % \begin{equation} \langle{xy}\rangle = \frac{\langle{uu}\rangle - \langle{xx}\rangle \cos^2\phi - \langle{yy}\rangle \sin^2\phi}{2\sin\phi\cos\phi} . \end{equation} % The SNS employs a target imaging system (TIS) to measure the 2D projection of the distribution on the target \cite{Blokland2010}. The SNS target is a stainless steel vessel containing liquid mercury. Its nose is prepared with a Cr:Al2O3 coating that releases light when impacted by the proton beam. Due to the high-radiation environment, the light is collected by a mirror, deflected, and focused onto an optical fiber bundle which guides the light to a camera some distance away. The TIS configuration is shown in Fig.~\ref{fig:tis}. % \begin{figure}[!p] \centering \includegraphics[width=\textwidth]{Images/chapter4/tis1.png} \caption{Configuration of the SNS target imaging system. (From \cite{Blokland2010}.)} \label{fig:tis} \end{figure} % The optics in the RTBT can be modified, but there are constraints. 
The $\beta$ functions should be kept below $\approx$ 30 m/rad in the wire-scanner region and below $\approx$ 100 m/rad closer to the target to avoid excess beam loss. At the target, it is best to keep the $\beta$ functions near their default values of $\beta_x \approx$ 60 m/rad and $\beta_y \approx$ 6 m/rad to satisfy peak density and beam size requirements on the target. In addition to these constraints, quadrupoles in the wire-scanner region share power supplies: there is a horizontal group \{QH18, QH20, QH22, QH24\} and a vertical group \{QV19, QV21, QV23, QV25\}. The last five magnets — QH26, QV27, QH28, QV29, and QH30 — are individually controlled. \section{4D phase space reconstruction from 1D projections}\label{sec:Phase space reconstruction from 1D projections} \subsection{Method description} The covariance matrix $\bm{\Sigma}$ can be reconstructed from 1D projections \cite{book:Minty2003, Woodley2000, Prat2014}. We seek to reconstruct $\bm{\Sigma}$ at position $a$ by measuring $\langle{xx}\rangle$, $\langle{yy}\rangle$ and $\langle{xy}\rangle$ at position $b$, downstream of $a$. Assuming linear transport, the two covariance matrices are related by % \begin{equation} \bm{\Sigma}_b = \mathbf{M} \bm{\Sigma}_a \mathbf{M}^T, \end{equation} % where $\mathbf{M}$ is the linear transfer matrix from $a$ to $b$. We repeat the measurement at least four times with different transfer matrices — either by changing the measurement location or by changing the machine optics — and write % \begin{equation} \begin{bmatrix} {\langle{xx}\rangle}^{(1)} \\ {\langle{xy}\rangle}^{(1)} \\ {\langle{yy}\rangle}^{(1)} \\ {\langle{xx}\rangle}^{(2)} \\ {\langle{xy}\rangle}^{(2)} \\ {\langle{yy}\rangle}^{(2)} \\ {\langle{xx}\rangle}^{(3)} \\ {\langle{xy}\rangle}^{(3)} \\ {\langle{yy}\rangle}^{(3)} \\ \vdots \end{bmatrix}_b = \mathbf{A} \begin{bmatrix} \langle{xx}\rangle \\ \langle{xx'}\rangle \\ \langle{xy}\rangle \\ \langle{xy'}\rangle \\ \langle{x'x'}\rangle \\ \langle{x'y}\rangle \\ \langle{x'y'}\rangle \\ \langle{yy}\rangle \\ \langle{yy'}\rangle \\ \langle{y'y'}\rangle \\ \end{bmatrix}_a . \end{equation} % The superscripts represent the measurement index. The transpose of the coefficient matrix $\mathbf{A}$ for a single measurement is % \begin{equation} \mathbf{A}^T = \begin{bmatrix} M_{11}M_{11} & M_{11}M_{31} & M_{31}M_{31} \\ 2M_{11}M_{12} & M_{12}M_{31} + M_{11}M_{32} & 2M_{31}M_{32} \\ 2M_{11}M_{13} & M_{13}M_{31} + M_{11}M_{33} & 2M_{31}M_{33} \\ 2M_{11}M_{14} & M_{14}M_{31} + M_{11}M_{34} & 2M_{31}M_{34} \\ M_{12}M_{12} & M_{12}M_{32} & M_{32}M_{32} \\ 2M_{12}M_{13} & M_{13}M_{32} + M_{12}M_{33} & 2M_{32}M_{33} \\ 2M_{12}M_{14} & M_{14}M_{32} + M_{12}M_{34} & 2M_{32}M_{34} \\ M_{13}M_{13} & M_{13}M_{33} & M_{33}M_{33} \\ 2M_{13}M_{14} & M_{14}M_{33} + M_{13}M_{34} & 2M_{33}M_{34} \\ M_{14}M_{14} & M_{14}M_{34} & M_{34}M_{34} \end{bmatrix} \end{equation} % where $M_{ij}$ is the $i$,$j$ element of the transfer matrix for that measurement. The system is solved using linear least squares (LLSQ). The measurement has a geometric interpretation, illustrated in Fig.~\ref{fig:ws_emittance_measurement} for the 2D case. % \begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter4/ws_emittance_measurement2.png} \caption{Illustration of 2D emittance measurement.} \label{fig:ws_emittance_measurement} \end{figure} % Each measured beam size defines two lines in the $x$-$x'$ plane at $b$; when the lines are transported back to $a$, their intersection bounds the phase space ellipse. 
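Algebraically, the fit amounts to stacking three rows of the coefficient matrix per measurement and solving the resulting overdetermined system. The following Python/NumPy sketch illustrates the construction; it is not the control-room application code, the transfer matrices are assumed to be known $4\times4$ arrays, and the moment ordering reproduces the vector written above:
%
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

# The 10 independent moments <u_j u_k> of the 4D covariance matrix in the
# coordinate order (x, x', y, y'), listed with j <= k.
MOMENT_INDICES = list(combinations_with_replacement(range(4), 2))

def coefficient_rows(M):
    """Rows relating the covariance matrix at the reconstruction point to
    the measured <xx>, <xy>, and <yy> downstream of transfer matrix M;
    equivalent to the coefficient matrix A written above."""
    rows = []
    for r1, r2 in [(0, 0), (0, 2), (2, 2)]:      # <xx>, <xy>, <yy>
        row = []
        for j, k in MOMENT_INDICES:
            coeff = M[r1, j] * M[r2, k]
            if j != k:
                coeff += M[r1, k] * M[r2, j]
            row.append(coeff)
        rows.append(row)
    return rows

def reconstruct_covariance(transfer_matrices, measured_moments):
    """Least-squares estimate of the 4x4 covariance matrix; measured_moments
    holds one (<xx>, <xy>, <yy>) triplet per transfer matrix."""
    A = np.vstack([coefficient_rows(M) for M in transfer_matrices])
    b = np.ravel(measured_moments)
    moments, *_ = np.linalg.lstsq(A, b, rcond=None)
    Sigma = np.zeros((4, 4))
    for value, (j, k) in zip(moments, MOMENT_INDICES):
        Sigma[j, k] = Sigma[k, j] = value
    return Sigma
\end{verbatim}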
In the general case, each measurement at $b$ defines a 2D surface in 4D phase space and the intersection of these surfaces at $a$ bounds the phase space ellipsoid. \subsection{Implementation in the SNS} To perform the wire scans, the beam is set to a pulse frequency of 1 Hz, and the beam loss monitors in the wire-scanner region are masked due to the higher-than-normal losses when the wires cross the beam core. Wire-scanner data acquisition is performed by the Profile Tools and Analysis (PTA) application. After the four wire-scanners complete their scan, a time-stamped file is produced containing the measured profiles. The four wire-scanners produce four equations, exactly determining the cross-plane moments, so no optics changes are needed in principle; however, additional measurements should reduce the error. In the 2D case, it is typically recommended to space $n$ measurements by $\pi / n$ in phase advance \cite{book:Minty2003}. This may be due to the geometric interpretation of Fig.~\ref{fig:ws_emittance_measurement}: in normalized phase space, the rotation angle of the measurement lines is equivalent to the phase advance and the lines are evenly spaced around the phase space ellipse. The four wire-scanners in the RTBT are already somewhat evenly spaced in phase advance, and it was determined that a 30$\degree$ window around each wire-scanner would provide sufficient coverage. Due to the shared power supplies of the quadrupoles in the wire-scanner region, there is limited control of the phase advances between the wire-scanners. We instead vary the phase advances from QH18 (the first varied quadrupole) to WS24 (the last wire-scanner), which changes the phase advances at WS20, WS21, and WS23 by similar amounts. To set the phase advances at WS24 while constraining the beam size in the wire-scanner region, two power supplies (eight quadrupoles) upstream of WS24 were varied using an optimizer that minimizes the following cost function: % \begin{equation} C(\mathbf{g}) = \left\Vert{\tilde{\bm{\mu}} - \bm{\mu} }\right\Vert^2 + \epsilon \left\Vert \Theta\left( \tilde{\bm{\beta}}_{max} - \bm{\beta}_{max} \right) \right\Vert^2 . \end{equation} % The quadrupole field strengths are contained in the vector $\mathbf{g}$. The calculated and desired phase advances at WS24 are $\bm{\mu} = (\mu_x, \mu_y)$, and $\tilde{\bm{\mu}} = (\tilde{\mu}_x, \tilde{\mu}_y)$, respectively. The maximum calculated and allowed $\beta$ functions in the wire-scanner region are $\bm{\beta}_{max} = (\beta_{x_{max}}, \beta_{y_{max}})$ and $\tilde{\bm{\beta}}_{max} = (\tilde{\beta}_{x_{max}}, \tilde{\beta}_{y_{max}})$, respectively. $\Theta$ is the Heaviside step function. Finally, $\epsilon$ is a constant.\footnote{We are assuming that the beam is approximately matched to the lattice optics so that the calculated phase advances are close to the true phase advances.} After the model optics are computed, the live quadrupole settings must be changed. The SNS employs a machine protection system (MPS) that will cause the machine to trip if the RTBT quadrupole strengths wander outside a certain window, so this window is extended beforehand. Additionally, the MPS will activate if the fractional change in field strength is too large; to solve this problem, the field strength is changed in small steps. A GUI application to perform the above tasks was developed in the OpenXAL framework for use in the SNS control room. 
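The cost function itself can be prototyped in a few lines before being wired to an optimizer. The Python sketch below is only illustrative: the weight value is hypothetical, and the $\beta$ penalty is implemented as a one-sided excess of the calculated peak $\beta$ over its allowed value, which is how the Heaviside term is interpreted here:
%
\begin{verbatim}
import numpy as np

def optics_cost(phase_adv, phase_adv_target, beta_max, beta_max_allowed,
                eps=1.0e3):
    """Penalty in the spirit of C(g): phase-advance error at WS24 plus a
    one-sided penalty on the peak beta functions in the wire-scanner
    region (active only when the calculated value exceeds the limit)."""
    phase_term = np.sum((np.asarray(phase_adv_target)
                         - np.asarray(phase_adv)) ** 2)
    excess = np.clip(np.asarray(beta_max) - np.asarray(beta_max_allowed),
                     0.0, None)
    return phase_term + eps * np.sum(excess ** 2)
\end{verbatim}
%
In the application, the quadrupole strengths $\mathbf{g}$ are varied by the optimizer and the model optics are recomputed at each evaluation; that machinery is omitted from the sketch.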
In the first pane of the application, the user can set the phase advances at WS24 and view the model optics and phase advances throughout the RTBT. In the second pane, the user can load wire-scanner output files and choose the reconstruction location. These files contain the wire-scanner profiles, RMS parameters, and Gaussian fit parameters. They also contain an integer that defines the machine state at the time of the measurement. The application reads this number, synchronizes the model with the machine state, and computes the transfer matrices from the wire-scanners to the reconstruction location. The RMS moments and transfer matrices are then used to reconstruct the covariance matrix. The resulting beam parameters are printed and compared to the model lattice parameters. The 2D projections of the covariance ellipsoid are plotted along with the measurement lines, with the option to view in normalized coordinates. \subsection{Measurement of a production beam} The multi-optics method was tested on a fully accumulated production beam. The phase advances at WS24 were varied in a 30$\degree$ range over ten steps: the first half of the scan held the vertical phase advance fixed while varying the horizontal phase advance, and the second half of the scan held the horizontal phase advance fixed while varying the vertical phase advance. The result of the reconstruction is shown in Fig.~\ref{fig:prod_meas} for a location just before QH18. % \begin{figure}[!p] \centering \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=\textwidth]{Images/chapter4/prod_meas_lines.pdf} \end{subfigure} \par\medskip \begin{subfigure}{0.6\textwidth} \centering \begin{tabular}{lll} \small\textbf{Parameter} & \small\textbf{Measurement} & \small\textbf{Model} \\ \midrule \small$\beta_x$ [m/rad] & \small22.06 $\pm$ 0.29 & \small22.00 \\ \small$\beta_y$ [m/rad] & \small4.01 $\pm$ 0.02 & \small3.81 \\ \small$\alpha_x$ & \small2.33 $\pm$ 0.04 & \small2.37 \\ \small$\alpha_y$ & \small-0.49 $\pm$ 0.01 & \small-0.60 \\ \small$\varepsilon_1$ [mm~mrad] & \small33.02 $\pm$ \small0.05 & - \\ \small$\varepsilon_2$ [mm~mrad] & \small25.67 $\pm$ \small1.03 & - \\ \small$\varepsilon_x$ [mm~mrad] & \small32.85 $\pm$ \small0.05 & - \\ \small$\varepsilon_y$ [mm~mrad] & \small25.87 $\pm$ \small0.12 & - \\ \end{tabular} \end{subfigure} \par\medskip \caption{Reconstructed beam parameters and graphical output from a multi-optics emittance measurement of a production beam.} \label{fig:prod_meas} \end{figure} % The best-fit ellipses in the $x$-$x'$ and $y$-$y'$ planes are normalized by the reconstructed Twiss parameters. The uncertainties in the beam parameters were calculated by propagating the standard deviations of the ten reconstructed moments obtained from the LLSQ estimator.\footnote{See Appendix A of \cite{Faus-Golfe2016}.} The reconstructed Twiss parameters are close to the model parameters computed from the linear transfer matrices of the ring and RTBT, showing that the beam is matched. The intrinsic emittances are almost equal to the apparent emittances, showing that there is very little cross-plane correlation in the beam. This is expected for a production beam. \subsection{Sensitivity to errors} A comprehensive study of errors in the multi-optics 4D emittance measurement was completed at the SwissFEL Injector Test Facility (SITF) by Prat and Aiba in \cite{Prat2014}. 
They considered errors in the measured moments, quadrupole field and alignment errors, beam energy errors, beam mismatch at the reconstruction point, and dispersion/chromaticity \cite{Mostacci2012}, concluding that the multi-optics measurement remained accurate, reporting $< 5\%$ uncertainty in the intrinsic emittances. We initially performed similar studies in PyORBIT using envelope tracking to estimate the reconstruction errors in the RTBT, also concluding that the method should remain accurate \cite{Hoover2021-IPAC}. Space charge forces, which can render the method invalid for high-perveance beams \cite{Anderson2002}, can be neglected: the space charge tune shift in the ring is around 3\%, and the distance between the reconstruction and measurement locations is much smaller than the length of the ring. In summary, the multi-optics emittance measurement is feasible in the SNS. The only downside is the long measurement time, for the following reasons. First, we are not only interested in the beam emittances at a single time but are also interested in the growth and evolution of the emittances throughout accumulation. For example, it could be possible for $\varepsilon_2$ to remain small (as desired) in the first half of accumulation before growing to a much larger value in the second half of accumulation. Measurement of this emittance evolution would convey valuable information about the beam dynamics and allow for qualitative comparison with computer simulation. Second, it would be beneficial to quickly evaluate various machine states, the best of which will be unknown during initial experiments. Finally, a practical point: the time reserved for accelerator physics (AP) experiments at the SNS is limited, and the setup for initial experiments will be much longer than typical AP experiments for reasons discussed in Chapter \ref{chap-5}. Thus, the fixed-optics method — in which only four profiles are used in the reconstruction — is preferred.\footnote{An additional benefit of the fixed-optics method is that there is no potential for steering errors in the RTBT, which could enhance beam loss.} A modest reduction in accuracy for the increase in speed is warranted since weak cross-plane correlations ($\varepsilon_1 \varepsilon_2 \approx \varepsilon_x \varepsilon_y$) are uninteresting for our purposes and do not need to be resolved. Using only one set of optics from the previous scan resulted in a covariance matrix that was not positive-definite, producing imaginary intrinsic emittances. We label this a failed fit. A nonlinear solver \cite{Raimondi1993} or Cholesky decomposition \cite{Agapov2007} can be used to ensure a valid covariance matrix, but we found that the answer depended strongly on the initial guess provided to the solver and on which measurement in the scan was used in the reconstruction. To investigate the failure of the fixed-optics method, a covariance matrix was generated with cross-plane moments set to zero and within-plane moments matched to the lattice optics, then tracked to the wire-scanners using the known transfer matrices. The reconstruction was then performed 1000 times with 3\% random noise added to the moments.\footnote{To select the proper noise level, the wire-scanners were run seven times without changing the machine optics, producing seven estimated moments for each of the three wires on each of the four wire-scanners. The profiles are highly reproducible: for each wire, the maximum difference between any two moments was always less than 3\% of the mean moment. 
Therefore, the noisy moments were sampled within $\pm$ 3\% of the true moments.} Failed fits were discarded. Fig.~\ref{fig:prod_sensitivity} shows that the reconstructed intrinsic emittances in the successful trials are very sensitive to changes to the ``measured" moments. % \begin{figure}[!p] \vspace*{5cm} \includegraphics[width=\textwidth]{Images/chapter4/prod_sensitivity.pdf} \caption{Monte Carlo simulation of a fixed-optics emittance measurement in the RTBT. Trials were repeated until several thousand successful fits were obtained. 3\% noise was assumed for the measured moments. Transfer matrix errors were ignored. The correct values are $\varepsilon_1$ = $\varepsilon_x$ = 32 mm~mrad, $\varepsilon_2$ = $\varepsilon_y$ = 20 mm~mrad.} \label{fig:prod_sensitivity} \vspace*{5cm} \end{figure} % Unlike the apparent emittances, the intrinsic emittances are strongly correlated and are not centered on the correct values. We refer to the difference between the mean emittances and the true emittances as the \textit{bias}.\footnote{The bias and strong correlation between the intrinsic emittances stems from the fact that $\varepsilon_1\varepsilon_2 \le \varepsilon_x\varepsilon_y$. In the case at hand, the reconstructed $\varepsilon_x$ and $\varepsilon_y$ are essentially constant in each trial.} Sensitivity of fixed-optics 4D emittance measurements was observed by Woodley and Emma \cite{Woodley2000} and studied more recently by Agapov, Blair, and Woodley \cite{Agapov2007} as well as Faus-Golfe et al. \cite{Faus-Golfe2016}, all in the context of design studies for a future International Linear Collider (ILC). The motivation for these studies is that a flat beam ($\varepsilon_y \ll \varepsilon_x$) would be ideal for the ILC, and that $\varepsilon_y$ can be minimized by measuring and removing any cross-plane correlation in the beam. Woodley and Emma proposed to abandon the fixed-optics method due to the bias in the reconstructed intrinsic emittances introduced by large errors in the measured moments, suggesting to instead measure the 2D emittance and iteratively minimize $\varepsilon_y$. Agapov, Blair, and Woodley revisited this problem and showed that the linear system used to reconstruct the cross-plane moments can easily become ill-conditioned. The sensitivity of a linear system $\mathbf{A} \mathbf{x} = \mathbf{b}$ to errors in $\mathbf{b}$ is determined by the condition number $C = \Vert \mathbf{A} \Vert \Vert \mathbf{A}^{-1} \Vert$ (or the pseudo-inverse $\mathbf{A}^\dagger = (\mathbf{A}^T\mathbf{A})^{-1} \mathbf{A}^T$ if $\mathbf{A}$ is not square) where $\Vert \dots \Vert$ is a matrix norm \cite{Golub1985}. As an example, consider four wire-scanners that are evenly spaced in phase advance and connected by rotation matrices. Since the transfer matrices are uncoupled, there are three independent subsystems to solve: $x$-$x'$, $y$-$y'$, and the cross-plane moments. Let the coefficient matrices for these subsystems be $\mathbf{A}_{xx}$, $\mathbf{A}_{yy}$, and $\mathbf{A}_{xy}$, respectively, and the condition numbers be $C_{xx}$, $C_{yy}$, and $C_{xy}$. Recall that the within-plane moments are overdetermined while the cross-plane moments are exactly determined. Fig.~\ref{fig:fodo_condition_number} plots the inverse of these condition numbers as a function of the wire-scanner spacing. 
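This idealized example is easy to reproduce numerically. The short NumPy sketch below builds the three coefficient matrices for wire-scanners connected by pure rotation matrices and evaluates their condition numbers; it is illustrative only and is not the analysis code used for the figures:
%
\begin{verbatim}
import numpy as np

def rotation(mu):
    """Normalized 2x2 transfer matrix for a phase advance mu."""
    return np.array([[np.cos(mu), np.sin(mu)],
                     [-np.sin(mu), np.cos(mu)]])

def condition_numbers(spacing_x, spacing_y, n_wires=4):
    """Condition numbers of the x-x', y-y', and cross-plane subsystems for
    n_wires wire-scanners evenly spaced in phase advance."""
    A_xx, A_yy, A_xy = [], [], []
    for n in range(n_wires):
        Mx, My = rotation(n * spacing_x), rotation(n * spacing_y)
        A_xx.append([Mx[0, 0]**2, 2 * Mx[0, 0] * Mx[0, 1], Mx[0, 1]**2])
        A_yy.append([My[0, 0]**2, 2 * My[0, 0] * My[0, 1], My[0, 1]**2])
        A_xy.append([Mx[0, 0] * My[0, 0], Mx[0, 0] * My[0, 1],
                     Mx[0, 1] * My[0, 0], Mx[0, 1] * My[0, 1]])
    return tuple(np.linalg.cond(np.array(A)) for A in (A_xx, A_yy, A_xy))

# Equal horizontal and vertical spacings make the cross-plane system
# nearly singular; offsetting the vertical spacing restores conditioning.
for dy in (0.02, 0.10, 0.25):
    C_xx, C_yy, C_xy = condition_numbers(0.3 * np.pi, (0.3 + dy) * np.pi)
    print(f"offset = {dy:.2f} pi : 1/C_xy = {1 / C_xy:.3f}")
\end{verbatim}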
% \begin{figure}[!p] \centering \vspace*{2cm} \includegraphics[width=\textwidth]{Images/chapter4/fodo_condition_number.pdf} \caption{Condition numbers of the coefficient matrices produced by four wire-scanners that are evenly spaced in phase advance and connected by rotation matrices.} \label{fig:fodo_condition_number} \vspace*{2cm} \end{figure} % $C_{xx}$ and $C_{yy}$ approach $\infty$ when the spacing is $\pi/2$ in their respective planes, in which case two pairs of measurements provide degenerate information, while $C_{xy}$ depends on the difference between the phase advances. The pattern will be more complicated for different optics and/or additional wire-scanners. The error and uncertainty in the emittance reconstruction, as well as the number of failed fits, mirrors these condition numbers. Using this framework, Faus-Golfe et al. developed analytical formulas to determine whether a given system can accurately measure the intrinsic emittances. They also suggested that the planned ILC emittance measurement station, which contained four wire-scanners, could be modified to reduce the sensitivity. We performed a similar modification to the RTBT wire-scanner region. To find a new set of optics, the phase advances at WS24 ($\mu_x$, $\mu_y$) were varied in a $90\degree$ window centered on their nominal values ($\mu_{x0}$, $\mu_{y0}$); at each setting, the condition numbers were calculated and the reconstruction was simulated with true emittances $\varepsilon_x = \varepsilon_y = \varepsilon_1 = \varepsilon_2$ = 20 mm~mrad. Failed trials were discarded. The resulting biases and standard deviations of the reconstructed emittances are plotted in Fig.~\ref{fig:rtbt_montecarlo_emittances}. % \begin{figure}[!p] \centering \includegraphics[width=0.9\textwidth]{Images/chapter4/rtbt_montecarlo_emittances.pdf} \caption{Simulated 4D emittance reconstruction errors as a function of the phase advances at WS24.} \label{fig:rtbt_montecarlo_emittances} \end{figure} % Settings that produced no successful trials appear as white cells. The apparent emittances are not displayed because they remained within 1\% of their true values at every optics setting. Modifying the optics so that $\mu_x = \mu_{x0} + 45\degree$, $\mu_y = \mu_{y0} - 45\degree$ reduces the bias to $\approx 7\%$ and the standard deviation to $\approx 5\%$. The fraction of failed fits, which is very large along the diagonal in the figure, is reduced to zero. It is also important to examine the effect of mismatched beam parameters on the accuracy of the reconstruction. Recall that the phase advance is the integral of the inverse of the $\beta$ function. In a periodic system, there is a unique periodic solution for the $\beta$ function, but this is not true in a transfer line such as the RTBT; thus, the phase advance in the RTBT depends on the Twiss parameters at the ring extraction point — the RTBT entrance. All previous phase advance calculations have assumed that the beam Twiss parameters are the same as the ring Twiss parameters at extraction. This is generally a safe assumption since turn-by-turn mismatch oscillations are washed out during painting. It is possible, however, for space charge to effectively modify the ring Twiss parameters, resulting in mismatch when entering the RTBT. This modification is small during production painting, as shown in Fig.~\ref{fig:prod_meas}, but it is expected (from simulations) that more significant mismatch could occur if the space charge density is increased and/or if the beam energy is decreased. 
To examine the effect of mismatch, we varied the initial Twiss parameters at BPM17 in the RTBT and repeated the Monte Carlo trials with the operating point fixed at $\mu_x = \mu_{x0} + 45\degree$, $\mu_y = \mu_{y0} - 45\degree$. There are four parameters: $\alpha_x$, $\alpha_y$, $\beta_x$, and $\beta_y$. We based the range of each parameter on a measurement in which the reconstructed Twiss parameters were different from the nominal Twiss parameters, shown in Table~\ref{tab:mismatch}.
%
\begin{table}[!p]
\centering
\caption{Reconstructed and model Twiss parameters at BPM17 in the RTBT (see Experiment 2 in Chapter \ref{chap-5}).}
\begin{tabular}{lll}
\midrule
\textbf{Parameter} & \textbf{Measured} & \textbf{Model} \\
\midrule
$\beta_x$ [m/rad] & 6.26 & 5.49 \\
$\beta_y$ [m/rad] & 20.82 & 19.25 \\
$\alpha_x$ & -0.89 & -0.78 \\
$\alpha_y$ & 1.17 & 1.91 \\
\midrule
\end{tabular}
\label{tab:mismatch}
\end{table}
%
The beam mismatch is unlikely to exceed these values in future experiments.\footnote{Details about the measurement are left for Chapter \ref{chap-5}.} We therefore varied $\beta_x$ and $\beta_y$ within a $\pm 20\%$ window around their model values, $\alpha_x$ within a $\pm 15\%$ window, and $\alpha_y$ within a $-40\%, +10\%$ window to extend beyond the measured discrepancies, and repeated the Monte Carlo trials for each initial beam, producing a collection of means and standard deviations for the reconstructed intrinsic emittances. The left plot in Fig.~\ref{fig:mismatch} displays the standard deviations and biases for $\varepsilon_1$ (pink) and $\varepsilon_2$ (blue).
%
\begin{figure}[!p]
\centering
\includegraphics[width=\textwidth]{Images/chapter4/mismatch2.pdf}
\caption{Bias and standard deviation of $\varepsilon_1$ (pink) and $\varepsilon_2$ (blue) from simulated reconstructions in the RTBT. In each plot, the collection of points is generated by varying the initial beam Twiss parameters. The true values of the emittances are printed on the top of the figures.}
\label{fig:mismatch}
\end{figure}
%
Although most of the points are clustered near the original bias and standard deviation of 7\% and 5\%, respectively, the bias increases to nearly 15\% in some cases, which may make it difficult to resolve weak cross-plane correlation; however, the measurement should still resolve strong cross-plane correlation. This is demonstrated in the rest of the plots in Fig.~\ref{fig:mismatch}, in which the entire process is repeated with $\varepsilon_1 / \varepsilon_2 > 1$. The bias in the reconstruction quickly decreases — the emittances are clustered around their true values. We conclude that with small modifications to the RTBT optics, the fixed-optics method should be sufficient for fast 4D emittance measurements in the SNS. As detailed in Chapter \ref{chap-5}, such measurements will be needed to evaluate various machine settings within a single study period, especially in initial experiments, as well as to measure the emittance growth during accumulation for qualitative comparison with simulation. The multi-optics method should be used once a promising machine state is found (or if time allows) to reduce the uncertainty.
\subsubsection{Other uses of 1D projections}
To close this section, we mention that there is information to be gained from 1D projections in addition to the root-mean-square reconstruction just described.
First, the measured projections can be compared to the ideal ``half-circle" projections of a uniform density ellipse.\footnote{The best expected case is a uniform density core with small nonlinear tails, the 1D projection of which is distinguishable from a Gaussian curve, but it may be difficult to distinguish intermediate cases with larger tails. The method we employ in the next chapter is to calculate the standard deviation of the measured profile, plot the projections of an ideal Gaussian and uniform density elliptical distribution with the same standard deviation, and visually compare the three curves. More quantitative methods may be used in the future.} Second, the projections can be used to reconstruct the $x$-$x'$ or $y$-$y'$ distribution using the tomographic methods described in the next section; it may be possible to include cross-plane information in the reconstruction using diagonal projections. \section{4D phase space reconstruction from 2D projections} Tomographic methods are well-established for the reconstruction of 2D phase space distributions from 1D projections in transverse phase space \cite{Hock2014} and longitudinal phase space \cite{Evans2014}. The concept has recently been extended to the reconstruction of the 4D transverse phase space, both in theory and in practice \cite{Hock2013b, Wang2019, Wolski2020}. This section begins with a brief discussion of tomography in two dimensions as applied to beam diagnostics, then moves on to describe the accuracy and limitations of several 4D reconstruction algorithms. Finally, the use of tomography to reconstruct the 4D phase space distribution from beam images on the SNS target is discussed. \subsection{Tomography for beam diagnostics} Several algorithms exist to reconstruct 2D images from 1D projections, such as filtered back-projection (FBP), algebraic reconstruction (ART) \cite{Slaney1988}, and maximum entropy (MENT) \cite{Minerbo1979}. Projections of an object are normally obtained by illuminating the object at different angles. Although the measured projections of a 2D phase space distribution ($x$-$x'$) are always along the $x$ axis, we can take advantage of the known transfer matrix $\mathbf{M}$ between the measurement location $b$ and the reconstruction location $a$ to obtain the projections at different angles in $x$-$x'$ at the reconstruction location. The measured projection of the $x$-$x'$ distribution at $b$ is % \begin{equation} p_b(x_b) = \int_{-\infty}^{\infty} f(x_b, x'_b) dx'_b. \end{equation} % When the distribution is transported back to $a$, the projection will be along axis $\tilde{x}_a$, which is rotated at angle $\theta$ above the $x_a$ axis. The projection angle is computed from the transfer matrix \cite{Hock2013a}: % \begin{equation}\label{eq:proj_trans_1} \tan\theta = \frac{M_{12}}{M_{11}}. \end{equation} % The distance along the projection axis will be scaled: % \begin{equation}\label{eq:proj_trans_2} r = \frac{x_b}{\tilde{x}_a} = \sqrt{M_{11}^2 + M_{12}^2}. \end{equation} % The projection must then be scaled to conserve its area. The projections at $a$ and $b$ are related by % \begin{equation}\label{eq:proj_trans_3} p_a(\tilde{x}_a) = r p_b(r \tilde{x}_a). \end{equation} % Standard tomography algorithms can be applied to the scaled projections. Reconstructing the distribution in normalized phase space can reduce errors \cite{Hock2011}. Recall the normalization matrix $\mathbf{V}$ from Eq.~\eqref{eq:CS_parameterization}. 
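As a concrete illustration, the change of variables in Eqs.~\eqref{eq:proj_trans_1}--\eqref{eq:proj_trans_3} can be written in a few lines of Python; the function and argument names are ours and do not correspond to any existing reconstruction code.
\begin{verbatim}
import numpy as np

def transform_projection(x_b, profile_b, M):
    """Map a 1D profile measured at location b to a projection at location a.

    x_b, profile_b : coordinates and values of the measured profile at b.
    M              : 2x2 transfer matrix (x-x' block) from a to b.
    """
    theta = np.arctan2(M[0, 1], M[0, 0])     # tan(theta) = M12 / M11
    r = np.hypot(M[0, 0], M[0, 1])           # scaling of the projection axis
    x_tilde_a = np.asarray(x_b) / r          # x_b = r * x_tilde_a
    profile_a = r * np.asarray(profile_b)    # p_a(x_tilde_a) = r p_b(r x_tilde_a)
    return theta, x_tilde_a, profile_a
\end{verbatim}
The normalization recalled above amounts to applying the same three relations to the matrix $\mathbf{M}\mathbf{V}$ instead of $\mathbf{M}$, as shown next.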
Note that
%
\begin{equation} \begin{aligned} \mathbf{x}_b = \mathbf{M} \mathbf{x}_a = \mathbf{M} \mathbf{V} (\mathbf{V}^{-1} \mathbf{x}_a) , \end{aligned} \end{equation}
%
where $\mathbf{V}$ depends on the Twiss parameters at $a$. Eq.~\eqref{eq:proj_trans_1}, Eq.~\eqref{eq:proj_trans_2}, and Eq.~\eqref{eq:proj_trans_3} can be applied to the matrix $\mathbf{M} \mathbf{V}$ to obtain the projections in the normalized phase space at $a$. After the image is reconstructed, the true distribution can be obtained by transforming the grid coordinates using $\mathbf{V}$ and interpolating at the transformed coordinates. Any Twiss parameters can be used to form $\mathbf{V}$; if the Twiss parameters are matched to the distribution, the rotation angle of the projection will be the phase advance from $a$ to $b$, and the reconstructed distribution will be circular in the normalized phase space.

\subsection{4D reconstruction as a series of 2D reconstructions}

Recent work by Hock et al. reduces 4D reconstruction to a series of 2D reconstructions when the $x$-$y$ projections are available \cite{Hock2013a}. The method, which we refer to as Hock's method, is as follows. Assume that the rotation angles in $x$-$x'$ and $y$-$y'$ can be independently controlled. Let the angles in $x$-$x'$ be \{$\theta_{x_1}$, $\dots$, $\theta_{x_k}$, $\dots$, $\theta_{x_K}$\} and the angles in $y$-$y'$ be \{$\theta_{y_1}$, $\dots$, $\theta_{y_l}$, $\dots$, $\theta_{y_L}$\}. The projections are stored in an array $\mathbf{S}$, where $\mathbf{S}_{i,j,k,l}$ is the intensity at point ($x_i$, $y_j$) on the screen for angles $\theta_{x_k}$, $\theta_{y_l}$. Consider a single row of an image, fixing $y_j$, which gives a 1D projection of a slice of the distribution onto the $x$-axis at the screen. If we fix $\theta_{y}$ and vary $\theta_{x}$, we produce a set of 1D projections that can be used to reconstruct the $x$-$x'$ phase space distribution for this slice using any 1D $\rightarrow$ 2D reconstruction method. This is repeated for each $y_j$ and $\theta_{y_l}$. Now, for each $x$ and $x'$ in the reconstruction grid, we have a set of projections of the $y$-$y'$ distribution onto the $y$-axis at the screen for different $\theta_{y}$; thus, for each $x$ and $x'$ in the reconstruction grid, we can reconstruct the $y$-$y'$ distribution. This completes the reconstruction. To test the method, the 600,000-particle distribution from Fig.~\ref{fig:Holmes} was used. In \cite{Hock2013a}, filtered back-projection (FBP) was used for the 2D reconstructions. FBP requires many projections — something that is not always possible in the context of beam diagnostics. Simultaneous algebraic reconstruction (SART) is a possible alternative when the number of projections is small. The accuracy of SART will depend on the number of projections and the range of projection angles. Fig.~\ref{fig:tomo_sim_art2D} demonstrates this by reconstructing the $y$-$y'$ distribution from 1D projections as these numbers are varied.
%
\begin{figure}[!p] \centering \vspace*{3.0cm} \includegraphics[width=0.7\textwidth]{Images/chapter4/tomo_sim_art2d.png} \caption{SART accuracy as a function of the number of projections and the range of projection angles.} \label{fig:tomo_sim_art2D} \vspace*{3.0cm} \end{figure}
%
It appears that if the projection angles are distributed over a significant range, the accuracy does not improve much beyond 10--15 projections. As discussed later, 15 projections are likely near the maximum possible in the SNS for each 2D reconstruction if using Hock's method.
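To make the bookkeeping of Hock's method explicit, the following Python sketch implements the two-stage loop, using the SART solver from scikit-image as a stand-in for the 2D reconstruction step. The array layout matches the definition of $\mathbf{S}$ above; the function name is ours, and the angles are assumed to be NumPy arrays in degrees.
\begin{verbatim}
import numpy as np
from skimage.transform import iradon_sart

def hock_reconstruction(S, theta_x_deg, theta_y_deg):
    """Two-stage 4D reconstruction from x-y images.

    S[i, j, k, l] = intensity at screen pixel (x_i, y_j) for projection
    angles (theta_x[k], theta_y[l]); angles are given in degrees.
    """
    n_x, n_y, K, L = S.shape
    # Stage 1: for each (y_j, theta_y_l), reconstruct the x-x' slice from
    # its K horizontal projections (sinogram columns are angles).
    stage1 = np.zeros((n_x, n_x, n_y, L))
    for j in range(n_y):
        for l in range(L):
            stage1[:, :, j, l] = iradon_sart(S[:, j, :, l], theta=theta_x_deg)
    # Stage 2: for each (x, x') cell, the stage-1 values across (y_j, theta_y_l)
    # are projections of the y-y' distribution; reconstruct each of them.
    f = np.zeros((n_x, n_x, n_y, n_y))
    for i in range(n_x):
        for ip in range(n_x):
            f[i, ip, :, :] = iradon_sart(stage1[i, ip, :, :], theta=theta_y_deg)
    return f  # f[i, ip, j, jp] ~ density at (x_i, x'_ip, y_j, y'_jp)
\end{verbatim}
Each 2D reconstruction is independent of the others, so both loops parallelize trivially; additional SART iterations can be performed by feeding the previous estimate back through the solver's \texttt{image} argument.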
In the following simulated 4D reconstruction, the phase advances in both planes were scanned over $180\degree$ in 12 steps; at each step, the distribution was transported to and then binned on a virtual screen. The reconstruction was performed in normalized phase space, and it was assumed that the distribution was matched to the lattice parameters. Fig.~\ref{fig:tomo_sim_target_scan} shows the simulated images in normalized space with a screen resolution of $75 \times 75$. % \begin{figure}[!p] \centering \includegraphics[width=\textwidth]{Images/chapter4/tomo_sim_target_scan_full.png} \caption{Simulated $x$-$y$ projections as the horizontal (rows) and vertical (columns) phase advances are varied.} \label{fig:tomo_sim_target_scan} \end{figure} % Three SART iterations were used for each 2D reconstruction. The 2D projections of the reconstructed distribution are compared to those of the original distribution in Fig.~\ref{fig:tomo_sim_rec_hock_proj_2D} in normalized phase space, which shows good agreement. % \begin{figure}[!p] \centering \includegraphics[width=0.8\textwidth]{Images/chapter4/tomo_sim_rec_hock_proj_2D_ver.png} \caption{Simulated reconstruction using Hock's method (normalized phase space).} \label{fig:tomo_sim_rec_hock_proj_2D} \end{figure} % Notice that the projections of the reconstructed distribution, such as $x$-$y'$, are present in the simulated $x$-$y$ images in Fig.~\ref{fig:tomo_sim_target_scan}. The reason is straightforward: if the phase advance in the vertical plane is $\pi$/2, then $y \rightarrow y'$ and $f(x, y) \rightarrow f(x, y')$ \cite{Hock2013a}. This method is preferred because it leverages 2D reconstruction algorithms. Open-source implementations of these algorithms are widely available and the conditions needed for accurate reconstructions are well-understood, primarily due to the use of tomography in medical imaging. \subsection{Direct 4D reconstruction} If the phase advances cannot be independently controlled or if only a very small number of projections can be collected, 2D reconstruction algorithms must be generalized to 4D. Several algorithms generalize to any number of dimensions, but they may be difficult to implement, the conditions for an accurate reconstruction may be unclear, and the time and space complexity may make the method infeasible. Here, we focus on one method that has recently been experimentally demonstrated, then mention a few more that could be explored in future work. \subsubsection{ART} Each measured projection on the screen produces the following set of equations: % \begin{equation}\label{eq:art} \bm{\rho} = \mathbf{P} \bm{\psi}. \end{equation} % $\bm{\rho}$ is a vector of the pixel intensities on the screen and $\bm{\psi}$ is a vector of the phase space coordinates on the reconstruction grid. To form $\mathbf{P}$, we place a particle at the center of each bin in the reconstruction grid and track the particles to the screen using the transfer matrix. $\mathbf{P}_{i, j} = 1$ if particle $j$ landed in bin $i$ on the screen; otherwise, $\mathbf{P}_{i, j} = 0$. The equations produced by subsequent measurements are stacked, and the resulting system of equations is solved using a sparse least squares solver. This method has been used to reconstruct the phase space distribution in the Compact Linear Accelerator for Research and Applications (CLARA), a low-energy test facility \cite{Wolski2020}. 
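To make the construction of Eq.~\eqref{eq:art} concrete, the sketch below assembles $\mathbf{P}$ as a sparse matrix by tracking the grid-cell centers to the screen and solves the stacked system with a damped sparse least-squares routine. The binning convention and variable names are illustrative and do not reproduce the implementation used in \cite{Wolski2020}.
\begin{verbatim}
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsmr

def projection_matrix(grid_points, M, x_edges, y_edges):
    """One row per screen pixel, one column per reconstruction-grid cell.

    grid_points : (N**4, 4) array of (x, x', y, y') at the grid-cell centers.
    M           : 4x4 transfer matrix from the reconstruction point to the screen.
    Screen images are assumed to be (ny, nx) arrays flattened in C order.
    """
    screen = grid_points @ M.T                  # track cell centers to the screen
    ix = np.digitize(screen[:, 0], x_edges) - 1
    iy = np.digitize(screen[:, 2], y_edges) - 1
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    rows = (iy * nx + ix)[keep]                 # flattened screen-pixel index
    cols = np.flatnonzero(keep)                 # reconstruction-grid cell index
    data = np.ones(rows.size)
    return sparse.csr_matrix((data, (rows, cols)),
                             shape=(nx * ny, grid_points.shape[0]))

# Stack the systems from all measurements and solve (illustrative names):
#   P   = sparse.vstack([projection_matrix(grid, Mk, xe, ye) for Mk in matrices])
#   rho = np.concatenate([image.ravel() for image in images])
#   psi = lsmr(P, rho, damp=1e-3)[0]            # damped sparse least squares
\end{verbatim}
The size of $\mathbf{P}$ is the main practical limitation, as quantified next.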
For an $N \times N \times N \times N$ reconstruction grid, an $N \times N$ measurement grid, and $n$ measurements, $\bm{\rho}$ has $nN^2$ elements, $\bm{\psi}$ has $N^4$ elements, and $\mathbf{P}$ is an $nN^2 \times N^4$ matrix. In practice, these significant storage requirements limit the resolution of the reconstruction grid to $N \approx 50$ \cite{Wolski2020}. In Fig.~\ref{fig:tomo_sim_rec_art_proj_2D}, the method was applied to the same simulated distribution, but $8 \times 8$ projections were used instead of $15 \times 15$.
%
\begin{figure}[!p] \centering \includegraphics[width=0.8\textwidth]{Images/chapter4/tomo_sim_rec_art_proj_2D_ver.png} \caption{Simulated reconstruction using algebraic reconstruction (normalized phase space).} \label{fig:tomo_sim_rec_art_proj_2D} \end{figure}
%
Although the main features of the distribution are present in the reconstruction, there are streaking artifacts outside the beam core that are not present in Fig.~\ref{fig:tomo_sim_rec_hock_proj_2D}, although it is likely that the performance would improve if a larger number of projections were used. Unfortunately, the algorithm took hours to execute as opposed to minutes for the previous example, even with the reduced grid resolution.

\subsubsection{Additional methods}

Two additional methods to reconstruct the 4D phase space distribution could be explored in future work. Among the distributions consistent with the measured projections, MENT selects the distribution with the maximum entropy. For example, without any measurements constraining the solution, MENT will produce a uniform distribution. It can perform well with few projections and has been used for 2D reconstruction in particle accelerators \cite{Hock2013a}. The downside is that the iterative numerical solution is difficult to implement and may struggle to converge when the number of projections is large. For 4D reconstruction from $x$-$y$ projections, MENT could be used to perform the 2D reconstructions in Hock's method, which may result in improved performance over SART. Alternatively, just as ART was generalized to four dimensions in the previous section, MENT could be generalized to four dimensions.\footnote{In principle, MENT can perform a 4D reconstruction using 1D projections \cite{Sander1979}; however, it seems a priori unlikely for this to produce an accurate result: imagine reconstructing a 3D image from 1D projections.} An analytic MENT solution has recently been derived and used for the 4D reconstruction of an SNS minipulse using $x$-$x'$ and $y$-$y'$ projections from a laser wire \cite{Wong-forthcoming}.

Another method is to generate a particle bunch, track the bunch to the screen, weight each particle by the measured signal at the bin where it fell on the screen, and generate new particles in the region of that particle according to its weight. The advantage of this method is that it does not assume linear transport and that it can perform well with few projections. It was experimentally demonstrated by Wang et al. at the Xi’an Proton Application Facility (XiPAF) using six projections \cite{Wang2019}.

\subsection{Implementation in the SNS}

The idea to use SNS target images for tomographic reconstruction of the phase space distribution was proposed late in this research. Due to this fact, as well as time constraints and unexpected machine downtime, the methods described in the previous subsection could not be applied to real data; this is left for future work.
Nonetheless, the following paragraphs describe how the reconstruction can be performed in the SNS. \subsubsection{Optics control} We desire independent control of the horizontal and vertical phase advances. The optics control developed for the wire-scanner measurement can be used here. The constraints are now that the $\beta$ functions remain below 30 m/rad in the wire-scanner region, below 100 m/rad before the target, and stay within 15\% of their nominal values at the target. Fig.~\ref{fig:target_phase_scan_1} shows that both the horizontal and vertical phase advances can be independently scanned in a 180$\degree$ range. Fig.~\ref{fig:target_phase_scan_2} overlays the $\beta$ functions and phase advances throughout the RTBT for every step in the scan, showing that the beam size constraints are not violated. The horizontal axis starts at the first varied quadrupole and ends at the target. % \begin{figure}[!p] \centering \vspace*{2.0cm} \includegraphics[width=\textwidth]{Images/chapter4/target_phase_scan1.png} \caption{Scan of the phase advances at the target.} \label{fig:target_phase_scan_1} \vspace*{2.0cm} \end{figure} % \begin{figure}[!p] \centering \includegraphics[width=\textwidth]{Images/chapter4/target_phase_scan2.png} \caption{$\beta$ functions and phase advances vs. position for the scan in Fig.~\ref{fig:target_phase_scan_1}.} \label{fig:target_phase_scan_2} \end{figure} % Computing each optics setting takes approximately sixteen seconds using an OpenXAL solver. It also takes time to change the magnet strengths in the machine, trigger the beam, and collect a batch of target images. The time available in most accelerator physics studies is eight to ten hours at a maximum, so we place an upper limit on the number of images collected during the scan at $15 \times 15$, for which it takes around one hour to calculate the optics and one hour to collect the images. \subsubsection{Image acquisition and processing} Target image acquisition is handled entirely by the target imaging system software. Live target images are displayed in the SNS control room. It is straightforward to access the image from an OpenXAL script as an 80,000 element array. The script to perform the target scan repeatedly modifies the RTBT quadrupoles, triggers the beam, and saves the image array to a file. The unprocessed target images are not ideal. First, to reduce pulse-to-pulse variation, the images can be averaged over a few pulses. Second, the beam passes through 2 meters of Helium at atmospheric pressure before the target; due to radiation damage, light from the gas appears as a streaking artifact on the lower-right of the image \cite{Blokland2010}. Although this has been corrected by delaying the shutter opening by a few microseconds, the issue has occasionally resurfaced when the beam energy is different than 1 GeV. If these images are collected, they can be identified later by placing a maximum value on the pixels far from the image center, particularly in the lower-right region. Third, there are visible grid lines from the fiber bundle. A Gaussian blur is therefore applied to the image as in Fig.~\ref{fig:target_image}. Finally, there are four dark spots on the image that serve as fiducial markers; they are visible when the beam is large. In this work, the dark spots are left in the image. 
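A minimal sketch of this preprocessing chain is given below; the reshape dimensions, blur width, and artifact threshold are placeholders rather than the values used by the target imaging system software.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_target_images(frames, shape, blur_sigma=2.0, corner_max=None):
    """Average raw frames, screen for the gas-streak artifact, and blur.

    frames : iterable of raw 1D image arrays from the target imaging system.
    shape  : (rows, cols) used to reshape each raw array (detector-specific).
    """
    image = np.mean([np.reshape(f, shape) for f in frames], axis=0)  # average pulses
    if corner_max is not None:
        # Flag batches contaminated by light from the helium gas, which shows
        # up far from the image center (orientation of the corner is assumed).
        corner = image[shape[0] // 2:, shape[1] // 2:]
        if corner.max() > corner_max:
            raise RuntimeError("possible gas-streak artifact; discard this batch")
    return gaussian_filter(image, sigma=blur_sigma)  # smooth fiber-bundle lines
\end{verbatim}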
% \begin{figure}[!p] \centering \vspace*{5cm} \includegraphics[width=1.0\textwidth]{Images/chapter4/target_image.png} \caption{Image of the beam on the target.} \label{fig:target_image} \vspace*{5cm} \end{figure} % \subsubsection{Other uses of 2D projections} There is information to be gained from 2D projections of the distribution in addition to the tomographic 4D reconstruction just described. The projections can be compared to a uniform density ellipse. Additionally, one can observe the variation in the $x$-$y$ correlation coefficient as the difference between the horizontal and vertical phase advances is varied. This reveals any ``hidden" cross-plane correlations, as in Fig.~\ref{fig:tomo_sim_target_scan}. Finally, by computing the RMS moments of the images, the covariance matrix can be reconstructed using the least squares method described in Section \ref{sec:Phase space reconstruction from 1D projections}; the advantage would be that data collection is much faster than for the wire-scanners and that the $\langle xy \rangle$ moment is computed directly.
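For the last point, the required moments follow directly from the pixel intensities. A short sketch, assuming the image has already been background-subtracted and the pixel coordinates calibrated to beam-frame units:
\begin{verbatim}
import numpy as np

def image_moments(image, x, y):
    """Second-order moments <x^2>, <y^2>, <xy> of a beam image.

    image : 2D array of pixel intensities, image[j, i] at (x[i], y[j]).
    x, y  : calibrated pixel-center coordinates at the target.
    """
    X, Y = np.meshgrid(x, y)
    w = image / image.sum()                   # intensity as a density weight
    x_mean, y_mean = (w * X).sum(), (w * Y).sum()
    sig_xx = (w * (X - x_mean) ** 2).sum()
    sig_yy = (w * (Y - y_mean) ** 2).sum()
    sig_xy = (w * (X - x_mean) * (Y - y_mean)).sum()
    return sig_xx, sig_yy, sig_xy
\end{verbatim}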
%%% This file contains definitions of various useful macros and environments %%% %%% Please add more macros here instead of cluttering other files with them. %%% %%% Minor tweaks of style % These macros employ a little dirty trick to convince LaTeX to typeset % chapter headings sanely, without lots of empty space above them. % Feel free to ignore. \makeatletter \def\@makechapterhead#1{ {\parindent \z@ \raggedright \normalfont \Huge\bfseries \thechapter. #1 \par\nobreak \vskip 20\p@ }} \def\@makeschapterhead#1{ {\parindent \z@ \raggedright \normalfont \Huge\bfseries #1 \par\nobreak \vskip 20\p@ }} \makeatother % This macro defines a chapter, which is not numbered, but is included % in the table of contents. \def\chapwithtoc#1{ \chapter*{#1} \addcontentsline{toc}{chapter}{#1} } % Draw black "slugs" whenever a line overflows, so that we can spot it easily. \overfullrule=1mm %%% Macros for definitions, theorems, claims, examples, ... (requires amsthm package) \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{lemma}[thm]{Lemma} \newtheorem{claim}[thm]{Claim} \theoremstyle{plain} \newtheorem{defn}{Definition} \theoremstyle{remark} \newtheorem*{cor}{Corollary} \newtheorem*{rem}{Remark} \newtheorem*{example}{Example} %%% An environment for proofs %%% FIXME %%% \newenvironment{proof}{ %%% FIXME %%% \par\medskip\noindent %%% FIXME %%% \textit{Proof}. %%% FIXME %%% }{ %%% FIXME %%% \newline %%% FIXME %%% \rightline{$\square$} % or \SquareCastShadowBottomRight from bbding package %%% FIXME %%% } %%% An environment for typesetting of program code and input/output %%% of programs. (Requires the fancyvrb package -- fancy verbatim.) \renewcommand{\theFancyVerbLine}{\ttfamily \normalfont \arabic{FancyVerbLine}:} %% bigger line numbers \DefineVerbatimEnvironment{spicecode}{Verbatim}{fontsize=\footnotesize,frame=single, numbers=left, tabsize=4,commandchars=\\\{\}} \DefineVerbatimEnvironment{code}{Verbatim}{fontsize=\footnotesize,frame=single, tabsize=4} \newminted{csharp}{frame=single, linenos, tabsize=4, mathescape=true, fontsize=\footnotesize} \usemintedstyle{vs} %does not seem to work? 
%% macro for coloring of the equations
\makeatletter
\def\mathcolor#1#{\@mathcolor{#1}}
\def\@mathcolor#1#2#3{%
  \protect\leavevmode
  \begingroup
    \color#1{#2}#3%
  \endgroup
}
\makeatother

% environment for algorithms
\newenvironment{alg}[1]
{
    \begin{algorithm}
    \caption{#1}
    \begin{algorithmic}[1]
}
{
    \end{algorithmic}
    \end{algorithm}
}

% environment for small inline circuit examples
\newenvironment{circuitdev}
{
    \begin{center}
    \begin{circuitikz}[scale=0.8, font=\footnotesize]
    \draw
}
{
    ;
    \end{circuitikz}
    \end{center}
}

%%% inline quote
\newcommand{\shortquote}[1]{`\textit{#1}'}

%%% force italics in the quote
\newenvironment{blockquote}
{
    \begin{quotation}
    \itshape
}
{
    \end{quotation}
}

%%% macro for introducing line breaks without hyphenation
\newcommand{\+}{\discretionary{}{}{}}

%%% Asymptotic notation
\newcommand{\bigO}[1]{\mathcal{O}(#1)}

%%% The sets of real and natural numbers
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}

%%% Useful operators for statistics and probability
\DeclareMathOperator{\pr}{\textsf{P}}
\DeclareMathOperator{\E}{\textsf{E}\,}
%\DeclareMathOperator{\var}{\textrm{var}}
\DeclareMathOperator{\sd}{\textrm{sd}}

%%% Transposition of a vector/matrix
\newcommand{\T}[1]{#1^\top}

%%% Various math goodies
\newcommand{\goto}{\rightarrow}
\newcommand{\gotop}{\stackrel{P}{\longrightarrow}}
\newcommand{\maon}[1]{o(n^{#1})}
%\newcommand{\abs}[1]{\left|{#1}\right|}
\newcommand{\dint}{\int_0^\tau\!\!\int_0^\tau}
\newcommand{\isqr}[1]{\frac{1}{\sqrt{#1}}}

%%% Various table goodies
\newcommand{\pulrad}[1]{\raisebox{1.5ex}[0pt]{#1}}
\newcommand{\mc}[1]{\multicolumn{1}{c}{#1}}

%%% Macro for red text in to-do sections
\newcommand{\todo}[1]{{\color{red} TODO: [#1]}}
%%
%% This file is the documentation for 'beamerthemeUBayreuth.sty'
%%
%% The UBayreuth beamer theme is a simple, neutral LaTeX beamer theme using the
%% colors of the University of Bayreuth.
%%
%% THIS THEME IS NOT AN OFFICIAL ONE!!
%%
%% ---------------------------------------------------------------------------
%% Copyright 2017 Sebastian Friedl
%%
%% This work is licensed under a Creative Commons Attribution-ShareAlike 4.0
%% International License (https://creativecommons.org/licenses/by-sa/4.0/).
%%
%% This means that if you change the theme and re-distribute it, you must re-
%% tain the copyright notice header and license it under the same CC-BY-SA
%% license.
%% This does not affect any presentations that you create with the theme.
%%
%% ---------------------------------------------------------------------------

% !TeX spellcheck = en_US
% !TeX document-id = {b3b4668f-f5d8-4010-ac4e-2eb3098d15f4}
% !TeX TXS-program:compile=txs:///pdflatex/[--shell-escape]

\documentclass[12pt,a4paper]{scrartcl}

\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{csquotes}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{minted}
\usepackage{pgf}
\usepackage[osf,slantedGreek]{mathpazo}
\usepackage[osf,scale=.92]{roboto}

\parindent 0pt

\title{The UBayreuth \LaTeX\ beamer theme}
\author{Sebastian Friedl}
\date{March 13, 2017}

\hypersetup{pdftitle={The UBayreuth \LaTeX\ beamer theme},pdfauthor={Sebastian Friedl}}

\begin{document}
\maketitle
\thispagestyle{empty}

\section*{Important note}
This theme is an unofficial theme. \\
It uses the colors of the University of Bayreuth.

\section*{Dependencies}
The UBayreuth theme requires \LaTeXe\ and the following packages:
\begin{itemize} \ttfamily
    \item appendixnumberbeamer
    \item etoolbox
    \item calc
    \item pgfopts
    \item keyval
    \item tikz
\end{itemize}

\section{Using the theme}
To use the theme, copy the file \texttt{beamerthemeUBayreuth.sty} into the folder containing the master file of your presentation. Advanced users may also install the style file on their local system. \par
After that, use the command \mintinline{LaTeX}{\usetheme{UBayreuth}} to select the UBayreuth theme for your presentation.

\section{Theme options}
You can pass some options to the theme that influence how the theme looks. \\
Syntax: \ \ \mintinline{LaTeX}{\usetheme[options]{UBayreuth}} \\\vspace{0.5ex}

\textbf{Available options:}
\begin{itemize}
    \item \texttt{numbering} \\ Influences how the frames are numbered
    \begin{itemize}
        \item \texttt{none} \\ The frames aren't numbered
        %
        \item \texttt{counter} \\ The frames are numbered with a simple counter
        %
        \item \texttt{fraction} \hfill \textit{default} \\ The frames are numbered with current frame / total frames
        %
        \item \texttt{appedix} \\ An \enquote{Anhang} remark is shown instead of the frame numbers. \\ This setting is applied automatically to frames in the appendix.
    \end{itemize}
    %
    \item \texttt{footline} \\ Influences how the footline of the theme looks
    \begin{itemize}
        \item \texttt{none} \\ No footline is shown
        %
        \item \texttt{standard} \hfill \textit{default} \\ The standard footline containing author, title and frame numbers is shown.
% \item \texttt{smallcaps} \\ The standard footline with small caps font shape \end{itemize} % \item \texttt{titleformat title} \\ Influences the style of the presentation title on the title frame \begin{itemize} \item \texttt{regular} \hfill \textit{default}\\ The presentation title is set in normal boldface style % \item \texttt{smallcaps} \\ The presentation title is set in boldface and small caps style \end{itemize} \end{itemize} \section{Style sample} This style sample was made using the sample presentation \enquote{Steine} created by A.~Arzberger and S.~Friedl. \\ It is licensed under the \textit{Creative Commons Attribution-ShareAlike 4.0 International} license. \begin{center} \pgfimage[width=0.4\textwidth,page=1]{steine_UBayreuth} \pgfimage[width=0.4\textwidth,page=2]{steine_UBayreuth} \\ \pgfimage[width=0.4\textwidth,page=3]{steine_UBayreuth} \pgfimage[width=0.4\textwidth,page=4]{steine_UBayreuth} \\ \pgfimage[width=0.4\textwidth,page=5]{steine_UBayreuth} \pgfimage[width=0.4\textwidth,page=6]{steine_UBayreuth} \\ \pgfimage[width=0.4\textwidth,page=7]{steine_UBayreuth} \pgfimage[width=0.4\textwidth,page=8]{steine_UBayreuth} \end{center} \section{License} The UBayreuth theme is derived work from Matthias Vogelgesang's metropolis beamer theme. Thus, both copyright headers printed below apply. \subsection*{UBayreuth beamer theme} Copyright 2017 Sebastian Friedl \\ This work is licensed under a \textit{Creative Commons Attribution-ShareAlike~4.0 International} License (\url{https://creativecommons.org/licenses/by-sa/4.0/}). \\ This means that if you change the theme and re-distribute it, you must retain the copyright notice header and license it under the same CC-BY-SA license. \\ This does not affect any presentations that you create with the theme. \subsection*{metropolis beamer theme} Copyright 2015 Matthias Vogelgesang and the LaTeX community. A full list of contributors can be found at \begin{center} \url{https://github.com/matze/mtheme/graphs/contributors} \end{center} and the original template was based on the HSRM theme by Benjamin Weiss. This work is licensed under a \textit{Creative Commons Attribution-ShareAlike~4.0 International} License (\url{https://creativecommons.org/licenses/by-sa/4.0/}). \end{document}
\section{Personal Projects}
  \resumeSubHeadingListStart
    \resumeSubheading
      {\href{https://github.com/xtutran/public-resume}{public-resume}}{GitHub}
      {Keep my resume up to date and automatically generate a sharable PDF}{2021}
  \resumeSubHeadingListEnd
%%%%%%%%%%%%%%%%%%%%%%% file template.tex %%%%%%%%%%%%%%%%%%%%%%%%% % % This is a general template file for the LaTeX package SVJour3 % for Springer journals. Springer Heidelberg 2010/09/16 % % Copy it to a new file with a new name and use it as the basis % for your article. Delete % signs as needed. % % This template includes a few options for different layouts and % content for various journals. Please consult a previous issue of % your journal as needed. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % First comes an example EPS file -- just ignore it and % proceed on the \documentclass line % your LaTeX will extract the file if required % %\documentclass{svjour3} % onecolumn (standard format) %\documentclass[smallcondensed]{svjour3} % onecolumn (ditto) \documentclass[smallextended]{svjour3} % onecolumn (second format) %\documentclass[twocolumn]{svjour3} % twocolumn % \smartqed % flush right qed marks, e.g. at end of proof % \usepackage{graphicx} \usepackage{lineno,hyperref} \usepackage{amssymb} \usepackage{array} \usepackage{framed} \usepackage{graphicx} \usepackage{float} \usepackage{supertabular} \usepackage[table]{xcolor} \usepackage{listings} \usepackage{times} \usepackage [latin1]{inputenc} %\usepackage [utf8]{inputenc} \usepackage[T1]{fontenc} \setlength{\marginparwidth}{2cm} % Some very useful LaTeX packages include: % (uncomment the ones you want to load) \providecommand{\tabularnewline}{\\} \usepackage{booktabs, multicol, multirow} \usepackage[acronym,nomain]{glossaries} \usepackage[inline]{enumitem} \usepackage{algorithm} \usepackage{algorithmicx} \usepackage{algpseudocode} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{subfigure} \usepackage{grffile} \usepackage{tabularx} \usepackage{colortbl} \usepackage{hhline} \usepackage{hyperref} \usepackage{here} \usepackage{graphicx} % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % new commands % % % \usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes} % % Convenience commands for the paper editing process % \newcommand{\comment}[1]{{\textbf{\color{red}[#1]}}} \newcommand{\fixed}[1]{{\textbf{\color{blue}[#1]}}} % % Convenience commands for references % \newcommand{\secref}[1]{Section~\ref{sec:#1}} \newcommand{\tabref}[1]{Table~\ref{tab:#1}} \newcommand{\figref}[1]{Figure~\ref{fig:#1}} \newcommand{\quotes}[1]{``#1''} \newcommand{\argmin}{\arg\!\min} \newcommand{\argmax}{\arg\!\max} %\newtheorem{definition}{Definition} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % Acronym definitions \newacronym{sps}{SPS}{Stream Processing Systems} \newacronym{pe}{PE}{Processing Elements} \newacronym{dag}{DAG}{Directed Acyclic Graph} % \usepackage{pgf} \usepackage{tikz} \usetikzlibrary{arrows,automata} \usetikzlibrary{shapes.misc} \tikzset{cross/.style={cross out, draw=black, minimum size=3*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, %default radius will be 1pt. 
cross/.default={10pt}} \usepackage{array} \usepackage{balance} \usepackage{url} \usepackage{rotating} \usepackage{changes} % COLORING AND USEFUL COMMAND FOR EDITING CHANGES \definechangesauthor[name={MB},color=orange]{MB} \newcommand{\addMB}[1]{\added[id=MB]{#1}} \newcommand{\delMB}[1]{\deleted[id=MB]{#1}} \newcommand{\repMB}[2]{\replaced[id=MB]{#2}{#1}} \usepackage{todonotes} \newcommand{\todoMB}[2]{\linespread{0.7}\todo[color=yellow!50,#1]{\scriptsize\textbf{MB:}#2}} % % \usepackage{mathptmx} % use Times fonts if available on your TeX system % % insert here the call for the packages your document requires %\usepackage{latexsym} % etc. % % please place your own definitions here and don't use \def but % \newcommand{}{} % % Insert the name of "your journal" with % \journalname{myjournal} % \begin{document} \title{Verifying Big Data Topologies \emph{By-Design}:\\ a Semi-Automated Approach} %\subtitle{Do you have a subtitle?\\ If so, write it here} %\titlerunning{Short form of title} % if too long for running head \author{Marcello M. Bersani \and Francesco Marconi \and Damian A. Tamburri \and Andrea Nodari \and Pooyan Jamshidi} %\authorrunning{Short form of author list} % if too long for running head \institute{Marcello and Francesco \at Politecnico di Milano, Milan, Italy \\ \email{$[$marcellomaria.bersani, francesco.marconi$][email protected]} % \\ % \emph{Present address:} of F. Author % if needed \and Damian \at TU/e - JADS\\ \email{[email protected]} % \\ \and Pooyan and Andrea \at Imperial College London\\ \email{$[$p.jamshidi,a.nodari15$][email protected]} } \date{Received: date / Accepted: date} % The correct dates will be entered by the editor \maketitle \begin{abstract} Big data architectures have been gaining momentum in recent years. For instance, Twitter uses stream processing frameworks like Apache Storm to analyse billions of tweets per minute and learn the trending topics. However, architectures that process big data involve many different components interconnected via semantically different connectors. Such complex architectures make %it a difficult task for software architects to refactor the initial designs. possible refactoring of the applications a difficult task for software architects, as applications might be very different with respect to the initial designs. As an aid to designers and developers, we developed OSTIA (Ordinary Static Topology Inference Analysis) that allows detecting the occurrence of common anti-patterns across big data architectures and exploiting software verification techniques on the elicited architectural models. This paper illustrates OSTIA and evaluates its uses and benefits on three industrial-scale case-studies. \keywords{Big Data Architectures \and Software Design \& Analysis \and Big Data Systems Verification} % \PACS{PACS code1 \and PACS code2 \and more} % \subclass{MSC code1 \and MSC code2 \and more} \end{abstract} \section{Introduction} \label{intro} %\input{intro} %%\begin{itemize} %%\item I would follow the path of the abstract, we should probably provide some numbers and info on storm %%\item mind you we should stress on the innovative aspects of the paper and tech. there is nothing strictly related to it %%\item we should comment on what could be done with OSTIA in combination with Eclipse Based tech. %%\end{itemize} %%%Big data architectures have been gaining momentum in the last few years. 
For example, Twitter uses complex Stream topologies featuring frameworks like Storm to analyse and learn trending topics from billions of tweets per minute. However, verifying the consistency of said topologies often requires de- ployment on multi-node clusters and can be expensive as well as time consuming. As an aid to designers and developers evaluating their Stream topologies at design-time, we developed OSTIA, that is, ?On-the-fly Storm Topology Inference Analysis?. OSTIA allows reverse-engineering of Storm topologies so that designers and developers may: (a) use previously existing model- driven verification&validation techniques on elicited models; (b) visualise and evaluate elicited models against consistency checks that would only be available at deployment and run-time. We illustrate the uses and benefits of OSTIA on three real-life industrial case studies. %%%%%% %%%%%% intro needs a bit of refinement with what we say in the title and a few more definitions should be included (e.g., about streaming and what it represents or why topologies ?are? the Big Data architecture)? Perhaps we should also increase the stress and focus on Quality and deployability aspects (i.e., the main topics of next year?s QoSA) in the intro, and how OSTIA aids at improving these aspects Big data or \emph{data-intensive} applications (DIAs) process large amounts of data for the purpose of gaining key business intelligence through complex analytics using machine-learning techniques \cite{bdsurvey,ml4bd}. These applications are receiving increased attention in the last years given their ability to yield competitive advantage by direct investigation of user needs and trends hidden in the enormous quantities of data produced daily by the average Internet user. According to Gartner~\cite{gartner} %\footnote{\url{http://www.gartner.com/newsroom/id/2637615}} business intelligence and analytics applications will remain a top focus for Chief-Information Officers (CIOs) of most Fortune 500 companies until at least 2019-2021. However, the cost of ownership of the systems that process big data analytics are high due to infrastructure costs, steep learning curves for the different frameworks (such as Apache Storm~\cite{storm}, %\footnote{\url{http://storm.apache.org/}}, Apache Spark~\cite{spark} %\footnote{\url{http://spark.apache.org/}} or Apache Hadoop~\cite{hadoop}) typically involved in design and development of big data applications %\footnote{\url{https://hadoop.apache.org/}} and complexities in large-scale architectures. %and their governance within networked organizations. %\todoMB{}{Aggiunto un ``and'' per chiudere la frase (c'era gia').} A key complexity of the above design and development activity lies in quickly and continuously refining the configuration parameters of the middleware and service platforms on top of which the DIA is running \cite{wicsabd}. The process in question is especially complex as the number of middleware involved in DIAs design increases; the more middleware are involved the more parameters need co--evaluation (e.g., latency or beaconing times, caching policies, queue retention and more) - \emph{fine-tuning these ``knobs" on so many concurrent technologies requires an automated tool to speed up this heavily manual, trial-and-error continuous fine-tuning process}. We argue that a primary entry-point for such fine-tuning is the DIA's graph of operations along with the configurations that the graph is decorated with, for execution. 
This is possible when the adopted framework decomposes the computation in term of concurrent operations on data that are subject to a specific precedence relation. %\todoMB{}{Aggiunto commento.} On one hand, the graph in question is a DAG --- a Directed Acyclic Graph representing the cascade of operations to be applied on data in a batch (i.e., slicing the data and analysing one partition at the time with the same operations) or stream (i.e., continuous data analysis) processing fashion. On the other hand, the application graph can either be known to the designer or it can be directly extracted from DIA code. This second scenario is where our research solution comes in. % %Effectiveness, in big data terms, means that the architecture as well as the architecting processes and tools are able to support design, deployment, operation, refactoring and subsequent (re-)deployment of architectures continuously and consistently with runtime restrictions imposed by big data development frameworks. %Storm, for example, is an Apache big data processing middleware which requires the processing elements to represent a Directed-Acyclic-Graph (DAG). In toy topologies (comprising few components), such constraints can be effectively checked manually, however, when the number of components in such architectures increases to real-life industrial scale architectures, it is enormously difficult to verify even these ``simple" structural DAG constraints. %\textbf{TODO: can we add an example of said consistency checks/issues?} \\ %We argue that the above notion of architecture and architecting effectiveness can be maintained through continuous architecting of big data applications consistently with a DevOps organisational structure \cite{ossslr,devops}. % % %In the big data domain, continuous architecting means supporting the continuous and incremental improvement of big data architectural designs - e.g., by changing the topological arrangement of architecture elements or any of their properties such as queue lengths - using on-the-fly analyses on running applications exploiting platform and infrastructure monitoring data. For example, the industrial parter that aided the evaluation of the results in this paper is currently facing the issue of continuously changing and re-arranging their stream processing application. Particularly, they require to change in response to: (a) types of content that need analysis (multimedia images, audios as opposed to news articles and text); (b) types of users that need recommendation (e.g., governments as opposed to single users). Changing and constantly re-arranging an application's architecture requires constant and \emph{continuous architecting} of architecture elements, their interconnection and their visible properties. Moreover, providing automated support to this continuous architecting exercise, reduces the (re-)design efforts and increases the speed of big data architectures' (re-)deployability by saving the effort of running trial-and-error experiments on expensive infrastructure. 
This paper illustrates and evaluates OSTIA, which stands for ``Ordinary Static Topology Inference Analysis" -- OSTIA is a tool which retrieves data-intensive topologies to allow for: (a) \emph{anti-pattern analysis} - OSTIA allows detection of known and established design anti-patterns for data-intensive applications; (b) \emph{transparent formal verification} - OSTIA transposes the recovered data-intensive topology models into equivalent formal models for the purpose of verifying temporal properties, such as basic queue-safety clauses \cite{icsoft}. First, during its reverse-engineering step, OSTIA recovers a JSON file describing the technical structure details and configurations in the targeted topologies. %analyses the architecture to verify whether it is consistent with development restrictions and/or deployment constraints of the underlying development frameworks (e.g., constraints).\todoMB{}{Frase non chiara.} To do so, OSTIA hardcodes intimate knowledge on key big data processing frameworks (Apache Storm and Apache Hadoop2, in our case) and their dependency structure in the form of a meta-model \cite{mda}. This knowledge is necessary to infer from data-intensive source-code topologies are correct and correctly configured. As previously stated, currently, OSTIA focuses on Apache Hadoop and Apache Storm, i.e., the most famous and established real-time batch and stream processing engines \cite{storm, toshniwal2014storm}, respectively. % Secondly, such representations may be used for further analysis through model verification thanks to formal verification techniques \cite{icsoft}. The verification approach is lightweight and it is carried out in a completely transparent fashion to OSTIA users. %in at least five scenarios: (a) realising an exportable visual representation of the developed topologies; (b) verifying, the structural constraints on topologies that would only become evident during infrastructure setup or runtime operation; (c) verifying the topologies against anti-patterns \cite{patternoriented2000} that may lower performance and limit deployability/executability; (d) manipulate said topologies to elicit non-obvious structural properties such as linearisation or cascading; (e) finally, use topologies for %In an effort to offer said support in a DevOps fashion, OSTIA was engineered to act as an architecture recovery mechanism that closes the feedback loop between operational data architectures (Ops phase) and their refactoring phase (Dev phase). As previously stated, currently, OSTIA focuses on Apache Hadoop and Apache Storm, i.e., the most famous and established real-time batch and stream processing engines \cite{storm, toshniwal2014storm}, respectively. This paper outlines OSTIA, elaborating on the major usage scenario above, its benefits, and limitations. Also, we evaluate OSTIA using case-study research to conclude that OSTIA does in fact provide valuable insights for refactoring of big data architectures. %\todoMB{}{Ho tolto il rifermento al continuos architecting e alle applicazioni stream-based. 
Meglio rimanere generici.} Although a previous version of this paper was published in the proceedings of WICSA 2015 \cite{wicsabd}, we introduce the following novel contributions: \begin{itemize} \item we extended OSTIA to address Apache Hadoop data-intensive applications and re-executed the evaluation in line with this addition; \item we extended OSTIA with a formal verification feature for using a formal model built via Constraint LTL over-clocks (CLTLoc)~\cite{BRS15} - an extension of the well-known Linear Temporal Logic (LTL)~\cite{ltl} with variables measuring the elapsing of time. This feature operates verification on CLTLoc specifications and is completely transparent to OSTIA users, checking autonomously for safety of OSTIA-elicited topologies; %\item we extended OSTIA with 3 heuristics\todoMB{}{Non le trovo nel paper.} that expert big data practitioners in the WICSA 2015 working group panel suggested as valuable additions; \end{itemize} We released OSTIA as an open-source software~\cite{ostia}. The rest of the paper is structured as follows. The next section elaborates further on the notion of refactoring for DIAs. Section \ref{ra} outlines our research design and context of study. Section \ref{rs} outlines OSTIA. Section \ref{eval} evaluates OSTIA while Section \ref{disc} discusses results and evaluation outlining OSTIA limitations and threats to validity. Finally, Sections \ref{rw} and \ref{conc} report related work and conclude the paper. \section{Research Methods} \label{ra} %\input{ra} %\begin{itemize} %\item so we had a focus group to actually elaborate the aprroach %\item then we used explorative prototyping to elicit the initial version of the prototype and then refined that by means of case study, we could mention that we used ATC as an action-research source (this is what we are doing now internally in WP2 actually) %\item ... %\end{itemize} % %The work we elaborated in this paper is stemming from the following research question: % %\begin{center} %\emph{``What are the sub-optimal structural"} %\end{center} %% \comment{I would say if we connect it to the special need in DICE would makes more sense?} % %This research question emerged as part of our work within the DICE EU H2020 project~\cite{dice2020} %%\footnote{\url{http://www.dice-h2020.eu/}} %where we evaluated our case-study owners' scenarios and found that their continuous architecting needs were: (a) focusing on the topological abstractions and surrounding architectural specifications; (b) focusing on bridging the gap between insights from Ops to (re-)work at the Dev level; (c) their needs primarily consisted in maintaining architectural consistency during refactoring. %In pursuit of the research question above, From a methodological perspective, the results outlined in this paper were elaborated as follows and made concrete through the actions in Sec.~\ref{sec:antipatternextraction} and \ref{sec:researchsolutioneval}. \subsection{Extracting Anti-Patterns for Big Data Applications}\label{sec:antipatternextraction} The anti-patterns illustrated in this paper were initially elaborated within 3 structured focus-groups \cite{focusgroup} involving practitioners from a different organization in each focus-group round; subsequently, we interviewed 2 domain-expert (5+ years of experience) researchers on big data technologies as a control group. The data was analyzed with a simple card-sorting exercise. 
The patterns emerged from the card-sorting were confirmed/disproved with the patterns emerging from our interview-based control group; disagreement between the two groups was evaluated Inter-Rater Reliability assessment using the well-known Krippendorf Alpha coefficient \cite{content} (assessment of $K_{alpha}=0,89$). %; \todoMB{}{Dam, non capsico la frase prima del punto-e-virgola} Table \ref{tabba} outlines the population we used for this part of the study. The practitioners were simply required to elaborate on the most frequent structural and anti-patterns they encountered on their DIA design and experimentation. \begin{table} \caption{Focus-Groups population outline.}\label{tabba} \begin{tabular}{|c|c|c|c|} \hline \textbf{Role} & \textbf{\#Participants} & \textbf{Mean Age} & \textbf{Mean Exp. With DIAs (\#months)}\tabularnewline \hline Architect & 3 & 35,3 & 17,3\tabularnewline \hline Developer & 4 & 27,7 & 36,2\tabularnewline \hline Operator & 5 & 31,1 & 38,1\tabularnewline \hline Manager & 3 & 44,2 & 18,4\tabularnewline \hline \end{tabular} \end{table} The focus-group sessions were structured as follows: (a) the practitioners were presented with a data-intensive architectural design using standard UML structure and behavior representations (a component view and an activity view \cite{NittoJGST16}); (b) the practitioners were asked to identify and discuss any bottlenecks or structural limitations in the outlined designs; (c) finally, the practitioners were asked to illustrate any other anti-pattern the showcased topologies did not contain. %\todoMB{}{Spiegare meglio il focus-group}. %Following the focus-group, through self-ethnography \cite{selfeth} and brainstorming we identified the series of essential consistency checks, algorithmic evaluations as well as anti-patterns that can now be applied through OSTIA while recovering an architectural representation for Storm topologies. %We designed OSTIA %%\footnote{\url{https://github.com/maelstromdat/OSTIA}} %to support the incremental and iterative refinement of streaming topologies based on the incremental discovery and correction of the anti-patterns. % %\begin{figure*} % \centering % \includegraphics[width=12cm]{socialsensorother} % \caption{A sample Storm topology (readable from left to right) using an UML object diagram in the SocialSensor App, notes identify types for nodes (i.e., Bolts or Spouts).} % \label{socialsensor-topology} %\end{figure*} \subsection{Research Solution Evaluation}\label{sec:researchsolutioneval} OSTIA's evaluation is threefold. First, we evaluated our solution using an industrial case-study offered by one of the industrial partners in the DICE EU H2020 Project consortium~\cite{dice2020}. %\footnote{\url{http://www.dice-h2020.eu/}}. The partner in question uses open-source social-sensing software to elaborate a subscription-based big-data application that: (a) aggregates news assets from various sources (e.g., Twitter, Facebook, etc.) based on user-desired specifications (e.g., topic, sentiment, etc.); (b) presents and allows the manipulation of data. The application in question is based on the SocialSensor App~\cite{socialsensor} %\footnote{\url{https://github.com/socialsensor}} %(see %Fig. \ref{socialsensor-topology} for a sample realised using a simple UML object diagram). In particular, the topology in %Fig. \ref{socialsensor-topology} extracts data from sources and divides and arranges contents based on type (e.g., article vs. 
media), %later updating a series of databases (e.g., Redis) with these elaborations. % which features the combined action of three complex streaming topologies based on Apache Storm. The models that OSTIA elicited from this application were showcased to our industrial partner in a focus group aimed at establishing the value of insights produced as part of OSTIA-based analyses. Our qualitative assessment was based on questionnaires and open discussion. Second, to further confirm the validity of OSTIA analyses and support, we applied it on two open-source applications featuring Big-Data analytics, namely: (a) the DigitalPebble application, %\footnote{\url{https://github.com/DigitalPebble/storm-crawler}}, ``A text classification API in Java originally developed by DigitalPebble Ltd. The API is independent from the ML implementations and can be used as a front end to various ML algorithms''~\cite{storm-crawler}; (b) the StormCV application, %\footnote{\url{https://github.com/sensorstorm/StormCV}} ``StormCV enables the use of Apache Storm for video processing by adding computer vision (CV) specific operations and data model; the platform enables the development of distributed video processing pipelines which can be deployed on Storm clusters"~\cite{stormCV}. %\begin{itemize} %\item elaborate on the case-study partner %\item elaborate on the case at hand for that partner %\item elaborate on the origin of the application and how it uses social-sensor %\item also, elaborate on the case by NETF which starts and stems from KILLRWEATHER %\item should we elaborate on something else? %\end{itemize} %\textbf{TODO: @anyone, feel free to elaborate more!!} Third, finally, as part of the OSTIA extension recapped in this manuscript, we applied formal verification approaches using the Zot~\cite{zot} %\footnote{\url{https://github.com/fm-polimi/zot}} model-checker following an approach tailored from previous work \cite{icsoft,BRS15}. %\comment{this section need a bit of refactoring it's not focused enough} \section{Results: OSTIA Explained} \label{rs} %\input{rs} %\input{anti-pattern} %% \textbf{@Pooyan,Andrea: here we should probably elaborate on OSTIA's architecture and the design principles that led us to define it as such... also we might want to elaborate on its components, the structure I'm suggesting below is merely tentative but it will give us ahead start!!} %% \begin{itemize} %% \item add and comment the meta-model of storm and how OSTIA uses that as a reference to draw and check models which are consistent with the technology %% \item OSTIA Architecture %% \item we should probably elaborate the architecture part (or on a separate "implementation" part or paragraph) with a link to the downloadable technology - @Andrea: can we bundle it up as plugin for Eclipse? E.g., somehow using RCP? %% \item OSTIA Antipatterns Module %% \item OSTIA Visualisation Module %% \item OSTIA extensibility %% \item OSTIA explanation of use and simple usage scenario %% \item OSTIA explanation of use and simple usage scenario of continuous architecting %% \end{itemize} %This section outlines OSTIA starting form a brief recap of the technology it is currently designed to support. Further on, This section introduces how OSTIA was designed to support design-time analysis and continuous improvement of data-intensive applications, using the Storm framework as a running example. For this reason, a brief recap of Storm is given to understand the rationale behind OSTIA. 
%Finally, the section outlines an example meta-model for Storm that captures all restrictions and rules (e.g., for configuration, topology, dependence, messaging, etc.) in the framework.
OSTIA uses meta-models of the supported technologies as a reference every time the application is run to recover and analyse operational topologies.
\subsection{A Concrete Example: The Storm Architecture}
Storm is a technology developed at Twitter \cite{toshniwal2014storm} to address the problem of processing streaming data; it is a distributed processing framework able to analyse continuous streams of data. A Storm topology is a DAG composed of nodes of two types: spouts and bolts. The former are the nodes that bring data into the topology, for instance by querying APIs or by retrieving information from a message broker such as Apache Kafka\footnote{\url{http://kafka.apache.org/}}. The latter execute operations on the data, such as filtering or serialising.
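To make the notions of spouts, bolts and subscriptions concrete, the sketch below is a purely illustrative example (hypothetical node names, not OSTIA's actual code): it captures a small Storm-like topology as a directed graph and prints it in the DOT graph notation, the same kind of graph representation that OSTIA relies on for its analyses and exports, as discussed in the following subsections.
\begin{lstlisting}[basicstyle=\normalfont\ttfamily\small,caption={A Storm-like topology captured as a directed graph and exported to DOT (illustrative sketch).}]
# Illustrative sketch (not OSTIA's code): a Storm-like topology captured as
# a directed graph and serialised to DOT. Node names are hypothetical.

from typing import Dict, List

# Adjacency list: edges point from a node to the nodes subscribing to it.
topology: Dict[str, List[str]] = {
    "tweetSpout": ["expander"],                    # spout: outgoing edges only
    "expander": ["articleExtractor", "mediaExtractor"],
    "articleExtractor": [],                        # sink bolts: no outgoing edges
    "mediaExtractor": [],
}

def to_dot(graph: Dict[str, List[str]], name: str = "topology") -> str:
    """Serialise the directed graph to the DOT notation."""
    lines = [f"digraph {name} {{"]
    for source, targets in graph.items():
        for target in targets:
            lines.append(f"  {source} -> {target};")
    lines.append("}")
    return "\n".join(lines)

print(to_dot(topology))
\end{lstlisting}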
Further detail on the Storm meta-model that OSTIA uses as a reference may be found in the full technical report describing our technology\footnote{\url{http://dice-h2020.eu/deliverables/D2.1}}.
\subsection{OSTIA Tool Architecture}
\subsubsection{Architecture Overview}
The overall architecture of OSTIA is depicted in Figure \ref{archostia}. OSTIA retrieves the logical architectural information of the topology via static analysis of the source code and generates a simple intermediate format to be used afterwards by other algorithmic processes. OSTIA is architected so that additional algorithmic analyses, similar to our anti-pattern analyses, can easily be added. These functionalities operate on the information that resides in the intermediate format and provide added value for design-time analysis and verification. Since the information in the intermediate format relies only on the logical code analysis, the algorithmic analyses require some additional information regarding the running topology, such as, for instance, the end-to-end latency and throughput of the topology or the mean duration of the computation carried out by the computational nodes when they process a unit of data.
\begin{figure}[H]
\begin{center}
\includegraphics[width=9cm,draft]{fig0}
\caption{OSTIA extensible architecture.}\label{archostia}
\end{center}
\end{figure}
Such information is continuously added to the intermediate repository via runtime monitoring of the topology on a real deployment cluster, and provides appropriate and rich information for refactoring the initial architecture and enabling performance-driven DevOps \cite{brunnert2015performance}. Finally, OSTIA allows users to export the topology in different formats (specifically, JSON, DOT, CSV, and XMI) to analyse and continuously improve the topology with other tools; in the scope of this paper we focus on verification \emph{by-design} featuring formal verification.
{\color{blue}
\subsubsection{Architecture Properties and Extensibility}
The architectural design of the OSTIA tool was conceived with a modular model-driven architecture \cite{mda} in mind. More specifically, the tool provides a platform-independent, topology-based analysis module which elicits topologies from data-intensive applications using a technology-agnostic format based on the ``.dot'' notation, a well-known standard graph-representation format. On top of this analysis module, the architecture provides a design and analysis module which outputs a visualization of the graph-formatted input. In addition, the tool provides a pattern-analysis module with graph-analysis and pattern-mining functions; one function per pattern is used in this module. Finally, the tool provides a software-verification layer relying on third-party tools from previous and related work, as outlined in Sec. \ref{verification}. From an extensibility perspective, the architecture provides a template, commented within the source code, to be used as the basic format for extending each module; in principle, extending designers simply ``instantiate'' this template within the module and invoke the extension from the visualization layer, which warrants OSTIA's extensibility.
}
%%%\subsection{A Concrete Example: The Storm Architecture}
%%%
%%%Storm is a technology developed at Twitter \cite{toshniwal2014storm} in order to
%%%face the problem of processing of streaming of data. It is defined as a
%%%distributed processing framework which is able to analyse streams of data.
A Storm topology is a computational graph composed by nodes of two types: spouts and bolts. The former type includes nodes that process the data entering the topology, for instance %%%querying APIs or retrieve information from a message broker, such as Apache %%%Kafka\footnote{\url{http://kafka.apache.org/}}. The latter executes operations on data, such as filtering or serialising. %%% %%%\subsubsection{Storm Framework Meta-Model} %%% %%%OSTIA was designed to retrieve and analyse big data topologies, allowing their refactoring in a way which is consistent with framework restrictions, rules and regulations part of the Storm framework. To do so, OSTIA uses a meta-model for the Storm framework which acts as an operational image of all said restrictions and rules that OSTIA needs to maintain. %%%Essentially OSTIA uses the meta-model as such an operational image for Storm, for two purposes: (a) checking that Storm restrictions (e.g., Spouts initiate the topology) and constraints (e.g., grouping policies) are valid on models recovered by OSTIA; (b) keep checking said restrictions and constraints during continuous architecting. \todoMB{}{Chiarire.} %%%To give a hint of the complexity of the technology, we outline the meta-model in Fig. \ref{stormmm}. where, for example, the grouping restrictions that Storm envisions are captured in an enumeration of constraints (see the $<<$Grouping$>>$ element or the $<<$ReplicationFactor$>>$ concrete parameter). Key elements of the meta-model are the following: %%%\begin{itemize} %%%\item $<<$TopologyConfiguration$>>$ contains the parameters necessary for the Storm framework to be configured and to run on the selected infrastructure. OSTIA checks that these parameters are present or that defaults are correctly inplace;\todoMB{}{In che senso?} %%%\item $<<$Topology$>>$ specifies the topological construct being elicited for the analysed Storm application, as composed of the $<<$Bolt$>>$ and the $<<$Spout$>>$ meta-elements; %%%\item $<<$Grouping$>>$ contains restrictions on the possible groupings of the $<<$Bolt$>>$ and the $<<$Spout$>>$ meta-elements within the elicited topology. OSTIA checks these restrictions upon recovery and exporting of topologies;\todoMB{}{In che senso?} %%%\end{itemize} %%% %%%\begin{figure*} %%%\centering %%%% \begin{sideways} %%% \includegraphics[width=16cm]{Stormmm} %%%% \end{sideways} %%% \caption{The Storm Meta-Model, an overview.} %%% \label{stormmm} %%%\end{figure*} %%% %%%A complete overview of the details of this meta-model and the restrictions captured therein is beyond the scope of this paper - rather, the entire purpose of OSTIA is to hide their complexity: for example, notice the \emph{TopologyConfiguration} meta-class, where we deliberately selected 22 (about 10\% of the entire set) of parameters possibly configurable for running Storm topologies. Further detail on the Storm meta-model may be found on the full technical report describing our technology\footnote{\url{http://dice-h2020.eu/deliverables/D2.1}}. %%%%\comment{a description about the verification analyses and more details about the implementations should be added here} %%% %%%\subsubsection{Storm: A Formal Interpretation} %%%Model-checking can serve as a means to enact continuous architecting of Storm topologies. Topologies can undergo formal verification, for example, to assess temporal properties on their execution. 
%%%This section elaborates on the role of formal verification in OSTIA and describes the necessary background, modelling assumptions and model definition behind Storm topology verification. %%%In particular, we provide a non-deterministic model representing Storm topologies' behavior in terms of the delay connected to bolts' processing, spout input profile and node failures. Spout input profile is measured with rates of incoming tuples into the topology. %%%Verification in OSTIA is intended to discover possible design errors at design time which are caused by (i) under/over estimation of timing requirements of computational nodes or (ii) possible runtime node failures. %%%Therefore, in this context, we are interested in verifying properties like, for instance, the existence of an execution of the topology which guarantees queue-length boundedness even if failures occur with a certain delay. %%%Defining the formal model, requires the comprehension of %started by understanding and capturing %%%the behaviors of both spouts and bolts which, after choosing the level of abstraction of the model, allows us to abstract those behaviors accordingly, %in order %%%to formalize them as finite state machines. The purpose of this activity is defining the %possible %%%operations performed by nodes and their allowed orderings in a real implementation. %of such operations. %%%We then extend the model %by taking into account %%%considering the message buffers (or queues) and the quantity of tuples that are exchanged through the topology. %%%In addition, %to the correct ordering of the operations, we decided to %%%we introduce more specific temporal constraints %into the model, in order %%%to limit the time spent by the system in each state (or processing phase) and to elaborate the concept of \textit{rate}, intended as ``number of times an event is occurring every time unit''. %%%The formal modeling (see Section \ref{ver}) is based on real-time temporal logic, i.e., the topology behavior is defined through a temporal logic formula written in Constraint LTL over clocks (CLTLoc)~\cite{BRS15}. {\color{blue} \subsection{OSTIA Methodology} \label{sec:methodology} The OSTIA Methodology effectively combines two successful approaches commonly adopted software development. The first one is DevOps and the second one is Model-Driven Engineering. OSTIA can be adopted by both the Developers and Operators parts of the DevOps cycle that, together, contribute to the iterative developments cycle of software; and, in addition, it can be used to effectively enforce the model refinement that enables the shift from high-level abstract models to low-level refined ones. OSTIA takes part in the design process at the level of Developers as follows. Designers of applications can use OSTIA to model their application by means of an abstract modeling language, based on UML. The language allows them to design the application in terms of abstraction that model the computational nodes of the application and the data sources providing input data. Based on the adopted technology, that will be used for the implementation of the final artifact, the language offers suitable stereotypes modeling the relevant technology-dependent features and that enable the analysis of the application design by means of the OSTIA verification tool. This work focuses on two specific technologies and, therefore, the UML abstractions are only limited to those required to model Apache Storm applications and Hadoop applications. 
Moreover, on the Developers side, the designers can use OSTIA to iteratively refine the model of their application by running the automatic analysis on different application models, that are possibly instantiated with different parameter values (e.g., the number of workers in a node running a certain functionality of the Storm topology). On the other hand, OSTIA also participates to the DevOps cycle in the Operators side because it offers post-design analysis features. OSTIA, in fact, can be adopted by operators for the elicitation of the application architecture from its source code. In particular, a number of structural anti-pattern has been identified in this work as potential threats that might affect the performance of the application and even its correct behavior at runtime. OSTIA implements basic yet useful functionalities for static code analysis that can be used by designers and operators to discover possibly structural issues. The result of the analysis that OSTIA provides at this level is the application topology and the parts of the application that are likely to be a potential threat for the entire application. Combining the application topology with runtime information, that can be collected by standard monitoring framework, the designers can actually enforce a refinement iteration on their design, in addition to the one performed at design time, that is based on realistic information coming from the real deployment of the application. This step might turn out in a refactoring of the deployed design into a new refined solution that, in turn, can be verified with the OSTIA verification tool, deployed and later analyzed with the same OSTIA functionalities. Figure~\ref{fig:iterative-refinement} shows the refinement loop which is enabled by OSTIA. \begin{figure} \centering \includegraphics[scale=0.35,draft]{fig1} \caption{Iterative refinement support by OSTIA.} \label{fig:iterative-refinement} \end{figure} To make the OSTIA methodology a practice, the following activities reflected into the OSTIA tool. \begin{itemize} \item \textbf{Architecture elicitation} - the static analysis of the source code of the application extracts its topology and made it available for later analysis. \item \textbf{Structural anti-pattern identification} - standard algorithms for graph analysis (such as clustering) identify specific structures in the application topology that might lead to undesired behaviors. \item \textbf{Formal analysis} - model-checking of the annotated model of the application verifies the existence of executions that might burden the application runtime with an excessive workload. \end{itemize} \noindent The previous tools can be used in the following scenarios. \begin{itemize} \item \textbf{Architecture analysis}. A development team implements an application that has to satisfy certain requirements at runtime. OSTIA can be used to refine the application model before the implementation phase. \item \textbf{DevOps}. As part of a DevOps pipeline dedicated to data-intensive solutions, OSTIA can be used for instrumenting the continuous refactoring of the data-intensive application by studying the application structure and the underlying topology to improve their operational characteristics. \end{itemize} } %\subsection{OSTIA-Based Verification By-Design} %This section elaborates on the ways in which OSTIA supports continuous %architecting. First, we elaborate on the anti-patterns supported in %OSTIA. 
%Second, we elaborate on the algorithmic analyse-and-refactor actions that OSTIA can
%apply to topologies to provide alternative visualisation and improved topology structure. Third, we discuss how
%OSTIA suggests an alternative architecture to improve the system
%performance. Finally, we elaborate on how OSTIA can drive continuous
%improvement assisted by formal verification. All figures in these sections use a
%simple graph-like notation where nodes may be any topological element (e.g.,
%Spouts or Bolts in Apache Storm terms) while edges are directed data-flow
%connections.
\subsection{Topology Design Anti-Patterns Within OSTIA}\label{sec:anti-pattern}
This section elaborates on the anti-patterns we elicited (see Section \ref{ra}). These anti-patterns are encoded within OSTIA to allow for their detection during streaming-topology inference and analysis. Every pattern is described using a simple graph-like notation where \emph{spouts} are nodes that have outgoing edges only, whereas \emph{bolts} are nodes that can have both incoming and outgoing edges.
\subsubsection{Multi-Anchoring}
The Multi-Anchoring pattern is shown in Fig. \ref{fig:multi-anchoring1}. In order to guarantee fault-tolerant stream processing, tuples processed by bolts need to be anchored with the unique {\sf id} of the bolt and passed to multiple acknowledgers (``ackers'' for short) in the topology. In this way, ackers can keep track of tuples in the topology. Our practitioners agree that multiple ackers can cause considerable overhead and degrade the operational performance of the entire topology.
%\emph{\bf TODO: what is the consequence of these anti-patterns? How does OSTIA detect?}
%\emph{\bf Multi-anchoring is not supported at the moment. Besides, I am not sure it is an anti-patter but rather a design decision}
%\begin{figure}[H]
% \begin{center}
% \includegraphics[width=2.5cm]{multi-anchoring}
% \caption{Multi-anchoring.}
% \label{fig:multi-anchoring}
% \end{center}
%\end{figure}
\begin{figure}
\centering
\subfigure[Multi-anchoring.]{\includegraphics[width=3.5cm,draft]{fig2}}\caption{The multi-anchoring anti-pattern.}\label{fig:multi-anchoring1}
%\hspace{1cm}
\subfigure[Cycle-in.]{\includegraphics[width=2.25cm,draft]{fig3}}\caption{The cycle-in anti-pattern.}\label{fig:cycle1}
\end{figure}
\subsubsection{Cycle-in Topology}
The Cycle-in pattern is shown in Fig. \ref{fig:cycle1}. Technically, it is possible to have cycles in Storm topologies. An infinite cycle of processing would create an infinite tuple tree and make it impossible for Storm to ever acknowledge spout-emitted tuples. Therefore, cycles should be avoided, or the resulting tuple trees should be investigated further to make sure they terminate at some point and under a specified series of conditions (these conditions can be hard-coded in the bolt logic). The anti-pattern itself may lead to infrastructure overloading, which in turn incurs increased costs.
%\emph{\bf A topology is already an infinite stream of tuple, the problem could be the overloading of some machines}
%\emph{\bf At the cycle-detection is not supported (even if it is easy to implement)}
%\begin{figure}[H]
% \begin{center}
% \includegraphics[width=2cm]{cycle}
% \caption{Cycle-in Topology.}
% \label{fig:cycle}
% \end{center}
%\end{figure}
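As an illustration of how the cycle-in anti-pattern can be checked on an elicited topology graph, the sketch below is an illustrative re-implementation under our own assumptions (hypothetical node names, not OSTIA's actual code): it runs a depth-first search with the usual three-colour marking and reports whether a cycle exists.
\begin{lstlisting}[basicstyle=\normalfont\ttfamily\small,caption={Illustrative detection of the cycle-in anti-pattern on a topology graph.}]
# Illustrative sketch (not OSTIA's code): detecting the cycle-in anti-pattern
# on a topology represented as an adjacency list (node -> subscribing nodes).

from typing import Dict, List

def has_cycle(graph: Dict[str, List[str]]) -> bool:
    """Return True if the directed topology graph contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited, on current path, done
    colour = {node: WHITE for node in graph}

    def visit(node: str) -> bool:
        colour[node] = GREY
        for succ in graph.get(node, []):
            if colour.get(succ, WHITE) == GREY:   # back edge: cycle found
                return True
            if colour.get(succ, WHITE) == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

# Hypothetical topologies: in the first one, bolt "b2" feeds back into "b1".
cyclic = {"spout": ["b1"], "b1": ["b2"], "b2": ["b1"]}
acyclic = {"spout": ["b1"], "b1": ["b2"], "b2": []}
print(has_cycle(cyclic), has_cycle(acyclic))   # True False
\end{lstlisting}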
\subsubsection{Persistent Data}
The persistent data pattern is shown in Fig. \ref{fig:persistence}. This pattern covers the circumstance where two processing elements need to update the same entity in a storage layer; in that case, a consistency mechanism should be in place. OSTIA offers limited support for this feature, which we plan to look into more carefully in future work. More details on this support are discussed in the approach limitations section.
%\emph{\bf Ostia does not support this. BTW is this static analysis? if not, is it off-topic?}
\begin{figure}
\begin{center}
\includegraphics[width=5cm,draft]{fig4}
\caption{Concurrency management in case of Persistent Data circumstances.}
\label{fig:persistence}
\end{center}
\end{figure}
\subsubsection{Computation Funnel}
The computation funnel is shown in Fig. \ref{fig:funnel}. A computation funnel emerges when there is no path from a data source (spout) to the bolts that send tuples out of the topology, towards another topology, through a messaging framework or a storage layer. This circumstance should be dealt with since it may compromise the availability of results under the desired performance restrictions.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm,draft]{fig5}
\caption{Computation funnel.}
\label{fig:funnel}
\end{center}
\end{figure}
%\subsection{algorithmic analyse-and-refactor actions on Stream Topologies}\label{algo}
%
%This section elaborates on the algorithmic analyse-and-refactor actions supported by OSTIA using the common graph-like notation introduced previously. OSTIA currently supports two topology content analysis (see Sec. \ref{1} and \ref{2}) as well as two topology layout analyses (see Sec. \ref{3} and \ref{4}). Only a part of these analyses is currently implemented in OSTIA. We discuss approach limitations further in Sect. \ref{disc}.
%
%\subsubsection{Fan-in/Fan-out}\label{1}
%
%The Fan-in/Fan-out algorithmic analyse-and-refactor action is outlined in Fig. \ref{fig:fan}. For each element of the topology, fan-in is the number of incoming
%streams. Conversely, fan-out is the number outgoing streams. In the case of
%bolts, both in and out streams are internal to the topology. For Spouts,
%incoming streams are the data sources of the topology (e.g., message brokers,
%APIs, etc) which live outside of the topology.
%
%\begin{figure}
% \begin{center}
% \includegraphics[width=3cm]{fan-in-out}
% \caption{Fan-in fan-out in Stream topologies.}
% \label{fig:fan}
% \end{center}
%\end{figure}
%
%This algorithmic analyse-and-refactor action allows to visualise instances where fan-in and fan-out numbers are differing.
%
%\subsubsection{Topology cascading}\label{2}
%
%The Cascading algorithmic analyse-and-refactor action is outlined in Fig. \ref{fig:cascading}. By topology cascading, we mean connecting two different Storm topologies via a messaging framework (e.g., Apache Kafka~\cite{kafka}).
%%\footnote{\url{http://kafka.apache.org/}}).
%Although cascading may simplify the development of topologies by encouraging architecture elements' reuse especially for complex but procedural topologies, this circumstance may well raise the complexity of continuous architecting and may require separation of concerns \cite{soc}. For example, Fig. \ref{fig:cascading} outlines an instance in which two topologies are concatenated together by a message broker.
In this instance, formal verification may be applied on the left-hand side topology, which is more business-critical, while the right-hand side of the entire topology is improved by on-the-fly OSTIA-based analysis. Even though OSTIA support for this feature is still limited, we report it nonetheless since OSTIA was engineered to address multiple topologies at once. %%More details on this and similar limitations are discussed in Section \ref{lim}. %%\emph{\bf Ostia does not support this. I can't think of an easy way to implement it} % %\begin{figure} % \begin{center} % \includegraphics[width=6cm]{cascading} % \caption{cascading.} % \label{fig:cascading} % \end{center} %\end{figure} % %This algorithmic action allows to combine multiple cascading topologies.\todoMB{}{Non mi e' chiaro cosa faccia questa azione? E' visuale?} %REMOVED MOVED TO CASE-STUDY ANALYSIS %\subsubsection{Topology clustering}\label{3} %Topology clustering is outlined in Fig. \ref{fig:clustering}. Topology clustering implies identifying coupled processing elements (i.e., bolts and spouts) and cluster them together (e.g., by means of graph-based analysis) in a way that elements in a cluster have high cohesion and loose-coupling with elements in other clusters. Simple clustering or Social-Network Analysis mechanisms can be used to infer clusters. These clusters may require additional attention since they could turn out to become bottlenecks. Reasoning more deeply on clusters and their resolution may lead to establishing the Storm scheduling policy best-fitting with the application. We will elaborate on this in Section \ref{sec:performance-boosting}. %%\emph{\bf Does it relates with Storm scheduling?} % %\begin{figure} % \begin{center} % \includegraphics[width=6.5cm]{clustering} % \caption{clustering.} % \label{fig:clustering} % \end{center} %\end{figure} %REMOVED MOVED TO CASE-STUDY ANALYSIS %\subsubsection{Linearising a topology}\label{4} % %Topology linearisation is outlined in Fig. \ref{fig:linearizing}. Sorting the processing elements in a topology in a way that topology looks more linear, visually. This step ensures that visual investigation and evaluation of the structural complexity of the topology is possible by direct observation. It is sometimes essential to provide such a visualisation to evaluate how to refactor the topology as needed. %\begin{figure} % \begin{center} % \includegraphics[width=5cm]{linearizing} % \caption{linearising.} % \label{fig:linearizing} % \end{center} %\end{figure} % SECTION HEURISTICS REMOVED FOR SAFETY REASONS %%%\subsection{Performance Improvement Heuristics} %%%\label{sec:performance-boosting} %%%In this section, we elaborate on a specific case where algorithmic analyse-and-refactor actions improve the performance of the data-intensive application. More in particular, big data architectures typically need parameters tuning to achieve best %%%performance. For instance, in Storm developers have to specify the parallelism %%%level for each node, which is the number of processes instantiated for a %%%particular bolt or spout. OSTIA provides suggestions on how to change the %%%parallelism level of the nodes, using simple and fast heuristics together with %%%static analysis. %%% %%%After architects designed a Storm application, a scheduler instantiates the %%%topology on a cluster of machines. The default scheduler utilises a round-robin %%%strategy to fairly load the machines with the bolts and spouts. This a crucial %%%assumption for the heuristic used in OSTIA to perform well. 
There are several %%%proposals in the state of the art to change the default scheduler logic in order to boost the performance of the topologies \cite{Aniello2013Adaptive, R-Storm2015Peng}. However, many DIA users typically prefer the default scheduler, while having the opportunity to tune the parameters automatically behind the scenes. %%% %%%The OSTIA heuristic works as follows: A DIA architect runs OSTIA specifying the %%%number of machines used in the deployment and the number of instances of spouts %%%and bolts that can be spawned in each machine. OSTIA statically analyses the %%%topology and extracts the parallelism level for each component of the %%%topology. At this point, we sum of all instances need to be allocated and the %%%slots available on the machines ($machines * components\_for\_each\_machine$).\todoMB{}{Qui mi sfugge il concetto di slot.} %%% %%%OSTIA decides whether improvements are possible (i.e. \emph{slots %%% available} $>$ \emph{instances to be allocate}), and suggests changes to the %%%parallelism level to the nodes in order to improve the performance. The simplest %%%case occurs when the unallocated slots are enough to remove a machine from the %%%cluster, thus saving costs. %%% %%%Alternatively, OSTIA identifies a subset of nodes, called critical nodes, which %%%are important from a performance perspective. The critical nodes of a topology %%%are defined as the nodes with the highest \emph{weighted fan-in}. The %%%\emph{weighted fan-in} of a node \emph{N} is defined by Equation \ref{eq:wfi}. %%% %%%\begin{align} %%% \text{weighted fan-in(N)} = \frac{\sum_{X \rightarrow N \in Edges} parallelism(X)}{parallelism(N)} \label{eq:wfi} %%%\end{align} %%% %%%The critical nodes could be easily susceptible to overloading as their %%%parallelism level do not compensate the parallelism level of its %%%\emph{in-nodes}. Increasing the parallelism level gives the nodes more resources %%%to cope with high loads. %%% %%%For instance, let us take Figure \ref{topo1} as an example. There are 22 %%%components that need to be allocated.\todoMB{}{Forse e' meglio parlare di threads invece che components?} Suppose that our cluster is composed by 4 %%%machines and each machine fits 10 instances of components. OSTIA in this case %%%would suggest to simply remove one machine. Let us suppose that we have 3 %%%machines with 10 tasks each. At this point we have 30 slots available and 22 %%%components, therefore we have 8 slots available that can be used to increase the %%%performance. In order to decide the components to improves we identify the ones %%%with maximum \emph{weighted fan-in}. In the example nodes \emph{mediaExtractor} %%%and \emph{articleExtractor} with \emph{weighted fan-in} of 8. Finally, since we %%%have 8 free slots to share between two nodes\todoMB{}{Nodi, bolts?} we increase the parallelism level %%%of the two critical nodes of $8/2 = 4$, setting it from 1 to 5. %%% %%%The above heuristic approach concludes the support that OSTIA offers to quickly and continuously improving streaming topologies by means of algorithmic evaluations and analysis. Conversely, as previously stated we learned from industrial practice that the need might rise for more formal guarantees, e.g., concerning some key parts of a streaming topology with respect to key big-data properties such as queue-boundedness - i.e., a guarantee that the queue for a certain bolt will always be bounded - or queue-population - i.e., a guarantee that the queue never contains privacy sensitive objects. 
In similar scenarios, OSTIA offers the seamless capability of running complex and exhaustive formal verification over analysed topologies. The next section elaborates further on this key support offered by OSTIA. \subsection{DOT format for topology elicitation.} {\color{blue} As previously stated, the OSTIA tool is rigged to elicit and represent Big Data topologies using the ``*.dot" format; the format in question is a de-facto and de-iure graph description language. DOT graphs are typically files with the file extension gv or dot. Paraphrasing from Wikipedia, \emph{``Various programs can process DOT files. Some, such as dot, neato, twopi, circo, fdp, and sfdp, can read a DOT file and render it in graphical form. Others, such as gvpr, gc, acyclic, ccomps, sccmap, and tred, read DOT files and perform calculations on the represented graph. Finally, others, such as lefty, dotty, and grappa, provide an interactive interface [...]"}. A small excerpt of DOT code describing a graph with 4 nodes is the following: \begin{lstlisting}[basicstyle=\normalfont\ttfamily\small,tabsize=12,caption=DOT script describing an undirected graph N with four nodes.] graph N { n1 -- n2 -- n3; n2 -- n4; } \end{lstlisting} OSTIA uses the same approach as the aforementioned tools and instatiates the same design-patterns employed by the tools in question to enact formal-verification of data-intensive topologies. } % %\comment{PJ: Should we put this section into a separate section? I feel it is disconnected to the content of this section.} \subsection{OSTIA-Based Formal Verification}\label{ver}\label{verification} %\input{macros} \newcommand{\M}{\mathcal{M}} \newcommand{\timestr}{\mathcal{T}} \newcommand{\D}{\mathcal{D}} \newcommand{\C}{\mathcal{C}} %old, compatibility reasons \newcommand{\U}{\mathbf{U}} \newcommand{\Snc}{\mathbf{S}} \newcommand{\T}{\mathbf{T}} \newcommand{\R}{\mathbf{R}} \newcommand{\Nat}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Real}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} %old, compatibility reasons \newcommand{\X}[1]{\mathbf{X}\left(#1\right)} \newcommand{\Y}[1]{\mathbf{Y}\left(#1\right)} %\newcommand{\X}{\mathbf{X}} %\newcommand{\Y}{\mathbf{Y}} \newcommand{\Zed}{\mathbf{Z}} \newcommand{\Lng}{\mathscr {L}} \newcommand{\iFF}{\Leftrightarrow} \newcommand{\niFF}{\nLeftrightarrow} \newcommand{\SNC}{{\mathcal S}} \newcommand{\TRG}{{\mathcal T}} \newcommand{\zot}{$\mathds{Z}$ot} %old, compatibility reasons \newcommand{\G}[1]{\mathbf{G}\left(#1\right)} \newcommand{\F}[1]{\mathbf{F}\left(#1\right)} %\newcommand{\Q}{\mathbb{Q}} \newcommand{\triple}[3]{(#1, #2, #3)} \newcommand{\pair}[2]{(#1, #2)} \newcommand{\siff}{\Leftrightarrow} \newcommand{\A}{\mathcal{A}} \newcommand{\aX}{\mathrm{X}} \newcommand{\aY}{\mathrm{Y}} \newcommand{\x}{\mathbf{x}} \newcommand{\eqdef}{\stackrel{\mbox{\begin{tiny}def\end{tiny}}}{=}} % =def= \newcommand{\iFFdef}{\stackrel{\mbox{\begin{tiny}def\end{tiny}}}{\iFF}} % =def= \newcommand{\step}[1]{\xrightarrow{#1}} \newcommand{\pspace}{\textsc{PSpace}} \makeatletter \def\Eqlfill@{\arrowfill@\Relbar\Relbar\Relbar} \newcommand{\longmodels}[1][]{\,|\!\!\!\ext@arrow 0359\Eqlfill@{#1}} \makeatother \newcommand{\symodels}{\longmodels{\mbox{\it{\tiny sym}}}} \newcommand{\intervaLii}[2]{[#1,#2]} \newcommand{\intervaLie}[2]{[#1,#2)} \newcommand{\intervaLee}[2]{(#1,#2)} \newcommand{\interval}[2]{\langle #1,#2 \rangle} \newcommand{\set}[1]{\{ #1 \}} \newcommand{\tsys}[1]{\mathcal{S}(#1)} \newcommand{\lapp}[1]{\lfloor #1 \rfloor} \newcommand{\happ}[1]{\lceil #1 \rceil} 
\newcommand{\first}[2]{(H_{#1}\vee L_{#1}) \wedge(\neg(H_{#1}\vee L_{#1}) \Snc (#2))} \newcommand{\pname}[1]{\ensuremath{\textit{#1}}} \newcommand{\on}{\pname{on}} \newcommand{\off}{\pname{off}} \newcommand{\lon}{\pname{l}} \newcommand{\test}{\pname{test}} \newcommand{\resetc}{\pname{reset-c}} \newcommand{\turnoff}{\pname{turnoff}} \newcommand{\edge}[1]{\texttt{#1}} \newcommand{\enabled}[1]{\texttt{e}_{#1}} \newcommand{\visit}[1]{\mathit{visit}(#1)} \newcommand{\inv}[1]{\mathit{inv}(q_{#1})} \newcommand{\intg}[1]{\lfloor#1\rfloor} \newcommand{\fract}[1]{\mathit{frac(#1)}} %%%%%%%%%%%%%%% STORM MODEL COMMANDS \newcommand{\ori}{\mathtt{orig}} %commands with single parameter \newcommand{\p}[1]{\mathtt{process}_{#1}} \newcommand{\ta}[1]{\mathtt{take}_{#1}} \newcommand{\e}[1]{\mathtt{emit}_{#1}} \newcommand{\add}[1]{\mathtt{add}_{#1}} \newcommand{\f}[1]{\mathtt{fail}_{#1}} \newcommand{\buf}[1]{\mathtt{buffer}_{#1}} \newcommand{\startf}[1]{\mathtt{startFailure}_{#1}} \newcommand{\startid}[1]{\mathtt{startIdle}_{#1}} \newcommand{\id}[1]{\mathtt{idle}_{#1}} \newcommand{\cl}[1]{\mathtt{clock}_{#1}} \newcommand{\cltf}[1]{ \cl{to\f{#1}}} \newcommand{\ph}[1]{\mathtt{phase}_{#1}} %commands with two parameters (index, rate) \newcommand{\pr}[2]{\p{#1}(#2)} \newcommand{\tar}[2]{\ta{#1}(#2)} \newcommand{\er}[2]{\e{#1}(#2)} \newcommand{\addr}[2]{\add{#1}(#2)} \newcommand{\ra}[1]{r_{\add{#1}}} \newcommand{\rp}[1]{r_{\p{#1}}} \newcommand{\re}[1]{r_{\e{#1}}} \newcommand{\rt}[1]{r_{\ta{#1}}} \newcommand{\rf}[1]{r_{\mathtt{failure}_{#1}}} \newcommand{\rff}[2]{r_{\mathtt{failure}_{#1#2}}} \newcommand{\rr}[1]{r_{\mathtt{replay}_{#1}}} \newcommand{\reb}[1]{\bar{r}_{\e{#1}}} \newcommand{\rth}[1]{\hat{r}_{\ta{#1}}} \newcommand{\reh}[1]{\hat{r}_{\e{#1}}} \newcommand{\tph}[2]{t_{\ph{#1}}^{#2} } This section describes the formal modelling and verification employed in OSTIA. Our assumption for DIA refactoring is that architects eliciting and studying their topologies by means of OSTIA may want to continuously and incrementally improve it based on results from solid verification approaches. The approach, which was first proposed in \cite{MBER16}, relies on \textit{satisfiability checking}~\cite{MPS13}, an alternative approach to model-checking where, instead of an operational model (like automata or transition systems), the system (i.e., a topology in this context) is specified by a formula defining its executions over time and properties are verified by proving that the system logically entails them. %The logic we use is Constraint LTL over clocks CLTLoc is a real-time temporal logic and, in particular, a semantic restriction of Constraint LTL (CLTL)~\cite{DD07} allowing atomic formulae over $(\mathbb{R}, \set{<,=})$ where the arithmetical variables behave like clocks of Timed Automata (TA)~\cite{timed}. As for TA, clocks measures time delays between events: a clock $x$ measures the time elapsed since the last time when $x=0$ held, i.e., since the last ``reset'' of $x$. Clocks are interpreted over Reals and their value can be tested with respect to a positive integer value or reset to 0. % To analyse anomalous executions of Storm topologies which do not preserve the queue-length boundedness property for the nodes of the application, we consider CLTLoc with counters. Counters are discrete non-negative variables that are used in our model to represent the length of bolt queues over the time throughout the streaming processing realized by the application. 
Let $X$ be a finite set of clock variables $x$ over $\Real$, $Y$ be a finite set of variables over $\Nat$ and $AP$ be a finite set of atomic propositions $p$. CLTLoc formulae with counters are defined as follows: \begin{equation*}%\small \phi := \begin{gathered} p \mid x\sim c \mid y\sim c\mid \aX y\sim z\pm c \mid\phi \wedge \phi \mid \neg \phi \mid \\ \X{\phi} \mid \Y{\phi} %\mid \Zed\phi \mid \phi\U\phi \mid \phi\Snc\phi \end{gathered} \end{equation*} where $x \in X$, $y,z \in Y$, $c \in \Nat$ and $\sim \in \set{<,=}$, $\mathbf{X}$, $\mathbf{Y}$, $\U$ and $\Snc$ are the usual ``next'', ``previous'', ``until'' and ``since''. %The semantics of CLTLoc is defined with respect to $(\Real, \set{<,=})$ and $\pair{\Nat}{<}$, the latter representing positions in time. A \textit{model} is a pair $\pair{\pi}{\sigma}$, where $\sigma$ is a mapping associating every variable $x$ and position in $\Nat$ with value $\sigma(i,x)$ and $\pi$ is a mapping associating each position in $\Nat$ with subset of $AP$. The semantics of CLTLoc is defined as for LTL except for formulae $x\sim c$ and $\aX y \sim z\pm c$. %At position $i\in\Nat$, $ \pair{\pi}{\sigma}, i \models x\sim c \textbf{ iff } \sigma(i, x)\sim c$ and $\pair{\pi}{\sigma}, i \models \aX y\sim z \pm c \textbf{ iff } \sigma(i+1, z) \sim \sigma(i,z) \pm c$. Intuitively, formula $x\sim c$ states that the value of clock $x$ is $\sim$ than/to $c$ and formula $\aX y \sim z\pm c$ states that the next value of variable $y$ is $\sim$ to/than $z+c$. The standard technique to prove the satisfiability of CLTL and CLTLoc formulae is based on of B\"uchi automata \cite{DD07,BRS15} %the evidence has turned out that it may be rather expensive in practice, even in the case of LTL (the size of the automaton is exponential with respect to the size of the formula). but, for practical implementation, Bounded Satisfiability Checking (BSC)~\cite{MPS13} avoids the onerous construction of automata by means of a reduction to a decidable Satisfiability Modulo Theory (SMT) problem~\cite{BRS15}. %By unrolling the semantics of a formula for a finite number $k>0$ of steps, The outcome of a BSC problem is either an infinite ultimately periodic model or unsat. %\cite{BRS15} shows that BSC for CLTLoc is complete and that is reducible to a decidable Satisfiability Modulo Theory (SMT) problem. %A CLTLoc formula can be translated into the decidable theory of quantifier-free formulae with equality and uninterpreted functions combined with the theory of Reals over $(\Real,<)$. %, written QF-EUF$(\Real,<)$. CLTLoc allows the specification of non-deterministic models using temporal constraints wherein clock variables range over a dense domain and whose value is not abstracted. Clock variables represent, in the logical language and with the same precision, physical (dense) clocks implemented in real architectures. %They appear in formulae in the form $x \sim c$ to express a bound $c$ on the delay measured by clock $x$. Clocks are associated with specific events to measure time elapsing over the executions. As they are reset when the associated event occurs, in any moment, the clock value represents the time elapsed since the previous reset and corresponds to the elapsed time since the last occurrence of the event associated to it. We use such constraints to define, for instance, the time delay required to process tuples or between two node failures.\\ %Modeling topologies requires to express by formulae emitting rates which measure the number of tuples emitted by a spout node per time unit. 
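As a purely illustrative example (this formula is an assumption of ours and is not part of the topology model introduced below), a requirement stating that a bolt $j$ always emits less than $5$ time units after its last \textit{take} action can be expressed by resetting a clock $x_j$ whenever the take occurs and bounding its value whenever the emit occurs:
\begin{equation*}
\G{\ta{j} \Rightarrow x_j = 0} \;\wedge\; \G{\e{j} \Rightarrow x_j < 5},
\end{equation*}
where $\mathbf{G}$ denotes the standard ``globally'' operator, derivable from $\U$. Constraints of this shape are the building blocks of the topology model recapped next.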
%***TBC %\input{verification} %Verification techinques: %\begin{itemize} %\item Safety Verification %\item Performance analysis %\end{itemize} %A \textit{safety} property is intuitively defined in the formal verification context as a property stating that something ``bad'' will \textit{never} happen during execution.\cite{lamport1} %Cassical examples of safety properties are deadlock freedom and mutual exclusion, where the ``bad'' behaviour is respectively the deadlock occurrence and the simultaneous execution of a critical section. %One of the most important requirements for streaming systems is guaranteeing low latency while maintaining high throughput. %In a distributed --> Motivare questione delle code! % % %\subsection{Storm Formal Model} %\label{sec:storm-model} %To perform our verification tasks we defined a formal model expressed in CLTLoc with discrete variables. The resulting model is a non-deterministic infinite state system. % MOVED BEFORE %%%\subsubsection{OSTIA Models: a Formal Interpretation} %%%We started by understanding and capturing the behaviors of both spouts and bolts. %%%After choosing the level of abstraction of our model we simplified those behaviors accordingly, in order to formalize them as finite state machines. The purpose of this first activity was to define the possible operations and the allowed orderings of such operations. %%%We then extended the model by taking into account the message buffers (or queues) and the quantity of tuples that are exchanged through the topology. %%%In addition to the correct ordering of the operations, we decided to introduce more specific temporal constraints into the model, in order to limit the time spent by the system in each state (or processing phase) and to elaborate the concept of \textit{rate}, intended as ``number of times an event is occurring every time unit''.\\ %\subsubsection{Assumptions and level of abstraction} %We made several assumptions and abstractions while building the model: Building on top of the above framework, in \cite{MBER16} we provide a formal interpretation of the Storm (meta-)model which requires several abstractions and assumptions. %For example, some deployment details, such as the number of worker nodes and features of the underlying cluster, are abstracted away. %There is a single queuing layer: every bolt has a unique incoming queue and no sending queue, while the worker queues are not represented. In the same way, each bolt/spout has a single output stream. %Moreover, the content of messages is not relevant: all the tuples have the same fixed size and we represent only quantity of tuples moving through the system. 
\begin{itemize} \item key deployment details, e.g., the number of worker nodes and features of the underlying cluster, are abstracted away; \item each bolt/spout has a single output stream; \item %we simplified the message buffer system, assuming that there is a single queuing layer: every bolt has a unique incoming queue and no sending queue, while the worker queues are not represented; \item every operation is performed within minimum and maximum thresholds of time; \item %we do not take into account the content of the messages is not relevant: all the tuples have the same fixed size and we represent only quantity of tuples moving through the system; \end{itemize} %\subsubsection{Model Formalization} A Storm Topology is a directed graph $\mathbf{G} = \{ \mathbf{N}, Sub\}$ where the set of nodes $\mathbf{N} = \mathbf{S}\bigcup \mathbf{B}$ includes in the sets of spouts (\textbf{S}) and bolts (\textbf{B}) and %the set of edges $\mathbf{E} = \{ Sub_{i,j} | i \in \mathbf{B}, j \in \{\mathbf{S}\bigcup \mathbf{B}\} \}$ $Sub\subset\mathbf{N}\times\mathbf{N}$ defines how the nodes are connected each other via the subscription relation. Pair $(i,j)\in Sub$ indicates that ``bolt $i$ subscribes to the streams emitted by the spout/bolt $j$''. Spouts cannot subscribe to other nodes in the topology. Each bolt has a receive queue where the incoming tuples are collected before being read and processed. % by the node. The queues have infinite size and the level of occupation of each $j^{th}$ queue is described by the variable $q_j$. %\footnote{Spouts have no queues, by definition.} Spouts have no queues, and each spout can either \textit{emit} tuples into the topology or stay \emph{idle}. Each bolt can be in \emph{idle} state, in \emph{failure} state or in \emph{processing} state. While in the processing state, the bolt first reads tuples from its receive queue (\textit{take} action), then it performs its transformation (\textit{execute} action) and finally it \textit{emits} the output tuples in its output streams. \\ \begin{figure}[tb] \centering \includegraphics[width=0.5\linewidth,draft]{fig6} \caption{Finite state automaton describing bolt states.} \label{figure-fsa} \end{figure} %\begin{figure} % \centering % \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.5cm,semithick, every node/.style={scale=0.55}] % % \tikzstyle{every state}=[fill=white,text=black,minimum width={width("execute")+10pt}] % % % \node[state] (I) {$idle$}; % \node[state] (T) [above right of=I] {$take$}; % \node[state] (E) [right of=T] {$execute$}; % \node[state] (F) [below right of=I] {$fail$}; % \node[state] (EM) [right of=F] {$emit$}; % % \path (I) edge node {} (T) % edge node {} (F) % edge [loop above] node {} (I) % (E) edge [loop right] node {} (E) % edge node {} (EM) % (T) edge node {} (E) % edge node {} (F) % (EM) edge node {} (I) % edge node {} (F) % (F) edge [loop below] node {} (F); % edge [bend right] node {} (I); % \end{tikzpicture} % \caption{Finite state automaton describing bolt states.} % \label{figure-fsa} %\end{figure} %To give an idea about how the model is formalized, An excerpt of the full model designed in \cite{MBER16} is shown in Fig. \ref{figure-fsa}. We provide, as an example, one of the formulae defining the processing state. 
Formula \ref{formula:1} can be read as \textit{``for all bolts: if a bolt j is processing tuples, then it has been processing tuples since it took those tuples from the queue (or since the origin of the events), and it will keep processing those tuples until it either emits them or fails. Moreover, the bolt is not in a failure state''.}
\begin{align}
\small
%
\bigwedge_{ i \in \mathbf{B} } \left(
\begin{array}{l}
\p{i} \Rightarrow \\
\p{i} \, \Snc \, ( \ta{i} \lor (\ori \land \p{i})) \land \\
\p{i} \, \U \, (\e{i} \lor \f{i}) \land \lnot \f{i}
\end{array}
\right) \label{formula:1}
%
\end{align}
The number of tuples emitted by a bolt depends on the number of incoming tuples. The ratio $\frac{\#output\_tuples}{\#input\_tuples}$
%is used to
expresses the ``kind of function'' performed by the bolt and is given as a configuration parameter. All the emitted tuples are then added to the receive queues of the bolts subscribing to the emitting nodes. In the same way, whenever a bolt reads tuples from the queue, the number of elements in the queue decreases. To this end, formula \ref{formula:2} imposes that \textit{``if a bolt takes elements from its queue, the number of queued elements in the next time instant will be equal to the current number of elements plus the quantity of tuples being added (emitted) from other connected nodes minus the quantity of tuples being read''.}
\begin{align}\small
%
\bigwedge_{ \begin{subarray}{c} \,j \in \mathbf{B} \end{subarray} } &( \ta{j} \Rightarrow (\aX q_j = q_j + \ra{j} - \rt{j} )) \label{formula:2}
%
\end{align}
These functional constraints are fixed for all the nodes and they are not configurable.
%What is configurable and can be tuned changing the parameters of the model is everything concerning
The structure of the topology, the parallelism level of each node, the bolt function and the non-functional requirements (for example, the time needed by a bolt to process a tuple, the minimum and maximum time between failures, and the spout emitting rate) are the configurable parameters of the model. Currently, the verification tool accepts
%as configuration format
a JSON file containing all the configuration parameters. OSTIA supports this format and is able to extract a partial set of these parameters via static code analysis, and an almost complete set after monitoring a short run of the system. The user can complete the JSON file by adding some verification-specific settings.
{\color{blue}
\subsection{JSON format for verification.}
Listing~\ref{lst:json-format} shows an excerpt of a JSON script describing a topology including two spouts, called $\mathtt{S}_1$ and $\mathtt{S}_2$, and three bolts, called $\mathtt{B}_1$, $\mathtt{B}_2$ and $\mathtt{B}_3$. Spouts and bolts are modeled by means of a number of parameters that represent an abstraction of their (non-functional) behavior at runtime. The JSON format is a readable means of capturing all the information required to run the verification; this information is classified into three distinct groups, and a list of the main ones is included hereafter.
\begin{itemize}
\item Topology-related settings:
\begin{itemize}
\item list of spouts:
\begin{itemize}
\item \texttt{emit\_rate}: spout average tuple emitting rate.
\end{itemize}
\item list of bolts:
\begin{itemize}
\item \texttt{subs}: the list of all the nodes in the topology that send tuples to the bolt.
\item \texttt{parallelism}: level of parallelism chosen for the bolt.
This value can be extracted from the code implementing Storm topology or set at design time. \item \texttt{alpha}: average processing time for the single tuple. \item \texttt{sigma}: ration between number of output tuples and number of input tuples. This value is an abstraction of the functionality carried out by the bolt: values smaller than one model filtering functions whereas value greater than one model other generic function on input tuples. %\item \texttt{min\_ttf}: minimum time to failure \end{itemize} \item structure of the topology, expressed through the combination of the subscription lists (``\texttt{subs}'') of all the bolts composing the topology. \item \texttt{queue\_threshold}: the maximum level of occupancy that should not be exceeded by any queue. This value is extracted from the code implementing Storm topology or set at design time. \item \texttt{max\_idle\_time}: the maximum time for a bolt to be inactive. \end{itemize} \item Verification-related settings: the information in this section does not model the topology itself but actually relates to the analysis that is run on the topology. \begin{itemize} \item \texttt{num\_steps}: being the verification engine implemented according to the bounded model-checking approach, the value specifies the number of discrete time instants to be explored in the verification phase. \item \texttt{periodic\_queues}: the list of bolts whose queue size is analyzed. The verification procedure determines the existence of a system execution that leads to and increasing queue size for the bolts specified in the list. \item \texttt{plugin}: underlying model-checker to be used. \end{itemize} \end{itemize} \begin{lstlisting}[basicstyle=\normalfont\ttfamily\small,tabsize=12,caption=JSON script describing a simple topology. Dots are used as abbreviations.] { "app_name": "Simple Topology", "description": "", "version": "0.1", "topology":{ "spouts":[ {"id":"S1", "avg_emit_rate":2.0}, {"id":"S2", "avg_emit_rate":1.0} ], "bolts":[ {"id": "B1", ...}, {"id": "B2", "subs": ["S1", "S2"], "alpha": 5.0, "sigma": 0.5, "min_ttf": 1000, "parallelism": 10}, {"id": "B3", ...} ], "max_idle_time": 0.01, "queue_threshold": 2000 }, "verification_params":{ "plugin" : ..., "max_time" : ..., "num_steps": ..., "periodic_queues":["B1"]} } \end{lstlisting}\label{lst:json-format} } \section{Results} \label{eval} %\input{eval} %\textbf{@Marcello,Francesco: we should also probably elaborate on the kind of verification technique we are using and how that can help in evaluating the topology.. remember here we do not have the DICE restriction so we can mention any kind of analysis that it would be possible to run, also analyses that are currently in the hands of other DICE partners!!} %\begin{itemize} %\item we can use the ATC case study as much as we want - that yields already three topologies that we can infer %\item ATC has agreed that we can mention their role in this exercise, I also showed them the topology that we elicited basically with OSTIA and they already made considerations on how to improve it %\item in the evaluation we should also comment on how OSTIA can help you in visualizing the application topology that you may be considering to use by reusing a big-data application for something else... 
visualising the application topology and analysing it may allow you to improve it while you are using it as a starting point for your application %\item another application that we can use is the one that NETF is considering for their own scenario, KILLRWEATHER - \url{https://github.com/killrweather/killrweather} %\item any additional case that we can run? %\item what do the results show? do we have a way to quickly quantify the time that is saved by using this approach? e.g., the time that is saved in setting up and running the infrastructure and how much would that time saved have costed these could be valuable evaluation insights %\end{itemize} We evaluated OSTIA through qualitative evaluation and case-study research featuring an open-/closed-source industrial case study (see Section \ref{cs}) and two open-source case studies (see Section \ref{os}) on which we also applied OSTIA-based formal verification and refactoring (see Section \ref{ver}). The objective of the evaluation was two-fold: \begin{enumerate} \item[OBJ.1] Evaluate the occurrence of anti-patterns evidenced by our practitioners in both open- and closed-source DIAs; \item[OBJ.2] understand whether OSTIA-based analyses aid in refactoring towards formally-verified DIA topologies \emph{by-design}; \end{enumerate} \subsection{Establishing Anti-Patterns Occurrence with Case-Study Research: 3 Cases from Industry}\label{cs} %As previously introduced in Section \ref{ra}, OSTIA was evaluated using 3 medium/large topologies (11+ elements) part of the SocialSensor App. Our industrial partner is having performance and availability outages connected to currently unknown circumstances. Therefore, the objective of our evaluation for OSTIA was twofold: (a) allow our industrial partner to enact architecture refactoring of their application with the goal of discovering any patterns or hotspots that may be requiring further architectural reasoning; (b) understand whether OSTIA provided valuable feedback helping designers in tuning their application through a design-and-refactor loop.%to endure the continuous architecting exercise. In addition to formal verification, specific algorithms for graph analysis can be integrated in OSTIA to offer a deeper insight of the applications. For instance, the industrial case study has been analyzed with two algorithms to identify linear sequences of nodes and clusters in the topology graph. Topology linearisation results in sorting the processing elements in a topology in a way that topology looks more linear, visually. This step ensures that visual investigation and evaluation of the structural complexity of the topology is possible by direct observation. Topology clustering implies identifying coupled processing elements (i.e., bolts and spouts) and cluster them together (e.g., by means of graph-based analysis) in a way that elements in a cluster have high cohesion and loose-coupling with elements in other clusters. Simple clustering or Social-Network Analysis mechanisms can be used to infer clusters. Clusters may require, in general, additional attention since they could turn out to become bottlenecks. Reasoning more deeply on clusters and their resolution may lead to establishing the Storm scheduling policy best-fitting with the application. OSTIA standard output\footnote{Output of OSTIA analyses is not shown fully for the sake of space.} for the smallest of the three SocialSensor topologies, namely the ``focused-crawler" topology, is outlined in Fig. \ref{topo1}. 
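As an illustration of the linearisation step described above, the sketch below (not OSTIA's actual implementation; node names are hypothetical) computes a topological layering of the elicited graph, assigning to each node the length of the longest spout-to-node path; sorting nodes by layer yields a left-to-right ordering similar in spirit to the one shown in Fig. \ref{topo1}.
\begin{lstlisting}[basicstyle=\normalfont\ttfamily\small,caption={Illustrative topological layering used to linearise a topology.}]
# Illustrative sketch (not OSTIA's implementation): linearising a topology by
# topological layering; each node gets the length of the longest spout-to-node
# path, which induces a left-to-right drawing order. Assumes an acyclic graph
# (see the cycle-in check discussed earlier).

from collections import deque
from typing import Dict, List

def layers(graph: Dict[str, List[str]]) -> Dict[str, int]:
    indegree = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] = indegree.get(t, 0) + 1
    level = {n: 0 for n in indegree}
    queue = deque(n for n, d in indegree.items() if d == 0)   # spouts first
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            level[succ] = max(level[succ], level[node] + 1)
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    return level

# Hypothetical excerpt of a crawler-like topology.
topo = {"urlSpout": ["fetcher"], "fetcher": ["parser"],
        "parser": ["indexer", "status"], "indexer": [], "status": []}
print(sorted(layers(topo).items(), key=lambda kv: kv[1]))
\end{lstlisting}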
\begin{figure*}
\begin{center}
\includegraphics[width=11cm,draft]{fig7}
\caption{SocialSensor App, OSTIA sample output partially linearised (top) and cascaded (bottom left and right).}
\label{topo1}
\end{center}
\end{figure*}

Combining this information with runtime data (i.e., latency times), our industrial partner observed that the ``expander'' bolt needed additional architectural reasoning. In particular, the bolt in question concentrates a lot of the topology's processing on its queue, greatly hampering the topology's scalability. In our partner's scenario, the limited scalability was blocking the extension of the topology in question with more data sources and sinks. In addition, the partner welcomed the idea of using OSTIA as a mechanism to enact the refactoring of the topology in question as part of the needed architectural reasoning.

OSTIA assisted our client in understanding that the topological structure of the SocialSensor app would be a better fit for batch processing rather than streaming, since the partner independently observed that too many database-output spouts and bolts were used in their versions of the SocialSensor topologies. In so doing, the partner is now using OSTIA to drive the refactoring exercise towards the Hadoop MapReduce~\cite{hadoop} framework for batch processing.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,draft]{fig8}
\caption{Industrial case-study, a refactored architecture.}
\label{atc}
\end{center}
\end{figure}

As a follow-up to our analysis, our partner is refactoring its high-level software architecture by adopting a lambda-like software architecture style \cite{lambda} (see Fig. \ref{atc}), which includes the SocialSensor App (top of Fig. \ref{atc}) as well as several additional computation components. In summary, the refactoring resulting from OSTIA-based analysis deferred part of the computations originally intended for the expander bolt within the SocialSensor app to additional ad-hoc Hadoop MapReduce jobs with similar purposes and intents (e.g., the EntityExtractor compute node in Fig. \ref{atc}), batched outside the topological processing in Storm (see Fig. \ref{atc})\footnote{Several other overburdened topological elements were refactored but are omitted here due to industrial secrecy.}. Our qualitative evaluation of the refactored architecture by means of several interviews and workshops revealed very encouraging results.

\subsection{Establishing Anti-Pattern Occurrence with Case-Study Research: Two Cases from Open-Source}\label{os}

To confirm the usefulness and capacity of OSTIA to enact a refactoring cycle, we applied it to understanding first, and then attempting improvements of, two open-source applications, namely the previously introduced DigitalPebble~\cite{digitalpebble} and StormCV~\cite{stormCV} applications. Figures \ref{dp} and \ref{scv} outline standard OSTIA output for the two applications. Note that we did not have any prior knowledge concerning the two applications in question and we merely ran OSTIA on the applications' codebase dumps on our own experimental machine. OSTIA produces its output in mere seconds for small to medium-sized topologies (e.g., around 25 nodes).

\begin{figure}
\begin{center}
\includegraphics[width=4cm,draft]{fig9}
\caption{StormCV topology (linearised).}
\label{scv}
\end{center}
\end{figure}

The OSTIA output aided us as follows:
(a) the output summarised in Fig.~\ref{dp} allowed us to immediately grasp the functional behavior of the DigitalPebble and StormCV topologies, letting us interpret their operations correctly before reading long documentation or inspecting the code; (b) OSTIA aided us in visually interpreting the complexity of the applications at hand; (c) OSTIA allowed us to spot several anti-patterns in the DigitalPebble Storm application around the ``sitemap'' and ``parse'' bolts, namely, a multiple cascading instance of the multi-anchoring pattern and a persistent-data pattern. Finally, OSTIA aided in the identification of the computational funnel anti-pattern around the ``status'' bolt closing the DigitalPebble topology. With this evaluation at hand, developers in the respective communities of DigitalPebble and StormCV could refactor their topologies, e.g., aided by OSTIA-based formal verification that proves the negative effects of said anti-patterns.

\begin{framed}
\textbf{Summary for Obj 1.} The anti-patterns we elicited thanks to focus groups in industry indeed have a recurrent manifestation in both industrial and open-source applications. OSTIA-based analysis can support reasoning about, and potential refactoring of, the proposed anti-patterns.
\end{framed}

\subsection{OSTIA-based Formal Verification and Refactoring}\label{ver}

In this section we outline the results from OSTIA-based formal verification applied to (one of) the topologies used by our industrial partner in practice. The results provide valuable insights for improving these topologies through refactoring.

\begin{figure}
\begin{center}
\includegraphics[width=6cm,draft]{fig10}
\caption{DigitalPebble topology.}
\label{dp}
\end{center}
\end{figure}

The formal analysis of the ``focused-crawler'' topology confirmed the critical role of the ``expander'' bolt, previously noticed with the aid of OSTIA visual output. It emerged from the output traces that there exists an execution of the system, even without failures, where the queue occupation level of the bolt is unbounded. Figure~\ref{verif-trace} shows how the tool constructed a periodic model in which a suffix (highlighted by the gray background) of a finite sequence of events is repeated infinitely many times after a prefix (on a white background). After ensuring that the trace is not a spurious model, we concluded that the expander queue, having an increasing trend in the suffix, is unbounded. As shown in the output trace at the bottom of Fig.~\ref{verif-trace}, further analyses on the DigitalPebble use case revealed that the same problem affects the ``status'' bolt of the DigitalPebble topology. This finding from the formal verification tool reinforced the outcome of the anti-pattern module of OSTIA, showing how the presence of the computational funnel anti-pattern could lead to an unbounded growth in the queue of the ``status'' bolt. These types of heavyweight and powerful analyses are made easier by OSTIA in that our tool provides ready-made analyzable models of the topologies, making the formal verification layer almost invisible (other than the manual setting and tuning of operational parameters for verification).
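For intuition only, the sketch below mimics the kind of unbounded-queue conclusion discussed above using a drastically simplified discrete-time approximation built from the JSON parameters introduced earlier (\texttt{avg\_emit\_rate}, \texttt{alpha}, \texttt{parallelism}, \texttt{num\_steps}, \texttt{queue\_threshold}); it is not the bounded model checking that OSTIA actually relies on, and the derived rates are assumptions taken from the toy listing (bolt B2 receives $2.0+1.0$ tuples per time unit from S1 and S2 but can serve only $\mathtt{parallelism}/\mathtt{alpha}=2$ per unit).

\begin{lstlisting}[language=Python,basicstyle=\normalfont\ttfamily\small]
# Simplified, illustrative queue-growth check: NOT the bounded model
# checking used by OSTIA, just a discrete-time approximation of one bolt.

def queue_trend(avg_in_rate, alpha, parallelism, num_steps, queue_threshold):
    """Queue occupancy of a single bolt over num_steps time instants.

    avg_in_rate: tuples per time unit arriving from the subscribed nodes
    alpha:       average processing time of a single tuple
    parallelism: number of parallel executors of the bolt
    """
    service_rate = parallelism / alpha  # tuples the bolt can absorb per unit
    queue, history = 0.0, []
    for _ in range(num_steps):
        queue = max(0.0, queue + avg_in_rate - service_rate)
        history.append(queue)
        if queue > queue_threshold:     # threshold from the JSON descriptor
            break
    return history

# Assumed values loosely taken from the listing above.
trace = queue_trend(avg_in_rate=3.0, alpha=5.0, parallelism=10,
                    num_steps=50, queue_threshold=2000)
growing = all(b >= a for a, b in zip(trace, trace[1:]))
print("monotonically growing queue:", growing, "final occupancy:", trace[-1])
\end{lstlisting}

In this toy setting the queue grows by one tuple per instant, which is the fluid-model analogue of the periodic, ever-increasing suffix found by the verification tool.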
\begin{figure}
\centering
\includegraphics[width=1\linewidth,draft]{fig11}
\caption{OSTIA-based formal verification output traces showing the evolution of the two bolts over time. Queue trends are displayed as a solid black line. Dashed lines show the processing activity of the bolts, while the other lines illustrate the incoming tuples from the subscribed nodes (\texttt{emit} events).}
\label{verif-trace}
\end{figure}

\begin{framed}
\textbf{Summary for Obj 2.} OSTIA-based formal verification effectively evaluates the safety of DIAs focusing on their design-time representation; further investigation of the generalisability of this approach towards runtime is needed to scope the extent to which OSTIA offers support for continuous evolution.
\end{framed}

\section{Discussion}
\label{disc}

This section discusses some findings and the limitations of OSTIA.

\subsection{Findings and Observations}

OSTIA represents one humble but significant step towards practically supporting the necessities behind developing and maintaining high-quality big-data application architectures. In designing and developing OSTIA we encountered a number of insights that may aid application refactoring.

First, we found (and observed in industrial practice) that it is common to develop a ``runnable'' architecture topology that will undergo refactoring even after the deployment phase and while the application is running. This is mostly the case with big-data applications that are developed stemming from previously existing topologies or applications. OSTIA embodies this way of thinking by supporting reverse-engineering and recovery of deployed topologies for their incremental improvement. Such improvement is helpful because these topologies run continuously on rented clusters, and refactoring can help boost the application so that it requires fewer resources and lowers the cost of the rented clusters. Although we did not carry out extensive qualitative or quantitative evaluation of OSTIA in this regard, we are planning additional industrial experiments for future work with the goal of increasing OSTIA's usability and practical quality.

Second, big-data application design is an extremely young and emerging field for which not many software design patterns have been discovered yet. The (anti-)patterns and approaches currently hardcoded into OSTIA are inherited from related fields, e.g., pattern- and cluster-based graph analysis. Nevertheless, OSTIA may also be used to investigate the existence of recurrent and effective design solutions (i.e., design patterns) for the benefit of big-data application design. We are improving OSTIA in this regard by experimenting on two fronts: (a) re-designing and extending the facilities with which OSTIA supports anti-pattern detection; (b) running OSTIA on multiple big-data applications stemming from multiple technologies beyond Storm (e.g., Apache Spark, Hadoop MapReduce, etc.) with the purpose of finding recurrent patterns. A similar approach may feature OSTIA as part of architecture trade-off analysis campaigns \cite{atam}.

Third, a step which is currently under-supported during big-data application design is devising an algorithmic breakdown of a workflow into an efficient topology.
Conversely, OSTIA does support the linearisation and combination of multiple topologies, e.g., into a cascade. Cascading and similar super-structures may be an interesting investigation venue since they may reveal more efficient styles for big-data architectures beyond styles such as the Lambda Architecture \cite{lambda} and Microservices \cite{balalaie2016microservices}. OSTIA may aid in this investigation by allowing the interactive and incremental improvement of multiple (combinations of) topologies together.

\subsection{Approach Limitations and Threats to Validity}\label{lim}

Although OSTIA shows promise both conceptually and as a practical tool, it has several limitations.

First of all, OSTIA supports only a limited set of DIA middleware technologies. Multiple other big-data frameworks, such as Apache Spark and Samza, exist to support both streaming and batch processing.

Second, OSTIA only allows recovering and evaluating previously existing topologies; its usage is therefore limited to design-improvement and refactoring phases rather than initial design. Although this limitation may inhibit practitioners from using our technology, the (anti-)patterns and algorithmic approaches elaborated in this paper help designers and implementors to develop the reasonably good-quality and ``quick'' topologies upon which to use OSTIA for continuous improvement.

Third, OSTIA does offer essential insights to aid deployment as well (e.g., separating or \emph{clustering} complex portions of a topology so that they may run on dedicated infrastructure) and therefore the tool may serve the additional purpose of aiding deployment design. However, our tool was not designed to be used as a system that aids deployment planning and infrastructure design. Further research should be invested into combining on-the-fly technology such as OSTIA with more powerful solvers that determine infrastructure configuration details and similar technological tuning, e.g., the works by Peng et al. \cite{PengGWRYC14} and similar.

Fourth, although we were able to discover a number of recurrent anti-patterns to be applied during OSTIA analysis, we were not able to implement all of them in a manner that allows spotting both the anti-pattern and any problems connected with it. For example, detecting the ``Cycle-in topology'' anti-pattern is already possible; however, OSTIA would not allow designers to understand the consequences of the anti-pattern, i.e., where in the infrastructure the cycles cause trouble. Also, there are several features that are currently under implementation but not yet released within the OSTIA codebase, for example, the ``Persistent Data'' and the ``Topology Cascading'' features.

In the future we plan to tackle the above limitations, furthering our understanding of streaming design as well as the support OSTIA offers to designers during the refactoring process.
\section{Related Work}
\label{rw}

The work behind OSTIA stems from the EU H2020 project DICE~\cite{dice2020}, where we are investigating the use of model-driven facilities to support the design and quality enhancement of big data applications. Much like the DICE effort, the IBM Stream Processing Language (SPL) initiative \cite{ibmspl} provides an implementation language specific to programming stream management (e.g., Storm jobs) and related reactive systems. In addition, there are several works close to OSTIA in terms of their foundations and type of support, e.g., works focusing on distilling and analysing big data topologies \emph{by-design}~\cite{SNASEL2017286}, as also highlighted in recent research by Kalantari et al. \cite{Kalantari2017}.

First, from a non-functional perspective, much literature discusses quality analyses of Big Data topologies, e.g., from a performance~\cite{perfbd} or reliability point of view \cite{bigdatareliab}. Existing works use complex math-based approaches to evaluate a number of big data architectures, their structure, and their general configuration. However, these approaches do not suggest any architecture refactorings. OSTIA, in contrast, automatically elicits a Storm topology and analyses it against a number of consistency constraints that make the topology consistent with the framework. To the best of our knowledge, no such tool exists to date. Furthermore, as highlighted by Olshannikova et al. \cite{Olshannikova2015}, the few existing works on big data processes and their visualization point out a considerable shortcoming in tools and technologies to visualize and interact with data-intensive models at runtime.

Second, from a modelling perspective, approaches such as StormGen~\cite{stormgen} offer means to develop Storm topologies in a model-driven fashion using a combination of generative techniques based on Xtext and heavyweight (meta-)modelling based on EMF, the standard Eclipse Modelling Framework. Although the first of its kind, StormGen merely allows the specification of a Storm topology, without applying any consistency checks or offering the possibility to \emph{recover} said topology once it has been developed. By means of OSTIA, designers can work on refining their Storm topologies, e.g., as a consequence of verification or failed checks through OSTIA. Tools such as StormGen can be used to assist preliminary development of quick-and-dirty topologies.
Third, from a verification perspective, to the best of our knowledge, this represents the first attempt to build a formal model representing Storm topologies, and the first attempt at making a configurable model aimed at running verification tasks for non-functional properties of big data applications. While some works concentrate on exploiting big data technologies to speed up verification tasks~\cite{camilli2014}, others focus on the formalization of a specific framework but remain application-independent; their goal is rather to verify properties of the framework itself, such as reliability and load balancing~\cite{dicomputational}, or the validity of the messaging flow in MapReduce~\cite{yang2010formalizing}.

\section{Conclusion}
\label{conc}

This paper proposes an approach allowing designers and developers to perform analysis of big-data applications by means of code analysis and formal verification techniques. OSTIA provides support to both in the following sense: it helps designers and developers by recovering the architectural topology on-the-fly from the application code and by assisting them in (a) reasoning on the topological structure and how to refine it, and (b) exporting the topological structure consistently with the restrictions of their reference development framework so that further analysis (e.g., formal verification) may ensue. In addition, while performing on-the-fly architecture recovery, the analysis focuses on checking compliance with essential consistency rules specific to the targeted big data frameworks. Finally, (c) OSTIA allows designers to check whether the recovered topologies contain occurrences of key anti-patterns. By running a case-study with partner organizations, we observed that OSTIA assists designers and developers in establishing and continuously improving the quality of topologies behind their big data applications.
OSTIA can be easily extended to provide more refined tools for the analysis of data-intensive applications, as it is general in its approach and modular with respect to the definition of (i) the anti-patterns to be considered and (ii) the formal analysis approaches and the application modeling to be adopted. For this reason, in addition to the practical evidence observed, we believe that OSTIA can be considered a reference point in the development of data-intensive applications. This motivates us to further elaborate the anti-patterns that may emerge across big data topologies, exploiting graph-analysis techniques inherited from social-network analysis. Also, we plan to expand OSTIA to support technologies beyond the most common application framework for streaming, i.e., Storm, and, finally, to further evaluate OSTIA through empirical evaluation.

\section{List of abbreviations}
All the abbreviations occurring in this work are listed here in order of appearance.
\begin{itemize}
\item OSTIA - Ordinary Static Topology Inference Analysis.
\item DIA - Data-Intensive Application.
\item CIO - Chief Information Officer.
\item DAG - Directed Acyclic Graph.
\item WICSA - Working IEEE/IFIP Conference on Software Architecture.
\item LTL - Linear Temporal Logic.
\item CLTLoc - Constraint LTL over clocks.
\item UML - Unified Modeling Language.
\item DICE - Developing Data-Intensive Cloud Applications with Iterative Quality Enhancement.
\item API - Application Program Interface.
\item ML - Machine Learning.
\item JSON - JavaScript Object Notation.
\item CSV - Comma-Separated Values.
\item XMI - XML Metadata Interchange (XML - Extensible Markup Language).
\item CLTL - Constraint LTL.
\item TA - Timed Automata.
\item SMT - Satisfiability Modulo Theories.
\item EMF - Eclipse Modeling Framework.
\end{itemize}

\section{Declarations}
\textbf{Availability of data and material}.
\begin{itemize}
\item The datasets generated and/or analysed during the current study are available in the GitHub repository, \url{https://github.com/maelstromdat/OSTIA}.
\end{itemize}
\medskip
\noindent
\textbf{Competing interests}. The authors declare that they have no competing interests.
\medskip
\noindent
\textbf{Funding}. The work is supported by the European Commission grant no. 0421 (Interreg ICT), Werkinzicht and the European Commission grant no. 787061 (H2020), ANITA.
\medskip
\noindent
\textbf{Authors' contributions}. All the authors equally contributed to all the sections of the paper.
\medskip
\noindent
\textbf{Acknowledgements}. The authors kindly acknowledge all the people who supported the ideas that allowed the creation of OSTIA.

\bibliographystyle{spmpsci}
\bibliography{ostia}

\section{Figure titles and legends}
\listoffigures

\end{document}
{ "alphanum_fraction": 0.777121465, "avg_line_length": 74.6106719368, "ext": "tex", "hexsha": "d4231bc319d90f49a6e16a8daaecfe5c4d68f245", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d4cd350b34e14db816f2669833387d676d3d80ed", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "maelstromdat/OSTIA", "max_forks_repo_path": "SpringerOpen Submission/ostia.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d4cd350b34e14db816f2669833387d676d3d80ed", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "maelstromdat/OSTIA", "max_issues_repo_path": "SpringerOpen Submission/ostia.tex", "max_line_length": 1291, "max_stars_count": 3, "max_stars_repo_head_hexsha": "d4cd350b34e14db816f2669833387d676d3d80ed", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maelstromdat/OSTIA", "max_stars_repo_path": "SpringerOpen Submission/ostia.tex", "max_stars_repo_stars_event_max_datetime": "2015-12-11T21:03:30.000Z", "max_stars_repo_stars_event_min_datetime": "2015-11-09T14:44:41.000Z", "num_tokens": 26723, "size": 113259 }
\documentclass{article}
\usepackage[utf8]{inputenc}

\title{test123}
\author{rodneyfarrow}
\date{January 2019}

\begin{document}

\maketitle

\section{Introduction}
Studying economics was a second choice in terms of degree choice; however, it has turned out to be as enlightening as I thought it would be. Economics has helped me develop a new perspective on life that I otherwise would not have developed. Data science is a new interest for me, specifically artificial intelligence, machine learning, and how these may tie into quantum computing in the future. I'm unable to fathom the full scope of possibilities of data science; however, I understand the usefulness of the study of big data, and I hope to one day help solve major issues using the techniques learned in this class and other techniques picked up along the way.

I am interested in a few different topics that may be suitable project candidates, such as: how state marijuana legalization might affect the migration of people across states, whether raising the minimum wage affects employment negatively in the short run or the long run, and how interest rates affect stock prices.

Hopefully, after graduation I can put these skills to use in various sectors to gain practice. I'm not sure of the positions available, but the main goal is to gain experience. I'm also interested in returning to school to complete a PhD in economics; however, it depends on the opportunities that present themselves.

\section{Equation}
\[ a^2 + b^2 = c^2 \]

\end{document}
{ "alphanum_fraction": 0.8010716678, "avg_line_length": 71.0952380952, "ext": "tex", "hexsha": "b5ecb47c377faaec03aec24aabe8aabc8e9ee474", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c315046d08ac8c3324511f07a6f0f60805ceffa2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RodneyFarrow/DScourseS19", "max_forks_repo_path": "ProblemSets/PS1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c315046d08ac8c3324511f07a6f0f60805ceffa2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RodneyFarrow/DScourseS19", "max_issues_repo_path": "ProblemSets/PS1.tex", "max_line_length": 1270, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c315046d08ac8c3324511f07a6f0f60805ceffa2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RodneyFarrow/DScourseS19", "max_stars_repo_path": "ProblemSets/PS1.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-17T20:22:43.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-17T20:22:43.000Z", "num_tokens": 316, "size": 1493 }
\documentclass[ManualeUtente]{subfiles}
\begin{document}
\chapter{Introduction}
\section{Purpose of the document}
The purpose of this document is to explain to the end user the correct use of \progetto.
\scopoProdottoEN
\section{References}
\begin{itemize}
	\item \textbf{Google Chrome}\\ \nURI{https://www.google.com/intl/en/chrome/}
	\item \textbf{Mozilla Firefox}\\ \nURI{https://www.mozilla.org/en-US/firefox/new/}
	\item \textbf{MetaMask browser extension for Google Chrome}\\ \nURI{https://chrome.google.com/webstore/category/extensions}
	\item \textbf{MetaMask browser extension for Mozilla Firefox}\\ \nURI{https://addons.mozilla.org/en-US/firefox/addon/ether-metamask/}
\end{itemize}
\end{document}
{ "alphanum_fraction": 0.7342281879, "avg_line_length": 29.8, "ext": "tex", "hexsha": "78eedbb0bdf7aefdac8928b460a832285356efa5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "M9k/Marvin-Documentazione---353", "max_forks_repo_path": "Esterni/ManualeUtente/introduzione.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "M9k/Marvin-Documentazione---353", "max_issues_repo_path": "Esterni/ManualeUtente/introduzione.tex", "max_line_length": 89, "max_stars_count": null, "max_stars_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "M9k/Marvin-Documentazione---353", "max_stars_repo_path": "Esterni/ManualeUtente/introduzione.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 220, "size": 745 }
\section{Data Reduction and Calibration}
\label{891_1:sec:data_reduction}

\subsection{Image processing and spectral extraction}

Basic reduction (overscan correction, bias/dark subtraction) was done with IRAF's CCDPROC package. Cosmic rays were cleaned with the CRUTIL package. Spectral extraction, wavelength calibration, and flux calibration were done with the HYDRA package. The multiple fiber sizes in \GP required modifications to the standard DOHYDRA procedure for flat-field generation, sky-subtraction, and flux calibration. A detailed walkthrough and discussion of these modifications can be found at \url{www.astro.wisc.edu/~eigenbrot/GradPak}; we summarize the important points below.

\subsubsection{Wavelength Calibration}
\label{891_1:sec:wavecal}

Uncertainties in the wavelength solution in the blue ($<$\val{4500}{\AA}) are the dominant source of uncertainty in determining the line-of-sight Doppler shift of the observed spectra. As shown in Chapter \ref{chap:891_2}, these velocities are important for determining the line-of-sight depth of our spectroscopic observations, which requires velocity precision better than the spectral resolution of our data.

The reason for the wavelength calibration uncertainty at short wavelengths is the combination of a dearth of strong spectral features in the CuAr arc-lamp and the sensitivity fall-off in the fibers, spectrograph optics, grating, and CCD. The number of good spectral lines that can be fit from the available arc-lamps between \val{4130}{\AA} and \val{7300}{\AA} is 33. To characterize the uncertainties in the wavelength calibration, we measure the centers of Solar Ca H\&K absorption (from zodiacal or atmospheric scattering) and three HgI emission lines (atmospheric scattering from terrestrial sources, e.g., Tucson) present in our data (before sky subtraction). In all of our data exposures the foreground (zodiacal plus atmospheric) signal dominated over the flux from NGC 891 to the point where discrimination between Solar Ca H\&K and Ca H\&K in NGC 891 was not necessary. For each night of observation all wavelength-calibrated object frames are combined to improve S/N (especially relevant for measurements of absorption lines) before line centroids are measured with IRAF's FITPROFS routine. For each line measured, the average offset from the known center across all \GP fibers is taken as an indication of the accuracy of the wavelength solution at that particular wavelength. The standard deviation of the measured centers across all \GP fibers is a measure of the precision of the wavelength solution at a particular wavelength.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{891_1/figs/Wave_Err_comb.pdf}
\caption[Accuracy and precision of wavelength calibration]{\label{891_1:fig:wave_err}\fixspacing Estimated accuracy and precision of the wavelength solution in the blue region of the spectrum based on measurements of known foreground features (HgI emission at 4047, 4358 and 5461 \AA\ from terrestrial light pollution, and Solar Fraunhofer H\&K lines from Ca) in our sky spectra.
Each point corresponds to a measurement of a line from an individual night of NGC 891 observations. \emph{top:} The mean offset in km s$^{-1}$ between measured line centers and the known values, averaged over all \GP fibers. \emph{bottom:} The standard deviation of the measured line centers in km s$^{-1}$, computed across all \GP fibers.}
\end{figure}

Accordingly, Figure \ref{891_1:fig:wave_err} shows the estimated accuracy and precision for each of the 10 nights of NGC 891 observations. At wavelengths $>\val{\asim 4000}{\AA}$ our wavelength solution is both accurate and precise to within \val{\asim 50}{\kms}. Below \val{4000}{\AA} (where the wavelength solution is determined via extrapolation of the CuAr lamp line data) we measure a systematic offset of \val{\asim 120}{\kms} with a comparable value for the random error. We therefore report an upper limit on the uncertainty in our wavelength solution of \val{120}{\kms} at \val{4000}{\AA}, while noting that for most wavelengths this is an overestimate. The RMS uncertainty reported by the wavelength fitting routine is \val{\asim 90}{\kms}, which is likely a more accurate estimate of the uncertainty for the wavelength region covered by arc lines (\val{4130}{\AA}$<\lambda <$ \val{7300}{\AA}). Even in the worst case we are able to calibrate wavelengths to a higher precision than the spectral resolution (see \S\ref{891_1:sec:GPak_dispersion}), which varies from 180 to \val{570}{\kms} at \val{4000}{\AA} (depending on fiber size). Thus, accurate measurements of velocity based on $\chi^2$ fitting of absorption lines (as done in Chapter \ref{chap:891_2}) are possible with our data.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{891_1/figs/skysub_comp.pdf}
\caption[Sky subtraction example]{\label{891_1:fig:skysub_comp}\fixspacing Example of large sky-subtraction residuals from November 21st, 2014. The red spectrum shows fiber 80 after sky subtraction using a sky spectrum that is the average across the whole night. The blue spectrum shows the same fiber after sky subtraction using only the sky fibers from the same exposure.}
\end{figure}

\subsubsection{Flat Fields}
\label{891_1:sec:flats}

The combination of spectrograph setup and the multiple fiber sizes in \GP required two different dome flat exposure times: a short exposure (\val{1}{s}) to avoid saturating the largest fibers, and a long exposure (\val{4}{s}) to get adequate signal in the smallest fibers. Both sets of dome flat exposures were run through CCDPROC, combined, and had their apertures extracted independently by DOHYDRA. The so-called ``extraction'' produces a one-dimensional data vector (spectrum) for each fiber.
In both cases the extraction used fiber traces computed from the \val{1}{s} flat for all data because we found that saturated large fibers in the \val{4}{s} flats introduced errors in the extraction process, while the low signal in small fibers at \val{1}{s} was still adequate for tracing purposes. In practice, traces derived from the \val{4}{s} or \val{1}{s} flats are very similar, as confirmed by the intermediate-size fibers that have good signal without being saturated. This is expected given the fact that the two exposure sets were taken right after each other over a short period of time. Once the two flats had their apertures extracted, the resulting spectra were scaled to the same exposure time and stitched together into a ``master'' flat where the 1\farcs87 and 2\farcs81 fibers were taken from the \val{4}{s} flat and the 3\farcs75, 4\farcs69, and 5\farcs62 fibers were taken from the \val{1}{s} flat. This master flat was used in all subsequent reduction of the on-sky data.

During the exercise of establishing this pipeline we discovered a lag in the shutter time that affects exposures shorter than \val{\asim 7}{s}. The WIYN Bench Spectrograph CCD has a linear slide shutter that has opening and closing times that are not equal. Because the shutter moves along the wavelength dimension of the CCD, the effect of this inequality is to add an artificial spectral signature. The magnitude of this signature varies with exposure time, with the largest deviation measured as $\asim 20\%$ at \val{1}{s}. We removed this spectral mismatch between the \val{1}{s} and \val{4}{s} flats by normalizing all fibers in the \val{4}{s} flat to the mean spectrum of the \val{1}{s} flat. While this method still introduces the shutter spectrum present in the \val{4}{s} flat, this spectrum is constant across all fibers and is thus removed during flux calibration.

\subsubsection{Sky Subtraction}
\label{891_1:sec:skysub}

Each fiber size in \GP has 4 dedicated sky fibers, as illustrated in Figure \ref{891_1:fig:GradPak}. Sky subtraction was performed on a fiber-size basis using different beam numbers for each fiber size in HYDRA's SKYSUB routine. Since there are multiple sky fibers, rejection of contaminated sky fibers was possible, as indeed was required (see Figure \ref{891_1:fig:pointings} for P6, P4, P1).

The data from the night of November 21st, 2014 (P1) showed strong variation in sky intensity across the course of the night. This variation did not subtract well when using a sky spectrum averaged across the whole night. In particular, the region between atmospheric NaI emission at \val{5684}{\AA} and the broad high-pressure sodium feature around \val{5900}{\AA} often left strong residuals when using a nightly average for the sky spectrum. Accurate sky subtraction was achieved by computing the average sky spectrum on an exposure-by-exposure basis and combining individual object exposures \emph{after} sky subtraction. For consistency across the observing program the data for all nights were sky subtracted in this way. Figure \ref{891_1:fig:skysub_comp} shows representative before-and-after spectra illustrating this issue.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{891_1/figs/flux_cal_test.pdf}
\caption[Comparison of flux calibration across multiple fiber sizes]{ \label{891_1:fig:sky_flux_comp}\fixspacing Flux calibration test for all fiber sizes when using standard star data from only 5\farcs62 fibers. Data are taken from a single galaxy exposure reduced through flux calibration, but not sky-subtracted.
Each line represents the average of all 4 sky fibers for a given fiber size and all lines are normalized by the mean of all sky fibers. The dashed and dotted lines show deviations at the 5\% and 10\% level, respectively.}
\end{figure}

\subsection{Flux Calibration}
\label{891_1:sec:flux_cal}

As discussed in \S\ref{891_1:sec:cal_data}, atmospheric refraction limited our standard star observations to only the 5\farcs62 fibers. On each night the same two standard stars were observed in multiple 5\farcs62 fibers and a flux calibration was computed using the ONEDSPEC package. Time constraints in our observing program limited the range of airmasses available for standard star observations and for this reason we could not compute an extinction curve for our data. We instead used the KPNO extinction curve provided in ONEDSTDS\footnote{http://stsdas.stsci.edu/cgi-bin/gethelp.cgi?onedstds}.

To zeroth order we expect the spectral response of all the \GP fibers to be the same because they are made of the same material and are all from the same foundry run \citep{Wood12}. In reality the \GP fibers come from different draws, were handled differently during construction, and have different degrees of termination quality. Consequently, there are throughput variations not only between fiber sizes, but also within fiber sizes, as illustrated in Figure \ref{GPtesting:fig:TL_FRD}. Throughput characteristics vary across the \GP array. In particular, the total throughput and focal ratio degradation are worse for the smallest fibers. The throughput variations are expected to be relatively grey (wavelength independent), but variations in FRD can lead to different, wavelength-dependent losses within the spectrograph. Because the flat-field screen uniformly illuminates the array, the flat-fielding process should remove all of these relative variations.

To assess the effects of using flux standards taken in a single fiber size to calibrate the entire \GP IFU, we compared flux-calibrated sky spectra for all fiber sizes. The data came from a single exposure of P3 (see Figure \ref{891_1:fig:pointings}) where none of the sky spectra appeared to be outliers due to source contamination from foreground stars or background galaxies. All sky fibers were flux calibrated using data from the 5\farcs62 fibers, and then the 4 sky fibers of each size were averaged together. Figure \ref{891_1:fig:sky_flux_comp} shows the average spectrum for each fiber size, normalized to the mean of \emph{all} sky fibers. At wavelengths greater than \val{4000}{\AA} the sky spectra are consistent between fiber sizes to within 5\%. Below \val{4000}{\AA} the flux calibration varies by up to $\asim$20\% due to low signal-to-noise at these wavelengths.
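As a concrete illustration of this consistency check (not the actual reduction code), the short sketch below shows how the per-fiber-size comparison in Figure \ref{891_1:fig:sky_flux_comp} could be computed with \texttt{numpy}; the array shapes follow the description above (4 sky fibers per fiber size), while the mock data and variable names are assumptions.

\begin{verbatim}
# Illustrative check of flux-calibration consistency across fiber sizes:
# average the 4 sky fibers of each size, normalize by the mean of ALL sky
# fibers, and flag wavelengths that deviate by more than 5%.
import numpy as np

rng = np.random.default_rng(0)
wave = np.linspace(3800.0, 7000.0, 1000)                   # wavelength grid
sky_flux = rng.normal(1.0, 0.02, size=(5, 4, wave.size))   # mock spectra,
                                                           # (size, fiber, pix)

per_size_mean = sky_flux.mean(axis=1)       # average the 4 fibers per size
global_mean = sky_flux.mean(axis=(0, 1))    # mean of all sky fibers
normalized = per_size_mean / global_mean    # curves shown in the figure

deviant = np.abs(normalized - 1.0) > 0.05   # 5% consistency threshold
for i, frac in enumerate(deviant.mean(axis=1)):
    print(f"fiber size {i}: {100 * frac:.1f}% of pixels deviate by >5%")
\end{verbatim}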
%% (b) you say the flux cal varies a lot below 4000A due to low SNR in %% the smallest fibers yet the magenta curve for the 6'' fibers seems %% as aberrant as any.} To account for night-to-night offsets in extinction (caused by, e.g., high cirrus clouds) we scaled each individual exposure so that all exposures in a particular pointing have the same average flux in the region $\val{4500}{\AA} \leq \lambda \leq \val{5500}{\AA}$. Within a single night this scaling was always less than \asim 3\%, but between nights it could vary by a factor of up to 2 in cases where a night had poor observation conditions (high humidity, persistent clouds, etc.). The final spectra for each pointing are an average of individual exposures weighted by the square of the inverse of their deviation from the mean pointing flux in the range $\val{4500}{\AA} \leq \lambda \leq \val{5500}{\AA}$. \subsection{Instrumental Line Profile} \label{891_1:sec:GPak_dispersion} \begin{figure} \centering \includegraphics[width=\columnwidth]{891_1/figs/disp_paper.pdf} \caption[Variation of instrumental resolution in \GP fibers]{\label{891_1:fig:dispfunc}\fixspacing Instrumental resolution as measured from CuAr arc-lamps are plotted as points; the interpolated and extrapolated dispersion functions used in spectral modeling is shown by the dotted lines.} \end{figure} The optical distortions of the WIYN Bench Spectrograph results in an instrumental line profile that varies with wavelength and field location along the fiber slit for a fiber of a given size. The delivered line profile in the spectral dimension (the instrumental dispersion) is a convolution of this distortion with the anamorphically demagnified fiber image. Consequently the different fiber sizes in \GP cause the spectral line profile to be a different function of wavelength for each fiber size. We refer to this value as $\sigma_{\rm inst}$, reported in units of \kms. To account for the variations in $\sigma_{\rm inst}$ in our later fitting of model spectral energy distributions, we first measured the width of arc-lamp emission lines across our entire wavelength range for all fibers. Widths for each arc line were measured for every fiber, and then averaged for all fibers of the same size, weighted by poisson noise in each fiber. Arc lines with low signal-to-noise were combined together in groups over narrow wavelength ranges to improve the signal and assigned a new wavelength weighted by the signal-to-noise of the component lines. This was particularly important in the far blue where the lines were few and often weak. These data were interpolated to create dispersion functions, $\sigma_{{\rm inst},i}(\lambda)$, for each fiber size, $i$. Past the blue and red ends of the region covered by arc lines (\val{4130}{\AA}$<\lambda <$ \val{7300}{\AA}) the dispersion functions are extended with linear extrapolation. The line-width data and resulting instrumental dispersion functions can be found in Figure \ref{891_1:fig:dispfunc}. Note the hat-shaped profile seen for the smallest fibers, characteristic of the typical compromise focus value chosen for the all refractive Bench Spectrograph system. For larger fiber sizes the resolution is dominated by the geometric fiber diameter and the changes in the grating dispersion with wavelength.
{ "alphanum_fraction": 0.7818411822, "avg_line_length": 53.6759259259, "ext": "tex", "hexsha": "c2dc6e3bc5a32a4684de86b83a7d359dcee94731", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eigenbrot/eigenbrot-thesis", "max_forks_repo_path": "891_1/redux.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eigenbrot/eigenbrot-thesis", "max_issues_repo_path": "891_1/redux.tex", "max_line_length": 75, "max_stars_count": null, "max_stars_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eigenbrot/eigenbrot-thesis", "max_stars_repo_path": "891_1/redux.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4309, "size": 17391 }
\chapter{Testing}

As discussed in chapter 1, test-driven development was used during testing of various elements of the project. Primarily, this was in the model and service layers, where logic was tested and test-driven development was of use. The view and controller layers of the application were generally tested through a strategy of acceptance testing, using a test table and noting results. This has been included in this chapter.

Whilst initially the plan was for development of these layers to also be driven by test-driven development, testing facilities such as HTMLUnit were difficult to assess, particularly in an application where dynamic elements were loaded at runtime. A recorded test, through a testing suite such as Selenium, was also considered. However, it was difficult to model this without defeating the purpose of agile development.

This chapter will discuss the automated tests that were generated, and provide a test table for the layers that weren't covered by them. It will also discuss discussion with the customer, considered for the purposes of this chapter to be a form of acceptance testing.

\section{Automated Testing}

The automated tests that were carried out as part of this application were created using JUnit, a Java testing framework. The testing sets utilised focused on the logical aspects of the application, ensuring that constraints had been added correctly and that the interfaces created for the model view were logically sound.

During the development of the logical parts of the application, the concept of test-driven development was utilised. Interfaces and empty methods were taken advantage of to implement the planned design, as well as to create unit tests for the desired behaviour of each part of the application. Development followed TDD's approach of ``red, green, refactor'' - thinking about what needs to be developed by writing a test that would fail, writing the code to make the test case pass, and improving the code's efficiency and readability.

The model layer tests focused on PointOfInterest, User, and Role. Many of the methods that were being tested were setters and getters - to write a single test case for each of these would have proved time-consuming and inefficient. Therefore, \textit{OpenPOJO}\cite{OpenPOJO}, a third-party library, was used. OpenPOJO allows for setter and getter tests to be wrapped into one method that uses randomly generated values to test that the variables in a class can successfully be input, and that the input can successfully be retrieved.

\texttt{PointOfInterestTests} continues by testing the method implementing Haversine's Formula. The first test confirms that the calculation is not made should it be queried on an object with the same coordinates as the Dyfi Wildlife Centre, as a conditional within the method checks for this. The second test confirms that the correct distance is returned between the Dyfi Wildlife Centre and a different coordinate pair. These test cases passed. \texttt{UserTests} and \texttt{RoleTests} did not have any helper methods that required robust testing of their logic - for these classes the setter and getter validation is carried out.

\texttt{PostcodeServiceTests} tests the connection to and validity of the Postcodes.io API, and that the methods perform what they are intended to perform. Tests check whether a valid postcode is accepted by the API and whether an invalid one is rejected.

The test cases are available within the technical submission.
They are available in the test folder and can be run through Maven with the \texttt{mvn test} command. \section{Test Tables} The strategy for testing of layers not mentioned in the aforementioned section followed that of a manual test table. Whilst this is not strictly TDD, particularly as the tests do not involve automation, test cases were written prior to implementation which ensured that there was forethought of the desired functionality of each part of the application. This section will be split into subsections for each test case. The subsection will detail the steps required to reproduce the test, and the expected outcome of reproducing these steps. The tests will also be correlated with the requirements, laid out in Appendix A. \subsection{TC1 - Confirm map can be reset} \subsubsection{Reference} Tasks 1 and 5 \subsubsection{Steps} \begin{enumerate} \item From the main page, zoom out from the map, and pan in a random direction. \item Hover over the filters floating action button, and click on the Reset button. \end{enumerate} \subsubsection{Input} N/A \subsubsection{Expected Output} The map is panned to the original coordinates of the Dyfi Wildlife Centre, and zoomed in to its original zoom level. \subsection{TC2 - Confirm adding a new point of interest} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Name: } Aberystwyth University \item \textbf{Description: } A University located in west Wales. \item \textbf{Filter: } Business \item \textbf{UK Postcode: } SY23 3DB \end{itemize} \subsubsection{Expected Output} The application will redirect you to the main page, and a marker at Aberystwyth University will be displayed. When clicking the marker, a popup window with the description will be shown, and it will be noted that it is 12.23 miles from the Dyfi Wildlife Centre. \subsection{TC3 - Confirm adding a new point of interest with no location fails} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Name: } Aberystwyth University \item \textbf{Description: } A University located in west Wales. \item \textbf{Filter: } Business \end{itemize} \subsubsection{Expected Output} An error will be displayed indicating that no location was entered. \subsection{TC4 - Confirm adding a new point of interest with both a postcode and a coordinate pair fails} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Name: } Aberystwyth University \item \textbf{Description: } A University located in west Wales. 
\item \textbf{Filter: } Business \item \textbf{UK Postcode: } SY23 3DB \item \textbf{Latitude: } 50 \item \textbf{Longitude: } 50 \end{itemize} \subsubsection{Expected Output} An error will be displayed indicating that both a postcode and coordinates were entered. \subsection{TC5 - Confirm duplicate points of interest in the same location cannot be entered} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \item Repeat steps 1 to 4. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Name: } Aberystwyth University \item \textbf{Description: } A University located in west Wales. \item \textbf{Filter: } Business \item \textbf{UK Postcode: } SY23 3DB \end{itemize} \subsubsection{Expected Output} The first point of interest will be accepted, however an error will appear when trying to add the point of interest a second time, stating that it already exists. \subsection{TC6 - Confirm adding a new point of interest with only one half of the coordinate pair fails} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Name: } Senedd Cymru \item \textbf{Description: } The Welsh Parliament building \item \textbf{Filter: } Business \item \textbf{Latitude: } 51.4635262 \end{itemize} \subsubsection{Expected Output} An error will appear stating that an invalid location was entered. \subsection{TC7 - Confirm adding a new point of interest with no name fails} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item Click on the Add tab. \item Enter data and click on the Add button. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Description: } A University located in west Wales. \item \textbf{Filter: } Business \item \textbf{UK Postcode: } SY23 3DB \end{itemize} \subsubsection{Expected Output} A tooltip will appear on the name field stating that it is a required field. \subsection{TC8 - Confirm editing a point of interest} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. \item If no points of interest are available to edit, add a point of interest. \item Click on the List tab, and click on a point of interest's edit button. \item Enter data and click Update. \end{enumerate} \subsubsection{Input} \begin{itemize} \item \textbf{Description: } This description has been changed. \end{itemize} \subsubsection{Expected Output} The application will redirect you to the main page. When clicking on the edited point of interest's marker, the description will have changed. \subsection{TC9 - Confirm deleting a point of interest} \subsubsection{Reference} Task 2 \subsubsection{Steps} \begin{enumerate} \item From the main page, click on the Admin Panel link in the top-right of the application. \item Log in as an administrator. 
\item If no points of interest are available to delete, add a point of interest.
\item Click on the List tab, and click on a point of interest's delete button.
\end{enumerate}
\subsubsection{Input} N/A
\subsubsection{Expected Output} After confirming, the point of interest will be deleted, and will no longer be visible on the map.
\subsection{TC10 - Confirm filtering a point of interest}
\subsubsection{Reference} Task 5
\subsubsection{Steps}
\begin{enumerate}
\item If not already done, add multiple points of interest with at least one having a wildlife or a transport filter.
\item On the main page, hover over the filters tab, and click Transportation.
\end{enumerate}
\subsubsection{Input} N/A
\subsubsection{Expected Output} The only markers that are visible should be the ones where the category is transport.
\subsection{TC11 - Confirm registering a new user}
\subsubsection{Reference} Task 4
\subsubsection{Steps}
\begin{enumerate}
\item From the main page, click on the Admin Panel link in the top-right of the application.
\item Log in as an administrator.
\item Click on the Users tab, and click on the link to register a user.
\item Enter data and click register.
\item Log out and log in to the registered account.
\end{enumerate}
\subsubsection{Input}
\begin{itemize}
\item \textbf{Username: } test\_user
\item \textbf{Password: } test\_user
\end{itemize}
\subsubsection{Expected Output} It should be possible to log in to the newly registered account.
\subsection{TC12 - Confirm adding a duplicate user is not possible}
\subsubsection{Reference} Task 4
\subsubsection{Steps}
\begin{enumerate}
\item From the main page, click on the Admin Panel link in the top-right of the application.
\item Log in as an administrator.
\item Click on the Users tab, and click on the link to register a user.
\item Enter data and click register.
\item Log in to an administrator account.
\item Repeat steps 3 and 4.
\end{enumerate}
\subsubsection{Input}
\begin{itemize}
\item \textbf{Username: } test\_user
\item \textbf{Password: } test\_user
\end{itemize}
\subsubsection{Expected Output} An error page named \textbf{409 - Conflict} should appear.
\subsection{TC13 - Confirm deleting a user}
\subsubsection{Reference} Task 4
\subsubsection{Steps}
\begin{enumerate}
\item From the main page, click on the Admin Panel link in the top-right of the application.
\item Log in as an administrator.
\item If no users that can be deleted are registered, add a user.
\item Click on Users, then click the delete icon next to the user to be deleted.
\end{enumerate}
\subsubsection{Input} N/A
\subsubsection{Expected Output} The user should be deleted after a confirmation dialogue box.
% 1543
{ "alphanum_fraction": 0.7813166717, "avg_line_length": 42.1993569132, "ext": "tex", "hexsha": "5526628ec2307ea43ee34121bec6c33dfb4c5a00", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aaa60ad6dc8ebe8ee6a12bff1381406046e03468", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "MichaelMale/DyfiWildlifeCentre", "max_forks_repo_path": "docs/report/Chapters_Engineering/chapter4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aaa60ad6dc8ebe8ee6a12bff1381406046e03468", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "MichaelMale/DyfiWildlifeCentre", "max_issues_repo_path": "docs/report/Chapters_Engineering/chapter4.tex", "max_line_length": 874, "max_stars_count": null, "max_stars_repo_head_hexsha": "aaa60ad6dc8ebe8ee6a12bff1381406046e03468", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "MichaelMale/DyfiWildlifeCentre", "max_stars_repo_path": "docs/report/Chapters_Engineering/chapter4.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3147, "size": 13124 }
\clearpage %or \cleardoublepage
\phantomsection
\chapter*{Notation}
\markright{}
\addcontentsline{toc}{chapter}{Notation}
\begin{longtable}[l]{ l p{0.85\textwidth} }
$C_P=\cfrac{P}{\rho n^3 D^5}$ & [-] power coefficient \\
$C_T=\cfrac{T}{\rho n^2 D^4}$ & [-] thrust coefficient \\
$D$ & [m] propeller diameter \\
$J=\cfrac{V}{nD}$ & [-] propeller advance ratio \\
$n$ & [rev/s] propeller revolution speed \\
$P$ & [W] power \\
$Q$ & [N$\cdot$m] torque \\
$Q_C=\cfrac{Q}{\rho V^2 D^3}$ & [-] torque coefficient \\
$T$ & [N] thrust \\
$T_C=\cfrac{T}{\rho V^2 D^2}$ & [-] thrust coefficient (alternative form) \\
$V$ & [m/s] velocity \\
$\beta_{0.75}$ & [deg] propeller blade angle at 0.75 of radius \\
$\eta$ & [-] propeller efficiency \\
$\rho$ & [kg/m\textsuperscript{3}] air density \\
\end{longtable}
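The listed quantities are related: if the propeller efficiency is understood in the usual sense as the ratio of useful (thrust) power $TV$ to shaft power $P$, the definitions above combine to
\[
\eta = \frac{T V}{P} = \frac{C_T \, \rho n^2 D^4 \, V}{C_P \, \rho n^3 D^5} = \frac{C_T}{C_P} \, J .
\]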
{ "alphanum_fraction": 0.4688679245, "avg_line_length": 46.0869565217, "ext": "tex", "hexsha": "6fe8aed3943bee3109d3ec09173cdd16f4538344", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-12-01T19:41:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-01T10:56:23.000Z", "max_forks_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "marek-cel/mscsim-docs", "max_forks_repo_path": "tex/propellers_0.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "marek-cel/mscsim-docs", "max_issues_repo_path": "tex/propellers_0.tex", "max_line_length": 82, "max_stars_count": 7, "max_stars_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "marek-cel/mscsim-docs", "max_stars_repo_path": "tex/propellers_0.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-09T07:02:20.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-01T02:27:28.000Z", "num_tokens": 320, "size": 1060 }
%%
%% pytorch-neural-doodle/docs/content/chapters/implementation.tex
%%
%% Created by Paul Warkentin <[email protected]> on 21/08/2018.
%% Updated by Bastian Boll <[email protected]> on 07/10/2018.
%%
\section{Implementation}
\label{section:implementation}
\textbf{by Paul Warkentin} \\
As a starting point, Champandard examines user behavior while authoring style transfers. This is made possible through a social media bot \cite{deepforger2015}, making artistic style transfer algorithms available to the general public. It can be observed that, in the cases where the algorithm does not meet the user's expectations, this is often due to a lack of semantic segmentation of style and content images. As an example, when transferring an artist's style to a photographic portrait, one would expect the algorithm to transfer the color and texture the artist chose for skin tones to the skin areas of the portrait. While this expectation may at times be met, the patch-based style loss constructed above does not generally enforce such a behavior. The usage of semantic segmentation is especially prominent in more specialized approaches to style transfer \cite{yang2017semantic}, suggesting additional merit to the presented intuition.
Note that convolutional neural networks such as VGG do implicitly learn semantic segmentation of images \cite{thoma2016survey}, but this segmentation is not put to use in the above style loss construction because nearest neighbour patches are selected only with respect to similarity in texture. A segment of sky in the background of a style painting may absolutely be selected as a nearest neighbour patch for a skin-tone area in the content picture, if local texture happens to be similar. This lack of semantic segmentation causes glitches and subverts the intention of the user, but the disregard for the semantic segmentation extracted by the convolutional neural network is in some respects a byproduct of the design -- the style loss intentionally uses low layer responses to capture local texture information and to not capture higher level (content) responses such as semantic information.
This leaves the user with a very limited number of control levers. Choosing the parameter \(\alpha\) which weights style loss against content loss presents a spectrum between a faithful reproduction of the content image and an unstructured reproduction of style texture with no regard to the content. This is insufficient in practice, especially for abstract styles and in those -- particularly interesting -- cases where style and content image are very dissimilar in perspective or subject. The core idea of Champandard lies in incorporating segmentation from a semantic map both indirectly as part of the nearest neighbour patch computation and directly into the style loss term.
\subsection{Semantic maps}
\textbf{by Paul Warkentin} \\
A semantic map is an image displaying a semantic segmentation of another image. It has the same aspect ratio as the segmented image and encodes semantically similar sections as areas with the same color. This information can be provided by a user as an artistic control lever of the process. In fact, as \cite{doodles2016} points out, it is possible to go one step further, encoding not only the semantic affiliation of pixels in a given area with each other, but also the relationship of different semantic areas. This is done by choosing similar colors for the segmentation of semantically similar (but not equal) areas.
When working with an image of a landscape, one may choose to segment different variations of woodland appearance with different shades of one color, say green. The process of authoring semantic maps is technically easy, as will be described in the next section, but it can be artistically challenging. The latter can also be seen as an opportunity: additional leeway in the artistic shaping of images presents the possibility of more appealing results. This fact has been noted by \cite{doodles2016} and is clearly corroborated through our experiments.
\clearpage
\subsection{Style Loss}
\textbf{by Bastian Boll} \\
The semantic map of the style image is processed by the VGG network and activations for specific layers are gathered. In the case of a VGG 19 network, we choose the layers \texttt{conv3\_1} and \texttt{conv4\_1}. Denote the response tensor with \(m^s\). This tensor has two spatial dimensions and one channel dimension. The tensor \(m^s\) is subsequently scaled by a factor \(\gamma\) and concatenated along the channel dimension with the response tensor \(y^s\) of the style image
\[x^s = y^s \,||\,\gamma m^s \]
The analogous preparation steps are applied to the content image
\[x^c = y^c \,||\,\gamma m^c \]
For the reasons outlined in the introduction, a patch-based style loss approach is preferred over a Gram-based approach. We therefore compute \(k\times k\) patches \(\Psi (x^s)\) and \(\Psi (x^c)\) along the spatial dimensions. Our implementation specifically uses \(3\times 3\) patches. Given \(x^s\) and \(x^c\), the patch-based approach of \cite{mrf2016} can be applied without the need for significant changes. Nearest neighbour patches are computed with respect to normalized cross-correlation and we have the finished patch-based style loss
\[\mathcal{L}_{s,p}(x^s,x^c) = \sum_i \|\Psi_i(x^c)-\Psi_{\text{NN}(i)}(x^s)\|_2^2\]
The parameter \(\gamma\) can be seen as a user control point, specifying how heavily the segmentation should be weighted against local pixel conformance during nearest neighbour computation.
This specific way of incorporating segmentation maps has two principal upsides:
\begin{enumerate}
\item The segmentation maps can easily be authored, as they do not need to conform to many formal specifications. They can have an arbitrary number of channels and can be drawn with a reduced resolution as compared to the original style- and content image. These are both conveniences in practical use. % embedding segmentation
\item The algorithm being used is essentially the same as the patch-based approach of \cite{mrf2016} because the data structure being worked on changes only along the channel dimension and the operations being applied remain the same. In the common library implementation of the respective convolutions, one does not need to change anything. The only extra implementation is the weighted concatenation of segmentation maps.
\end{enumerate}
Manual work can be saved by using a lower resolution for the segmentations, and computational effort as well as memory can be saved by increasing the stride between successive patches. Applied moderately, neither of these simplifications has a large impact on practical results.
\subsection{Content Loss}
\textbf{by Paul Warkentin} \\
Content loss can be added without additional modifications to the original approach of Gatys et al. \cite{gatys2015neural}.
With regard to semantic maps, it merits mentioning that much like the original implementation of \cite{doodles2016}, our present implementation can be run in a mode that requires a content segmentation but does not require a content image. In this mode, texture is merely semantically transferred from the style image to the content image. Content loss is omitted; the total loss being minimized is exactly the style loss. Figure \ref{fig::nocontent} shows results for this mode. This is what the title of the present work refers to as a \emph{Neural Doodle}.
\begin{figure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/renoir.png}
\end{subfigure}%\hspace{5mm}
~
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/out_result.png}
\end{subfigure}%\hspace{5mm}
~
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/out2_result.png}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/renoir_style.png}
\end{subfigure}%\hspace{5mm}
~
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/renoir_out_style.png}
\end{subfigure}%\hspace{5mm}
~
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=4.5cm]{renoir/renoir_out_style_2.png}
\end{subfigure}
\caption[]{Reproduction of the paper results generated by our implementation. Original style image (Renoir) with respective segmentation in the left column. Two different segmentation maps (lower row) and respective transfer results (upper row). No content image was used and the content loss omitted.}
\label{fig::nocontent}
\end{figure}
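For concreteness, the listing below gives a minimal PyTorch sketch of the two ingredients discussed in this section: the weighted concatenation \(x = y \,||\, \gamma m\) of feature responses with semantic-map responses, and the patch-based nearest-neighbour style loss. It is an illustration written for this document rather than the repository's implementation; the tensor shapes, names and patch parameters are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F


def augment_with_semantics(y, m, gamma):
    """Concatenate feature responses y with semantic-map responses m,
    scaled by gamma, along the channel dimension.  Both tensors are
    expected to be (1, C, H, W) with matching spatial size."""
    return torch.cat([y, gamma * m], dim=1)


def extract_patches(x, size=3, stride=1):
    """Cut (size x size) patches out of a (1, C, H, W) tensor and
    return them as an (N, C * size * size) matrix, one row per patch."""
    patches = F.unfold(x, kernel_size=size, stride=stride)
    return patches.squeeze(0).t()


def patch_style_loss(x_c, x_s, size=3, stride=1):
    """Sum of squared distances between each content patch and its
    nearest style patch, where 'nearest' is measured by normalized
    cross-correlation (cosine similarity of the flattened patches)."""
    pc = extract_patches(x_c, size, stride)
    ps = extract_patches(x_s, size, stride)
    sim = F.normalize(pc, dim=1) @ F.normalize(ps, dim=1).t()
    nn_idx = sim.argmax(dim=1)
    return ((pc - ps[nn_idx]) ** 2).sum()


if __name__ == "__main__":
    # Dummy tensors standing in for VGG feature maps and semantic maps.
    y_c, m_c = torch.randn(1, 64, 32, 32), torch.randn(1, 3, 32, 32)
    y_s, m_s = torch.randn(1, 64, 32, 32), torch.randn(1, 3, 32, 32)
    gamma = 10.0
    x_c = augment_with_semantics(y_c, m_c, gamma)
    x_s = augment_with_semantics(y_s, m_s, gamma)
    print(patch_style_loss(x_c, x_s))
\end{verbatim}
Increasing the \texttt{stride} argument corresponds directly to the memory saving mentioned at the end of the style loss subsection.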
{ "alphanum_fraction": 0.793471948, "avg_line_length": 97.8295454545, "ext": "tex", "hexsha": "2bf5887290edae6dbeb2bbef9ba52d3a6e9a19f6", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-01-08T15:02:12.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-31T19:27:38.000Z", "max_forks_repo_head_hexsha": "4b0c8da17351ef1662a0ce0bf5979027bafb130e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "paulwarkentin/pytorch-neural-doodle", "max_forks_repo_path": "docs/content/chapters/02_implementation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4b0c8da17351ef1662a0ce0bf5979027bafb130e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "paulwarkentin/pytorch-neural-doodle", "max_issues_repo_path": "docs/content/chapters/02_implementation.tex", "max_line_length": 1388, "max_stars_count": 15, "max_stars_repo_head_hexsha": "4b0c8da17351ef1662a0ce0bf5979027bafb130e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "paulwarkentin/pytorch-neural-doodle", "max_stars_repo_path": "docs/content/chapters/02_implementation.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-09T11:22:14.000Z", "max_stars_repo_stars_event_min_datetime": "2018-10-07T14:54:33.000Z", "num_tokens": 1994, "size": 8609 }
The module {\module{block}} contains code related to blocks and block chains. This includes code to check if a block header is valid (including verifying the properties of the staking asset in the ledger), whether a block is valid and if a block or block header is a valid successor to a block or block header. In order to verify these properties we need to know when an asset is allowed to stake a block. We also allow for the possibility of forfeiture of block rewards as a punishment for signing on two different short forks. {\bf{Note:}} Unit tests for the {\module{block}} module have not been written. {\bf{Note:}} In the Coq formalization the Coq module {\coqmod{Blocks}} corresponds to {\module{block}}. For more information, see~\cite{White2015b}. \section{Stake Modifiers} A {\defin{stake modifier}} is a 256 bit number. The type {\type{stakemod}} is defined as four 64-bit integers as a way of representing such a 256 bit number. The functions {\serfunc{seo\_stakemod}} and {\serfunc{sei\_stakemod}} serialize and deserialize stake modifiers. At each block height there will be a current stake modifier and a future stake modifier. The current stake modifier determines who will be able to stake the next block. The future stake modifier influences the next 256 current stake modifiers. The genesis current and future stake modifiers should be set in the variables {\var{genesiscurrentstakemod}} and {\var{genesisfuturestakemod}}. These will determine who will be able to stake the first $256$ blocks and will influence who will be able to stake the next $256$ blocks, so it is important that these genesis stake modifiers are chosen in a fair manner. The function {\func{set\_genesis\_stakemods}} sets {\var{genesiscurrentstakemod}} and {\var{genesisfuturestakemod}} by taking a 160-bit number (as a 40 character hex string), applying one round of {\tt{SHA256}} to obtain the value for {\var{genesiscurrentstakemod}} and another round of {\tt{SHA256}} to obtain the value for {\var{genesisfuturestakemod}}.\footnote{The plan was to choose some Bitcoin height in the future and when that height was reached to obtain the 160-bit seed number from the last 20 bytes of the hash of the Bitcoin block header at that height.} The following three functions operate on stake modifiers. \begin{itemize} \item {\func{stakemod\_pushbit}} takes a bit (as a boolean) and a stake modifier, shifts the 256-bit stake modifier (dropping the most significant bit) and using the new bit as the new least significant bit. \item {\func{stakemod\_lastbit}} takes a stake modifier and returns its most significant bit (as a boolean). \item {\func{stakemod\_firstbit}} takes a stake modifier and returns its least significant bit (as a boolean). \end{itemize} The current stake modifier changes from one block height to the next by taking the last bit of the future stake modifier and pushing this bit onto the current stake modifier. The future stake modifier changes from one block height to the next by pushing a new bit (either $0$ or $1$) onto the current future stake modifier. This implies those who stake blocks influence what will be the current stake modifiers, but this influence is limited. If one staker staked 50\% of blocks, the staker would choose approximately $128$ bits of the $256$ stake modifiers in the future. The hope is that this influence is not enough to significantly improve their chances in the future, as each bit not chosen by the staker also has a large influence on who will be able to stake. 
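As an illustration, the push-bit update and the per-block evolution of the two modifiers can be sketched as follows. This sketch is written for this document rather than taken from the {\module{block}} module; in particular, the order of the four 64-bit words inside the tuple (most significant word first) is an assumption.
\begin{verbatim}
(* 256-bit stake modifier as four 64-bit words, most significant first
   (an assumption made for this sketch). *)
type stakemod = int64 * int64 * int64 * int64

(* Shift the 256-bit value left by one, dropping the most significant
   bit, and use b as the new least significant bit. *)
let stakemod_pushbit b ((m3, m2, m1, m0) : stakemod) : stakemod =
  let carry x = Int64.shift_right_logical x 63 in
  let shifted x c = Int64.logor (Int64.shift_left x 1) c in
  (shifted m3 (carry m2),
   shifted m2 (carry m1),
   shifted m1 (carry m0),
   shifted m0 (if b then 1L else 0L))

(* Most significant bit of the 256-bit value. *)
let stakemod_lastbit ((m3, _, _, _) : stakemod) =
  Int64.shift_right_logical m3 63 = 1L

(* Least significant bit of the 256-bit value. *)
let stakemod_firstbit ((_, _, _, m0) : stakemod) =
  Int64.logand m0 1L = 1L

(* From one block height to the next: the current modifier absorbs the
   last bit of the future modifier, and the future modifier absorbs one
   fresh bit chosen by the staker. *)
let next_stakemods current future freshbit =
  (stakemod_pushbit (stakemod_lastbit future) current,
   stakemod_pushbit freshbit future)
\end{verbatim}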
The function {\func{hitval}} performs one round of {\tt{SHA256}} on the least significant 32 bits of a 64-bit integer (intended to be the current time), a hash value (intended to be the asset id of the asset to stake) and a stake modifier (intended to be the current stake modifier). It returns the result as a 256-bit number called the {\defin{hit value}}.
\section{Targets}
A value of type {\type{targetinfo}} is a triple consisting of the current stake modifier, the future stake modifier and the current target (represented by a {\type{big\_int}}). The target info is used to determine if an asset is allowed to stake the next block. In particular, an asset can stake if the hit value is less than the current target times the coinage of the staked asset (or the coinage times $1.25$ if proof of storage is used). We have described above how the current and future stake modifiers change at each block height. The current target should also change in order to target an average $10$ minute block. The function {\func{retarget}} defines how the target changes after each block.
\begin{itemize}
\item {\var{genesistarget}} is set to the initial target used for the genesis block.\footnote{It is currently set to $2^{205}$, but this should be reevaluated after a test run and before the launch of Qeditas.}
\item {\var{max\_target}} is set to the maximum value for the target (i.e., the minimum difficulty). It is currently set to $2^{220}$.
\item {\func{retarget}} takes a target $\tau$ and a number of seconds $\Delta$ and returns a new target. The intention is that the given target is the current target and the number of seconds is the number of seconds between the current block and the previous block. The value is the minimum of {\var{max\_target}} and $\frac{\tau (9000 + \Delta)}{9600}$. In particular, the value returned is never more than the value of {\var{max\_target}} and remains $\tau$ if $\Delta$ is 600.
\end{itemize}
\section{Proof of Storage}
The consensus system for Qeditas is primarily proof-of-stake, but also includes a proof-of-storage component. A node can use evidence that it is storing some part of a term or document to increase the weight of its stake by $25\%$. The evidence is a value of type {\type{postor}}, defined by two constructors:
\begin{itemize}
\item ${\constr{PostorTrm}}(h,s,\alpha,k)$ is evidence of storage of part of a term of a type in a theory at a term address. The optional hash value $h$ identifies a theory, $s$ is a term, $\alpha$ is a type and $k$ is a hash value. Here $s$ should have type $\alpha$ in the theory identified by $h$. The way this typing constraint is ensured is by checking that the term address corresponding to the object $s$ in theory $h$ has an owner as an object. This ownership asset should have asset id $k$. The term $s$ is intended to be minimal: all except exactly one leaf of the tree representing $s$ should be an abbreviation (i.e., {\constr{TmH}} of hash roots). (This minimality condition is checked by {\func{check\_postor\_tm\_r}}.)
\item ${\constr{PostorDoc}}(\gamma,\nu,h,\Delta,k)$ is evidence of storage of part of a document at a publication address. Here $\gamma$ is a pay address, $\nu$ is a hash value (nonce), $h$ is an optional hash value (identifying a theory), $\Delta$ is a partial document (of type {\type{pdoc}}) and $k$ is a hash value. The intention is that $k$ is the asset id for an asset with preasset ${\constr{DocPublication}}(\gamma,\nu,h,\Delta')$ held at the publication address determined by hashing $\gamma$, $\nu$, $h$ and $\Delta'$.
Here $\Delta'$ is a document with the same hash root as the partial document $\Delta$. The partial document $\Delta$ should be minimal: with exactly one document item containing more than hashes and with that one document item only containing one explicit leaf, with others abbreviated by hash roots. (This minimality condition is checked by {\func{check\_postor\_pdoc\_r}}.) \end{itemize} Values of type {\type{postor}} can be serialized and deserialized using {\serfunc{seo\_postor}} and {\serfunc{sei\_postor}}. The exception {\exc{InappropriatePostor}} is raised if a value of type {\type{postor}} is not an appropriate proof of storage because the term or partial document is not minimal. \begin{itemize} \item {\func{incrstake}} multiplies the number of cants being staked by $1.25$. This is the adjusted stake used when proof of storage is included. \item {\func{check\_postor\_tm\_r}} checks the minimality condition for a term, returning the hash of the unique important leaf upon success. \item {\func{check\_postor\_tm}} checks if ${\constr{PostorTrm}}(h,s,\gamma,k)$ can be used to increase the chances of staking. Let $\alpha$ be the (p2pkh) address where the asset to be staked is held. Let $\beta$ be the term address for the object $s$ of type $\gamma$ in the theory identified by $h$. Let $h'$ be the hash of the unique exposed leaf given by {\func{check\_postor\_tm\_r}}. Let $h''$ be the result of hashing the pair of $\beta$ and $h'$. Let $h'''$ be the result of hashing $\alpha$ with $h''$. There are two conditions: \begin{enumerate} \item A certain 16 bits of $h'''$ are all $0$. (This means that given a stake address $\alpha$, only one in every 65536 items of the form ${\constr{PostorTrm}}(h,s,\gamma,k)$ can possibly ever be used to help $\alpha$ stake, independent of targets and stake modifiers. \item The hit value of $h''$ is less than the target times the adjusted stake. \end{enumerate} \item {\func{check\_postor\_pdoc\_r}} checks the minimality condition for a partial document, returning the hash of the unique important leaf upon success. \item {\func{check\_postor\_pdoc}} checks if ${\constr{PostorDoc}}(\gamma,\nu,h,\Delta,k)$ can be used to increase the chances of staking. Let $\alpha$ be the (p2pkh) address where the asset to be staked is held. Let $\beta$ be the publication address for the corresponding document asset. Let $h'$ be the hash of the unique exposed leaf given by {\func{check\_postor\_pdoc\_r}}. Let $h''$ be the result of hashing the pair of $\beta$ and $h'$. Let $h'''$ be the result of hashing $\alpha$ with $h''$. \begin{enumerate} \item A certain 16 bits of $h'''$ are all $0$. (This means that given a stake address $\alpha$, only one in every 65536 items of the form ${\constr{PostorDoc}}(\gamma,\nu,h,\Delta,k)$ can possibly ever be used to help $\alpha$ stake, independent of targets and stake modifiers. \item The hit value of $h''$ is less than the target times the adjusted stake. \end{enumerate} \end{itemize} \section{Hits and Cumulative Stake} We now describe two functions for checking if an asset (optionally with proof of storage) is allowed to stake. This is sometimes informally referred to as ``checking for a hit.'' A third function {\func{check\_hit}} is deferred until we discuss block headers. \begin{itemize} \item {\func{check\_hit\_b}} is an auxiliary function which does most of the work to check if an currency asset can stake a block. 
It is given the block height, the birthday of the asset, the obligation of the asset, the number of cants $v$ in the currency asset, the current stake modifier, the current target, the current timestamp, the asset id of the asset to stake, the p2pkh address holding the stake address\footnote{Note that the obligation of the stake address may mean that a different person can spend the staking asset than the holder who can stake the asset. This could be used to, for example, ``loan'' assets to someone else to stake.} and an optional proof of storage. If no proof of storage is given, the asset can stake if its hit value (relative to the time stamp and current stake modifier) is less than the product of the target and the coinage (as computed by {\func{coinage}}) of the asset. Suppose a proof of storage is given. In this case, we consider an adjusted stake using $1.25 v$ instead of $v$. The asset can stake if the hit value of the asset is less than the target times the coinage of the adjusted stake and the proof of storage can be used (as judged by {\func{check\_postor\_tm}} or {\func{check\_postor\_pdoc}}). \item {\func{check\_hit\_a}} is simply a wrapper function which takes the target info (of type {\type{targetinfo}}) and calls {\func{check\_hit\_b}} after extracting the current stake modifier and current target from the target info. Factoring the functions this way makes it clear that {\func{check\_hit\_b}} does not depend on the future stake modifier. \end{itemize} The best block chain will be the one with the most cumulative stake.\footnote{The intention is also to have rolling checkpoints to prevent long range attacks.} The cumulative stake is represented by a {\type{big\_int}}. The function {\func{cumul\_stake}} computes the new cumulative stake given the previous cumulative stake, the current target $\tau$ and the latest delta time (time between blocks) $\Delta$. It computes this by adding the following (big integer) value to the previous cumulative stake: $$\lfloor \frac{\var{max\_target}}{\tau \Delta 2^{-20}} \rfloor$$ or adding $1$ if this value is less than $1$. \section{Block Headers} We now describe block headers. A block header is made up of two sets of information: the header data and the header signature. The data part is represented using the record type {\type{blockheaderdata}} while the signature part is represented using the record type {\type{blockheadersig}}. A block header (of type {\type{blockheader}}) is simply a pair of the data with the signature. The functions {\serfunc{seo\_blockheader}} and {\serfunc{sei\_blockheader}} serialize and deserialize block headers. There is a value {\var{fake\_blockheader}} which can be used when some data structure needs a block header to be initialized. The fields in the record type {\type{blockheaderdata}} are as follows: \begin{itemize} \item {\field{prevblockhash}} should contain the hash of the data in the previous block header (or {\val{None}} for the genesis block header). \item {\field{newtheoryroot}} should be the hash root of the current theory tree (optional {\type{ttree}}) after the block with this header has been processed. It will change if some transaction in the block publishes a theory specification. \item {\field{newsignaroot}} should be the hash root of the current signature tree (optional {\type{stree}}) after the block with this header has been processed. It will change if some transaction in the block publishes a signature specification. 
\item {\field{newledgerroot}} should be the hash root of the current compact tree ({\type{ctree}}) after the block with this header has been processed. This will always change since the asset staked will be spent and there will be outputs to the coinstake transaction of the block.
\item {\field{stakeaddr}} should be the p2pkh address where the asset being staked is held.
\item {\field{stakeassetid}} should be the asset id of the asset being staked.
\item {\field{stored}} is an optional proof of storage ({\type{postor}}) and will be {\var{None}} if proof of storage was not used to help stake this block.
\item {\field{timestamp}} is a 64-bit integer time stamp and should correspond to the time the block was staked.
\item {\field{deltatime}} is a 32-bit integer which should contain the difference between the time stamp of this block and the time stamp of the previous block. (For the genesis block header, this should simply be $600$.)
\item {\field{tinfo}} should be the target information (current stake modifier, future stake modifier and current target) for this block header.
\item {\field{prevledger}} is an approximation of the compact tree before processing the block corresponding to this block header. This approximation must contain the asset being staked and, if proof of storage is included, the relevant object ownership asset or document asset.
\end{itemize}
The fields in the record type {\type{blockheadersig}} are as follows:
\begin{itemize}
\item {\field{blocksignat}} is a cryptographic signature of type {\type{signat}}. This should be a signature of a hash of the data in the block header. Unless an endorsement is used, the signature should be by the private key corresponding to the stake address. If an endorsement is used, the signature should be by the private key corresponding to the endorsed address.
\item {\field{blocksignatrecid}} is an integer which should be between $0$ and $3$. It is included to help recover the public key for the address (either stake or endorsed) from the signature (see the function {\file{recover\_key}} in the module {\module{signat}}).
\item {\field{blocksignatfcomp}} is a boolean indicating if the address (either stake or endorsed) corresponds to the compressed or uncompressed public key.
\item {\field{blocksignatendorsement}} is an optional endorsement. If {\val{None}}, then the signature corresponds to the stake address. Suppose it is $(\beta,r,b,\sigma)$ where $\beta$ is a p2pkh address (the endorsed address), $r$ is an integer ($0\leq r\leq 3$), $b$ is a boolean and $\sigma$ is a cryptographic signature. Here $\sigma$ should be a signature of the Bitcoin message ``\verb+endorse+ $\beta$'' where $\beta$ is the endorsed address (as a Qeditas address in base58 format). The signature $\sigma$ should be by the private key corresponding to the stake address $\alpha$, and $r$ and $b$ are used to recover the public key.
\end{itemize}
The following functions operate on block headers:
\begin{itemize}
\item {\func{blockheader\_stakeasset}} takes block header data ({\type{blockheaderdata}}) and tries to return the staked asset by looking it up as {\field{stakeassetid}} at location {\field{stakeaddr}} in the compact tree {\field{prevledger}}. This can fail in two ways. First, it could be that the staked asset is not found, in which case an exception {\exc{HeaderNoStakedAsset}} is raised.
Second, it could be that {\field{prevledger}} includes more information than is necessary to give the staked asset, in which case an exception {\exc{HeaderStakedAssetNotMin}} is raised.\footnote{The purpose of this condition is to prevent attackers from making unnecessarily large headers. The current implementation seems to be flawed, however, as it would not allow the relevant information from proof-of-storage to be included in {\field{prevledger}}.} \item {\func{hash\_blockheaderdata}} hashes the data in the block header. This is to determine the hash to be signed in the signature part as well as the hash to be used in the {\field{previousblockhash}} field of the next block header. \item {\func{check\_hit}} takes block header data ({\type{blockheaderdata}}) and checks if the given staked asset is allowed to create the block. It simply calles {\func{check\_hit\_a}} after extracting the target info ({\field{tinfo}}), time stamp ({\field{timestamp}}), stake asset id ({\field{stakeassetid}}), address where the staked asset is held ({\field{stakeaddr}}) and the optional proof of storage ({\field{stored}}) from given block header data. \item {\func{valid\_blockheader}} determines if a block header is a valid block at the current height. In order to check if the block is valid the staked asset must be retrieved from the previous ledger. The staked asset must be a currency asset worth $v$ cants. The auxiliary function {\func{valid\_blockheader\_a}} is called with the extra information given by this asset which in turn calls two (exported) functions: {\func{valid\_blockheader\_signat}} and {\func{valid\_blockheader\_allbutsignat}}. {\func{valid\_blockheader\_signat}} verifies the signature in the blockheader to be a valid signature (either directly or via endorsement) of the hash given by {\func{hash\_blockheaderdata}}. {\func{valid\_blockheader\_allbutsignat}} checks the following conditions: \begin{enumerate} \item The staked asset has the asset id declared in the header. \item The delta time is greater than $0$. \item The staked asset is a ``hit'' for the current block height. \item If proof of storage is included, then the asset id given for the object ownership of the term or for the document is in the given approximation of the previous ledger.\footnote{This probably no longer works if proof of storage is included, due to the minimality constraint on {\field{prevledger}}.} \end{enumerate} \item {\func{blockheader\_succ}} determines if a second block header is a valid successor to a first block header. The following conditions must be checked: \begin{enumerate} \item The second {\field{prevblockhash}} is the hash of the data in the first given block header. \item The second {\field{timestamp}} is the sum of the first {\field{timestamp}} and the second {\field{deltatime}}. \item The current stake modifier given in the second {\field{tinfo}} is the result of pushing the last bit of the future stake modifier of the first {\field{tinfo}} onto the current stake modifier of the first {\field{tinfo}}. \item The future stake modifier given in the second {\field{tinfo}} is the result of pushing a $0$ or a $1$ onto the future stake modifier of the first {\field{tinfo}}. \item The target given in the second {\field{tinfo}} is the result of retargeting using the target given in the first {\field{tinfo}} and the first {\field{deltatime}}. \end{enumerate} \end{itemize} \section{Proof of Forfeiture} Proof of forfeiture is optional data proving a staker signed on two recent chain forks within 6 blocks. 
When such a proof is supplied by a staker of a block, the new staker can take recent coinstake rewards from the double signing staker. Such a proof is a value of type {\type{poforfeit}} and consists of a 6-tuple $$(b_1,b_2,\overline{c_1},\overline{c_2},d,\overline{h}).$$ Here $b_1$ and $b_2$ are block headers which should contain different data but both be signed by the same stake address. The values $\overline{c_1}$ and $\overline{c_2}$ are lists of block header data each of which should have length at most 5. Finally, $v$ is the number of cants being forfeited and $\overline{h}$ is a list of hash values (asset ids of the rewards being forfeited). The function {\func{check\_poforfeit}} verifies if the given value of type {\type{poforfeit}} can be used to support forfeiture of rewards. It first verifies that the data in $b_1$ and $b_2$ are different (by ensuring their hashes are different) and are staked using assets at the same stake address $\alpha$. It also verifies that $\overline{c_1}$ and $\overline{c_2}$ have no more than 5 elements. It then verifies the signatures for $b_1$ and $b_2$. It calls {\func{check\_bhl}} on $\overline{c_1}$ and $\overline{c_2}$ to ensure that each forms a (reverse) chain connecting $b_1$ and $b_2$ to some previous block hashes $k_1$ and $k_2$, and then checks that $k_1 = k_2$. This implies $b_1$ and $b_2$ are signed block headers forking from a common block (with hash $k_1$). (The function {\func{check\_bhl}} also ensures that the hash of $b_2$ does not occur in $\overline{c_1}$ as this would mean the second chain is a subchain for the first, rather than a fork. Likewise it ensures the hash of $b_1$ does not occur in $\overline{c_2}$.) Finally it calls {\func{check\_poforfeit\_a}} which looks up assets by the asset ids listed in $\overline{h}$ and verifies that each is a reward less than 6 blocks old which was paid to address $\alpha$ and that the sum of these rewards is $v$ cants. \section{Blocks} A {\defin{block}} consists of a block header and a block delta. The block delta (implemented as the record type {\type{blockdelta}}) contains information about how to transform the previous ledger (compact tree) into the next ledger (compact tree). In particular, the stake output is given (which completes the coinstake transaction) and all other transactions in the block are given. In addition, an optional proof of forfeiture is given which may effectively increase the rewards given to the staker of the block. In order to transform the previous ledger, one will generally need to graft more information about the previous ledger than was given in the header. This graft is also given. The {\type{blockdelta}} record type consists of four fields: \begin{itemize} \item {\field{stakeoutput}} is the output to the coinstake transaction. \item {\field{forfeiture}} is an optional proof that a recent staker signed on a recent fork, thus justifying forfeiture of that staker's recent rewards. \item {\field{prevledgergraft}} is a graft providing the extra information needed by the output of the coinstake transaction, the other transactions in the block and optionally the data in the {\field{forfeiture}} field. \item {\field{blockdelta\_stxl}} is a list of signed transactions, the transactions in the block. \end{itemize} The functions {\serfunc{seo\_blockdelta}} and {\serfunc{sei\_blockdelta}} serialize and deserialize block deltas. The type {\type{block}} is the product of {\type{blockheader}} and {\type{blockdelta}}. 
The functions {\serfunc{seo\_block}} and {\serfunc{sei\_block}} serialize and deserialize blocks. \begin{itemize} \item {\func{coinstake}} builds the coinstake transaction by using the staked asset possibly combined with forfeited rewards as the input and taking {\field{stakeoutput}} from the block delta for the output. \item {\func{ctree\_of\_block}} returns the compact tree of a block (approximating the ledger state before processing the block) by taking {\field{prevledger}} from the block header data and grafting on {\field{prevledgergraft}} from the block delta. We call this the {\defin{compact tree of a block}}. \item {\func{tx\_of\_block}} combines all the transactions in the block (including the coinstake) into one large transaction combining all the inputs and all the outputs. This is used to check validity of blocks. \item {\func{txl\_of\_block}} returns a list of all (unsigned) transactions in the block, including the coinstake transaction and the underlying transactions listed in {\field{blockdelta\_stxl}} of the block delta. \item {\func{rewfn}} returns the number of cants of the reward at the current block height. The reward schedule is the same as Bitcoins (except for the amount of precision), except with the assumption that the first 350000 blocks have already passed (since this was the block height for the snapshot). We begin counting with a block height of $1$. From blocks $1$ to $70000$, the block reward is $25$ fraenks (2.5 trillion cants). After this the reward halves every $210000$ blocks. Since the initial distribution contained (slightly less than) 14 million fraenks, this leads to cap of 21 million fraenks. \item {\func{valid\_block}} checks if a block is valid at the given height. It does this by looking up the staked asset and passing the information to {\func{valid\_block\_a}} which checks the following conditions: \begin{enumerate} \item The header must be valid. \item The transaction outputs in {\field{stakeoutput}} must be valid (as judged by {\func{tx\_outputs\_valid}}). \item If the staked asset has an explicit obligation, then ensure the first output on {\field{stakeoutput}} is of a preasset with the same amount of cants and the same obligation sent to the stake address.\footnote{This is to support ``loaning'' assets for staking.} \item All outputs in {\field{stakeoutput}} except possibly the first is explicitly must be marked as a reward and have a lock in the obligation at least as long as the value given by {\func{reward\_locktime}}. Furthermore, all the outputs must be sent to the stake address. If the first output in {\field{stakeoutput}} is not marked as a reward, then it must also be sent to the stake address, must be a Currency asset with the same number of cants as the staked asset and must have the same obligation (possibly the default {\val{None}} obligation) as the staked asset. \item The compact tree of the block must support the coinstake transaction and it must have a reward at least\footnote{This is to allow for collection of fees and of forfeiture of recent awards. The fact that the output is not too high is guaranteed later.} as high as the value given by {\func{rewfn}}. \item There are no duplicate transactions listed in {\field{blockdelta\_stxl}}. \item The graft in {\field{prevledgergraft}} is valid. \item Each transaction in {\field{blockdelta\_stxl}} has valid signatures, is valid and is supported by the compact tree of the block. 
Furthermore, none of these outputs are marked as rewards, none of these transactions spend the asset being staked. Finally, each transaction consumes at least as many cants as required. \item No two transactions in {\field{blockdelta\_stxl}} spend the same input. \item No two transactions in {\field{blockdelta\_stxl}} create ownership as an object (resp., as a proposition) at the same term address. \item If a transaction in {\field{blockdelta\_stxl}} creates ownership as an object (resp., as a proposition) at a term address, then the output of the coinstake transaction does not create the same kind of ownership at the term address.\footnote{It would make sense to simply disallow creation of non-currency assets in the coinstake transaction, but this is not currently the case.} \item If proof of forfeiture is given, then check it is valid and remember the number of cants being forfeited. \item Let $\cC$ be the result of transforming the compact tree of the block ({\func{ctree\_of\_block}}) using the transactions of the block ({\func{txl\_of\_block}}). The hash root of $\cC$ must be {\field{newledgerroot}}. \item Let $\tau=(\iota,o)$ be the transaction of the block. The following must hold: \begin{itemize} \item The cost of the outputs of $\tau$ (see {\func{out\_cost}}) is equal to the sum of the assets being spent along with the reward ({\func{rewfn}}) and (possibly) the number of cants being forfeited. \item The transformation of the current theory tree by $o$ must have hash root $\field{newtheoryroot}$. \item The transformation of the current signature tree by $o$ must have hash root $\field{newsignatroot}$. \end{itemize} Upon success, {\func{valid\_block}} returns the transformed theory tree and the transformed signature tree. Upon failure, either {\val{None}} is returned or an exception is raised. \end{enumerate} \end{itemize} \section{Databases for Block Information} There are three databases for blocks, all using the hash of the block header as the key. The module {\module{DbBlockHeader}} is a database for block headers (implemented using {\module{Dbbasic2keyiter}}) and the module {\module{DbBlockDelta}} is a database for block deltas (implemented using {\module{Dbbasic2}}). \section{Chains} There are additional types {\type{blockchain}} and {\type{blockheaderchain}}. These can be used to represent (nonempty) chains of blocks or block headers.\footnote{It is not clear if this is explicitly needed.} In each case the representation is as a pair where the first component should be the most recent block of block header and the second component is a list of the previous blocks or block headers in reverse order. The variable {\var{genesisledgerroot}} gives the ledger root of the initial compact tree with the initial distribution. The value is (as of September 2016): \begin{verbatim} fc25150b4880e27235d4878637d32f0ffe2280e6 \end{verbatim} \begin{itemize} \item {\func{blockchain\_headers}} converts a block chain into block header chain by dropping the block deltas. \item {\func{ledgerroot\_of\_blockchain}} takes a block chain and returns the value of {\field{newledgerroot}} in the latest block header data. \item {\func{valid\_blockchain}} checks if a block chain is valid at a given height. This requires checking the validity of each block and that each block header is a valid successor to the previous block header. It also requires keeping up with the theory tree and signature tree. 
In the case of the genesis block, the {\field{prevblockhash}} should be {\val{None}}, the {\field{prevledger}} should have hash root {\var{genesisledgerroot}}, the {\field{tinfo}} should be composed of the values in {\var{genesisccurrentstakemod}}, {\var{genesisfuturestakemod}} and {\var{genesistarget}} and the {\field{deltatime}} should be $600$.\footnote{Alternatively, one could set a ``genesis timestamp'' and enforce that the {\field{deltatime}} of the genesis block is the difference between the time stamp of the genesis block and the fixed genesis timestamp.} \item {\func{valid\_blockheaderchain}} checks the validity of a block header chain. It is similar to {\func{valid\_blockchain}} but only checks the headers are valid instead of the full blocks. \end{itemize}
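To make the arithmetic of {\func{retarget}} and {\func{cumul\_stake}} described above concrete, the two formulas can be sketched roughly as follows. This is an illustration written for this document using the legacy {\tt{Big\_int}} operations, not the actual code of the module.
\begin{verbatim}
open Big_int

(* max_target = 2^220, the minimum difficulty. *)
let max_target = power_int_positive_int 2 220

(* New target from the previous target tau and the delta time dtm:
   min (max_target, tau * (9000 + dtm) / 9600). *)
let retarget tau dtm =
  min_big_int
    max_target
    (div_big_int
       (mult_big_int tau (big_int_of_int (9000 + dtm)))
       (big_int_of_int 9600))

(* Cumulative stake: add floor(max_target / (tau * dtm * 2^-20)) to the
   previous cumulative stake, or 1 if that value is less than 1. *)
let cumul_stake prevcs tau dtm =
  let incr =
    div_big_int
      (mult_big_int max_target (big_int_of_int 1048576))  (* 2^20 *)
      (mult_int_big_int dtm tau)
  in
  add_big_int prevcs
    (if lt_big_int incr unit_big_int then unit_big_int else incr)
\end{verbatim}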
{ "alphanum_fraction": 0.7682352574, "avg_line_length": 62.7534246575, "ext": "tex", "hexsha": "3807526038be7967e1c9b80d52de7e186649cb8c", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2017-06-17T14:39:32.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-28T12:22:45.000Z", "max_forks_repo_head_hexsha": "151c6aaadea7b2d1ba46a172955ef2122bb66528", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tezosprime/tezosprime", "max_forks_repo_path": "doc/techdoc/block.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "151c6aaadea7b2d1ba46a172955ef2122bb66528", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tezosprime/tezosprime", "max_issues_repo_path": "doc/techdoc/block.tex", "max_line_length": 339, "max_stars_count": 16, "max_stars_repo_head_hexsha": "aa6a377abd3c0d244e276eadde6a84f5badb8549", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dalcoder/dalilcoin", "max_stars_repo_path": "doc/techdoc/block.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-24T15:10:52.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-26T10:54:22.000Z", "num_tokens": 8013, "size": 32067 }
\section{Prefix}
\newglossaryentry{pow}{name=POW, description={Part of word}}
\newglossaryentry{prep}{name=PREP, description={Preposition}}
A prefix\index{prefix} is a part of a word (\gls{pow}) that stands before the word stem. One word may have more than one prefix. In other words, prefixes can be added one by one right to the beginning of the stem. Each prefix has its own semantic value, and a word changes its meaning depending on which prefixes it carries. We can divide prefixes into several groups.
\begin{itemize}
\item Borrowed - prefixes that were borrowed from other languages (e.g. Latin, English etc.)
\item Native - Slavic prefixes that have a clear semantic value
\end{itemize}
Novoslovnica possesses 34 native\index{prefix!native} and 9 borrowed\index{prefix!borrowed} prefixes.
Native: bez, v, vně, vųtrě, voz, vy, do, za, zad, iz, kų, među, na, nad, naǐ, ne, ni, o, od, pa, po, pod, poslě, pra, pre, pred, prez, pri, pro, råz, s, sô, u, črez.
Borrowed: a, antï, aŭto, kŭazï, mono, multï, pan, para, super.
This chapter discusses the main prefixes, giving each of them a description and examples. One should also note that a major part of the prefixes are closely connected with prepositions. Such prefixes are marked with the PREP label.
\textbf{Bez} \gls{prep}
Meaning: “without”
This prefix is an equivalent of the word “without” in English. It means the absence of something that we mention. As a prefix, it has no direct equivalent in English. Nevertheless, the suffix “-less” has a very similar meaning. Look at the examples to get the idea.
\underline{Examples:}
\textit{Bez doma} - Without home
\textit{Bezdušen} - Without soul/Soulless
\textbf{V} \gls{prep}
Meaning: “into” (direction or placement)
This prefix has a semantic value of “leading into something”. The preposition corresponds to “in” or “into” in English.
\underline{Examples:}
\textit{V búdji} - In the building
\textit{Vhoditi} - Enter
\textbf{Vně} \gls{prep}
Meaning: “outside” (placement)
\underline{Examples:}
\textit{Vně trudnostëǐ} - Out of problems
\textit{Vněrečen} - Outspoken
\textbf{Vųtrě}
Meaning: “inside” (placement)
\underline{Examples:}
\textit{Vųtrě dušy} - Inside a soul
\textit{Vųtrěseben} - Thoughtful
\textbf{Voz}
Meaning: “upside” (direction)
\underline{Examples:}
\textit{Vozmožen} - Possible
\textit{Vozběg} - Take-off
\textit{Vozvyšen} - Sublime
\textbf{Vy}
Meaning: “outside” (direction)
This one means “leading outside something”. The prefix has no prepositional equivalent; remember that the word “Vy” means “You”.
\underline{Examples:}
\textit{Vyhod} - Exit
\textit{Vyglěd} - Outlook
\textbf{Do} \gls{prep}
Meaning: “to” (destination)
This prefix shows the destination of a process to some point. In English we can find an equivalent “to”.
\underline{Examples:}
\textit{Idati do stěny} - Go to the wall
\textit{Dodati} - Add (something we give to a set of something to increase its amount or capacity)
\textit{Dogovor} - Contract
\textbf{Za} \gls{prep}
Meaning: “for” (aim)
The semantic value of this prefix is the aim of an action. We use it when we want to reach something, some object. The English “for” reveals one of its semantic values in a similar way.
\underline{Examples:}
\textit{Idati za cělïü} - Go for the goal
\textit{Zavariti čaǐ} - To brew tea (in order to make it hot)
\textit{Zagovor} - Conspiracy
\textbf{Zad} \gls{prep}
Meaning: “behind, after” (follow)
This prefix has an artificial origin (this semantic value was split off from the previous prefix). It means placing an object behind another one. The equivalent is “behind”.
\underline{Examples:}
\textit{Zad učilišta} - Behind the institute
\textit{Zadnik} - Backdrop
\textbf{Iz} \gls{prep}
Meaning: “from”
This prefix equals the English word “from”.
\underline{Examples:}
\textit{Jesòm iz Moskvy} - I am from Moscow
\textit{Izhodnyǐ} - Basic (something from which we can develop something new)
\textbf{Kų}
Meaning: “what”
\underline{Examples:}
\textit{Kųda} - Where (What direction)
\textit{Kųdy} - When (What time)
\textbf{Na} \gls{prep}
Meaning: “on” (placement)
\underline{Examples:}
\textit{Na ulicji} - On the street
\textit{Naděja} - Hope
\textit{Naglås} - Aloud
\textbf{Nad} \gls{prep}
Meaning: “above, over” (placement)
\underline{Examples:}
\textit{Nad lěsom} - Above the forest
\textit{Nadzemnyǐ} - Overground
\textbf{Naǐ}
Meaning: “the most”
\underline{Examples:}
\textit{Naǐdobòr} - The kindest
\textit{Naǐskoryǐ} - The fastest
\textbf{Ne} \gls{prep}
Meaning: “not”
It has a similar value to the English “un-” prefix.
\underline{Examples:}
\textit{Nevëlïkyǐ} - Not big
\textit{Nesměšnyǐ} - Unfunny
\textbf{Ni} \gls{prep}
Meaning: “even this/so”
\underline{Examples:}
\textit{Ni doma, ni råboty} - Neither home nor work
\textit{Niĝda} - Never
\textbf{O} \gls{prep}
Meaning: “about”
\textbf{Od} \gls{prep}
Meaning: “from, since” (direction)
\underline{Examples:}
\textit{Odlično} - Outstanding, OK
\textit{Od časa} - From the time (Since ...)
\textit{Odidati} - To go away

\textbf{Pa}

Meaning: “not real”

\underline{Examples:}
\textit{Pasyn} - Stepson
\textit{Padčerj} - Stepdaughter

\textbf{Po} \gls{prep}

Meaning: “along” (direction)

\underline{Examples:}
\textit{Po ulicě} - Along the street
\textit{Poznane} - Studying

\textbf{Pod} \gls{prep}

Meaning: “under, beneath” (placement)

\underline{Examples:}
\textit{Pod vodoju} - Under the water
\textit{Podvoden} - Underwater

\textbf{Pra}

Meaning: “grand”

\underline{Examples:}
\textit{Prababa} - Great-grandmother
\textit{Praotec} - Ancestor

\textbf{Pre}

Meaning: “more than”

\underline{Examples:}
\textit{Prehod} - Transition
\textit{Premnogo} - A lot

\textbf{Pred} \gls{prep}

Meaning: “before, in front of” (placement)

\underline{Examples:}
\textit{Pred stěnoju} - In front of the wall
\textit{Predloga} - Offer

\textbf{Prez} \gls{prep}

Meaning: “rapidly through”

\underline{Examples:}
\textit{Prez trudnostj} - Through the challenge
\textit{Prezumen} - Witty

\textbf{Pri} \gls{prep}

Meaning: “close to, approach” (direction)

\underline{Examples:}
\textit{Pri lěsu} - Close to the forest
\textit{Prijěhati} - To arrive
\textit{Pridavnik} - Adjective

\textbf{Pro} \gls{prep}

Meaning: “rapidly through”

\underline{Examples:}
\textit{Pro novinu} - About news (brief speaking)
\textit{Proglås} - Declaration

\textbf{Råz}

Meaning: “ex, extra”

\underline{Examples:}
\textit{Råzličije} - Difference
\textit{Råzměn} - Exchange

\textbf{S} \gls{prep}

Meaning: “with”

\underline{Examples:}
\textit{S prijatelëm} - With a friend
\textit{Stvoriti} - To create

\textbf{Sô}

Meaning: “alongside”

\underline{Examples:}
\textit{Sôrádnik} - Collaborator
\textit{Sôglåsnik} - Consonant

\textbf{U} \gls{prep}

Meaning: “near, ready to start an action”

\underline{Examples:}
\textit{U gråda} - Near the city
\textit{Uhoditi} - To go away

\textbf{Črez} \gls{prep}

Meaning: “slowly through”

\underline{Examples:}
\textit{Črez lěs} - Through the forest
\textit{Črezměren} - Excessive
{ "alphanum_fraction": 0.7242713082, "avg_line_length": 19.5121293801, "ext": "tex", "hexsha": "4c6fe4a9c68af848b7ffb8440caf68d49fc8d7c3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "SlavDom/novoslovnica-book", "max_forks_repo_path": "content/prefix6.1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "SlavDom/novoslovnica-book", "max_issues_repo_path": "content/prefix6.1.tex", "max_line_length": 323, "max_stars_count": null, "max_stars_repo_head_hexsha": "8086752f2d6679f31e0f3b924ca02f670bcf9db9", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "SlavDom/novoslovnica-book", "max_stars_repo_path": "content/prefix6.1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2494, "size": 7239 }
% $Id$ \section{How to Adapt Applications for ESMF} \label{sec:Adoption} In this section we describe how to bring existing applications into the framework. \subsection{Individual Components} \begin{itemize} \item Decide what parts will become Gridded Components A Gridded Component is a self-contained piece of code which will be initialized, will be called once or many times to run, and then will be finalized. It will be expected to either take in data from other components/models, produce data, or both. Generally a computational model like an ocean or atmosphere model will map either to a single component or to a set of multiple nested components. \item Decide what data is produced A component provides data to other components using an ESMF State object. A component should fill the State object with a description of all possible values that it can export. Generally, a piece of code external to the component (the AppDriver, or a parent component) will be responsible for marking which of these items are actually going to be needed. Then the component can choose to either produce all possible data items (simpler but less efficient) or only produce the data items marked as being needed. The component should consult the \htmladdnormallink {CF data naming conventions}{http://cf-pcmdi.llnl.gov/} when it is listing what data it can produce. \item Decide what data is needed A component gets data from other components using an ESMF State object. The application developer must figure out how to get any required fields from other components in the application. \item Make the data blocks private A component should communicate to other components only through the framework. All global data items should be private to Fortran modules, and ideally should be isolated to a single derived type which is allocated at run time. \item Divide the code up into start/middle/end phases A component needs to provide 3 routines which handle initialization, running, and finalization. (For codes which have multiple phases of initialize, run, and finalize it is possible to have multiple initialize, run, and finalize routines.) The initialize routine needs to allocate space, initialize data items, boundary conditions, and do whatever else is necessary in order to prepare the component to run. For a sequential application in which all components are on the same set of processors, the run phase will be called multiple times. Each time the model is expected to take in any new data from other models, do its computation, and produce data needed by other components. A concurrent model, in which different components are run on different processors, may execute the same way. Alternatively, it may have its run routine called only once and may use different parts of the framework to arrange data exchange with other models. This feature is not yet implemented in ESMF. The finalize routine needs to release space, write out results, close open files, and generally close down the computation gracefully. \item Make a "Set Services" subroutine Components need to provide only a single externally visible entry point. It will be called at start time, and its job is to register with the framework which routines satisfy the initialize, run, and finalize requirements. If it has a single derived type that holds its private data, that can be registered too. 
\item Create ESMF Fields and FieldBundles for holding data An ESMF State object is fundamentally an annotated list of other ESMF items, most often expected to be ESMF FieldBundles (groups of Fields on the same grid). Other things which can be placed in a State object are Fields, Arrays (raw data with no gridding/coordinate information) and other States (generally used by coupling code). Any data which is going to be received from other components or sent to other components needs to be represented as an ESMF object. To create an ESMF Field the code must create an ESMF Array object to contain the data values, and usually an ESMF Grid object to describe the computational grid where the values are located. If this is an observational data stream the locations of the data values will be held in an ESMF Location Stream object instead of a Grid. \item Be able to read an ESMF clock During the execution of the run routine, information about time is transferred between components through ESMF Clocks. The component needs to be able to at least query a Clock for the current time using framework methods. \item Decide how much of the lower level infrastructure to use The ESMF framework provides a rich set of time management functions, data management and query functions, and other utility routines which help to insulate the user's code from the differences in hardware architectures, system software, and runtime environments. It is up to the user to select which parts of these functions they choose to use. \end{itemize} \subsection{Full Application} \begin{itemize} \item Decide on which components to use Select from the set of ESMF components available. \item Understand the data flow in order to customize a Coupler Component Examine what data is produced by each component and what data is needed by each component. The role of Coupler Components in the ESMF is to set up any necessary regridding and data conversions to match output data from one component to input data in another. \item Write or adapt a Coupler Component Decide on a strategy for how to do the coupling. There can be a single coupler for the application or multiple couplers. Single couplers follow a "hub and spoke" model. Multiple couplers can couple between subsets of the components, and can be written to couple either only one-way (e.g. output of component A into input of component B), or two-way (both A to B and B to A). The coupler must understand States, Fields, FieldBundles, Grids, and Arrays and ESMF execution/environment objects such as DELayouts. \item Use or adapt a main program The main program can be a copy of a driver found in any of the {\tt system\_tests} sub-directories. The customization needed is to {\tt use} the correct Component module files, to gain access to the {\tt SetServices} routines. Although ESMF provides example source code for the main program, it is {\bf not} considered part of the framework and can be changed by the user as needed. The final thing the main program must do is call {\tt ESMF\_Finalize()}. This will close down the framework and release any associated resources. The main program is responsible for creating a top-level Gridded Component, which in turn creates other Gridded and Coupler Components. We encourage this hierarchical design because it aids in extensibility - the top level Gridded Component can be nested in another larger application. The top-level component contains the main time loop and is responsible for calling the {\tt SetServices} entry point for each child component it creates. 
\end{itemize}
{ "alphanum_fraction": 0.7963279249, "avg_line_length": 40.1485714286, "ext": "tex", "hexsha": "5c71cfc5c916f9fdeed56f157a3ee2c7ed45edd5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_forks_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_forks_repo_name": "joeylamcy/gchp", "max_forks_repo_path": "ESMF/src/doc/ESMF_adoption.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_issues_repo_issues_event_max_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_issues_event_min_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_issues_repo_name": "joeylamcy/gchp", "max_issues_repo_path": "ESMF/src/doc/ESMF_adoption.tex", "max_line_length": 76, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_stars_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_stars_repo_name": "joeylamcy/gchp", "max_stars_repo_path": "ESMF/src/doc/ESMF_adoption.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-05T16:48:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-05T16:48:58.000Z", "num_tokens": 1545, "size": 7026 }
%!TEX root = ../dissertation.tex

% Introduction
\section{Introduction}
\label{sec:introduction}

Your introduction goes here \ldots \\
This is an example of how to cite a bibliography entry \cite{johndoe} \\
This is an example of how to refer to an acronym \gls{IEEE}
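% For reference: the acronym used above is assumed to be defined elsewhere in
% this template (hypothetical label and wording, assuming the glossaries
% package is loaded in dissertation.tex), for example:
%
%   \usepackage[acronym]{glossaries}   % in the preamble
%   \newacronym{IEEE}{IEEE}{Institute of Electrical and Electronics Engineers}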
{ "alphanum_fraction": 0.75, "avg_line_length": 22.6666666667, "ext": "tex", "hexsha": "ea7aaca0f21d283135f1dbe4d8273da59a3521e2", "lang": "TeX", "max_forks_count": 14, "max_forks_repo_forks_event_max_datetime": "2020-08-18T06:53:33.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-23T05:40:48.000Z", "max_forks_repo_head_hexsha": "c16571d1d18ca9dfd4b4e15b95a00d9244cc8f11", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "farruhnet/Latex", "max_forks_repo_path": "sections/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c16571d1d18ca9dfd4b4e15b95a00d9244cc8f11", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "farruhnet/Latex", "max_issues_repo_path": "sections/introduction.tex", "max_line_length": 72, "max_stars_count": 23, "max_stars_repo_head_hexsha": "c16571d1d18ca9dfd4b4e15b95a00d9244cc8f11", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "farruhnet/Latex", "max_stars_repo_path": "sections/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-18T22:24:00.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-05T13:04:34.000Z", "num_tokens": 71, "size": 272 }
\documentclass{article}%
\usepackage[T1]{fontenc}%
\usepackage[utf8]{inputenc}%
\usepackage{lmodern}%
\usepackage{textcomp}%
\usepackage{lastpage}%
\usepackage{amsmath}%
%
\title{2019 AIME II Problems}%
\author{MAA}%
\date{\today}%
%
\begin{document}%
\normalsize%
\maketitle%
\section{Problems}%
\label{sec:Problems}%
\begin{enumerate}%
\item%
Two different points, $C$ and $D$, lie on the same side of line $AB$ so that $\triangle ABC$ and $\triangle BAD$ are congruent with $AB = 9$, $BC=AD=10$, and $CA=DB=17$. The intersection of these two triangular regions has area $\frac mn$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
%
\item%
Lily pads $1,2,3,\ldots$ lie in a row on a pond. A frog makes a sequence of jumps starting on pad $1$. From any pad $k$ the frog jumps to either pad $k+1$ or pad $k+2$ chosen randomly with probability $\frac{1}{2}$ and independently of other jumps. The probability that the frog visits pad $7$ is $\frac{p}{q}$, where $p$ and $q$ are relatively prime positive integers. Find $p+q$.
%
\item%
Find the number of $7$-tuples of positive integers $(a,b,c,d,e,f,g)$ that satisfy the following system of equations:
\begin{align*}
abc&=70,\\
cde&=71,\\
efg&=72.
\end{align*}
%
\item%
A standard six-sided fair die is rolled four times. The probability that the product of all four numbers rolled is a perfect square is $\frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
%
\item%
Four ambassadors and one advisor for each of them are to be seated at a round table with $12$ chairs numbered in order $1$ to $12$. Each ambassador must sit in an even-numbered chair. Each advisor must sit in a chair adjacent to his or her ambassador. There are $N$ ways for the $8$ people to be seated at the table under these conditions. Find the remainder when $N$ is divided by $1000$.
%
\item%
In a Martian civilization, all logarithms whose bases are not specified are assumed to be base $b$, for some fixed $b\ge2$. A Martian student writes down \[3\log(\sqrt{x}\log x)=56\] \[\log_{\log x}(x)=54\] and finds that this system of equations has a single real number solution $x>1$. Find $b$.
%
\item%
Triangle $ABC$ has side lengths $AB=120,BC=220$, and $AC=180$. Lines $\ell_A,\ell_B$, and $\ell_C$ are drawn parallel to $\overline{BC},\overline{AC}$, and $\overline{AB}$, respectively, such that the intersections of $\ell_A,\ell_B$, and $\ell_C$ with the interior of $\triangle ABC$ are segments of lengths $55,45$, and $15$, respectively. Find the perimeter of the triangle whose sides lie on lines $\ell_A,\ell_B$, and $\ell_C$.
%
\item%
The polynomial $f(z)=az^{2018}+bz^{2017}+cz^{2016}$ has real coefficients not exceeding $2019$, and $f\left(\frac{1+\sqrt{3}i}{2}\right)=2015+2019\sqrt{3}i$. Find the remainder when $f(1)$ is divided by $1000$.
%
\item%
Call a positive integer $n$ $k$-pretty if $n$ has exactly $k$ positive divisors and $n$ is divisible by $k$. For example, $18$ is $6$-pretty. Let $S$ be the sum of positive integers less than $2019$ that are $20$-pretty. Find $\frac{S}{20}$.
%
\item%
There is a unique angle $\theta$ between $0^{\circ}$ and $90^{\circ}$ such that for nonnegative integers $n$, the value of $\tan{\left(2^{n}\theta\right)}$ is positive when $n$ is a multiple of $3$, and negative otherwise. The degree measure of $\theta$ is $\frac{p}{q}$, where $p$ and $q$ are relatively prime integers. Find $p+q$.
% \item% Triangle $ABC$ has side lengths $AB=7, BC=8,$ and $CA=9.$ Circle $\omega_1$ passes through $B$ and is tangent to line $AC$ at $A.$ Circle $\omega_2$ passes through $C$ and is tangent to line $AB$ at $A.$ Let $K$ be the intersection of circles $\omega_1$ and $\omega_2$ not equal to $A.$ Then $AK=\frac mn,$ where $m$ and $n$ are relatively prime positive integers. Find $m+n.$ % \item% For $n \ge 1$ call a finite sequence $(a_1, a_2 \ldots a_n)$ of positive integers progressive if $a_i < a_{i+1}$ and $a_i$ divides $a_{i+1}$ for all $1 \le i \le n-1$. Find the number of progressive sequences such that the sum of the terms in the sequence is equal to $360$. % \item% Regular octagon $A_1A_2A_3A_4A_5A_6A_7A_8$ is inscribed in a circle of area $1.$ Point $P$ lies inside the circle so that the region bounded by $\overline{PA_1},\overline{PA_2},$ and the minor arc $\widehat{A_1A_2}$ of the circle has area $\frac{1}{7},$ while the region bounded by $\overline{PA_3},\overline{PA_4},$ and the minor arc $\widehat{A_3A_4}$ of the circle has area $\frac{1}{9}.$ There is a positive integer $n$ such that the area of the region bounded by $\overline{PA_6},\overline{PA_7},$ and the minor arc $\widehat{A_6A_7}$ of the circle is equal to $\frac{1}{8}-\frac{\sqrt2}{n}.$ Find $n.$ % \item% Find the sum of all positive integers $n$ such that, given an unlimited supply of stamps of denominations $5,n,$ and $n+1$ cents, $91$ cents is the greatest postage that cannot be formed. % \item% In acute triangle $ABC$ points $P$ and $Q$ are the feet of the perpendiculars from $C$ to $\overline{AB}$ and from $B$ to $\overline{AC}$, respectively. Line $PQ$ intersects the circumcircle of $\triangle ABC$ in two distinct points, $X$ and $Y$. Suppose $XP=10$, $PQ=25$, and $QY=15$. The value of $AB\cdot AC$ can be written in the form $m\sqrt n$ where $m$ and $n$ are positive integers, and $n$ is not divisible by the square of any prime. Find $m+n$. % \end{enumerate} % \end{document}
{ "alphanum_fraction": 0.7043478261, "avg_line_length": 76.1267605634, "ext": "tex", "hexsha": "9eb8e935fa70a09974a0f1b0fba41d77d602d011", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9fb3bfe5ffa57295795cb830a4584a1baa344ef", "max_forks_repo_licenses": [ "zlib-acknowledgement" ], "max_forks_repo_name": "chen-siyuan/awg", "max_forks_repo_path": "output/2019_AIME_II.tex", "max_issues_count": 18, "max_issues_repo_head_hexsha": "d9fb3bfe5ffa57295795cb830a4584a1baa344ef", "max_issues_repo_issues_event_max_datetime": "2020-06-17T03:32:10.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-13T00:38:03.000Z", "max_issues_repo_licenses": [ "zlib-acknowledgement" ], "max_issues_repo_name": "chen-siyuan/amc-worksheet", "max_issues_repo_path": "output/2019_AIME_II.tex", "max_line_length": 607, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d9fb3bfe5ffa57295795cb830a4584a1baa344ef", "max_stars_repo_licenses": [ "zlib-acknowledgement" ], "max_stars_repo_name": "chen-siyuan/amc-worksheet", "max_stars_repo_path": "output/2019_AIME_II.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-31T23:09:26.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-13T20:59:37.000Z", "num_tokens": 1709, "size": 5405 }
\documentclass{beamer}
\usepackage{appendixnumberbeamer}
\graphicspath{{../figures/}}
%
% Choose how your presentation looks.
%
% For more themes, color themes and font themes, see:
% http://deic.uab.es/~iblanes/beamer_gallery/index_by_theme.html
%
\mode<presentation>
{
  \usetheme{Warsaw}      % or try Darmstadt, Madrid, Warsaw, ...
  \usecolortheme{beaver} % or try albatross, beaver, crane, ...
  \usefonttheme{default} % or try serif, structurebold, ...
  \setbeamertemplate{navigation symbols}{}
  \setbeamertemplate{caption}[numbered]
  \setbeamertemplate{footline}[frame number]
  \setbeamertemplate{headline}{}
}

\usepackage{graphicx}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{subcaption}
\usepackage{amsmath,amssymb,bm,bbm}
\newtheorem{proposition}{Proposition}
\usepackage{graphicx}
\usepackage[]{natbib}
\bibliographystyle{aer}
\usepackage{hyperref}
\hypersetup{colorlinks,citecolor=blue,urlcolor=blue,linkcolor=blue}
\DeclareMathOperator*{\argmin}{arg\,min}

\title[]{Targeting Treatments with Strategic Agents}
\author{Evan Munro (Stanford GSB)}
\date{Causal Inference Seminar \\ December 2, 2021 }

\begin{document}

\begin{frame}
\titlepage
\end{frame}

\begin{frame}{Introductory Example}
\begin{itemize}
\item An online store wants to target a coupon to reluctant buyers \citep{rossi1996value}
\begin{enumerate}
\item Who will not order the product in the absence of the coupon
\item Who will buy the product if they receive the coupon
\end{enumerate}
\item The store wants to avoid targeting coupons to customers who are always-buyers
\item The store runs a small randomized experiment to learn what kind of online shopping behaviors are predictive of reluctant buyers (e.g.\ adding an item to the cart but not checking out)
\item The store runs an ongoing coupon program which targets consumers that are predicted to be reluctant buyers \citep{manski2004statistical}
\item What kind of incentives are introduced, and how do they affect the optimal targeting rule?
\end{itemize}
\end{frame}

\begin{frame}{Motivation}
\begin{itemize}
\item Treating individuals heterogeneously rather than uniformly can improve outcomes when treatment effects are heterogeneous
\item With more individual-level data available, personalized policy is increasingly common
\item In many marketplace settings, treating individuals heterogeneously introduces incentives for individuals to change their behavior to receive a better treatment.
\end{itemize}
\end{frame}

\begin{frame}{More Examples}
\begin{enumerate}
\item Heterogeneous pricing of consumer products based on browsing patterns.
\item Blocking social media posts based on keywords and engagement metrics.
\item Allocating credit based on non-traditional consumer metrics, such as cellphone data.
\end{enumerate}
\vspace{0.5in}
\textbf{Personalized Policy Introduces Incentives to Misreport}
\begin{itemize}
\item Incentives exist in all three of these settings for individuals to change characteristics to receive a better treatment (a lower price, more credit, or avoiding social media bans)
\end{itemize}
\end{frame}

\begin{frame}{Overview}
\begin{enumerate}
\item Model the treatment allocation problem as a Stackelberg game. Agents react strategically in response to any given treatment rule.
\item Show how standard approaches from the literature on policy learning are sub-optimal.
\item Derive experiments under which the optimal targeting rule is recovered that do not rely on parametric assumptions about how agents behave.
\end{enumerate} \end{frame} \begin{frame}{Related Literature} \begin{itemize} \item CS/econ theory literature on strategic classification and manipulation: \cite{dong2018strategic}, \cite{hardt2016strategic}, \cite{ball2020scoring}, \cite{frankel2019improving}, \cite{bjorkegren2020manipulation} \begin{itemize} \item This paper applies to the causal setting rather than prediction setting, and requires new techniques. \end{itemize} \item Statistical treatment rules and experiment design: \citet{manski2004statistical}, \citet{hirano2009asymptotics}, \cite{athey2020policy}, \cite{wager2019experimenting} \begin{itemize} \item Existing literature treats observed characteristics as i.i.d. \end{itemize} \item Online optimization: \cite{flaxman2004online}, \cite{duchi2015optimal} \item Robustness: \cite{bergemann2013robust} \begin{itemize} \item The procedure is robust to different distributions of types and different actions of individuals. \end{itemize} \end{itemize} \end{frame} \begin{frame}{Treatment Rules with Exogenous Covariates} Each of $i \in 1, \ldots, n$ individuals have exogenously characteristics and potential outcomes jointly drawn from some unknown distribution: $X_i, \{Y_i(1), Y_i(0) \} \sim \mathcal G$. $X_i \in \mathcal X$ is discrete, with $| \mathcal X| = d$. \vspace{\baselineskip} \begin{enumerate} \item Planner specifies $\delta(x) = Pr(W_i = 1 | X_i = x)$ for each $x \in \mathcal X$. \item A binary treatment $W_i$ is sampled from $\mbox{Bernoulli}(\delta_i)$, where $\delta_i = \delta(X_i)$. \item The observed outcome is $Y_i = Y_i(W_i)$. \end{enumerate} \textbf{Goal:} Allocate treatments to maximize $J(\delta) = \mathbb E[Y_i(W_i)]$: \[ \arg \max_{\delta \in [0, 1]^d} \sum \limits_{x \in \mathcal X} \Big( \delta(x) \mathbb E[Y_i(1) | X_i = x] + ( 1- \delta(x)) \mathbb E[Y_i(0) | X_i = x] \Big) f(x)\] \end{frame} \begin{frame}{Cutoff Rules are Optimal} The Conditional Average Treatment Effect is \[ \tau(x) = \mathbb E[Y_i(1) - Y_i(0) | X_i = x] \] \begin{proposition} \label{prop:cesgood} Assume that $f(x) >0$ for all $x \in \mathcal X$, then the policy that maximizes $J(\delta)$ is defined by $\delta^c(x) = \mathbbm{1} ( \tau(x) > 0)$ for $x \in \mathcal{X}$. \end{proposition} The proof is very easy: \[ J(\delta) = \sum \limits_{x \in \mathcal X} \Big( \delta(x) \mathbb E[Y_i(1) | X_i = x] + ( 1- \delta(x)) \mathbb E[Y_i(0) | X_i = x] \Big) f(x) \] \[ \frac{\partial J(\delta)}{\partial \delta_x} = E[Y_i(1) - Y_i(0) | X_i = x] f(x) \] \end{frame} \begin{frame}{Conditional Empirical Success Rule} $\tau(x)$ can be estimated via a Bernoulli randomized experiment. \[ \hat \tau(x) = \frac{\sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 1) Y_i }{ \sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 1) } - \frac{\sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 0) Y_i }{ \sum \limits_{i=1}^n \mathbbm{1}(X_i = x, W_i = 0) } \] \end{frame} \begin{frame}{Treatment Targeting with Endogenous Covariates} Each of $i \in 1, \ldots, n$ individuals have potential covariates and potential outcomes jointly drawn from some unknown distribution: $X_i(\cdot), Y_i(\cdot) \sim \mathcal G$. \vspace{\baselineskip} \textbf{Stackelberg Game} \begin{enumerate} \item Planner specifies $\delta(x) = Pr(W_i = 1 | X_i = x)$ for each $x \in \mathcal X$. \item For $ i \in [n]$, agent $i$ reports covariates $X_i(\delta) \in \mathcal X$. 
In many settings, this potential covariates function is the result of utility maximization of randomly drawn $U_i(\cdot)$: \[ X_i(\delta) = \arg \max_{x} \delta(x) U_i(x, 1) + (1 - \delta(x)) U_i(x, 0) \] \item For $i \in [n]$, $W_i$ is sampled from $\delta(X_i)$. \item The outcome $Y_i = Y_i(W_i)$ is observed \end{enumerate} \vspace{\baselineskip} \textbf{Goal:} Allocate treatments to maximize $J(\delta) = \mathbb E[Y_i(W_i)]$. \end{frame} \begin{frame}{Toy Model} $n$ individuals have unobserved $Z_i \sim \mbox{Bernoulli}(0.5)$ and $C_i = 0.75$. \[ X_i(\delta) = Z_i + (1 - Z_i) \mathbbm{1}(\delta(1) - \delta(0) > C_i) \] \[ Y_i(W_i) = Z_i W_i + (1- W_i) ( 1- Z_i) \] Under randomized treatment assignment, so that $\delta^0(x) = \frac{1}{2}$: \centering \includegraphics[width= 0.7\textwidth]{../figures/separated.png} Objective value $J(\delta^0) = 0.5 $ \end{frame} \begin{frame} {Strategic Behavior } \begin{itemize} \item Under randomized assignment, can estimate CATEs: $\tau(1, \delta^0) = 1$ and $ \tau(0, \delta^0) = -1$. \item Implement Manski (2004) Conditional Empirical Success rule: $\delta^c(1) = 1$ and $\delta^c(0) = 0$. \end{itemize} \vspace{\baselineskip} Resulting distribution of reported covariates: \centering \includegraphics[width= 0.7\textwidth]{../figures/pooled.png} Objective value $\Pi(\delta^c) = 0.5$ \end{frame} \begin{frame}{Endogenous CATEs} \textbf{Problem}: The marginal distribution of $X_i$ and the distribution of $Y_i(W_i)$ conditional on $x$ now depend on the choice of $\delta(x)$. \vspace{\baselineskip} We can now redefine the CATE: \[ \tau(x, \delta) = \mathbb E[ Y_i(1) - Y_i(0) | X_i(\delta) = x] \] A natural extension of the Manski (2004) cutoff rule is the fixed point rule: \[ \delta^c(x) = \mathbbm{1} (\tau (x, \delta^c) >0 ) \] \textbf{Question: } Does the optimal targeting rule have this form? \end{frame} \begin{frame}{Toy Model: Optimal Rule } \begin{itemize} \item $\delta^*(1) = 0.75$ and $\delta^*(0) = 0$ is optimal \item Under this rule, everyone with $X_i = 1$ has a positive CATE, but the planner does not treat them with probability 1. This maintains a distribution that is more amenable to targeting. \end{itemize} \centering \includegraphics[width= 0.7\textwidth]{../figures/separated.png} Objective value $\Pi(\delta^c) = 0.75$ \end{frame} \begin{frame} {Defining the Optimal Rule with Discrete Covariates} We can define $\mu(1, x, \delta) = \mathbb E[Y_i(1) | X_i(\delta) = x], \mu(0, x, \delta) = \mathbb E[Y_i(0) | X_i(\delta) = x]$ and $f(x, \delta) = Pr(X_i(\delta) = x)$, and assume that each of these three functions are continuously differentiable in $\delta$. \[ J(\delta) = \sum \limits_{x \in \mathcal X} f(x, \delta) \Big ( \delta(x) \mu(1, x, \delta) + (1 - \delta(x)) \mu(0, x, \delta) \Big) \] Taking the derivative with respect to $\delta(x)$: \begin{align*} \frac{ \partial J(\delta) }{\partial \delta_x} &= (\mu(1, x, \delta) - \mu(0, x, \delta)) f(x, \delta) \\ & + \sum \limits_{z \in \mathcal X} \frac{\partial f(z, \delta)}{\delta_x} \Big ( \delta(z) \mu(1, z, \delta) + (1 - \delta(z)) \mu(0, z, \delta) \Big) \\ & + \sum \limits_{z \in \mathcal X} f(z, \delta) \Big ( \delta(z) \frac{\partial \mu(1, z, \delta)}{\delta_x} + (1 - \delta(z)) \frac{\partial \mu(0, z, \delta)}{ \partial \delta_x} \Big) \end{align*} \end{frame} \begin{frame} It is possible that $\frac{ \partial J(\delta) }{\partial \delta_x} (\delta_x = 1) <0$ even when $\tau(x, \delta) >0$ so that $\mu(1, x, \delta) - \mu(0, x, \delta) >0$. 
\vspace{\baselineskip} Two take-aways: \begin{itemize} \item Can't guarantee a corner solution \item No longer estimable in a Bernoulli randomized experiment \end{itemize} \vspace{\baselineskip} The optimal rule can include \textbf{randomization}: $\delta^*(x) = \alpha_x \mathbbm{1}{(\tau(x, \delta^*) >0}) + \beta_x \mathbbm{1}( \tau(x, \delta^*) < 0 )$ with $\alpha_x <1$ or $\beta_x > 0$. \vspace{\baselineskip} \textbf{Next: Extension to Continuous Covariates} \end{frame} \begin{frame}{Coupon Policy Model(1)} \begin{itemize} \item Planner: Profit-maximizing online store that would like to target reluctant buyers with a discount coupon \item Customers have $\theta_i \sim \mbox{Bernoulli}(0.5)$. Those with $\theta_i =0$ are always buyers and those with $\theta_i = 1$ are reluctant buyers, complete purchase with 75\% probability only if they receive a coupon. \item Store observes $X_i \in \mathcal X$, which is behavior that may be correlated with their unobserved type. \item Customers have preferred behavior $Z_i \sim \mbox{Normal}(0, 1)$ and cost of changing behavior $C_i \sim \mbox{Uniform}(0, 10)$. $\theta_i= \mathbbm{1}( Z_i >0)$. Customers value coupon at \$5. \[ U_i(x) = 5\cdot \delta(x; \beta) - C_i (x - Z_i)^2. \] The reporting function is \[ X_i(\beta) = \arg \max_{x} U_i(x). \] \end{itemize} \end{frame} \begin{frame}{Coupon Policy Model(2)} The store's profit per product is \$10 for a purchase without the coupon and \$5 with it. The resulting potential outcomes are defined as the potential profit for each individual as a function of whether or not they receive a coupon: \[ Y_i(W_i) = 5 \cdot (0.75 \theta_i + ( 1- \theta_i)) W_i+ 10 \cdot ( 1 - \theta_i) (1 - W_i). \] With continuous covariates, to keep the optimization problem finite-dimensional, we parametrize the treatment allocation rule. \[ \delta(x) = \frac{ 1}{ 1 + e^{-x \beta}} \] The optimal policy is the logit allocation policy that maximizes profit: \[ \beta^* = \arg \max_{\beta} \mathbb E[Y_i(W_i)]. \] \end{frame} \begin{frame}{CES Rule without Strategic Behavior} \centering \includegraphics[width=0.7\textwidth]{nonstrategic} \end{frame} \begin{frame}{CES Rule with Strategic Behavior} \centering \includegraphics[width=0.7\textwidth]{strategic_ces} \begin{itemize} \item CES Rule discriminates too much ; strategic response to the cutoff rule leads to sub-optimal profit (Average of \$5.60). \end{itemize} \end{frame} \begin{frame}{Optimal Rule with Strategic Behavior} \centering \includegraphics[width=0.7\textwidth]{strategic_optimal.pdf} \begin{itemize} \item Fuzzy boundary leads to dampened strategic response and optimal profit of \$5.80. \end{itemize} \end{frame} \begin{frame}{How to Estimate Optimal Rule In Practice?} Two requirements of an alternative approach for finding the optimal targeting rule: \begin{itemize} \item Do not make overly parametric assumptions on agent behavior \item Learn how strategic behavior affects the objective \end{itemize} Need to randomize $\beta$ \begin{enumerate} \item At each step $t$, use random perturbations of $\hat \beta^t$ to estimate derivative of the objective function $\nabla J(\hat \beta^t)$ \item Take gradient steps towards $\beta^*$ \end{enumerate} Local experiment approach of Wager and Xu (2020). 
\end{frame} \begin{frame}{Assumptions on Environment} \begin{itemize} \item At each point in time $t = 1, \ldots, T$, $n$ agents arrive and the Stackelberg game is run \item The planner can vary the targeting rule for each agent, and the agents are aware of the parameters of the rule they receive \end{itemize} \end{frame} \begin{frame}{Algorithm: Dynamic Experiment} Planner completes the following procedure at each step $t=1, \ldots, T$. $\hat \beta^1$ is some naive initial estimate. \begin{enumerate} \item Zero-mean perturbation of $\hat \beta^{t}$; $\hat \beta^{t}_i = \hat \beta^{t} + \alpha_n \epsilon^t_i$, where $\epsilon^t_{i} \in \{-1,+1\}^K$ and sampled randomly. \item Planner announces treatment allocation function $\delta(X_i; \hat \beta_i^{t})$ \item Individuals report data $X_i(\hat\beta_i^{t})$ \item Planner treats individual based $\delta(X_i; \hat \beta_i^{t})$ \item Outcome $Y_i(W_i)$ is realized \item Run OLS to estimate $K$-length coefficient vector $\hat \gamma^t$: \[ \hat Y^t_i = \gamma^t \alpha_n \epsilon_i + r_i \] \item Update $\hat \beta $ using a version of online mirror descent with step size $\eta$: \[ \hat \beta ^{(t+1)} = \hat \beta^{(t)} + 2 \eta \frac{\hat \gamma^t} {t+1} \] \end{enumerate} \end{frame} \section{Results} \begin{frame}{Theorem: Consistency of Gradient Estimates} \begin{itemize} \item $\alpha_n \rightarrow 0$ as $n \rightarrow \infty$ \end{itemize} \begin{theorem} Fix some $\hat \beta^t$. If the perturbation size $\alpha_n = \alpha n^{-b}$ for $0 < b < 0.5$, then $\hat \gamma$ converges to the $K$-dimensional gradient of the objective: \[ \lim_{n \rightarrow \infty} \mathbb P \left( \left |\hat \gamma^t - \nabla J( \hat \beta^t) \right| > \epsilon \right) =0 \] for any $\epsilon>0$. \end{theorem} \end{frame} \begin{frame} {Theorem: Average Regret} \begin{theorem} If the norm of the gradient of $J$ is bounded by $M$, so $||\nabla J(\beta)||_2 \leq M$, $J$ is $\sigma-$strongly concave and the step size $\eta > \sigma^{-1}$. If we run the experimental procedure for $T$ time periods, then we have that the regret decays at rate $O(1/t)$, so for any $\beta \in \mathbb R^K$: \[ \lim_{n \rightarrow \infty} P\left[ \frac{1}{T} \sum \limits_{t=1}^T t(J (\beta) - J (\hat \beta^t)) \leq \frac{\eta M^2}{2} \right] = 1 \] \end{theorem} Interpretation: \begin{itemize} \item Average difference in outcomes between optimal targeting rule and the experiment goes to zero as $T$ and $n$ grows large \end{itemize} \end{frame} \begin{frame}{Other Possible Optimization Approaches} \begin{itemize} \item Zero-th order optimization without asymptotic arguments: For smooth and strongly convex objectives, the Gaussian smoothing of Flaxman (2005) requires function evaluations on the order of $O(d^2)$ in order to converge. \item Global optimization with smoothness assumptions only: Bayesian optimization approach \end{itemize} \end{frame} %\begin{frame}{Proof Sketch} %\begin{itemize} %\item Use Lemma 1 from \citet{orabona2014generalized} to show that: %\[ \sum \limits_{t=1}^T t(\beta - \hat \beta^t)' \hat \gamma^t \leq \frac{1}{2\eta} \sum \limits_{t=1}^T t||\beta -\hat \beta^t||_2^2 + \frac{\eta}{2} \sum \limits_{t=1}^T ||\hat\gamma^t||_2^2 \] %\item Replace gradient estimate with true gradient and use Lemma 1 to show that the error goes to zero as $n \rightarrow \infty$. %\item Bound gradient with $M$ and use strong concavity to obtain the result. 
%\end{itemize} %\end{frame} \begin{frame}{Simulation: Coupon Model} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{regret.pdf} \caption{Convergence of Dynamic Experiment to Optimal Coupon Allocation Policy} \label{fig:price} \end{figure} \end{frame} \begin{frame}{Conclusion} \begin{itemize} \item Targeting rules that are optimal under strategic behavior are distinct from cut-off rule proposed in the policy learning literature. \item The optimal policy takes into account how the distribution of CATEs shift in response to heterogenous treatments. \item Estimating the optimal policy requires varying the dependence of treatments on observed characteristics and observing how outcomes respond. \item \textbf{Takeaway}: Combining the flexibility of statistical optimization approaches with economic models of incentives can be used to estimate robust policy in more complex environments. \end{itemize} \end{frame} \appendix \begin{frame}[allowframebreaks]{References} \bibliography{sample} \end{frame} \end{document}
{ "alphanum_fraction": 0.6969230769, "avg_line_length": 39.5178197065, "ext": "tex", "hexsha": "eec24854612cef0c9541ff900123809533889e23", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-12-19T02:51:44.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-19T02:51:44.000Z", "max_forks_repo_head_hexsha": "9361ad4c0220ac279a5e5193cc0698726d67f4aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "evanmunro/personalized-policy", "max_forks_repo_path": "writing/causal_inf.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9361ad4c0220ac279a5e5193cc0698726d67f4aa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "evanmunro/personalized-policy", "max_issues_repo_path": "writing/causal_inf.tex", "max_line_length": 311, "max_stars_count": null, "max_stars_repo_head_hexsha": "9361ad4c0220ac279a5e5193cc0698726d67f4aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "evanmunro/personalized-policy", "max_stars_repo_path": "writing/causal_inf.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5871, "size": 18850 }
\graphicspath{{./lab04/Images/}}
\maketitlepage{App Development}{in Android Studio}{Lab 4: Using Web APIs}
\maketocpage

\section{JSON}
JSON stands for JavaScript Object Notation. It is a human-readable data format that stores its data in key-value pairs. All its keys must be strings but the values can be strings, numerical values, other JSON objects, arrays, boolean values or even null. Keys and values are separated by a colon while key-value pairs are separated by a comma. A JSON object must be wrapped in curly brackets. An example of a JSON object is the following.
\begin{lstlisting}[style=A_Java]
{
    "name": "John",
    "age": 30,
    "courses": ["math", "programming"],
    "contact": {
        "phone": "9915311",
        "email": "[email protected]"
    }
}
\end{lstlisting}
For those familiar with Python, this looks just like a literal dictionary, although the keys and values are more restricted in JSON. The main purpose of JSON in this course is to get data from REST APIs but it has other purposes in general.

\subsection{JSONObject API}
The \texttt{JSONObject} class supports creating JSON objects from strings.
\begin{lstlisting}[style=A_Java]
String jsonString = "{" +
        "\"name\": \"John\"," +
        "\"age\": 30," +
        "\"courses\": [\"math\", \"programming\"]," +
        "\"contact\": {\"phone\": \"9915311\", \"email\": \"[email protected]\"}" +
        "}";
try {
    JSONObject jsonObject = new JSONObject(jsonString);
} catch (JSONException e) {
    e.printStackTrace();
}
\end{lstlisting}
We can now parse this \texttt{JSONObject} but in order to do so, we must know the keys for any value we might want as well as the type of their corresponding values. We can parse the \texttt{JSONObject} above into Java types with the following code.
\begin{lstlisting}[style=A_Java]
try {
    JSONObject jsonObject = new JSONObject(jsonString);
    String name = jsonObject.getString("name");
    int age = jsonObject.getInt("age");
    JSONArray jsonArray = jsonObject.getJSONArray("courses");
    String course0 = jsonArray.getString(0);
    String course1 = jsonArray.getString(1);
    JSONObject nestedJsonObject = jsonObject.getJSONObject("contact");
    String phone = nestedJsonObject.getString("phone");
    String email = nestedJsonObject.getString("email");
} catch (JSONException e) {
    e.printStackTrace();
}
\end{lstlisting}
Another way to construct a JSON object is to create an initially empty object and add to it manually, like in the following code.
\begin{lstlisting}[style=A_Java]
try {
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("key", "value");
    jsonObject.put("valid", false);
} catch (JSONException e) {
    e.printStackTrace();
}
\end{lstlisting}

\subsection{Gson API}
\texttt{Gson} is a library from Google that can convert Java objects into JSON and vice versa. Add the following to your Gradle dependencies.
\begin{lstlisting}[style=A_txt]
compile group: 'com.google.code.gson', name: 'gson', version: '2.8.2'
\end{lstlisting}
Suppose we have the following class.
\begin{lstlisting}[style=A_Java]
public class Person {
    String id;
    String name;
    int age;

    public Person(String id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }
}
\end{lstlisting}
We can use \texttt{Gson} to construct a JSON object from an instance of this class, where the keys are the names of its fields and the values are taken from the instance. Reference-type instance variables will result in nested JSON objects (given they are not Strings or arrays).
\begin{lstlisting}[style=A_Java]
Gson gson = new Gson();
Person person = new Person("A341#5_X", "Mortimer", 99);
String jsonPerson = gson.toJson(person);
// outcome: {"id": "A341#5_X", "name": "Mortimer", "age": 99}
\end{lstlisting}
To convert the other way around we can use reflection. Note that any field missing from the JSON object will be set to its default value and excess keys will be ignored.
\begin{lstlisting}[style=A_Java]
Gson gson = new Gson();
String jsonString = "{\"id\": \"1A\", \"name\": \"Lisa\", \"age\": 45}";
Person person = gson.fromJson(jsonString, Person.class);
\end{lstlisting}
Like \texttt{JSONObject}, \texttt{Gson} can also be used to add key-value pairs one by one to an initially empty JSON object.
\begin{lstlisting}[style=A_Java]
JsonObject jsonObject = new JsonObject();
jsonObject.addProperty("key", "value");
\end{lstlisting}

\section{Calling web services}
There are multiple ways to call a web service. It can be done using only the Java standard library, but that is somewhat tedious since you have to build a client from scratch and handle threading yourself, so we will use a library called \href{https://github.com/koush/ion}{ion}, an asynchronous networking and image loading library.\\
There are a lot of web services out there and you will have to read their documentation to use them. Some provide means of getting data, uploading data, getting data based on input and so on. Others may require keys or authentication. It will be your job to familiarize yourself with the web service you want to use.

The first example we will look at is a simple GET request to the Chuck Norris API. Upon a click we get a joke and update some \texttt{TextView} with it. To get a response for our API request, we can use the following, given that we want the result as a string (other options include a byte array or a \texttt{Gson} JSON object).
\begin{lstlisting}[style=A_Java]
Ion.with(/* your activity's context */)
        .load(/* url as string */)
        .asString() // if you want the results as String
        .setCallback(new FutureCallback<String>() {
            @Override
            public void onCompleted(Exception e, String result) {
                // when results are in
            }
        });
\end{lstlisting}
There are various options available, such as setting headers, a body (for POST requests) and timeouts. If everything goes as expected, the exception parameter \texttt{e} should be \texttt{null}. If not, you should handle the exception appropriately (errors such as timeouts or invalid urls). The source code for the Chuck Norris example can be found \href{https://github.com/JonSteinn/AndroidDevelopment/tree/master/examples/lab4/chucknorris}{here} and a programming session making it \href{https://youtu.be/PS8CxRP2XjA}{here}.\\

\section{Creating a simple Web API (Optional)}
One approach to writing apps is to keep the logic on a web service, making Android do little more than update the UI on events and serve as a client to the web service. This way it is easy to extend your app to another platform. Now we will look at creating a very simple web service with Flask. Feel free to adapt this to any other framework if you have prior knowledge of web services or if you have some other preference over using Python. Note that the localhost of the emulator, your phone and your computer are not the same, so a web service running on localhost on your computer cannot be accessed that way from the emulator or the phone. The emulator can use the IP address \texttt{10.0.2.2}, while you would probably have to deploy the service to use it from your actual phone.
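As a rough sketch (assuming the Flask service described below is running on your computer on port 5000 and exposes an endpoint such as \texttt{/square/5}; adjust the port and route to whatever your service actually uses), a request from the emulator could look like this:
\begin{lstlisting}[style=A_Java]
// 10.0.2.2 maps to the host machine's localhost when running in the emulator.
// The port and route below are placeholders for whatever your service uses,
// and someTextView is whatever view you want to update.
String url = "http://10.0.2.2:5000/square/5";
Ion.with(this)
        .load(url)
        .asString()
        .setCallback(new FutureCallback<String>() {
            @Override
            public void onCompleted(Exception e, String result) {
                if (e != null) {
                    e.printStackTrace(); // e.g. timeout or service not running
                    return;
                }
                // result holds the response body, e.g. "25"
                someTextView.setText(result);
            }
        });
\end{lstlisting}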
In the provided example we create a very simple web service that returns the square of a number. Note that you need \href{https://www.python.org/downloads/}{Python} and \href{http://flask.pocoo.org}{Flask} to run the service. The source code can be found \href{https://github.com/JonSteinn/AndroidDevelopment/tree/master/examples/lab4/squarenumber}{here}, the programming session \href{https://youtu.be/3i2AVI5w6VE}{here}, a silent video of the Python and Flask setup \href{https://youtu.be/8Z4pielSO70}{here} and a guide on how to deploy the web service \href{https://youtu.be/d15uchNJhak}{here}.

\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{web_api.png}
\caption{Shared logic for multiple platforms}
\label{fig:logshare}
\end{figure}

\section{Assignment}
Use the \href{https://developer.oxforddictionaries.com}{Oxford dictionaries API} to create an app where you can enter an English word and get its definition in English displayed. The app should also contain a choice (a radio button, for example) between Spanish and German (or any two distinct non-English languages) and display the word's meaning in the chosen language on the same event that displays its definition. A prototype for the app can be seen in figure \ref{fig:proto}. In order to do so you must get a key (there is a free one available) which you should add as header fields when making a request.
\begin{lstlisting}[style=A_Java]
Ion.with(this)
        .load(url)
        .addHeader("Accept", "application/json")
        .addHeader("app_id", "<someid>")
        .addHeader("app_key", "<somekey>")
        // and so on
\end{lstlisting}
A rough sketch of how the radio-button choice could be wired to such a request is given after figure \ref{fig:proto}.

\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{ui_scetch_for_assig.png}
\caption{Assignment prototype}
\label{fig:proto}
\end{figure}
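One possible way to wire up the language choice for the assignment is sketched below. The ids \texttt{languageGroup} and \texttt{radioSpanish}, the variable \texttt{word}, the helper \texttt{buildUrl} and the \texttt{TextView}s are placeholders: you will have to match them to your own layout and to the endpoints described in the API documentation.
\begin{lstlisting}[style=A_Java]
// Placeholder ids, variables and URL building; consult the Oxford dictionaries
// API documentation for the actual endpoints and use your own id/key values.
RadioGroup languageGroup = (RadioGroup) findViewById(R.id.languageGroup);
boolean spanish = languageGroup.getCheckedRadioButtonId() == R.id.radioSpanish;
String targetLanguage = spanish ? "es" : "de";
String url = buildUrl(word, targetLanguage); // your own helper method

Ion.with(this)
        .load(url)
        .addHeader("Accept", "application/json")
        .addHeader("app_id", "<someid>")
        .addHeader("app_key", "<somekey>")
        .asString()
        .setCallback(new FutureCallback<String>() {
            @Override
            public void onCompleted(Exception e, String result) {
                // Parse the returned JSON (with JSONObject or Gson)
                // and update your TextViews here.
            }
        });
\end{lstlisting}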
{ "alphanum_fraction": 0.7413188553, "avg_line_length": 61.3958333333, "ext": "tex", "hexsha": "c570a1cb9bf4b080bbda0818f560d7d3b783e9ff", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2018-12-05T10:52:07.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-24T20:45:07.000Z", "max_forks_repo_head_hexsha": "2d4920d044b552ca1180ca11dfee7456cfc6218c", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "JonSteinn/AndroidDevelopment", "max_forks_repo_path": "tex/lab04/lab04.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2d4920d044b552ca1180ca11dfee7456cfc6218c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "JonSteinn/AndroidDevelopment", "max_issues_repo_path": "tex/lab04/lab04.tex", "max_line_length": 1380, "max_stars_count": null, "max_stars_repo_head_hexsha": "2d4920d044b552ca1180ca11dfee7456cfc6218c", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "JonSteinn/AndroidDevelopment", "max_stars_repo_path": "tex/lab04/lab04.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2126, "size": 8841 }
% -*- TeX -*- -*- UK -*- -*- Soft -*- \chapter*{ABBREVIATIONS} \addcontentsline{toc}{chapter}{ABBREVIATIONS} % use: % \ac{IR} \begin{acronym}\itemsep=-10pt\parsep=-10pt \acro{2-D}{Two-Dimensional} \acro{2D}{Two-Dimensional} \acro{3-D}{Three-Dimensional} \acro{3D}{Three-Dimensional} \acro{A2D}{Analogue to Digital} \acro{AAM}{Air-to-Air Missile} \acro{AASRAM}{Air-to-Air Short Range Missile} \acro{AA}{Air-to-Air} \acro{ACE}{Adaptive Communication Environment} \acro{AC}{Alternating Current} \acro{ADC}{Analogue to Digital Converter} \acro{AD}{Analogue to Digital (conversion)} \acro{AFB}{Air Force Base} \acro{AFCRL}{Air Force Cambridge Research Laboratories} \acro{AFRL}{Air Force Research Laboratories} \acro{AGC}{Automatic Gain Control} \acro{AGL}{Above Ground Level} \acro{AHP}{Analytical Hierarchy Process} \acro{AI}{Artificial Intelligence} \acro{AIM}{Air Intercept Missile} \acro{AIRS}{Atmospheric Infrared Sounder} \acro{AMBBB}{Ambient Temperature Blackbody} \acro{AMRAAM}{Advanced Medium Range Air-to-Air Missile} \acro{AM}{Amplitude Modulation} \acro{ANSI}{American National Standards Institute} \acro{API}{Application Programming Interface} \acro{APU}{Auxiliary Power Unit} \acro{ASCII}{American Standard Code for Information Interchange} \acro{ASIC}{Application Specific Integrated Circuit} \acro{ASL}{Above Sea Level} \acro{ASRAAM}{Advanced Short Range Air-to-Air Missile} \acro{ASU}{Air Support Unit} \acro{ATP}{Acceptance Test Procedure} \acro{ATR}{Automatic Target Recognition} \acro{ATS}{Advanced Thermographic Systems} \acro{AVI}{Audio Video Interleave} \acro{BAe}{British Aerospace} \acro{BB}{Black Body} \acro{BCU}{Battery Coolant Unit} \acro{BDI}{Buffered Direct Injection} \acro{BDRF}{Bidirectional Reflection Function} \acro{BFS}{Bomem File System} \acro{BGRAMS}{Bomem Grams a software product} \acro{BGT}{Bodenseewerk Ger\"atetechnik Gmbh} \acro{BH}{I/O Bulkhead input/output} \acro{BIT}{Built-in Test} \acro{BLT}{Bilinear Transformation} \acro{BNC}{Bayonet Neill-Concelman} \acro{BPDF}{Bidirectional Polarization Distribution Functions} \acro{BPR}{Bad Pixel Replacement} \acro{BRDF}{Bidirectional Reflectance Distribution Function} \acro{BSP}{Binary Space Partitions} \acro{BVR}{Beyond Visual Range} \acro{C$^4$I$^3$RS}{Command, Control, Communication, Computers, Information, Intelligence, Informatics, Reconnaissance and Surveillance} \acro{CAD}{Computer Aided Design} \acro{CAT}{Computed Axial Tomography} \acro{CCD}{Charge Coupled Device} \acro{CCIR}{Comite Consultatif International des Radio Communications} \acro{CCM}{Counter-countermeasure} \acro{CDSD}{Carbon Dioxide Spectroscopic Database} \acro{CDS}{Correlated Double Sampling} \acro{CD}{Compact Disk} \acro{CEDIP}{Name of the camera manufacturer} \acro{CFAR}{Constant False Alarm Rate} \acro{CFI}{Customer Furnished Information} \acro{CML}{Configurable Math Library} \acro{CMOS}{Complementary Metal Oxide Semiconductor} \acro{CmSim2}{Countermeasure Simulator (version 2)} \acro{CmSim}{Countermeasure Simulator (version 1)} \acro{CMS}{Classified MicroSeek} \acro{CMT}{Cadmium Mercury Telluride} \acro{CnC}{Command and Control} \acro{CNES}{Centre National d'Etudes Spatiales, France} \acro{CNR}{Clutter-to-noise ratio} \acro{CNUC}{Continuous Non-Uniformity Correction} \acro{CO$_2$}{Carbon Dioxide} \acro{CODATA}{Committee on Data for Science and Technology} \acro{COM}{Common Object Model} \acro{CORBA}{Common Object Request Broker Architecture} \acro{COTS}{Commercial of the Shelf} \acro{CPU}{Central Processing Unit} \acro{CSCI}{Computer Software Configuration 
Item} \acro{CSIR}{Council for Scientific and Industrial Research} \acro{CSV}{Comma Separated Variable} \acro{CVS}{Concurrent Versions System} \acro{CTAN}{Comprehensive TeX Archive Network} \acro{CTIA} {Capacitive Transimpedance Amplifiers} \acro{CVD}{Chemical vapour deposition} \acro{DAS}{Denel Aerospace } \acro{DCS}{Direct Current Subtraction} \acro{DCM}{Direction Cosine Matrix} \acro{DC}{Direct Current} \acro{DD}{Denel Dynamics} \acro{DEMS}{Digital Elevation Maps} \acro{DEM}{Digital Elevation Map} \acro{DFPN}{Dark current Fixed Pattern Noise} \acro{DFT} {Discrete Fourier Transform} \acro{DIRCM}{Directed Infrared Countermeasure} \acro{DIRSIG}{Digital Imaging and Remote Sensing Image Generation} \acro{DITSIM}{Directed Infrared Countermeasure and Imaging Toolkit Simulator} \acro{DI}{Direct Injection} \acro{DLE}{Differential Linearity Error } \acro{DLSWC}{Denel Land Systems Western Cape} \acro{DLU}{Directed Laser Unit} \acro{DL}{Digital Level} \acro{dl}{Digital Level} \acro{DLea}[DL]{Deep Learning} \acro{DLL}{Dynamic Load Library} \acro{DOI}{Digital Object Identifier} \acro{DOTR}{Denel Overberg Test Range} \acro{DPA}{Digital Processing Assembly} \acro{DPSS}{Defence, Peace Safety and Security, an operating unit of the CSIR} \acro{DRI}{Detection, Recognition and Identification} \acro{DSIM}{Directed Infrared Countermeasure Simulator} \acro{DSP}{Digital Signal Processing} \acro{DTD}{Document Type Definition} \acro{DVD}{Digital Video Disk or Digital Versatile Disk} \acro{ECCM}{Electronic Counter-Countermeasure} \acro{ECEF}{Earth Centred Earth Fixed} \acro{EDERI}{Electronic Defence Evaluation and Research Institute} \acro{EDS}{Electronic Defence Systems} \acro{EEPROM}{Electronically Erasable Programmable Read-Only Memory} \acro{EGT}{Exhaust Gas Temperature} \acro{EICAS}{Engine Indication and Crew Alert System} \acro{EMF}{Electromotive force} \acro{EMC}{Electromagnetic Compatibility} \acro{EMI}{Electro-magnetic interference} \acro{EMP}{Electro-magnetic pulse} \acro{ENU}{East North Up} \acro{ENVI}{An image processing product} \acro{EofN}{east of north} \acro{EOSS}{Electro-Optical Simulation System} \acro{EO}{Electro Optics} \acro{EPS}{European Polar System } \acro{ESD}{East-South-Down (coordinate convention)} \acro{EULA}{End-User Licence Agreement} \acro{EUMETSAT}{EUropean organization for the exploitation of METeorological SATellites} \acro{EWC}{Electronic Warfare Centre} \acro{EW}{Electronic Warfare} \acro{FAR}{False alarm rate} \acro{FASCODE}{Fast Atmospheric Signature Code} \acro{FET}{Field-Effect Transistor} \acro{FIFO}{First In First Out memory system} \acro{FIGHTERS}{ HITRAN Atmospheric Workstation} \acro{FIRST}{Fourier Infrared Spectrometer Telops, an imaging FTIT camera} \acro{FLIR}{Forward Looking Infrared (an infrared camera, also a company} \acro{FMECA}{Failure modes, effects and criticality analysis} \acro{FM}{Frequency Modulation} \acro{FOM}{Figure of merit} \acro{FOVEF}{Field of view Expansion Factor} \acro{FOV}{Field of View} \acro{FOR}{Field of Regard} \acro{FPA}{Focal Plane Array} \acro{FPGA}{Field-Programmable Gate Array} \acro{FPN}{Fixed Pattern Noise} \acro{FPO}{Fixed Pattern Offset} \acro{FPP}{Focal Plane Processor} \acro{FR}{Frame Rate} \acro{FTIR}{Fourier-transform infrared spectrometer} \acro{FTSW}{Fourier Transform Software} \acro{FYI}{For Your Information} \acro{GA-Sim I/O}{Gimbal-Assembly Simulator Input Output} \acro{GA-Sim}{Gimbal-Assembly Simulator} \acro{GA}{Gimbal Assembly} \acro{GB}{Gigabyte} \acro{GCN}{Guidance, Navigation and Control} \acro{GEISA}{Gestion et Etude 
des Informations Spectroscopiques Atmosph\'{e}riques} \acro{GENLOCK}{Generator Lock} \acro{Ge}{Germanium} \acro{GIDSC}{GEISA/IASI Database Scientific Committee} \acro{GIL}{Global Interpreter Lock} \acro{GIS}{Geographical Information System} \acro{GLSL}{OpenGL Shading Language} \acro{GLUT}{Graphics Library Utilities} \acro{GMI}{Gate Modulated Input} \acro{GMT}{Greenwich Mean Time} \acro{GPS}{Global Positioning System} \acro{GPU}{Graphics Processor Unit} \acro{GP}{Grid Pixel Detector} \acro{GRS}{Geodetic Reference System} \acro{GSA}{Gimbal Servo Amplifier} \acro{GS}{Gimbal Simulator} \acro{GUI}{Graphical User Interface} \acro{GWIR}{Germanium (band) Wave Infrared, a 1.1--1.5~$\mu$m camera} \acro{H/W}{Hardware} \acro{HDR}{High Dynamic Range} \acro{HD}{High Definition} \acro{HE}{High Explosive} \acro{HFOV}{Horizontal Field-of-View} \acro{HGH}{Name of a company} \acro{HILS}{Hardware In the Loop Simulation } \acro{HITEMP}{High Temperature molecular absorption database} \acro{HITRAN}{High-resolution transmission molecular absorption database} \acro{HLAN}{HILS LAN} \acro{HLA}{High Level Architecture} \acro{HOTBB}{Hot Blackbody} \acro{HOTCBB}{Hot Calibration Blackbody} \acro{HTML}{Hypertext Markup Language} \acro{I/O}{Input/Output} \acro{IASI}{Infrared Atmospheric Sounding Interferometer} \acro{IAS}{Indicated Air Speed} \acro{ICD}{Interface Control Document} \acro{ICT}{Information, Computer and Telecommunications} \acro{IC}{Integrated circuit} \acro{ID}{Identification} \acro{IDE}{Integrated Development Environment} \acro{IEC}{International Electrotechnical Commission} \acro{IEEE}{Institute for Electrical and Electronics Engineers} \acro{IFF}{Identification Friend or Foe} \acro{IFOV}{Instantaneous Field of View} \acro{IF}{InterFace} \acro{IGES}{Initial Graphics Exchange Specification} \acro{IIR}{Infinite Impulse Response} \acro{IID}{Independent and Identically Distributed} \acro{ILE}{Integral Linearity Error} \acro{IMU}{Inertial Measurement Unit} \acro{InGaAs}{Indium Gallium Arsenide} \acro{InSb}{Indium Antimonide} \acro{IPnetwork}[IP]{Internet Protocol} \acro{IPintprop}[IP]{Intellectual Property} \acro{IRCM}{Infrared Countermeasure} \acro{IRECM}{Infrared Electronic Countermeasure} \acro{IRIG}{Inter-Range Instrumentation Group, a time code format } \acro{IRIS}{Infrared Imaging Seeker} \acro{IRML}{Infrared Mobile Laboratory} \acro{IRSA}{Infrared Sensor Assembly} \acro{IRST}{Infrared Search and Track} \acro{IR}{Infrared} \acro{ISA}{International Standard Atmosphere} \acro{ISLS}{Integrating Sphere Light Source} \acro{ISO}{International Organisation for Standardisation} \acro{ISSWG}{IASI Sounding Science Working Group} \acro{ITBMS}{International Infrared Target, Background Modelling \& Simulation} \acro{ITC}{International Institute for Geo-Information Science and Earth Observation } \acro{ITRF}{International Terrestrial Reference Frame} \acro{ITR}{Integrate Then Read} \acro{IUGG}{International Union for Geodesy and Geophysics} \acro{IWR}{Integrate While Read} \acro{JavaHAWKS}{{\bf Java} HITRAN Atmospheric Workstation} \acro{JCG}{Jam Code Generator} \acro{JMASS}{Joint Modelling and Simulation Systems} \acro{JPG}{Graphics file type developed by Joint Photographic Experts Group} \acro{KACST}{King Abdul Aziz City for Science and Technology, Saudi Arabia} \acro{KIAS}{Knots Indicated Air Speed} \acro{KSA}{Kingdom of Saudi Arabia} \acro{KTAS}{Knots True Air Speed} \acro{LAN}{Local Area Network} \acro{LIDAR}{Light Detection and Ranging} \acro{LN}{Liquid Nitrogen} \acro{LOD}{Level of Detail} \acro{LOS}{Line of 
sight} \acro{LOWTRAN}{Low Resolution Transmission} \acro{LPG}{Liquid Petroleum Gas} \acro{LST}{Local Solar Time} \acro{LTS}{Long Term Support} \acro{LUT}{Lookup Table} \acro{LVCMOS}{Low voltage Complementary Metal Oxide Silicon} \acro{LVDS}{Low Voltage Differential Signalling} \acro{LWIR}{Long Wave Infrared, 8--12~$\mu$m} \acro{LW}{Long Wave 8--12~$\mu$m} \acro{ManPADS}{Man Portable Air Defence System} \acro{masl}{meters above sea-level} \acro{MAW}{Missile Approach Warner} \acro{MBD}{Matra-BAe Dynamics} \acro{MB}{Megabyte} \acro{MCP}{Missile Control Processor} \acro{MCSO}{Modelling and Simulation Coordination Office} \acro{MCMC}{Markov-Chain Monte Carlo} \acro{MCT}{Mercury Cadmium Telluride} \acro{MDT}{Minimum detectable temperature} \acro{MIR}{Mid Infrared} \acro{MLS}{Mid-latitude Summer} \acro{ML}{Machine Learning} \acro{MODTRAN}{MODerate spectral resolution atmospheric TRANSmittance algorithm and computer model} \acro{MOSFET}{Metal Oxide Semiconductor Field Effect Transistor} \acro{MO}{Magneto-Optical} \acro{MOOC}{Massive Open Online Course} \acro{MORTICIA}{Monte-Carlo Optical Rendering for Theatre Investigation of Capability Under the Influence of the Atmosphere} \acro{MPI}{Message Passing Interface} \acro{MRT}{Minimum resolvable temperature} \acro{MSE}{Mean Squared Error} \acro{MSL}{Metres above (mean) Sea Level} \acro{MTBF}{Mean Time Between Failures} \acro{MTF}{Modulation Transfer Function} \acro{MTL}{Missile Approach-Warner-Tracker-Laser} \acro{MTV}{Magnesium-Teflon\textsuperscript{\textregistered}-Viton\textsuperscript{\textregistered}} \acro{MVP}{Missile Verification Processor} \acro{MVU}{Missile Verification Unit} \acro{MWIR}{Medium Wave Infrared, 3--5~$\mu$m} \acro{MW}{Medium Wave 3--5~$\mu$m} \acro{MWS}{Missile Warning System} \acro{NAD}{North American Datum} \acro{NASA}{National Aeronautics and Space Administration} \acro{NAVD}{North American Vertical Datum } \acro{NA}{Not Available} \acro{NA}{Numerical aperture} \acro{NDA}{Non-Disclosure Agreement} \acro{ND}{Neutral Density} \acro{NED}{North-East-Down (coordinate convention)} \acro{NEE}{Noise equivalent irradiance (the symbol E is used for irradiance)} \acro{NEL}{Noise equivalent radiance} \acro{NEM}{Noise equivalent exitance} \acro{NEP}{Noise equivalent power} \acro{NER}{Noise equivalent reflectance } \acro{NETC}{Noise equivalent target contrast} \acro{NETD}{Noise equivalent temperature difference} \acro{NETD}{Noise Equivalent Temperature Difference} \acro{NE}{North East} \acro{NFOV}{Narrow Field of View} \acro{NGF}{New Generation Flare} \acro{NIR}{Near Infrared, 1--2~$\mu$m} \acro{NMEA}{NMEA 0183 standard for communication} \acro{NMISA}{National Metrology Institute of South Africa} \acro{NOAA}{National Oceanic and Atmospheric Administration} \acro{NTSC}{National Television System Committee} \acro{NUC}{Non-Uniformity Correction} \acro{NULL}{zero} \acro{NU}{Non-Uniformity} \acro{NVESD}{Night Vision and Electronic Sensors Directorate} \acro{NVG}{Night Vision Goggles} \acro{NW}{North West} \acro{OEM}{Other Equipment Manufacturer} \acro{OM}{OSSIM Modelling} \acro{OO}{Object Oriented} \acro{OPSF}{Optical Point Spread Function } \acro{OSG}{OpenSceneGraph} \acro{OSMOSIS}{} \acro{OSSIM}{Optronic System Simulator} \acro{OSS}{Optronics Sensor Systems} \acro{OS}{Operating System} \acro{OSG}{Open Scene Graph} \acro{OTA}{Off-tail Angle} \acro{OTB}{Overberg Test Range} \acro{OTF}{Optical transfer function} \acro{OTP}{Operational Test Procedure} \acro{PAL}{Phase Alternation Line, television format} \acro{PbS}{Lead Sulphide} 
\acro{PbSe}{Lead Selenium} \acro{PCI}{Peripheral Compact Interface} \acro{PCI}{Peripheral Component Interconnect} \acro{PCP}{Platform Control Processor} \acro{PC}{Personal Computer} \acro{PDF}{Portable Document Format} \acro{PFD}{Pre-flight demonstration} \acro{PI}{Proportional-Integral} \acro{PID}{Proportional-Integral-Derivative} \acro{PG}{Parliamentary Grant} \acro{POC}{Point of Contact} \acro{PRNU}{Photo Response Non-Uniformity} \acro{PSD}{Power Spectral Density} \acro{PSF}{Point Spread Function} \acro{PSU}{Power Supply Unit} \acro{PVM}{Parallel Virtual Machine} \acro{PV}{Photovoltaic} \acro{QE}{Quantum Efficiency} \acro{QNH}{QNH is a Q code relating barometric pressure with altitude } \acro{QWIP}{Quantum Well Infrared Photodetector} \acro{RAM}{Random Access Memory} \acro{RCS}{Radar Cross Section} \acro{RF}{Radio Frequency} \acro{RGB}{Red, Green and Blue} \acro{RHC}{Right Handed Coordinate System} \acro{RH}{Relative humidity } \acro{RMS}{Root Mean Square} \acro{rms}{Root Mean Square} \acro{ROIC}{Read Out Integrated Circuit} \acro{ROI}{Region of interest} \acro{RPG}{Recommended Practice Guide} \acro{RPT}{Report} \acro{RS-232}{Recommended Standard-232} \acro{RSA} {Rivest-Shamir-Adleman} \acro{RSS}{Radiometry, Signatures and Simulation} \acro{RSSqr}[RSS]{Residual Sum of Squares} \acro{RT/CORBA}{Real Time Common Object Request Broker Architecture } \acro{RTAI}{Real-time Application Interface for Linux} \acro{RTCORBA}{Real Time Common Object Request Broker Architecture} \acro{RTF}{Real-time Framework (HILS environment)} \acro{RTNET}{Real-time Network} \acro{RTN}{Real-Time Network} \acro{RTS}{Random Telegraph Signal} \acro{RT}{Real Time} \acro{Rx}{Receive} \acro{S/W}{Software} \acro{SAAF}{South African Air Force} \acro{SAICSIT}{South African Institute for Computer Scientists and Information Technologists} \acro{SAM}{Surface to Air Missile} \acro{SANDF}{South African National Defence Force} \acro{SAP}{Systolic array processor} \acro{SAST}{South African Standard Time} \acro{SA}{Surface to Air (missile)} \acro{SBC}{Single-board Computers} \acro{SCD}{Semiconductor Devices (Israeli detector company)} \acro{SCOM}{Serial Communications} \acro{SCR}{Signal-to-clutter ratio} \acro{SCSI}{Small Computer System Interface} \acro{SCS}{The Society for Modeling & Simulation International} \acro{SDD}{Software Design Document} \acro{SDK}{Software Development Kit} \acro{SDLC}{Synchronous Data-Link Control} \acro{SDL}{Simple DirectMedia Layer} \acro{SDSP}{Sensor Digital Signal Processor} \acro{SEM}{Sensor Electronics Module} \acro{SE}{South East} \acro{SF}{Simply Fortran} \acro{SGD}{Saab Grintek Defence Pty. 
Ltd.} \acro{SHA}{Sample and Hold Amplifier} \acro{SID}{System Identification} \acro{SIGGRAPH}{Special Interest Group in Graphics} \acro{SIG}{Synthetic Image Generator} \acro{SIMD}{Single instruction multiple data} \acro{SIMIS}{Simulation for Imaging Infrared Systems} \acro{SIMS}{Science Inventory Management System} \acro{SI}{Systeme Internationale} \acro{SLR}{Sight-Line Rate} \acro{SNR}{Signal-to-Noise Ratio} \acro{SoF}{Start of Frame} \acro{SOP}{Safety Operating Procedures} \acro{SOW}{Statement of Work} \acro{SPIE}{International Society for Optical Engineering} \acro{SPP}{Sensor Pre-Processor} \acro{SP}{Single Pixel Detector} \acro{SRS}{Software Requirement Specification} \acro{ssh} {Secure Shell} \acro{SSS}{System/Subsystem Specification} \acro{STD}{Standard Deviation} \acro{STDP}{Spike-Timing-Dependent Plasticity} \acro{STLib}[STL]{Standard Template Library} \acro{STLit}[STL]{Stereo Lithography} \acro{STP}{Standard Temperature and Pressure} \acro{STR}{Signal to threshold ratio} \acro{SVC}{Software Version Control} \acro{svn}{subversion} \acro{SWIR}{Short Wave Infrared, 1.8--2.5~$\mu$m} \acro{SW}{South West} \acro{TAO}{The ACE Object Request Broker} \acro{TAS}{True Air Speed} \acro{TBA}{To Be Advised} \acro{TBC}{To Be Confirmed} \acro{TBDL}{To Be Determined Later} \acro{TB}{Terabyte} \acro{TCP/IP}{Transmission Control Protocol/Internet Protocol} \acro{TCP}{Transmission Control Protocol} \acro{TDI}{Time delayed integration} \acro{TEC}{Thermo-Electric Cooler} \acro{TFDC}{Test Flight and Development Center} \acro{TIFF}{Tagged Image File Format} \acro{TPM}{Technical performance measure} \acro{TP}{Test Point} \acro{BP}{Blue Test Point} \acro{TR1}{Technical Release 1, a C++ library} \acro{TTL}{Transistor-transistor Logic} \acro{TTP}{Targeting Task Performance} \acro{TUG} {TeX Users Group} \acro{TV}{Television} \acro{Tx}{Transmit} \acro{UAV}{Unmanned Aerial Vehicle} \acro{UDP}{User Datagram Protocol} \acro{UK}{United Kingdom} \acro{UML}{Unified Modelling Language} \acro{UNIX}{A computer operating system, not an acronym. UNIX is a pun on MULTICS} \acro{URI}{Uniform Resource Identifier} \acro{URL}{Uniform Resource Locator} \acro{URS}{User Requirement Specification} \acro{USAF}{Unites States Air Force} \acro{USA}{United States of America} \acro{USB}{Universal Serial Bus} \acro{USGS}{United States Geological Survey} \acro{USSR}{United Socialist Soviet Republics} \acro{USS}{United States Standard} \acro{US}{United States} \acro{UUT}{Unit Under Test} \acro{UV}{Ultraviolet} \acro{VBA}{Visual Basic for Applications} \acro{VFOV}{Vertical Field-of-View} \acro{VGA}{Video Graphics Array} \acro{VLWIR}{Very Long Wave Infrared} \acro{VLW}{Very Long Wave 8--12~$\mu$m } \acro{VM}{Virtual Machine} \acro{VPN}{Virtual Private Network} \acro{VS}{Visual Studio} \acro{VVA}{Validation, Verification and Accreditation} \acro{VV}{Validation and Verification} \acro{WGS}{The World Geodetic System} \acro{XML}{Extensible Markup Language} \acro{XPSP}{Microsoft Windows XP (operating system) Service Pack} \acro{XP}{Microsoft Windows XP (operating system)} \end{acronym}
{ "alphanum_fraction": 0.7786309345, "avg_line_length": 40.0120481928, "ext": "tex", "hexsha": "c8f8cc4055c433ec595d83a389a41bd6a7896bdf", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-23T05:09:24.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-23T05:09:24.000Z", "max_forks_repo_head_hexsha": "3c622ec0010d79aee210d2ce7cf2b6cba79c1835", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NelisW/libraddask", "max_forks_repo_path": "doc/Report/abbrev.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3c622ec0010d79aee210d2ce7cf2b6cba79c1835", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NelisW/libraddask", "max_issues_repo_path": "doc/Report/abbrev.tex", "max_line_length": 136, "max_stars_count": 4, "max_stars_repo_head_hexsha": "3c622ec0010d79aee210d2ce7cf2b6cba79c1835", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NelisW/libraddask", "max_stars_repo_path": "doc/Report/abbrev.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T11:33:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-20T21:31:25.000Z", "num_tokens": 5783, "size": 19926 }
% Template for ICASSP-2010 paper; to be used with:
% mlspconf.sty - ICASSP/ICIP LaTeX style file adapted for MLSP, and
% IEEEbib.bst - IEEE bibliography style file.
% --------------------------------------------------------------------------
\documentclass{article}
\usepackage{amsmath,graphicx,mlspconf}

\copyrightnotice{978-1-4673-7454-5/15/\$31.00 {\copyright}2015 IEEE}
\toappear{2015 IEEE International Workshop on Machine Learning for Signal Processing, Sept.\ 17--20, 2015, Boston, USA}

% Example definitions.
% --------------------
\def\x{{\mathbf x}}
\def\L{{\cal L}}

% Title.
% ------
\title{Filtering of Frequency-Transformed Image for Privacy Preserving Facial Recognition \\(DRAFT v1.0)}
%
% Single address.
% ---------------
\name{Author(s) Name(s)\thanks{Thanks to XYZ agency for funding.}}
\address{Author Affiliation(s)}
%
% For example:
% ------------
%\address{School\\
% Department\\
% Address}
%
% Two addresses (uncomment and modify for two-address case).
% ----------------------------------------------------------
%\twoauthors
% {A. Author-one, B. Author-two\sthanks{Thanks to XYZ agency for funding.}}
% {School A-B\\
% Department A-B\\
% Address A-B}
% {C. Author-three, D. Author-four\sthanks{The fourth author performed the work
% while at ...}}
% {School C-D\\
% Department C-D\\
% Address C-D}
%
\begin{document}
%\ninept
%
\maketitle
%
\begin{abstract}
This paper examines the use of filters on feature vectors for privacy protection in facial recognition. Feature vectors are the results of the Fast Fourier Transform and Wavelet Transform on the Yale and Olivetti datasets. Several filters are proposed. Filters based on the signal to noise ratio and t test select features which prevent privacy-compromising reconstruction without sacrificing accuracy. The use of phase removal for FFT and of normalization is also shown to protect privacy.
\end{abstract}
%
\begin{keywords}
One, two, three, four, five
\end{keywords}
%
\section{Introduction}
\label{sec:intro}

In modern times, the growth of the Internet and digital sensors has led to the collection of massive amounts of personal information. Videos, photos, emails, banking transactions, browsing history, GPS tracks, and other data are collected and stored in the cloud. This data may be circulated around the Internet and servers without the owner's knowledge. The value and scale of the data create the risk of privacy leakage. While encryption has been the traditional solution to this problem, recently a novel privacy-preserving methodology, compressive privacy, was developed. In the compressive privacy paradigm, the data owner controls the privacy of the data before uploading it to the cloud [need to cite Prof Kung].
\\\\
As noted in \cite{bowyer2004face}, 9/11, the Edward Snowden incident and other events have resulted in both a demand for recognition systems and a concern for privacy violation by such systems. A critical component of such systems is biometric identification of people, mainly by facial recognition, which can be performed without consent and at a distance by surveillance cameras. While a lot of research has gone into improving facial recognition systems - \cite{bouzalmat2014comparative}, \cite{spies2000face}, \cite{bouzalmat2011facial}, \cite{dehai2013pca}, \cite{samra2003face} among others - relatively little research has been done on incorporating privacy into such systems; some examples being \cite{erkin2009privacy}, \cite{sadeghi2010efficient}, and \cite{kevenaar2005face}.
\\\\
The primary approach to privacy in facial recognition is cryptography. In \cite{erkin2009privacy}, an Eigenfaces recognition system is used on homomorphically encrypted data. In this first cryptographic system for facial recognition \cite{erkin2009privacy}, the client wants the server to identify a face without revealing the image to the server. The server also does not want to reveal the contents of its database. In this approach, data is quantized for Paillier encryption and the server and client share the computations needed for matching faces \cite{erkin2009privacy}. Experimentally, 96\% accuracy was achieved in \cite{erkin2009privacy} on the ``ORL Database of Faces''. \cite{sadeghi2010efficient} improved the algorithm presented in \cite{erkin2009privacy} by reducing the time and space complexity with the use of garbled circuits.
\\\\
Along a different line of research, \cite{kevenaar2005face} used Helper Data Systems to provide privacy. The approach generates a binary feature vector by determining reliable bits based on statistics from sample data. While the proposed process is similar to compressive privacy, it leaves privacy up to the cloud server and [......................................................................]
\\\\
Our compressive privacy approach to facial recognition rests on the idea that privacy is compromised when an image can be visually inspected by an unauthorized individual. Therefore, as long as the reconstruction of an image from a feature vector is not meaningful to a human, privacy has been maintained. Facial recognition systems often utilize the Fast Fourier Transform (FFT) or the Wavelet Transform (WT) as part of feature engineering. For example, \cite{spies2000face}, \cite{bouzalmat2011facial}, \cite{dehai2013pca}, and \cite{samra2003face} use FFT or WT. In recognition systems like these, it is possible to alter the output of the FFT or WT to reduce the quality of the reconstructed image without sacrificing classification accuracy.

\section{Our Classification Systems}
\label{sec:system}

We can break down a facial recognition system into two components, feature engineering and classification. Both components can be made up of several parts, for example several sequential WT transforms. We limit our feature engineering to one transform to keep image reconstruction simple; however, this may be a real constraint in time- or power-sensitive applications. As pictured in Figure \ref{fig:mysys}, our classification systems for this investigation begin with an application of the FFT or WT to an image. Then a filter is applied as part of feature selection. Classification is accomplished with an SVM. For all of the following experiments an SVM with a linear kernel and $C = 1$ was found to produce the best results.

\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
  \centering
  \centerline{\includegraphics[width=8.5cm]{mysys}}
%  \vspace{2.0cm}
\end{minipage}
%
\caption{Block diagrams of the classification systems used.}
\label{fig:mysys}
%
\end{figure}

\section{Filters}
\label{sec:Filters}

Part of our compressive privacy results from the use of filters for feature selection of FFT and WT frequencies. A filter is a binary matrix which selects features based on some selecting function. Let $F_{x,d}$ be a filter $F_{x,d} \in \{0,1\}^{d \times m}$, where $m$ is the number of features in each example, $x$ is one of the selecting functions, and $d$ is the number of features selected by the filter.
\\\\
$F_{x,d,i,j} = 1$ if $W_{x}(j)$ is within the $d$ largest $W_{x}$, otherwise $F_{x,d,i,j} = 0$.
Thus $m$-dimensional feature vectors are compressed to $d$-dimensional feature vectors, based on the ordering of $W_{x}$.
\\\\
Let $X$ be the feature vector for an example:
\begin{equation}
\label{compress}
X_{filtered} = F_{x,d}X
\end{equation}
$\\$
Selection functions assign a weight to each feature and divide filters into three categories: data independent, unsupervised, and supervised. Data independent filters do not utilize a training set when determining which features should be selected. In our investigation the Rectangle and Triangle filters are data independent as they select a predefined region of features. The variance based filter is unsupervised in the sense that it does not consider the labels of the training examples. Feature selection is done purely based on the variance of each feature across the training set. The idea for this filter originates in \cite{spies2000face}.
\\\\
We devised four supervised filters based on the signal to noise ratio, Fisher discriminant ratio, symmetric divergence and t statistical tests. The supervised filters require labels for positive and negative classes. During training, for each individual the training examples are divided into two classes. The positive class contains the pictures of that individual and the negative class contains all other pictures. The signal to noise ratio, Fisher discriminant ratio, symmetric divergence and t tests are computed based on those two classes for each feature, and the final weight for a feature is the mean of the weights across all of the individuals in the training set.
\\\\
The filters are mathematically defined below. For the equations below, let $\mu^{+}_{j}$, $\sigma^{+}_{j}$, and $N^{+}_{j}$ be the mean, standard deviation and number of examples for a target individual, and let $\mu^{-}_{j}$, $\sigma^{-}_{j}$, and $N^{-}_{j}$ be the mean, standard deviation and number of examples for all other people in the training set. Let $\bar{X}$ be the mean of $X$ with respect to the training set in the case of unsupervised filters, and with respect to all individuals in the case of supervised filters.

\subsection{Variance}
\begin{equation}
\label{Var}
W_{VAR}(j) = \sigma^{2}_{j}
\end{equation}

\subsection{Signal to Noise Ratio}
\begin{equation}
\label{SNR}
W_{SNR}(j) = \overline{\dfrac{| \mu^{+}_{j} - \mu^{-}_{j} |}{\sigma^{+}_{j} + \sigma^{-}_{j}}}
\end{equation}

\subsection{Fisher Discriminant Ratio}
\begin{equation}
\label{FDR}
W_{FDR}(j) = \overline{\dfrac{( \mu^{+}_{j} - \mu^{-}_{j} )^{2}}{(\sigma^{+}_{j})^{2} + (\sigma^{-}_{j})^{2}}}
\end{equation}

\subsection{Symmetric Divergence}
\begin{equation}
\label{SD}
W_{SD}(j) = \overline{\dfrac{1}{2} \dfrac{(\mu^{+}_{j})^{2}}{(\mu^{-}_{j})^{2}} + \dfrac{(\mu^{-}_{j})^{2}}{(\mu^{+}_{j})^{2}} + \dfrac{1}{2} \dfrac{( \mu^{+}_{j} - \mu^{-}_{j} )^{2}}{(\sigma^{+}_{j})^{2} + (\sigma^{-}_{j})^{2}} - 1}
\end{equation}

\subsection{T}
\begin{equation}
\label{T}
W_{T}(j) = \overline{\dfrac{ \mu^{+}_{j} - \mu^{-}_{j}}{\sqrt{\dfrac{(\sigma^{+}_{j})^{2}}{N^{+}_{j}} + \dfrac{(\sigma^{-}_{j})^{2}}{N^{-}_{j}}}}}
\end{equation}

\subsection{Rectangle}
Let $J$ be a rectangular region of features. \\
\begin{equation}
W_{Rect}(j) =
\begin{cases}
1 & \text{if } j \in J \\
0 & \text{otherwise}
\end{cases}
\end{equation}

\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
  \centering
  \centerline{\includegraphics[width=8.5cm]{rect}}
%  \vspace{2.0cm}
\end{minipage}
%
\caption{White pixels represent selected features. The region is intended to select the amplitudes of low frequencies in the FFT transform.
}
\label{fig:rectFilter}
%
\end{figure}

\subsection{Triangle}
Along with the Rectangle filter, this filter was inspired by results in \cite{spies2000face}, where the features with high variance after FFT are the amplitudes of low frequencies and they form a triangular region.
\\\\
Let $J$ be a triangular region of features. \\
\begin{equation}
W_{Tri}(j) =
\begin{cases}
1 & \text{if } j \in J \\
0 & \text{otherwise}
\end{cases}
\end{equation}

\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
  \centering
  \centerline{\includegraphics[width=8.5cm]{tri}}
%  \vspace{2.0cm}
\end{minipage}
%
\caption{White pixels represent selected features. The region is intended to select the amplitudes of low frequencies in the FFT transform.}
\label{fig:triFilter}
%
\end{figure}

\section{Methods}
\label{sec:Method}

\subsection{Classification}
Ten-fold cross-validation was used to test classification accuracies. Grid Search was used to optimize the SVM parameters for each fold. For all of the following experiments an SVM with a linear kernel and $C = 1$ was found to produce the best results. 100 cross-validations were performed and the training of supervised and unsupervised filters was constrained to the appropriate fold. Since the rectangle and triangle filters are parameterized by a region, they were used to determine $d$ for all other filters. For FFT, the filters were only applied to the amplitudes of the frequencies. For WT, the filters were applied to all four [....................................].

\subsection{Reconstruction}
To reconstruct original images from filtered feature vectors, we set the values of all features which were filtered out to zero. The operation can be viewed as multiplication by the transpose of the filter.
\begin{equation}
\tilde{X} = F_{x,d}^{T}X_{filtered}
\end{equation}
Then the inverse of the FFT or WT is applied to $\tilde{X}$. As mentioned above, only one transform was used at a time to keep this process simple.

\section{Experimental Results}
\label{sec:ExperimentalResults}

Our baseline accuracy with the FFT transform is 0.741 for the Yale database and 0.975 for the Olivetti database. With the WT transform, our baseline for the Yale database is 0.807 and for the Olivetti database is 0.963.

\subsection{Phase Removal}
FFT produces amplitudes and phases for the transformed image. If the phase is removed and the image is reconstructed based on the amplitude alone, the result is a faceless image, as seen in Figure \ref{fig:nophase}. This clearly protects privacy. The cost in accuracy is small. The accuracy drops by 0.006 to 0.735 for the Yale database and by 0.001 to 0.974 for the Olivetti database.

\begin{figure}[!htb]
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/Original1o}}
%  \vspace{1.5cm}
  \centerline{Original Image}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/Recon1o}}
%  \vspace{1.5cm}
  \centerline{Reconstruction}\medskip
\end{minipage}
%
\caption{Result of reconstruction of an FFT-transformed image with the phase removed.}
\label{fig:nophase}
%
\end{figure}

\subsection{Normalization}
The amplitudes and phases of the FFT results for the training set can be normalized. Normalization is done by computing the mean and standard deviation of each feature over the training set and then, for each corresponding feature, subtracting the mean and dividing the difference by the standard deviation.
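As an illustration only (this sketch is not the code used for our experiments, and the layout of the feature vector as an amplitude half followed by a phase half is an assumption made just for the example), the standardization and a reconstruction that deliberately skips the de-normalization step can be written as:

\begin{verbatim}
# Per-feature standardization of FFT features over the training set,
# followed by reconstruction WITHOUT undoing the normalization.
import numpy as np

def standardize(train_feats):
    # train_feats: (n_examples, n_features) array of FFT features
    mu = train_feats.mean(axis=0)
    sigma = train_feats.std(axis=0) + 1e-12   # guard against zero variance
    return (train_feats - mu) / sigma, mu, sigma

def reconstruct_without_denormalizing(feats, shape):
    # assume the first half holds amplitudes, the second half phases
    half = feats.size // 2
    amp = feats[:half].reshape(shape)
    phase = feats[half:].reshape(shape)
    spectrum = amp * np.exp(1j * phase)
    return np.abs(np.fft.ifft2(spectrum))     # back to an image
\end{verbatim}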
Performing reconstruction on the amplitudes and phases without undoing the transformation produces privacy, as seen in Figure \ref{fig:norm}. On the Olivetti database the accuracy was 0.975. On the Yale database the accuracy was 0.739.

\begin{figure}[!htb]
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/denorm_Yale}}
%  \vspace{1.5cm}
  \centerline{Original Image}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/norm_Yale}}
%  \vspace{1.5cm}
  \centerline{Reconstructed Image}\medskip
\end{minipage}
%
\caption{Reconstruction from normalized FFT output.}
\label{fig:norm}
%
\end{figure}

\subsection{Filtering}

\subsubsection{FFT}
Based on the accuracies in Figure \ref{fig:accFFT}, the T filter has the best or near-best accuracy across both datasets. Figure \ref{fig:reconFFT} shows that it also preserves privacy. Compared to the other filters, the face is barely visible in the reconstruction from features selected by the T filter.

\begin{figure}[!htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\begin{tabular}{ | l || c | c |}
\hline
Filter & Yale Acc & Olivetti Acc \\ \hline
Rect & 0.737 & 0.967 \\ \hline
Var & 0.739 & \textbf{0.970} \\ \hline
SNR & 0.739 & 0.968 \\ \hline
FDR & 0.735 & 0.965 \\ \hline
SD & 0.686 & 0.952 \\ \hline
T & \textbf{0.741} & 0.968 \\ \hline
\end{tabular}
\end{minipage}
%
\caption{Mean accuracy for the filters based on 100 runs of 10-fold cross-validation. The top 399 features are selected.}
\label{fig:accFFT}
%
\end{figure}

\begin{figure}[!htb]
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/rectF_399_yale}}
%  \vspace{1.5cm}
  \centerline{Rect 0.737}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/varF_399_yale}}
%  \vspace{1.5cm}
  \centerline{Var 0.739}\medskip
\end{minipage}
%
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/snrF_399_yale}}
%  \vspace{1.5cm}
  \centerline{SNR 0.739}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/tF_399_yale}}
%  \vspace{1.5cm}
  \centerline{T 0.741}\medskip
\end{minipage}
%
\caption{Reconstructed images after an FFT transform and a filter with $d = 399$. The filter name and the accuracy on the Yale dataset are listed under each image.}
\label{fig:reconFFT}
%
\end{figure}

\subsubsection{WT}
Using the result of the wavelet transform, the T filter still produces the best or second-best accuracies across the two datasets (Figure \ref{fig:accWT}).

\begin{figure}[!h]
\begin{minipage}[b]{1.0\linewidth}
\centering
\begin{tabular}{ | l || c | c |}
\hline
Filter & Yale Acc & Olivetti Acc \\ \hline
Var & 0.737 & 0.941 \\ \hline
SNR & \textbf{0.813} & 0.968 \\ \hline
FDR & 0.807 & \textbf{0.969} \\ \hline
SD & 0.672 & 0.883 \\ \hline
T & 0.812 & \textbf{0.969} \\ \hline
\end{tabular}
\end{minipage}
%
\caption{Mean accuracy for the filters based on 100 runs of 10-fold cross-validation. The top 399 features are selected.}
\label{fig:accWT}
%
\end{figure}

As seen in Figure \ref{fig:wtRecon}, the T filter preserves privacy. The reconstructions for Var and T are similar to the reconstructions for the other filters. The privacy protection appears to be a property of the WT transform rather than any particular filter.
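To make the WT pipeline concrete, the sketch below shows one way (not the code used for our experiments) to filter and reconstruct from a single-level 2D wavelet transform with PyWavelets; the Haar wavelet and the flattened coefficient layout are assumptions made only for this illustration, and \texttt{weights} stands for any of the selection functions $W_{x}$ from Section \ref{sec:Filters}, with one weight per wavelet coefficient.

\begin{verbatim}
# Sketch: keep the d highest-weighted WT coefficients, zero the rest,
# and invert the transform to obtain the "reconstructed" image.
import numpy as np
import pywt

def wt_filter_reconstruct(image, weights, d):
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')   # single-level 2D WT
    bands = [cA, cH, cV, cD]
    coeffs = np.concatenate([b.ravel() for b in bands])
    keep = np.argsort(weights)[-d:]               # indices of top-d weights
    mask = np.zeros_like(coeffs)
    mask[keep] = 1.0
    kept = coeffs * mask                          # filtered-out features -> 0
    # reshape back into the four sub-bands and invert the transform
    sizes = np.cumsum([b.size for b in bands])[:-1]
    parts = np.split(kept, sizes)
    cA2, cH2, cV2, cD2 = [p.reshape(b.shape) for p, b in zip(parts, bands)]
    return pywt.idwt2((cA2, (cH2, cV2, cD2)), 'haar')
\end{verbatim}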
\begin{figure}[!htb]
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/varF_399_yaleWL}}
%  \vspace{1.5cm}
  \centerline{Var 0.737}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{recon/tF_399_yaleWL}}
%  \vspace{1.5cm}
  \centerline{T 0.812}\medskip
\end{minipage}
%
\caption{Reconstructed images after a WT transform and a filter with $d = 399$. The filter name and the accuracy on the Yale dataset are listed under each image.}
\label{fig:wtRecon}
%
\end{figure}

% To start a new column (but not a new page) and help balance the last-page
% column length use \vfill\pagebreak.
% -------------------------------------------------------------------------
\vfill
\pagebreak

\section{Discussion}
\label{sec:conclusion}
$\\$
There is a lot of potential for privacy protection with proper alterations in the feature engineering process. Across all of the methods, the cost, in the form of accuracy loss, is very small (see Figure \ref{fig:accChanges}). In fact, for the wavelet transform, the accuracy increased when filters were applied. Additionally, the reconstructions from wavelet transform features yielded much better privacy by completely removing the eyes and mouth in the image. It appears that the wavelet transform forms a very good basis for privacy-preserving feature vectors.
\\\\
\begin{figure}[!h]
\begin{minipage}[b]{1.0\linewidth}
\centering
\begin{tabular}{ | l | c | c |}
\hline
Method & From Yale & From Olivetti \\ \hline
Phase Removal & -0.006 & -0.001 \\ \hline
Normalization & -0.002 & 0.000 \\ \hline
Filtering (FFT) & 0.000 & -0.006 \\ \hline
Filtering (WT) & 0.005 & 0.006 \\ \hline
\end{tabular}
\end{minipage}
%
\caption{Changes in mean accuracy relative to the baseline for the four different methods. The T filter is used for the filtering accuracies.}
\label{fig:accChanges}
%
\end{figure}
$\\$
While FFT-based systems lost accuracy under all of these methods, it is easier to see why the filters preserve accuracy. The supervised statistical filters select both high and low frequencies, as seen in Figure \ref{fig:spec}. Therefore, the reconstruction is not just a blurred image, as is the case when selecting only the low frequencies. Fewer of those low frequencies are selected, thus the image is even more blurred, but accuracy is preserved by the high frequencies. The mix prevents details from being reconstructed fully.

\begin{figure}[!htb]
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{filter/rectF_399}}
%  \vspace{1.5cm}
  \centerline{Rect}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{filter/varF_399}}
%  \vspace{1.5cm}
  \centerline{Var}\medskip
\end{minipage}
%
\begin{minipage}[b]{.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{filter/snrF_399}}
%  \vspace{1.5cm}
  \centerline{SNR}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
  \centering
  \centerline{\includegraphics[width=4.0cm]{filter/tF_399}}
%  \vspace{1.5cm}
  \centerline{T}\medskip
\end{minipage}
%
\caption{Amplitudes selected by the filter (marked in white).}
\label{fig:spec}
%
\end{figure}

Regarding the filters, which are worth applying even just for the performance improvements, SNR and T are the most useful for privacy protection. For both FFT and WT, these filters allow a large dimension reduction with almost no accuracy loss.
\\\\
In future research, consecutive search methods can be utilized to produce filters.
Additionally, the reconstruction for WT should be explained in terms of the underlying bands. Lastly, a full algorithm utilizing the filters and taking into account the client-server model should be developed.
\\\\\\\\
This paper represents my own work in accordance with University regulations.
\\\\\\\\
% References should be produced using the bibtex program from suitable
% BiBTeX files (here: strings, refs, manuals). The IEEEbib.bst bibliography
% style file from IEEE produces unsorted bibliography list.
% -------------------------------------------------------------------------
\bibliographystyle{IEEEbib}
\bibliography{sources}

\end{document}
{ "alphanum_fraction": 0.7281637098, "avg_line_length": 39.0762411348, "ext": "tex", "hexsha": "eb736182aa06c50a612c797fa6cc1d72737f139b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8ff1b3e1e809b3547fd1f26f8101bcc13b17f82d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "arturf1/FacialReconPrivacy", "max_forks_repo_path": "MLSP Paper/Template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8ff1b3e1e809b3547fd1f26f8101bcc13b17f82d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "arturf1/FacialReconPrivacy", "max_issues_repo_path": "MLSP Paper/Template.tex", "max_line_length": 233, "max_stars_count": null, "max_stars_repo_head_hexsha": "8ff1b3e1e809b3547fd1f26f8101bcc13b17f82d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "arturf1/FacialReconPrivacy", "max_stars_repo_path": "MLSP Paper/Template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6197, "size": 22039 }
%============================================================================= \subsection{The DFC Command Line Interface} \label{the-dfc-command-line-interface} %============================================================================= The DIRAC File Catalog (DFC) Command Line Interface (CLI), a.k.a. the \textbf{DFC CLI}, provides a way of interacting with DIRAC's File Catalog via - you guessed it - the command line. The DFC CLI lets you manually upload and download files to Storage Elements (SEs), browse the DFC associated with your Virtual Organisation (VO), create and remove directories in the DFC, and manage the replicas associated with each entry in the DFC. \begin{infobox}{Using the DFC CLI} \emph{The DFC CLI is great for small-scale tasks such as creating and tweaking test data sets, but ultimately we will want to use scripts to help coordinate large-scale upload operations and managing metadata (i.e.~data about the data).} \end{infobox} %----------------------------------------------------------------------------- \subsubsection{Getting started with the DFC CLI} \label{getting-started-with-the-dfc-cli} %----------------------------------------------------------------------------- \term{Accessing the DFC CLI} The DFC CLI is accessed via a DIRAC command, so we'll need to source our DIRAC environment and generate a DIRAC proxy. \begin{Shaded} \begin{Highlighting}[] \NormalTok{$ }\KeywordTok{source} \NormalTok{/cvmfs/ganga.cern.ch/dirac_ui/bashrc} \NormalTok{$ }\KeywordTok{dirac-proxy-init} \NormalTok{-g gridpp_user -M} \KeywordTok{Generating} \NormalTok{proxy... } \KeywordTok{Enter} \NormalTok{Certificate password: }\CommentTok{# Enter your grid certificate password...} \KeywordTok{.} \KeywordTok{.} \NormalTok{[}\KeywordTok{Proxy} \NormalTok{information-based output.]} \KeywordTok{.} \end{Highlighting} \end{Shaded} \begin{warningbox}{Which VO?} \emph{If you wish to use a different VO, replace gridpp with the name of the VO in the commands in this section.} \end{warningbox} The DFC CLI is then started with the following DIRAC command: \begin{Shaded} \begin{Highlighting}[] \NormalTok{$ }\KeywordTok{dirac-dms-filecatalog-cli} \KeywordTok{Starting} \NormalTok{FileCatalog client} \KeywordTok{File} \NormalTok{Catalog Client }\OtherTok{$Revision}\NormalTok{: 1.17 }\OtherTok{$Date}\NormalTok{: } \KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \end{Highlighting} \end{Shaded} \begin{infobox}{DIRAC command groupings} \emph{We'll come back to the DIRAC command line tools in the next section, but the} \code{dirac-dms-} \emph{at the start of the command refers to the DIRAC Data Management System tools. All DIRAC commands are grouped in this way which, combined with tab completion, can be very handy for finding the command you're looking for!} \end{infobox} The \texttt{FC:/\textgreater{}} at the command prompt tells you that you're in the DFC CLI. You can now explore the DFC using commands that are very similar to those used with a typical UNIX file system. Let's do this now. 
%----------------------------------------------------------------------------- \subsubsection{Finding your user space in the DFC} \label{finding-your-user-space-in-the-dfc} %----------------------------------------------------------------------------- Let's start by listing the root directories in the DFC, which will give us a list of the Virtual Organisations supported by GridPP DIRAC: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \NormalTok{ls} \KeywordTok{cernatschool.org} \KeywordTok{gridpp} \KeywordTok{vo.londongrid.ac.uk} \KeywordTok{vo.northgrid.ac.uk} \KeywordTok{vo.scotgrid.ac.uk} \KeywordTok{vo.southgrid.ac.uk} \end{Highlighting} \end{Shaded} We're using GridPP DIRAC as a member of \texttt{gridpp} VO, so let's move into that directory. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \NormalTok{cd gridpp/user} \end{Highlighting} \end{Shaded} If one hasn't been created for you already, you can create your own user space on the VO's File Catalog like so: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/gridpp/user}\KeywordTok{>} \NormalTok{cd a} \KeywordTok{FC}\NormalTok{:/gridpp/user/a}\KeywordTok{>} \NormalTok{mkdir ada.lovelace} \KeywordTok{FC}\NormalTok{:/gridpp/user/a}\KeywordTok{>} \NormalTok{chmod 755 ada.lovelace} \KeywordTok{FC}\NormalTok{:/gridpp/user/a}\KeywordTok{>} \NormalTok{ls -la} \KeywordTok{drwxr-xr-x} \NormalTok{0 ada.lovelace gridpp_user 0 2015-12-16 10:24:54 ada.lovelace } \KeywordTok{FC}\NormalTok{:/gridpp/user/a}\KeywordTok{>} \NormalTok{exit} \end{Highlighting} \end{Shaded} \begin{infobox}{Your DIRAC username} \emph{If you don't know your DIRAC username (which should be used as your user directory), exit the DFC CLI and use the dirac-proxy-info command.} \end{infobox} \begin{infobox}{Listing files} \emph{Using the} \code{-la} \emph{option with the} \code{ls} \emph{command works just as it does with the normal command line, allowing you to see file owners, groups (VOs), permissions, etc.} \end{infobox} \begin{warningbox}{File permissions} \emph{Don't forget to change the file permissions on your files so that other users can't modify them.} \end{warningbox} You've now got your own space on the GridPP DFC. Let's put some files in it. %----------------------------------------------------------------------------- \subsubsection{Uploading files} \label{uploading-files} %----------------------------------------------------------------------------- Firstly, we'll need a file to upload. Any file will do, but to keep things simple let's create one in a temporary directory: \begin{Shaded} \begin{Highlighting}[] \NormalTok{$ }\KeywordTok{cd} \NormalTok{~} \NormalTok{$ }\KeywordTok{mkdir} \NormalTok{tmp}\KeywordTok{;} \KeywordTok{cd} \NormalTok{tmp} \NormalTok{$ }\KeywordTok{vim} \NormalTok{TEST.md }\CommentTok{# Or whichever editor you use...} \NormalTok{$ }\KeywordTok{cat} \NormalTok{TEST.md} \CommentTok{#Hello Grid!} \KeywordTok{This} \NormalTok{is a test **MarkDown file**.} \end{Highlighting} \end{Shaded} Next we'll need to know which \textbf{Storage Elements} are available to us. \begin{infobox}{Storage Elements} \emph{Storage Elements} ``are physical sites where data are stored and accessed, for example, physical file systems, disk caches or hierarchical mass storage systems. Storage Elements manage storage and enforce authorization policies on who is allowed to create, delete and access physical files. 
They enforce local as well as Virtual Organization policies for the use of storage resources. They guarantee that physical names for data objects are valid and unique on the storage device(s), and they provide data access. A storage element is an interface for grid jobs and grid users to access underlying storage through the Storage Resource Management protocol (SRM), the Globus Grid FTP protocol, and possibly other interfaces as well.''

\emph{Credit: Open Science Grid (2012)}
\end{infobox}

We can list the available SEs with the following DIRAC command:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{$ }\KeywordTok{dirac-dms-show-se-status}
\KeywordTok{SE} \NormalTok{ReadAccess WriteAccess RemoveAccess CheckAccess }
\NormalTok{=============================================================================}
\NormalTok{[}\KeywordTok{...} \NormalTok{more disks ...]}
\KeywordTok{UKI-LT2-QMUL2-disk} \NormalTok{Active Active Unknown Unknown }
\NormalTok{[}\KeywordTok{...} \NormalTok{more disks ...]}
\KeywordTok{UKI-NORTHGRID-LIV-HEP-disk} \NormalTok{Active Active Unknown Unknown}
\NormalTok{[}\KeywordTok{...} \NormalTok{more disks ...]}
\end{Highlighting}
\end{Shaded}

While we don't need to know the details of where and how our data will be stored on an SE, we do need to know its name. We'll use the \texttt{UKI-LT2-QMUL2-disk} SE for now. We add the file to the DFC as follows using the \texttt{add} command, which takes the following arguments:

\begin{verbatim}
add <LFN> <Local file name> <SE name>
\end{verbatim}

where:

\begin{itemize}
\tightlist
\item \texttt{\textless{}LFN\textgreater{}} is the \textbf{Logical File Name} (LFN) of the file in the DFC. This can either be relative to your current position in the DFC (which can be found with the \texttt{pwd} command in the DFC CLI), or made absolute by preceding the name with a slash \texttt{/};
\item \texttt{\textless{}Local\ file\ name\textgreater{}} should be the name of the local file you want to upload. Again, this can be relative to wherever you were on your local system when you started the DFC CLI, or the absolute path to the file on your local system;
\item \texttt{\textless{}SE\ name\textgreater{}} is the name of the SE as retrieved from the \texttt{dirac-dms-show-se-status} command.
\end{itemize}

Let's add our file to the grid now.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{$ }\KeywordTok{dirac-dms-filecatalog-cli}
\KeywordTok{Starting} \NormalTok{FileCatalog client}
\KeywordTok{File} \NormalTok{Catalog Client }\OtherTok{$Revision}\NormalTok{: 1.17 }\OtherTok{$Date}\NormalTok{: }
\KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \NormalTok{cd /gridpp/user/a/ada.lovelace}
\KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace}\KeywordTok{>} \NormalTok{mkdir tmp}
\KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace}\KeywordTok{>} \NormalTok{cd tmp}
\KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace}\KeywordTok{>} \NormalTok{add TEST.md TEST.md UKI-LT2-QMUL2-disk}
\KeywordTok{File} \NormalTok{/gridpp/user/a/ada.lovelace/tmp/TEST.md successfully uploaded...}
\KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace/tmp}\KeywordTok{>}\NormalTok{ls -la}
\KeywordTok{-rwxrwxr-x} \NormalTok{1 ada.lovelace gridpp_user 47 2015-12-16 11:47:28 TEST.md}
\end{Highlighting}
\end{Shaded}

And there we go! Your first file has been uploaded to a Storage Element on the grid. Have a biscuit. You've earned it.
%----------------------------------------------------------------------------- \subsubsection{Replicating files} \label{replicating-files} %----------------------------------------------------------------------------- Part of the joy of using the grid is being able to distribute computational tasks to different sites. However, if you want to look at the same data with a different task at different sites in an efficient manner, ideally you'd need copies of that data at those sites. This strategy also makes sense from a backup/redundancy perspective. We can achieve this on the grid by using \emph{replicas}. \begin{infobox}{Replicas} \emph{A replica is a copy of a given file that is located on a different Storage Element (SE). The file is identified by its Logical File Name (LFN) in the DIRAC File Catalog (DFC). Associated with each LFN entry is a list of SEs where replicas of the file can be found.} \end{infobox} To list the locations of replicas for a given file catalog entry, we use the \texttt{replicas} command in the DFC CLI: \begin{verbatim} replicas <LFN> \end{verbatim} so continuing with our example: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace/tmp}\KeywordTok{>}\NormalTok{replicas TEST.md} \KeywordTok{lfn}\NormalTok{: /gridpp/user/a/ada.lovelace/tmp/TEST.md} \end{Highlighting} \end{Shaded} We replicate files with the \texttt{replicate} command: \begin{verbatim} replicate <LFN> <SE name> \end{verbatim} Let's replicate our test file to the Liverpool disk and check that the replica list has been updated: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace/tmp}\KeywordTok{>}\NormalTok{replicate TEST.md UKI-NORTHGRID-LIV-HEP-disk} \end{Highlighting} \end{Shaded} Replicas can be removed with the \texttt{rmreplica} command: \begin{verbatim} rmreplica <LFN> <SE name> \end{verbatim} Let's remove the Liverpool disk replica: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace/tmp}\KeywordTok{>}\NormalTok{rmreplica TEST.md UKI-NORTHGRID-LIV-HEP-disk} \KeywordTok{lfn}\NormalTok{: /gridpp/user/a/ada.lovelace/tmp/TEST.md} \KeywordTok{Replica} \NormalTok{at UKI-NORTHGRID-LIV-HEP-disk moved to Trash Bin} \end{Highlighting} \end{Shaded} Finally, we can remove a file completely using the (somewhat familiar) \texttt{rm} command: \begin{verbatim} rm <LFN> \end{verbatim} Let's tidy up our test file: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/gridpp/user/a/ada.lovelace/tmp}\KeywordTok{>}\NormalTok{rm TEST.md} \KeywordTok{lfn}\NormalTok{: /gridpp/user/a/ada.lovelace/tmp/TEST.md} \KeywordTok{File} \NormalTok{/gridpp/user/a/ada.lovelace/tmp/TEST.md removed from the catalog} \end{Highlighting} \end{Shaded} %----------------------------------------------------------------------------- \subsubsection{Downloading files} \label{downloading-files} %----------------------------------------------------------------------------- Finally, we can download files using the DFC CLI with the \texttt{get} command: \begin{verbatim} get <LFN> [<local directory>] \end{verbatim} Note that the local directory argument is optional. 
Let's download a test file from the \texttt{gridpp} examples directory now: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \NormalTok{get /gridpp/userguide/WELCOME.md ./.} \KeywordTok{FC}\NormalTok{:/}\KeywordTok{>} \NormalTok{exit} \NormalTok{$ }\KeywordTok{cat} \NormalTok{WELCOME.md} \CommentTok{#Welcome to GridPP!} \KeywordTok{It} \NormalTok{looks like your download has worked. Congratulations!} \NormalTok{$ }\KeywordTok{rm} \NormalTok{WELCOME.md} \end{Highlighting} \end{Shaded} As we said earlier, the DFC CLI is only useful for small-scale operations. On our way to scaling up, we can look at starting to automate our workflows using scripts. In the next section we'll look at how the DIRAC command line tools can help with this.
{ "alphanum_fraction": 0.7019278651, "avg_line_length": 39.9346590909, "ext": "tex", "hexsha": "6adc889262a7cc6085212d4f690da9ed44c29ba7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4516f66a0aeba839f17ae8512787df5d694941de", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gridpp/gridpp-userguide-doc", "max_forks_repo_path": "griddata/dfccli.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4516f66a0aeba839f17ae8512787df5d694941de", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gridpp/gridpp-userguide-doc", "max_issues_repo_path": "griddata/dfccli.tex", "max_line_length": 129, "max_stars_count": null, "max_stars_repo_head_hexsha": "4516f66a0aeba839f17ae8512787df5d694941de", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gridpp/gridpp-userguide-doc", "max_stars_repo_path": "griddata/dfccli.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3828, "size": 14057 }
% This must be in the first 5 lines to tell arXiv to use pdfLaTeX, which is strongly recommended.
\pdfoutput=1
% In particular, the hyperref package requires pdfLaTeX in order to break URLs across lines.

\documentclass[11pt]{article}

% Remove the "review" option to generate the final version.
\usepackage{acl}

% Standard package includes
\usepackage{times}
\usepackage{latexsym}

% For proper rendering and hyphenation of words containing Latin characters (including in bib files)
\usepackage[T1]{fontenc}
% For Vietnamese characters
% \usepackage[T5]{fontenc}
% See https://www.latex-project.org/help/documentation/encguide.pdf for other character sets

% This assumes your files are encoded as UTF8
\usepackage[utf8]{inputenc}

% This is not strictly necessary, and may be commented out,
% but it will improve the layout of the manuscript,
% and will typically save some space.
\usepackage{microtype}

\usepackage{graphicx}
\usepackage{cleveref}

% If the title and author information does not fit in the area allocated, uncomment the following
%
%\setlength\titlebox{<dim>}
%
% and set <dim> to something 5cm or larger.

\title{Homework 1}

\author{Pedro Matias \\
  \texttt{[email protected]}}

\begin{document}
\maketitle

\section{Preliminaries}\label{sec:prelim}

A basic data analysis, summarized in \cref{fig:dataset}, showed that the dataset is unbalanced in the total number of documents/words/letters, where higher counts correlate with democrat affiliation. The average number of words per document, letters per document and letters per word is nearly identical across the candidates. Indeed, initial experiments showed that assigning weights inversely proportional to each label frequency results (more often than not) in better accuracies, so we adopted this strategy throughout our experiments. During these initial experiments, we gained further information that made us adopt the following strategies in later experiments:
\begin{itemize}
\item using 5-fold \textbf{cross-validation}, scored using the accuracy averaged across folds, since the small labeled-to-unlabeled ratio could promote overfitting. We used \textit{stratified} folds to promote even label frequencies (since the data is unbalanced). We also merged the train and dev data for this purpose.
\item using \textbf{TF-IDF}, as opposed to a bag-of-words (counting or binary) model
\item using both word-unigrams and \textbf{word-bigrams}, as opposed to just word-unigrams. In our case, the dependencies between consecutive words modeled by bigram features helped the results slightly. We did not have time to try character n-grams
\item using \texttt{nltk.word\_tokenize} on \textbf{lowercase} text, as opposed to \texttt{Scikit-learn}'s default tokenizer
\item not removing \textbf{stopwords}, since early experiments did not show worthwhile improvements, but also because we (i) used TF-IDF (which already discounts words with high document frequencies) and (ii) entirely removed words whose document frequency exceeded a cutoff (see \cref{sec:super})
\item using L2-regularization, as opposed to L1, given early experiment results
\item using a maximum of 100 \textbf{iterations} for Logistic Regression convergence, as opposed to larger values (which did not increase the cross-validation accuracy and were slower) or smaller values (for which we underfit). When combining TF-IDF with other features (see \cref{sec:semi}), we used larger values to avoid underfitting.
\end{itemize}

\begin{figure}
    \centering
    \includegraphics[height=3in]{figures/stats.png}
    \caption{Dataset analysis.}
    \label{fig:dataset}
\end{figure}

\section{Supervised}\label{sec:super}

We experimented with different configurations and noticed an increase in accuracies for higher values of $C$, the inverse of the regularization strength, as depicted in \cref{fig:supervised}. It is interesting to note that changes in accuracy caused by varying the document-frequency cutoff (i.e.\ the minimum document frequency of a word considered for removal) were only noticeable for larger values of $C$, where accuracies were higher for higher cutoffs (i.e.\ fewer words were removed). A possible explanation is that the strategy of assigning higher weights (i.e.\ less regularization) to words works best when the vocabulary considered is indeed larger. Indeed, increasing $C$ even further resulted in even better accuracies, until $C$ exceeds 100, at which point the accuracy starts decreasing. Our best configuration, with the respective accuracy scores, was the following (see \cref{sec:prelim} for the remaining hyperparams setting):

\begin{center}
\begin{tabular}{r|c}
$\mathbf{C}$ & 80 \\\hline
\textbf{frequency cutoff} & 0.9 \\\hline
\textbf{train accuracy} & 0.999 \\\hline
\textbf{cross-validation accuracy} & 0.464 \\\hline
\textbf{Kaggle accuracy} & 0.491
\end{tabular}
\end{center}

\begin{figure}
    \centering
    \includegraphics[width=3in]{figures/supervised.png}
    \caption{Varying the document-frequency cutoff for removing words (x-axis) against regularization levels (y-axis) for Logistic Regression (higher values of $C$ correspond to less regularization). See \cref{sec:prelim} for the remaining hyperparams configuration.}
    \label{fig:supervised}
\end{figure}

\section{Semi-supervised}\label{sec:semi}

We tried incorporating word2vec embeddings, using Gensim's implementation. We generated these embeddings by training on both the labeled and unlabeled data provided. We also tried GloVe's pre-computed word2vec embeddings (Wikipedia 2014 + Gigaword 5, 50-dimensional), but the results didn't differ significantly from those given by embeddings trained on the corpus itself. To combine the embeddings of the words in a single document, we tried the following approaches, while varying the number of dimensions (100, 150, 200) and the context window sizes (5, 10, 15, 20) (see \cref{fig:acc_embeddings}):
\begin{itemize}
\item Coordinate-wise \textbf{average} of all embeddings in the document
\item Coordinate-wise \textbf{TF-IDF weighted average} of all embeddings in the document
\item Coordinate-wise \textbf{sum} of all embeddings in the document
\item Coordinate-wise \textbf{minimum} and/or \textbf{maximum} of all embeddings in the document
Unfortunately, we were not able to find a good configuration of hyperparams for TF-IDF weighted average, and our best model delivered the following accuracies (we increased the number of iterations from 100, since the increased number of features requires more time to avoid underfitting): \begin{center} \begin{tabular}{r|c} $\mathbf{C}$ & 95 \\\hline \textbf{frequency cutoff} & 0.9 \\\hline \textbf{\#iterations} & 200 \\\hline \textbf{w2v aggregation} & TF-IDF weighted avg. \\\hline \textbf{train accuracy} & 0.999 \\\hline \textbf{cross-validation accuracy} & 0.453 \\\hline \textbf{Kaggle accuracy} & 0.476 \end{tabular} \end{center} Indeed, the embeddings produced are not representative of each presidential candidate -- see \cref{fig:labeled_centroids} for a comparison between the average embedding for each presidential candidate. \subsection{Clustering}\label{subsec:clustering} Next, we tried the following approach: \begin{enumerate} \item use a \textbf{clustering} technique to cluster the union of labeled (train+dev) and unlabeled data, into $K$ clusters \item determine a \textbf{mapping} between clusters and presidential candidates, using the labeled data. This mapping can be one-to-one, one-to-many or even many-to-one, depending on $K$ \item label the unlabeled data according to the previous mapping \item re-train the previous best supervised classifier using both the initial labels and the ones generated in the previous step \end{enumerate} To perform clustering, we resorted to lower-dimensional word2vec embeddings, using Gensim implementation. As in the previous section, we generated these by training on both the labeled and unlabeled data provided. Before implementing the idea above mentioned, we examined the quality of embeddings produced by our supervised classifier, in the sense of how good they cluster the labeled data -- see \cref{fig:labeled_centroids} for details. As illustrated, the average embeddings for each candidate are very similar to each other, suggesting that labeled ``clusters'' are overlapping quite a bit, yielding bad representative embeddings. The similarity between candidates seems even more pronounced when comparing using cosine similarity and this may be due to the fact that it ignores vectors norms. \begin{figure}[t] \label{fig:labeled_centroids} \centering \includegraphics[width=2.5in]{figures/word2vecs.png} \caption{Similarities between the average (a.k.a. centroid) embedding for each presidential candidate, suggest that embeddings are not very representative of each candidate. The embeddings have dimension 200, were produced using a context window of size 20 and were combined using TF-IDF weighted average (other kinds of aggregations with lower dimensions/windows had similar or worse results).} \end{figure} To evaluate the quality of our clusterings (using K-Means and Gaussian Mixture Model (GMM)), we estimated the likelihood that the average candidate belongs to each of the clusters generated -- see \cref{fig:gmm_kmeans} for details. Particularly, we computed the probabilities $P(X_i \mid Z_j)$ that a document $X_i$ is labeled with candidate $k$, given that we know it belongs to cluster $Z_j$ and we summed these up, for each candidate $k$, to obtain $P(C_k \mid Z_j)$, which essentially models the distribution over candidates for each cluster $Z_j$. 
By Bayes' Rule and the assumption that $X$ follows a uniform distribution $\mathcal{U}(1,19)$, the quantities $P(X_i \mid Z_j)$ are proportional to the \emph{membership probabilities} $P(Z_j \mid X_i)$, which can be obtained directly from the weights used for K-Means/GMM. The takeaway here is that the clusters generated were not good, since they contain (with some exceptions) the same proportion of candidates. Worth noting, however, is the fact that GMM gives more representative clusters when compared to K-Means, so we used this one to retrain our supervised classifier. Unfortunately, we were not able to find a good configuration that allowed us to improve the Kaggle accuracy obtained using the supervised classifier. It is also possible that the clusterings obtained can be improved with further fine-tuning of the K-Means/GMM hyperparams, which we did not conduct due to time restrictions. Our best model is given below (see \cref{sec:prelim} for details on the remaining hyperparams setting). The difference between the cross-validation and Kaggle accuracies suggests overfitting.
\begin{center}
\begin{tabular}{r|c}
$\mathbf{C}$ & 80 \\\hline
\textbf{frequency cutoff} & 0.9 \\\hline
\textbf{\#iterations} & 200 \\\hline
\textbf{train accuracy} & 0.999 \\\hline
\textbf{cross-validation accuracy} & 0.616 \\\hline
\textbf{Kaggle accuracy} & 0.467
\end{tabular}
\end{center}

\subsubsection*{Implementation details.}
For the K-Means \textbf{weights initialization}, we tried the k-means++ algorithm, random initialization, and using the labeled centroids (the average embedding for each candidate -- see \cref{fig:labeled_centroids}). All of these seemed to perform similarly. Regarding the GMM, the parameters were initialized such that every mixture component has zero mean and identity covariance.

The \textbf{mapping} from cluster centroids to candidates was obtained by sampling, for each cluster centroid $Z_j$, according to the probabilities $P(C_k \mid Z_j)$ computed above. We avoided always selecting the candidate $C_k$ with maximum presence in $Z_j$ to prevent having most clusters labeled with the most popular candidates (which tend to be Democrats, see \cref{fig:dataset,fig:gmm_kmeans}).

We set the number of clusters for both K-Means and GMM to $K=19$ and, for the sake of analysis, $K=2$. The latter case ($K=2$) emphasized the difficulty of clustering even more, since we obtained membership probabilities close to 50\% (i.e. $P(C_k\mid Z_1)\approx P(C_k\mid Z_2)$, for each $k$).
\begin{figure}[t]
  \centering
  \includegraphics[width=4in]{figures/gmm_kmeans.png}
  \caption{Each row contains the membership probabilities $P(Z_j \mid C_k)$, representing the probability that the average candidate $k$ belongs to the $j$\textsuperscript{th} cluster, out of the 19 clusters generated using K-Means (initialized with k-means++) and GMM (initialized with zero means and identity covariances). We ``joined'' the Democrats together in the top-4 rows to assess any affiliation discrepancies -- we obtained higher membership probabilities due to the imbalance in the data, which favors Democrats over Republicans.}
  \label{fig:gmm_kmeans}
\end{figure}

\section{Statement of collaboration}
We briefly discussed with Ramtin how to approach the semi-supervised model, namely which approach would be more interesting or more likely to work.
\end{document}
{ "alphanum_fraction": 0.7846210996, "avg_line_length": 73.956043956, "ext": "tex", "hexsha": "9df67e5dc4ae61f61469859d01d92e2fcc351474", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f3e39508ba47a4731faec20aeb20f9d5f1568c3", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "zydeon/uci-statnlp", "max_forks_repo_path": "hw1/report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f3e39508ba47a4731faec20aeb20f9d5f1568c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "zydeon/uci-statnlp", "max_issues_repo_path": "hw1/report/report.tex", "max_line_length": 1643, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f3e39508ba47a4731faec20aeb20f9d5f1568c3", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "zydeon/uci-statnlp", "max_stars_repo_path": "hw1/report/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3266, "size": 13460 }
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{gensymb}
\usepackage{standalone}
\usepackage{verbatim}
\usepackage[utf8]{inputenc}
\usepackage{setspace}
\usepackage[a4paper,margin=1in,footskip=0.25in]{geometry}
\usepackage{graphicx}
\usepackage{mathptmx}
\usepackage{booktabs}
\usepackage{cite}
\usepackage[english]{babel}
\begin{document}
\section{Problem Statement}
The determination of the potential field in a detector leads, by way of the weighting potential, to the determination of the theoretical pulse output. This is done by solving Poisson's equation, which proves to be difficult, if not impossible, for geometries other than the simplest cases. To handle such geometries, a finite element approach can be taken and implemented to solve nearly any geometry. This work details the implementation of the finite element approach to solving Poisson's equation for a three-terminal device.
\end{document}
{ "alphanum_fraction": 0.8128834356, "avg_line_length": 46.5714285714, "ext": "tex", "hexsha": "1c9b76e03a95705c836cc2ae9bbb805fe2c4832b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alangburl/Finite_Element_Votlage", "max_forks_repo_path": "Submission_Building/Problem_Statement.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alangburl/Finite_Element_Votlage", "max_issues_repo_path": "Submission_Building/Problem_Statement.tex", "max_line_length": 528, "max_stars_count": null, "max_stars_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alangburl/Finite_Element_Votlage", "max_stars_repo_path": "Submission_Building/Problem_Statement.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 243, "size": 978 }
\documentclass[12pt,a4paper]{article}
\usepackage{algorithm, algpseudocode, amsmath, amssymb, csquotes, empheq, geometry, graphicx, hyperref, listings, multirow, physics, siunitx, subcaption, upgreek}
\usepackage[section]{placeins}
\usepackage[justification=centering]{caption}
\title{Computational Physics\\Problem Set 5}
\author{Saleh Shamloo Ahmadi\\Student Number: 98100872}
\date{October 25, 2021}
\hypersetup{colorlinks=true, urlcolor=cyan}
\newcommand{\fig}{../fig}
\begin{document}
\newgeometry{top=1in, bottom=1in}
\maketitle
\section{2D Random Walk}
As described in the lecture notes, we expect
\begin{equation}
    \expval{r^2} = 2dDt
\end{equation}
where $d$ is the number of dimensions (degrees of freedom), $t$ is the time the walker has spent moving and
\begin{equation}
    D = \frac{l^2}{2d\tau}
\end{equation}
where $l$ and $\tau$ are the space and time steps respectively. In our simulations, units are normalized such that both $l$ and $\tau$ are equal to 1. Hence, we expect
\begin{equation}
    \expval{r^2} = t.
\end{equation}
\begin{figure}[htb!]
    \centering
    \includegraphics[height=3in]{figures/stats.png}
\end{figure}
\newgeometry{top=0.3in, bottom=1in}
\begin{figure}
    \centering
    \includegraphics[width=\linewidth]{\fig/randomwalk-2d-large}
\end{figure}
\begin{figure}
    \centering
    \includegraphics[width=\linewidth]{\fig/randomwalk-2d-msd}
    \caption{The results of the simulations are consistent with the theoretical analysis.}
\end{figure}
\newgeometry{top=0.8in, bottom=0.9in}
\section{Diffusion-Limited Aggregation (DLA)}
In this process, particles undergoing a random walk stick together to form clusters. The generated shapes appear in many systems in nature. DLA can describe aggregation in any system where diffusion is the primary means of transport. You can read more about DLA on \href{https://en.wikipedia.org/wiki/Diffusion-limited_aggregation}{Wikipedia} and \href{http://paulbourke.net/fractals/dla/}{Paul Bourke's blog}.

The particles start moving from outside the boundaries of the shape and get stuck as soon as they come in contact with the cluster. The system starts growing from a \enquote{seed}, which consists of the starting particles.
\begin{figure}[htb!]
    \centering
    \begin{subfigure}{\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/dla-500}
    \end{subfigure}
    \begin{subfigure}{\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/dla-350}
    \end{subfigure}
    \begin{subfigure}{\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/dla-200}
    \end{subfigure}
    \caption{DLA with periodic boundary conditions and a line seed. Particles start moving from the top of the cluster.}
\end{figure}
\section{Self-Avoiding Walk (SAW)}
In this type of walk, the path of the walker cannot cross itself. The problem of counting the number of possible paths of a given length is NP-hard, but it can be approximated with good accuracy.
\begin{algorithm}
    \caption{Recursively Counting Every Possible Self-Avoiding Walk of Length N in a Graph G}
    \begin{algorithmic}[1]
        \Function{CountSAW}{$N, G, p$} \Comment{$p$ is the starting position}
        \If{$N = 0$}
            \State \textbf{return} 1
        \Else
            \State mark $p$ as passed
            \State $count \gets 0$
            \ForAll{$q \in G.neighbors(p)$}
                \If{$q$ is not passed}
                    \State $count \gets count + \mathrm{CountSAW}(N - 1, G, q)$
                \EndIf
            \EndFor
            \State mark $p$ as unpassed \Comment{This step is called \enquote{backtracking}}
            \State \textbf{return} $count$
        \EndIf
        \EndFunction
    \end{algorithmic}
\end{algorithm}
\begin{figure}[htb!]
    \centering
    \includegraphics[width=\linewidth]{\fig/saw-path-count-small}
\end{figure}
\newgeometry{top=1.5in, bottom=1.5in}
\begin{figure}[htb!]
    \centering
    \includegraphics[width=\linewidth]{\fig/saw-path-count}
\end{figure}
\begin{figure}[htb!]
    \centering
    \includegraphics[width=\linewidth]{\fig/self-avoiding-vs-random-walk}
\end{figure}
\begin{figure}[htb!]
    \centering
    \includegraphics[width=\linewidth]{\fig/self-avoiding-to-random-walk-ratio}
\end{figure}
\begin{figure}[htb!]
    \centering
    \includegraphics[width=\linewidth]{\fig/self-avoiding-to-random-walk-ratio-log}
\end{figure}
As you can see, the growth is consistently exponential. This is explored in more detail in the lecture notes.
\restoregeometry
\section{Random Number Generator (RNG)}
I use the Julia programming language for solving these problems. The default random engine in Julia is a \href{https://en.wikipedia.org/wiki/Mersenne_Twister}{Mersenne Twister}, so that is the engine that I analyze here.
\subsection{Distribution}
The bare minimum requirement for an RNG is a uniform output distribution.
\subsection{Coefficient of Variation}
For random distributions, the coefficient of variation must be proportional to the inverse square root of the number of samples.
\begin{equation}
    CV = \frac{\sigma}{\mu} \sim \frac{1}{\sqrt{N}}
\end{equation}
Note that this is exactly the same as the random ballistic distribution, which we explored before.
\subsection{Numbers Coming Before 4}
We can also analyze the behavior of an RNG in other ways; if we take every number coming before a 4 in a random series generated by the RNG, the results (distribution, growth, and coefficient of variation) must stay the same.
\newgeometry{top=0.1in, bottom=0.1in, left=0in, right=0in}
\thispagestyle{empty}
\begin{figure}
    \centering
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-distribution}
    \end{subfigure}
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-growth}
    \end{subfigure}
    \par\bigskip
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-cv-single}
    \end{subfigure}
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-cv}
    \end{subfigure}
    \caption{The distribution of numbers is fairly uniform, as expected.\\The numbers are more evenly distributed as the number of random samples increases.}
\end{figure}
\begin{figure}
    \centering
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-b4-distribution}
    \end{subfigure}
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-b4-growth}
    \end{subfigure}
    \par\bigskip
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-b4-cv-single}
    \end{subfigure}
    \begin{subfigure}{0.42\linewidth}
        \centering
        \includegraphics[width=\linewidth]{\fig/rng-b4-cv}
    \end{subfigure}
    \caption{Taking the numbers before 4 in a series gives the same results, as expected.}
\end{figure}
\restoregeometry
\end{document}
{ "alphanum_fraction": 0.7304727273, "avg_line_length": 36.7647058824, "ext": "tex", "hexsha": "ecd1940e7f8118632fdc61836eb95ec134f5c7e7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "slhshamloo/comp-phys", "max_forks_repo_path": "ps5-rw-dla-saw-rng/report/ps5-rw-dla-daw-rng.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "slhshamloo/comp-phys", "max_issues_repo_path": "ps5-rw-dla-saw-rng/report/ps5-rw-dla-daw-rng.tex", "max_line_length": 162, "max_stars_count": null, "max_stars_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "slhshamloo/comp-phys", "max_stars_repo_path": "ps5-rw-dla-saw-rng/report/ps5-rw-dla-daw-rng.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2166, "size": 6875 }
%
% File acl-hlt2011.tex
%
% Contact: [email protected]
%%
%% Based on the style files for ACL2008 by Joakim Nivre and Noah Smith
%% and that of ACL2010 by Jing-Shin Chang and Philipp Koehn

\documentclass[11pt]{article}
\usepackage{acl-hlt2011}
\usepackage{times}
\usepackage{latexsym}
\usepackage{amsmath}
\usepackage{multirow}
\usepackage{url}
\usepackage{graphicx}
\DeclareMathOperator*{\argmax}{arg\,max}
\setlength\titlebox{6.5cm}    % Expanding the titlebox

\title{The Computational Linguistics Summarization Pilot Task: \newline TALN-UPF system description and evaluation}

\author{Francesco Ronzano, Beatriz Fisas, Horacio Saggion \\
NLP Research Group \\
Universitat Pompeu Fabra \\
Barcelona, Spain \\
{\tt \{francesco.ronzano,beatriz.fisas,horacio.saggion\}@upf.edu} \\}

\date{}

\begin{document}
\maketitle

\begin{abstract}
This document introduces the system built by the TALN-UPF team (TALN-UPFsys) to participate in the Computational Linguistic Summarization Pilot Task (CLsumm), organized in the context of the Text Analytics Conference 2014. TALN-UPFsys faces both citation-based document analysis Tasks proposed by CLsumm: reference span identification (Task 1A) and discourse facet classification (Task 1B). The system has been trained and evaluated on the corpus of manually annotated documents (CLcorpus) provided by the CLsumm organizers. TALN-UPFsys is based on sentence-level computations, thus considering sentences as the main text analysis units. After a brief description of the CLcorpus sanitization and pre-processing steps, we present and evaluate our approach to both CLsumm Tasks.
\end{abstract}

\section{Introduction}
\label{sec:intro}

The network of article-to-article connections built by citations, together with the semantics that characterizes each citation, has been leveraged by different scientific text analysis tasks, including the assessment of the social relevance of a paper, the targeted selection of contents from scientific articles, and single and multi-document summarization. The Computational Linguistic Summarization Pilot Task (CLsumm), organized in the context of the Text Analytics Conference 2014, proposed two citation-based document analysis Tasks. The first \textit{Task (1A)} required participants, given a citation, to identify the text spans of the cited paper that are referenced by that citation; these text spans are also referred to as reference text spans. The second \textit{Task (1B)} required participants to classify each reference text span of the cited paper as belonging to a specific discourse facet, chosen from \textit{Aim}, \textit{Method}, \textit{Implication}, \textit{Results} and \textit{Hypothesis}. A manually annotated corpus, referred to as CLcorpus, was provided by the CLsumm organizers in order to train and evaluate the participating systems.

This document describes TALN-UPFsys, the system built by the TALN-UPF team to participate in both CLsumm Tasks. In particular, in Section~\ref{sec:corpusSanitize} we discuss the process of data sanitization we had to apply to the documents of the CLcorpus so as to enable any further analysis. Section~\ref{sec:preProcessing} provides an overview of the pre-processing steps we performed to support any more complex computation over the textual contents of each paper. Sections~\ref{sec:task1a} and~\ref{sec:task1b} respectively describe our approach to Tasks 1A and 1B. In these Sections we also provide the results of the evaluation of our system on the CLcorpus.
To conclude, Section~\ref{sec:conclusion} provides an overview of future avenues of improvement.

\section{Sanitizing the Computational Linguistic Summarization Corpus}
\label{sec:corpusSanitize}

The CLsumm organizers provided task participants with the CLcorpus, made of 10 sets of manually annotated papers from the ACL Anthology\footnote{http://www.aclweb.org/anthology/}. Each set includes one cited paper and 5 to 10 citing papers. Each citing paper includes at least one citation to the cited one. The citing papers of the CLcorpus include 135 citations, with an average of 1,58 citations per citing paper. See Table~\ref{table:corpusStats} for more details about the CLcorpus.

\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{ l | c | c | c }
\hline
Docset & Citing papers & Num. cits. & Cits. / paper \\ \hline
C94-2154 & 5 & 5 & 1,000 \\ \hline
E03-1020 & 9 & 15 & 1,666 \\ \hline
H05-1115 & 8 & 12 & 1,500 \\ \hline
H89-2014 & 8 & 11 & 1,375 \\ \hline
J00-3003 & 9 & 10 & 1,111 \\ \hline
J98-2005 & 9 & 21 & 2,333 \\ \hline
N01-1011 & 8 & 9 & 1,125 \\ \hline
P98-1081 & 9 & 29 & 3,222 \\ \hline
X96-1048 & 9 & 12 & 1,333 \\ \hline
- & Avg: 8,4 & Sum: 135 & Avg: 1,577 \\ \hline
\hline
\end{tabular}
\caption{Number of citing papers and citations of each document set (docset) of the CLcorpus.}
\label{table:corpusStats}
\end{center}
\end{table}

In citing papers, citations are identified by their citation markers, each one with its own formatting style: \textit{(Kogure, 1990)}, \textit{[16]}, \textit{(Gleen et al., 2004)}, etc. For the 135 citations of the CLcorpus, the following set of elements has been annotated by one person:
\begin{itemize}
\item the \textbf{citation context}, that is the text spans that include the citation marker and usually identify the reason why the paper has been cited;
\item one to three \textbf{reference text spans}, that is the text spans of the cited paper that best reflect the excerpts actually referenced by the citation;
\item the \textbf{discourse facet} characterizing the citation, chosen from a predefined set of 5 discourse facets.
\end{itemize}

For each paper the corpus includes both its original PDF version and the textual output of the PDF-to-text conversion. Stand-off annotations of citation contexts and text spans of the cited papers are provided by means of a CSV file, by specifying the start and end offsets of each text annotation.

When parsing the CLcorpus we encountered several problems that significantly complicated our work:
\begin{itemize}
\item \textbf{Text encoding}: only a small part of the textual documents provided is encoded as UTF-8. Different charset encodings are used, including \textit{WINDOWS-1252} and \textit{GB18030}, making it difficult to implement an automated, homogeneous textual processing pipeline;
\item \textbf{Content}: the textual version of the papers, especially for PDF files older than 10 years, presented several text formatting issues: hyphenation problems, words not separated by blank spaces, page headers and footnotes included in the textual flow, etc. The high frequency of these errors prevents the reliable analysis of such contents;
\item \textbf{Stand-off annotations}: in the CSV files, the start and end offsets of each text annotation are not valid offsets of the textual version of the related paper. As a consequence, in order to retrieve the annotated texts, it is necessary to search for them manually.
\end{itemize}

In order to solve all these issues and enable the automated processing of the annotated textual contents of the CLcorpus, we had to perform a heavy sanitization process. In particular we carried out the following steps:
\begin{enumerate}
\item conversion of each paper of the corpus from PDF to text by means of Poppler\footnote{http://poppler.freedesktop.org/}, a robust PDF-to-text converter;
\item manual correction of the PDF-to-text conversion errors in order to get a clean textual version of each paper;
\item by inspecting the textual contents of each CSV file, manual propagation of the annotations of all citing and cited papers to the clean textual version of each paper.
\end{enumerate}
In this way, we generated the sanitized version of the CLcorpus that we used as input for any further textual analysis.

\section{Document pre-processing pipeline}
\label{sec:preProcessing}

In this Section we provide a brief overview of the pre-processing steps that we perform over each paper of the sanitized CLcorpus (both citing and cited papers). In this way, we enrich papers with explicit linguistic and semantic metadata that will support the actual text analysis of Tasks 1A and 1B. We parse the papers with the following text analysis tools:
\begin{enumerate}
\item \textbf{Custom rule-based sentence splitter}, to identify candidate sentences that will be validated or rejected by the following pre-processing steps;
\item \textbf{Tokenizer} and \textbf{POS tagger}. We exploit the ANNIE NLP tools for English, integrated in GATE\footnote{https://gate.ac.uk/ie/annie.html};
\item \textbf{Sentence sanitizer}, to remove incorrectly annotated sentences, relying on a set of rules and heuristics;
\item \textbf{Sentence TF-IDF vector calculator}, used to associate each sentence with a TF-IDF vector. The IDF values of the terms of each document are computed by considering a corpus including all the papers of the document set the document belongs to (up to 9 citing papers and one reference paper).
\end{enumerate}

\section{Task 1A: reference span identification}
\label{sec:task1a}

In Task 1A, starting from a citation, participants have to analyze the cited paper to point out one to three \textbf{reference text spans} that identify the excerpts of the cited paper that are actually referenced by that citation. For each citation we retrieve from the CLcorpus the citation context identified by the annotator. Then, we select the sentences of the citing paper that overlap totally or partially the citation context. These sentences are referred to as the citation context sentences (CtxSent1,..., CtxSentN). We associate with each sentence of the cited paper a \textit{score} equal to the sum of the TF-IDF vector cosine similarities computed between that sentence and each sentence belonging to the citation context (CtxSent1,..., CtxSentN). We choose as reference text spans the N sentences of the cited paper with the highest \textit{score}; an illustrative sketch of this scoring step is shown below. In the remaining part of this Section we present some experiments to evaluate the performance of our approach when N, the number of highest-scoring cited paper sentences included in the reference text span, varies.

\newpage

\textbf{CONTENT FROM THE FOLLOWING SECTION NOT YET ADDED TO PROCEEDINGS PAPER}\\
\textbf{EVALUATION}. We evaluate our reference text span identification approach against the gold standard reference text spans manually annotated in the CLsumm corpus.
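For illustration, the scoring step described above could be implemented along the following lines. This is a simplified sketch rather than our actual code (scikit-learn is assumed and all names are ours); in particular, it fits the TF-IDF model on the pooled sentences instead of computing the IDF values over the whole document set as described in Section~\ref{sec:preProcessing}.
\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_reference_spans(cited_sentences, context_sentences, top_n=4):
    # Score each cited-paper sentence by the sum of its TF-IDF cosine
    # similarities to the citation context sentences; return the top N.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(cited_sentences + context_sentences)
    cited = tfidf[:len(cited_sentences)]
    context = tfidf[len(cited_sentences):]
    # Rows are L2-normalized by default, so dot products are cosine similarities.
    scores = np.asarray((cited @ context.T).sum(axis=1)).ravel()
    best = np.argsort(scores)[::-1][:top_n]
    return [(int(i), cited_sentences[i]) for i in best]
\end{verbatim}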
To this purpose, we use the F1 score defined by the organizers of the BioMedical Summarization Track of the Text Analysis Conference 2014, since this Track proposed the same Task 1A as CLsumm over a set of papers from the biomedical domain. This F1 score quantifies the offset-based overlap of pairs of automatically identified and manually selected reference text spans.

In our evaluation we consider four sizes of reference text spans, ranging from the 2 to the 5 sentences of the cited paper with the highest similarity to the citation context. The results of our evaluation are summarized in Table~\ref{table:task1aEval}.

\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{ l | c | c | c | c }
\hline
Docset & TOP 2 & TOP 3 & TOP 4 & TOP 5 \\ \hline
C90-2039 & 0,087 & 0,097 & 0,153 & 0,134 \\ \hline
C94-2154 & 0,000 & 0,096 & 0,110 & 0,101 \\ \hline
E03-1020 & 0,087 & 0,099 & 0,106 & 0,093 \\ \hline
H05-1115 & 0,017 & 0,112 & 0,106 & 0,093 \\ \hline
H89-2014 & 0,214 & 0,196 & 0,178 & 0,152 \\ \hline
J00-3003 & 0,121 & 0,103 & 0,084 & 0,072 \\ \hline
J98-2005 & 0,145 & 0,105 & 0,083 & 0,068 \\ \hline
N01-1011 & 0,125 & 0,107 & 0,128 & 0,167 \\ \hline
P98-1081 & 0,104 & 0,105 & 0,086 & 0,072 \\ \hline
X96-1048 & 0,205 & 0,175 & 0,153 & 0,156 \\ \hline
\textbf{AVG:} & 0,111 & 0,120 & 0,121 & 0,116 \\ \hline
\hline
\end{tabular}
\caption{Variation of the F1 score when the reference text span is identified by considering the 2/3/4/5 sentences of the cited paper with the highest similarity to the citation context. The F1 score is computed both by document set and averaged across all document sets.}
\label{table:task1aEval}
\end{center}
\end{table}

From Table~\ref{table:task1aEval} we can notice that our approach obtains the best average result when we select a reference text span that includes the 4 sentences (TOP 4) that are most similar to the citation context. In general, apart from the TOP 2 sentence selection of the document set C94-2154, we can see that our approach always manages to identify a set of reference text spans that partially overlaps the gold standard ones. In 4 document sets (H05-1115, H89-2014, J00-3003 and X96-1048) we can notice that the best F1 score is obtained by selecting only the TOP 2 sentences. Adding further sentences to the reference text span lowers the F1 score. This happens because the 3rd, 4th and 5th most similar sentences are not included in the gold standard reference text spans, so the precision decreases.

\section{Task 1B: discourse facet classification}
\label{sec:task1b}

The purpose of Task 1B is the identification of the \textbf{discourse facet} of each reference text span of the cited paper. The discourse facet is the aspect of the cited paper referenced by the citing one. The CLcorpus associates with each manually annotated reference text span one of 5 discourse facets: \textit{Aim}, \textit{Method}, \textit{Implication}, \textit{Results} and \textit{Hypothesis}.

We approach Task 1B as a sentence classification task. From the CLcorpus, we select the sentences of the cited papers that overlap totally or partially a manually annotated reference text span. We characterize these sentences by the discourse facet that is manually associated with the overlapping reference text span. As a consequence we get a set of 266 cited papers' sentences, each one characterized by a discourse facet (see Table~\ref{table:task1bSentDistrib}).
\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{ l | c }
\hline
Discourse facet & Num. sentences \\ \hline
\textit{Aim} & 46 \\ \hline
\textit{Hypothesis} & 1 \\ \hline
\textit{Implication} & 25 \\ \hline
\textit{Results} & 29 \\ \hline
\textit{Method} & 165 \\ \hline
\textbf{TOTAL}: & 266 \\ \hline
\hline
\end{tabular}
\caption{Discourse facet of the sentences of the cited papers belonging to a manually annotated reference text span.}
\label{table:task1bSentDistrib}
\end{center}
\end{table}

Considering this dataset, we build and compare several sentence classifiers in order to automatically associate with each sentence belonging to a reference text span its discourse facet. Our best sentence classifier obtains an averaged F1 of 0,719. In the rest of this Section we discuss our evaluation of different classification algorithms.

\textbf{EVALUATION}. In order to automatically classify the discourse facet of the sentences belonging to reference text spans, we model each sentence as a word vector that includes lemmas, bigrams and trigrams. When we compute these word vectors we do not remove stopwords. Once the word vector representation of the sentences is obtained, we compare three classification algorithms by a 10-fold cross validation over the set of 266 cited papers' sentences (see Table~\ref{table:task1bSentDistrib}): \textit{Naive Bayes}, \textit{Support Vector Machine} with linear kernel and \textit{Logistic Regression}. The results of this comparison are shown in Table~\ref{table:task1bAlgorithmComp}.

\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{ l | c | c | c}
\hline
Disc. facet & NB & SVM & LR \\ \hline
\textit{Aim} & 0,725 & 0,734 & 0,732 \\ \hline
\textit{Method} & 0,706 & 0,826 & 0,828 \\ \hline
\textit{Implication} & 0,049 & 0,000 & 0,200 \\ \hline
\textit{Results} & 0,509 & 0,533 & 0,533 \\ \hline
\textit{Hypothesis} & 0,024 & 0,000 & 0,000 \\ \hline
\textbf{WEIGHTED AVG. F1}: & 0,623 & 0,698 & 0,719 \\ \hline
\hline
\end{tabular}
\caption{Comparison of discourse facet classification algorithms (F1 score): \textit{Naive Bayes} (NB), \textit{Support Vector Machine} with linear kernel (SVM) and \textit{Logistic Regression} (LR).}
\label{table:task1bAlgorithmComp}
\end{center}
\end{table}

\textbf{CONTENT BEYOND THIS POINT NOT YET ADDED TO PROCEEDINGS PAPER}\\
We can notice that the best performing sentence classifier is \textit{Logistic Regression}, which obtains a weighted average F1 score of 0,719. This classifier obtains good performances for the classes \textit{Aim}, \textit{Method} and \textit{Results}. Compared with these classes, the F1 score for the class \textit{Implication} is considerably lower. This probably happens because the examples of \textit{Implication} sentences provided are not enough to properly characterize this class. The F1 score of the class \textit{Hypothesis} is equal to zero, because there is actually only one sentence labeled as \textit{Hypothesis} among the 266 sentences of the training set. In general, we believe that the performance of these classifiers could benefit considerably from a larger and more balanced training set.

\section{Conclusions and future work}
\label{sec:conclusion}

In this document we introduced the system we built to participate in the Computational Linguistic Summarization Pilot Task (CLsumm), organized in the context of the Text Analytics Conference 2014.
After some remarks about the CLcorpus sanitization we had to perform, we described the text analysis methodologies we adopted to face the two CLsumm Tasks (Tasks 1A and 1B). We relied on sentence similarity measures to face Task 1A and on a sentence classifier to deal with Task 1B. We provided an evaluation of both Tasks by comparing the results of our system to the gold standard provided by the CLcorpus.

In our future investigation we would like to experiment with alternative text similarity approaches to identify reference text spans (Task 1A), taking into account richer structural and contextual information concerning each citation. To improve sentence classification results (Task 1B) we plan to experiment with new feature sets, tailored to the specific task. Moreover, to better validate any approach to these tasks it would be essential to get access to a larger corpus of annotated documents.

\section*{Acknowledgments}
The research described in this paper is funded by the EU Project Dr. Inventor (FP7-ICT-2013.8.1, project number 611383), the Project TIN2012-38584-C06-03 of the Ministerio de Econom\'{\i}a y Competitividad, Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on, Spain and the Program Ram\'on y Cajal 2009 (RYC-2009-04291).

\end{document}
{ "alphanum_fraction": 0.771040724, "avg_line_length": 85.3863636364, "ext": "tex", "hexsha": "0dc3f7cb74223240c47ee4c585594861401c414a", "lang": "TeX", "max_forks_count": 89, "max_forks_repo_forks_event_max_datetime": "2022-03-19T18:47:56.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-01T14:19:19.000Z", "max_forks_repo_head_hexsha": "3aa7f89afbe051d7202575b46e8f7449f7a088b0", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yolochai/scisumm-corpus", "max_forks_repo_path": "publications/TAC2014_workshop_paper/upf/TAC2014_ACL_SciSumm_TALN_UPF_CAMERA_READY_CONTRIBUTION.tex", "max_issues_count": 24, "max_issues_repo_head_hexsha": "3aa7f89afbe051d7202575b46e8f7449f7a088b0", "max_issues_repo_issues_event_max_datetime": "2021-07-28T08:14:57.000Z", "max_issues_repo_issues_event_min_datetime": "2016-03-05T17:28:14.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yolochai/scisumm-corpus", "max_issues_repo_path": "publications/TAC2014_workshop_paper/upf/TAC2014_ACL_SciSumm_TALN_UPF_CAMERA_READY_CONTRIBUTION.tex", "max_line_length": 836, "max_stars_count": 198, "max_stars_repo_head_hexsha": "3aa7f89afbe051d7202575b46e8f7449f7a088b0", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yolochai/scisumm-corpus", "max_stars_repo_path": "publications/TAC2014_workshop_paper/upf/TAC2014_ACL_SciSumm_TALN_UPF_CAMERA_READY_CONTRIBUTION.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-21T19:12:20.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-03T06:35:05.000Z", "num_tokens": 4886, "size": 18785 }
\section{Results and Discussion}
We present results with different types of flow field data sets to evaluate our approach, including synthetic data and spatially aggregated data sets. To demonstrate the effectiveness of the proposed algorithm, we compared it with the Monte Carlo (MC) method, which is the general approach to stochastically trace particles in uncertain flow fields modeled by probability distributions. We performed quantitative comparisons on the resulting streamlines generated by our approach and the MC method with different settings and distance measurements. We also qualitatively compare the most likely streamlines as well as the distributions of possible traces produced by our approach and the MC method by visualizing sample traces on different data sets.

\subsection{Synthetic Data}
We first evaluate the proposed algorithm on the analytical static double-gyre data set proposed by Shadden et al. in~\cite{Shadden2005271}. Gaussian noise is added to the vector field to synthesize the uncertainty. The uncertain flow can be described by the stream function
\begin{equation}
\psi(x,y) = A\sin(\pi x)\sin(\pi y) + N(0,{\sigma ^2})
\end{equation}
over the domain $[0,2]\times[0,1]$.

In order to quantitatively evaluate the robustness of the particle filtering algorithm under the influence of noise, we generate streamlines starting from regularly sampled seed positions for the certain (noise-free) double-gyre data set and use those streamlines as our ground truth. Then, a set of sample traces is generated by the MC method and the particle filtering method, starting from the same seed positions presented above, for the uncertain double-gyre data set with different noise levels, controlled by the standard deviation $\sigma$ of the Gaussian noise. All the streamlines were generated with a step size of $0.005$ and a maximum step number of $100$. For the particle filtering method, a concentration parameter $\kappa = 60$ and a resampling threshold $N_t = 4.0$ are used.

For the Monte Carlo method and the proposed algorithm, a critical parameter is the number of particles used for each seed. Indeed, more particles give a more accurate representation of the target distribution but take more time to generate the results. Hence, a particle count that balances the accuracy and the computation time needs to be studied. Based on~\cite{journals/mia/PontabryROSKD13}, we use $100$ particles for both methods in the experiments.

To compare the accuracy of the resulting traces, two pairwise distance metrics between streamlines $L_i$ and $L_j$ were used:

1. Hausdorff distance $d_H$~\cite{Roessl:2012:TVCG}, which measures how far two streamlines are from each other:
\begin{equation}
\begin{split}
d_H(L_i,L_j) = \max(d_h(L_i,L_j), d_h(L_j,L_i)) \\
\text{with } d_h(L_i,L_j) = \max_{p_l \in L_i} \min_{p_k \in L_j} \left\| p_k - p_l \right\|
\end{split}
\end{equation}

2. Mean of the closest point distance $d_M$~\cite{Corouge04towardsa}, which gives the average distance between two streamlines:
\begin{equation}
\begin{split}
d_M(L_i,L_j) = \mathrm{mean}(d_m(L_i,L_j), d_m(L_j,L_i)) \\
\text{with } d_m(L_i,L_j) = \mathrm{mean}_{p_l \in L_i} \min_{p_k \in L_j} \left\| p_k - p_l \right\|
\end{split}
\end{equation}

Figure~\ref{gerror} gives the average of the distances presented above between the most likely streamlines generated by each method and the ground truth, with increasing values of $\sigma$ for the noise in the vector field.
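As a side note, both streamline distance measures can be computed with a few lines of NumPy. The following is a minimal illustrative sketch (not taken from the actual implementation), assuming each streamline is stored as an $(n, d)$ array of point coordinates.
\begin{verbatim}
import numpy as np

def pairwise_dists(L1, L2):
    # Matrix of point-to-point Euclidean distances, shape (len(L1), len(L2)).
    return np.linalg.norm(L1[:, None, :] - L2[None, :, :], axis=-1)

def hausdorff_distance(L1, L2):
    d = pairwise_dists(L1, L2)
    # d_H = max of the two directed Hausdorff distances.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mean_closest_point_distance(L1, L2):
    d = pairwise_dists(L1, L2)
    # d_M = mean of the two directed mean closest-point distances.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
\end{verbatim}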
The figure reveals that our method produces most likely traces that are closer to the ground truth, and that the average distance increases more slowly than for the MC method as the noise increases.

Besides comparing the accuracy of the most likely traces, it is also important to compare the whole distribution of possible traces generated by each method. We evaluate the accuracy of uncertain streamlines starting from a given seed position by measuring the distance between each individual trace and the ground truth, and then computing the weighted sum of all the distances. For the MC method, all traces are equally weighted by $\frac{1}{N_s}$. For the proposed method, the weights of the traces described above are used. Figure~\ref{gerror_r} shows that the proposed method can generate more accurate traces with less uncertainty.

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/doublegyre_h.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/doublegyre_m.eps}
    \caption{}
  \end{subfigure}
  \caption{Comparison of the distance between the most likely traces and the ground truth using the Hausdorff and mean of the closest point measurements for our method and the MC method.}
  \label{gerror}
\end{figure}

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/doublegyre_hr.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/doublegyre_mr.eps}
    \caption{}
  \end{subfigure}
  \caption{Comparison of overall trace accuracy. For each method, the distances of all sample traces to the ground truth are measured and summed by their weights.}
  \label{gerror_r}
\end{figure}

Figure~\ref{case_1} shows sample traces generated by the MC method and the proposed method at a given seed location in the double-gyre flow field. As we can see in the figure, our method generates more concentrated traces, which are also closer to the ground truth compared with those of the MC method. The most likely trace generated by the proposed method is also closer to the ground truth, as shown in~\ref{case_1_c}.

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[!htb]{0.25\textwidth}
    \centering
    \includegraphics[height=0.8in]{../figures/double_gyre_mc35.eps}
    \caption{}
    \label{case_1_a}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.25\textwidth}
    \centering
    \includegraphics[height=0.8in]{../figures/double_gyre_smc35.eps}
    \caption{}
    \label{case_1_b}
  \end{subfigure}
  \begin{subfigure}[!htb]{0.5\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/double_gyre_opt35.eps}
    \caption{}
    \label{case_1_c}
  \end{subfigure}
  \caption{(a): Sampled streamlines computed by the MC method starting from the seeding position $x=0.3, y=0.5$ in the analytical double-gyre data set. (b): Sampled streamlines computed by our method from the same seeding position as in (a). (c): The most likely traces generated by both methods compared with the ground truth.}
  \label{case_1}
\end{figure}

\subsection{Spatially Aggregated Data Sets}
In this section, experiments were done on two real-world flow field data sets, Hurricane Isabel and 2D Ocean. Hurricane Isabel is a data set with a resolution of $500 \times 500 \times 100$ that models a strong hurricane in the west Atlantic region in September 2003.
The 2D Ocean data set is a flow field with a resolution of $574 \times 289$ produced by a 2D computational fluid dynamics simulation. In order to test the performance of our algorithm on uncertain data represented by non-Gaussian distributions, we decompose the data into small cubic blocks and construct a histogram for each block. To compute a histogram from a set of vector directions, we consider the surface patches of a unit sphere as histogram bins. We make use of the recursive zonal equal area sphere partitioning algorithm~\cite{leopardi2006} to partition the unit sphere into patches of equal area and use them as our histogram bins. Partition results for different numbers of patches on the $2$-dimensional unit sphere are shown in Figure~\ref{rzeasp}. In the experiments, we partition the sphere into $2048$ patches to obtain an accurate histogram.

As Carr et al. presented in~\cite{Carr:2006:HIS:1187627.1187777}, histograms can be poor representations of data since they represent the distribution using nearest neighbor interpolation. To address this issue, we densely sample the data in each local block to make the histogram converge to the true local distribution. For the test data sets, distribution-based data are generated with three different block sizes, $8^3$, $16^3$, and $32^3$, to evaluate the performance of the proposed algorithm under the influence of uncertainty.

Since the data are represented as block-wise histograms, how to interpolate between neighboring blocks to generate a more accurate probability distribution becomes a challenging question. Developing efficient and accurate interpolation techniques for probability density functions is still an active area of research. A variety of techniques~\cite{10.1109/TVCG.2013.208, 10.1109/TVCG.2012.249} have been proposed to perform linear interpolation on different types of distributions. However, they only focus on either uniform or parametric distributions. Furthermore, these techniques only work on point-based data sets. They may give incorrect value ranges for the block-based data sets used in our experiments. Here, we perform the interpolation by aggregating the contributions of neighboring histograms to the distribution of interest based on the method presented by Clemen et al. in~\cite{aggregation99}
\begin{equation}
h(\theta) = k\prod\limits_{i = 1}^n h_i(\theta)^{\alpha_i}
\end{equation}
where $k$ is a normalizing constant and the weights $\alpha_i$ represent the relative quality of the different histograms. Typically, the weights are restricted to sum to one. The contribution of each histogram is estimated by applying convolution operations on it with a von Mises-Fisher kernel, where the number of convolution operations is determined based on the distance between the neighboring block and the target position.
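A minimal sketch of this aggregation rule is shown below. It is illustrative code rather than the actual implementation, with each histogram stored as an array over the sphere-patch bins and the weights $\alpha_i$ encoding the relative contribution of each neighboring block (e.g. derived from its distance to the target position).
\begin{verbatim}
import numpy as np

def aggregate_histograms(hists, weights):
    # Weighted geometric mean of the neighboring histograms:
    # h(theta) = k * prod_i h_i(theta)^alpha_i, normalized to sum to one.
    hists = np.asarray(hists, dtype=float) + 1e-12   # avoid log(0) in empty bins
    alphas = np.asarray(weights, dtype=float)
    alphas = alphas / alphas.sum()                   # weights restricted to sum to one
    log_h = np.tensordot(alphas, np.log(hists), axes=1)  # sum_i alpha_i * log h_i(theta)
    h = np.exp(log_h)
    return h / h.sum()                               # the normalizing constant k
\end{verbatim}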
\begin{figure}[!htb]
  \centering
  \begin{subfigure}[b]{0.16\textwidth}
    \centering
    \includegraphics[width=0.8in]{../figures/rzeasp_16.eps}
    \caption{16 patches}
  \end{subfigure}~
  \begin{subfigure}[b]{0.16\textwidth}
    \centering
    \includegraphics[width=0.8in]{../figures/rzeasp_64.eps}
    \caption{64 patches}
  \end{subfigure}~
  \begin{subfigure}[b]{0.16\textwidth}
    \centering
    \includegraphics[width=0.8in]{../figures/rzeasp_256.eps}
    \caption{256 patches}
  \end{subfigure}
  \caption{Recursive zonal equal area sphere partitioning with different numbers of patches.}
  \label{rzeasp}
\end{figure}

As presented above, we regularly sample a set of seed locations and compute the streamlines for both the raw data and the spatially aggregated data. The sample seed positions and the number of samples for the two test data sets are given in Table~\ref{seed_position}. To perform the quantitative analysis, we treat the streamlines computed from the raw data as the ground truth and compute the distance between the stochastic particle traces and the ground truth. $100$ particles were used, with an integration step size of $1.0$ and a maximum step number of $1000$, for the streamline computation. For the particle filtering method, a concentration parameter $\kappa = 60$ and a resampling threshold $N_t = 4.0$ are used. Figure~\ref{berror_r} gives the mean of the weighted sum of distances between the sample streamline bundles generated by the tested methods and the ground truth on the different data sets with different block sizes. The figure reveals that the proposed method can produce traces that are closer to the ground truth.

\begin{table}[ht!b]
\centering
\begin{tabular}{|l|l|l|}
\hline
Data Set & Number of Seeds & Interval of Seed Positions \\ \hline
Isabel & $12 \times 12 \times 9$ (1296) & $40 \times 40 \times 10$ \\ \hline
Ocean & $28 \times 14$ (392) & $20 \times 20$ \\ \hline
\end{tabular}
\caption{Sample seed positions for the two test data sets.}
\label{seed_position}
\end{table}

% \begin{figure}[!htb]
%   \centering
%   \begin{subfigure}[b]{0.24\textwidth}
%     \centering
%     \includegraphics[height=0.9in]{../figures/doublegyre_h.eps}
%     \caption{}
%   \end{subfigure}~
%   \begin{subfigure}[b]{0.24\textwidth}
%     \centering
%     \includegraphics[height=0.9in]{../figures/doublegyre_m.eps}
%     \caption{}
%   \end{subfigure}
%   \begin{subfigure}[b]{0.24\textwidth}
%     \centering
%     \includegraphics[height=0.9in]{../figures/doublegyre_h.eps}
%     \caption{}
%   \end{subfigure}~
%   \begin{subfigure}[b]{0.24\textwidth}
%     \centering
%     \includegraphics[height=0.9in]{../figures/doublegyre_m.eps}
%     \caption{}
%   \end{subfigure}
%   \caption{Hausdorff and mean of the closest point distances between the ground truth and most likely traces generated by our method and the MC method for the two spatially down-sampled data sets.}
%   \label{berror}
% \end{figure}

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/ocean_h.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/isabel_h.eps}
    \caption{}
  \end{subfigure}
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/ocean_m.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[b]{0.24\textwidth}
    \centering
    \includegraphics[height=1.0in]{../figures/isabel_m.eps}
    \caption{}
  \end{subfigure}
  \caption{Hausdorff and mean of the closest point distances between the ground truth and the sample traces generated by our method and the MC method for the two spatially aggregated
data sets.}
  \label{berror_r}
\end{figure}

Figure~\ref{data_overview} shows the streamlines generated from the two test data sets at the seed positions presented above. In Figure~\ref{data_overview} (a) and (d), the ground truth streamlines are generated from the raw data. The most likely streamlines generated by the MC method on the distribution-based data set with block size $16^3$ are given in Figure~\ref{data_overview} (b) and (e); as expected, the streamlines generated by the MC method are generally not as smooth as the ground truth, and some flow features look quite different compared with the ground truth. Figure~\ref{data_overview} (c) and (f) show the streamlines produced by our method with the same block size, which give more accurate and smoother results.

In addition to the most likely traces, the distribution approximated by the estimated sample traces is also important for understanding the distribution-based flow field. Visualizations of the estimated streamline bundles on the different flow field data sets are shown in Figures~\ref{case_5} and~\ref{case_4} to demonstrate the advantage of the proposed method. The resulting trajectories are generated with $1000$ particles for both the Monte Carlo method and the particle filtering algorithm. By considering the correlation of vector directions in the spatial domain, our method is less sensitive to local uncertainty, whereas MC traces can scatter in wrong directions, as shown in Figure~\ref{case_5}. Figure~\ref{case_4} shows that our algorithm can produce more concentrated results than the basic MC method, because the correlation between consecutive integration steps (the prior information) is exploited.

\begin{figure*}[!htb]
  \centering
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/ocean_gt.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/ocean_mc.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/ocean_smc.eps}
    \caption{}
  \end{subfigure}
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/isabel_gt.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/isabel_mc.eps}
    \caption{}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.32\textwidth}
    \centering
    \includegraphics[width=2.2in]{../figures/isabel_smc.eps}
    \caption{}
  \end{subfigure}
  \caption{Streamlines generated on the 2D Ocean and the Hurricane Isabel data sets. Color is used to enhance the contrast among streamlines. (a) and (d): The ground truth streamlines generated on the raw data. (b) and (e): Results produced by the Monte Carlo method on the distribution data with block size $16^3$ introduce noisy patterns on the streamlines due to the local uncertainty and give an inaccurate overview of the flow features.
Our particle filter method gives more accurate overview results and smoother streamlines by exploiting the spatial coherence of the vector directions, as shown in (c) and (f).}
  \label{data_overview}
\end{figure*}

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[!htb]{0.5\textwidth}
    \centering
    \includegraphics[width=2.8in]{../figures/ocean_mc1.eps}
    \caption{}
    \label{case_5_a}
  \end{subfigure}
  \begin{subfigure}[!htb]{0.5\textwidth}
    \centering
    \includegraphics[width=2.8in]{../figures/ocean_smc1.eps}
    \caption{}
    \label{case_5_b}
  \end{subfigure}
  \caption{(a): Sampled streamlines computed by the MC method starting from the seeding position $x=280, y=140$ in the 2D Ocean data set. (b): Sampled streamlines computed by our method from the same seeding position as in (a).}
  \label{case_5}
\end{figure}

\begin{figure}[!htb]
  \centering
  \begin{subfigure}[!htb]{0.25\textwidth}
    \centering
    \includegraphics[height=1.4in]{../figures/isabel_mc1.eps}
    \caption{}
    \label{case_4_a}
  \end{subfigure}~
  \begin{subfigure}[!htb]{0.25\textwidth}
    \centering
    \includegraphics[height=1.4in]{../figures/isabel_smc1.eps}
    \caption{}
    \label{case_4_b}
  \end{subfigure}
  \caption{(a): Sampled streamlines computed by the MC method starting from the seeding position $x=250, y=150, z=45$ in the Isabel data set. (b): Sampled streamlines computed by our method from the same seeding position as in (a).}
  \label{case_4}
\end{figure}

\subsection{Performance}
All the experiments were performed on a desktop computer with an Intel(R) Core(TM) i7-4790K 4.0GHz processor, 16GB of memory, and an NVIDIA GTX 970 GPU. In Table~\ref{timing}, we compare the performance measurements of the particle filtering algorithm and the Monte Carlo method for streamlines estimated from a given seed position with $100$ sample points, for all the test data sets used in this paper. On all the data sets, our approach is almost as fast as the Monte Carlo algorithm.

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Data Set} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Timing(sec)} \\ \cline{3-5}
 & & 40 Steps & 80 Steps & 120 Steps \\ \hline
\multirow{2}{*}{Double Gyre} & MC & 0.0026 & 0.005 & 0.008 \\ \cline{2-5}
 & Particle Filter & 0.0035 & 0.007 & 0.01 \\ \hline
\multirow{2}{*}{Ocean 2D} & MC & 2.8 & 5.9 & 9.1 \\ \cline{2-5}
 & Particle Filter & 2.9 & 6.1 & 9.3 \\ \hline
\multirow{2}{*}{Isabel} & MC & 3.3 & 6.7 & 10.1 \\ \cline{2-5}
 & Particle Filter & 3.4 & 6.8 & 10.7 \\ \hline
\end{tabular}
\caption{Overview of the performance of the particle filtering algorithm and the Monte Carlo method on all the distribution-based data sets used in this paper.}
\label{timing}
\end{table}
{ "alphanum_fraction": 0.7350794762, "avg_line_length": 69.2249134948, "ext": "tex", "hexsha": "04c21a6dc89f8253e91b77bafd5730662e673791", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hewenbin/pspf", "max_forks_repo_path": "paper/pvis2016/draft-rs.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hewenbin/pspf", "max_issues_repo_path": "paper/pvis2016/draft-rs.tex", "max_line_length": 1678, "max_stars_count": null, "max_stars_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hewenbin/pspf", "max_stars_repo_path": "paper/pvis2016/draft-rs.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5361, "size": 20006 }
\section{The Real Numbers}
\subsection{Cauchy Sequences}
\declareexercise{5.1.1}
\begin{proof}
	Since $(a_n)^\infty_{n=1}$ is a Cauchy sequence, it is eventually 1-steady; that is, there is some $N$ such that
	\[
	\forall n \geq N(|a_n-a_N| \leq 1)
	\]
	And we know that $a_1,a_2,\dots,a_N$ are bounded by some number $M$, which means $|a_N| \leq M$.
	Expand $|a_n-a_N| \leq 1$ to obtain
	\[
	a_N-1 \leq a_n \leq a_N+1
	\]
	If $a_N \geq 0$, then $a_n \geq a_N-1 \geq -a_N-1$, so
	\[
	-(a_N+1) \leq a_n \leq a_N+1 \equiv |a_n| \leq |a_N+1|
	\]
	So
	\[
	|a_n| \leq |a_N+1| \leq |a_N| + |1| \leq M+1
	\]
	If $a_N < 0$, then $a_n \leq a_N +1 < -a_N+1$, so
	\[
	a_N - 1 \leq a_n \leq -(a_N-1) \equiv |a_n| \leq |a_N-1|
	\]
	We can also get
	\[
	|a_n| \leq |a_N-1| \leq |a_N| + |1| \leq M+1
	\]
	Therefore, we know that $(a_n)^\infty_{n=1}$ is bounded by $M+1$.
\end{proof}

\subsection{Equivalent Cauchy Sequences}
\declareexercise{5.2.1}
\begin{proof}
	Although we need to prove that $(a_n)^\infty_{n=1}$ being a Cauchy sequence is logically equivalent to $(b_n)^\infty_{n=1}$ being a Cauchy sequence, showing that one implies the other is enough, since the statement is symmetric in the two sequences.

	Now we show that $(a_n)^\infty_{n=1}$ being a Cauchy sequence implies that $(b_n)^\infty_{n=1}$ is a Cauchy sequence.
	We need to show that for any $\varepsilon >0$, there exists an $N$ such that for all $m,n\geq N$, $|b_m-b_n| \leq \varepsilon$.
	Since $(a_n)^\infty_{n=1}$ is a Cauchy sequence, there exists an $N_1$ such that $\forall m,n\geq N_1(|a_m-a_n|\leq \varepsilon/3)$.
	Since the two sequences are equivalent, they are eventually $\varepsilon/3$-close, so there exists an $N_2$ such that $\forall n\geq N_2(|a_n-b_n|\leq \varepsilon/3)$.
	Let $N = \max{(N_1,N_2)}$. Then for all $m,n \geq N$,
	\[
	|b_m-b_n| \leq |b_m-a_m| + |a_m-a_n| + |a_n-b_n| \leq \varepsilon/3 + \varepsilon/3 + \varepsilon/3 = \varepsilon
	\]
	Therefore $(b_n)^\infty_{n=1}$ is eventually $\varepsilon$-steady for every $\varepsilon>0$, i.e. it is a Cauchy sequence.
\end{proof}

\declareexercise{5.2.2}
\begin{proof}
	We only need to show that $(a_n)^\infty_{n=1}$ being bounded implies that $(b_n)^\infty_{n=1}$ is bounded, since the statement is symmetric in the two sequences.
	Consider \exerciseref{5.1.1}.
	Since $(a_n)^\infty_{n=1}$ and $(b_n)^\infty_{n=1}$ are eventually $\varepsilon$-close, the sequence $(a_n-b_n)^\infty_{n=1}$ is eventually $2\varepsilon$-steady (by the triangle inequality).
	Thus, arguing as in the exercise mentioned, it is bounded by some number $M$.
	And we say that $(a_n)^\infty_{n=1}$ is bounded by some number $K$.
	So $|a_n-b_n| \leq M$.
	Again, similar to the proof of $|a_n| \leq |a_N|+1$ in \exerciseref{5.1.1}, we can obtain that $|b_n| \leq |a_n| + M \leq K+M$.
	Therefore, $(b_n)^\infty_{n=1}$ is also bounded.
\end{proof}

\subsection{The Construction of the Real Numbers}
\declareexercise{5.3.1}
\begin{proof}
	Reflexivity: It follows immediately from $|a_n-a_n| = 0$.

	Symmetry: It follows immediately from $|a_n-b_n| = |b_n-a_n|$.

	Transitivity: For any $\varepsilon >0$, we can find $M,N$ such that $\forall n\geq M(|a_n-b_n| \leq \varepsilon)$ and $\forall n\geq N(|b_n-c_n| \leq \varepsilon)$.
	Let $B = \max{(M,N)}$. Then for $n\geq B$,
	\[
	|a_n-c_n| \leq |a_n-b_n| + |b_n-c_n| \leq 2\varepsilon
	\]
	This can also be deduced from (c) in Proposition 4.3.7.
	Applying this with $\varepsilon/2$ in place of $\varepsilon$ shows that the sequences are eventually $\varepsilon$-close for every $\varepsilon>0$, so $(a_n)^\infty_{n=1}$ and $(c_n)^\infty_{n=1}$ are also equivalent.
\end{proof}

\declareexercise{5.3.2}
\begin{proof}
	(1) We need to show that $(a_nb_n)^\infty_{n=1}$ is a Cauchy sequence.
	For any $\varepsilon>0$, we can find $N_1,N_2$ such that $\forall i,j\geq N_1(|a_i-a_j| \leq \varepsilon)$ and $\forall i,j\geq N_2(|b_i-b_j| \leq \varepsilon)$.
	Let $B = \max{(N_1,N_2)}$. Then for $i,j \geq B$ (see (h) in Proposition 4.3.7),
	\[
	|a_ib_i - a_jb_j| \leq \varepsilon(|a_i| + |b_i|) + \varepsilon^2
	\]
	Note that $(a_n)^\infty_{n=1}$ and $(b_n)^\infty_{n=1}$ are Cauchy sequences, so by \exerciseref{5.1.1} they are both bounded by some number $M$.
Thus, $|a_i| + |b_i| \leq 2M$. Now, given any $\varepsilon' >0$, choose $\varepsilon = \min{(1,\varepsilon'/(2M+1))}$. Since $\varepsilon \leq 1$ gives $\varepsilon^2 \leq \varepsilon$, we obtain
\[
\varepsilon(|a_i| + |b_i|) + \varepsilon^2 \leq 2M\varepsilon + \varepsilon = (2M+1)\varepsilon \leq \varepsilon'
\]
So no matter what $\varepsilon'>0$ is, there exists $N\geq 1$ such that
\[
\forall i,j\geq N(|a_ib_i - a_jb_j| \leq \varepsilon(|a_i| + |b_i|) + \varepsilon^2 \leq \varepsilon')
\]
Then $(a_nb_n)^\infty_{n=1}$ is a Cauchy sequence, and so $xy$ is a real number.

(2) For any $\varepsilon >0$, we can find $N$ such that $\forall n\geq N(|a_n-a'_n|\leq \varepsilon)$. Thus, for such $n$,
\[
|a_nb_n - a'_nb_n| = |b_n||a_n - a'_n| \leq \varepsilon|b_n|
\]
Note that $(b_n)^\infty_{n=1}$ is bounded by some number $M>0$. So $|a_nb_n - a'_nb_n| \leq \varepsilon M$. Therefore, given $\varepsilon>0$ we instead find the $N'$ such that $\forall n\geq N'(|a_n-a'_n|\leq \varepsilon/M)$; then for such $n$, $|a_nb_n - a'_nb_n| \leq \varepsilon$. Thus $\sequence{a_nb_n}{1}$ and $\sequence{a'_nb_n}{1}$ are equivalent, so $xy$ does not depend on the choice of representative sequence.
\end{proof}

\declareexercise{5.3.3}
\begin{proof}
On one hand, if $a=b$, then obviously the sequences $a,a,\cdots$ and $b,b,\cdots$ are equivalent.
On the other hand, suppose that $a,a,\cdots$ and $b,b,\cdots$ are equivalent; we show that $a=b$. Presume the negation, that is, $a \neq b$. Then $|a_n-b_n| = |a-b| > 0$ for every $n$. For any $0<\varepsilon< |a-b|$, the two Cauchy sequences are therefore never eventually $\varepsilon$-close, contradicting their equivalence.
\end{proof}

\paragraph{Lemma 5.3.14}
Here one may ask why we can deduce $|b_n| \geq \varepsilon/2$ from $|b_{n_0} - b_n| \leq \varepsilon/2$ and $|b_{n_0}| > \varepsilon$. The book says that the triangle inequality can be used. In fact, we use the inequality
\[
||b_{n_0}| - |b_n|| \leq |b_{n_0} - b_n|
\]
instead of $|b_{n_0} - b_n| \leq |b_{n_0}| + |b_n|$. Since $|b_{n_0}| - |b_n| \leq ||b_{n_0}| - |b_n||$, we have
\[
|b_{n_0}| - |b_n| \leq \varepsilon/2 \equiv |b_{n_0}| \leq \varepsilon/2 + |b_n|
\]
But $|b_{n_0}| > \varepsilon$, so $\varepsilon/2 + |b_n| > \varepsilon \equiv |b_n| > \varepsilon/2$.
This form of the triangle inequality is not mentioned in (b) of Proposition 4.3.3, but it can easily be proven by case analysis, though the process is indeed very tedious.

\declareexercise{5.3.4}
\begin{proof}
Since the two Cauchy sequences are equivalent, they are eventually $\varepsilon$-close for any $\varepsilon>0$.
Then, according to \exerciseref{5.2.2}, $(a_n)^\infty_{n=1}$ being bounded implies that $(b_n)^\infty_{n=1}$ is bounded as well.
\end{proof}

\declareexercise{5.3.5}
\begin{proof}
We show that the sequences $(\frac{1}{n})^\infty_{n=1}$ and $(0)^\infty_{n=1}$ are equivalent. For each $\varepsilon>0$, we want to find $N \in \mathbb{N}$ such that $n\geq N \longrightarrow |\frac{1}{n}-0|\leq \varepsilon$. Note that
\[
|\frac{1}{n}-0|\leq \varepsilon \equiv \frac{1}{n} \leq \varepsilon \equiv \frac{1}{\varepsilon} \leq n
\]
which means that we need to find $N \geq \frac{1}{\varepsilon}$. This is always possible as stated by Proposition 4.4.1. Then the two sequences are equivalent, which proves our proposition.
\end{proof}

\subsection{Ordering the Reals}

\declareexercise{5.4.1}
\begin{proof}
We first show that if a real number $a$ is non-zero, then it must be either positive or negative, and not both.
We already know from Lemma 5.3.14 that $a$ can be written as $\LIM{a_n}$, where $\sequence{a_n}{1}$ is a Cauchy sequence bounded away from zero. Now we show that every Cauchy sequence that is bounded away from zero is equivalent to a Cauchy sequence that is either positively bounded away from zero or negatively bounded away from zero.

Let $\sequence{a_n}{1}$ be a Cauchy sequence that is bounded away from zero, say $|a_n| \geq c$ for every $n$, with $c>0$. Choose $\varepsilon$ so that $0<\varepsilon<c$. We can find $N$ such that $m,n \geq N \longrightarrow |a_n-a_m| \leq \varepsilon$. Since $|a_N| \geq c$, either $a_N\leq -c$ or $a_N \geq c$. We will only treat the latter case and show that $\sequence{a_n}{1}$ is then equivalent to a sequence that is positively bounded away from zero; in the former case one derives in the same way that it is equivalent to a sequence that is negatively bounded away from zero. For $m \geq N$ we have
\[
|a_N-a_m| \leq \varepsilon \longrightarrow a_N -\varepsilon \leq a_m
\]
and hence $a_m \geq c-\varepsilon>0$, since $\varepsilon<c$ and $a_N \geq c$.
Let $\sequence{b_n}{1}$ be such a sequence that
\begin{itemize}
\item $n \geq N \longrightarrow b_n = a_n$,
\item $0<n<N \longrightarrow b_n$ is any rational value greater than $c-\varepsilon$.
\end{itemize}
Thus, $\sequence{b_n}{1}$ is positively bounded away from zero, it is still a Cauchy sequence (only finitely many terms were changed), and it is equivalent to $\sequence{a_n}{1}$ since $b_n = a_n$ for $n \geq N$.

Now we show that a real number cannot be both positive and negative. Presume the negation, that is, some Cauchy sequence $\sequence{a_n}{1}$ is equivalent to both a sequence $\sequence{x_n}{1}$ with $x_n \geq c$ and a sequence $\sequence{y_n}{1}$ with $y_n \leq -c$, for some $c>0$. Choose $\varepsilon$ so that $0<\varepsilon<c$. Equivalence with $\sequence{x_n}{1}$ implies that we can find $N_1$ such that
\[
n \geq N_1 \longrightarrow |a_n-x_n| \leq \varepsilon \longrightarrow a_n \geq x_n -\varepsilon \geq c-\varepsilon >0
\]
Similarly, we can find $N_2$ such that
\[
n \geq N_2 \longrightarrow a_n \leq -(c-\varepsilon)<0
\]
For $n \geq \max{(N_1,N_2)}$ both inequalities would have to hold, which is impossible.
From what we have shown, we can easily derive that a real number is either positive, negative, or zero (note also that a positive or negative real number cannot equal zero: its defining sequence stays at distance at least $c$ from zero, so it cannot be equivalent to the zero sequence).

Now we show that $x$ is positive iff $-x$ is negative. We know $x = \LIM{a_n}$, where $a_n > c>0$. Then $-x = \LIM{-a_n}$. Since $-a_n < -c <0$, $\sequence{-a_n}{1}$ is negatively bounded away from zero. So $-x$ is negative, as desired; the converse direction is proven in the same way.

Finally we show that if $x=\LIM{a_n},y=\LIM{b_n}$ are both positive, then $x+y=\LIM{a_n+b_n}$ and $xy = \LIM{a_nb_n}$ are also positive. This follows immediately since
\[
a_n >c>0 \wedge b_n >d>0 \Longrightarrow a_nb_n > cd>0 \wedge a_n+b_n > c+d>0
\]
\end{proof}

\declareexercise{5.4.2}
\begin{proof}
(a) This follows immediately by applying Proposition 5.4.4 to $x-y$.
(b) Denote $x,y$ as $\LIM{a_n},\LIM{b_n}$, respectively. Note that by definition $y-x = \LIM{b_n-a_n} = \LIM{-(a_n-b_n)} = -(x-y)$. So from Proposition 5.4.4 we can see that
\[
x>y \equiv x-y>0 \equiv y-x<0 \equiv y<x
\]
One might notice that we use $x-y>0$ to represent $x-y$ being positive here. This is easy to justify: just notice that $x$ being positive is logically equivalent to $x-0$ being positive, and thus to $x>0$. We can prove in the same way that $x<0$ is equivalent to $x$ being negative.

(c) We know by Proposition 5.4.4 that
\[
x-y >0 \wedge y-z>0 \longrightarrow (x-y)+(y-z) = x-z >0
\]
So $x>y \wedge y>z \longrightarrow x>z$.

(d) This follows immediately since $(x+z)-(y+z) = x-y$.

(e) $x>y \equiv x-y>0$. Since $z$ is positive, Proposition 5.4.4 states that $z(x-y) >0$. So $xz-yz>0 \longrightarrow xz>yz$.
\end{proof}

\begin{prop}
\label{prop.5.4.basicproperties}
Since the real numbers possess the same basic algebraic properties as the rational numbers, the corollaries of those properties also hold for the reals. For example,
\[
a<b\wedge c<d \longrightarrow a+c<b+d
\]
\[
a,b,c,d>0\wedge a<b\wedge c<d \longrightarrow ac<bd
\]
\end{prop}

\declareexercise{5.4.3}
\begin{proof}
Existence: Write $x = \LIM{a_n}$. For an $\varepsilon>0$, there exists an $N$ such that $n\geq N \longrightarrow |a_n-a_N| \leq \varepsilon$. Choose an arbitrary $c>0$, and let $y$ be the rational number $a_N-\varepsilon-c = \LIM{a_N-\varepsilon-c}$. Then $x-y$ is positive (its defining sequence $a_n-(a_N-\varepsilon-c)$ is bounded below by $c$ for $n\geq N$), which means that $y<x$. Since $y$ is a rational number, there exists an integer $M$ such that $M \leq y$, and hence $M<x$. The number $M+1$ may be bigger than $x$, and if it is, $M$ is the number we want. If $M+1 \leq x$, then we check whether $(M+1)+1$ is bigger than $x$, and we repeat the step until we find the first integer $M'$ such that $M'+1>x$; this search terminates, since a symmetric argument to the one above shows that $x$ does not exceed the rational number $a_N+\varepsilon$. Hence $M'$ is the number we want.

Uniqueness: We have already shown that $\exists M(M\leq x<M+1)$. Suppose that there exists another integer $K$ such that $K\leq x<K+1$. We show that $K=M$. Suppose the negation. Then either $K<M$ or $K>M$. Under the former condition, $K+1\leq M\leq x$, contradicting $x<K+1$. Under the latter condition, $x<M+1\leq K$, contradicting $K\leq x$.
\end{proof}

\declareexercise{5.4.4}
\begin{proof}
\[
x>0\rightarrow \frac{1}{x}>0\rightarrow \exists N(N>\frac{1}{x}>0)
\]
where such an $N$ exists by the Archimedean property. So, according to Proposition 5.4.8,
\[
0<\frac{1}{N}<x
\]
as desired.
\end{proof}

\declareexercise{5.4.5}
\begin{proof}
Let $x=\LIM{x_n},y=\LIM{y_n}$ with $x>y$; we will produce a rational $q$ with $y<q<x$, which is exactly the statement of the proposition with the two numbers relabeled. Since $x>y$, the sequence $\sequence{x_n-y_n}{1}$ is equivalent to a sequence that is positively bounded away from zero. We may just let $\sequence{x_n-y_n}{1}$ be such a sequence. (This is always possible: given a sequence $\sequence{z_n}{1}$ which is positively bounded away from zero and satisfies $\LIM{z_n}=x-y$, we can redefine $x_n$ as $y_n+z_n$.) This way, for some $c>0$, $x_n-y_n>c\equiv x_n>c+y_n$. Moreover, fix $\varepsilon$ with $0<\varepsilon<\frac{c}{2}$; we can find $N$ such that for $n\geq N$, $|x_n-x_N|\leq \varepsilon\wedge|y_n-y_N|\leq \varepsilon$, which means
\[
x_n\geq x_N -\varepsilon\wedge y_n \leq y_N+\varepsilon
\]
Combining $x_N \geq y_N+c$ with $c>2\varepsilon$ gives
\[
x_N>y_N+2\varepsilon \equiv x_N-\varepsilon>y_N+\varepsilon
\]
This simplifies our work because both $x_N-\varepsilon$ and $y_N+\varepsilon$ are rational numbers, and we know that there exists a rational number $q$ such that $y_N+\varepsilon<q<x_N-\varepsilon$.
We now know that for $n\geq N$,
\[
x_n \geq x_N -\varepsilon \longrightarrow x_n-q \geq x_N-\varepsilon-q>0
\]
and
\[
y_n \leq y_N +\varepsilon \longrightarrow q-y_n \geq q-(y_N+\varepsilon)>0
\]
Define a new sequence as follows: $x_n'=x_n$ if $n\geq N$, and otherwise $x_n'$ is any rational number such that $|x_n'-x_N|\leq \varepsilon$; define $y_n'$ in the same way. Obviously $x=\LIM{x_n'},y=\LIM{y_n'}$. The sequences $\sequence{x_n'-q}{1},\sequence{q-y_n'}{1}$ are both positively bounded away from zero. Hence $y<q<x$, as desired.
\end{proof}

\declareexercise{5.4.6}
\begin{proof}
On one hand, suppose that $|x-y|<\varepsilon$. If $x=y$, then $y-\varepsilon<x<y+\varepsilon$ is obvious. If $x>y$, then $|x-y|=x-y<\varepsilon \rightarrow x<y+\varepsilon$, and $x>y \rightarrow x>y-\varepsilon$. If $x<y$, then $x<y+\varepsilon$ and $|x-y|=y-x<\varepsilon\rightarrow x>y-\varepsilon$.

Conversely, suppose that $y-\varepsilon<x<y+\varepsilon$; then $x-y<\varepsilon$ and $y-x<\varepsilon$. If $x>y$, then $|x-y|=x-y<\varepsilon$. If $x=y$, then $|x-y|=0<\varepsilon$. If $x<y$, then $|x-y|=y-x<\varepsilon$.
\end{proof}

\declareexercise{5.4.7}
\begin{proof}
(a) On one hand, if $x\leq y$, then adding the inequality $0<\varepsilon$ (see Proposition \ref{prop.5.4.basicproperties}) gives $x < y+\varepsilon$, and in particular $x \leq y+\varepsilon$. On the other hand, if $x \leq y+\varepsilon$ for every real number $\varepsilon>0$, we show that $x\leq y$. Presume the negation, that is, $x>y$; then as stated by Proposition 5.4.14, we can find a rational $q$ such that $x>q>y$, so $x>y+(q-y)$ with $q-y>0$, a contradiction.

(b) On one hand, suppose that $|x-y|\leq \varepsilon$ for all $\varepsilon>0$. If $x\geq y$, then $|x-y|=x-y \leq \varepsilon$ for all $\varepsilon>0$, so $x\leq y$ by (a), and hence $x=y$. If $x\leq y$, the same argument gives $x\geq y$ as well, and again $x=y$. So either way we have $x=y$. On the other hand, if $x=y$, then $|x-y|=0<\varepsilon$.
\end{proof}

\declareexercise{5.4.8}
\begin{proof}
We shall just prove the first statement here. Presume the negation, that is, $\LIM{a_n}>x$. Then we can find a rational $q$ such that $x<q<\LIM{a_n}$ (Proposition 5.4.14). Thus $a_n\leq x<q$ for every $n$, and then $\LIM{a_n}\leq\LIM{q}=q$ (Corollary 5.4.10). But we also have $\LIM{a_n}>q$, a contradiction.
\end{proof}

\subsection{The Least Upper Bound Property}

\paragraph{Example 5.5.3}
This set has no upper bound because
\begin{itemize}
\item if $x$ is greater than an element of the set, then $x$ must also be an element of the set;
\item $\forall x \in \mathbb{R}^+(\exists y(y \in \mathbb{R}^+ \wedge y>x))$.
\end{itemize}
\begin{proof}
(1) This follows immediately from the fact that order is transitive (Proposition 5.4.7).

(2) Consider the number $x+1$, which, according to (1), is also in the set, and which is greater than $x$, as $(x+1)-x = 1>0$.
\end{proof}

\declareexercise{5.5.1}
\begin{proof}
First we show that for every upper bound $N$ of $E$, the number $-N$ is a lower bound for $-E$, and vice versa. This can be concluded from
\[
x \leq N \equiv -x \geq -N
\]
Then we show that $-M$ is the greatest lower bound. Indeed, every lower bound of $-E$ is of the form $-N$ for some upper bound $N$ of $E$, and since $M$ is the least upper bound,
\[
M \leq N \to -M \geq -N
\]
\end{proof}

\declareexercise{5.5.2}
\begin{proof}
We show that there exists an integer $m_0$ with $L<m_0\leq K$ such that exactly one of $\frac{m_0}{n}$ and $\frac{m_0-1}{n}$ is an upper bound for the set.
That is, for this $m_0$, either
\[
\frac{m_0}{n} \text{ is an upper bound} \wedge \frac{m_0-1}{n} \text{ is not}
\]
or
\[
\frac{m_0-1}{n} \text{ is an upper bound} \wedge \frac{m_0}{n} \text{ is not},
\]
and the second case is impossible, since $\frac{m_0}{n}>\frac{m_0-1}{n}$ would then be an upper bound as well. So the first case must hold, which is exactly what is asked for. To see that such an $m_0$ exists, presume the negation, that is (note that this statement includes the situation in which $\frac{m}{n}$ and $\frac{m-1}{n}$ are both not upper bounds),
\[
(\forall L<m\leq K)(\frac{m}{n} \text{ is an upper bound} \equiv \frac{m-1}{n} \text{ is an upper bound})
\]
We know that when $m=K$, $m/n$ is an upper bound. Then $(m-1)/n$ also is. Repeating this step down to $m=L+1$, we conclude that $\frac{L}{n}$ is an upper bound, a contradiction.
\end{proof}

\declareexercise{5.5.3}
\begin{proof}
\emph{This is a different approach.} On one hand, for every integer $x<m_n$ we have $x \leq m_n-1$, so $x/n\leq (m_n-1)/n$; since $(m_n-1)/n$ is not an upper bound, neither is any such $x/n$. On the other hand, for every integer $x>m_n$ we have $x-1\geq m_n$, so $(x-1)/n\geq m_n/n$ is an upper bound, and such an $x$ does not have the required property either. Hence $m_n$ is unique.
\end{proof}

\declareexercise{5.5.4}
\begin{proof}
(1) Given $\varepsilon>0$, simply take an integer $M \geq 1/\varepsilon$; then $|m_n/n-m_{n'}/n'|\leq 1/M\leq \varepsilon$ for all $n,n'\geq M$, so the sequence is a Cauchy sequence.

(2) Since we want to use \exerciseref{5.4.8}, we need to prove that for some $N$, $(\forall n \geq N, q_n < q_M) \vee (\forall n \geq N, q_n > q_M)$. This thought introduces the following lemma:
\begin{lem}
For any Cauchy sequence $(a_n)^\infty_{n=1}$, if $a_M \neq \LIM{a_n}$, then there exists an $N$ such that the terms $a_n$ with $n\geq N$ are either all less than $a_M$ or all greater than $a_M$.
\end{lem}
\begin{proof}
Since $a_M \neq \LIM{a_n}$,
\[
\neg (\forall \varepsilon>0(\exists N(\forall n\geq N (|a_n-a_M| \leq \varepsilon)) ))
\]
which is
\[
\exists \varepsilon >0 (\forall N(\exists n\geq N(|a_n-a_M| > \varepsilon)))
\]
For this $\varepsilon$, $\exists N_1(n,n'\geq N_1 \to |a_n-a_{n'}| \leq \varepsilon)$. But we also know $\exists N_2 \geq N_1(|a_{N_2}-a_M| > \varepsilon)$. So for all $n \geq N_2$, we have
\[
|a_n-a_{N_2}| \leq \varepsilon \wedge |a_{N_2} - a_M| > \varepsilon
\]
This means either $a_n < a_{M}$ (when $a_M > a_{N_2}$), or $a_n > a_{M}$ (when $a_M < a_{N_2}$).
\end{proof}
If $q_M = S$, then $|q_M-S| = 0 \leq \frac{1}{M}$. If $q_M \neq S$, then according to the previous lemma, for some $N$ and all $n\geq N$, either $q_n > q_M$ or $q_n<q_M$. We suppose that $q_n > q_M$ here. We also know that for all $n\geq M$, $|q_M - q_n| \leq \frac{1}{M}$. Then for $n \geq \max(N,M)$, we can say that $\frac{1}{M} \geq q_n - q_M >0$. The limit of the sequence $(q_n-q_M)_{n=1}^{\infty}$ is $S-q_M$ (to make the bounds hold for every term, we redefine its value to be any number between $0$ and $\frac{1}{M}$ when $n < \max(N,M)$; changing finitely many terms does not change the limit). We can now apply \exerciseref{5.4.8} to it to obtain $\frac{1}{M}\geq S-q_M >0$ (the strict inequality uses our hypothesis $S \neq q_M$). A similar argument produces $\frac{1}{M}\geq q_M-S >0$ when $q_n<q_M$. We can now finish the proof.
\end{proof}

\declareexercise{5.5.5}
\begin{proof}
By Proposition 5.4.14 we can first pick a rational number $q$ with $x<q<y$. We will prove that there exists an irrational number $0<i<y-q$; then $q+i$ is irrational (otherwise $i=(q+i)-q$ would be rational) and satisfies $x<q<q+i<y$, so it is the number we want. Thus we only need to prove that there are arbitrarily small positive irrational numbers. Consider $\frac{\sqrt{2}}{N}$: choosing $N>\frac{\sqrt{2}}{y-q}$ (which is possible by the Archimedean property) makes it smaller than $y-q$, and it remains to check that it is irrational. Presume the negation, that is, that it is rational. We know that the rationals are closed under multiplication, which means $\frac{\sqrt{2}}{N} \cdot N = \sqrt{2}$ would be rational, a contradiction.
\end{proof}

\subsection{Real Exponentiation, Part I}

\paragraph{Lemma 5.6.5}
The set contains 0 because $0 \geq 0$ and, for $n \geq 1$, $0^n = 0 \leq x$.
If $y>1$, then according to Proposition 5.4.7 and Proposition 4.2.9, we know that we can multiply $y>1\wedge y>1$ to get $y^2>1^2=1$, and so on. In fact, $y>1 \to y^2 > y$ and $y>1\to y \times 1 > 1 \times 1 \equiv y >1$, so $y^2>y>1$. We can use induction to prove $y^n>1$. Since we can multiply such inequalities, all of whose sides are positive, we can conclude from $y>x$ and $y>1$ that $y^n > x \times 1^{n-1} \equiv y^n > x$; hence no $y>\max(1,x)$ belongs to the set, that is, the set is bounded above.

\declareexercise{5.6.1}
\begin{proof}
Let us denote the set $\{y \in \mathbb{R}:y \geq 0 \wedge y^n \leq x\}$ by $S$. Furthermore, we assert the following lemma without proof (it is the binomial theorem, and the proof can be given by induction):
\begin{lem}
\[
(x+\varepsilon)^n = \sum_{i=0}^n{\binom{n}{i}x^i\varepsilon^{n-i}}
\]
\end{lem}
(a) If $y^n < x$, then there exists a real number $c$ such that $y^n < c < x$. By definition, $c \in$ the set that determines $x^{1/n}$. But $y$
(c) By definition, for any $y \in S$, $y \geq 0$.
\end{proof}
{ "alphanum_fraction": 0.6709785766, "avg_line_length": 44.3299798793, "ext": "tex", "hexsha": "13dbb14562f65eb06e6ace652d22232e24344b6b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_forks_repo_path": "The Real Numbers.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_issues_repo_path": "The Real Numbers.tex", "max_line_length": 660, "max_stars_count": null, "max_stars_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_stars_repo_path": "The Real Numbers.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8255, "size": 22032 }
\documentclass{article} \title{Example document} \author{IOHK} \date{\today} \begin{document} \maketitle \begin{abstract} This is an example for building documents in CI. \end{abstract} \section{Section} Paragraph. \end{document}
{ "alphanum_fraction": 0.7478991597, "avg_line_length": 11.3333333333, "ext": "tex", "hexsha": "3de4ae675117c17077ea5d0a95adb5256e1d4358", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-04-09T10:00:38.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-08T12:56:43.000Z", "max_forks_repo_head_hexsha": "32e69cd7010686bc7438473e4c4c48511058a31d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "MitchellTesla/cardano-db-sync", "max_forks_repo_path": "docs/iohk-skeleton.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "32e69cd7010686bc7438473e4c4c48511058a31d", "max_issues_repo_issues_event_max_datetime": "2021-04-14T00:57:34.000Z", "max_issues_repo_issues_event_min_datetime": "2021-02-04T09:59:30.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "MitchellTesla/cardano-db-sync", "max_issues_repo_path": "docs/iohk-skeleton.tex", "max_line_length": 48, "max_stars_count": 9, "max_stars_repo_head_hexsha": "340b96b65d471456b80e169c4a0b1c8f264af31d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ArturWieczorek/cardano-db-sync", "max_stars_repo_path": "docs/iohk-skeleton.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-29T11:28:20.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-02T11:54:35.000Z", "num_tokens": 65, "size": 238 }
\section{Bot}
\label{sec:bot}

The bot is modeled as a Finite State Automaton (FSA) with the state space defined as follows and depicted in \Cref{fig:bot-fsa}; a sketch of how these states can be wired together is given after the state pseudocode below.

\begin{figure}[tp]
  \centering
  \includegraphics[scale=0.09]{./fig/FSA.eps}
  \caption{The bot's finite state automaton (FSA).}
  \label{fig:bot-fsa}
\end{figure}

\begin{description}
  \setlength\itemsep{1em}

  \item[INIT] the bot generates a unique identifier in the form \texttt{MAC:JVM}, where \texttt{MAC} is the local MAC address and \texttt{JVM} is the name of the Java Virtual Machine currently running the bot; it tries to join the botnet by contacting the controllers in the specified list, in order; if successful, it loads further controller-specific configurations and allocates resources (e.g. shutdown hooks for resource releasing). Errors in this state are considered fatal, so they cause bot termination.
  The pseudocode of this state is shown in Algorithm~\ref{alg:state-init-pseudocode}.

  \item[EXECUTION] the bot polls the controller for the next command; it waits a delay before command execution (if required); it executes the command; it produces the report and sends it to the controller (if required); it waits for the next polling. Errors in this state are considered warnings: they are handled and never cause bot termination.
  The pseudocode of this state is shown in Algorithm~\ref{alg:state-execution-pseudocode}.

  \item[SLEEP] no reports are sent to the controller, nor are attacks executed. The bot polls the controller for the next command, waiting for a \texttt{WAKEUP} (upon which it transitions to state \texttt{EXECUTION}) or a \texttt{KILL} (upon which it transitions to state \texttt{DEAD}). In this state errors are considered warnings: they are handled and never cause bot termination.
  The pseudocode of this state is shown in Algorithm~\ref{alg:state-sleep-pseudocode}.

  \item[DEAD] attacks are unscheduled, resources are released and the bot terminates. In this state all errors are ignored, because none of them can compromise the purpose of the state.
  The pseudocode of this state is shown in Algorithm~\ref{alg:state-dead-pseudocode}.
\end{description}

\bigskip

\begin{algorithm}[H]
\SetAlgoNoLine
\KwData{localMAC, localJVMName, config}

$id \leftarrow$ concat($localMAC$,$localJVMName$) \\
setBotId($id$) \\

$controller \leftarrow$ None \\
$joined \leftarrow$ False \\
$controllerIdx \leftarrow$ 0 \\
$reconnections \leftarrow$ 0 \\
$maxReconnections \leftarrow$ config.RECONNECTIONS \\
$reconnectionWaitInterval \leftarrow$ config.RECONNECTION-WAIT \\

\While{ not $joined$ } {
	$controller \leftarrow$ config.CONTROLLERS[$controllerIdx$] \\
	\eIf{ isValidController($controller$) } {
		loadConfig($controller$) \\
		$CONTROLLER \leftarrow$ $controller$ \\
		$joined \leftarrow$ True \\
	} {
		$reconnections \leftarrow reconnections + 1$ \\
		\eIf{ $reconnections \leq maxReconnections$ } {
			$reconnectionWait \leftarrow$ randomWithin($reconnectionWaitInterval$) \\
			wait($reconnectionWait$)
		} {
			$controllerIdx \leftarrow controllerIdx + 1$ \\
			\If{$controllerIdx \geq len(config.CONTROLLERS)$} {
				\Return{ error }
			}
		}
	}
}

\caption{Pseudocode for state \texttt{INIT}}
\label{alg:state-init-pseudocode}
\end{algorithm}

\bigskip

\begin{algorithm}[H]
\SetAlgoNoLine
\KwData{CONTROLLER}

$timestamp \leftarrow$ getTimestamp() \\
$pollWaitInterval \leftarrow$ CONTROLLER.POLL-WAIT \\
$reportType \leftarrow$ CONTROLLER.REPORT-TYPE \\

$command \leftarrow$ getNextCommand() \\

\If{ $command.timestamp \leq timestamp$ } {
	\Return
}

\If{requiresDelay($command$)} {
	$cmdDelayInterval \leftarrow$ command.DELAY-INTERVAL \\
	$cmdDelay \leftarrow$ randomWithin($cmdDelayInterval$) \\
	wait($cmdDelay$)
}

executeCommand($command$) \\

\If{requiresReport($command$)} {
	$report \leftarrow$ generateReport($reportType$) \\
	sendReportTo(CONTROLLER, $report$)
}

$pollWait \leftarrow$ randomWithin($pollWaitInterval$) \\
wait($pollWait$)

\caption{Pseudocode for state \texttt{EXECUTION}}
\label[alg]{alg:state-execution-pseudocode}
\end{algorithm}

\bigskip

\begin{algorithm}[H]
\SetAlgoNoLine
\KwData{CONTROLLER}

$timestamp \leftarrow$ getTimestamp() \\
$pollWaitInterval \leftarrow$ CONTROLLER.POLL-WAIT \\

$command \leftarrow$ getNextCommand() \\

\If{ $command.timestamp \leq timestamp$ or ($command \neq$ WAKEUP|KILL)} {
	\Return
}

\If{requiresDelay($command$)} {
	$cmdDelayInterval \leftarrow$ command.DELAY-INTERVAL \\
	$cmdDelay \leftarrow$ randomWithin($cmdDelayInterval$) \\
	wait($cmdDelay$)
}

executeCommand($command$) \\

$pollWait \leftarrow$ randomWithin($pollWaitInterval$) \\
wait($pollWait$)

\caption{Pseudocode for state \texttt{SLEEP}}
\label[alg]{alg:state-sleep-pseudocode}
\end{algorithm}

\bigskip

\begin{algorithm}[H]
\SetAlgoNoLine
\KwData{CONTROLLER,WAIT-JOBS}

stopScheduler(WAIT-JOBS) \\
freeScheduler() \\
freeIOResources() \\
executeShutdownHooks() \\
exit()

\caption{Pseudocode for state \texttt{DEAD}}
\label[alg]{alg:state-dead-pseudocode}
\end{algorithm}
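
The four states above are tied together by a simple dispatch loop. The following sketch is purely illustrative: the bot itself is a Java program, the function names below are placeholders standing in for the pseudocode of Algorithms~\ref{alg:state-init-pseudocode}--\ref{alg:state-dead-pseudocode}, and it assumes that a dedicated command switches the bot from \texttt{EXECUTION} to \texttt{SLEEP}, which is not spelled out above.

\begin{verbatim}
def run_bot(config):
    state = "INIT"
    while state != "DEAD":
        if state == "INIT":
            # join the botnet; a fatal error ends the bot (INIT pseudocode)
            state = "EXECUTION" if join_botnet(config) else "DEAD"
        elif state == "EXECUTION":
            command = poll_execute_and_report()   # EXECUTION pseudocode
            if command == "SLEEP":
                state = "SLEEP"
            elif command == "KILL":
                state = "DEAD"
        elif state == "SLEEP":
            command = poll_for_wakeup_or_kill()   # SLEEP pseudocode
            if command == "WAKEUP":
                state = "EXECUTION"
            elif command == "KILL":
                state = "DEAD"
    release_resources_and_exit()                  # DEAD pseudocode
\end{verbatim}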
{ "alphanum_fraction": 0.7201983597, "avg_line_length": 40.0229007634, "ext": "tex", "hexsha": "042716f7f43114b72456c41ab016196fdbfd2a42", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-01-17T05:18:40.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-16T03:52:10.000Z", "max_forks_repo_head_hexsha": "bc61c3a65f2280728b86fd67f734147e9c78f25a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "braineering/botnet", "max_forks_repo_path": "paper/sec/bot.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bc61c3a65f2280728b86fd67f734147e9c78f25a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "braineering/botnet", "max_issues_repo_path": "paper/sec/bot.tex", "max_line_length": 429, "max_stars_count": 6, "max_stars_repo_head_hexsha": "bc61c3a65f2280728b86fd67f734147e9c78f25a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "braineering/botnet", "max_stars_repo_path": "paper/sec/bot.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-06T19:44:19.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:21:31.000Z", "num_tokens": 1475, "size": 5243 }
\section{Common (Networking)}\label{sec:common}

This library handles the communication between the \code{Client} and the \code{Server}.

The \code{RequestMessage} class represents a request made by the client, while the \code{ResponseMessage} class represents a response from the server. Both classes contain a list of entities (Java classes) that hold the data. The \code{RequestMessage} class also contains the auth token used for authenticating the user.

The library serializes all messages as XML\@.

\figref{fig:common} shows the class diagram for the Common library. Only the main classes are shown.

\begin{figure}[htb]
	\includegraphics[width=\textwidth]{module-common}
	\caption{Common class diagram.}\label{fig:common}
\end{figure}
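
As a rough illustration of the message structure described above (this is not the library's actual Java API or XML schema; the element names and fields below are invented for the example), a request can be thought of as an auth token plus a list of entities, serialized to XML:

\begin{verbatim}
import xml.etree.ElementTree as ET

def serialize_request(auth_token, entities):
    # hypothetical XML envelope for a RequestMessage-like object
    root = ET.Element("RequestMessage")
    ET.SubElement(root, "AuthToken").text = auth_token
    container = ET.SubElement(root, "Entities")
    for tag, text in entities:       # each entity as a (tag, text) pair
        ET.SubElement(container, tag).text = text
    return ET.tostring(root, encoding="unicode")

print(serialize_request("secret-token", [("Account", "alice")]))
\end{verbatim}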
{ "alphanum_fraction": 0.7895442359, "avg_line_length": 39.2631578947, "ext": "tex", "hexsha": "ef8e2692ea39edb939f3836517681f1a8e7ab90f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8aacc5b357b2aeb441be6d0c231249e35163ba51", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SpeedJack/lsmsd2", "max_forks_repo_path": "doc/chapters/modules/common.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8aacc5b357b2aeb441be6d0c231249e35163ba51", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SpeedJack/lsmsd2", "max_issues_repo_path": "doc/chapters/modules/common.tex", "max_line_length": 80, "max_stars_count": 2, "max_stars_repo_head_hexsha": "8aacc5b357b2aeb441be6d0c231249e35163ba51", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SpeedJack/lsmsd2", "max_stars_repo_path": "doc/chapters/modules/common.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-16T01:10:32.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-31T23:20:24.000Z", "num_tokens": 183, "size": 746 }
% !TEX root = thesis.tex \documentclass[thesis]{subfiles} \begin{document} \chapter{Inter-Filter Connectivity}\label{deeproots} \begin{chapquote}{Marvin Minsky, \textit{Prologue: A View from 1988, Perceptrons}} ``The marvelous powers of the brain emerge not from any single, uniformly structured connectionist network but from highly evolved arrangements of smaller, specialized networks which are interconnected in very specific ways.'' \end{chapquote} %Learning Filters with Limited Channel Extents %It is well understood that the structure of a neural network is critical to its ability to learn from labelled training data and to generalize. Whilst it has been shown that an infinitely wide hidden layer is a universal approximator~\citep{hornik89a}, in practice wide shallow networks do not learn as well as thinner deeper ones --- as shown by recent research~\citep{Krizhevsky2012,Szegedy2014going,Simonyan2014verydeep,He2015}. %This does not appear to be a limitation of finite capacity, since it has been proposed that at least some \glspl{dnn}\index{DNN} are by representable by shallow networks~\citep{Ba2013dothey}. Rather it seems to be a consequence of limitations in our methods of learning the weights of \glspl{dnn}\index{DNN}. With few exceptions, state-of-the-art \glspl{cnn}\index{CNN} for image recognition are largely monolithic, with each filter operating on the \glspl{featuremap}\index{feature map} of all filters on a previous layer. Interestingly, this is in stark contrast to what we understand of biological neural networks\index{neural network}, where we see ``highly evolved arrangements of smaller, specialized networks which are interconnected in very specific ways''~\citep{minsky1988perceptrons}. Yet it has been shown that a large proportion of the learned weights in \glspl{dnn}\index{DNN} are redundant~\citep{Denil2013predicting}, a property that has been widely exploited to make neural networks\index{neural network} smaller and more computationally efficient~\citep{Szegedy2014going,Denton2014efficient}. %It is unsurprising then that regularization is a critical part of training such networks using large datasets~\citep{Krizhevsky2012}. Without regularization \glspl{dnn}\index{DNN} are susceptible to overfitting. Regularization may be achieved by weight decay\index{weight decay} or dropout~\citep{Hinton2012}. Furthermore A carefully designed sparse network connection structure can have a regularizing effect. \Glspl{cnn}~\citep{Fuk80,Lecun1998} embody this idea, using a sparse convolutional connection structure to exploit the locality of natural image structure. In consequence, they are easier to train. In \cref{lowrankfilters} learning a low-rank spatial basis for filters was found to improve generalization while reducing the computational complexity and model size of a \gls{cnn} with only full rank filters. However, this work addressed only the spatial extents of the convolutional filters (\ie $h$ and $w$ in \cref{fig:normalconv}). In this work we will show that a similar idea can be applied to the channel extents --- \ie filter inter-connectivity --- by using \emph{filter groups\index{filter groups}}~\citep{Krizhevsky2012}. %We show that simple alterations to state-of-the-art \gls{cnn} architectures can drastically reduce computational cost and model size without compromising accuracy. 
% Try to find a creative commons version of this %\begin{figure}[t] %\centering %\includegraphics[width=0.25\columnwidth, page=1]{figs/tree_root.jpg} %\caption{This work introduces the idea of using hierarchical filter group\index{filter groups}ing to create \glsfmttext{cnn} connection structures that resemble tree roots. Applying the technique to existing state-of-the-art networks allows improved efficiency without compromising accuracy.} %\label{fig:tree} %\end{figure} % Maybe we need something like this here too, don't know %It has been shown that reducing the co-adaption of filters is beneficial to generalization in \glspl{dnn}\index{DNN}~\citep{Hinton2012,goodfellow2013maxout,Cogswell2016}. Co-adaption occurs when hidden layer filters (or neurons\index{neuron}) rely on one another to fit training data, such that their outputs become highly correlated. However, instead of using a modified loss, regularization penalty, or randomized network connectivity during training to prevent co-adaption of features, we take a much more direct approach. We use hierarchical filter groups\index{filter groups} (see \cref{regularizingstructure}) to allow the network itself to learn independent filters. By restricting the connectivity between filters on subsequent layers the network is forced to learn filters of limited dependence. We allow the standard network optimization (\eg \gls{sgd}) to learn appropriate weights within this restricted architecture. In this chapter we show that simple alterations to the architecture of state-of-the-art \glspl{cnn}\index{CNN} for image recognition can drastically reduce computational cost and model size while maintaining (or even increasing) accuracy, through a novel structural prior reducing the connectivity in monolithic networks to reflect more closely the sparse, localized filter co-dependencies within a trained network. \section{Related Work}\label{previouswork} Most previous work on reducing the computational complexity of \glspl{cnn}\index{CNN} has focused on approximating convolutional filters in the spatial (as opposed to the channel) domain, either by using low-rank approximations~\citep{mamalet2012simplifying,journals/corr/JaderbergVZ14, conf/cvpr/RigamontiSLF13, journals/corr/LebedevGROL14}, or Fourier transform based convolution~\citep{mathieu2013fast, rippel2015spectral}. More general methods have used reduced precision number representations~\citep{1502.02551v1} or compression of previously trained models~\citep{Chen2015,Kim2016}. Here we explore methods that reduce the computational impact of the large number of filter channels within state-of-the art networks. Specifically, we consider decreasing the number of incoming connections to neurons. \begin{figure}[tbp] \begin{subfigure}[b]{0.95\textwidth} \centering \includegraphics[width=0.66\textwidth, page=1]{Figs/PDF/groupfig} \caption{Convolution}\label{fig:normalconv} \end{subfigure} ~ \begin{subfigure}[b]{0.95\textwidth} \centering \includegraphics[width=0.66\textwidth, page=2]{Figs/PDF/groupfig} \caption{Convolution with filter groups\index{filter groups}}\label{fig:groupedconv} \end{subfigure} \caption[Convolutional filter groups\index{filter groups}]{\textbf{Convolutional Filter groups\index{filter groups}.} (a) Convolutional filters (yellow) typically have the same channel dimension $c_1$ as the input \glspl{featuremap}\index{feature map} (gray) on which they operate. 
However, (b) with filter group\index{filter groups}ing, $g$ independent groups of $c_2/g$ filters operate on a fraction $c_1/g$ of the input \gls{featuremap}\index{feature map} channels, reducing filter dimensions from $h$$\times$$w$$\times$$c_1$ to $h$$\times$$w$$\times$$c_1/g$. This change does not affect the dimensions of the input and output \glspl{featuremap}\index{feature map} but significantly reduces computational complexity and the number of model parameters.}\label{fig:groupconfig} \end{figure} \begin{figure}[tbp] \centering \pgfplotstableread[col sep=comma]{rootdata/alexnetma.csv}\datatable \pgfplotsset{major grid style={dotted,red}} \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.33\textheight, axis x line=bottom, ylabel=Top-5 Val.\ Error, xlabel=Model Parameters (\# floats), axis lines=left, enlarge x limits=0.12, enlarge y limits=0.1, grid=major, %xmin=0, ytick={0.01,0.02,...,0.21}, ymin=0.18,ymax=0.2, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1, }, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=2, \setplotcyclecat{2}, every axis plot/.append style={fill}, ] \addplot+[mark=*,nodes near coords,only marks, point meta=explicit symbolic, % x filter/.code={ % \ifnum\coordindex>2\def\pgfmathresult{}\fi % }, ] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\datatable}; %\addplot[mark=square*,mark options={fill=blue},nodes near coords,only marks, % point meta=explicit symbolic, % x filter/.code={ % \ifnum\coordindex<3\def\pgfmathresult{}\fi % }, %] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\datatable}; %\legend{Standard AlexNet}%, Alternate Filter Grouping} \end{axis} \end{tikzpicture} \caption[AlexNet performance and filter groups\index{filter groups}]{\textbf{AlexNet Performance and Filter Groups.} Model Parameters \vs top-5 error for variants of the AlexNet model on \gls{ilsvrc} image classification dataset. Models with moderate numbers of filter groups\index{filter groups} have far fewer parameters, yet surprisingly maintain comparable error.}\label{fig:alexnetplots} \end{figure} \paragraph{AlexNet Filter groups\index{filter groups}}\label{alexnetfiltergroups} Amongst the seminal contributions made by \citet{Krizhevsky2012}~is the use of `filter groups\index{filter groups|textbf}' in the convolutional layers of a \gls{cnn} (see \cref{fig:groupconfig}). While their use of filter groups\index{filter groups} was necessitated by the practical need to sub-divide the work of training a large network across multiple \gls{gpu}s, the side effects are somewhat surprising. Specifically, the authors observe that independent filter groups\index{filter groups} learn a separation of responsibility (colour features \vs texture features) that is consistent over different random initializations. Also surprising, and not explicitly stated by \citet{Krizhevsky2012}, is the fact that the AlexNet network has approximately 57\% fewer connection weights than the corresponding network without filter groups\index{filter groups}. This is due to the reduction in the input channel dimension of the grouped convolution filters (see \cref{fig:alexnetplots}). Despite the large difference in the number of parameters between the models, both achieve comparable accuracy on \gls{ilsvrc} --- in fact the smaller grouped network gets $\approx1$\% lower top-5 validation error. 
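
The parameter saving is straightforward to make concrete. The short sketch below is purely illustrative (it is not part of any AlexNet implementation, and the layer sizes are arbitrary): it counts the weights of a single convolutional layer with and without filter groups\index{filter groups}, using the $h\times w\times c_1/g$ filter shape of \cref{fig:groupconfig} and ignoring biases.

\begin{verbatim}
def conv_params(h, w, c_in, c_out, groups=1):
    # each of the c_out filters spans only c_in/groups input channels
    assert c_in % groups == 0 and c_out % groups == 0
    return c_out * h * w * (c_in // groups)

full    = conv_params(3, 3, 256, 384, groups=1)  # 884,736 weights
grouped = conv_params(3, 3, 256, 384, groups=2)  # 442,368 weights
print(grouped / full)                            # 0.5: g groups need 1/g of the weights
\end{verbatim}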
This chapter builds upon these findings and extends them to state-of-the-art networks.

\paragraph{Low-dimensional Embeddings}
\citet{Lin2013NiN} proposed a method to reduce the dimensionality of convolutional \glspl{featuremap}\index{feature map}. By using relatively cheap `1$\times$1' convolutional layers (\ie layers comprising $d$ filters of size 1$\times$1$\times$$c$, where $d<c$), they learn to map \glspl{featuremap}\index{feature map} into lower-dimensional spaces, \ie to new \glspl{featuremap}\index{feature map} with fewer channels. Subsequent spatial filters operating on this lower dimensional input space require significantly less computation. This method is used in most state-of-the-art networks for image classification to reduce computation~\citep{Szegedy2014going,He2015}. Our method is complementary.

\paragraph{GoogLeNet}
In contrast to much other work, \citet{Szegedy2014going} propose a \gls{cnn} architecture that is highly optimized for computational efficiency. GoogLeNet uses, as a basic building block, a mixture of low-dimensional embeddings~\citep{Lin2013NiN} and heterogeneously-sized spatial filters --- collectively an \Gls{inception} module. There are two distinct forms of convolutional layers in the \gls{inception}\index{inception} module, low-dimensional embeddings (1$\times$1) and spatial (3$\times$3, 5$\times$5). GoogLeNet keeps large, expensive spatial convolutions (\ie 5$\times$5) to a minimum by using few of these filters, using more 3$\times$3 convolutions, and even more 1$\times$1 filters. The motivation is that most of the convolutional filters respond to localized patterns in a small receptive field, with few requiring a larger receptive field. The number of filters in each successive \gls{inception}\index{inception} module increases slowly with decreasing \gls{featuremap}\index{feature map} size, in order to maintain computational performance. GoogLeNet is by far the most efficient state-of-the-art network for \gls{ilsvrc}, achieving near state-of-the-art accuracy with the lowest computation/model size. However, we will show that even such an efficient and optimized network architecture benefits from our method.

\paragraph{Low-Rank Approximations}
Various authors have suggested approximating learned convolutional filters using tensor decomposition \citep{journals/corr/JaderbergVZ14,journals/corr/LebedevGROL14,Kim2016}. For example, \citet{journals/corr/JaderbergVZ14} propose approximating the convolutional filters in a trained network with representations that are low-rank both in the spatial and the channel domains. This approach significantly decreases computational complexity, albeit at the expense of a small amount of accuracy.
In this chapter we are not approximating an existing model's weights but creating a new network architecture with explicit structural sparsity, which is then trained from scratch.

\paragraph{Learning a Basis for Filters}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.95\textwidth, page=3]{Figs/PDF/sparsification}
\caption[Learning a spatial basis for filters]{\textbf{Learning a spatial basis for filters.} Learning a linear combination of mostly small, heterogeneously-sized spatial filters, as proposed in \cref{lowrankfilters}.
Note that all filters operate on all $c$ channels of the input \gls{featuremap}\index{feature map}.} \label{fig:spatialbasis} \end{figure} Our approach is connected with that presented in \cref{lowrankfilters}, where we showed that replacing 3$\times$3$\times$c filters with linear combinations of filters with smaller spatial extent (\eg 1$\times$3$\times c$, 3$\times$1$\times c$ filters, see \cref{fig:spatialbasis}) could reduce the model size and computational complexity of state-of-the-art \glspl{cnn}\index{CNN}, while maintaining or even increasing accuracy. However, that work did not address the channel extent of the filters. \section{Root Architectures} \label{method} In this section we present the main contribution of our work: the use of novel sparsely-connected architectures resembling tree roots --- to decrease computational complexity and model size compared to state-of-the-art \glspl{dnn}\index{DNN} for image recognition. \paragraph{Learning a Basis for Filter Dependencies} It is unlikely that every filter (or neuron\index{neuron}) in a deep neural network\index{neural network} needs to depend on the output of all the filters in the previous layer. In fact, reducing filter co-dependence in \glspl{dnn}\index{DNN} has been shown to benefit generalization. For example, \citet{Hinton2012} introduced {\em dropout} for regularization of \glspl{dnn}\index{DNN}. When training a network layer with dropout, a random subset of neurons\index{neuron} is excluded from both the forward and backward pass for each mini-batch. Furthermore, \citet{Cogswell2016} observe a correlation between the covariance of hidden unit activations and overfitting. To explicitly reduce the covariance of hidden activations, they train networks with a loss function, based on the covariance matrix of the activations in a hidden layer. Instead of using a modified loss, regularization penalty, or randomized network connectivity during training to prevent co-adaption of features, we take a much more direct approach. We use filter groups\index{filter groups} (see \cref{fig:groupconfig}) to force the network to learn filters with only limited dependence on previous layers. Each of the filters in the filter groups\index{filter groups} is smaller in the channel extent, since it operates on only a subset of the channels of the input \gls{featuremap}\index{feature map}. \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.95\textwidth} \centering \includegraphics[width=\textwidth, page=4]{Figs/PDF/groupfig} \caption{Convolution with $d$ filters of shape $h\times w\times c$.} \label{fig:normalresnet} \end{subfigure}\\ \begin{subfigure}[b]{0.95\textwidth} \includegraphics[width=\textwidth, page=5]{Figs/PDF/groupfig} \caption{Root-2 Module: Convolution with $d$ filters in $g = 2$ filter groups\index{filter groups}, of shape $h\times w\times c/2$.} \label{fig:rootresnet2} \end{subfigure} \begin{subfigure}[b]{0.95\textwidth} \includegraphics[width=\textwidth, page=6]{Figs/PDF/groupfig} \caption{Root-4 Module: Convolution with $d$ filters in $g = 4$ filter groups\index{filter groups}, of shape $h\times w\times c/4$.} \label{fig:rootresnet4} \end{subfigure} \caption[Root modules: learning a channel basis for filters]{\textbf{Root Modules.} Root modules (b), (c) compared to a typical set of convolutional layers (a) found in ResNet and other modern architectures. Grey blocks represent the \glspl{featuremap}\index{feature map} over which a layer's filters operate, while colored blocks represent the filters of each layer. 
} \label{fig:rootmodule}
\end{figure}

This reduced connectivity also reduces computational complexity and model size, since the size of the filters in the filter groups\index{filter groups} is reduced drastically, as is evident in \cref{fig:rootmodule}. Unlike methods for increasing the efficiency of \glspl{dnn}\index{DNN} by approximating pre-trained existing networks (see \cref{previouswork}), our models are trained from random initialization using stochastic gradient descent. This means that our method can also speed up training and, since we are not merely approximating an existing model's weights, the accuracy of the existing model is not an upper bound on accuracy of the modified model.

\paragraph{Root Module}
The basic element of our network architecture, a \emph{root module}, is shown in \cref{fig:rootmodule}. A root module has a given number of filter groups\index{filter groups}: the more filter groups\index{filter groups}, the fewer the connections to the previous layer's outputs. Each spatial convolutional layer is followed by a low-dimensional embedding (1$\times$1 convolution). As in \cref{lowrankfilters}, this configuration learns a linear combination of the basis filters (filter groups\index{filter groups}), implicitly representing a filter of full channel depth, but with limited filter dependence.

\section{Results}

Here we present image classification results obtained by replacing spatial convolutional layers within existing state-of-the-art network architectures with root modules (described in \cref{method}).

\subsection{Improving \Glsfmttext{nin} on \Glsfmttext{cifar10}}

\Gls{nin}~\citep{Lin2013NiN} is a near state-of-the-art network for \gls{cifar10}~\citep{CIFAR10}. It is composed of 3 spatial (5$\times$5, 3$\times$3) convolutional layers with a large number of filters (192), interspersed with pairs of low-dimensional embedding (1$\times$1) layers. As a baseline, we replicated the standard NiN network architecture as described by \citet{Lin2013NiN} but used state-of-the-art training methods. We trained using random 32$\times$32 cropped and mirrored images from 4-pixel zero-padded mean-subtracted images, as used by \citet{goodfellow2013maxout, He2015}. We also used the initialization of \citet{He2015b} and batch normalization\index{batch normalization}~\citep{Ioffe2015}. With this configuration, \gls{zca} whitening was not required to reproduce validation accuracies obtained in~\citep{Lin2013NiN}. We also did not use dropout, having found it to have little effect, presumably due to our use of batch normalization\index{batch normalization}, as suggested by \citet{Ioffe2015}.

\begin{table}[tbp]
\caption[\Glsfmttext{nin} root architectures]{\textbf{\Glsfmttext{nin} Root Architectures}.
Filter groups\index{filter groups} in each convolutional layer.} \label{table:ninconfig} \centering %\resizebox{\textwidth}{!}{ \begin{tabular}{@{}lccccccccc@{}} \toprule Model & \multicolumn{3}{c}{conv1} & \multicolumn{3}{c}{conv2} & \multicolumn{3}{c}{conv3} \\ & \textit{\footnotesize a} & \textit{\footnotesize b} & \textit{\footnotesize c} & \textit{\footnotesize a} & \textit{\footnotesize b} & \textit{\footnotesize c} & \textit{\footnotesize a} & \textit{\footnotesize b} & \textit{\footnotesize c} \\ & \textit{\footnotesize5$\times$5} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize5$\times$5} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize1$\times$1} \\ Orig. & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ \midrule root-2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1\\ root-4 & 1 & 1 & 1 & 4 & 1 & 1 & 2 & 1 & 1\\ root-8 & 1 & 1 & 1 & 8 & 1 & 1 & 4 & 1 & 1\\ root-16 & 1 & 1 & 1 & 16 & 1 & 1 & 8 & 1 & 1\\ \bottomrule \end{tabular} %} \end{table} \begin{table}[tp] \caption[\Glsfmttext{nin} \gls{cifar10} results]{\textbf{\Glsfmttext{nin} \glsfmttext{cifar10} Results}} \label{table:nincifarresults} \centering \pgfplotstableread[col sep=comma]{rootdata/nincifar.csv}\data \pgfplotstableread[col sep=comma]{rootdata/nincifar_root_s.csv}\codata \pgfplotstableread[col sep=comma]{rootdata/nincifar_tree_s.csv}\tdatatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_col_s.csv}\cdatatable \pgfplotstablevertcat{\data}{\codata} \pgfplotstablevertcat{\data}{\tdatatable} \pgfplotstablevertcat{\data}{\cdatatable} %\resizebox{\textwidth}{!}{ \pgfplotstabletypeset[ every head row/.style={before row=\toprule,after row=\midrule}, every last row/.style={after row=\bottomrule}, every row no 0/.style={after row=\midrule}, every nth row={4}{after row=\midrule}, fixed zerofill, % Fill numbers with zeros columns={full name, ma, param, accuracy, cpu, gpu}, columns/full name/.style={ column name=Model, string type }, columns/ma/.style={ column name=\gls{flops} {\small $\times 10^{8}$}, preproc/expr={{##1/1e8}} }, columns/param/.style={ column name=Param. {\small $\times 10^{5}$}, preproc/expr={{##1/1e5}} }, columns/accuracy/.style={ column name=Accuracy, precision=4 }, columns/gpu/.style={ column name=\gls{gpu} (ms), precision=3 }, columns/cpu/.style={ column name=\gls{cpu} (ms), precision=1 }, % column type/.add={@{}lrrrrrr@{}}{}, column type/.add={@{}lcccccc@{}}{}, highlight col max ={\data}{accuracy}, highlight col min ={\data}{param}, highlight col min ={\data}{ma}, col sep=comma]{\data} %} \end{table} To assess the efficacy of our method, we replaced the spatial convolutional layers of the original \gls{nin} network with root modules (as described in \cref{method}). We preserved the original number of filters per layer but subdivided them into groups as shown in \cref{table:ninconfig}. We considered the first of the pair of existing 1$\times$1 layers to be part of our root modules. We did not group filters in the first convolutional layer --- since it operates on the three-channel image space, it is of limited computational impact compared to other layers. 
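
For concreteness, a root module of the kind listed in \cref{table:ninconfig} can be written in a few lines in a modern framework. The sketch below uses PyTorch's grouped convolution purely as an illustration (the experiments in this chapter were implemented in Caffe), and the channel sizes, activation placement and module name are placeholder assumptions rather than the exact NiN configuration.

\begin{verbatim}
import torch.nn as nn

def root_module(c_in, c_out, k, groups):
    # grouped spatial convolution followed by a 1x1 low-dimensional embedding
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=k, padding=k // 2, groups=groups),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=1),  # linear combination of the groups
        nn.ReLU(inplace=True),
    )

# e.g. a 5x5 layer with 192 filters in 4 groups (cf. conv2a of the root-4 variant);
# channel sizes here are placeholders
conv2 = root_module(c_in=192, c_out=192, k=5, groups=4)
\end{verbatim}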
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.95\textwidth} \pgfplotstableread[col sep=comma]{rootdata/nincifar.csv}\datatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_root_s.csv}\rdatatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_tree_s.csv}\tdatatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_col_s.csv}\cdatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} %\tikzstyle{every node}=[font=\footnotesize] \begin{axis}[ width=0.95\textwidth, height=0.66\textwidth, axis x line=bottom, ylabel=Error, xlabel=Model Parameters, axis lines=left, enlarge x limits=0.05, enlarge y limits=0.05, grid=major, %xmin=0, ytick={0.002,0.004,...,1.0}, %ymin=0.075,ymax=0.09, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, yticklabel={\pgfmathparse{\tick*1}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, legend style={at={(1,1.1)}, anchor=south east, column sep=0.2em, font=\small\sffamily\sansmath}, legend columns=4, \setplotcyclecat{4}, every axis plot/.append style={fill}, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, error bars/y dir=both, error bars/y fixed=0.00131497782, ] table[meta=name,x=param,y expr={1 - \thisrow{accuracy} },]{\datatable}; \addplot+[mark=square*, nodes near coords, only marks, every node near coord/.append style={inner sep=4pt}, point meta=explicit symbolic, ] table[meta=name,x=param,y expr={1 - \thisrow{accuracy} },]{\rdatatable}; \addplot+[mark=triangle*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, point meta=explicit symbolic, ] table[meta=name,x=param,y expr={1 - \thisrow{accuracy} },]{\tdatatable}; \addplot+[mark=diamond*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, point meta=explicit symbolic, ] table[meta=name,x=param,y expr={1 - \thisrow{accuracy} },]{\cdatatable}; \legend{NiN, Root, Tree, Column} \end{axis} \end{tikzpicture} %\caption{\textbf{Model Parameters \vs Error.}} %\label{fig:nincifarparamconvonly} \end{subfigure} ~ \begin{subfigure}[b]{0.95\textwidth} \pgfplotstableread[col sep=comma]{rootdata/nincifar.csv}\datatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_root_s.csv}\rdatatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_tree_s.csv}\tdatatable \pgfplotstableread[col sep=comma]{rootdata/nincifar_col_s.csv}\cdatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} %\tikzstyle{every node}=[font=\footnotesize] \begin{axis}[ width=0.95\textwidth, height=0.66\textwidth, axis x line=bottom, ylabel=Error, xlabel=\gls{flops} (Multiply-Add), axis lines=left, enlarge x limits=0.05, enlarge y limits=0.05, grid=major, %xmin=0, ytick={0.002,0.004,...,1.0}, %ymin=0.075,ymax=0.09, xticklabel style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, yticklabel={\pgfmathparse{\tick*1}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, \setplotcyclecat{4}, every axis plot/.append style={fill}, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, error bars/y dir=both, error bars/y fixed=0.00131497782, ] table[meta=name,x=ma,y expr={1 - \thisrow{accuracy} },]{\datatable}; \addplot+[mark=square*, nodes near coords, only marks, every node near coord/.append style={inner sep=4pt}, 
point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={1 - \thisrow{accuracy} },]{\rdatatable}; \addplot+[mark=triangle*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={1 - \thisrow{accuracy} },]{\tdatatable}; \addplot+[mark=diamond*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={1 - \thisrow{accuracy} },]{\cdatatable}; \end{axis} \end{tikzpicture} %\caption{\textbf{\glsfmttext{flops} (Multiply-Add) \vs Error.}} %\label{fig:nincifarmaconvonly} \end{subfigure} \caption[\Glsfmttext{nin} \glsfmttext{cifar10} results]{\textbf{\Glsfmttext{nin} \glsfmttext{cifar10} Results.} Spatial filters (3$\times$3, 5$\times$5) are grouped hierarchically. The best models are closest to the origin. For the standard network, the mean and standard deviation (error bars) are shown over 5 different random initializations. %(left) Parameters \vs Error, (right) \gls{flops} \vs Error. }\label{fig:nincifarplotsconvonly} \end{figure} Results are shown in \cref{table:nincifarresults} and \cref{fig:nincifarplotsconvonly} for various network architectures\footnote{Here (and subsequently unless stated otherwise) timings are per image for a forward pass computed on a large batch. Networks were implemented using Caffe (with \gls{cudnn} v2 and \gls{mkl}) and run on an Nvidia Titan Z \gls{gpu} and 2 10-core Intel Xeon E5-2680 v2 \gls{cpu}s.}. Compared to the baseline architecture, the root variants achieve a significant reduction in computation and model size without a significant reduction in accuracy. For example, the root-8 architecture gives equivalent accuracy with only 46\% of the \gls{flops}, 33\% of the model parameters of the original network, and approximately 37\% and 23\% faster \gls{cpu} and \gls{gpu} timings (see \cref{gpuexplanation} for an explanation of the \gls{gpu} timing disparity). 
\newcommand{\covarlabels}[5]{% \begin{tikzpicture}[anchor=south west] \node [inner sep=0pt] (c) { #5 }; \ifx\covarwidth\undefined \newlength{\covarwidth} \newlength{\covarheight} \fi \settowidth{\covarwidth}{#5} \settoheight{\covarheight}{#5} \path[use as bounding box] (c.south west) rectangle (c.north east); %\node [anchor=south west, xshift=-0.3em, yshift=-0.3em, rotate=45] at (c.north west) {\tiny 0}; \node [anchor=south east, xshift=\covarwidth, yshift=-0.2em] at (c.north west) {\tiny #4}; \node [anchor=south west, xshift=0.25em, yshift=-1.05\covarheight, rotate=90] at (c.north west) {\tiny #2}; \node [anchor=south, xshift=0.2\covarwidth, yshift=-0.2em] at (c.north west) {\scriptsize\texttt{#3}}; \node [anchor=south, xshift=0.2em, yshift=-0.2\covarheight, rotate=90] at (c.north west) {\scriptsize\texttt{#1}}; \end{tikzpicture}% } %\afterpage{ %\begin{landscape} \begin{figure}[p] \centering \begin{subfigure}[b]{0.28\textheight} \centering \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-conv8-corr}} \caption{Standard}\label{fig:normalcovartestfull} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.28\textheight} \centering \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-funnel2-convonly-conv8-corr}} \caption{Root-2 (1 group)}\label{fig:root2corrfull} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.28\textheight} \centering \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-funnel4-convonly-conv8-corr}} \caption{Root-4 (2 groups)}\label{fig:root4corrfull} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.28\textheight} \centering \vspace*{0.6em} \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-funnel8-convonly-conv8-corr}} \caption{Root-8 (4 groups)}\label{fig:root8corrfull} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.28\textheight} \centering \vspace*{0.6em} \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-funnel16-convonly-conv8-corr}} \caption{Root-16 (8 groups)}\label{fig:root16corrfull} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.28\textheight} \centering \vspace*{0.6em} \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/msrc-cifar-nin-4pad-funnel32-convonly-conv8-corr}} \caption{Root-32 (16 groups)}\label{fig:root32corrfull} \end{subfigure} \caption[Inter-layer filter covariance conv2c--conv3a]{\textbf{Inter-layer filter covariance conv2c--conv3a.} The block-diagonal sparsity learned by a root-unit is visible in the correlation of filters on layers \texttt{conv3a} and \texttt{conv2c} in the NiN network as observed on the \glsfmttext{cifar10} training data.}\label{fig:covar} \end{figure} %\end{landscape}} \Cref{fig:covar} shows the inter-layer covariance between the adjacent filter layers \texttt{conv2c} and \texttt{conv3a} in the network architectures outlined in \cref{table:ninconfig} as evaluated on the CIFAR training set. The block-diagonalization enforced by the filter group\index{filter groups} structure (as illustrated in \cref{fig:groupconfig}) is visible, more so with larger number of filter groups\index{filter groups}. 
This shows that the network learns an organization of filters such that the sparsely distributed strong filter relations, visible in \cref{fig:normalcovartestfull} as brighter pixels, are grouped into a denser block-diagonal structure, leaving a visibly darker, low-correlated background.
\subsection{Inter-Layer Covariance}
\label{interlayercovar}
To show the relationships between filters in adjacent convolutional layers, as illustrated in \cref{fig:normalconv}, we calculate the covariance of the responses from two adjacent \glspl{featuremap}, the outputs of convolutional layers with $\gls{c}_1$ and $\gls{c}_2$ filters. Let $\gls{X}_i = [\gls{vectorx}_{i,1}; \gls{vectorx}_{i,2}; \ldots ; \gls{vectorx}_{i,N}]$ be the matrix of $N$ samples $\gls{vectorx}_{i,n}$ from the $c_i$ dimensional \gls{featuremap} for layer $i$. We consider each pixel across the two \glspl{featuremap} to be a sample, and thus each vector $\gls{vectorx}_{i,n}$ is a single pixel filter response of dimension $c_i$. If two \glspl{featuremap} have different spatial dimensions, due to pooling, we up-sample the smaller \gls{featuremap} (with nearest neighbor interpolation) such that there are the same number of pixels (and thus samples) in each \gls{featuremap}. Given two samples $\gls{X}_1, \gls{X}_2$ with zero mean (\ie mean subtracted) for two adjacent \glspl{featuremap}, we calculate the inter-layer covariance,
\begin{align}
\gls{covar}(\gls{X}_1, \gls{X}_2) &= \gls{expected}\left[\gls{X}_1\,\gls{X}_2^\textrm{T}\right],\\
&= \frac{1}{N-1} \gls{X}_1\,\gls{X}_2^\textrm{T}.
\end{align}
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.3\textheight} \centering \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/ninroot32/train/layercovar_conv8}} \caption{Non-whitened responses} \label{fig:notwhitened} \end{subfigure} \hspace{1em} \begin{subfigure}[b]{0.3\textheight} \centering \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=\textwidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv8}} \caption{Whitened responses} \label{fig:whitened} \end{subfigure} \caption[Inter-layer covariance with/without whitened responses]{Covariance between two layers in the root-32 \glsfmttext{nin} model with and without whitened responses.} \label{fig:whitevsnot} \end{figure}
While this shows the covariance between layers, it is conflated with the inherent covariances within $\gls{X}_1$ and $\gls{X}_2$ from the data (as shown in \cref{fig:notwhitened}). We can more clearly show the covariance between layers by first whitening (using \gls{zca}~\citep{CIFAR10}) the samples in $\gls{X}_1$ and $\gls{X}_2$. For a covariance matrix,
\begin{equation}
\gls{covar}(\gls{X}, \gls{X}) = \frac{1}{N-1} \gls{X}\gls{X}^\textrm{T},
\end{equation}
the \gls{zca} whitening transformation is given by,
\begin{equation}
W = \sqrt{N-1}\left(\gls{X}\gls{X}^\textrm{T}\right)^{-\frac{1}{2}}.
\end{equation}
Since the covariance matrix is symmetric, it is easily diagonalizable (\ie \gls{pca}),
\begin{align}
\gls{covar}(\gls{X}, \gls{X}) &= \frac{1}{N-1} PDP^\textrm{T},
\end{align}
where $P$ is an orthogonal matrix and $D$ a diagonal matrix. This diagonalization allows a simplified calculation of the whitening transformation (see the derivation in \citet[Appendix A]{CIFAR10}),
\begin{equation}
W = \sqrt{N-1}PD^{\circ-\frac{1}{2}}P^\textrm{T},
\end{equation}
where $D^{\circ-\frac{1}{2}}$ is simply $D$ with an element-wise power of $-\frac{1}{2}$.
The covariance between the whitened \gls{featuremap} responses is then, \begin{equation} \gls{covar}(W_1 \gls{X}_1, W_2 \gls{X}_2) = E\left[(W_1\gls{X}_1)\,(W_2\gls{X}_2)^\textrm{T}\right]. \end{equation} \afterpage{% \begin{landscape} \begin{figure}[p] \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/train/corrcoef_conv9}} \caption{\Glsfmttext{nin}}\label{fig:corrroot1} \vspace{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/train/corrcoef_conv9}} \caption{Root-4}\label{fig:corrroot4} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv8}} \hfill 
\covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/train/corrcoef_conv9}} \caption{Root-8}\label{fig:corrroot8} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/train/corrcoef_conv9}} \caption{Root-32}\label{fig:corrroot32} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.77\paperheight} \centering \includegraphics[width=0.4\linewidth]{Figs/PDF/colorbar} \end{subfigure} \caption[Intra-layer filter correlation (train)]{\textbf{\Glsfmttext{nin} Intra-Layer Correlation (Train).} Absolute correlation of filters within each layer of a NiN model variant on the training data.} \label{fig:nincorr} \end{figure} \end{landscape} }%afterpage \afterpage{ \begin{landscape} \begin{figure}[p] \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/nin/test/corrcoef_conv9}} \caption{\Glsfmttext{nin}}\label{fig:corrroot1test} \vspace{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv5}} 
\hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot4/test/corrcoef_conv9}} \caption{Root-4}\label{fig:corrroot4test} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot8/test/corrcoef_conv9}} \caption{Root-8}\label{fig:corrroot8test} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.75\paperheight} \centering \vspace{0.6em} \covarlabels{conv1a}{192}{conv1a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv0}} \hfill \covarlabels{conv1b}{160}{conv1b}{160}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv1}} \hfill \covarlabels{conv1c}{96}{conv1c}{96}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv2}} \hfill \covarlabels{conv2a}{192}{conv2a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv4}} \hfill \covarlabels{conv2b}{192}{conv2b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv5}} \hfill \covarlabels{conv2c}{192}{conv2c}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv6}} \hfill \covarlabels{conv3a}{192}{conv3a}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv8}} \hfill \covarlabels{conv3b}{192}{conv3b}{192}{\includegraphics[width=0.11\linewidth]{Figs/PDF/ninroot32/test/corrcoef_conv9}} \caption{Root-32}\label{fig:corrroot32test} \vspace*{0.6em} \end{subfigure}\\ \begin{subfigure}[c]{0.77\paperheight} \centering \includegraphics[width=0.4\linewidth]{Figs/PDF/colorbar} \end{subfigure} \caption[Intra-layer filter correlation (test)]{\textbf{\Glsfmttext{nin} Intra-Layer Correlation (Test).} Absolute correlation of filters within each layer of a NiN model variant on the test data.} \label{fig:nincorrtest} \end{figure} \end{landscape} }%afterpage \afterpage{ \begin{landscape} \begin{figure}[p] \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv2}} \hfill 
\covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/nin/train/layercovarwhite_conv10}} \caption{\Glsfmttext{nin}} \vspace*{0.6em} \label{fig:covaroot1} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot4/train/layercovarwhite_conv10}} \caption{Root-4} \vspace*{0.6em} \label{fig:covarroot4} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot8/train/layercovarwhite_conv10}} \caption{Root-8} \vspace*{0.6em} \label{fig:covarroot8} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv2}} \hfill 
\covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot32/train/layercovarwhite_conv10}} \caption{Root-32} \label{fig:covarroot32} \end{subfigure} \caption[Inter-layer covariance all layers (train)]{\textbf{\Glsfmttext{nin} Inter-layer Covariance (Train).} The inter-layer covariance for all layers in variants of the NiN network} \label{fig:suppcovariances} \end{figure} \end{landscape} }%afterpage \afterpage{ \begin{landscape} \begin{figure}[p] \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/nin/test/layercovarwhite_conv10}} \caption{\Glsfmttext{nin}} \vspace*{0.6em} \label{fig:covaroot1test} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot4/test/layercovarwhite_conv10}} \caption{Root-4} \vspace*{0.6em} \label{fig:covarroot4test} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering 
\covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot8/test/layercovarwhite_conv10}} \caption{Root-8} \vspace*{0.6em} \label{fig:covarroot8test} \end{subfigure} \begin{subfigure}[c]{0.75\paperheight} %\centering \covarlabels{conv1a}{192}{conv1b}{160}{\includegraphics[width=0.1\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv1}} \hfill \covarlabels{conv1b}{160}{conv1c}{96}{\includegraphics[width=0.05\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv2}} \hfill \covarlabels{conv1c}{96}{conv2a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv4}} \hfill \covarlabels{conv2a}{192}{conv2b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv5}} \hfill \covarlabels{conv2b}{192}{conv2c}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv6}} \hfill \covarlabels{conv2c}{192}{conv3a}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv8}} \hfill \covarlabels{conv3a}{192}{conv3b}{192}{\includegraphics[width=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv9}} \hfill \covarlabels{}{192}{}{10}{\includegraphics[height=0.12\linewidth]{Figs/PDF/ninroot32/test/layercovarwhite_conv10}} \caption{Root-32} \label{fig:covarroot32test} \end{subfigure} \caption[Inter-layer covariance all layers (test)]{\textbf{\Glsfmttext{nin} Inter-layer Covariance (Test).} The inter-layer covariance for all layers in variants of the NiN network} \label{fig:suppcovariancestest} \end{figure} \end{landscape} }%afterpage \Cref{fig:nincorr} shows the per-layer (intra-layer) filter correlation. This shows the correlation of filters is more structured in root-networks, filters are learned to be linearly combined into useful filters by the root module, and thus filters are often grouped together with other filters with which they correlate strongly. \Cref{fig:covar} shows the inter-layer filter covariances between layers \texttt{conv3a} and \texttt{conv2c}. \Cref{fig:suppcovariances} shows the full set of inter-layer covariances between all convolutional layers in the NiN models. Block-diagonal sparsity is visible on the layers with filter groups\index{filter groups}, \texttt{conv2a} and \texttt{conv3a}. This block-diagonal is shown for all variants in more detail in \cref{fig:suppcovariances}. 
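For reference, the NumPy sketch below implements the whitened inter-layer covariance of \cref{interlayercovar}. It is an illustration under stated assumptions (each layer's responses flattened to a channels\,$\times$\,samples matrix and mean-subtracted, with a small epsilon guarding near-zero eigenvalues), not the code used to produce the figures above.
\begin{verbatim}
# Illustrative sketch of the whitened inter-layer covariance (not the
# original analysis code). X1, X2: mean-subtracted arrays of shape (c_i, N).
import numpy as np

def zca(X, eps=1e-8):
    # W = sqrt(N-1) (X X^T)^(-1/2), via the eigendecomposition X X^T = P D P^T
    n = X.shape[1]
    d, P = np.linalg.eigh(X @ X.T)
    d = np.maximum(d, eps)                  # guard near-zero eigenvalues
    return np.sqrt(n - 1) * (P * d ** -0.5) @ P.T

def interlayer_covariance(X1, X2, whiten=True):
    n = X1.shape[1]
    if whiten:
        X1, X2 = zca(X1) @ X1, zca(X2) @ X2
    return (X1 @ X2.T) / (n - 1)            # a c_1 x c_2 matrix

# toy example with 192-channel "feature maps" and 10000 pixel samples
rng = np.random.default_rng(0)
A = rng.standard_normal((192, 10000))
B = 0.5 * A[::-1] + rng.standard_normal((192, 10000))
A -= A.mean(axis=1, keepdims=True)
B -= B.mean(axis=1, keepdims=True)
print(interlayer_covariance(A, B).shape)    # (192, 192)
\end{verbatim}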
\subsection{Grouping Degree with Network Depth} \afterpage{ \begin{landscape} \begin{figure}[p] \begin{subfigure}[b]{0.98\linewidth} %\centering \begin{tikzpicture}[ampersand replacement=\&] \begin{scope}[] \matrix[column sep=0em]{ \node (1a) { \raisebox{-0.5\height}{\includegraphics[height=0.07\linewidth, page=15]{groupfig}} };\& \node (1b) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (1c) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (2a) { \raisebox{-0.5\height}{\includegraphics[height=0.08\linewidth, page=16]{groupfig}} };\& \node (2b) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (2c) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (3a) { \raisebox{-0.5\height}{\includegraphics[height=0.08\linewidth, page=16]{groupfig}} };\& \node (3b) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (3c) { \raisebox{-0.5\height}{\includegraphics[height=0.1\linewidth, page=17]{groupfig}} };\& \node (7) { {\large $\cdots$} };\\ \draw node{{\footnotesize \textit{image}}/{\footnotesize \textit{conv1a}}};\& \draw node{\footnotesize \textit{conv1b}};\& \draw node{\footnotesize \textit{conv1c}};\& \draw node{\footnotesize \textit{conv2a}};\& \draw node{\footnotesize \textit{conv2b}};\& \draw node{\footnotesize \textit{conv2c}};\& \draw node{\footnotesize \textit{conv3a}};\& \draw node{\footnotesize \textit{conv3b}};\& \draw node{\footnotesize \textit{conv3c}};\\ }; \end{scope} \end{tikzpicture} \caption{Standard} \label{fig:standardtopology} \end{subfigure}\\ \begin{subfigure}[b]{0.98\linewidth} %\centering \begin{tikzpicture}[ampersand replacement=\&] \begin{scope}[] \matrix[column sep=0em]{ \node (1a) { \includegraphics[height=0.07\linewidth, page=15]{groupfig} };\& \node (1b) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (1c) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (2a) { \includegraphics[height=0.13\linewidth, page=19]{groupfig} };\& \node (2b) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (2c) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (3a) { \includegraphics[height=0.08\linewidth, page=18]{groupfig} };\& \node (3b) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (3c) { \includegraphics[height=0.1\linewidth, page=17]{groupfig} };\& \node (4) { {\large $\cdots$} };\\ }; \draw[decorate,decoration={brace,mirror},](2a.south west) -- node[below=3pt] {\small root-4 module} ++(3.5, 0); \draw[decorate,decoration={brace,mirror},yshift=-2em](3a.south west) + (0, -0.5) -- node[below=3pt] {\small root-2 module} ++(3.5, -0.5); \end{scope} \end{tikzpicture} \caption{Root-4 Architecture} \label{fig:root4topology} \end{subfigure} \caption[\Glsfmttext{nin} standard \vs root architecture]{\textbf{\Glsfmttext{nin} Root Architecture.} The Root-4 architecture as compared to the original architecture for all the convolutional layers. Colored blocks represent the filters of each layer. Here we don't show the intermediate \glspl{featuremap}\index{feature map} over which a layer's filters operate, or the final fully-connected layer, out of space considerations (see \cref{fig:networkinnetwork,fig:rootmodule}). The decreasing degree of grouping in successive root modules means that our network architectures somewhat resemble tree roots, hence the name root. 
}\label{fig:networktopology} \end{figure} \end{landscape} }
An interesting question concerns how the degree of grouping in our root modules should be varied as a function of depth in the network. For the \gls{nin}-like architectures described earlier, we might consider having the degree of grouping: \begin{enumerate*}[label= (\textbf{\roman*})] \item decrease with depth after the first convolutional layer, \eg 1--8--4 (`root'); \item remain constant with depth after the first convolutional layer, \eg 1--4--4 (`column'); or \item increase with depth, \eg 1--4--8 (`tree'). \end{enumerate*}
To determine which approach is best, we created variants of the NiN architecture with different degrees of grouping per layer. Results are shown in \cref{fig:nincifarplotsconvonly}. The results show that the so-called root topology (illustrated in \cref{fig:networktopology}) gives the best performance, providing the smallest reduction in accuracy for a given reduction in model size and computational complexity. Experiments with deeper network architectures delivered similar results, and so in what follows we report results for root topologies. This aligns with the intuition that \glspl{dnn}\index{DNN} for image recognition subsume the deformable parts model. If we assume that filter responses identify parts (or more elemental features), then there should be more filter dependence with depth, as more parts (filter responses) are assembled into complex concepts.
\subsection{Improving Residual Networks on \Glsfmttext{ilsvrc}}
Residual networks (\gls{resnet}\index{ResNet}s)~\citep{He2015} are the state-of-the-art architecture for \gls{ilsvrc}\@. \Glspl{resnet}\index{ResNet} are more computationally efficient than the VGG architecture~\citep{Simonyan2014verydeep} on which they are based, due to the use of low-dimensional embeddings~\citep{Lin2013NiN}. \Glspl{resnet}\index{ResNet} are also more accurate and quicker to converge due to the use of identity mappings.
\subsubsection{ResNet 50}
\label{resnet50results}
As a baseline, we used the \gls{resnet}\index{ResNet} 50 model~\citep{He2015} (the largest residual network model to fit onto 8 \gls{gpu}s with Caffe). \gls{resnet}\index{ResNet} 50 has 50 convolutional layers, of which one-third are spatial convolutions (non-1$\times$1). We did not use any training augmentation aside from random cropping and mirroring. For training, we used the initialization scheme described by~\citet{He2015b}, modified for compound layers as presented in \cref{initialization}, and batch normalization\index{batch normalization}~\citep{Ioffe2015}.
\begin{table}[tp] \caption[ResNet 50 root architectures]{\textbf{ResNet 50.} Filter groups\index{filter groups} in each conv.\ layer.} \label{table:resnet50config} \centering \begin{tabular}{@{}lcccccccccccc@{}} \toprule Model & conv1 & \multicolumn{2}{c}{res2\{a--c\}} & \multicolumn{2}{c}{res3\{a--d\}} & \multicolumn{2}{c}{res4\{a--f\}} & \multicolumn{2}{c}{res5\{a--c\}} \\ & \textit{\footnotesize7$\times$7} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} \\ Orig.
& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \midrule root-2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1 & 1 \\ root-4 & 1 & 1 & 4 & 1 & 2 & 1 & 1 & 1 & 1 \\ root-8 & 1 & 1 & 8 & 1 & 4 & 1 & 2 & 1 & 1 \\ root-16 & 1 & 1 & 16 & 1 & 8 & 1 & 4 & 1 & 2 \\ root-32 & 1 & 1 & 32 & 1 & 16 & 1 & 8 & 1 & 4 \\ root-64 & 1 & 1 & 64 & 1 & 32 & 1 & 16 & 1 & 8 \\ \bottomrule \end{tabular} \end{table} To assess the efficacy of our method, we replaced the spatial convolutional layers of the original network with root modules (as described in \cref{method}). We preserved the original number of filters per layer but subdivided them into groups as shown in \cref{table:resnet50config}. We considered the first of the existing 1$\times$1 layers subsequent to each spatial convolution to be part of our root modules. \begin{table}[tp] \caption[ResNet 50 \Glsfmttext{ilsvrc} Results]{\textbf{ResNet 50 Results.}} \label{table:resnet50imagenetresults} %\resizebox{\textwidth}{!}{ \centering \pgfplotstableread[col sep=comma]{rootdata/resnet50ma.csv}\data %\pgfplotstableread[col sep=comma]{rootdata/resnet50maall.csv}\alldata \pgfplotstableread[col sep=comma]{rootdata/resnet50maconvonly.csv}\codata %\pgfplotstablevertcat{\data}{\alldata} \pgfplotstablevertcat{\data}{\codata} \pgfplotstableset{ create on use/singlegpu/.style={ create col/expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}}, } \pgfplotstableset{ create on use/singlecpu/.style={ create col/expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}}, } \pgfplotstabletypeset[ every head row/.style={ before row=\toprule,after row=\midrule}, every last row/.style={ after row=\bottomrule}, every first row/.style={ after row=\midrule}, fixed zerofill, % Fill numbers with zeros columns={Full Name, Multiply-Acc., Param., Top-1 Acc., Top-5 Acc., singlecpu, singlegpu}, columns/Full Name/.style={ column name=Model, string type }, columns/singlegpu/.style={ column name=GPU (ms), precision=1 }, columns/singlecpu/.style={ column name=\gls{cpu} (ms), precision=0 }, columns/Multiply-Acc./.style={ column name=\gls{flops} {\small $\times 10^{9}$}, preproc/expr={{##1/1e9}} }, columns/Param./.style={ column name=Param. 
{\small $\times 10^{7}$}, preproc/expr={{##1/1e7}} }, columns/Top-1 Acc./.style={precision=3}, columns/Top-5 Acc./.style={precision=3}, highlight col max ={\data}{Top-1 Acc.}, highlight col max ={\data}{Top-5 Acc.}, highlight col min ={\data}{Param.}, highlight col min ={\data}{Multiply-Acc.}, column type/.add={@{}llp{3em}lp{3em}rrrr@{}}{}, %column type/.add={@{}lp{3em}p{3em}p{3em}p{3em}p{3em}p{3em}@{}}{}, col sep=comma]{\data} %} \end{table} \begin{figure}[p] \centering \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50ma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet50maconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \pgfplotstableset{ create on use/singlecpu/.style={ create col/expr={\thisrow{\gls{cpu} Forward} / \thisrow{Batch Size}}}, } \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=Model Parameters (\# Floats), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.07,ymax=0.1, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; \legend{ResNet 50, Roots} \end{axis} \end{tikzpicture} \caption{Model Parameters \vs Top-5 Error} \label{fig:resnet50param} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50ma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet50maconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{flops} (Multiply-Add), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.07,ymax=0.1, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Multiply-Acc.,y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Multiply-Acc.,y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; \end{axis} \end{tikzpicture} \caption{\Glsfmttext{flops} (Multiply-Add) \vs Top-5 Error} \label{fig:resnet50ma} \end{subfigure} ~ 
\begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50ma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet50maconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{gpu} Forward (ms), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.07,ymax=0.1, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} } ]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} }, ]{\codatatable}; %\legend{ResNet 50, All Filters, Spatial Filters} \end{axis} \end{tikzpicture} \caption{\Glsfmttext{gpu} Forward Time \vs Top-5 Error} \label{fig:resnet50gpuforward} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50ma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet50maconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{cpu} Forward (ms), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.07,ymax=0.1, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} }, ]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} }, ]{\codatatable}; %\legend{ResNet 50, All Filters, Spatial Filters} \end{axis} \end{tikzpicture} \caption{CPU Forward Time \vs Top-5 Error} \label{fig:resnet50cpuforward} \end{subfigure} \caption[ResNet-50 \Glsfmttext{ilsvrc} results]{\textbf{ResNet-50 \glsfmttext{ilsvrc} Results.} Models with filter groups\index{filter groups} have fewer parameters, and less floating point operations, while maintaining error comparable to the baseline.} \label{fig:resnet50plots} \end{figure} Results are shown in \cref{table:resnet50imagenetresults} and 
\cref{fig:resnet50plots} for various network architectures. Compared to the baseline architecture, the root variants achieve a significant reduction in computation and model size without a significant reduction in accuracy. For example, the best result (root-16) exceeds the baseline accuracy by 0.2\% while reducing the model size by 27\% and floating-point operations (multiply-add) by 37\%. \gls{cpu} timings were 23\% faster, while \gls{gpu} timings were 13\% faster. With a drop in accuracy of only 0.1\% however, the root-64 model reduces the model size by 40\%, and reduces the floating point operations by 45\%. \gls{cpu} timings were 31\% faster, while \gls{gpu} timings were 12\% faster. \subsubsection{ResNet 200}\label{resnet200results} \begin{table}[tp] \caption[ResNet 200 \Glsfmttext{ilsvrc} results]{\textbf{ResNet-200 \glsfmttext{ilsvrc} Results}} \label{table:resnet200imagenetresults} %\resizebox{\columnwidth}{!}{ \centering \pgfplotstableread[col sep=comma]{rootdata/resnet200.csv}\data \pgfplotstabletypeset[ every head row/.style={ before row=\toprule,after row=\midrule}, every last row/.style={ after row=\bottomrule}, every first row/.style={ after row=\midrule}, columns={full name, ma, param, top1, top5}, columns/full name/.style={ column name=Model, string type }, columns/ma/.style={ column name=\gls{flops}~{\small$\times 10^{12}$}, fixed zerofill, preproc/expr={{##1/1e12}}, }, columns/param/.style={ column name=Param.~{\small$\times 10^{7}$}, fixed zerofill, preproc/expr={{##1/1e7}}, }, columns/top1/.style={ precision=4, column name=Top-1 Err., fixed zerofill, }, columns/top5/.style={ precision=4, column name=Top-5 Err., fixed zerofill, }, column type/.add={@{}lrrrrrr@{}}{}, highlight col min ={\data}{top1}, highlight col min ={\data}{top5}, highlight col min ={\data}{param}, highlight col min ={\data}{ma}, col sep=comma]{\data} %} \end{table} \begin{figure}[tbp] \centering \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/resnet200orig.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet200roots.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \pgfplotstableset{ create on use/singlecpu/.style={ create col/expr={\thisrow{\gls{cpu} Forward} / \thisrow{Batch Size}}}, } \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.22\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=Model Parameters (\# Floats), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, %ytick={0.01,0.02,...,0.2}, ymin=5.6,ymax=6.6, xticklabel style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, yticklabel={\pgfmathparse{\tick*1}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={\thisrow{top5}*100},]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={\thisrow{top5}*100},]{\codatatable}; \legend{ResNet 50, Roots} \end{axis} \end{tikzpicture} \caption{Model Parameters \vs Top-5 Error} \label{fig:resnet200param} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col 
sep=comma]{rootdata/resnet200orig.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/resnet200roots.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.22\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{flops} (Multiply-Add), axis lines=left, enlarge x limits=0.05, %enlarge y limits=0.1, grid=major, %xmin=0, %ytick={0.01,0.02,...,0.2}, ymin=5.6,ymax=6.6, xticklabel style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, yticklabel={\pgfmathparse{\tick*1}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed zerofill, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={\thisrow{top5}*100},]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=name,x=ma,y expr={\thisrow{top5}*100},]{\codatatable}; \end{axis} \end{tikzpicture} \caption{\Glsfmttext{flops} (Multiply-Add) \vs Top-5 Error.} \label{fig:resnet200ma} \end{subfigure} \caption[ResNet-200 \Glsfmttext{ilsvrc} results]{\textbf{ResNet-200 \glsfmttext{ilsvrc} Results.} Models with filter groups\index{filter groups} have fewer parameters, and less floating point operations, while maintaining error comparable to the baseline.} \label{fig:resnet200plots} \end{figure} To show that the method applies to deeper architectures, we also applied our method to \gls{resnet}\index{ResNet} 200, the deepest network for ILSVRC 2012. To provide a baseline we used code implementing full training augmentation to achieve state-of-the-art results\footnote{\url{https://github.com/facebook/fb.resnet.torch}}. \Cref{table:resnet200imagenetresults} and \cref{fig:resnet200plots} show the results, top-1 and top-5 error are for center cropped images. The models trained with roots have comparable or lower error, with fewer parameters and less computation. The root-64 model has 27\% fewer FLOPS and 48\% fewer parameters than \gls{resnet}\index{ResNet} 200. 
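For readers who wish to reproduce a root module in a modern framework, the sketch below uses PyTorch's grouped convolution as a stand-in for the Caffe models trained here. It is an approximation for illustration (layer ordering, normalization and naming are simplified), with dimensions corresponding to a res2-style bottleneck and the 8 filter groups of the root-8 configuration in \cref{table:resnet50config}.
\begin{verbatim}
# Illustrative only (PyTorch stand-in for the Caffe models): a ResNet-style
# root module -- a grouped 3x3 convolution whose filter groups are linearly
# combined by the following 1x1 convolution.
import torch
import torch.nn as nn

def root_module(c_in, c_mid, c_out, groups):
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, kernel_size=3, padding=1,
                  groups=groups, bias=False),   # spatial filters in groups
        nn.BatchNorm2d(c_mid),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_mid, c_out, kernel_size=1, bias=False),  # the "root"
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

# res2-style dimensions with 8 filter groups on the 3x3 layer
block = root_module(64, 64, 256, groups=8)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
\end{verbatim}
As in \cref{resnet50results}, the existing 1$\times$1 expansion that follows each spatial convolution plays the role of the root, mixing the otherwise independent filter groups.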
\subsection{Improving GoogLeNet on \Glsfmttext{ilsvrc}} \label{googlenet50results} \begin{table}[tp] \caption[GoogLeNet \Glsfmttext{ilsvrc} results]{\textbf{GoogLeNet \glsfmttext{ilsvrc} Results.}} \label{table:googlenetimagenetresults} %\resizebox{\columnwidth}{!}{ \centering \pgfplotstableread[col sep=comma]{rootdata/googlenetma.csv}\data \pgfplotstableread[col sep=comma]{rootdata/googlenetmaconvonly.csv}\codata \pgfplotstablevertcat{\data}{\codata} \pgfplotstableset{ create on use/singlegpu/.style={ create col/expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}}, } \pgfplotstableset{ create on use/singlecpu/.style={ create col/expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}}, } \pgfplotstabletypeset[ every head row/.style={ before row=\toprule,after row=\midrule}, every last row/.style={ after row=\bottomrule}, every first row/.style={ after row=\midrule}, fixed zerofill, % Fill numbers with zeros columns={Full Name, Multiply-Acc., Param., Top-1 Acc., Top-5 Acc., singlecpu, singlegpu}, columns/Full Name/.style={ column name=Model, string type }, columns/singlegpu/.style={ column name=\gls{gpu} (ms), precision=2 }, columns/singlecpu/.style={ column name=\gls{cpu} (ms), precision=0 }, columns/Multiply-Acc./.style={ column name=\gls{flops} {\small $\times 10^{9}$}, preproc/expr={{##1/1e9}} }, columns/Param./.style={ column name=Param. {\small $\times 10^{7}$}, preproc/expr={{##1/1e7}} }, columns/Top-1 Acc./.style={precision=3}, columns/Top-5 Acc./.style={precision=3}, highlight col max ={\data}{Top-1 Acc.}, highlight col max ={\data}{Top-5 Acc.}, highlight col min ={\data}{Param.}, highlight col min ={\data}{Multiply-Acc.}, column type/.add={@{}lp{4em}p{4em}rrrr@{}}{}, %column type/.add={@{}lp{3em}p{3em}p{3em}p{3em}p{3em}p{3em}@{}}{}, col sep=comma]{\data} %} \end{table} \begin{table}[tp] \caption[GoogLeNet root architectures]{\textbf{GoogLeNet}. Filter groups\index{filter groups} in each conv.\ layer and \Gls{inception}\index{inception} module (\textit{incp}.)} \label{table:googlenetconfig} \centering \begin{tabular}{@{}lcccccccccccc@{}} \toprule Model & conv1 & \multicolumn{2}{c}{conv2} & \multicolumn{3}{c}{incp.~3\{a,b\}} & \multicolumn{3}{c}{incp.~4\{a--e\}} & \multicolumn{3}{c}{incp.~5\{a,b\}} \\ & \textit{\footnotesize7$\times$7} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize5$\times$5} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize5$\times$5} & \textit{\footnotesize1$\times$1} & \textit{\footnotesize3$\times$3} & \textit{\footnotesize5$\times$5} \\ Orig. & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ \midrule root-2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ root-4 & 1 & 1 & 4 & 1 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1\\ root-8 & 1 & 1 & 8 & 1 & 4 & 4 & 1 & 2 & 2 & 1 & 1 & 1\\ root-16 & 1 & 1 & 16 & 1 & 8 & 8 & 1 & 4 & 4 & 1 & 2 & 2\\ \bottomrule \end{tabular} \end{table} We replicated the network as described by \citet{Szegedy2014going}, with the exception of not using any training augmentation aside from random crops and mirroring, as supported by Caffe~\citep{Jia2014}). To train we used the initialization of~\citep{He2015b} modified for compound layers, as described in \cref{initialization} and batch normalization\index{batch normalization} without the scale and bias~\citep{Ioffe2015}. At test time we only evaluate the center crop image. 
While preserving the original number of filters per layer, we trained networks with various degrees of filter group\index{filter groups}ing, as described in \cref{table:googlenetconfig}. While the \gls{inception}\index{inception} architecture is relatively complex, for simplicity, we always use the same number of groups within each of the groups of different filter sizes, despite them having different cardinality. For all of the networks, we only grouped filters within each of the `spatial' convolutions (3$\times$3, 5$\times$5). \begin{figure}[p] \centering \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/googlenetma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/googlenetmaconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=Model Parameters (\# Floats), axis lines=left, enlarge x limits=0.05, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.095,ymax=0.125, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, \setplotcyclecat{2}, every axis plot/.append style={fill}, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Param.,y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; \legend{GoogLeNet, Roots} \end{axis} \end{tikzpicture} \caption{\textbf{Model Parameters \vs Top-5 Error.}} \label{fig:googlenet50param} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/googlenetma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/googlenetmaconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{flops} (Multiply-Add), axis lines=left, enlarge x limits=0.05, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.095,ymax=0.125, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, \setplotcyclecat{2}, every axis plot/.append style={fill}, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Multiply-Acc.,y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network,x=Multiply-Acc.,y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; \end{axis} \end{tikzpicture} \caption{\textbf{\Glsfmttext{flops} (Multiply-Add) \vs Top-5 Error.}} \label{fig:googlenet50ma} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col 
sep=comma]{rootdata/googlenetma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/googlenetmaconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{gpu} Forward (ms), axis lines=left, enlarge x limits=0.05, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.095,ymax=0.125, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{GPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; %\legend{GoogLeNet, All Filters, Spatial Filters} \end{axis} \end{tikzpicture} \caption{\textbf{\Glsfmttext{gpu} Forward Time \vs Top-5 Error.}} \label{fig:googlenet50gpuforward} \end{subfigure} ~ \begin{subfigure}[b]{\textwidth} \pgfplotstableread[col sep=comma]{rootdata/googlenetma.csv}\gdatatable \pgfplotstableread[col sep=comma]{rootdata/googlenetmaconvonly.csv}\codatatable \pgfplotsset{major grid style={dotted,red}} \centering \begin{tikzpicture} \begin{axis}[ width=0.95\textwidth, height=0.2\textheight, axis x line=bottom, ylabel=Top-5 Error, xlabel=\gls{cpu} Forward (ms), axis lines=left, enlarge x limits=0.05, grid=major, %xmin=0, ytick={0.01,0.02,...,0.2}, ymin=0.095,ymax=0.125, xticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=3 }, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%},style={ /pgf/number format/fixed, /pgf/number format/precision=1 }, \setplotcyclecat{2}, every axis plot/.append style={fill}, legend style={at={(0.98,0.98)}, anchor=north east, column sep=0.5em}, legend columns=3, ] \addplot+[mark=*, %nodes near coords, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} },]{\gdatatable}; \addplot+[mark=square*, nodes near coords, nodes near coords align = {below}, only marks, every node near coord/.append style={inner sep=4pt}, only marks, point meta=explicit symbolic, ] table[meta=Network, x expr={\thisrow{CPU Forward} / \thisrow{Batch Size}}, y expr={1 - \thisrow{Top-5 Acc.} },]{\codatatable}; %\legend{GoogLeNet, All Filters, Spatial Filters} \end{axis} \end{tikzpicture} \caption{\textbf{\Glsfmttext{cpu} Forward Time \vs Top-5 Error.}} \label{fig:googlenet50cpuforward} \end{subfigure} \caption[GoogLeNet \Glsfmttext{ilsvrc} results]{\textbf{GoogLeNet \glsfmttext{ilsvrc} Results.} Models with filter groups\index{filter groups} have fewer parameters, and less floating point operations, while maintaining error comparable to the baseline.} \label{fig:googlenet50plots} \end{figure} As shown in \cref{table:googlenetimagenetresults}, and plotted in \cref{fig:googlenet50plots}, our method shows significant reduction 
in computational complexity --- as measured in \gls{flops} (multiply-adds), \gls{cpu} and \gls{gpu} timings --- and model size, as measured in the number of floating point parameters. For many of the configurations the top-5 accuracy remains within 0.5\% of the baseline model. The highest accuracy result is 0.1\% off the top-5 accuracy of the baseline model, but has a 0.1\% higher top-1 accuracy --- within the error bounds resulting from training with different random initializations. While maintaining the same accuracy, this network has 9\% faster \gls{cpu} and \gls{gpu} timings. However, a model with only 0.3\% lower top-5 accuracy than the baseline has much higher gains in computational efficiency --- 44\% fewer floating point operations (multiply-add), 7\% fewer model parameters, 21\% faster \gls{cpu} and 16\% faster \gls{gpu} timings. While these results may seem modest compared to the results for \gls{resnet}\index{ResNet}, \gls{googlenet}\index{GoogLeNet} is by far the smallest and fastest near state-of-the-art \gls{ilsvrc} model. We believe that more experimentation with different cardinalities of filter group\index{filter groups}ing in the heterogeneously-sized filter groups\index{filter groups} within each \gls{inception}\index{inception} module would improve results further.

\subsection{The Effect on Image-level Filters of Root Modules}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1}
\caption{Standard}
\label{fig:resnet50normalconv0}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1-root4-convonly}
\caption{Root-2}
\label{fig:resnet50root2conv0}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1-root8-convonly}
\caption{Root-4}
\label{fig:resnet50root4conv0}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1-root16-convonly}
\caption{Root-8}
\label{fig:resnet50root8conv0}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1-root32-convonly}
\caption{Root-16}
\label{fig:resnet50root16conv0}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figs/Raster/msrc-resnet-50-conv1-root64-convonly}
\caption{Root-32}
\label{fig:resnet50root32conv0}
\end{subfigure}
\caption[ResNet 50 conv1 filters]{\textbf{ResNet 50 \texttt{conv1} filters.} With filter groups\index{filter groups} directly after \texttt{conv1}, in \texttt{conv2}, some of the organization of the filters can be directly observed, giving us intuition as to what is happening in root networks.}
\label{fig:resnet50conv0}
\end{figure}

In the \gls{resnet}\index{ResNet} root models, filter groups\index{filter groups} are used in \texttt{conv2}, directly after the image-level filters of \texttt{conv1}, so some of the organization of the filters can be observed directly, giving us intuition as to what is happening in root networks. \Cref{fig:resnet50conv0} shows the \texttt{conv1} filters learned for each of the \gls{resnet}\index{ResNet} 50 models. It is apparent that the filters learned in these networks are very similar to those learned in the original model, although sometimes inverted or with a different ordering.
This ordering is somewhat consistent in models with filter groups\index{filter groups} however, even with different random initializations. This is because filter groups\index{filter groups} cause filters with strong mutual information to be grouped adjacent to each other. For example, in the root-8 network (\cref{fig:resnet50root8conv0}), each row of filters corresponds to the input of an independent filter group\index{filter groups} in \texttt{conv2}. We can see that the first row primarily is composed of filters giving various directions of the same color gradient. These filters can be combined in the next layer to produce color edges easily. Due to the shortcut layer and the learned combinations of filters however, not all filter group\index{filter groups}ings are so obvious. \subsection{Layer-wise Compute/Parameter Savings} \afterpage{ \begin{landscape} \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.97\linewidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50-layerma.csv}\datatable \pgfplotstableread[col sep=comma]{rootdata/resnet50-root64-layerma.csv}\datatablea \begin{tikzpicture} \begin{axis}[ axis x line=bottom, axis y line=left, ybar=0pt, bar width=0.3em, width=0.95\linewidth, height=0.4\textheight, enlarge x limits=0.02, ylabel=\gls{flops}, y label style={at={(axis description cs:-0.04,.5)},anchor=south}, y tick label style={ /pgf/number format/.cd, fixed, fixed zerofill, precision=1, /tikz/.cd }, ymin=0, xticklabels from table={\datatable}{layer}, xticklabel style = {rotate = 90, xshift = -0.8ex, anchor = mid east, font=\footnotesize\sffamily\sansmath}, xtick=data, %xmajorticks=false, legend style={at={(0.5,1.2)}, draw=none, anchor=north,legend columns=-1}, area legend, every axis plot/.append style={fill, draw=none, opacity=0.7}, \setplotcyclecat{2}, ] \addplot+ table [x expr=\coordindex,y=MA]{\datatable}; \addplot+ table [x expr=\coordindex,y=MA]{\datatablea}; \legend{ResNet 50, Root-64}; \end{axis} \end{tikzpicture} \end{subfigure} ~ \begin{subfigure}[b]{0.97\linewidth} \pgfplotstableread[col sep=comma]{rootdata/resnet50-layerparam.csv}\datatable \pgfplotstableread[col sep=comma]{rootdata/resnet50-root64-layerparam.csv}\datatablea \begin{tikzpicture} \begin{axis}[ axis x line=bottom, axis y line=left, ybar=0pt, bar width=0.3em, width=0.95\linewidth, height=0.4\textheight, enlarge x limits=0.02, ylabel=Parameters, y label style={at={(axis description cs:-0.04,.5)},anchor=south}, y tick label style={ /pgf/number format/.cd, fixed, fixed zerofill, precision=1, /tikz/.cd }, ymin=0, xticklabels from table={\datatable}{layer}, xticklabel style = {rotate = 90, xshift = -0.8ex, anchor = mid east, font=\footnotesize\sffamily\sansmath}, xtick=data, every axis plot/.append style={fill, draw=none, opacity=0.7}, \setplotcyclecat{2}, ] \addplot+ table [x expr=\coordindex,y=param,]{\datatable}; \addplot+ table [x expr=\coordindex,y=param,]{\datatablea}; \end{axis} \end{tikzpicture} \end{subfigure} \caption[ResNet 50 layer-wise \glsfmttext{flops}/parameters]{\textbf{ResNet 50 Layer-wise \glsfmttext{flops}/Parameters.} } \label{fig:resnet50layerwisema} \end{figure} \end{landscape} } \Cref{fig:resnet50layerwisema} shows the difference in compute and parameters for each layer in a standard \gls{resnet}\index{ResNet} 50 model and a root-64 variant. The layers in the original networks with the highest computational complexity are clearly the spatial convolutional layers, \ie layers with 3$\times$3 spatial filters. 
When a root module is used instead, the computational complexity of these layers is reduced dramatically. While the low-dimensional embedding layers (1$\times$1) are unchanged, these have less than half the compute of the spatial convolution layers. The number of parameters in spatial convolution layers with large numbers of input channels, which increase towards the end of the network, is similarly reduced.

\section{GPU Implementation}\label{gpuexplanation}
Our experiments show that our method can achieve a significant reduction in \gls{cpu} and \gls{gpu} runtimes for state-of-the-art \glspl{cnn}\index{CNN} without compromising accuracy. However, the reductions in \gls{gpu} runtime were smaller than might have been expected based on theoretical predictions of computational complexity (\gls{flops}). We believe this is largely a consequence of the optimization of Caffe for existing network architectures (particularly AlexNet and GoogLeNet) that do not use a high degree of filter group\index{filter groups}ing. Caffe presently parallelizes over filter groups\index{filter groups} by using multiple \gls{cuda} streams to run multiple \gls{cublas} matrix multiplications simultaneously. However, with a large degree of filter group\index{filter groups}ing, and hence more, smaller matrix multiplications, the overhead associated with calling \gls{cublas} from the host can take approximately as long as the matrix computation itself. To avoid this overhead, \gls{cublas} provides batched methods (\eg \texttt{cublasXgemmBatched}), where many small matrix multiplications can be batched together in one call. \citet{Jhurani2015} explore in depth the problem of using \glspl{gpu} to accelerate the multiplication of very small matrices (smaller than 16$\times$16), and show it is possible to achieve high throughput with large batches, by implementing a more efficient interface than that used in the \gls{cublas} batched calls.

We have modified Caffe to use \gls{cublas} batched calls, and achieved significant speedups for our root-like network architectures compared to vanilla Caffe without \gls{cudnn}, \eg a 25\% speed-up on our root-16 modified version of the GoogLeNet architecture. However, our optimized implementation is still not as fast as Caffe with \gls{cudnn} (which was used to generate the results in this chapter), presumably because of other unrelated optimizations in the (proprietary) \gls{cudnn} library. Therefore we suggest that direct integration of \gls{cublas}-style batching into \gls{cudnn}\footnote{Note that in August 2017, approximately a month before the submission of this dissertation, the latest version of \gls{cudnn}, version 7, added support for the acceleration of filter groups.} could improve the performance of filter groups\index{filter groups} significantly.

\section{Discussion}
We explored the effect of using complex hierarchical arrangements of filter groups\index{filter groups} in \glspl{cnn}\index{CNN} and showed that imposing a structured decrease in the degree of filter group\index{filter groups}ing with depth --- a `root' (inverse tree) topology --- can allow us to obtain more efficient variants of state-of-the-art networks without compromising accuracy. Our method appears to be complementary to existing methods, such as low-dimensional embeddings, and can be used to train \glspl{dnn}\index{DNN} more efficiently than methods that only approximate a pre-trained model's weights.
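To make the source of these savings concrete, the following minimal sketch (a hypothetical, standalone Python helper with example layer sizes, not part of the Caffe implementation used for the experiments in this chapter) works through the parameter and multiply-add arithmetic of a single 3$\times$3 convolutional layer with and without filter groups.
\begin{verbatim}
# Hypothetical illustration: cost of one k x k convolution layer,
# with and without filter groups (weights only, bias ignored).
def conv_cost(c_in, c_out, k, h_out, w_out, groups=1):
    assert c_in % groups == 0 and c_out % groups == 0
    params = (c_in // groups) * k * k * c_out
    macs = params * h_out * w_out  # one multiply-add per weight, per output position
    return params, macs

# Example layer: 256 -> 256 channels, 3x3 filters, 14x14 output map.
print(conv_cost(256, 256, 3, 14, 14, groups=1))  # (589824, 115605504)
print(conv_cost(256, 256, 3, 14, 14, groups=8))  # (73728, 14450688)
\end{verbatim}
With $g$ filter groups both counts for the grouped layer shrink by a factor of $g$, while the 1$\times$1 embedding layers are left unchanged, which is exactly the per-layer pattern visible in \cref{fig:resnet50layerwisema}.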
We validated our method by using it to create more efficient variants of the state-of-the-art \gls{nin}, GoogLeNet, and \gls{resnet}\index{ResNet} architectures, which were evaluated on the \gls{cifar10} and \gls{ilsvrc} datasets. Our results show comparable accuracy to the baseline architectures with fewer parameters and much less compute (as measured by \gls{cpu} and \gls{gpu} timings). For \gls{nin} on \gls{cifar10}, our model has 47\% of the parameters of the original network, and approximately 22\% faster \gls{cpu} and \gls{gpu} timings. For \gls{resnet}\index{ResNet} 50, our model has 27\% fewer parameters, and was 24\% (11\%) faster on a \gls{cpu} (\gls{gpu}). For \gls{resnet}\index{ResNet} 200 our model has 27\% fewer \gls{flops} and 48\% fewer parameters. Even for the most efficient of the near state-of-the-art \gls{ilsvrc} networks, GoogLeNet, our model uses 7\% fewer parameters and is 21\% (16\%) faster on a \gls{cpu} (\gls{gpu}).
\end{document}
{ "alphanum_fraction": 0.7073620027, "avg_line_length": 57.1125723304, "ext": "tex", "hexsha": "82d54d2cf55dcef93f8ba242d20fef6a0d47de2a", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-07-03T09:19:46.000Z", "max_forks_repo_forks_event_min_datetime": "2019-02-16T21:52:01.000Z", "max_forks_repo_head_hexsha": "8d21690458f77c0cfefcb6ba528d421a83408b0e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "yanii/phd-thesis", "max_forks_repo_path": "filterbasis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8d21690458f77c0cfefcb6ba528d421a83408b0e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "yanii/phd-thesis", "max_issues_repo_path": "filterbasis.tex", "max_line_length": 1110, "max_stars_count": 4, "max_stars_repo_head_hexsha": "8d21690458f77c0cfefcb6ba528d421a83408b0e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "yanii/phd-thesis", "max_stars_repo_path": "filterbasis.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-08T08:47:40.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-16T21:51:41.000Z", "num_tokens": 35787, "size": 108571 }
\section{FT Data Importer}
\label{sec:FTdataImporter}
This Post-Processor is designed to import an FT (fault tree) as a PointSet in RAVEN. The FT must be specified in a specific format: the OpenPSA format (\href{https://github.com/open-psa}{https://github.com/open-psa}). Details about this post-processor can be found in the RAVEN user manual, in subsection \textbf{PostProcessor} of section \textbf{Models}.

\subsection{FT Importer reference tests}
\begin{itemize}
\item test\_FTimporter\_and\_withNOT\_embedded.xml
\item test\_FTimporter\_and\_withNOT\_withNOT\_embedded.xml
\item test\_FTimporter\_and\_withNOT.xml
\item test\_FTimporter\_and.xml
\item test\_FTimporter\_atleast.xml
\item test\_FTimporter\_cardinality.xml
\item test\_FTimporter\_component.xml
\item test\_FTimporter\_doubleNot.xml
\item test\_FTimporter\_iff.xml
\item test\_FTimporter\_imply.xml
\item test\_FTimporter\_multipleFTs.xml
\item test\_FTimporter\_nand.xml
\item test\_FTimporter\_nor.xml
\item test\_FTimporter\_not.xml
\item test\_FTimporter\_or\_houseEvent.xml
\item test\_FTimporter\_or.xml
\item test\_FTimporter\_xor.xml
\end{itemize}
{ "alphanum_fraction": 0.7967479675, "avg_line_length": 38.1724137931, "ext": "tex", "hexsha": "1b7985a1ba45c2464895a964d603514d22245e2c", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2017-08-29T16:09:13.000Z", "max_forks_repo_forks_event_min_datetime": "2017-08-29T16:09:13.000Z", "max_forks_repo_head_hexsha": "30764491e7ecaa16de2a4e0ddab3bc9e169e5f95", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "sonatsen/raven", "max_forks_repo_path": "plugins/PRAplugin/doc/include/FTdataImporter.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "30764491e7ecaa16de2a4e0ddab3bc9e169e5f95", "max_issues_repo_issues_event_max_datetime": "2018-03-27T13:06:00.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-27T13:06:00.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "sonatsen/raven", "max_issues_repo_path": "plugins/PRAplugin/doc/include/FTdataImporter.tex", "max_line_length": 111, "max_stars_count": 2, "max_stars_repo_head_hexsha": "30764491e7ecaa16de2a4e0ddab3bc9e169e5f95", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "sonatsen/raven", "max_stars_repo_path": "plugins/PRAplugin/doc/include/FTdataImporter.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-08T18:23:57.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-11T15:59:10.000Z", "num_tokens": 350, "size": 1107 }
% !Mode:: "TeX:UTF-8"
% !TEX program = xelatex
\newgeometry{margin=1in}
\title{Assignment 6}
\section{Question 1}
\begin{statebox}{Timescale Invariance}{question-1}
\begin{align*}
S(t_{i+1}) &= S(t_i) + \mu\delta tS(t_i) + \sigma\delta tY_iS(t_i)
\\S(t_{i+1}) &= S(t_i) + \mu\delta t^{1/4}S(t_i) + \sigma\delta tY_iS(t_i)
\end{align*}
Please verify or dispute the timescale invariance of the two models above by numerical experiments.
\end{statebox}
The following code investigates the timescale invariance of the two models by numerical experiment; Figure~\ref{F:1} shows the result of running it with the arguments $S_0=1$, $\mu=0.05$, $\sigma=0.5$.
\lstset{showspaces=false, showtabs=false, tabsize=2, framexleftmargin=5mm, frame=shadowbox, numbers=left, numberstyle=\tiny, breakautoindent=false}
\lstinputlisting[style=Matlab-Pyglike]{code/timescale_invariance_asset_path.m}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/2019-10-30-timescale-invariance.png}
\caption{Timescale Invariance Asset Path}\label{F:1}
\end{figure}
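For reference, the listing below is a minimal Python re-implementation sketch of the same experiment (the MATLAB script above remains the canonical version). It assumes the $Y_i$ are independent standard normal draws; the step sizes and number of paths are arbitrary example values.
\begin{lstlisting}[language=Python]
import numpy as np

# Sketch only: assumes Y_i ~ N(0,1); dt values and path count are arbitrary.
def terminal_values(S0, mu, sigma, T, dt, n_paths, model, rng):
    S = np.full(n_paths, S0, dtype=float)
    drift = mu * dt if model == 1 else mu * dt ** 0.25
    for _ in range(int(round(T / dt))):
        Y = rng.standard_normal(n_paths)
        S = S + drift * S + sigma * dt * Y * S
    return S

rng = np.random.default_rng(0)
for model in (1, 2):
    for dt in (1e-2, 1e-3):
        S_T = terminal_values(1.0, 0.05, 0.5, 1.0, dt, 10000, model, rng)
        print(model, dt, S_T.mean(), S_T.std())
\end{lstlisting}
If a model is timescale invariant, the sample statistics of $S(T)$ should not depend on the choice of $\delta t$.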
{ "alphanum_fraction": 0.7321911632, "avg_line_length": 42.6538461538, "ext": "tex", "hexsha": "30b6d32fa912e69377dfaafc63f43a5b2be7dbb4", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-03-12T23:11:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-02T05:46:01.000Z", "max_forks_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AllenYZB/homework", "max_forks_repo_path": "MA216/sections/6.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:10.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-13T03:04:10.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AllenYZB/homework", "max_issues_repo_path": "MA216/sections/6.tex", "max_line_length": 226, "max_stars_count": 8, "max_stars_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "iydon/homework", "max_stars_repo_path": "MA216/sections/6.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-11T12:14:56.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-20T08:18:54.000Z", "num_tokens": 369, "size": 1109 }
\documentclass[article,oneside]{memoir} \usepackage{amsmath} \usepackage{amssymb} \usepackage{cclicenses} \usepackage{color} \usepackage{euler} \usepackage[colorlinks=true,linkcolor=blue,citecolor=red]{hyperref} \usepackage{float} \usepackage{framed} \usepackage[letterpaper,margin=2.5cm]{geometry} \usepackage{graphicx} \usepackage{indentfirst} \usepackage{lipsum} \usepackage{longtable} \usepackage[final]{microtype} \usepackage{multicol} \usepackage{nameref} \usepackage{pdflscape} \usepackage{pdfpages} \usepackage{tabularx} \usepackage{textcomp} \usepackage{todonotes} \usepackage{libertine} \usepackage{inconsolata} \usepackage{upgreek} %\usepackage{mathptmx} % <-- Times font % PAGE SETUP \setlength{\parindent}{5mm} \nonzeroparskip \raggedbottom \raggedcolumns % CONFIG FOR TOC, BIBLIO, ETC \setcounter{secnumdepth}{5} \setcounter{tocdepth}{2} \renewcommand{\bibname}{References} \renewcommand{\abstractname}{Executive Summary} \renewcommand{\contentsname}{Table of Contents} % HANDY DEFINITIONS \newcommand{\note}[1]{ \begin{framed} #1 \end{framed} } \newcommand{\refdes}[1]{\texttt{#1}} \newcommand{\mr}[1]{\ensuremath{\mathrm{#1}}} \newcommand{\dg}{\ensuremath{{}^\circ}} \newcommand{\schematicpage}[2]{ \begin{framed} Relevant schematic page: \hyperlink{schematic.pdf.#1}{\texttt{#2/#1}} \end{framed} } \newcommand{\greylipsum}[1]{{\color{gray} \lipsum[#1]}} \newcommand{\kOhm}{\ensuremath{\mathrm{k}\Omega}} \newcommand{\Ohm}{\ensuremath{\Omega}} \newcommand{\Pos}[1]{{\ensuremath{+}#1}} \newcommand{\Neg}[1]{{\ensuremath{-}#1}} \newcommand{\uA}{\ensuremath{\upmu\mathrm{A}}} \newcommand{\uF}{\ensuremath{\upmu\mathrm{F}}} \newcommand{\uV}{\ensuremath{\upmu\mathrm{V}}} \title{Design Report} \author{WCP52} \begin{document} \pagestyle{headings} \input{title} %\begin{multicols}{2} \input{ExecSummary} \newpage \tableofcontents* %\listoffigures \newpage \input{ProblemDefinition} \input{DesignDescription} \input{Evaluation} \input{NextSteps} %\end{multicols} %\newpage %\begin{landscape} %\chapter{Electrical parts} %\texttt{ %\input{autogen-bom} %} %\end{landscape} %\newpage %\chapter{Full schematics} %The following pages contain schematics exported directly from the CAD software. %While they are documented, it is intended that readers will first familiarize %themselves with the workings of the circuits by reading through the %\hyperref[chap:too]{Theory of Operation}. %\includepdf[landscape=true,pages=-,link]{schematic.pdf} \newpage \input{biblio} \begin{appendices} \newpage \input{ProjectRequirements} \newpage \input{AppTestProcedures} \newpage \chapter{Computer Code} \begin{verbatim} 10 PRINT "HELLO WORLD" 20 GOTO 10 \end{verbatim} \newpage \chapter{CAD Drawings} \end{appendices} \end{document}
{ "alphanum_fraction": 0.7554426705, "avg_line_length": 24.8288288288, "ext": "tex", "hexsha": "e109997e48983cb0febe8b39e49ea318a0137cd2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-07-27T08:38:22.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-27T08:38:22.000Z", "max_forks_repo_head_hexsha": "1f5af1389d0802bc2b3451cba2bf30b3614d67f9", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "WCP52/docs", "max_forks_repo_path": "official/rept-midterm/source/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1f5af1389d0802bc2b3451cba2bf30b3614d67f9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "WCP52/docs", "max_issues_repo_path": "official/rept-midterm/source/main.tex", "max_line_length": 131, "max_stars_count": 5, "max_stars_repo_head_hexsha": "1f5af1389d0802bc2b3451cba2bf30b3614d67f9", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "WCP52/docs", "max_stars_repo_path": "official/rept-midterm/source/main.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-29T01:59:56.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-24T16:57:33.000Z", "num_tokens": 894, "size": 2756 }
\section{Blue Odem Flame}\label{perk:blueOdemFlame}
\textbf{Cost:} Varies, see ``\nameref{ch:odemPerks}''\\
\textbf{Requirements:} \nameref{perk:odemCurse} I\\
\textbf{Active, Repeatable}\\
You can call forth the power of domination inside of you, producing a blue, flame-like substance from your hands.\\
Any melee weapon attacks you make while this flame is active deal an additional 1 electricity damage. It also deals 1d4 Action Point damage.\\
While the effect is active, each action costs you an additional 1 Stamina per Action Point spent.\\
You can end this effect at any given time.\\
This effect also ends early if you take damage.\\
You can only use the blue flame once every 24 hours.\\
While activating a Blue Odem Flame, you cannot call forth any other Odem Flames.\\
\\
Level Progression:\\
\\
II: You can call it forth once every 12 hours. It also deals an additional 1 electricity damage.\\
III: It doesn't end anymore if you take damage. It also deals an additional 1d4 AP damage.\\
IV: You can call it forth once every 6 hours. It also deals an additional 1 electricity damage.\\
V: It only costs 1 Stamina per attack you use the flame with, not per AP. It also deals an additional 1d4 AP damage.\\
{ "alphanum_fraction": 0.7642209398, "avg_line_length": 52.7391304348, "ext": "tex", "hexsha": "d1d4a89ff376473d942b7e3e0c111dec5852b546", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_forks_repo_path": "perks/odem/blueOdemFlame.tex", "max_issues_count": 155, "max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_issues_repo_path": "perks/odem/blueOdemFlame.tex", "max_line_length": 115, "max_stars_count": 6, "max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_stars_repo_path": "perks/odem/blueOdemFlame.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z", "num_tokens": 328, "size": 1213 }
\chapter{Topics to be aware of}
\label{chp:howto}

\section{Permissions}
\label{chp:howto:sec:permissions}
Because the Android OS is built on top of the Linux kernel, it also inherits the Linux permission approach. By default an application can only access a limited range of system resources, and every access to a resource is mediated by the OS. To get more access than the basic sandbox provides, an application must declare the resources it wants to access in its manifest. The user is asked to grant these permissions the first time the app requests the access; the decision is saved on the device, so the user does not have to grant it again. As in Linux, the permission model is user based, and since every application runs as its own user, every application has its own permissions. This also isolates the resources of applications from one another. In addition, every application has to explicitly define which resources it shares with other applications.

A good rule when requesting permissions is to minimize their number: if an app cannot do more than it should, unexpected situations cannot arise. So if a permission is not required, it should not be requested. Self-defined permissions should also be kept to a minimum, and system-defined ones should be preferred, because a long list of unknown permissions can confuse the user at grant time.

\section{Networking}
\label{chp:howto:sec:networking}
Networking is always risky for the simple reason that the data being transmitted is potentially private user data. Any loss or ``publication'' of this data can harm the user and the trust the user has in the application. The highest priority should always be to keep user data secure at all times. For this reason it is important to use secure connections to any network an app sends data to or receives data from. The key to secure network traffic is to use appropriate protocols for the connections; a trivial example is to use HTTPS instead of HTTP whenever the server supports it.

On a mobile device, any network connection poses an additional security threat, because the device is frequently connected to insecure networks via Wi-Fi. The threat here is, first, that the network itself could be insecure and, second, that it is not known which other users are on the network and whether they have bad intentions. Another point is that some developers tend to use localhost ports for sending data over Inter-Process Communication (IPC). This is not a good approach, as these interfaces are accessible to other applications on the device, so the data could be read by the wrong process. The solution is to use the IPC mechanisms provided by Android.

\section{Input Validation}
\label{chp:howto:sec:inputValidation}
Insufficient input validation is one of the most common security problems [https://developer.android.com/training/articles/security-tips]. All data an app receives over the network, as input from the user, or via IPC is potentially dangerous, even if it was not meant to be harmful in the first place. The most common problems are buffer overflows, use after free, and off-by-one errors.[] This threat can be reduced if the data gets validated. Type-safe languages already tend to reduce the likelihood of input validation issues. Pointers should always be handled very carefully so that they do not point to the wrong address, and buffers should always be managed.
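The sketch below illustrates this kind of validation in a language-agnostic way (Python is used for brevity rather than Java or Kotlin, and the field and its format are made up): anything that does not match the expected format is rejected outright rather than sanitized after the fact.
\begin{verbatim}
import re

# Hypothetical example: a strict whitelist for an expected input format.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

def parse_username(raw):
    if not isinstance(raw, str) or not USERNAME_RE.fullmatch(raw):
        raise ValueError("unexpected username format")
    return raw

parse_username("alice_01")       # accepted
# parse_username("alice'; --")   # rejected with ValueError
\end{verbatim}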
When using queries against an SQL database there is the risk of SQL injection, which should be addressed by using parameterized queries or by limiting permissions to read-only or write-only. Another measure is to use well-formatted data formats and to verify that each input matches the expected format.

\section{Handling User Data}
\label{chp:howto:sec:userData}
Users are becoming more and more aware that their personal data is important and should be handled with care. It is therefore essential not to lose the user's trust, because a user who loses trust in an app will not come back to it. One good approach is to minimize the number of APIs the data is sent to. If sending data is necessary for the functionality of the app, the better option is not to send the plain information but a hash or another non-reversible form of the data. This way the data does not get exposed to third parties for services or advertisement. The same approach is also valid when storing the data on the phone. The data could also leak to other apps through IPC permissions, meaning that one app holds the data and another app, which normally has no permission to access it, obtains it through a connection to the first app. Third-party services or advertisement networks often require personal information from users for reasons that are not obvious. A good rule is not to provide that data if the developer is not sure why the service would need it.

\section{Cryptography}
\label{chp:howto:sec:cryptography}
In addition to the other security features already mentioned, Android provides a wide array of algorithms for protecting data using cryptography, so there is no need to implement your own. A good approach is to always use the highest-level existing implementation that still supports the use case of the app; this way the developer can keep the data as safe as possible, because the available levels differ in how strong they are. When using these algorithms it is also important to initialize them with secure random number generators. If the developer uses an insecure random number generator here, the cryptographic algorithms are weakened significantly.

\section{Credentials}
\label{chp:howto:sec:credentials}
A good approach for handling user credentials is to minimize how often the user is asked for them, because frequent prompts make phishing attacks more likely to succeed. Obviously, the developer should never store user names or passwords on the device itself. Instead, the app should authenticate the user once and then use an authorization token.

\section{Interprocess Communication}
\label{chp:howto:sec:interprocessCommunication}
Interprocess communication occurs when sending data from one app to another, or from process to process within an app. For this, Android provides a mechanism based on intents. Intents are delivered to another app indirectly via the Reference Monitor, which takes care of security. When the sending process or application specifies a permission that is required to receive the intent, the Reference Monitor checks whether the receiver actually holds this permission and, if not, cancels the communication. The permission thus becomes mandatory for receiving this specific intent. Intents can be broadcast, so that every process has the chance to receive them, or sent directly to a single receiver.
If the data that has to be sent is sensitive, the better approach, for obvious reasons, is to send it directly.

\section{Dynamically Loaded Code}
\label{chp:howto:sec:dynamicallyLoadedCode}
Dynamically loaded code is executable code loaded from somewhere else, which is obviously a security risk: the source is not always fully transparent, so the code may be harmful. It is a major vector for code injection and code tampering, which can lead to harmful activity because code of the APK is either added to or manipulated. Dynamically loaded code can also make it impossible to verify the behaviour of an app, which in turn can get the app prohibited in some environments, since its code is more or less unpredictable. One important thing to keep in mind is that dynamically loaded code runs with the same security permissions as the app itself.
{ "alphanum_fraction": 0.8110728409, "avg_line_length": 122.484375, "ext": "tex", "hexsha": "58022c88510dac9623b77c8040791dda04211ff1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e36d5870748467d7b785921f8ce7344177fbf856", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "worldpotato/MobileSecurityHandout", "max_forks_repo_path": "content/03_topicsWithAwareness.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e36d5870748467d7b785921f8ce7344177fbf856", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "worldpotato/MobileSecurityHandout", "max_issues_repo_path": "content/03_topicsWithAwareness.tex", "max_line_length": 455, "max_stars_count": null, "max_stars_repo_head_hexsha": "e36d5870748467d7b785921f8ce7344177fbf856", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "worldpotato/MobileSecurityHandout", "max_stars_repo_path": "content/03_topicsWithAwareness.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1557, "size": 7839 }
\documentclass[DIV13]{scrartcl} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[default,scale=0.95]{opensans} \usepackage[scaled=0.85]{beramono} \usepackage{listings,minted} \usepackage{grid-system} \usepackage{tabularx} \usepackage{booktabs} \usepackage{grid-system} \usepackage{xcolor} \definecolor{linkcolor}{rgb}{0, 0, 0.93} \usepackage[ hidelinks, colorlinks=true, urlcolor=linkcolor ]{hyperref} \setlength{\parindent}{0cm} \lstset{language=[LaTeX]tex,xleftmargin=2em} \title{Grid System} \author{Marcus Bitzl\\ \url{[email protected]}} \renewcommand{\emph}[1]{\textcolor{red!65!black}{#1}} \begin{document} \maketitle \begin{abstract} Grid system is a package that implements grid-like layouts for \LaTeX, as it is commonly known from CSS. You can easily divide your horizontal space into equal parts and assign these to boxes containing your content. \end{abstract} \section{Usage} \subsection{Overview} %There are two methods to divide your row into multiple columns. The first one with uppercase \emph{Cell} and \emph{Row} is easier to use as it collects the content of the cells and calculates everything for you. As a result, it might break on certain contents (e.g. footnotes). For such cases, the second method with lowercase \emph{row} and \emph{cell} will work. These are more capable, but need more configuration. \medskip \subsection{Default use:} \minisec{Example:} \begin{minted}{latex} \begin{Row}% \begin{Cell}{2} This is a long row spanning two thirds of the text width. \end{Cell} \begin{Cell}{1} This is a long row spanning one third of the text width. \end{Cell} \end{Row} \end{minted} \minisec{Output:} \begin{Row}% \begin{Cell}{2} This is a long row spanning two thirds of the text width. This one cannot have footnotes. \end{Cell} \begin{Cell}{1} This is a long row spanning one third of the text width. \end{Cell} \end{Row} \clearpage \subsection{The (old) fallback} In earlier implementations, the default interface from above would not work for many content. As this has changed, this fallback will become deprecated in the next version unless someone finds an important case to keep it. \begin{lstlisting} \begin{row}{<Total number of columns}{<Number of cells>}% \begin{cell}{<Number of columns to span>} ... \end{cell} \begin{cell}{<Number of columns to span>} ... \end{cell} \end{row} \end{lstlisting} \minisec{Example:} \begin{lstlisting} \begin{row}{3}{2}% \begin{cell}{2} ... \end{cell} \begin{cell}{1} ... \end{cell} \end{row} \end{lstlisting} \minisec{Output:} \begin{row}{3}{2}% \begin{cell}{2} This is a long row spanning two thirds of the text width\footnote{Yes, really!}, telling your nothing\footnote{But it has footnotes, yeah!}. \end{cell} \begin{cell}{1} This is a long row spanning one third of the text width. \end{cell} \end{row} \bigskip Each cell is created using a \texttt{minipage} environment. In future versions the will be a switch to choose either minipages or parboxes. \section{Parameters} There are two optional keyword parameters for the environment \texttt{row} right now: \medskip \begin{tabularx}{\linewidth}{llX}\toprule \textbf{Parameter} & \textbf{Default} & \textbf{Description},\\ \midrule cellsep & 1.75em & horizonal space between two cells.\\\bottomrule rowwidth & \linewidth & total horizontal space for the row.\\\bottomrule \end{tabularx} \section{Contribute} You want to contribute? 
Just fork me on Github, start discussions and send me you pull requests: \begin{center} \href{https://github.com/bitzl/latex-grid-system}{\tt bitzl/latex-grid-system} \end{center} \section{License} Copyright 2013 Marcus Bitzl \medskip Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at \medskip \hspace*{1.2em}\href{http://www.apache.org/licenses/LICENSE-2.0}{\texttt{http://www.apache.org/licenses/LICENSE-2.0}} \medskip Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. \section{Example} \begin{row}[cellsep=0.75cm]{3}{3} \begin{cell}{1} \section*{Overview} \vspace{-1.5ex} This shows what you can do with the \texttt{grid-system} package and three columns. \end{cell} \begin{cell}{1} \section*{Why use it?} \vspace{-1.5ex} Sometimes you need to split your text in several independent columns based on equal division of the available space. This package allows you to easily do so. \end{cell} \begin{cell}{1} \section*{Contribute} \vspace{-1.5ex} You want to contribute? Just fork me on Github, start discussions and send me you pull requests: \begin{center} \href{https://github.com/bitzl/latex-grid-system}{\tt bitzl/latex-grid-system} \end{center} \end{cell} \end{row} \bigskip \begin{row}[cellsep=0.75cm]{3}{2} \begin{cell}{2} \section*{License} \vspace{-1.5ex} Copyright 2013 Marcus Bitzl \medskip Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at \medskip \hspace*{1.2em}\href{http://www.apache.org/licenses/LICENSE-2.0}{\texttt{http://www.apache.org/licenses/LICENSE-2.0}} \medskip Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. \end{cell}% \begin{cell}{1} \section*{Why use it?} \vspace{-1.5ex} Sometimes you need to split your text in several independent columns based on equal division of the available space. This package allows you to easily do so. \end{cell} \end{row} \bigskip \begin{row}[cellsep=0.75cm]{3}{2} \begin{cell}{1} \section*{Short} \vspace{-1.5ex} This shows the capabilities of grid-system with three equal columns. \end{cell} \begin{cell}{2} \section*{Long} \vspace{-1.5ex} Sometimes you need to split your text in several independent columns based on equal division of the available space. This package allows you to easily do so. \end{cell} \end{row} \end{document}
{ "alphanum_fraction": 0.7426640037, "avg_line_length": 29.0580357143, "ext": "tex", "hexsha": "0d1cbde8c21bd95ef404eb8fa601ef4f75753a25", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-11-18T13:29:23.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-24T13:05:56.000Z", "max_forks_repo_head_hexsha": "e77e30216a133a91284448002156c7df11c3ead0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "bitzl/latex-grid-system", "max_forks_repo_path": "grid-system.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "e77e30216a133a91284448002156c7df11c3ead0", "max_issues_repo_issues_event_max_datetime": "2016-07-24T14:46:27.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-24T16:44:50.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "bitzl/latex-grid-system", "max_issues_repo_path": "grid-system.tex", "max_line_length": 418, "max_stars_count": 6, "max_stars_repo_head_hexsha": "e77e30216a133a91284448002156c7df11c3ead0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "bitzl/latex-grid-system", "max_stars_repo_path": "grid-system.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-25T03:48:49.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-24T15:11:05.000Z", "num_tokens": 1901, "size": 6509 }
% Created 2016-08-10 Wed 14:22 \documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{fixltx2e} \usepackage{graphicx} \usepackage{longtable} \usepackage{float} \usepackage{wrapfig} \usepackage{rotating} \usepackage[normalem]{ulem} \usepackage{amsmath} \usepackage{textcomp} \usepackage{marvosym} \usepackage{wasysym} \usepackage{amssymb} \usepackage{hyperref} \tolerance=1000 \setcounter{secnumdepth}{3} \author{Luke M Perez} \date{Summer 2016} \title{Outline International Relations} \hypersetup{ pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 24.5.1 (Org mode 8.2.10)}} \begin{document} \maketitle \section{General Concepts} \label{sec-1} \subsection{Levels of Analysis} \label{sec-1-1} \begin{enumerate} \item Individual Level \begin{enumerate} \item Human Behavior \begin{enumerate} \item Classical IR (Carr, Morgenthau, Neibuhr) focused on ``human nature'' as \emph{the} cause of war. Rejected as reductionist by Waltz and structural theorists. \item Structuralism's strength from 1980ff waning in light of evolutionary psychology, GT, constructivism \item Renewed interest of individual levels and their interaction with state and system suggests potential for dynamic models of IR (IPE, systems theory, etc.) ::need citation:: \end{enumerate} \item Human Nature \item Criticisms \begin{enumerate} \item Arguments of human nature (cf. Morgenthau, Neibuhr) are reductionist \item Individuals are not the essential actors in IR \end{enumerate} \end{enumerate} \item State Level \begin{enumerate} \item Domestic politics pushing upward into the system \item Examples included \begin{enumerate} \item Open Economy Politics \item Neoclassical realism \end{enumerate} \end{enumerate} \item System Level \begin{enumerate} \item Anarchy is a material variable, creates incentives and constraints on state behavior \item Criticisms \begin{enumerate} \item Waltz relies on theoretical reductionism, treating the state as a microeconomic firm. \end{enumerate} \end{enumerate} \end{enumerate} \subsection{Agent-Structure Problem} \label{sec-1-2} \begin{enumerate} \item Who influences who, \emph{agents on structure} or \emph{structures on agents}? \item Rationalists emphasize the agents as those who make the system and institutions \begin{enumerate} \item Wagner (2010) suggests the international system is the product of international bargains between states \item Milner (199?) raises the possibility that it could be \emph{rationalism all the way down} such that important concepts, like sovereignty, thought to be firm are much more malleable. \end{enumerate} \item Constructivists stress the constitutive ontology of agents and structures \begin{enumerate} \item Agents and structure emerge together \item Structure shapes agents in ways that are largely imperceptible. \begin{enumerate} \item Wendt (1999) on the culture's of anarchy: Hobbesian, Lockean, Kantian \item Ruggie (1992): Embedded liberalism thesis. Logic of free-market, global capitalism baked into the system by the framers of post-war order. \end{enumerate} \end{enumerate} \end{enumerate} \subsection{Principle-Agent Model} \label{sec-1-3} \subsection{Strategic models} \label{sec-1-4} \subsubsection{Interests vs. Preferences} \label{sec-1-4-1} \begin{enumerate} \item Not identical \begin{enumerate} \item Preferences are \emph{what} individual actors want. \item Interests are \emph{why} they want. 
\end{enumerate} \item Norms, morality, or interest may drive interests (Wagner 2010; Frieden 1999 [Lake and Powell]) \begin{enumerate} \item preferences and the conflict between them are what drive strategy. \item NB: Hobbes on the causes of war: competition, diffidence, glory \emph{vs} Thucydides' fear, pride, interest. \end{enumerate} \item Interests are \emph{shaped} by the system. \begin{enumerate} \item Finnemore argues international politics is about defining, not defending, national interests (1996). \item Constructivism asks why non-like states produce like behavior and suggest the answer lies in the conditioning. \item Waltz's dictum that states evolve toward like units suggests normative processes at play. \end{enumerate} \end{enumerate} \subsection{Institutions} \label{sec-1-5} \subsubsection{Rationalists Definitions} \label{sec-1-5-1} \begin{enumerate} \item International regimes \begin{enumerate} \item Laws of War \item International Organizations \end{enumerate} \item Institutions as human made constraints and economic models \begin{enumerate} \item ``Institutions are the humanly devised constraints that structure political, economic and social interaction. They consist of both informal constraints\ldots{} and formal rules. \ldots{} Together with the standard constraints of economics they define the choice set and therefore determine transaction and production costs and hence the profitability and feasibility of engaging in economic activity'' (North 1991). \end{enumerate} \item Actors (states, non-states) behave in predictable patterns and seek utility maximizing strategies for any given strategy space (Lake and Powell 1999). \end{enumerate} \subsubsection{Normative Definitions} \label{sec-1-5-2} \begin{enumerate} \item The rules and patterns of behavior Keohane (1987). \item Cultures of anarchy and norm dynamics \begin{enumerate} \item Multiple ``cultures'' of enmity, competition, friendship that form a path dependency between any two (or groups) of nations (Wendt 1999) \item Change within and between cultures depends on entrepreneurs who bring about change in state behavior, ultimately changing the path dependency of relationships between actors (Finnemore and Sikkink). \end{enumerate} \item \end{enumerate} \subsection{Cooperation} \label{sec-1-6} \subsubsection{Cooperation \emph{vs} Anarchy} \label{sec-1-6-1} \begin{enumerate} \item Anarchy frustrates cooperation because states are preoccupied with security (Waltz) \begin{enumerate} \item Logic of security dilemma (Jervis 1976) \item The system incentivizes autarky (Mearshimer) \end{enumerate} \item Anarchy \emph{predicts} cooperation because self-help suggests outsourcing what cannot be accomplished internally (Keohane, etc.) \begin{enumerate} \item Anarchy as culture and \emph{the meaning} between two states (Wendt 1992) \end{enumerate} \item Anarchy is called into question because cooperation suggests hierarchy and order and not Hobbesian system. \begin{enumerate} \item Milner and the appearance of order (????) \item Cooperation is a bargaining game (Schelling 1960) and it may be within a state's interest to cooperate. \end{enumerate} \end{enumerate} \subsubsection{Cooperation and state behavior} \label{sec-1-6-2} \begin{enumerate} \item Harmony and Discord require no change in behavior on the part of actors. \item Cooperation is \emph{contingent} change in behavior interdependent on the actions of other partners in the deal. 
\end{enumerate} \subsection{Audience Costs} \label{sec-1-7} \subsubsection{Theory (Fearon 1994)} \label{sec-1-7-1} \subsubsection{Criticisms} \label{sec-1-7-2} \begin{itemize} \item Limited Scope \label{sec-1-7-2-1} \begin{enumerate} \item The relative strength of the ``changed circumstances'' appeal calls into question the scope of conditions when audience cost theory holds \item i.e., if a leader can escape punishment by same ``oh, it was prudent to raise stakes when I said it, but imprudent to carry out the threat'' then we might being to wonder if audience costs has any meaning. \end{enumerate} \item Empirical challenges: \label{sec-1-7-2-2} Snyder and Borghard 2011 find four points of concern: \begin{enumerate} \item Leaders prefer flexibility in crisis and are therefore more likely to prefer ambiguity. \item Domestic public will care more about the substance of the final policy more than whatever perceived consistency \item The public concern with the national honor is largely independent of whatever threats were made. \item Authoritarian regimes interpret the dynamics of audience costs differently than democracies, thereby weakening the strength of audience costs in practice \end{enumerate} \end{itemize} \section{International Political Economy} \label{sec-2} \subsection{OEP} \label{sec-2-1} \subsubsection{Method and approach} \label{sec-2-1-1} \subsubsection{Key findings} \label{sec-2-1-2} \subsubsection{Criticisms} \label{sec-2-1-3} \begin{itemize} \item Oatley 2011. \label{sec-2-1-3-1} Methodological reductionism produces inaccurate knowledge. Most OEP seems to drop the final step (model the system with necessary) by assuming rather than showing that the system does not have an effect. \end{itemize} \section{International Organization} \label{sec-3} \section{Foreign Policy} \label{sec-4} \subsection{History vs Social Science} \label{sec-4-1} \begin{enumerate} \item Three major differences between IR and Diplomatic History (Dip-hist) \begin{enumerate} \item Chronology (history) vs Causal mechanisms (IR) \item Individual events (history) vs Comparative cases (IR) \item Morality: history more comfortable, IR emphasizes facts over values \end{enumerate} \item IR can, and should, draw from history as it builds theories and hypotheses without falling into an inductive-qualitative trap. \end{enumerate} \subsection{Small group dynamics} \label{sec-4-2} \begin{enumerate} \item How do groups perceive another actors behavior (Jervis 1978) \end{enumerate} \section{Practice outlines:} \label{sec-5} \subsection{IR Fall 2015:} \label{sec-5-1} Suppose you are putting together a syllabus for a graduate seminar providing a survey of the field of international relations to Ph.D. students who expect to take comprehensive exams in political science. First, how would you go about organizing your syllabus and why (e.g. according to research questions? theoretical frameworks? Research approach- es? Chronologically? etc.)? Second, what are your goals for what the students should take away from the course? Explain. Third, what are some alternatives to the answers you've given to the first two questions, and why would you not adopt those alternatives? 
\subsection{Answer} \label{sec-5-2} \subsubsection{How would I approach the question:} \label{sec-5-2-1} \begin{enumerate} \item Organize around levels of analysis \begin{enumerate} \item Include discussion of major themes within the levels of analysis \item Focus on problem driven debates within the field \item Read works around questions chronologically so as to strengthen students understanding of theoretical and empirical development to major questions \end{enumerate} \item Why around the levels of analysis? \begin{enumerate} \item Unit of analysis is largely an unspoken methodological hunch on what variable best answers a specific question. \item Is it always purely pragmatic (cf. Fearon and Wendt 1992; Kydd 2008) \item -isms are bad and produce pathologies of analysis (Lake 2011; 2013) \end{enumerate} \item First image \begin{enumerate} \item Political psychology of decision making (Jervis 1976; 1978) \item Johnson (1974) on presidential leadership styles \item Rationalism all the way down (Allison and Zelikow 1999) suggests looking at utility maximizing of state agents \item Preference formation of citizens (Braumoeller 2010, Scheve and Slaughter?) \item Democratic theory and audience costs (Fearon 1995): i.e., are the foreign policy decision makers really bound by the commitments they make (Schelling 1960)? \end{enumerate} \item Second image \begin{enumerate} \item Defining of state interests (Finnemore 1996) \begin{enumerate} \item arguable that this is a systemic question, but our main question is how states respond and then then legitimate their interests \item Frieden (1999): interests are deduced or assumed (when necessary) \item Frieden and Martin (2001) on the domestic-international interaction. I.e., that domestic institutions, electoral design, and other such factors affect interest aggregation \item Relaxing the assumption of the state as a unitary actor (Milner 1998; cf. Kydd 2008) \end{enumerate} \item Open Economy Politics \begin{enumerate} \item Openness as dependent variable; politics as IV (Lake 2009) \item How much room do states really have (Mosely 2000,2005,2007) \begin{enumerate} \item Influence of global finance markets strong, but narrow because fund managers look to industry-wide metrics \item RTB logic is flawed but persists because of ideology: critiques and champions of global capitalism have interest in narrative that the state cannot, or should not, be the legitimate arbiter of value \item In some areas, like FDI, a counter logic of climb to the top results because best-practice transfer, firms as advocates for rights, and domestic interests want protections \end{enumerate} \end{enumerate} \item Domestic causes of war \begin{enumerate} \item Democratic Theory \begin{enumerate} \item Oneal and Russett (1997) on interdemocratic peace and the challenges by Gartzke () and McDonald (2015) \item Bennet and Stam suggest that research design could affect outcome of findings, but some dv/iv relationships, like democracy, are relatively consistent (2000). \end{enumerate} \item Snyder (1991) and the myths of expansionism (echoes perception problem highlighted by Jervis) \item War as commitment problem (Powell 2006) and how states seem to solve it. 
\end{enumerate} \end{enumerate} \item Third image \begin{enumerate} \item Democratic theory \item Interstate bargaining \item Questions of anarchy \begin{enumerate} \item Materialism of neorealism/neoliberalism \begin{enumerate} \item Waltz and the TIP \item Order within anarchy (Morrow) \end{enumerate} \item Constructivism and the agent-structure question \begin{enumerate} \item Wendt (1992, 1999) \item Ruggie (1998?) \end{enumerate} \end{enumerate} \item IPE and the pressures on states \begin{enumerate} \item Mosley (RTB vs CTT) \end{enumerate} \end{enumerate} \end{enumerate} \subsubsection{What are the goals for students to take away} \label{sec-5-2-2} \begin{enumerate} \item Research design: \begin{enumerate} \item Approach methods as tool box \item Model similar to ``med school'' wherein rationalist, qualitative, and empirical do not compete but compliment (Shapario ????). \end{enumerate} \item Exposure to contemporary questions within the field to date \item \end{enumerate} \subsubsection{Alternatives and why not adopt them} \label{sec-5-2-3} \begin{enumerate} \item Debates around paradigms \begin{enumerate} \item flawed approach and generally harmful \item Did signal how much was generally agreed upon (Waever 1998) \item But theory driven approach to field has everyone talking past each other \begin{enumerate} \item Trying to ``prove'' other theories wrong, rather than answering deepest questions about international political phenomena \item Hard to adjudicate which method is appropriate to the question when grand theory is major fault-line of discourse \end{enumerate} \end{enumerate} \item Major problems and mid-level theorizing \begin{enumerate} \item Strongest alternative possible \item But possible to miss the forest for the trees \begin{enumerate} \item e.g. would be democratic peace \item Seeing debates over its empirical status rather than phenomenon that penetrates many other research agendas \item Debate over its causal pathways cuts across levels of analysis, and we can see the various weakness and strengths of different approaches better when grouped by level of analysis. \end{enumerate} \item Other research agendas situate better within discussion of levels of analysis \begin{enumerate} \item IOs, for instance, arguably best within second image and behavior of states vis-a-vis their institutions \item Cooperation, e.g., could cut across both 2nd and 3rd image but the other issues at play are best treated on a per-image analysis. \end{enumerate} \end{enumerate} \end{enumerate} % Emacs 24.5.1 (Org mode 8.2.10) \end{document}
{ "alphanum_fraction": 0.7954616884, "avg_line_length": 35.7505617978, "ext": "tex", "hexsha": "8f8b81cd271528f8802c64e8a4a7b438b125102f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c1284a44ea21d662cd6be72b4ff0487161dd40ec", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lukemperez/IR_Comp_Exam", "max_forks_repo_path": "IR_Study.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c1284a44ea21d662cd6be72b4ff0487161dd40ec", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lukemperez/IR_Comp_Exam", "max_issues_repo_path": "IR_Study.tex", "max_line_length": 85, "max_stars_count": null, "max_stars_repo_head_hexsha": "c1284a44ea21d662cd6be72b4ff0487161dd40ec", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lukemperez/IR_Comp_Exam", "max_stars_repo_path": "IR_Study.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4089, "size": 15909 }
\documentclass[a4paper, 12pt]{book}
\usepackage{pdfpages, keyval, amsfonts}
\title{\bfseries Collected Papers of Morgan Ward}
\author{Masum Billal}
\begin{document}
\maketitle
\frontmatter
\section*{About This Collection}
This book is a collection of papers authored by Morgan Ward. Morgan Ward may not be well known among most students nowadays, but his work has been influential in many ways. He obtained research results in several important and interesting areas, especially recurring series. According to Lehmer \cite{lehmer}, he authored 82 papers. The papers collected here are kept in their original form. However, there are some papers that I could not obtain; by my count, 68 of those 82 papers are collected here. Each chapter contains the papers published in the corresponding year.

\textbf{An important note:} please do not use this book for any commercial purposes. I do not own the copyright to any of the papers. To add the missing papers to this collection, or for any suggestions, feel free to contact me at \texttt{[email protected]}.
\begin{flushright}
Masum Billal\\
\today
\end{flushright}
\tableofcontents
\mainmatter
\chapter{1927}
\includepdf[pages={-}]{1927-01 A generalization of recurrents.pdf}
\includepdf[pages={-}]{1927-02 General Arithmetic.pdf}
\chapter{1928}
\includepdf[pages={-}]{1928-01 Postulates for an Abstract Arithmetic.pdf}
\includepdf[pages={-}]{1928-02 A Simplification of Certain Problems in Arithmetical Division.pdf}
\chapter{1929}
\includepdf[pages={-}]{1929-01 On Certain Functional Relations.pdf}
\includepdf[pages={-}]{1929-02 Certain Expansions Involving Doubly Infinite Series.pdf}
\chapter{1930}
\includepdf[pages={-}]{1930-01 A Certain Class of Polynomials.pdf}
\includepdf[pages={-}]{1930-04 Postulates for the inverse operations in a group.pdf}
\chapter{1931}
\includepdf[pages={-}]{1931-01 The characteristic number of a sequence of integers satisfying a linear recursion relation.pdf}
\includepdf[pages={-}]{1931-02 The Algebra of Recurring Series.pdf}
\includepdf[pages={-}]{1931-03 Some Arithmetical Properties of Sequences Satisfying a Linear Recursion Relation.pdf}
\includepdf[pages={-}]{1931-04 The distribution of residues in a sequence satisfying a linear recursion relation.pdf}
\includepdf[pages={-}]{1931-05 Conditions for the solubility of the Diophantine equation .pdf}
\chapter{1932}
\includepdf[pages={-}]{1932-01 The Linear Form of Numbers Represented by a Homogeneous Polynomial in Any Number of.pdf}
\includepdf[pages={-}]{1932-02 On the behavior of non-static models of the universe when the cosmological term is omitted.pdf}
\chapter{1933}
\includepdf[pages={-}]{1933-01 A Type of Multiplicative Diophantine System.pdf}
\includepdf[pages={-}]{1933-02 A property of recurring series.pdf}
\includepdf[pages={-}]{1933-03 The cancellation law in the theory of congruences to a double modulus.pdf}
\includepdf[pages={-}]{1933-04 The arithmetical theory of linear recurring series.pdf}
\includepdf[pages={-}]{1933-05 A Certain Class of Trigonometric Integrals.pdf}
\chapter{1934}
\includepdf[pages={-}]{1934-01 The Representation of Stirling's Numbers and Stirling's Polynomials as Sums of Factorials.pdf}
\includepdf[pages={-}]{1934-02 On the Vanishing of the Sum of the Nth Powers of the Roots of a Cubic Equation.pdf}
\includepdf[pages={-}]{1934-03 Note on the period of a mark in a finite field.pdf}
\includepdf[pages={-}]{1934-04 Note on an arithmetical property of recurring series.pdf}
\includepdf[pages={-}]{1934-05 An arithmetical property of recurring series
of the second order.pdf} \includepdf[pages={-}]{1934-06 Note on the iteration of functions of one variable.pdf} \includepdf[pages={-}]{1934-07 The Numerical Evaluation of a Class of Trigonometric Series.pdf} \chapter{1935} \includepdf[pages={-}]{1935-01 An enumerative problem in the arithmetic of linear recurring series.pdf} \includepdf[pages={-}]{1935-02 A Determination of all Possible Systems of Strict Implication.pdf} \includepdf[pages={-}]{1935-03 Conditions for Factorization in a Set Closed Under a Single Operation.pdf} \includepdf[pages={-}]{1935-04 On the Factorization of Polynomials to a Prime Modulus.pdf} \includepdf[pages={-}]{1935-05 The diophantine equation.pdf} \chapter{1936} \includepdf[pages={-}]{1936-01 A Calculus of Sequences.pdf} \includepdf[pages={-}]{1936-02 The continuous iteration of real functions.pdf} \includepdf[pages={-}]{1936-04 Note on divisibility sequences.pdf} \chapter{1937} \includepdf[pages={-}]{1937-01 Linear divisibility sequences.pdf} \includepdf[pages={-}]{1937-02 Arithmetic Functions on Rings.pdf} \includepdf[pages={-}]{1937-03 Some Arithmetical Applications of Residuation.pdf} \chapter{1938} \includepdf[pages={-}]{1938-01 Arithmetical Properties of Sequences in Rings.pdf} \includepdf[pages={-}]{1938-02 Residuated Lattices.pdf} \includepdf[pages={-}]{1938-03 The Law of Apparition of Primes in a Lucasian Sequence.pdf} \includepdf[pages={-}]{1938-04 Structure Residuation.pdf} \chapter{1939} \includepdf[pages={-}]{1939-01 A note on divisibility sequences.pdf} \includepdf[pages={-}]{1939-02 Evaluations Over Residuated Structures.pdf} \includepdf[pages={-}]{1939-03 Residuated Lattices.pdf} \includepdf[pages={-}]{1939-05 Ring Homomorphisms which are Also Lattice Homomorphisms.pdf} \includepdf[pages={-}]{1939-06 A characterization of Dedekind structures.pdf} \includepdf[pages={-}]{1939-07 Note on the General Rational Solution of the Equation ax2 - by2 = z3.pdf} \includepdf[pages={-}]{1939-08 The Lattice Theory of Ova.pdf} \includepdf[pages={-}]{1939-09 A Characterization of Boolean Algebras.pdf} \chapter{1942} \includepdf[pages={-}]{1942-01 The Closure Operators of a Lattice.pdf} \chapter{1945} \includepdf[pages={-}]{1945-01 Eulers three biquadrate problem.pdf} \chapter{1948} \includepdf[pages={-}]{1948-01 Memoir on Elliptic Divisibility Sequences.pdf} \chapter{1949} \includepdf[pages={-}]{1949-01 A Generalized Integral Test for Convergence of Series.pdf} \includepdf[pages={-}]{1949-02 Note on a paper by C E Rickart.pdf} \chapter{1950} \includepdf[pages={-}]{1950-01 Arithmetical Properties of the Elliptic Polynomials Arising from the Real Multiplication of the Jacobi Functions.pdf} \includepdf[pages={-}]{1950-02 Arithmetical properties of polynomials associated with the lemniscate elliptic functions.pdf} \chapter{1951} \includepdf[pages={-}]{1951-01 A class of soluble Diophantine equations.pdf} \chapter{1954} \includepdf[pages={-}]{1954-02 The maximal prime divisors of linear recurrences.pdf} \includepdf[pages={-}]{1954-03 Cyclotomy and the Converse of Fermat's Theorem.pdf} \chapter{1955} \includepdf[pages={-}]{1955-01 The Intrinsic Divisors of Lehmer Numbers.pdf} \includepdf[pages={-}]{1955-02 On the Number of Vanishing Terms in an Integral Cubic Recurrence.pdf} \includepdf[pages={-}]{1955-03 The laws of apparition and repetition of primes in a cubic recurrence.pdf} \includepdf[pages={-}]{1955-04 The mappings of the positive integers into themselves which preserve division.pdf} \chapter{1959} \includepdf[pages={-}]{1959-01 Testsfor primality based on 
Sylvester's cyclotomic numbers.pdf} \chapter{1960} \includepdf[pages={-}]{1960-02 The Calculation of the Complete Elliptic Integral of the Third Kind.pdf} \chapter{1961} \includepdf[pages={-}]{1961-01 The prime divisors of Fibonacci numbers.pdf} \chapter{1962} \includepdf[pages={-}]{1962-01 The linear p-adic recurrence of order two.pdf} \backmatter \begin{thebibliography}{99} \bibitem{lehmer} Lehmer, D. (1993). \textit{The Mathematical Work of Morgan Ward}. Mathematics of Computation, 61(203), 307-311. \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7630325653, "avg_line_length": 65.2857142857, "ext": "tex", "hexsha": "d1a4a487b29e30e79c578d0cff3b6a79fee0bf10", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "076aef4587db77ae47f12c57b7941f6773800974", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "fifaboy/ward-collection", "max_forks_repo_path": "ward-collection.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "076aef4587db77ae47f12c57b7941f6773800974", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "fifaboy/ward-collection", "max_issues_repo_path": "ward-collection.tex", "max_line_length": 575, "max_stars_count": null, "max_stars_repo_head_hexsha": "076aef4587db77ae47f12c57b7941f6773800974", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "fifaboy/ward-collection", "max_stars_repo_path": "ward-collection.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2324, "size": 7769 }
\chapter{The Greek dialects in confrontation}\label{chap:8} “It is common knowledge that there are nowhere better-known and more distinct dialects than in the Greek language”.\footnote{\citet[23]{Wesley1736}: “In propatulo est quod nullibi notiores aut distinctiores sint dialecti quam in lingua Graeca”.} This is how the English clergyman and poet Samuel Wesley (1662–1735) introduced his concern that it was difficult to formulate rules of dialectal change. Wesley did so when discussing the style and language of the Old Testament Book of Job, which he regarded as a kind of \ili{Hebrew} that had features of related dialects. By dialects he mainly meant \ili{Arabic} and \ili{Syriac}. He attempted to discover a certain regularity in Oriental variation and referred in this context to the ancient Greek dialects. In fact, Wesley assumed that the letter mutations among the Greek dialects could be transposed to the Oriental context without any problem. This implies a presupposition on Wesley’s part that both linguistic contexts were comparable, which also emerges from his explicit connecting of specific Greek dialects to individual Oriental tongues. Indeed, \citet[24]{Wesley1736} attributed similar linguistic properties to \ili{Doric} Greek and \ili{Syriac}, on the one hand, and to \ili{Attic} Greek and \ili{Arabic}, on the other. Samuel Wesley was not alone in comparing ancient Greek dialectal diversity to other contexts of dialectal or dialect-like variation. Indeed, it was common early modern practice to assert that the Greek dialects were either comparable with, or clearly different from, diversity within other languages or language families, especially the Western European vernaculars, \ili{Latin}, and the close-knit group of the so-called Oriental tongues, now known as the \ili{Semitic} \isi{language family}. What arguments did early modern scholars invoke when claiming comparability or lack thereof? And how do their views relate to the intellectual and linguistic context in which they operated? It is these two major questions I want to address in the final main chapter of this book. \section{The vernaculars of Western Europe and the Greek reflex}\label{sec:8.1} It comes as no surprise that scholars from early modern Western Europe compared the ancient Greek dialects most frequently to their native vernaculars. The confrontation with Greek triggered a reflex among Western European scholars to relate Greek variation to the regional diversity which they encountered in their mother tongues. It is, however, remarkable that they did so in various ways and for various purposes. What were their most significant incentives to emphasize or dismiss the comparability of ancient Greek with \isi{vernacular} dialects? \subsection{Explanation: The Greek dialects in need of clarification}\label{sec:8.1.1}\largerpage When Greek studies started to develop on the Italian peninsula from the end of the Trecento onward, Renaissance Hellenists were initially compelled to focus primarily on one principal form of the language, consisting in the \ili{Koine} interspersed with some occasional features typical of \ili{Attic} and \ili{Ionic}. Toward the end of the Quattrocento, however, Hellenists developed an ever-growing interest in the Greek dialects per se and their individual features (see also Chapter 1, \sectref{sec:1.2}). 
In this process, the dialects obtained a more clearly defined position in the teaching of the Greek language, being usually reserved for more advanced students, often in connection with the study of poetry and its dialectally diversified genres. Grammarians soon realized that if they wanted to efficiently explain the nature of the ancient Greek dialects to their students, they needed to appeal to a situation more familiar to their audience, in particular the regional diversity in their native \isi{vernacular} tongue. As Greek studies boomed first in the states of northern Italy, it is not hard to see why \isi{vernacular} dialects were first invoked by Italian grammarians to explain the existence of different forms of \ili{Ancient Greek}. For instance, in his updated commentary on Guarino’s abridgement of Manuel Chrysoloras’s Greek grammar, published in Ferrara in 1509, the professor of Greek Ludovico Da Ponte noted that there were five principal tongues among the Greeks: the \ili{Koine}, \ili{Doric}, \ili{Aeolic}, \ili{Ionic}, and \ili{Attic}, the most pre-eminent among them. \Citet[20\textsc{\textsuperscript{v}}–21\textsc{\textsuperscript{r}}, 46\textsc{\textsuperscript{v}}–47\textsc{\textsuperscript{r}}]{Da1509} compared these dialects at two different occasions to the varieties of \ili{Italian} spoken by the \il{Venetian}Venetians, the \il{Italo-Romance!Bergamasque}Bergamasques, the \il{Florentine}Florentines, etc. (on Da Ponte, see also Chapter 2, \sectref{sec:2.6}). Originally from the city of Belluno in the Veneto region, he drew a comparison between his native \ili{Venetian} and elegant \ili{Attic} speech, even claiming that \ili{Venetian} was “the most beautiful and learned speech of all, scented with the entire majesty of the Greek language”.\footnote{\Citet[47\textsc{\textsuperscript{r}}]{Da1509}: “pulcherrimus et doctissimus omnium sermo, in quo redolet tota linguae Graecae maiestas”.} Such explanatory comparisons, in this case with a distinctly patriotic touch, occurred very frequently from the early sixteenth century onward, usually in a didactic context. The procedure was quickly picked up by grammarians outside of the Renaissance heartland of Italy. It happened particularly early in Philipp Melanchthon’s successful Greek grammar, first published in 1518, in which the \isi{Protestant} Hellenist assumed the existence of a certain south-western \ili{High German} \isi{common language} in Bavaria and Swabia. Melanchthon might have been thinking of the \il{German!Southern}southern German print language, one of the three regional print languages\linebreak emerging after 1500 (see \citealt{Mattheier2003}: 216), or some other form of a regional koine. The reference to his native \ili{German} context served to explain the status of the Greek \ili{Koine} to his readership of prospective Hellenists (\citealt{Melanchthon1518}: a.i\textsc{\textsuperscript{v}}). The first Greek grammar composed by a Spanish scholar, Francisco de Vergara, adopted the same technique; a brief description of native regional varieties was offered to help the Spanish reader understand ancient Greek diversity (\citealt{Vergara1537}: 209–210). Revealing in this context is the 1561 edition of the Greek grammar composed by the German pedagogue Michael Neander (1525–1595), who silently copied the bulk of Vergara’s discussion of the Greek dialects. 
In doing so, however, \citet[340--343]{Neander1561} left out the reference to \ili{Spanish} variation, as this would not have been very helpful to a reader with a \ili{German} background.\footnote{The first edition of Neander’s work (i.e. \citealt{Neander1553}) did not yet contain the passage in question.} The explanatory use of \ili{German} dialects in Greek handbooks occurred extremely frequently.\footnote{See e.g. \citet[3--4]{Schmidt1604}; \citet[83]{Rhenius1626}; \citet[\textsc{b.4}\textsc{\textsuperscript{r}}]{Schorling1678}; \citet[\textsc{b.2}\textsc{\textsuperscript{v}}]{KirchmaierCrusius1684}; \citet[376]{Kober1701}; \citet[\textsc{c.2}\textsc{\textsuperscript{v}}]{Thryllitsch1709}; \citet[b.2\textsc{\textsuperscript{v}}\textsc{–}b.3\textsc{\textsuperscript{r}}]{Nibbe1725}; \citet[141]{Georgi1733}; \citet[13]{Schuster1737}; \citet[207--209]{Simonis1752}; \citet[191--192]{Peternader1776}; \citet[\textsc{xxvi}]{Harles1778}.} It is summed up neatly by the renowned Saxon lexicographer of \ili{Latin} Immanuel Johann Gerhard Scheller (1735–1803), who, though not a grammarian of Greek, briefly discussed the Greek dialects in his reflections on the properties of the \il{German!Schriftsprache@\textit{Schriftsprache}}German \textit{Schriftsprache}. In this context, Scheller remarked: \begin{quote} I want to adduce only a few examples that demonstrate the similarity of the \ili{German} and Greek dialects, so that in this manner a young person, if he knows it in \ili{German}, will not be so astonished at it in Greek.\footnote{\citet[229]{Scheller1772}: “Ich will nur wenige Beyspiele anführen, die die Aehnlichkeit der deut\-schen und griechischen Dialecte beweisen: daß also ein junger Mensch, wenn er es im Deut\-schen wüste, im Griechischen nicht sich so verwundern würde”.} \end{quote} \is{standard (language)|(} The intensive Greek–\ili{German} comparison seems to be related to two main historical circumstances: the continuous early modern interest in the history, language, and literature of ancient Greece in \ili{German}-speaking areas and the flourishing of regional dialects there, which from the end of the seventeenth century onward received monograph-length studies, with a focus on lexical particularities (see \citealt{Hasler2009}: 877). Clarifying the Greek dialects by referring to native \isi{vernacular} diversity also occurred in grammars by native speakers of \ili{French} and \ili{English}, albeit much less frequently.\footnote{For \ili{French}, see e.g. \citet[11--12]{Antesignanus1554}, on which see \citet{VanRooy2016c}. For \ili{English}, see e.g. \citet[191--192]{Milner1734} and \citet[121]{Holmes1735}.} This might be related to the fact that in these politically unified areas grammarians had more easily reached a consensus on the \isi{vernacular} standard to be adopted. As a result, Hellenists in these regions might have sensed that \ili{French} and \ili{English} dialects, conceived as \is{corruption}corrupt deviations from the revered standard, could not be so easily compared with the highly valued literary dialects of \ili{Ancient Greek}. Early modern Hellenists did not only fall back on their native context when Greek dialectal variation needed to be explained as a general phenomenon. It was also employed as a point of reference for clarifying the different evaluative attitudes toward the Greek dialects (cf. Chapter 7, \sectref{sec:7.3}). 
Notably, in his monograph on the Greek dialects, the German professor Otto Walper presented \ili{Attic} and \ili{Ionic} as more polished and smooth, whereas he claimed \ili{Aeolic} and \ili{Doric} to be less cultivated and not as pleasant to the ears. This, Walper explained, was not very different in “our \ili{German} language”, which “some provinces speak more smoothly, elegantly, and neatly than others”.\footnote{\citet[61]{Walper1589}: “Vt autem superiores dialecti politiores et suauiores fuere; ita hae duae (Dorica et Aeolica) incultiores et auribus ingratiores existimantur, haud secus atque in lingua nostra Germanica prouinciae aliae aliis loquuntur suauius, concinnius atque politius”.} Also a specialist of \ili{Hebrew}, Walper went on to suggest that \ili{Hebrew} resembled \ili{Attic} and \ili{Ionic}, whereas \ili{Syriac} and \ili{Aramaic} had properties similar to \ili{Aeolic} and \ili{Doric}. Hellenists addressing a more international audience referred to various \isi{vernacular} contexts when explaining features of the sociolinguistic situation of ancient Greece. In his comprehensive Greek grammar, destined for Jesuit schools in various parts of Europe, the Jesuit Jakob \citet[20]{Gretser1593} referred to the \il{German!Common}German, \il{Italian!Common}Italian, and \il{French!Common}French “\is{common language}common languages”, the allegedly \is{geography}geographically neutral standard languages that were being developed, to explain the status of the Greek \ili{Koine} to his student readers (for his intended audience, see \citealt{Gretser1593}: {\footnotesize{)(}}.4\textsc{\textsuperscript{r}}). The French Hellenist Petrus \citet[11--12]{Antesignanus1554}, one of Gretser’s main sources of methodological inspiration, also clarified the status of the Greek \ili{Koine} using a more familiar situation, his native \ili{French} context. Antesignanus’s case is revealing in that it shows that the explanation did not occur in an entirely unidirectional manner from the \isi{vernacular} to the ancient Greek context. Instead, certain aspects of the \ili{French} context seem to have been forced into the Greek straitjacket, as, for instance, the idea that the \il{French!Common}French \isi{common language} could be adorned by features of certain approved \ili{French} dialects. Not all grammarians of \ili{French} would have agreed with this rather bold claim by Antesignanus. Something similar happened when the eighteenth-century Frisian Hellenist Tiberius Hemsterhuis (1685–1766) took the comparability of Greek and \ili{Dutch} for granted, using his native context to clarify the status of Greek variation and the Greek \isi{common language}. In order to explain what the \ili{Koine} was, “I will use”, Hemsterhuis said, “the example of our fatherland”.\footnote{\citet[102]{Hemsterhuis2015}: “Mirabitur quis quae sit illa κoινή. Exemplo utar nostrae patriae, ut id possim explicare”.} This led him to boldly present both the Greek and the \ili{Dutch} common languages as the standard speech of high society composed out of different dialects and not bound to a specific region (\citealt{Hemsterhuis2015}: 102–104). In doing so, he neglected the fact that the Greek \ili{Koine} and the \ili{Dutch} standard were based principally on specific dialects: \ili{Attic} in the case of the \ili{Koine}, and \ili{Brabantian} and \ili{Hollandic} in the case of \ili{Dutch}. 
\is{standard (language)|)} In summary, Hellenists widely assumed that it was possible to explain and clarify the foreign as well as ancient phenomenon of Greek dialectal diversity by means of a more familiar context. This usually coincided with the dialects of the native language of the early modern Hellenist grammarian and – more importantly – of his intended readership. Needless to say, this practice emerged out of didactic concerns. As such, it was a neat realization of Juan Luis Vives’s pedagogical insight that a teacher was better equipped to give instruction in \ili{Latin} and Greek if he also possessed a thorough knowledge of his \isi{mother tongue} and that of his audience (see \citealt{Padley1985}: 146). The explanatory usage also appeared outside of strictly grammaticographic and didactic contexts, in which case no thorough knowledge of the Greek dialects was required on the part of the author. For example, in his \ili{Latin}–\ili{Polish} dictionary of 1564, Jan Mączyński (ca. 1520–ca. 1587) invoked variation among \ili{Slavic} tongues alongside the Greek dialects to explain the \ili{Latin} term \textit{dialectus}, without mentioning, however, any Greek dialect by name: \begin{quote} The Greeks call \textit{dialects} species of languages, \textit{A property of languages, like in our \ili{Slavic} language, the Pole speaks differently, the \ili{Russian} differently, the \ili{Czech} differently, the \ili{Illyrian} differently, but it is nevertheless still one language. Only every region has its own property, and likewise it was in the Greek language}.\footnote{\citet[ \textit{s.v.} “dialectus”]{Maczynski1564}: “Dialectos Graeci uocant linguarum species, Vlasność yęzyków yáko w nászim yęzyku Slawáckim ynáczey mowi Polak ynáczey Ruśyn, ynáczey Czech ynaczey Ilyrak, á wzdy yednak yeden yęzyk yest. Tylko ysz każda ziemiá ma swę wlasność, y tákże też w Greckim yęzyku bylo”. The form “wzdy” should be “wżdy”, but the diacritic dot above the ⟨z⟩ does not appear in the original text. I kindly thank Herman Seldeslachts for this information and for helping me translate this early modern \ili{Polish} passage.} \end{quote} Before Mączyński, Thomas \citet[\textsc{xxxiii}\textsc{\textsuperscript{v}}]{Eliot1538} had likewise defined \textit{dialectus} with reference to his native context. However, unlike Mączyński, Eliot made no mention at all of Greek variation. This suggests that Eliot preferred to explain the \ili{Latin} word \textit{dialectus} by means of a familiar situation instead of troubling his reader with the diversity of \ili{Ancient Greek}, far distant in time and space from the sixteenth-century \ili{English} audience of his dictionary. Later dictionaries focusing on \ili{English} did, however, include references to both \ili{English} and Greek dialects.\footnote{See e.g. \citet[\textit{s.v.} “dialect”]{Bullokar1616} and \citet[\textit{s.v.} “dialect”]{Blount1656}. See \citet[7]{Blank1996}.} In a sixteenth-century English controversy on early Church practices, including the language used during \is{Mass, language of}Mass in the east of the Roman Empire, a more familiar linguistic situation was invoked to make claims about ancient Greek diversity. 
In this so-called \isi{Challenge controversy} – so named because it started out as a challenge mounted by the \isi{Protestant} John Jewell (1522–1571) – the English recusant John Rastell (1530/1532–1577) assumed a certain degree of comparability between Greek and \ili{English} variation, claiming that in both cases there was no \isi{mutual intelligibility}. He did so as he was trying to demonstrate that not all speakers of Greek would have understood the learned Greek used in \is{Mass, language of}Mass.\footnote{On the controversy, see e.g. \citet[115--154]{Jenkins2006}. On the use of the \ili{English} word \textit{dialect} in this context, see \citet[647--651]{VanRooyConsidine2016}.} Since an Englishman could not understand a \il{English!Scots}Scotsman, there was no reason to stipulate that speakers of different Greek dialects were able to comprehend each other, \citet[68\textsc{\textsuperscript{r}}]{Rastell1566} argued. Rastell’s native \ili{English} context thus clearly informed his views on the lack of \isi{mutual intelligibility} among the ancient Greek dialects to make a point in a theological controversy. The explanatory comparison of \isi{vernacular} with ancient Greek dialectal diversity occurred in various genres other than Greek grammars, dictionaries, and theological invectives, too. These ranged from philological commentaries on classical works and monographs on \isi{New Testament} \il{Ancient Greek!New Testament}Greek to \is{geography}geographical publications, prefaces to lexica, and various \is{historiography}historiographical works.\footnote{For a philological commentary, see e.g. \citet[68; \ili{French}–Greek comparison]{Casaubon1587}. For a monograph on \isi{New Testament} \il{Ancient Greek!New Testament}Greek, see e.g. \citet[212--213; also \ili{French}–Greek]{Cottiere1646}. For a \is{geography}geographical publication, see e.g. \citet[60; \ili{English}–Greek comparison]{Speed1676}. For a preface to a lexicon, see e.g. \citet[(b.3)\textsuperscript{v}\textsc{;} also \ili{English}–Greek]{Phillips1658}. For a historiographical work, see e.g. \citet[108, 117; \ili{French}/\ili{Italian}–Greek comparison]{Freret1809}.} A particular case in point is John Williams (?1636–1709), who, in his discourse on the language of church service, mentioned \ili{English} and Greek variation alongside each other when explaining the concept of \textsc{dialect} to his readership, interestingly adding that the Greek dialects were “well known to the learned” (\citeyear[5]{Williams1685}). Does this imply that Williams was providing a reference to the readers’ native context for those who were not as learned? Whatever the case, Williams drew a direct parallel between the Greek \ili{Koine} \is{standard (language)}“standard” – he used this exact term – and \il{English!court}court \ili{English}, projecting along the way his conception of the \ili{English} \is{standard (language)}standard back onto the Greek \ili{Koine}. Before proceeding to the next early modern trend in comparing ancient Greek with \isi{vernacular} dialectal diversity, I want to point out briefly here that the explanatory function did not come into being with the Renaissance revival of Greek studies. 
As a matter of fact, explaining one linguistic context of variation by means of another occasionally occurred in the late Middle Ages as well, for instance in exegetical works on biblical passages alluding to regional linguistic differences, in particular the \isi{shibboleth} incident in Judges 12.\footnote{See \citet[199--200]{VanRooy2018b} for the views of Nicholas of Lyra (1265–1349).} This was especially frequent in travel writings. \ili{Chinese} variation was compared with \ili{Gallo-Romance} diversity in the \textit{Book of the marvels of the world}; this work constitutes the written version of what the famous Venetian traveler Marco Polo (1254–1324) dictated to his cell mate in Genoa, Rustichello da Pisa, in 1298–1299. Rustichello, drawing up Marco Polo’s words in \ili{Old French}, wanted to explain \ili{Chinese} diversity by referring to the native context of his intended readership. Interestingly, an Italian translator substituted the allusion to diversity in France by referring to \ili{Italo-Romance} variation, clearly adapting the text to his Italian audience.\footnote{See \citet[157]{Polo1938}, where both the French original and the Italian rendering are offered in an English translation. Cf. \citet[855]{Borst1959}.} Mutual intelligibility was explicitly posited for \ili{Chinese} variation as well as the \ili{Italo-Romance} dialects, but not for the \ili{Gallo-Romance} context. This kind of comparison, omitting any reference to \ili{Ancient Greek}, continued to be drawn throughout the early modern period, even though these comparisons were far less frequent than those between \ili{Ancient Greek} and the \isi{vernacular}. The French explorer and diplomat Pierre Belon (1517–1564), for example, employed his native linguistic context to explain to his readers that inhabitants of Constantinople mocked the \il{Early Modern Greek@(Early) Modern Greek}Vernacular Greek spoken by outsiders. Just as the French laughed at \ili{Picard} speech and any other \ili{Gallo-Romance} variety that was not true \ili{French}, residents of Constantinople jibed at other varieties of \il{Early Modern Greek@(Early) Modern Greek}Vernacular Greek, \citet[5\textsc{\textsuperscript{v}}]{Belon1553} remarked.\footnote{On Belon as a traveler in Greece, see \citet[esp. 122]{Vingopoulou2004}.} Such comparisons of different contexts of variation excluding \ili{Ancient Greek} also occurred outside of travel writings. The Spanish Dominican Domingo de Santo Tomás (1499–1570) explained \ili{Quechua} diversity by referring to \ili{Romance} differences in pronouncing \ili{Latin} in his grammar of the South-American language.\footnote{See \citet[1\textsc{\textsuperscript{v}}]{Santo1560}. On his eye for variation, see \citet[140]{Calvo2005}.} Another noteworthy example stems from the correspondence of the Parisian humanist Claude Dupuy (1545–1594). \citet[274]{Dupuy2001} clarified \ili{Provençal} diversity to his Neapolitan colleague Gian Vincenzo Pinelli (1535–1601) by comparing it to the variation in his addressee’s native \ili{Italian} language in a letter dated December 12, 1579. \subsection{Justification and description: Greek as a polyvalent model}\label{sec:8.1.2}\largerpage It came as a relief to many humanists that, unlike \ili{Latin}, the revered \ili{Ancient Greek} language was not a monolithic linguistic whole. 
This reminded them of the situation in their native vernaculars and, at the same time, made them aware of the fact that dialectal variation was not necessarily an insurmountable obstacle to the regulation and grammatical \is{standardization!codification}codification of their \isi{mother tongue}. An observation in the first printed grammar of \ili{Dutch} is revealing in this regard. This language, its authors argued, could be regarded as one entity, even though there were regional differences in \isi{pronunciation}, “but not in such a manner that they do not understand each other very well”. Interestingly, they added that “in like manner the Greek language, which enjoys such high esteem, also had its different ‘dialects’”.\footnote{\citet[110]{[spieghel]1584}:\ia{Spieghel, Hendrik Laurensz@Spieghel, Hendrik Laurensz}\ia{Coornhert, Dirck Volckertszoon@Coornhert, Dirck Volckertszoon}\ia{Visscher, Roemer@Visscher, Roemer}\ia{Fallet, G.@Fallet, G.} “Ick spreeck […] int ghemeen vande Duytse taal, die zelve voor een taal houdende, […] wel iet wat inde uytspraack verschelende, maar zó niet óf elck verstaat ander zeer wel, tis kenlyck dat de Griexe taal, die zó waard gheacht is, óóck haar verscheyden \textit{dialectos} had”. For the authorship of this grammar, see \citet{Peeters1982}.} The addition of the relative clause “which enjoys such high esteem” clearly points to a justificatory use of ancient Greek diversity. This suggests that an acquaintance with the Greek literary dialects, however slight, catalyzed the emancipation of the vernaculars from \ili{Latin}, which, certainly in the fourteenth and fifteenth centuries, was often conceived of as a highly uniform language synonymous with grammar.\footnote{On the catalyzing effect, see e.g. already \citet[688]{Bonfante1953}; \citet[9]{Trapp1990}; \citet[67]{Rhodes2015}. On \ili{Latin} as an allegedly uniform tongue, see \sectref{sec:8.2} below.} The catalyzing effect seems confirmed by the fact that early comparisons of Greek with \ili{Italian} diversity sometimes included an explicit contrast with the unity of \ili{Latin}.\footnote{See e.g. \citet[\textsc{ii}.41]{Landino1974} and \citet[*.ii\textsc{\textsuperscript{v}}]{Manutius1496Aldus}. See \citet[172--173]{Alinei1984}; \citet[209--210, 215]{Trovato1984}. For the justificatory use of the Greek model in Italy, see \citet[46, 50]{Tavoni1998}.} It comes as no surprise then that the diversified linguistic patchwork of ancient Greece widely functioned as a model for scholars engaged in elevating, \is{standard (language)}standardizing, and describing their native \isi{vernacular} language. This intriguing tendency manifested itself in various ways. To begin with, many sixteenth-century scholars saw in the Greek dialects a literary model, which must be framed in the tradition of claiming a close link between \ili{Ancient Greek} and one’s native \isi{vernacular}.\footnote{On the Greek–\isi{vernacular} link, see \citet{Demaiziere1982}, with a focus on the \ili{French} context; \citet{Trapp1990}; \citet{Dini2004}, with reference to \ili{Prussian}. 
For a late example, see \citet[435--436]{VanHal2016}, who concentrates on \citet[119--132]{Reitz1730} and his linking of \ili{Dutch} to Greek.} The most telling example of this use of the Greek dialects can be found in the work of the renowned French printer and Hellenist Henri Estienne.\footnote{On Estienne’s comparison of \ili{French} and Greek diversity, see already \citet[70]{Demaiziere1988}.} In his \textit{Treatise on the conformity of the \ili{French} language with the Greek}, Estienne defended the usage of dialect words in \ili{French} literary works, adding that dialect words needed to be adapted to the common \ili{French} tongue, just like meat imported from elsewhere must be prepared in the French manner and not as it was cooked in the land of origin.\footnote{\citet[¶¶.ii\textsc{\textsuperscript{v}}]{Estienne1565}. Cf. \citet[\texttt{\char"2720}\textsc{\textsuperscript{r}}]{Ronsard1550}, on which see \citet[170]{Alinei1984}; \citet[24]{Barbier-mueller1990}; \citet[14]{Trapp1990}. Cf. also \citet[456, 458]{Mambrun1661}. Similar views were expressed by scholars from other areas: see e.g. \citet[\textsc{e.}iii\textsc{\textsuperscript{v}}–\textsc{e.}iv\textsc{\textsuperscript{r}}]{Oreadini1525} for \ili{Italian} and \citet[\textsc{a}.vi\textsc{\textsuperscript{r}}]{Craige1606} for \ili{English}.} Estienne (\citeyear[133]{Estienne1579}; \citeyear[*.iii\textsc{\textsuperscript{v}}–*.iiii\textsc{\textsuperscript{r}}]{Estienne1582}) propagated the usage of the ancient Greek satirical author Lucian as a model for this practice. Inspired by the Greek heritage, he regarded \ili{French} dialectal diversity as a source of richness that could adorn the \ili{French} language (\citealt{Estienne1582}: *.iii\textsc{\textsuperscript{v}}; see \citealt{Auroux1992}: 366–367). The fact that \citet[143]{Estienne1579} allowed for dialect words and even dialect endings in \ili{French} implies that to his mind “the pure and native \ili{French} language” (\textit{le pur et nayf langage françois}) did not entirely correspond to \ili{Parisian} speech, the variety on which the \ili{French} norm was primarily based. He explained this by drawing a comparison with the \ili{Attic} dialect, in which not every \il{Ancient Greek!Athenian}Athenian feature was allegedly approved. \citet[133--134]{Estienne1579} denied the same flexibility to \ili{Italian}, since its \ili{Tuscan}-based standard was much less prone to adopt features from other dialects. To sum up, Estienne, inspired by the Greek dialects he knew so well, viewed the \ili{French} dialects as a source of richness that could embellish \ili{French} language and literature. He perceived esthetic and typological similarities between \ili{French} and Greek dialects, even though he did not go so far as to make any claims about the genealogical dependency of \ili{French} on Greek (\citealt{Droixhe1978}: 99; \citealt{Considine2008a}: 62).\largerpage Such ideas also appeared outside of France. The great German grammarian Justus Georg Schottel (1612–1676), for instance, argued that not everything outside of the \is{standardization!selection}selected dialect – in particular \ili{Attic} Greek and the \ili{German} of \il{German!Misnian (of Meissen)}Meissen~– was faulty (\citealt{Schottel1663}: 176; cf. \citealt{Roelcke2014}: 250). What is more, not all dialect words must be avoided, since some could be current in certain technical jargons. These considerations led Schottel to conclude that frequent and important dialect words needed to be included in a dictionary. 
The value he attached to dialectal material clashes somewhat with his view, expressed only some pages earlier, that dialects were inherently incorrect and unregulated \citep[174]{Schottel1663}. William J. \citet[1110]{Jones2001} has summed up this contradiction nicely: \begin{quote} Himself a native speaker of \ili{Low German}, Schottelius was caught between admiration for a[n] […] etymologically valuable dialect, and an awareness that prestige and currency precluded any choice but \ili{High German}. \end{quote} Other German scholars stressed the richness of \isi{vernacular} dialects as well, often with reference to the ancient Greek context.\footnote{See e.g. \citet[\textsc{a.3}\textsc{\textsuperscript{r}}\textsc{–a.3}\textsc{\textsuperscript{v}}]{Chytraeus1582}; \citet[\textsc{c.1}\textsc{\textsuperscript{r}}]{Meisner1705}; \citet[73]{Hertling1708}.}\largerpage Occasionally, patriotic sentiments tempted scholars to accord a special status to the dialects of their native \isi{vernacular} tongue. This happened in Manuel de Larramendi’s (1690–1766) \ili{Basque} grammar, which contains a section “On the dialects of the \ili{Basque} language” (“De los dialectos del bascuenze”; \citealt{Larramendi1729}: 12–15). Larramendi’s views were clearly informed by early modern scholarship on the Greek dialects. He emphasized that, much like Greek, \ili{Basque} had a \isi{common language}, a “body of language common and universal to all its dialects”.\footnote{\citet[12--13]{Larramendi1729}: “cuerpo de lengua, comun y universal à todos sus dialectos”.} Further, he seems to have projected the distinction between principal and minor dialects from early modern grammars of Greek onto the \ili{Basque} context (\citealt{Larramendi1729}: 12; see Chapter 2, \sectref{sec:2.6}). Greek and \ili{Basque} diversity was, however, not comparable on every level, claimed \citet[12]{Larramendi1729}: \begin{quote} The difference is that the dialects of the \ili{Basque} language are very regulated and consistent, as if they were invented with devotion, discretion, and expediency, which the Greek dialects did not have and others in many other languages do not have.\footnote{“La diferencia está que los dialectos del bascuenze son muy arreglados y consiguientes, como inventados con estudio, discrecion y oportunidad: lo que no tenian, ni tienen los dialectos griegos, y otros en otras muchas lenguas”. On this passage, see also \citet[876]{Hasler2009}.} \end{quote}\largerpage In other words, the Greek dialects served as a model for Larramendi in several respects, but were at the same time valued less highly than their \ili{Basque} counterparts, an idea quite unusual in the early modern period. In a work published a year earlier, however, \citet[142]{Larramendi1728} had presented the Greek dialects as also being regulated. It is unclear exactly why he had this change of heart, but \isi{patriotic sentiment} no doubt played a role. Not all scholars associated the Greek dialects with spoken varieties of the vernaculars. The \ili{Dutch} grammarian Adriaen Verwer (ca. 1655–1717) was aware of the literary character of the Greek dialects and compared them with different written registers of his native \isi{vernacular} rather than with spoken regional dialects. 
\citet[53--54]{Verwer1707} divided written \ili{Dutch} into three main forms: (1) \il{Dutch!Common}the \isi{common language} (\textit{lingua communis}), (2) the dialect used in \il{Dutch!government}government (\textit{dialectus curiae senatuique familiaris}), and (3) \il{Dutch!Poetical}the \isi{poetical dialect} (\textit{dialectus poetis familiaris}). Verwer also mentioned a \il{Dutch!court}court dialect (\textit{dialectus forensis}), a variety closely cognate to the \isi{common language}, from which it only differed in \is{rhetoric}rhetorical – and not in grammatical – terms. The focus on register variation is also apparent from his definition of the \ili{Latin} term \textit{dialectus}; dialects were “various particular speech forms in our written language”.\footnote{\citet[53]{Verwer1707}: “dese ende gene, bysondere spraekvormen in onse schrijftaele”.} The situation of ancient Greece also functioned as a model for \is{standardization!selection}selecting a variety to be \is{standardization!codification}codified as the \isi{vernacular} norm. A very straightforward example of such an approach can be found in Nathan Chytraeus’s (1543–1598) preface to his \ili{Latin}–\ili{Low Saxon} lexicon of 1582. In it, \citet[\textsc{a.3}\textsc{\textsuperscript{r}}\textsc{–a.3}\textsc{\textsuperscript{v}}]{Chytraeus1582} described the constitution and elevation of a \il{German!Common}German \isi{common language} as a process awaiting completion and stressed the model function of the Greek \ili{Koine} in this context. He moreover saw a key role for the dialects, which could beautify the \isi{common language}. More theoretical still were the proposals by certain early Cinquecento Italian scholars to create a mixed \isi{common language} after the example of the Greek \ili{Koine} as an artificial solution to the \is{questione della lingua@\textit{questione della lingua}}\textit{questione della lingua}.\footnote{See Vincenzo Colli’s ideas as quoted by Pietro \citet[\textsc{xii}\textsc{\textsuperscript{v}}\textsc{–xiii}\textsc{\textsuperscript{r}}]{Bembo1525}. See \citet[119]{Melzi1966}; \citet[215--218]{Trovato1984}; \citet[12]{Trapp1990}.} Not all humanists limited themselves to mere reflection. The \ili{Dutch} scholar and priest Pontus de Heuiter (1535–1602) put the active creation of a \isi{vernacular} \isi{common language} through mixture to actual practice in his \textit{\ili{Dutch} orthography}. De Heuiter explicitly mentioned his debt to the ancient Greek model for his initiative:\largerpage[2] \begin{quote} I have taken the Greeks as an example, who, having the four good tongues of the country in usage, namely \textit{Ionic}, \textit{Attic}, \textit{Doric}, and \textit{Aeolic}, have created a fifth one out of them, which they called the \textit{common language}. Thus I have created my \ili{Dutch} over a period of twenty-five years out of \ili{Brabantian}, \ili{Flemish}, \ili{Hollandic}, \ili{Guelderish}, and \ili{Kleverlandish}.\footnote{\Citet[93]{De1581}: “[…] heb ic exempel ande Grieken genomen, die vier lants goude talen in ufenijng hebbende, te weten: \textit{Ionica, Attica, Dorica, Aeolica}, die vijfste noh daer uit gesmeet hebben, die zij nommen \textit{gemeen tale}: aldus heb ic mijn Nederlants over vijf en twintih jaren gesmeet uit Brabants, Flaems, Hollants, Gelders en Cleefs”. See also \citet[110]{Dibbets2008} and \citet[13--14]{De1917}. The latter has linked this passage to Hieronymus Wolf’s reference to Greek in his discussion of \ili{German} dialects. 
However, Wolf did not explicitly take the Greek context as a model and seems to have stressed, instead, the \isi{incomparability} of both contexts. See \sectref{sec:8.1.3} below.} \end{quote} Not all scholars using the Greek \ili{Koine} as a model for their \isi{vernacular} norm believed the \ili{Koine} to be created out of the different dialects. The grammarian Kaspar von Stieler (1632–1707) held that the Greek \ili{Koine}, which he saw as a model for his \ili{High German} norm, was exempt from dialectal elements \citep[2]{Stieler1691}. Interestingly, later authors emphasized the frequently drawn parallel between the Greek \ili{Koine} and the \ili{German} norm by referring to the former as “High Greek” (\textit{Hoch-Griechisch}) by \isi{analogy} to “\ili{High German}” (\textit{Hochdeutsch}, \citealt{Schuster1737}: 13). Not everybody regarded the Greek \ili{Koine} as the model for the \is{standardization!selection}selected, normative variety of their \isi{vernacular} tongue. Almost equally often, scholars put forward the \ili{Attic} dialect as the main form of Greek and the principal model after which one’s \isi{mother tongue} should be developed. This holds especially true in cases where scholars emphasized the literary function of the \is{standardization!selection}selected variety. A telling example is Henri \citet[*.iii\textsc{\textsuperscript{v}}]{Estienne1582}, who put \ili{French} in the capital city of the kingdom; just as Athens was the “Greece of Greece” in terms of speech, Paris was the “France of France”. Estienne added, however, that this was the case not because the French capital was frequently visited by the royal court, but because it had a parliament – he was perhaps inspired here by the example of Athenian democracy. He was thus comparing the \ili{French} language to \ili{Attic} rather than to the Greek \ili{Koine}. This was surely prompted by his emphasis on the \is{standardization!codification}codification of \ili{French} as a respected literary norm similar to \ili{Attic} rather than a language understood by all inhabitants of the kingdom. In fact, \citet[*.iii\textsc{\textsuperscript{r}}]{Estienne1582} seems to have regarded pure \ili{French} as a social privilege which the lower classes could never attain.\footnote{Cf. \citet[\textsc{xxxiii}\textsc{\textsuperscript{v}}]{Marineo1497} for an early comparison of \ili{Castilian} \ili{Spanish} with \ili{Attic} Greek.} Taking \ili{Attic} and especially the Greek \ili{Koine} as the model for \is{standardization!selection}selection had far-reaching \is{glottonymy}glottonymic consequences. Indeed, the designation “\isi{common language}” was widely used to refer to the selected variety of a \isi{vernacular} language in imitation of the Greek \ili{Koine}, usually termed \textit{lingua communis} in \ili{Latin}. What is more, some even referred to the \isi{vernacular} norm, by the procedure of antonomasia, as “\ili{Attic}”. The Greek scholar Alexander Helladius (1686–after mid-1714) attributed the label of “\ili{Attic}” to what he called the “\ili{High German} par excellence” (“κατ’ ἐξoχὴν \textit{das Hochteutsche}”; \citealt{Helladius1714}: 187). \ili{Attic} or \ili{Koine} Greek were not, however, the only speech forms that could serve as the model for selecting a \isi{vernacular} norm.
In cases where a \isi{vernacular} variety was described that was not or not yet fully established as the selected norm but which an author wanted to see established, it was sometimes compared to varieties of languages other than Greek that were widely accepted as the standard form. One scholar writing in 1595 wanted to promote his native \ili{Croatian} dialect as the \ili{Slavic} norm, for which \ili{Tuscan} \ili{Italian} constituted his model (\citealt{Veranzio1595}: *.3\textsc{\textsuperscript{v}}; cf. also \citealt{Schoppe1636}: 46). Apart from \is{standardization!selection}selection, Greek could also be the model for another key \is{standard (language)}standardization process in \isi{vernacular} tongues: \is{standardization!codification}codification in spite of the presence of dialectal variation. Early in the sixteenth century, the French humanist Geoffroy Tory (ca. 1480–before late 1533) commented as follows on the regulation and grammatical \is{standardization!codification}codification of \ili{French}, which he regarded more as a set of varieties rather than a unitary language with a single norm: \begin{quote} Our language is as easy to regulate and put in good order as the Greek language once was, in which there are five speech varieties, which are the \ili{Attic}, \ili{Doric}, \ili{Aeolic}, \ili{Ionic}, and \isi{common language}. These have certain mutual differences in their noun declensions, verb conjugations, orthography, accents, and \isi{pronunciation}.\footnote{\citet[\textsc{iv}\textsc{\textsuperscript{v}}\textsc{–v}\textsc{\textsuperscript{r}}]{Tory1529}: “Nostre langue est aussi facile a reigler et mettre en bon ordre, que fut jadis la langue grecque, en la quelle ya cinq diversites de langage, qui sont la langue attique, la dorique, la aeolique, la ionique et la commune, qui ont certaines differences entre elles en declinaisons de noms, en conjugations de verbes, en orthographe, en accentz et en \isi{pronunciation}”. See \citet[466--467]{Trudeau1983} for Tory’s “pandialectal” conception of \ili{French}. Cf. \citet[19--20]{Defaux2003}, where the passage is contextualized within the French grammatical tradition; \citet[23]{Cordier2006}, who frames it in Tory’s general reception of antiquity.} \end{quote} Tory proceeded by mentioning a number of \ili{French} speech forms: the \il{French!court}court variety, \ili{Parisian} (which he seems to have associated closely with the court variety), \ili{Picard}, \ili{Lyonnais}, \ili{Limousin}, and \ili{Provençal}. Inspired by the Greek model, he did not view dialectal variation as a negative property hindering the regulation of the \isi{vernacular}. Other scholars were not as optimistic about the \is{standardization!codification}codification of dialect-ridden tongues. The Hellenist Erasmus \citet[239]{Schmidt1615} emphasized the impossibility of reducing the dialects of both Greek and his native \ili{German} to a norm. It goes without saying that not only Greek was used as a model for the \is{standardization!codification}codification of a norm. \ili{Latin} or other \isi{vernacular} contexts were a major source of inspiration as well. The renowned grammarian Johann Christoph Gottsched (1700–1766), for instance, was inspired by the example of the \ili{Latin} tongue in declaring it necessary to ban dialectal features from the \ili{German} norm (\citeyear[334]{Gottsched1748}). The ancient Greek dialect context also served as a descriptive model, taken here in a very broad sense and therefore encompassing a range of approaches. 
To start with, the Greek prototype was projected onto the linguistic situation on the Iberian peninsula by the Spanish humanist Bartolomé Jiménez Patón (1569–1640). More particularly, Jiménez Patón relied on the traditional classification of Greek into five dialects to map out variation in his native land: \begin{quote} And thus we say that among the Greeks there are five manners of tongue with different dialects, which are the \ili{Attic}, \ili{Ionic}, \ili{Doric}, \ili{Aeolic}, and common tongue. And in Spain there are five others, which are the \il{Valencian (Catalan)}Valencian, \ili{Asturian}, \ili{Galician}, and \ili{Portuguese}. All of these derive from this fifth, or principal and first Original \ili{Spanish} of ours, different from the \ili{Cantabrian}.\footnote{\citet[10\textsc{\textsuperscript{r}}\textsc{–10}\textsc{\textsuperscript{v}}]{Jimenez1604}:“Y asi entre los Griegos decimos aver cinco maneras de lengua con differentes dialectos que son la lengua attica, ionica, dorica, aeolica y comun. Y en España ay otros cinco, que son la valenciana, asturiana, gallega, portuguesa. Las quales todas se an derivado de esta nuestra, quinta o principal y primera, originaria española differente de la cantabria”.} \end{quote} Jiménez Patón’s circumscription of the historical position of “Original \ili{Spanish}” vis-à-vis the four other dialects may suggest that he envisioned the relationship of the \ili{Koine} to the Greek dialects in much the same terms. If so, the projection did not happen solely from Greek to \ili{Spanish}, but partly also vice versa. In other cases, the Greek dialects were unmistakably forced into a \isi{vernacular} straitjacket, reversing the directionality of the comparison. For example, Friedrich Gedike’s analysis and classification of the Greek dialects were modeled on his tripartite conception of the \ili{German} dialects (see Chapter 7, \sectref{sec:7.3}). The Greek dialects were also eagerly used as a descriptive point of reference by scholars wanting to sketch the degree of kinship among certain \isi{vernacular} varieties, even among varieties that today are usually considered to be distinct but related languages. The preacher from Dordrecht Abraham Mylius (1563–1637) compared in his \textit{Belgian language} the superficial variation among some of the languages now known as \ili{Germanic} to differences between \ili{Aeolic} and \ili{Ionic}, stressing that, in both cases, the root and character of speech had remained the same (\citealt{Mylius1612}: 90; cf. e.g. also \citealt{Boxhorn1647}: 75–76). This also occurred on a lower level, as in Sven Hof’s (1703–1786) pioneering monograph on the dialect of \ili{Västergötland}, a province in the west of modern-day Sweden. In this work, \citet[esp. 12–13, 23]{Hof1772} relied on his familiarity with the Greek context in seeing dialects as classifiable entities and in describing individual dialect features. For some scholars, using the Greek dialects as a model context had \is{glottonymy}glottonymic consequences. The Italian humanist Claudio Tolomei (ca. 1492–1556), writing around 1525, contended that in much the same way as it was justified to group the Greek dialects together and designate them with one and the same label, the varieties of \ili{Italian} should be seen as one linguistic class and should be called by one and the same name (\citealt{Tolomei1555}: 14; see \citealt{Trovato1984}: 216). 
Individual Greek dialects were frequently proposed as a point of comparison for clarifying the status and position of a \isi{vernacular} dialect in its broader linguistic landscape. \ili{Attic} was said to be similar to \il{German!Misnian (of Meissen)}Misnian – the \ili{German} of Meissen – often presented as the \is{standard (language)}standard variety of \ili{German} (see e.g. \citealt{Borner1705}: \textsc{b.4}\textsc{\textsuperscript{v}}; \citealt{Simonis1752}: 214–215). Henri Estienne perceived parallel features in individual \ili{French} and Greek dialects. For instance, \citet[3--4]{Estienne1582} compared the \isi{broadness} of \ili{Franco-Provençal} speech – \textit{sermo Romantius} he termed it in \ili{Latin} – to that of \ili{Doric} Greek, pointing out that both varieties were characterized by the prominence of the vowel [a]; examples he cited were \ili{Franco-Provençal} \textit{cla} and \ili{Doric} \textit{kláks} (κλάξ), both words meaning ‘key’. In a similar vein, the \isi{Enlightenment} scholar Ferdinando Galiani (1728–1787), in his monograph on his native \ili{Neapolitan} dialect, stressed its archaism and contended that it had phonetic properties – open vowels, a great expressivity of words, and strong consonants – similar to \ili{Doric}, the Greek dialect spoken by the ancient inhabitants of Naples and surroundings. In sum, Galiani claimed, “\ili{Neapolitan} could well be called the \ili{Doric} dialect of the \ili{Italian} tongue”.\footnote{\citet[16]{Galiani1779}: “il napoletano potrebbe ben dirsi il dorico della favella italiana”.} His \is{glottonymy}glottonymic suggestion did not, however, enjoy any success. \is{Scots as Doric|(}Things are very different with an early modern comparison of a Greek with an \ili{English} dialect. As a matter of fact, a development with consequences that resonate today began around the mid-seventeenth century, when the church historian Thomas Fuller (1608–1661) linked \ili{Scots} with \ili{Doric} Greek. According to Fuller, “the speech of the modern Southern-\textit{Scot} [was] onely a \textit{Dorick} dialect of, no distinct language from \textit{English}” \citep[81]{Fuller1655}. Forty years later, Patrick \citet[20]{Hume1695}, a commentator of John Milton’s \textit{Paradise Lost}, remarked on Milton’s use of the verb \textit{to rouse} that it signified ‘to get up’, being “a more northern \isi{pronunciation} of rise, like the Dorick dialect”. Around the same time, the writer John Dryden (1631–1700) characterized the English poet Edmund Spenser’s (1552/1553–1599) language as follows: \begin{quote} But Spencer, being master of our Northern dialect and skill’d in Chaucer’s \ili{English}, has so exactly imitated the \ili{Doric} of \iai{Theocritus}, that his love is a perfect image of that passion which God infus’d into both sexes, before it was \is{corruption}corrupted with the knowledge of arts and the ceremonies of what we call good manners. (Dryden in \citealt{Virgil1697}: \textsc{a.2}\textsc{\textsuperscript{r}}) \end{quote} Why was there such a close association between \ili{Doric} and \ili{Scots}? This parallel seems to have been informed not only by certain shared linguistic features, such as the frequency of [a] and a presumed \is{broadness}broad \isi{pronunciation}, but also – and probably primarily – by the alleged rustic nature and status of both dialects as well as their being used in \isi{bucolic poetry}. This practice continued into the modern period (\citealt{Colvin1999}: v). 
A vestige of this early modern tradition is reflected in current \is{glottonymy}glottonymic practice: the variety of \ili{Scots} spoken in the \il{English!Scots!Aberdeen (Mid-Northern/North-East)}Aberdeen area, now known as Mid-Northern or North-East \ili{Scots} among linguists, is still labeled \textit{Doric} in popular usage.\footnote{See \citet[116]{Mccoll2007}: “In the course of the twentieth century, the North-East variety became known as The \ili{Doric}, a term previously applied to all \ili{Scots} varieties”.} The history of the association of \ili{Scots} with \ili{Doric}, which I have shown to go back at least to the seventeenth century, deserves closer investigation, but this lies outside the scope of this book.\is{Scots as Doric|)} Yet another important manner in which Greek diversity was used as a descriptive point of reference was the extrapolation of letter permutations closely and prototypically associated with Greek to the diversity among the tongues of Western Europe. Already around the turn of the sixteenth century, Greek letter changes were a source of inspiration for describing similar variations in \ili{Italo-Romance}.\footnote{See e.g. \citet[*.ii\textsc{\textsuperscript{v}}]{Manutius1496Aldus} and \Citet[97\textsc{\textsuperscript{r}}]{Da1509}. See also Chapter 6, \sectref{sec:6.2}.} Especially in \ili{West Germanic}-speaking Europe, this was a prominent phenomenon; there, the sigma–tau alternation present in, for instance, \ili{Koine} \textit{glôssa} (γλῶσσα) and \ili{Attic} \textit{glôtta} (γλῶττα), meaning ‘tongue’, was very often understood as somehow cognate to the ⟨s⟩–⟨t⟩ alternation among varieties of \ili{West Germanic}, as in \ili{High German} \textit{Wasser} vs. \ili{Dutch} \textit{Water}.\footnote{See e.g. \citet[21]{Mylius1612}. Cf. also \citet[\textsc{m}.ii\textsc{\textsuperscript{r}}]{Althamer1536}; \citet[\textsc{a.3}\textsc{\textsuperscript{r}}]{Chytraeus1582}; \citet[119--132]{Reitz1730}; \citet[61--62]{Ruhig1745}; \citet[23--24]{Hof1772}.} A final and somewhat peculiar use of Greek diversity as a model can be found in the work of the \isi{Enlightenment} pedagogue Friedrich \citet[7]{Gedike1782}, who assumed that the Greek context could assist in predicting dialectal evolution in other languages. Gedike’s knowledge of the history of Greek colonization and its impact on dialect formation led him to prophesy the emergence of a new \ili{English} dialect in the \il{English!American}United States, which at his time of writing in 1782 had just recently declared independence from Great Britain in 1776, even though this was officially recognized by Great Britain only in September 1783 through the Treaty of Paris. Gedike was, however, probably not very familiar with the linguistic situation in the US; otherwise he would have realized that his prediction was, in fact, already becoming a reality at his time of writing. In summary, Greek variation was eagerly used as a model by early modern scholars engaged in the elevation, \isi{standardization}, and description of the \isi{vernacular} tongues of Western Europe, usually their native ones. This happened in various ways, which can be placed under three main, not always easily distinguishable headings: the Greek linguistic context with its characteristic dialectal diversity was employed as (1) a literary \textit{exemplum}, (2) a model for \isi{standardization}, and (3) a descriptive point of reference, this last taken in very broad terms.
The fascination with the Greek model was sometimes so intense that one could speak of a true Hellenomania, as with the printer-philologist Henri Estienne. An intimate acquaintance with the Greek language and its dialects was not always an indispensable prerequisite, even though it usually stimulated the exemplary use of the Greek language strongly, as again in Estienne’s case. \subsection{Dissociation: The particularity of the Greek dialects foregrounded}\label{sec:8.1.3} At first, humanist scholars seem to have largely agreed upon the comparability of Greek and \isi{vernacular} dialectal variation, which for them seems to have been a kind of uncontested assumption. Gradually, however, different voices were heard, especially from the end of the sixteenth century onward, when the \is{standardization!selection}selection of the linguistic norm was more or less settled for many Western European vernaculars, even though this process was completed at different moments for each language.\footnote{See e.g. \citet[217--222]{Mattheier2003}, who points out that Luther’s \ili{German} and so-called general \ili{German} (an \ili{East Upper German} koine) competed for most of the early modern period, even though the former eventually gained the upper hand.} Two early scholars with a particularly outspoken opinion on the issue were Benedetto Varchi (1503–1565) and Vincenzo Borghini (1515–1580), both Italian humanists involved with the \is{questione della lingua@\textit{questione della lingua}}\textit{questione della lingua}. Benedetto \citet[95]{Varchi1570} regarded the Greek dialects as “equal” (\textit{eguali}) – they were of the same noblesse and dignity – whereas there was inequality among \ili{Italian} varieties, since \ili{Florentine} speech was elevated above the rest. This seems to be reflected in Varchi’s usage of the term \textit{dialetto}, which he restricted to varieties of the Greek language. He nevertheless reserved a particular place for \ili{Attic}, which he claimed to be similar to \ili{Italian}, by which he meant \ili{Tuscan} \citep[141]{Varchi1570}. Siding with Pietro Bembo (1470–1547) against Baldassare Castiglione (1478–1529) and Gian Giorgio Trissino (1478–1550), Varchi was fiercely opposed to the use of the Greek \ili{Koine} as a model for a \il{Italian!Common}common Italian language.\footnote{\citet[269--271]{Varchi1570}, with reference to \citet{Bembo1525}, \citet{Castiglione1528}, and \citet{Trissino1529}.} Varchi argued that there were only four Greek dialects, out of which the Greeks easily created a common tongue, but the varieties in Rome were innumerable, making it impossible to produce an Italian koine out of them. Like Varchi, the Italian monk and exceptional Hellenist Vincenzo Borghini was convinced that Greek and \ili{Italo-Romance} variation were incomparable, a train of thought he developed in a manuscript treatise entirely devoted to this problem – it bears the title \textit{Whether the diversity of the Greek language is the same as the Italian} and was likely composed in the first half of the 1570s (edition in \citealt{Borghini1971}; see \citealt{Alinei1984}: 171, 191). \citet[335]{Borghini1971} argued instead that if the Greek context really needed to be compared with variation on the Italian peninsula, it should be with variation in the \ili{Tuscan} subgroup rather than with \ili{Italian} as a whole. After all, \ili{Italo-Romance} tongues differed from each other to a far greater extent than the Greek dialects did. 
The \ili{Tuscan}–Greek comparison was all the more preferable, Borghini continued, since the varieties of both linguistic groups were approved speech forms, in contrast to other \ili{Italian} varieties such as \ili{Lombard}. \citet[338--340]{Borghini1971} dismissed the comparison of \ili{Italian} and Greek also for historical reasons. Speakers of \ili{Italian} did not have a common tongue because, unlike in ancient Greece, there had originally been no unitary Italian people speaking a \isi{common language}. In fact, \ili{Italian} emerged out of the mixture and \isi{corruption} of the tongues of several different peoples. This was why constructing a \il{Italian!Common}common Italian language was a bad idea. What is more, much like Varchi, Borghini contrasted the approved and written Greek dialects, which only showed slight mutual differences, with the innumerable \ili{Italo-Romance} varieties, which could not be reduced to writing and which exhibited substantial divergences.\footnote{\citet[341]{Borghini1971}. See \citet[171]{Alinei1984}; \citet[210]{Trovato1984}; \citet[32--37]{Beninca1988}. Cf. \citet[253--254]{Salviati1588} for an argument similar to Borghini’s.} During the sixteenth century, voices similar to Varchi’s and Borghini’s were heard outside of Italy as well.\footnote{See e.g. \citet[595--596]{Wolf1578}, on whom see \citet{Von1856}, \citet[58–59]{Jellinek1898, Jellinek1913}, and \citet[esp. 214--218]{Mattheier2003}. Cf. also \citet[xiii.\textsc{\textsuperscript{v}}]{Palsgrave1530}.} This continued throughout the seventeenth century and reached its peak in the eighteenth century, especially in France, to which I turn now.\footnote{For seventeenth-century examples, see \citet[458--459]{Mambrun1661} and \citet[146--147]{Morhof1685}.}\largerpage The stress on \isi{incomparability} was particularly prominent in the widely read works of the French historian and classical scholar Charles Rollin, who distinguished between the dialects of the Greek language, termed \textit{idiomes} and \textit{dialectes}, and the patois of the different provinces of France, called \textit{jargons}. Rollin characterized the latter as vulgar and \is{corruption}corrupted manners of speaking not deserving the label of “language” (\textit{langage}). A dialect, in contrast, was “a language perfect in its own right”, apt for \is{literary usage}literary use, having its own rules and elegant features.\footnote{\citet[117]{Rollin1726}: “Chaque dialecte étoit un langage parfait dans son genre”. See also \citet[395]{Rollin1731}.} In a later work, \citet[395]{Rollin1731} connected this to the political fragmentation of Greece as opposed to the high degree of centralization in France (cf. Chapter 7, \sectref{sec:7.5}). The comparability was subsequently denied in Greek grammars composed by French scholars, as in the 1752 edition of a lengthy \textit{Introduction to the Greek language} by the French Jesuit Bonaventure Giraudeau (1697–1774). This grammar, composed in \ili{Latin}, was first published in Rome thirteen years earlier, but that edition lacked a reference to the \ili{French} dialects, as it would not have been useful to its Italian audience. Only when it was published in \ili{French}-speaking territory – the edition of 1752 appeared in La Rochelle and was sold in Paris – did a comment about \ili{French} linguistic diversity become relevant \citep[117]{Giraudeau1752}.
The criticism of the comparability of \ili{French} and ancient Greek regional diversity reached an apogee in the “Langue” article included in the ninth volume of Diderot and d’Alembert’s \is{Encyclopédie@\textit{Encyclopédie}|(}\textit{Encyclopédie}, published in 1765. The author of the entry was the French grammarian Nicolas Beauzée (1717–1789). In his lengthy article, \citet[249]{Beauzee1765} elaborated on two types of regional language variation, correlating with political differences. He contrasted \ili{Latin} and \ili{French} diversity with variation in ancient Greece, Italy, and Germany. Greeks, Italians, and Germans were made up of “several equal and mutually independent peoples” (“plusieurs peuples égaux et indépendans les uns des autres”), which was why their dialects were “equally legitimate” (“également légitimes”) forms of their respective national language. The situation was different for \ili{Latin}, which was the language of a politically unified empire. It therefore had only “one legitimate usage” (“un usage légitime”), while everything deviating from it did not deserve the label “dialect of the national language” (“dialecte de la langue nationale”). Instead, it should be circumscribed as “a patois abandoned to the populace of the provinces” (“un patois abandonné à la populace des provinces”).\footnote{Cf. \citet[135--136]{Priestley1762}, who expressed a view similar to Beauzée’s in the English context.} The same held true for his contemporary \ili{French} context, claimed Beauzée. Yet not every contributor to the \is{Encyclopédie@\textit{Encyclopédie}|)}\textit{Encyclopédie} seems to have been convinced of the differences between \ili{French} and Greek diversity. The anonymous author of the “Patois” entry asked himself: “What are the different dialects of the Greek language other than the patois of the different areas of Greece?”\footnote{\citet[174]{Anon.1765}: “Qu’est-ce que les différens dialectes de la langue greque, sinon les patois des différentes contrées de la Grece?”}\largerpage The emphasis on the \isi{incomparability} of \isi{vernacular} and Greek variation also occurred outside of France, especially in \ili{German}-speaking territories.\footnote{See e.g. \citet[b.2\textsc{\textsuperscript{v}}\textsc{–}b.3\textsc{\textsuperscript{r}}]{Nibbe1725}, who stressed differences in \isi{literary usage}; \ia{Frisch, Johann Leonhard@Frisch, Johann Leonhard}\citet[1131--1132]{[frisch]1730}, who opposed the literary Greek dialects to the \ili{German} dialects of the lower social classes (\textit{Pöbel-Sprach}); \ia{Frederick the Great@Frederick the Great}\citet[6--8]{[frederick1780}; \citet[203--204]{Ries1786}. For an example from England, see \citet[13--14]{Bayly1756}.} Of particular interest is the work of the eighteenth-century German classical scholar Johann Matthias Gesner, who provided an insightful account of the comparability of \ili{German} and ancient Greek diversity. In the past, Gesner argued, they were comparable. The absence of a centralized government and capital caused dialectal variation in both areas.\footnote{\citet[160--161]{Gesner1774}. Cf. \citet[lxviii]{Court1778}, who limited the comparability to the period before France had a centralized government.} Moreover, Greek as well as \ili{German} dialects were initially used in writing. Starting with the Lutheran era, the \ili{German} dialects lost their prominence and social prestige, leading them to be ridiculed and to attain a status different from the ancient Greek dialects. 
\citet[162]{Gesner1774} likewise considered it unacceptable to compose dialectally mixed poetry in \ili{German}, arguing at the same time that this was equally inappropriate for Greek authors writing in or after late antiquity. In conclusion, scholars frequently stressed the \isi{incomparability} of Greek and \isi{vernacular} dialects, especially toward the end of the early modern period, when most \isi{vernacular} dialects had slipped into the shadows of their overarching \is{standard (language)}standard varieties and the comparison must have appeared less convincing. In assessing this lack of comparability, authors were generally inspired by language-external circumstances, usually geopolitical and sociocultural. On some occasions, however, \isi{incomparability} was maintained on a more strictly linguistic basis, for instance, when attempting to map out different degrees of linguistic kinship. This is what happened when certain eighteenth-century Scottish scholars compared the Greek dialects with the relationship among a number of tongues known today as \ili{Celtic}. The early eighteenth-century Scottish antiquarian David Malcolm stressed the \isi{incomparability} of both contexts, leading him to propose a different terminology for each situation: \begin{quote} Many indeed say that the \il{Welsh}\textit{Welsh} and \il{Irish}\textit{Irish} are but different dialects of the same language, but those who have enquired into them will easily see that they differ more widely than the dialects of the \textit{Greeks}. Perhaps it may not be amiss to call them \isi{sister languages}. (\citealt{Malcolm1738}: 46–47; cf. \citealt{Macnicol1779}: 311) \end{quote} The Greek dialects were not always directly involved when scholars emphasized the \isi{incomparability} of two dialect contexts. Comparisons of different Western European vernaculars sometimes served to devalue the dialects of one language in favor of the dialects of another. Henri \citet[133--134]{Estienne1579}, for example, praised the richness and utility of \ili{French} dialectal diversity, both properties he denied to \ili{Italian} (see \citealt{Swiggers1997}: 306; \citeyear{Swiggers2009}: 73). Also, when comparing two or more \isi{vernacular} dialect contexts, scholars noticed different degrees of \isi{mutual intelligibility} and variation.\footnote{For \isi{mutual intelligibility}, see e.g. \citet[158\textsc{\textsuperscript{r}}\textsc{–158}\textsc{\textsuperscript{v}}]{Hosius1560}; \citet[77 – I refer to the German translation of the Swedish original, published in 1746/1747]{Hogstrom1748}. For different degrees of variation, see e.g. \citet[27, 57]{Sajnovics1770}.} \subsection{Synthesis} Vernacular diversity was very often compared to the ancient Greek dialects during the early modern period. This happened for various purposes, most importantly, (1) to explain the nature of Greek dialectal diversity, mainly to would-be Hellenists or to an intended readership unacquainted with the Greek language, (2) to justify and describe (certain uses of) dialectal variation in the Western European vernaculars, and (3) to emphasize differences between Greek and \isi{vernacular} variation, especially in literary and sociopolitical terms. I have visualized the directionality of the comparisons in \tabref{tab:8.1} below. 
\begin{figure} \caption{Directionality of comparison of ancient Greek with vernacular dialects\label{tab:8.1}} \begin{tikzpicture}[baseline] \matrix (directionality) [anchor=base,baseline,matrix of nodes] { (1) & ancient Greek &[5cm] \isi{vernacular}\\ (2) & ancient Greek & \isi{vernacular}\\ (3) & ancient Greek & \isi{vernacular}\\}; \draw[-{Triangle[]}] (directionality-1-3) -- (directionality-1-2); \draw[{Triangle[]}-] (directionality-2-3) -- (directionality-2-2); \draw[{Triangle[]}-{Triangle[]}] (directionality-3-3) -- (directionality-3-2) node[midway,circle,draw,fill=white] (circle) {}; \draw [] (circle.north east) -- (circle.south west); \end{tikzpicture} \end{figure} In the cases of (1) and (2), the figure suggests a strictly unidirectional movement. However, as I have argued, especially in \sectref{sec:8.1.1} above, this is too simple a picture. Scholars often suppressed, usually silently, the differences between both dialect contexts in order to underline the similarities, and they sometimes even forced one situation into the straitjacket of the other. This could happen either consciously or subconsciously. It is, however, difficult to tell the degree of consciousness from the actual evidence, as the suppression of the differences was nearly always left unmentioned. The reason for this is obvious: mentioning differences would invalidate the scholar’s claim of comparability. The enumeration above may be taken to carry chronological implications as well. At first, the tendency to explain the phenomenon of ancient Greek dialectal diversity prevailed, soon after which the directionality was reversed, with the Greek linguistic context functioning as a model for justifying and describing \isi{vernacular} variation. The third element, dissociation, came about as a reaction against this latter use of the Greek dialects in the second half of the sixteenth century and culminated in the eighteenth century. This occurred especially in France, where the devalued patois were emphatically opposed to the literary Greek dialects. Even though it is possible to distinguish certain tendencies throughout the early modern period, one must be aware that, once the three main approaches toward Greek vis-à-vis \isi{vernacular} diversity were established, they often coexisted. What is more, one scholar could reflect and reunite different approaches in their writings, even attitudes as seemingly contradictory as (2) and (3). For example, in Henri Estienne’s work, the model function of Greek took center stage, as I have established above in \sectref{sec:8.1.2}. Elsewhere in his work, however, \citet[93--94]{Estienne1587} granted that the \is{literary usage}literary use of dialects was much more restricted in \ili{French} than it had been in \ili{Ancient Greek}, thus displaying an awareness of differences between both dialect contexts. He noticed that Homer was allowed to mix different dialects in his epic poems, but in \ili{French} this primarily happened in comic pieces and was uncommon in more serious writings, with the exception of certain dialect words in the poetry of Pierre de Ronsard and Joachim du Bellay. What \isi{vernacular} varieties were compared most intensively to the ancient\linebreak Greek dialects?
It should come as no surprise that Italian humanists were the first to compare ancient Greek diversity with their \isi{vernacular} context, as they stood at the cradle of Renaissance Greek studies.\footnote{On the comparison of the Greek and \ili{Italian} contexts, see also \citet[2–3, 51]{Dionisotti1968}, \citet[179]{Alinei1984}, \citet[215]{Trovato1984}, and \citet[36--37]{Lepschy2002}.} Indeed, \ili{Italian} diversity was frequently compared to the Greek dialects, primarily in the sixteenth century. After the \is{standardization!selection}selection of the normative variety was more or less settled, comparisons of Greek and \ili{Italian} variation became less frequent. Such comparisons seem to have occurred only occasionally in the seventeenth and eighteenth centuries, mainly to stress the similarities both contexts displayed (e.g. Salvini in \citealt{Muratori1724}: 99–100). Almost immediately after the revival of Greek studies reached the other side of the Alps, intuitive comparisons of the Greek and \ili{German} dialect contexts started to appear. Soon, they occurred in the work of Frenchmen, too, where the comparison seems to have been related to the patriotic claim that \ili{French} derived from Greek and not from \ili{Latin}. Paradoxically, it turned out to be French scholars who stressed most strongly the \isi{incomparability} of Greek and \ili{French} variation in the eighteenth century. This was no doubt related to the purist and prescriptivist attitudes current in French linguistic thought at the time as well as to a reverence for the literary dialects of Greek.\footnote{On French purism in the eighteenth century, with specific reference to the \textit{Académie française}, see \citet{Francois1905}.} In England, comparisons were frequent, too, albeit less so than in Italy, Germany, and France, and the comparability of Greek and \ili{English} variation was usually taken for granted. It was somewhat less customary to compare the ancient Greek dialects with variation in \ili{Dutch}, \ili{Spanish}, and \ili{North Germanic}, and much less so with varieties of \ili{Baltic}, \ili{Basque}, \ili{Celtic}, \ili{Portuguese}, and \ili{Slavic}. This is not really astonishing: intense comparisons of Greek with \isi{vernacular} variation were principally conducted by scholars active in areas and cities that were centers of Greek studies, including most importantly Italy, Germany, and France. Comparative approaches toward ancient and \isi{vernacular} Greek variation were exceptional, most likely because Western European scholars did not feel the need to justify or describe the dialectal variation of a foreign language they considered barbarous and because they approached the matter largely in terms of discontinuity rather than \isi{incomparability} (see Chapter 2, \sectref{sec:2.10}; Chapter 5, \sectref{sec:5.5}). A notable exception was the Italian Jesuit missionary Girolamo Germano (1568–1632), who tried to justify his focus on the dialect of \il{Early Modern Greek@(Early) Modern Greek!Chian}Chios in his \il{Early Modern Greek@(Early) Modern Greek}Vernacular Greek grammar by referring to the central status of \ili{Attic} among the ancient dialects.\footnote{\citet[10]{Germano1622}. Cf. \citet[vi--vii]{Du1688}, who reminded his readers of ancient Greek dialectal diversity in order to explain \isi{vernacular} Greek variation.} Early modern scholars positioned the ancient Greek dialects in various ways vis-à-vis those of the Western European vernaculars.
Yet how did they relate the Greek dialects to other languages they eagerly studied, primarily \ili{Latin} and the so-called Oriental tongues, including \ili{Hebrew} and \ili{Arabic}? \section{Latin: Uniquely uniform or diversified like Greek?}\label{sec:8.2} In the early stages of the Renaissance, there was a common belief that, in contrast to \ili{Ancient Greek}, \ili{Latin} was uniform and therefore exempt from dialectal variation. This view was most famously championed by the Italian humanist \ia{Valla, Lorenzo}Lorenzo Valla. For Valla, as I have shown, the unifying power of \ili{Latin} was a great advantage, in sharp contrast to the internal linguistic discord among the Greeks. Later humanist scholars such as Aldus Manutius and Juan Luis Vives also adhered to the idea of \ili{Latin} uniformity, which lived on throughout the early modern period.\footnote{See \citet[*.ii\textsc{\textsuperscript{v}}]{Manutius1496Aldus}; \citet[\textsc{x}.iii\textsc{\textsuperscript{v}}]{Vives1533}: “Romana dialectos non habet, unica est et simplex”. See \citet[11]{Trapp1990}. Cf. \citet[34--35]{Erasmus1528}; \citet[121]{Rapin1659}; \citet[29]{Wesley1736}; \citet[113--114]{Primatt1764}.} Unlike Valla, however, Manutius regarded it as a cause of poetical poverty. Vives, on the other hand, denied the existence of diversity in classical \ili{Latin}, but at the same time felt compelled to grant that \ili{Latin} had clearly changed over time~– he was no doubt thinking of the traditional four-stage periodization offered by the Early Christian author \iai{Isidore of Seville} (ca. 560–636).\footnote{See \citet[229--232]{Denecker2017} on Isidore’s division of the history of Latin into \il{Latin!Ancient}ancient, Latin, \ili{Roman}, and \il{Latin!mixed}mixed.} Valla, Manutius, and Vives all opposed Greek diversity directly to \ili{Latin} uniformity. The illusion of \ili{Latin} internal harmony seems to have obstructed an early recognition of the universality of dialectal variation and perhaps also a more avid interest in language-internal diversity in general. Regional variation in \ili{Latin} was nevertheless gradually recognized in the sixteenth century.\footnote{For a modern linguistic study of regional variation within Latin, see the detailed account of \citet{Adams2007}.} A telling early example is the \ili{Flemish} nobleman Georgius Haloinus’s (ca. 1470–1536/1537) \textit{Restauration of the \ili{Latin} language}, a strong plea for usage and against grammar in learning correct \ili{Latin}; this work was first published in 1533, but Haloinus had already composed it several decades earlier, around 1508. \citet[55]{Haloinus1978} stressed that \ili{Latin}, too, was internally diversified and pointed to the alleged \ili{Paduan} touch to Livy’s speech, his so-called \isi{Patavinity} (\textit{Patauinitas}), to prove this. Livy’s Patavinity became a prototype and leitmotiv in demonstrating the existence of \ili{Latin} dialects.\footnote{See e.g. also \citet[b.viii\textsc{\textsuperscript{v}}]{Castiglione1528}; \citet[*.iii\textsc{\textsuperscript{r}}]{Estienne1582}; \citet[174, 176]{Schottel1663}; \citet[311]{Rice1765}; \citet[: \textsc{lix}]{Mazzarella-farao1779}; \citet[203--204]{Ries1786}. See \citet{VanRooy2018a} for a more extensive discussion of sixteenth and seventeenth-century ideas about Livy’s Patavinity.} Some scholars even posited the existence of several other \ili{Latin} varieties by \isi{analogy} with Patavinity. 
In an eighteenth-century dissertation presented in Copenhagen, reference was made to Vergil’s alleged \ili{Mantuan} dialect, his \is{Mantuanity}“Mantuanity” (\textit{Mantuanitas}; \citealt[22]{Munthe1748}). Scholars went further than simply varying on the Patavinity theme, however. The Dutch scholar and politician Ernst Brinck (1582/1583–1649) even made a list of \ili{Latin} dialects in his manuscript catalogue of linguistic specimens. Brinck referred to “dialects” (\textit{dialecti}) specific to a certain social or gender group – \il{Latin!peasant}peasants or \il{Latin!female}women, for instance – as well as to “dialects” characteristic of a certain locality, including \il{Latin!Praenestine}Praeneste and \il{Latin!Tusculan}Tusculum, noting some particular words for each variety.\footnote{\citet[56\textsc{\textsuperscript{v}}]{Brinck1615}. Cf. also \citet[43]{Stubbe1657}, where a list of Latin dialects is provided, albeit mixed up with \ia{Isidore of Seville}Isidore of Seville’s four-stage periodization of \ili{Latin}.} Once it had been established that \ili{Latin} also must have had its dialects, seven\-teenth-century scholars began to compare the \ili{Latin} dialect context with its ancient Greek counterpart, always resulting in the a priori affirmation that they showed great differences. In his monograph on Livy’s Patavinity, Daniel Georg \citet[146]{Morhof1685} emphasized that the Greek language had greater dialectal variation and license than \ili{Latin} because of the political diversity of ancient Greece, which he opposed to the highly centralized Roman Empire. This did not mean, however, that \ili{Latin} did not have any dialects at all, and \citet[148--149]{Morhof1685} indeed listed several dialects of the language. About a decade later, the Hebraist Louis Thomassin (1619–1695) stressed that \ili{Latin}, in comparison to \ili{Ancient Greek}, “had few or no dialects”, with the exception of “a number of native and \isi{vernacular} tongues of certain cities”. Thomassin attributed this to the Roman desire for unity and simplicity.\footnote{\citet[xix]{Thomassin1697}: “Graeca rursus lingua dialectis etiam statim ab initio luxuriata est. Quattuor quidem ex iis eminent; sed plurium supersunt uestigia. Porro singulae dialecti de iure mutandi uetera nouaque superstruendi uocabula cum suis dicendi modis, inter se quasi certatim contenderunt. Latina uero suae tum unitatis tum simplicitatis tenacior, paucas aut nullas habuit dialectos, si aliquot excipias quarundam ciuitatum patrios uernaculosque sermones”.} It was, however, only in the eighteenth century that \ili{Latin} dialects were described in explicitly negative terms in comparison to the ancient Greek dialects. The German theologian (Johannes) Nicolaus Hertling (1666–1710) contrasted\linebreak Greek dialects with \ili{Latin} varieties in esthetic terms. Greek had various dialects pleasant to the ears, which \ili{Latin} and most other languages lacked, as they only contained \is{corruption}corrupt dialects \citep[73]{Hertling1708}. The English grammarian Joseph Priestley (1733–1804) provided a more neutral and down-to-earth account. \citet[138]{Priestley1762} stressed that, in \ili{Latin}, “dialects are unknown”, since these were not introduced into writings. “The \textit{Patavinity} of Livy is not to be perceived”. Put differently, “the \textit{Romans}, having one seat of power and of arts, allowed of no dialects”.\footnote{\citet[280]{Priestley1762}. See \citet[52]{Amsler1993}. Cf. 
\citet[49]{Galiani1779}; \citet[203--205]{Ries1786} for similar views.} In sum, Priestley did not deny that \ili{Latin} dialects existed, but pointed out that they were no longer knowable, since, unlike the Greek dialects, they had not received written \is{standardization!codification}codification. The diversity of the \ili{Romance} languages that developed out of \ili{Latin} was sometimes compared to the Greek dialects. The sixteenth-century Hellenist and orientalist Angelo Canini even forced the \ili{Romance} tongues into the straitjacket of Greek as well as Oriental diversity. This involved \citet[\textsc{a}.iii\textsc{\textsuperscript{r}}]{Canini1554} interpreting both Greek and \ili{Latin} as linguistic tetrads, the former consisting of \ili{Attic}, \ili{Ionic}, \ili{Doric}, and \ili{Aeolic}, and the latter encompassing \ili{Latin}, \ili{Italian}, \ili{French}, and \ili{Spanish} (cf. also \citealt{Canini1555}: a.3\textsc{\textsuperscript{v}}). Oddly enough, he did not elaborate on the precise relationship of \ili{Latin} to the three \ili{Romance} tongues he mentioned. Together with the \ili{Hebrew} tetrad, consisting of \ili{Hebrew}, \ili{Syriac}, \ili{Arabic}, and \ili{Ethiopian}, the Greek and \ili{Latin} tetrads constituted a linguistic triad, Canini suggested. This makes it clear that Canini’s scheme, into which \ili{Latin} and three \ili{Romance} tongues descending from it were forced in an ahistorical way, was very much numerologically inspired and not based on much linguistic evidence. In a nutshell, \ili{Latin} was regarded as uniform by many scholars throughout the entire early modern period. However, an alternative view emerged in the early sixteenth century, attributing regional variation to \ili{Latin}, a realization which paved the way for the insight that regional variation was a universal phenomenon. In the seventeenth century, some scholars even attempted to list \ili{Latin} dialects despite the scarcity of the evidence available to them. At the same time, they started to intuitively compare \ili{Latin} to Greek variation with a focus on lan\-guage-ex\-ter\-nal, sociopolitical differences. In the eighteenth century, the superiority of Greek over \ili{Latin} dialects was explicitly stressed on account of the literary value of the former. In other words, the main aim of the comparison was dissociation (cf. \sectref{sec:8.1.3} above). Exceptionally, Greek dialectal variation was put forward as a descriptive model for \ili{Romance} diversity (cf. \sectref{sec:8.1.2} above). \section{The Oriental language family and the Greek dialects}\label{sec:8.3} \subsection{The Oriental dialects}\label{sec:8.3.1}\largerpage Early modern scholars compared the ancient Greek dialects very frequently to the Oriental tongues, up to the point that it seems to have become a refrain. Why was this the case? 
A large part of the answer can be found by looking at what the Swiss humanist Theodore Bibliander (1504/1509–1564) had to say about the interrelationship of a number of Oriental languages: \begin{quote} By means of a diligent investigation one knows that the \il{Aramaic}Chaldean, \ili{Assyrian}, \ili{Arabic}, and \ili{Syriac} tongues are so cognate that some take them to be one, which is true if the matter would be understood in terms of all dialects of the Greeks, which are called one Greek language.\footnote{\citet[58]{Bibliander1542}: “diligentique inquisitione cognitum est Chaldaeum, Assyrium, Arabicum, Syriacum sermonem ita finitimos, ut pro uno quidam accipiant, quod uerum est, si, ut omnes Graecorum dialecti una lingua Graeca dicuntur, ea res intelligatur”.} \end{quote} \citet[58--59]{Bibliander1542} proceeded by elaborating on the close connection between these Oriental languages and the \is{primeval language (language of Adam)}primeval \ili{Hebrew} tongue. It is obvious that he employed the example of the Greek context to justify the idea that these Oriental tongues actually constituted one language (see also \citealt{Metcalf2013}: 61). Bibliander used Greek dialectal variation as a touchstone and a descriptive point of reference to analyze and approach Oriental diversity, a method omnipresent in early modern descriptions of this \isi{language family}.\footnote{\ili{Semitic} variation was often also explained by referring to one’s native or another more familiar linguistic context. See e.g. \citet[41]{Purchas1613}; \citet[197]{Kircher1679}; \citet[b.1\textsc{\textsuperscript{v}}]{Le1696}; \citet[\textsc{i.}230, 4th sequence of pagination]{Chambers1728}; \citet[57--58]{Kals1752}.} Consider, for instance, how Bibliander’s pupil Conrad Gessner described \ili{Aramaic} and its relationship to \ili{Hebrew}: \begin{quote} Today, the more erudite men use the \il{Aramaic}Chaldean language in Egypt and Ethiopia, as far as I hear. It is close to \ili{Hebrew} and, perhaps, does not differ much more from it than \ili{Doric} from the common Greek.\footnote{\citet[15\textsc{\textsuperscript{r}}]{Gessner1555}: “Chaldaica lingua hodie eruditiores in Aegypto et Aethiopia utuntur, ut audio. Hebraicae confinis est, nec forte multo amplius differt quam Dorica a Graeca communi”. See \citet[43]{Peters1970}. Cf. e.g. also \citet[325]{Rocca1591}, silently adopting Gessner’s phrase; \citet[459]{Saumaise1643a}; \citet[88]{Bagnati1732}; \citet[24]{Wesley1736}; \citet[22]{Eichhorn1780}.} \end{quote} The comparability of the Greek and Oriental contexts was especially prominent in the work of the eighteenth-century Dutch orientalist Albert \citeauthor{Schultens1739}, who held that the four Oriental tongues \ili{Hebrew}, \ili{Aramaic}, \ili{Syriac}, and \ili{Arabic} derived from a now lost \is{primeval language (language of Adam)}primeval tongue just like the four Greek dialects descended from a common Greek, “\ili{Pelasgian}” mother language (\citeyear{Schultens1739}: 234–235). \citet[\textsc{xcvi}]{Schultens1748} also believed that \ili{Attic} and \ili{Hebrew} were similar because of their tendency toward contractions, whereas \ili{Ionic} and \ili{Arabic} shared the property of being conservative varieties (see \citealt{Eskhult2015}: 85). Other scholars likewise paired a Greek dialect with an Oriental tongue. 
Like Schultens, some perceived similarities between \ili{Attic} and \ili{Hebrew}, whereas others connected \ili{Doric} to \ili{Syriac} because of their alleged \isi{broadness}.\footnote{See \citet[425--432]{Lakemacher1730} for the \ili{Attic}–\ili{Hebrew} comparison. For \ili{Doric} and \ili{Syriac} \isi{broadness}, see Chapter 5, \sectref{sec:5.7}.} Comparing Greek to Oriental variation is truly a topos throughout Schultens’s work, in which Greek diversity always served as a point of reference for understanding Oriental variation.\footnote{See e.g. \citet[490]{Schultens1769}, \citet[4]{Schultens1732}, \citet[5]{Schultens1737}, \citet[19--21]{Schultens1738a}, \citet[106--107, stressing that the Oriental and the \ili{Germanic} contexts were less comparable]{Schultens1738b}; \citet[187]{Schultens1739}, \citet[\textsc{xcvi}]{Schultens1748}; Schultens (ca. 1748–1750) in \citet[§\textsc{xxvii}]{Eskhult_albert_nodate}. On this topos in Schultens’s work, see also \citet[105]{Fuck1955}; \citet[707]{Covington1979}; Eskhult (fc.). Cf. in Schultens’s tracks \ia{Polier de Bottens, Antoine-Noé@Polier de Bottens, Antoine-Noé}\citet[5]{Polier1739}; \citet{Groddeck1747}.} This procedure occurred in the work of other scholars as well, whether or not in combination with a reference to \isi{vernacular} variation (see e.g. \citealt{Bochart1646}: 778; \citealt{Blount1680}: 71–72). Some scholars even claimed that Greek dialects differed more from each other than the Oriental tongues did, thus dissociating both linguistic contexts (cf. \sectref{sec:8.1.3} above). Angelo \citet[34]{Canini1554} already did so when discussing verb conjugations in his 1554 comparative grammar of a number of Oriental tongues (see \citealt{Contini1994}: 50; \citealt{Kessler-mesguich2013}: 211). The idea was expressed more clearly still by the orientalist Christian Ravis (Raue/Ravius; 1613–1677).\footnote{\citet[*.2\textsc{\textsuperscript{r}}]{Ravis1646}. See e.g. also \citet[51--52]{Hunt1739}; \citet[\textsc{xxvi}]{Groddeck1747}.} \citet[48]{Ravis1650} also emphasized that, although there were separate chairs at universities for each Oriental language but not for the Greek dialects, this institutional fact should not lead to the conclusion that \ili{Hebrew}, \ili{Syriac}, \ili{Arabic}, and so on were truly “divers tongues”. In fact, just like the Greek language, they were “only one”. The practice of comparing Oriental to Greek diversity was criticized by Johann Heinrich Hottinger (1620–1667), who explicitly reacted against his colleague Christian Ravis’s views on the matter. \citeauthor{Hottinger1661}’s two main points were that \ili{Hebrew} was not an Oriental dialect, but the \is{primeval language (language of Adam)}primeval language, and that the differences among the Oriental tongues were much greater than those among the Greek dialects (\citeyear[a.3\textsc{\textsuperscript{v}}–a.4\textsc{\textsuperscript{r}}]{Hottinger1661}). The Dutch orientalist Sebald Rau (Sebaldus Ravius; ca. 1725–1818) adopted a similar perspective. \citet[20--21]{Rau1770} argued that the Greek dialects were spoken by one nation, whereas the “Oriental dialects” (\textit{dialecti Orientales}) were current among different nations, living in various climates and having diverging ways of living, customs, and rites. This resulted in greater linguistic differences, he argued. In rare instances, the Oriental context served as a reference point to understand developments in the history of the Greek language (cf. \sectref{sec:8.1.1} above).
A late seventeenth-century Hellenist active in Leipzig used the alleged decay and \isi{dialectal diversification} of the \ili{Hebrew} language during the Babylonian captivity in the sixth century \textsc{bc} to clarify the decline of the Greek language (\citealt{Eling1691}: 318–319). In a sixteenth-century handbook on the Greek literary dialects, the Oriental context was cited as an additional example, next to the grammarian’s native one, to explain differences in elegance among the ancient Greek varieties (\citealt{Walper1589}: 61–62). \subsection{Hebrew dialects} As to variation within \ili{Hebrew}, identified by many authors as the \is{primeval language (language of Adam)}primeval language spoken by Adam and Eve and confused at the \is{Babel, Tower of}Tower of Babel, early modern opinions differed greatly.\footnote{In the present section I discuss views on variation within \ili{Hebrew}, thus excluding cases in which \ili{Semitic} tongues such as \ili{Arabic} were dubbed “dialects” of \ili{Hebrew} (see e.g. \citealt{Bochart1646}: 56; \citealt{Martin1737}: 134–135).} Some scholars were eager to claim that \ili{Hebrew} did not have any dialects. The Leipzig theologian Bartholomaeus Mayer (1598–1631) did so while citing \ia{Valla, Lorenzo}Lorenzo Valla’s comparison of \ili{Latin} and \ili{Ancient Greek} (\citeyear[10]{Mayer1629}). \citet[\textsc{b.3}\textsc{\textsuperscript{v}}]{Junius1579} took a more moderate stance, as he contrasted the immense variability of Greek to the relatively uniform \ili{Hebrew} tongue, claiming that the latter did not have as many dialects as \ili{Ancient Greek} (cf. Chapter \ref{chap:7}). In his \textit{Oration on the utility and preeminence of the \ili{Arabic} language}, held at Oxford in 1637 and published there in 1639, the English orientalist Thomas Greaves (1612–1676) also attributed dialects to both \ili{Hebrew} and \ili{Ancient Greek}, while praising \ili{Arabic} for lacking them.\footnote{See \citet[19--20]{Greaves1639}, who inspired \citet[60]{Leigh1656} and \citet[73]{Blount1680}.} Scholars often found it sufficient to prove regional variation within \ili{Hebrew} by simply referring to the \isi{shibboleth} incident in the Old Testament at Judges 12.5–6 or to the supposed Galilean character of St Peter’s speech, alluded to in the \isi{New Testament} at Matthew 26.73.\footnote{See e.g. \citet[6]{Bovelles1533}; \citet[\textsc{b.3}\textsc{\textsuperscript{v}}]{Bachmann1625}; \citet[102]{Weemes1632}; \citet[2]{Wyss1650}; \citet[18]{Walton1657}; \citet[180]{Webb1669}; \citet[6]{Kiesling1712}; Salvini in \citet[103]{Muratori1724}; \citet[30]{Hauptmann1751}; \citet[13--14]{Hof1772}. For the relevant biblical passages, see also \citet[199--200]{VanRooy2018b}.} Gradually, however, philologists focusing on the Bible started to recognize that St Peter was more likely to have spoken a variety of \ili{Aramaic} or – in early modern terms – of (Chaldeo-)\ili{Syriac} (e.g. \citealt{Pfeiffer1663}), whereas others denied that the \isi{shibboleth} incident was evidence of variation within \ili{Hebrew} (e.g. \citealt{Mayer1629}: 10–11). Sometimes, they developed historically nuanced answers to the question of whether \ili{Hebrew} was dialectally diversified. In a dissertation entirely devoted to the question of St Peter’s speech and presented in Wittenberg, a periodization of \ili{Hebrew} was designed in order to show the development of the language.
The authors of the dissertation argued, among other things, that \ili{Hebrew} was originally a unitary language like \ili{Latin}, but underwent \isi{dialectal diversification} after the Babylonian captivity (\citealt{Pfeiffer1663}: \textsc{a.4}\textsc{\textsuperscript{v}}). In these more focused investigations into the question of whether \ili{Hebrew} had dialects, the Greek dialects occupied a marginal position at best. \subsection{Summary} Briefly put, the Greek dialects were frequently used as a point of reference to map out the close genealogical relationship among the Oriental tongues, which aroused great philological interest in the early modern era. Scholars were struck by the close kinship between these languages and tried to find an adequate way to express it. Since most orientalists were also trained as Hellenists, many of them thought of the Greek dialects as a revealing parallel. These, too, were closely cognate, despite their many formal differences. What is more, the Greek dialects had received written \is{standardization!codification}codification, just like the Oriental tongues. These two similarities made ancient Greek diversity a helpful reference point for early modern orientalists. Some of them went a step further and claimed that the Oriental tongues were even more alike than the Greek dialects. Such an exaggerated conception was usually rejected by orientalists in the seventeenth and eighteenth century, who preferred to maintain the comparability of both contexts. This stance culminated in the work of the Dutch philologist Albert Schultens, who formulated the Greek–Oriental simile in nearly every one of his publications. Finally, as with \ili{Latin}, scholars struggled to assert \ili{Hebrew} uniformity, even though from the sixteenth century onward there were voices admitting that \ili{Hebrew}, too, that sacred tongue often identified with the \is{primeval language (language of Adam)}language of Adam, had its dialects just like Greek, \ili{Latin}, and the vernaculars. \section{Conclusion: Between exemplarity and particularity}\label{sec:8.4} In the present chapter, I have attempted to demonstrate that early modern scholars compared and contrasted the linguistic diversity of \ili{Ancient Greek} to dialectal or dialect-like variation in a wide range of other languages and language families. This occurred most frequently with reference to Oriental diversity and dialectal variation in Western European vernaculars, especially \ili{Italian}, \ili{German}, \ili{French}, and \ili{English}. Modern scholars have often emphasized the exemplarity of the Greek context to grasp or ennoble \isi{vernacular} diversity. For example, Peter \citet[35--36]{Burke2004} states that, for the early modern awareness of dialectal variation, “the model situation was that of Ancient Greece with its \ili{Ionic}, \ili{Doric}, \ili{Attic}, and other varieties of speech”.\footnote{Cf. 
\citet[923]{Haugen1966}; \citet[216]{Giard1992}: “la question des dialectes portée au passif des vernaculaires est considérée autrement dès lors qu’on remarque la signification et l’usage positifs qu’ils avaient en grec”.} In \is{standardization!selection}selecting the variety to be adopted as the literary \is{standard (language)|(}standard in the so-called \is{questione della lingua@\textit{questione della lingua}}language questions during the Renaissance, the Greek context indeed seems to have functioned as a paradigmatic touchstone and was taken as a noble and close parallel to \isi{vernacular} dialectal diversity (\citealt{Alinei1984}; \citealt{Trovato1984}; \citealt{Trapp1990}). Moreover, the Greek example with its allegedly dialectally mixed \ili{Koine} suggested that \isi{vernacular} dialects, too, could contribute to the literary standard language under construction. However, as I have endeavored to demonstrate in this chapter, this is only part of the picture, albeit a very important one. The situation was very different in early modern manuals for \ili{Ancient Greek}. As a matter of fact, there, the Greek context did not serve as a model at all. Instead, the grammarians needed to explain it by referring to the native \isi{vernacular} context of their intended readership. In other words, ancient Greek diversity constituted a phenomenon that very often required elucidation. This was especially common in works published in \ili{German}-speaking areas, where Greek studies flourished throughout the entire early modern period and \isi{vernacular} dialectal diversity was not easily transcended by an established standard language. In order to maintain the comparability of Greek with other dialect contexts, early modern scholars could tone down some of the differences between them so as to emphasize their similarities. To this end, they projected certain characteristics of one context onto the other, a process of which they were not always fully aware (cf. \citealt{Alinei1980}: 20). Even though most early modern scholars seem to have assumed that ancient Greek dialectal diversity was highly similar to variation within other languages, the point of comparison being the close kinship among the dialects, there were nonetheless also a considerable number of authors who emphasized the particular place of ancient Greek diversity, certainly during later stages of the early modern period and particularly in eighteenth-century France. In the large majority of cases, the \isi{incomparability} of \ili{Ancient Greek} with another dialect context was mainly motivated by language-external circumstances. This included, most importantly, the political diversity of ancient Greece and the literary and \is{standardization!codification}codified status of the canonical Greek dialects. Several scholars contrasted this to cases of political centralization, as in France, or to the existence of a sole written \is{standard (language)|)}standard, as in the case of \ili{German}. Authors emphasizing comparability likewise concentrated on language-external circumstances, but less exclusively so. 
The relative lack of reference to specific linguistic features in this discussion may seem remarkable at first sight, but this should be seen in connection with the main goal of the early modern discourse on comparability: this consisted in making a statement – either explanatory, justificatory, descriptive, or dissociating – about the precise status of a specific dialect situation in its broader sociolinguistic and cultural context rather than about the actual linguistic forms of the dialects. A scholar’s emphasis on comparability or lack thereof depended to a large extent on his discursive intentions as well as his underlying language ideology. For instance, when ancient Greek diversity was explained in a grammar, comparability was usually stressed, since the grammarian hoped to help his readers understand the status of the Greek dialects by referring to a similar and more familiar context. Early modern literary critics, however, tended to deny comparability, as they emphasized the literary insignificance of \isi{vernacular} dialects, which stood in glaring contrast to the high esteem of the ancient Greek dialects. In the minds of certain scholars, this lack of comparability made it impossible to apply the term ‘dialect’ to any linguistic context other than \ili{Ancient Greek}. Put differently, early modern scholars vacillated between exemplarity and particularity. On the one hand, the ancient Greek linguistic situation was used as a model to approach variation within other languages or language families or turned out to be the situation in need of clarification by means of a more familiar \isi{vernacular} example. On the other hand, scholars could stress, whenever it suited them, the extreme idiosyncrasy of the Greek dialects and the exceptional historical coincidence that these speech forms had been eagerly used as literary media. The level of competence in \ili{Ancient Greek} was also of relevance for the discourse on comparability. It seems that the better a scholar’s competence was, the more detailed their comparison tended to be and the more likely it was that their ideas were picked up by later scholars, as in the cases of Henri Estienne, Charles Rollin, and Albert Schultens. Inspired by their thorough knowledge of Greek – Estienne even claimed it to be his second language before \ili{Latin} – they put forward various ideas on the (in)comparability of Greek with \isi{vernacular} or Oriental diversity, all with considerable influence. As a final point, I want to add that not all comparisons of different dialect contexts involved the ancient Greek dialects, even though this Greek-free approach occurred with a noticeably lower frequency. The relative rarity of such instances demonstrates the tremendous importance of ancient Greek diversity in triggering early modern interest in dialectal variation as a general phenomenon affecting every language. It also clearly indicates that the widespread comparison of dialect contexts was largely an early modern development, catalyzed by the Renaissance revival of Greek studies, all the more since the procedure was so exceptional in the Middle Ages.
In sum, the well-chosen words of the Austrian Germanist Max Hermann Jellinek (1868–1938), which pertained specifically to German grammarians, may well be generalized: early modern scholars “cannot speak of dialects and written language without calling in \ili{Attic}, \ili{Ionic}, \ili{Doric}, \ili{Aeolic}, and the \ili{Koine}”.\footnote{\citet[21]{Jellinek1913}: “Diese Männer können nicht von Dialekten und \il{German!Schriftsprache@\textit{Schriftsprache}}Schriftsprache reden, ohne das Attische, Jonische, Dorische, Aeolische und die κοινή aufmarschieren zu lassen”.}
{ "alphanum_fraction": 0.7943392073, "avg_line_length": 479.2339449541, "ext": "tex", "hexsha": "84c09bd235f35edcc243ba50209866c35531a512", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/253", "max_forks_repo_path": "chapters/08.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/253", "max_issues_repo_path": "chapters/08.tex", "max_line_length": 4238, "max_stars_count": null, "max_stars_repo_head_hexsha": "d5ec08842ec0a62ed43629432bcf24d521f32aac", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/253", "max_stars_repo_path": "chapters/08.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 27240, "size": 104473 }
\documentclass[]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \PassOptionsToPackage{hyphens}{url} % url is loaded by hyperref \usepackage[unicode=true]{hyperref} \hypersetup{ pdftitle={Air Quality in Southampton (UK)}, pdfauthor={Ben Anderson ([email protected] @dataknut)}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \usepackage{longtable,booktabs} % Fix footnotes in tables (requires footnote package) 
\IfFileExists{footnote.sty}{\usepackage{footnote}\makesavenoteenv{long table}}{} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi % set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \usepackage{etoolbox} \makeatletter \providecommand{\subtitle}[1]{% add subtitle to \maketitle \apptocmd{\@title}{\par {\large #1 \par}}{}{} } \makeatother \title{Air Quality in Southampton (UK)} \providecommand{\subtitle}[1]{} \subtitle{Exploring the effect of UK covid 19 lockdown on air quality: Summary for BBC South} \author{Ben Anderson (\href{mailto:[email protected]}{\nolinkurl{[email protected]}} \texttt{@dataknut})} \date{Last run at: 2020-06-18 21:52:53 (Europe/London)} \begin{document} \maketitle { \setcounter{tocdepth}{2} \tableofcontents } \section{Introduction}\label{introduction} This report describes exploratory analysis of changes in air quality in the City of Southampton, UK in Spring 2020. \begin{Shaded} \begin{Highlighting}[] \NormalTok{lastHA <-}\StringTok{ }\KeywordTok{max}\NormalTok{(fixedDT[source }\OperatorTok{==}\StringTok{ "hantsAir"}\NormalTok{]}\OperatorTok{$}\NormalTok{dateTimeUTC)} \NormalTok{diffHA <-}\StringTok{ }\NormalTok{lubridate}\OperatorTok{::}\KeywordTok{now}\NormalTok{() }\OperatorTok{-}\StringTok{ }\NormalTok{lastHA} \NormalTok{lastAURN <-}\StringTok{ }\KeywordTok{max}\NormalTok{(fixedDT[source }\OperatorTok{==}\StringTok{ "AURN"}\NormalTok{]}\OperatorTok{$}\NormalTok{dateTimeUTC)} \NormalTok{diffAURN <-}\StringTok{ }\NormalTok{lubridate}\OperatorTok{::}\KeywordTok{now}\NormalTok{() }\OperatorTok{-}\StringTok{ }\NormalTok{lastAURN} \end{Highlighting} \end{Shaded} Data for Southampton downloaded from : \begin{itemize} \tightlist \item \url{http://www.hantsair.org.uk/hampshire/asp/Bulletin.asp?la=Southampton} (see also \url{https://www.southampton.gov.uk/environmental-issues/pollution/air-quality/}); \item \url{https://uk-air.defra.gov.uk/networks/network-info?view=aurn} \end{itemize} Southampton City Council collects various forms of air quality data at the sites shown in Table @ref(tab:showSites). The data is available in raw form from \url{http://www.hantsair.org.uk/hampshire/asp/Bulletin.asp?la=Southampton\&bulletin=daily\&site=SH5}. Some of these sites feed data to \href{https://uk-air.defra.gov.uk/networks/network-info?view=aurn}{AURN}. 
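For readers who want to work with the AURN portion of this data directly, the openair R package provides an importer. The short sketch below is illustrative only and is not the code used to build this report; the site codes shown (\texttt{SOUT} for Southampton Centre and \texttt{SA33} for the A33 roadside site) are assumptions that should be checked against the AURN site listing linked above.

\begin{verbatim}
# Illustrative sketch only - not the pipeline used to build fixedDT.
# Site codes ("SOUT", "SA33") are assumptions; check the AURN listing.
library(openair)
library(data.table)

aurnDT <- as.data.table(
  openair::importAURN(site = c("SOUT", "SA33"), year = 2017:2020)
)

# importAURN returns one row per hour with one column per pollutant;
# melt to the long (site, pollutant, value) form used in this report.
longDT <- melt(
  aurnDT,
  id.vars = c("site", "code", "date"),
  variable.name = "pollutant",
  value.name = "value"
)
\end{verbatim}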
The data that goes via AURN is \href{https://uk-air.defra.gov.uk/assets/documents/Data_Validation_and_Ratification_Process_Apr_2017.pdf}{ratified} to check for outliers and instrument/measurement error. AURN data less than six months old has not undergone this process. AURN data is (c) Crown 2020 copyright Defra and available for re-use via \url{https://uk-air.defra.gov.uk}, licensed under the \href{http://www.nationalarchives.gov.uk/doc/open-government-licence/version/2/}{Open Government Licence} (OGL).

\section{Data}\label{data}

In this report we use data from the following sources:

\begin{itemize}
\tightlist
\item \url{http://www.hantsair.org.uk/hampshire/asp/Bulletin.asp?la=Southampton} last updated at 2020-06-08 10:00:00;
\item \url{https://uk-air.defra.gov.uk/networks/network-info?view=aurn} last updated at 2020-06-07 23:00:00.
\end{itemize}

Table @ref(tab:showSites) shows the available sites and sources. Note that some of the non-AURN sites appear to have stopped updating recently. For a detailed analysis of recent missing data see Section @ref(annexMissing).

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\NormalTok{fixedDT[}\OperatorTok{!}\KeywordTok{is.na}\NormalTok{(value), .(}\DataTypeTok{nObs =}\NormalTok{ .N, }\DataTypeTok{firstData =} \KeywordTok{min}\NormalTok{(dateTimeUTC), }\DataTypeTok{latestData =} \KeywordTok{max}\NormalTok{(dateTimeUTC), }\DataTypeTok{nMeasures =} \KeywordTok{uniqueN}\NormalTok{(pollutant)), } \NormalTok{ keyby =}\StringTok{ }\NormalTok{.(site, source)]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(t, }\DataTypeTok{caption =} \StringTok{"Sites, data source and number of valid observations. note that measures includes wind speed and direction in the AURN sourced data"}\NormalTok{, } \DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llrllr@{}}
\caption{(\#tab:showSites)Sites, data source and number of valid observations. note that measures includes wind speed and direction in the AURN sourced data}\tabularnewline
\toprule
site & source & nObs & firstData & latestData & nMeasures\tabularnewline
\midrule
\endhead
Southampton - A33 Roadside (near docks, AURN site) & hantsAir & 85918 & 2017-01-01 00:00:00 & 2020-06-08 10:00:00 & 3\tabularnewline
Southampton - Background (near city centre, AURN site) & hantsAir & 162148 & 2017-01-25 11:00:00 & 2020-06-08 10:00:00 & 6\tabularnewline
Southampton - Onslow Road (near RSH) & hantsAir & 82232 & 2017-01-01 00:00:00 & 2020-04-15 07:00:00 & 3\tabularnewline
Southampton - Victoria Road (Woolston) & hantsAir & 60078 & 2017-01-01 00:00:00 & 2020-04-01 06:00:00 & 3\tabularnewline
Southampton A33 (via AURN) & AURN & 220010 & 2017-01-01 00:00:00 & 2020-06-07 23:00:00 & 8\tabularnewline
Southampton Centre (via AURN) & AURN & 343216 & 2017-01-01 00:00:00 & 2020-06-07 23:00:00 & 13\tabularnewline
\bottomrule
\end{longtable}

Table @ref(tab:showPollutants) shows the pollutants recorded at each site.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{with}\NormalTok{(fixedDT[}\OperatorTok{!}\KeywordTok{is.na}\NormalTok{(value)], }\KeywordTok{table}\NormalTok{(pollutant, site))}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(t, }\DataTypeTok{caption =} \StringTok{"Sites, pollutant and number of valid observations"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrrrrrr@{}}
\caption{(\#tab:showPollutants)Sites, pollutant and number of valid observations}\tabularnewline
\toprule
pollutant & Southampton - A33 Roadside (near docks, AURN site) & Southampton - Background (near city centre, AURN site) & Southampton - Onslow Road (near RSH) & Southampton - Victoria Road (Woolston) & Southampton A33 (via AURN) & Southampton Centre (via AURN)\tabularnewline
\midrule
\endhead
no & 29055 & 28702 & 27412 & 20026 & 29641 & 28657\tabularnewline
no2 & 29020 & 28702 & 27408 & 20026 & 29640 & 28657\tabularnewline
nox & 0 & 22877 & 27412 & 20026 & 29640 & 28658\tabularnewline
nv10 & 0 & 0 & 0 & 0 & 23208 & 21105\tabularnewline
nv2.5 & 0 & 0 & 0 & 0 & 0 & 22627\tabularnewline
o3 & 0 & 0 & 0 & 0 & 0 & 28645\tabularnewline
pm10 & 27843 & 26124 & 0 & 0 & 25681 & 26180\tabularnewline
pm2.5 & 0 & 27632 & 0 & 0 & 0 & 27702\tabularnewline
so2 & 0 & 0 & 0 & 0 & 0 & 28261\tabularnewline
sp2 & 0 & 28111 & 0 & 0 & 0 & 0\tabularnewline
v10 & 0 & 0 & 0 & 0 & 23208 & 21105\tabularnewline
v2.5 & 0 & 0 & 0 & 0 & 0 & 22627\tabularnewline
wd & 0 & 0 & 0 & 0 & 29496 & 29496\tabularnewline
ws & 0 & 0 & 0 & 0 & 29496 & 29496\tabularnewline
\bottomrule
\end{longtable}

To avoid confusion and `double counting', in the remainder of the analysis we replace the Southampton AURN site data with the data for the same site sourced via AURN, as shown in Table @ref(tab:selectFinalSites). This has the disadvantage that the data is slightly less up to date (see Table @ref(tab:showSites)). As will be explained below, in the comparative analysis we will use only the AURN data to avoid missing data issues.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fixedDT <-}\StringTok{ }\NormalTok{fixedDT[}\OperatorTok{!}\NormalTok{(site }\OperatorTok{%like%}\StringTok{ "AURN site"}\NormalTok{)]}
\NormalTok{t <-}\StringTok{ }\NormalTok{fixedDT[}\OperatorTok{!}\KeywordTok{is.na}\NormalTok{(value), .(}\DataTypeTok{nObs =}\NormalTok{ .N, }\DataTypeTok{nPollutants =} \KeywordTok{uniqueN}\NormalTok{(pollutant), }\DataTypeTok{lastDate =} \KeywordTok{max}\NormalTok{(dateTimeUTC)), keyby =}\StringTok{ }\NormalTok{.(site, } \NormalTok{ source)]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(t, }\DataTypeTok{caption =} \StringTok{"Sites, data source and number of valid observations"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llrrl@{}}
\caption{(\#tab:selectFinalSites)Sites, data source and number of valid observations}\tabularnewline
\toprule
site & source & nObs & nPollutants & lastDate\tabularnewline
\midrule
\endhead
Southampton - Onslow Road (near RSH) & hantsAir & 82232 & 3 & 2020-04-15 07:00:00\tabularnewline
Southampton - Victoria Road (Woolston) & hantsAir & 60078 & 3 & 2020-04-01 06:00:00\tabularnewline
Southampton A33 (via AURN) & AURN & 220010 & 8 & 2020-06-07 23:00:00\tabularnewline
Southampton Centre (via AURN) & AURN & 343216 & 13 & 2020-06-07 23:00:00\tabularnewline
\bottomrule
\end{longtable}

We use this data to compare:

\begin{itemize}
\tightlist
\item pre and during-lockdown air quality measures
\item air quality measures during lockdown 2020 with average measures for the same time periods in the preceding 3 years (2017-2019)
\end{itemize}

It should be noted that air pollution levels in any given period of time are highly dependent on the prevailing meteorological conditions. As a result it can be very difficult to disentangle the effects of a reduction in source strength from the effects of local surface conditions.
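As a rough illustration of the comparison set out above, the sketch below shows one way the lockdown period could be flagged and compared against the same calendar window in 2017-2019. It is not the code used for the analysis in this report: the object and column names follow \texttt{fixedDT} as used above, and the start date (2020-03-23, the UK lockdown announcement) is an assumption made for illustration.

\begin{verbatim}
# Illustrative sketch only. Assumes the fixedDT columns used above
# (dateTimeUTC, site, pollutant, value); the lockdown start date is an
# assumption based on the UK national lockdown announced on 2020-03-23.
library(data.table)
library(lubridate)

compareLockdown <- function(dt,
                            start = as.Date("2020-03-23"),
                            end = as.Date("2020-05-31")) {
  dt <- copy(dt)
  dt[, date := as.Date(dateTimeUTC)]
  # keep the same calendar window (month-day) in every year
  dt[, window := month(date) * 100 + mday(date)]
  dt <- dt[window >= month(start) * 100 + mday(start) &
             window <= month(end) * 100 + mday(end)]
  dt[, period := ifelse(year(date) == 2020,
                        "lockdown 2020", "2017-2019 baseline")]
  # mean level per site, pollutant and period
  dt[!is.na(value), .(meanValue = mean(value)),
     keyby = .(site, pollutant, period)]
}

# e.g. compareLockdown(fixedDT[pollutant == "no2"])
\end{verbatim}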
This dependence on meteorology is abundantly clear in the analysis which follows, given that the Easter weekend was forecast to have \href{https://airqualitynews.com/2020/04/07/people-at-risk-from-coronavirus-warned-with-very-high-air-pollution-episode-predicted-for-uk/}{very high import of pollution from Europe} and that the wind direction and speed were highly variable across the lockdown period (see Figure @ref(fig:recentWind)). Further, air quality is not wholly driven by sources that lockdown might suppress, and indeed that suppression may lead to rebound effects. For example, we might expect more emissions due to increased domestic heating during cooler lockdown periods.

As a result the analysis presented below must be considered a preliminary `before meteorological adjustment' and `before controlling for other sources' analysis of the effect of lockdown on air quality in Southampton. For much more detailed analysis see a longer and very messy \href{https://dataknut.github.io/airQual/sccAirQualExplore_Exploring\%20the\%20SSC\%20and\%20AURN\%20data.html}{data report}.

\section{WHO air quality thresholds}\label{who-air-quality-thresholds}

A number of the following plots show the relevant WHO air quality thresholds and limits. These are taken from:

\begin{itemize}
\tightlist
\item \url{https://www.who.int/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health}
\end{itemize}

\section{Nitrogen Dioxide (no2)}\label{nitrogen-dioxide-no2}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "Nitrogen Dioxide (ug/m3)"}
\NormalTok{no2dt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "no2"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenNO2) shows the NO2 trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{no2dt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaNO2 <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(no2dt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"NO2"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim} \begin{figure} \centering \includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenNO2-1.pdf} \caption{(\#fig:theilSenNO2)Theil-Sen trend (NO2)} \end{figure} \begin{Shaded} \begin{Highlighting}[] \NormalTok{p <-}\StringTok{ }\NormalTok{oaNO2}\OperatorTok{$}\NormalTok{plot} \NormalTok{getModelTrendTable <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(oa, fname) \{} \CommentTok{# oa is an openAir object created by theilSen calculates the % below trend using the theil sen slope line} \CommentTok{# parameters oa <- oaGWh} \NormalTok{ oaData <-}\StringTok{ }\KeywordTok{as.data.table}\NormalTok{(oa}\OperatorTok{$}\NormalTok{data}\OperatorTok{$}\NormalTok{main.data)} \NormalTok{ rDT <-}\StringTok{ }\NormalTok{oaData[, .(date, conc, a, b, slope)]} \CommentTok{# https://github.com/davidcarslaw/openair/blob/master/R/TheilSen.R#L192 and} \CommentTok{# https://github.com/davidcarslaw/openair/blob/master/R/TheilSen.R#L625} \NormalTok{ rDT[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(x, }\KeywordTok{time_length}\NormalTok{(date }\OperatorTok{-}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"1970-01-01"}\NormalTok{), }\DataTypeTok{unit =} \StringTok{"days"}\NormalTok{))] }\CommentTok{# n days since x = 0} \NormalTok{ rDT[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(expectedVal, a }\OperatorTok{+}\StringTok{ }\NormalTok{(b }\OperatorTok{*}\StringTok{ }\NormalTok{x))] }\CommentTok{# b = slope / 365} \CommentTok{# checks} \NormalTok{ p <-}\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{ggplot}\NormalTok{(rDT, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ date)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom_line}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y =}\NormalTok{ conc)) }\OperatorTok{+}\StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{y =} \StringTok{"Value"}\NormalTok{, }\DataTypeTok{caption =}\NormalTok{ fname)} \NormalTok{ p <-}\StringTok{ }\NormalTok{p }\OperatorTok{+}\StringTok{ }\KeywordTok{geom_line}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y =}\NormalTok{ expectedVal), }\DataTypeTok{linetype =} \StringTok{"dashed"}\NormalTok{)} \NormalTok{ ggplot2}\OperatorTok{::}\KeywordTok{ggsave}\NormalTok{(here}\OperatorTok{::}\KeywordTok{here}\NormalTok{(}\StringTok{"docs"}\NormalTok{, }\StringTok{"plots"}\NormalTok{, }\KeywordTok{paste0}\NormalTok{(}\StringTok{"SSC_trendModelTestPlot_"}\NormalTok{, fname, }\StringTok{".png"}\NormalTok{)))} \NormalTok{ rDT[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(diff, conc }\OperatorTok{-}\StringTok{ }\NormalTok{expectedVal)]} \NormalTok{ rDT[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(pcDiff, (diff}\OperatorTok{/}\NormalTok{expectedVal) }\OperatorTok{*}\StringTok{ }\DecValTok{100}\NormalTok{)]} \NormalTok{ t <-}\StringTok{ }\NormalTok{rDT[, .(date, conc, a, b, slope, expectedVal, diff, pcDiff)]} \KeywordTok{return}\NormalTok{(t)} \NormalTok{\}} \NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaNO2, }\DataTypeTok{fname =} \StringTok{"NO2"}\NormalTok{)} \NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} 
\KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenNO2)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & -3.93 & -12.91\tabularnewline
Feb 2020 & -4.13 & -13.70\tabularnewline
Mar 2020 & -6.11 & -20.49\tabularnewline
Apr 2020 & -3.17 & -10.75\tabularnewline
May 2020 & -4.65 & -15.94\tabularnewline
\bottomrule
\end{longtable}

\section{Oxides of Nitrogen (nox)}\label{oxides-of-nitrogen-nox}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "Oxides of Nitrogen (ug/m3)"}
\NormalTok{noxdt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "nox"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenNOx) shows the NOx trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{noxdt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaNOx <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(noxdt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"NOx"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim}

\begin{figure}
\centering
\includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenNOx-1.pdf}
\caption{(\#fig:theilSenNOx)Theil-Sen trend (NOx)}
\end{figure}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{p <-}\StringTok{ }\NormalTok{oaNOx}\OperatorTok{$}\NormalTok{plot}
\NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaNOx, }\DataTypeTok{fname =} \StringTok{"NOx"}\NormalTok{)}
\NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} \KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenNOx)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & -5.11 & -9.30\tabularnewline
Feb 2020 & -6.81 & -12.55\tabularnewline
Mar 2020 & -9.41 & -17.54\tabularnewline
Apr 2020 & -0.35 & -0.66\tabularnewline
May 2020 & -5.28 & -10.08\tabularnewline
\bottomrule
\end{longtable}

\section{Sulphur Dioxide}\label{sulphour-dioxide}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "Sulphur Dioxide (ug/m3)"}
\NormalTok{so2dt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "so2"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenSO2) shows the SO2 trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{so2dt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaSO2 <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(so2dt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"SO2"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim}

\begin{figure}
\centering
\includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenSO2-1.pdf}
\caption{(\#fig:theilSenSO2)Theil-Sen trend (SO2)}
\end{figure}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaSO2, }\DataTypeTok{fname =} \StringTok{"SO2"}\NormalTok{)}
\NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} \KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenSO2)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & -5.11 & -9.30\tabularnewline
Feb 2020 & -6.81 & -12.55\tabularnewline
Mar 2020 & -9.41 & -17.54\tabularnewline
Apr 2020 & -0.35 & -0.66\tabularnewline
May 2020 & -5.28 & -10.08\tabularnewline
\bottomrule
\end{longtable}

\section{Ozone}\label{ozone}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "Ozone (ug/m3)"}
\NormalTok{o3dt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "o3"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenO3) shows the O3 trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{o3dt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaO3 <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(o3dt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"O3"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim}

\begin{figure}
\centering
\includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenO3-1.pdf}
\caption{(\#fig:theilSenO3)Theil-Sen trend (O3)}
\end{figure}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaO3, }\DataTypeTok{fname =} \StringTok{"O3"}\NormalTok{)}
\NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} \KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenO3)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & 4.30 & 9.31\tabularnewline
Feb 2020 & 8.84 & 19.07\tabularnewline
Mar 2020 & 4.77 & 10.25\tabularnewline
Apr 2020 & 4.31 & 9.22\tabularnewline
May 2020 & 6.80 & 14.48\tabularnewline
\bottomrule
\end{longtable}

\section{PM 10}\label{pm-10}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "PM 10 (ug/m3)"}
\NormalTok{pm10dt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "pm10"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenPM10) shows the PM10 trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pm10dt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaPM10 <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(pm10dt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"PM10"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim}

\begin{figure}
\centering
\includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenPM10-1.pdf}
\caption{(\#fig:theilSenPM10)Theil-Sen trend (PM10)}
\end{figure}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaPM10, }\DataTypeTok{fname =} \StringTok{"PM10"}\NormalTok{)}
\NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} \KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenPM10)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & -0.10 & -0.62\tabularnewline
Feb 2020 & -1.58 & -10.10\tabularnewline
Mar 2020 & 1.55 & 9.91\tabularnewline
Apr 2020 & -0.06 & -0.38\tabularnewline
May 2020 & 1.57 & 10.19\tabularnewline
\bottomrule
\end{longtable}

\section{PM 2.5}\label{pm-2.5}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{yLab <-}\StringTok{ "PM 2.5 (ug/m3)"}
\NormalTok{pm25dt <-}\StringTok{ }\NormalTok{fixedDT[pollutant }\OperatorTok{==}\StringTok{ "pm2.5"}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

Figure @ref(fig:theilSenPM25) shows the PM2.5 trend over time. Is lockdown below trend?

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pm25dt[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{as.Date}\NormalTok{(dateTimeUTC))] }\CommentTok{# set date to date for this one}
\NormalTok{oaPM25 <-}\StringTok{ }\NormalTok{openair}\OperatorTok{::}\KeywordTok{TheilSen}\NormalTok{(pm25dt[date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], }\StringTok{"value"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"PM2.5"}\NormalTok{, }\DataTypeTok{deseason =} \OtherTok{TRUE}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"Year"}\NormalTok{, } \DataTypeTok{date.format =} \StringTok{"%Y"}\NormalTok{, }\DataTypeTok{date.breaks =} \DecValTok{4}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Taking bootstrap samples. Please wait."
\end{verbatim}

\begin{figure}
\centering
\includegraphics{/home/ba1e12/github/dataknut/airQual/docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot_files/figure-latex/theilSenPM25-1.pdf}
\caption{(\#fig:theilSenPM25)Theil-Sen trend (PM2.5)}
\end{figure}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{getModelTrendTable}\NormalTok{(oaPM25, }\DataTypeTok{fname =} \StringTok{"PM2.5"}\NormalTok{)}
\NormalTok{ft <-}\StringTok{ }\KeywordTok{dcast}\NormalTok{(t[date }\OperatorTok{>=}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-01-01"}\NormalTok{) }\OperatorTok{&}\StringTok{ }\NormalTok{date }\OperatorTok{<}\StringTok{ }\KeywordTok{as.Date}\NormalTok{(}\StringTok{"2020-06-01"}\NormalTok{)], date }\OperatorTok{~}\StringTok{ }\NormalTok{., }\DataTypeTok{value.var =} \KeywordTok{c}\NormalTok{(}\StringTok{"diff"}\NormalTok{, }\StringTok{"pcDiff"}\NormalTok{))}
\NormalTok{ft[, }\StringTok{`}\DataTypeTok{:=}\StringTok{`}\NormalTok{(date, }\KeywordTok{format.Date}\NormalTok{(date, }\DataTypeTok{format =} \StringTok{"%b %Y"}\NormalTok{))]}
\NormalTok{kableExtra}\OperatorTok{::}\KeywordTok{kable}\NormalTok{(ft, }\DataTypeTok{caption =} \StringTok{"Units and % above/below expected"}\NormalTok{, }\DataTypeTok{digits =} \DecValTok{2}\NormalTok{) }\OperatorTok{%>%}\StringTok{ }\KeywordTok{kable_styling}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lrr@{}}
\caption{(\#tab:theilSenPM25)Units and \% above/below expected}\tabularnewline
\toprule
date & diff & pcDiff\tabularnewline
\midrule
\endhead
Jan 2020 & 0.70 & 7.70\tabularnewline
Feb 2020 & -0.42 & -4.72\tabularnewline
Mar 2020 & 0.38 & 4.27\tabularnewline
Apr 2020 & 1.59 & 18.11\tabularnewline
May 2020 & 1.73 & 19.89\tabularnewline
\bottomrule
\end{longtable}

\section{About}\label{about}

\subsection{Code}\label{code}

Source:

\begin{itemize}
\tightlist
\item \url{https://github.com/dataknut/airQual}
\end{itemize}

History:

\begin{itemize}
\tightlist
\item \url{https://github.com/dataknut/airQual/commits/master}
\end{itemize}

\subsection{Comments and feedback}\label{comments-and-feedback}

If you wish to comment please open an issue:

\begin{itemize}
\tightlist
\item \url{https://github.com/dataknut/airQual/issues}
\end{itemize}

\subsection{Citation}\label{citation}

If you wish to refer to any of the material from this report please cite as:

\begin{itemize}
\tightlist
\item Anderson, B., (2020) Air Quality in Southampton (UK): Exploring the effect of UK covid 19 lockdown on air quality: Summary for BBC South, \href{http://www.energy.soton.ac.uk}{Sustainable Energy Research Group}, University of Southampton: Southampton, UK.
\end{itemize}

Report circulation:

\begin{itemize}
\tightlist
\item Public
\end{itemize}

This work is (c) 2020 the University of Southampton and is part of a collection of \href{https://dataknut.github.io/airQual/}{air quality} data analyses.

\section{Runtime}\label{runtime}

Report generated using \href{https://cran.r-project.org/package=knitr}{knitr} in \href{http://www.rstudio.com}{RStudio} with R version 3.6.0 (2019-04-26) running on x86\_64-redhat-linux-gnu (\#1 SMP Thu May 7 19:30:37 EDT 2020).

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{t <-}\StringTok{ }\KeywordTok{proc.time}\NormalTok{() }\OperatorTok{-}\StringTok{ }\NormalTok{myParams}\OperatorTok{$}\NormalTok{startTime}
\NormalTok{elapsed <-}\StringTok{ }\NormalTok{t[[}\DecValTok{3}\NormalTok{]]}
\end{Highlighting}
\end{Shaded}

Analysis completed in 13.226 seconds (0.22 minutes).

R packages used in this report:

\begin{itemize}
\tightlist
\item data.table - (Dowle et al.
2015) \item ggplot2 - (Wickham 2009) \item here - (Müller 2017) \item kableExtra - (Zhu 2019) \item lubridate - (Grolemund and Wickham 2011) \item openAir - (Carslaw and Ropkins 2012) \item skimr - (Arino de la Rubia et al. 2017) \item viridis - (Garnier 2018) \end{itemize} \section*{References}\label{references} \addcontentsline{toc}{section}{References} \hypertarget{refs}{} \hypertarget{ref-skimr}{} Arino de la Rubia, Eduardo, Hao Zhu, Shannon Ellis, Elin Waring, and Michael Quinn. 2017. \emph{Skimr: Skimr}. \url{https://github.com/ropenscilabs/skimr}. \hypertarget{ref-openair}{} Carslaw, David C., and Karl Ropkins. 2012. ``Openair --- an R Package for Air Quality Data Analysis.'' \emph{Environmental Modelling \& Software} 27--28 (0): 52--61. doi:\href{https://doi.org/10.1016/j.envsoft.2011.09.008}{10.1016/j.envsoft.2011.09.008}. \hypertarget{ref-data.table}{} Dowle, M, A Srinivasan, T Short, S Lianoglou with contributions from R Saporta, and E Antonyan. 2015. \emph{Data.table: Extension of Data.frame}. \url{https://CRAN.R-project.org/package=data.table}. \hypertarget{ref-viridis}{} Garnier, Simon. 2018. \emph{Viridis: Default Color Maps from 'Matplotlib'}. \url{https://CRAN.R-project.org/package=viridis}. \hypertarget{ref-lubridate}{} Grolemund, Garrett, and Hadley Wickham. 2011. ``Dates and Times Made Easy with lubridate.'' \emph{Journal of Statistical Software} 40 (3): 1--25. \url{http://www.jstatsoft.org/v40/i03/}. \hypertarget{ref-here}{} Müller, Kirill. 2017. \emph{Here: A Simpler Way to Find Your Files}. \url{https://CRAN.R-project.org/package=here}. \hypertarget{ref-ggplot2}{} Wickham, Hadley. 2009. \emph{Ggplot2: Elegant Graphics for Data Analysis}. Springer-Verlag New York. \url{http://ggplot2.org}. \hypertarget{ref-kableExtra}{} Zhu, Hao. 2019. \emph{KableExtra: Construct Complex Table with 'Kable' and Pipe Syntax}. \url{https://CRAN.R-project.org/package=kableExtra}. \end{document}
{ "alphanum_fraction": 0.7298883556, "avg_line_length": 30.2573007103, "ext": "tex", "hexsha": "c30cc9091e67d585131766f96e7ae32c659a9170", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-06-08T11:51:42.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-30T01:27:15.000Z", "max_forks_repo_head_hexsha": "333e31c22615e616364d65c72fcf30a1f16867d2", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "dataknut/airQual", "max_forks_repo_path": "docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "333e31c22615e616364d65c72fcf30a1f16867d2", "max_issues_repo_issues_event_max_datetime": "2020-04-17T14:32:41.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-17T14:32:41.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "dataknut/airQual", "max_issues_repo_path": "docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot.tex", "max_line_length": 481, "max_stars_count": null, "max_stars_repo_head_hexsha": "333e31c22615e616364d65c72fcf30a1f16867d2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dataknut/airQual", "max_stars_repo_path": "docs/sccAirQualExplore_covidLockdown2020ForBBCsnaphot.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12968, "size": 38336 }
\chapter{Conclusion} \label{ch:conclusion} Conclusion
{ "alphanum_fraction": 0.8148148148, "avg_line_length": 13.5, "ext": "tex", "hexsha": "dcb9324c8a2768d22fa591a1ecea856b0d958012", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9508af937a25776f99b9696e0ccc5080328d584a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joschal/LaTeX-Thesis-Template", "max_forks_repo_path": "chapters/3_conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9508af937a25776f99b9696e0ccc5080328d584a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joschal/LaTeX-Thesis-Template", "max_issues_repo_path": "chapters/3_conclusion.tex", "max_line_length": 21, "max_stars_count": null, "max_stars_repo_head_hexsha": "9508af937a25776f99b9696e0ccc5080328d584a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joschal/LaTeX-Thesis-Template", "max_stars_repo_path": "chapters/3_conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 15, "size": 54 }
\chapter{Undirected graphical models (Markov random fields)}
{ "alphanum_fraction": 0.8064516129, "avg_line_length": 20.6666666667, "ext": "tex", "hexsha": "e7cf104c9a4dc5cafba1c068c42bd711dd19fddf", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2019-09-25T04:26:03.000Z", "max_forks_repo_forks_event_min_datetime": "2017-02-25T15:40:39.000Z", "max_forks_repo_head_hexsha": "114a6c2424f206cb57715c7de29ae3d26abcb081", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Alexoner/Statistical-formula", "max_forks_repo_path": "mlapp/chapterUGM.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "114a6c2424f206cb57715c7de29ae3d26abcb081", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Alexoner/Statistical-formula", "max_issues_repo_path": "mlapp/chapterUGM.tex", "max_line_length": 60, "max_stars_count": 24, "max_stars_repo_head_hexsha": "114a6c2424f206cb57715c7de29ae3d26abcb081", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Alexoner/Statistical-formula", "max_stars_repo_path": "mlapp/chapterUGM.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-24T13:46:00.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-15T17:00:48.000Z", "num_tokens": 13, "size": 62 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{url}

\title{PS5\_Gillingham}
\author{Andrew Gillingham}
\date{February 2019}

\begin{document}

\maketitle

\section{Data}
The data I chose was 2018 MLB hitting data from ESPN. Originally, I wanted to try to pull from Baseball Savant, but the website doesn't appear to be very friendly for scraping data. I then moved on to Baseball Reference but was met with similar problems. Finally I found that ESPN had similar stats but was much easier to scrape. I am going to use hitting stats in my thesis, so this is a step in the right direction. The guides I used were the one from class as well as one I found online that was more specific to ESPN (\url{https://www4.stat.ncsu.edu/~post/marschall/SLG_scraping.html#1}).

When it came to using an API to gather data, I initially had some success in finding a free sports API at SportsRadar. They had a trial for their APIs in sports ranging from cricket to football. I found a key for Global Baseball but quickly realized that I didn't know what package to use in R for this API. I ended up finding an API with a guide on MySportsFeed and started trying to use it, until I found it was going to charge me for access to any of their sites. I got as far as authenticating the key before it stopped being accepted. I guess I will need to go back and figure out better resources for sports data using API keys.
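A minimal sketch of this kind of ESPN table scrape in R with rvest might look like the following; the URL and the table position here are assumptions rather than the exact ones used for the assignment.

\begin{verbatim}
# Illustrative sketch only: the URL and the table index are assumptions
# and may need adjusting to match the page's actual structure.
library(rvest)

url <- "https://www.espn.com/mlb/stats/batting"   # assumed stats page
page <- read_html(url)
tables <- html_table(page, fill = TRUE)  # parse every HTML table on the page
batting <- tables[[1]]                   # assume the first table holds the stats
head(batting)
\end{verbatim}

\end{document}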
{ "alphanum_fraction": 0.7811188811, "avg_line_length": 79.4444444444, "ext": "tex", "hexsha": "89d62791557a4a2802532037a7022aa0b765bbb7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ac055b70bb06ef413b0470c78a221dc3daa0c651", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "agillingham15/DScourseS19", "max_forks_repo_path": "ProblemSets/PS5/PS5_Gillingham.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ac055b70bb06ef413b0470c78a221dc3daa0c651", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "agillingham15/DScourseS19", "max_issues_repo_path": "ProblemSets/PS5/PS5_Gillingham.tex", "max_line_length": 654, "max_stars_count": null, "max_stars_repo_head_hexsha": "ac055b70bb06ef413b0470c78a221dc3daa0c651", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "agillingham15/DScourseS19", "max_stars_repo_path": "ProblemSets/PS5/PS5_Gillingham.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 345, "size": 1430 }
\section{TCP In Depth}

TCP, or the Transmission Control Protocol, handles the retransmission, acknowledgement, and flow of packets. Naturally a pre-requisite to this section is
{ "alphanum_fraction": 0.8181818182, "avg_line_length": 44, "ext": "tex", "hexsha": "f8585b47e53d4d90680e7604712320e0ea4f052f", "lang": "TeX", "max_forks_count": 68, "max_forks_repo_forks_event_max_datetime": "2022-03-25T12:30:25.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-11T15:47:26.000Z", "max_forks_repo_head_hexsha": "d8f7464a84c9a358d7e2f07afbe1fcfceb3b89ce", "max_forks_repo_licenses": [ "CC-BY-3.0", "CC-BY-4.0" ], "max_forks_repo_name": "pkgamma/coursebook", "max_forks_repo_path": "honors/tcp.tex", "max_issues_count": 128, "max_issues_repo_head_hexsha": "d8f7464a84c9a358d7e2f07afbe1fcfceb3b89ce", "max_issues_repo_issues_event_max_datetime": "2021-11-05T02:39:40.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-11T01:10:14.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0", "CC-BY-4.0" ], "max_issues_repo_name": "pkgamma/coursebook", "max_issues_repo_path": "honors/tcp.tex", "max_line_length": 151, "max_stars_count": 547, "max_stars_repo_head_hexsha": "d8f7464a84c9a358d7e2f07afbe1fcfceb3b89ce", "max_stars_repo_licenses": [ "CC-BY-3.0", "CC-BY-4.0" ], "max_stars_repo_name": "pkgamma/coursebook", "max_stars_repo_path": "honors/tcp.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T09:12:54.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-15T08:51:58.000Z", "num_tokens": 36, "size": 176 }